Student Modeling and Machine Learning
Sison, Raymund; Shimura, Masamichi
1998-01-01
After identifying essential student modeling issues and machine learning approaches, this paper examines how machine learning techniques have been used to automate the construction of student models as well as the background knowledge necessary for student modeling. In the process, the paper sheds light on the difficulty, suitability and potential of using machine learning for student modeling processes, and, to a lesser extent, the potential of using student modeling techniques in machine le...
Tunnel Boring Machine Performance Study. Final Report
1984-06-01
Full-face tunnel boring machine (TBM) performance during the excavation of 6 tunnels in sedimentary rock is considered in terms of utilization, penetration rates and cutter wear. The construction records are analyzed and the results are used to inves...
Formal modeling of virtual machines
Cremers, A. B.; Hibbard, T. N.
1978-01-01
Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.
Bishop, Christopher M
2013-02-13
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
Parallel Boltzmann machines : a mathematical model
Zwietering, P.J.; Aarts, E.H.L.
1991-01-01
A mathematical model is presented for the description of parallel Boltzmann machines. The framework is based on the theory of Markov chains and combines a number of previously known results into one generic model. It is argued that parallel Boltzmann machines maximize a function consisting of a
Model-Agnostic Interpretability of Machine Learning
Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos
2016-01-01
Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred f...
A Multiple Model Prediction Algorithm for CNC Machine Wear PHM
Directory of Open Access Journals (Sweden)
Huimin Chen
2011-01-01
The 2010 PHM data challenge focuses on remaining useful life (RUL) estimation for the cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.
Thermal models of pulse electrochemical machining
International Nuclear Information System (INIS)
Kozak, J.
2004-01-01
Pulse electrochemical machining (PECM) provides an economical and effective method for machining high-strength, heat-resistant materials into complex shapes such as turbine blades, dies, molds and micro cavities. Pulse electrochemical machining involves the application of a voltage pulse at high current density in the anodic dissolution process. A small interelectrode gap, low electrolyte flow rate, and gap state recovery during the pulse off-times lead to improved machining accuracy and surface finish when compared with ECM using continuous current. This paper presents a mathematical model for PECM and employs this model in a computer simulation of the PECM process to determine the thermal limitation and energy consumption in PECM. The experimental results and a discussion of the characteristics of PECM are presented. (authors)
Small machine tools for small workpieces final report of the DFG priority program 1476
Sanders, Adam
2017-01-01
This contributed volume presents the research results of the program “Small machine tools for small workpieces” (SPP 1476), funded by the German Research Foundation (DFG). The book contains the final report of the priority program, presenting novel approaches for size-adapted, reconfigurable micro machine tools. The target audience primarily comprises research experts and practitioners in the field of micro machine tools, but the book may also be beneficial for graduate students.
Surface Inspection Machine Infrared (SIMIR). Final CRADA report
Energy Technology Data Exchange (ETDEWEB)
Powell, G.L. [Lockheed Martin Energy Systems, Inc., Oak Ridge, TN (United States); Neu, J.T.; Beecroft, M. [Surface Optics Corp., San Diego, CA (United States)
1997-02-28
This Cooperative Research and Development Agreement was a one-year effort to make the surface inspection machine based on diffuse reflectance infrared spectroscopy (Surface Inspection Machine-Infrared, SIMIR), being developed by Surface Optics Corporation, perform to its highest potential as a practical, portable surface inspection machine. The design function of the SIMIR is to inspect metal surfaces for cleanliness (stains). The system is also capable of evaluating graphite-resin systems for cure and heat damage, and of measuring the effects of moisture exposure on lithium hydride, corrosion on uranium metal, and the constituents of and contamination on wood, paper, and fabrics. Over the period of the CRADA, extensive experience with the use of the SIMIR for surface cleanliness measurements was gained through collaborations with NASA and the Army. The SIMIR was made available to the AMTEX CRADA for Finish on Yarn, where it made a very significant contribution. The SIMIR was the foundation of a Forest Products CRADA that was developed over the time interval of this CRADA. Surface Optics Corporation and the SIMIR have been introduced to the chemical spectroscopy on-line analysis market and have made staffing additions and arrangements for international marketing of the SIMIR as an on-line surface inspection device. LMES has been introduced to a wide range of aerospace applications and to the research and fabrication skills of Surface Optics Corporation, has gained extensive experience in the area of surface cleanliness from collaborations with NASA and the Army, and has received an extensive introduction to the textile and forest products industries. The SIMIR, marketed as the SOC-400, has filled an important new technology need in the DOE-DP Enhanced Surveillance Program, with instruments delivered to or on order by LMES, LANL, LLNL, and Pantex, where extensive collaborations are underway to implement and improve this technology.
Understanding and modelling Man-Machine Interaction
International Nuclear Information System (INIS)
Cacciabue, P.C.
1991-01-01
This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of Man-Machine Interaction (MMI), which stem from the contribution of three areas of expertise in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (author)
Understanding and modelling man-machine interaction
International Nuclear Information System (INIS)
Cacciabue, P.C.
1996-01-01
This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of man-machine interaction (MMI), which stem from the contribution of three areas of expertise in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running on appropriate computer architectures. (orig.)
Electromechanical model of machine for vibroabrasive treatment of machine parts
Gorbatiyk, Ruslan; Palamarchuk, Igor; Chubyk, Roman
2015-01-01
Many trimming, finishing and stripping operations, above all the removal of burrs and the rounding and processing of edges, were until recently carried out by hand; they were hardly amenable to automation and became a serious obstacle to further growth in labor productivity. Machines with a free kinematic connection between the tool and the treated parts provide contact over the whole surface of the machine parts, which allows us to effectively treat bo...
Food labeling; calorie labeling of articles of food in vending machines. Final rule.
2014-12-01
To implement the vending machine food labeling provisions of the Patient Protection and Affordable Care Act of 2010 (ACA), the Food and Drug Administration (FDA or we) is establishing requirements for providing calorie declarations for food sold from certain vending machines. This final rule will ensure that calorie information is available for certain food sold from a vending machine that does not permit a prospective purchaser to examine the Nutrition Facts Panel before purchasing the article, or does not otherwise provide visible nutrition information at the point of purchase. The declaration of accurate and clear calorie information for food sold from vending machines will make calorie information available to consumers in a direct and accessible manner to enable consumers to make informed and healthful dietary choices. This final rule applies to certain food from vending machines operated by a person engaged in the business of owning or operating 20 or more vending machines. Vending machine operators not subject to the rules may elect to be subject to the Federal requirements by registering with FDA.
Model-Driven Engineering of Machine Executable Code
Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira
Implementing static analyses of machine-level executable code is labor-intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs performing static analyses. Further, we report important lessons learned on the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.
VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS
Directory of Open Access Journals (Sweden)
POPESCU Adrian
2015-11-01
This paper presents the 3D virtual model of the numerical control machine Modustar 100, in terms of machine elements. This is a CNC machine of modular construction, with all components allowing assembly in various configurations. The paper focuses on the design, by means of CATIA V5, of the subassemblies specific to the numerically controlled axes, which contain the drive kinematic chains of the different translation modules ensuring translation on the X, Y and Z axes. Machine tool development for high-speed and highly precise cutting demands the employment of advanced simulation techniques, which is reflected in the total development cost of the machine.
Comparative Study of Moore and Mealy Machine Models Adaptation
African Journals Online (AJOL)
An automata model was developed for an ABS manufacturing process using Moore and Mealy finite state machines. The simulation results showed that the Mealy machine is faster than the Moore ...
Screening for Prediabetes Using Machine Learning Models
Directory of Open Access Journals (Sweden)
Soo Beom Choi
2014-01-01
The global prevalence of diabetes is rapidly increasing. Studies support the necessity of screening and interventions for prediabetes, which can lead to serious complications and diabetes. This study aimed at developing an intelligence-based screening model for prediabetes. Data from the Korean National Health and Nutrition Examination Survey (KNHANES) were used, excluding subjects with diabetes. The KNHANES 2010 data (n=4685) were used for training and internal validation, while data from KNHANES 2011 (n=4566) were used for external validation. We developed two models to screen for prediabetes using an artificial neural network (ANN) and a support vector machine (SVM) and performed a systematic evaluation of the models using internal and external validation. We compared the performance of our models with that of a previously developed screening score model for prediabetes based on logistic regression analysis. The SVM model showed an area under the curve of 0.731 on the external dataset, which is higher than those of the ANN model (0.729) and the screening score model (0.712). The prescreening methods developed in this study performed better than the previously developed screening score model and may be a more effective method for prediabetes screening.
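The models in the abstract above are compared by area under the ROC curve. As a quick illustration of that metric (with made-up screening scores, not the study's data), AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U formulation: the fraction of
    (positive, negative) pairs where the positive outranks the
    negative; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for prediabetic (positive) and normal subjects
pos = [0.9, 0.8, 0.55, 0.4]
neg = [0.7, 0.3, 0.2, 0.1]
print(auc(pos, neg))  # → 0.875
```

A perfect classifier would yield 1.0 and a random one about 0.5, which is why the abstract's 0.731 vs. 0.712 comparison is meaningful despite both being far from perfect.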
Machine Directional Register System Modeling for Shaft-Less Drive Gravure Printing Machines
Directory of Open Access Journals (Sweden)
Shanhui Liu
2013-01-01
In the latest type of gravure printing machines, referred to as the shaft-less drive system, each gravure printing roller is driven by an individual servo motor, and all motors are electrically synchronized. The register error is regulated by a speed difference between the adjacent printing rollers. In order to improve the control accuracy of the register system, an accurate mathematical model of the register system should be investigated for these latest machines. Therefore, the mathematical model of the machine directional register (MDR) system is studied for multicolor gravure printing machines in this paper. According to the definition of the MDR error, the model is derived, and it is then validated by numerical simulation and by experiments carried out on the experimental setup of a four-color gravure printing machine. The results show that the established MDR system model is accurate and reliable.
Conceptual models in man-machine design verification
International Nuclear Information System (INIS)
Rasmussen, J.
1985-01-01
The need for systematic methods for evaluation of design concepts for new man-machine systems has been rapidly increasing in consequence of the introduction of modern information technology. Direct empirical methods are difficult to apply when functions during rare conditions and support of operator decisions during emergencies are to be evaluated. In this paper, the problems of analytical evaluations based on conceptual models of the man-machine interaction are discussed, and the relations to system design and analytical risk assessment are considered. A conceptual framework for analytical evaluation is then proposed, including several domains of description: 1. The problem space, in the form of a means-end hierarchy; 2. The structure of the decision process; 3. The mental strategies and heuristics used by operators; 4. The levels of cognitive control and the mechanisms related to human errors. Finally, the need for models representing operators' subjective criteria for choosing among available mental strategies and for accepting advice from intelligent interfaces is discussed.
Prototype-based models in machine learning.
Biehl, Michael; Hammer, Barbara; Villmann, Thomas
2016-01-01
An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. © 2016 Wiley Periodicals, Inc.
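Supervised learning in prototype systems is exemplified above by learning vector quantization. A minimal sketch of a single LVQ1 update with the familiar Euclidean distance (the prototypes, labels and learning rate below are illustrative assumptions, not from the review) might look like:

```python
import math

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: find the nearest prototype (Euclidean distance),
    attract it toward sample x if its label matches class y,
    repel it otherwise."""
    dists = [math.dist(p, x) for p in prototypes]
    w = dists.index(min(dists))              # index of the winning prototype
    sign = 1.0 if labels[w] == y else -1.0   # attract on match, repel on mismatch
    prototypes[w] = [pw + sign * lr * (xi - pw)
                     for pw, xi in zip(prototypes[w], x)]
    return w

protos = [[0.0, 0.0], [1.0, 1.0]]   # one prototype per class
labels = [0, 1]
w = lvq1_step(protos, labels, x=[0.2, 0.1], y=0)
print(w, protos[0])  # winner 0 is attracted toward the matching sample
```

Replacing `math.dist` with an adaptive, parameterized dissimilarity is exactly the relevance-learning extension the abstract mentions.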
Simulation Tools for Electrical Machines Modelling: Teaching and ...
African Journals Online (AJOL)
Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementation. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modeled in the rotor reference frame with currents as state ...
Investigation of approximate models of experimental temperature characteristics of machines
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines, with the aim of finding ways to reduce the duration of field tests and the thermal error of machining. The main research methods used in this work are: full-scale thermal testing of machines; various approaches to the approximation of experimental temperature characteristics of machine tools by polynomial models; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their derivatives up to the third order in time. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
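The polynomial approximation of an experimental temperature characteristic described above amounts to an ordinary least-squares fit. A self-contained sketch via the normal equations (the sample times and values below are invented for illustration, not measured data):

```python
def polyfit(ts, ys, deg):
    """Least-squares polynomial fit via normal equations
    (adequate for the low degrees used in such models)."""
    m = deg + 1
    # Normal equations A c = b with A[i][j] = sum t^(i+j), b[i] = sum y*t^i
    A = [[sum(t ** (i + j) for t in ts) for j in range(m)] for i in range(m)]
    b = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0] * m
    for i in reversed(range(m)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, m))) / A[i][i]
    return c  # coefficients of c0 + c1*t + c2*t^2 + ...

# Hypothetical (time, temperature-rise) samples; they lie exactly on 1 + 2t + 3t^2
ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 6.0, 17.0, 34.0]
coeffs = polyfit(ts, ys, deg=2)
print([round(c, 6) for c in coeffs])
```

The model-quality question raised in the abstract then reduces to comparing residuals (and derivatives of the fitted polynomial) across candidate degrees.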
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
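The rule-creation step above (Apriori over binarized records, yielding IF-THEN rules) can be sketched as follows; the failure-log items and thresholds are invented for illustration and are not from the paper:

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.5, min_conf=0.8):
    """Tiny Apriori sketch: find frequent itemsets level by level,
    then emit IF-THEN rules whose confidence exceeds min_conf."""
    n = len(transactions)
    support = {}
    k = 1
    candidates = [frozenset([i]) for i in sorted({i for t in transactions for i in t})]
    while candidates:
        frequent = []
        for cand in candidates:
            s = sum(1 for t in transactions if cand <= t) / n
            if s >= min_support:
                support[cand] = s
                frequent.append(cand)
        k += 1
        # Join step: build k-item candidates from frequent (k-1)-itemsets
        candidates = list({a | b for a in frequent for b in frequent
                           if len(a | b) == k})
    rules = []
    for itemset, s in support.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                conf = s / support[lhs]   # subsets of frequent sets are frequent
                if conf >= min_conf:
                    rules.append((set(lhs), set(itemset - lhs), conf))
    return rules

# Hypothetical binarized machine-failure records
logs = [frozenset(t) for t in
        [{"overheat", "spindle_fault"},
         {"overheat", "spindle_fault"},
         {"overheat"},
         {"coolant_low"}]]
for lhs, rhs, conf in apriori_rules(logs, min_support=0.5, min_conf=0.6):
    print(lhs, "->", rhs, round(conf, 2))
```

Here {spindle_fault} -> {overheat} comes out with confidence 1.0, i.e. the IF-THEN structure the abstract describes.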
Virtual NC machine model with integrated knowledge data
International Nuclear Information System (INIS)
Sidorenko, Sofija; Dukovski, Vladimir
2002-01-01
The concept of virtual NC machining was established to provide a virtual product that could be compared with the corresponding designed product, in order to evaluate the correctness of NC programs without real experiments. This concept is applied in the intelligent CAD/CAM system named VIRTUAL MANUFACTURE. This paper presents the first intelligent module, which enables the creation of virtual models of existing NC machines and the virtual creation of new ones by applying modular composition. Creation of a virtual NC machine is carried out via automatic saving of knowledge data (features of the created NC machine). (Author)
Testing and Modeling of Machine Properties in Resistance Welding
DEFF Research Database (Denmark)
Wu, Pei
The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze ... as real projection welding tests, is easy to realize in industry, since tests may be performed in situ. In part II, an approach to characterizing the electrical properties of AC resistance welding machines is presented, involving testing and mathematical modelling of the weld current, the firing angle ...
Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines
DEFF Research Database (Denmark)
Wu, Pei; Zhang, Wenqi; Bay, Niels
2003-01-01
The dynamic mechanical response of a resistance welding machine is very important to the weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up, called the breaking test set-up, are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient, and stiffness for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters considering the machine factors. The method is straightforward and easy to apply in industry, since the whole procedure is based on tests with no requirements ...
Modeling and Investigation of Asynchronous Two-Machine System Modes
Directory of Open Access Journals (Sweden)
V. S. Safaryan
2014-01-01
The paper considers stationary and transient processes of an asynchronous two-machine system. A mathematical model for the investigation of stationary and transient modes, static characteristics, and research results on the dynamic process of starting up the asynchronous two-machine system are given in the paper.
Boltzmann machines as a model for parallel annealing
Aarts, E.H.L.; Korst, J.H.M.
1991-01-01
The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial
Discrete Model Reference Adaptive Control System for Automatic Profiling Machine
Directory of Open Access Journals (Sweden)
Peng Song
2012-01-01
The automatic profiling machine is a motion system with a high degree of parameter variation and a high frequency of transient processes, and it requires accurate control in time. In this paper, the discrete model reference adaptive control system of the automatic profiling machine is discussed. Firstly, the model of the automatic profiling machine is presented according to the parameters of the DC motor. Then the design of the discrete model reference adaptive controller is proposed, and the control rules are proven. The simulation results show that the adaptive control system has favorable dynamic performance.
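A discrete model reference adaptive scheme of the kind described above can be sketched with a first-order plant and the classic MIT-rule gradient update. The plant, reference model and adaptation gain below are illustrative assumptions, not the paper's DC-motor model:

```python
# Discrete MRAC sketch: plant y(k+1) = a*y(k) + b*u(k) should follow the
# reference model ym(k+1) = am*ym(k) + bm*r(k) via an adaptive gain theta.
a, b = 0.9, 0.5        # assumed plant parameters (not from the paper)
am, bm = 0.7, 0.3      # assumed reference model
gamma = 0.05           # adaptation gain
theta = 0.0            # adjustable feedforward gain, u = theta * r
y = ym = 0.0
errors = []
for k in range(200):
    r = 1.0                    # step reference input
    u = theta * r
    y = a * y + b * u          # plant response
    ym = am * ym + bm * r      # reference model response
    e = y - ym                 # model-following error
    theta -= gamma * e * r     # MIT-rule gradient update
    errors.append(abs(e))
print(errors[0], errors[-1])   # error shrinks as theta adapts
```

With these numbers the adaptation converges toward the ideal gain theta = bm*(1-a)/(b*(1-am)) = 0.2, so the model-following error decays; a real design would also prove stability of the adaptation law, as the paper does for its control rules.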
Statistical and Machine Learning Models to Predict Programming Performance
Bergin, Susan
2006-01-01
This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...
Experimental force modeling for deformation machining stretching ...
Indian Academy of Sciences (India)
ARSHPREET SINGH
requires different machining techniques such as use of long ... thin structure to a desired shape incrementally using com- ...
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method makes it possible to significantly reduce tooling design time when a part's geometric parameters change. The method can also reduce the time needed for design and engineering pre-production, in particular for the development of control programs for CNC equipment and for control and measuring machines, and it automates the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve RANS-modeled Reynolds stresses by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the capability of the proposed framework for machine-learning-assisted turbulence modeling. By showing improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
Kurtulmus, A. Besir; Daniel, Kenny
2018-01-01
Using blockchain technology, it is possible to create contracts that offer a reward in exchange for a trained machine learning model for a particular data set. This would allow users to train machine learning models for a reward in a trustless manner. The smart contract will use the blockchain to automatically validate the solution, so there would be no debate about whether the solution was correct or not. Users who submit the solutions won't have counterparty risk that they won't get paid fo...
Modeling demagnetization effects in permanent magnet synchronous machines
Kral, C.; Sprangers, R.L.J.; Waarma, J.; Haumer, A.; Winter, O.; Lomonova, E.
2010-01-01
This paper presents a permanent magnet model which takes temperature dependencies and demagnetization effects into account. The proposed model is integrated into a magnetic fundamental wave machine model using the modeling language Modelica. For different rotor types permanent magnet models are
Probabilistic models and machine learning in structural bioinformatics
DEFF Research Database (Denmark)
Hamelryck, Thomas
2009-01-01
Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis...
Empirical model for estimating the surface roughness of machined ...
African Journals Online (AJOL)
Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measures in mechanical products. ... various cutting speeds have been developed using regression analysis software.
Functional networks inference from rule-based machine learning models.
Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume
2016-01-01
Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
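The core idea of the FuNeL protocol described above, that genes tested together by the same rule are candidate functional partners, can be sketched as a rule co-occurrence network. This is a hypothetical reconstruction, not the authors' code; the toy rules and gene names are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def infer_network(rules):
    """Build an undirected co-occurrence network: genes appearing together
    in the same classification rule are connected by an edge whose weight
    counts how many rules they share."""
    edges = Counter()
    for genes in rules:
        for a, b in combinations(sorted(set(genes)), 2):
            edges[(a, b)] += 1
    return edges

# Toy rule set: each rule is modelled as the set of genes it tests.
rules = [{"TP53", "BRCA1"}, {"TP53", "MYC"}, {"TP53", "BRCA1", "MYC"}]
net = infer_network(rules)
```

Edge weights could then be thresholded to obtain a network of a chosen size for comparison against co-expression networks.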
Dual Numbers Approach in Multiaxis Machines Error Modeling
Directory of Open Access Journals (Sweden)
Jaroslav Hrdina
2014-01-01
Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
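The dual-number arithmetic underlying the matrices in the abstract above is simple to state: a dual number a + b·ε with ε² = 0 propagates first-order error terms exactly while discarding second-order ones. The following minimal class is an illustrative sketch, not the paper's formulation.

```python
class Dual:
    """Dual number re + eps*E with E**2 == 0; the eps part carries
    first-order (e.g. geometric error) terms exactly."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        return Dual(self.re + other.re, self.eps + other.eps)
    def __mul__(self, other):
        # (a + bE)(c + dE) = ac + (ad + bc)E, since the E^2 term vanishes
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

p = Dual(2, 3) * Dual(4, 5)   # 8 + 22E
```

Matrices with such entries multiply componentwise in the same way, which is why products of error-perturbed transforms keep only first-order error terms automatically.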
Directory of Open Access Journals (Sweden)
Qing Ye
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. The feature extraction module adopts wavelet packet transform and fuzzy entropy to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches.
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Directory of Open Access Journals (Sweden)
Saerom Park
Market impact cost is the most significant portion of implicit transaction costs, and reducing it can lower the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
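The nonparametric idea above, letting the data rather than a fixed functional form determine the cost curve, can be illustrated with a Nadaraya-Watson kernel regression. This is a generic nonparametric estimator standing in for the paper's neural/GP/SVR models, and the (size, cost) pairs are invented toy data.

```python
import math

def kernel_regress(x_train, y_train, x, bandwidth=0.5):
    """Nadaraya-Watson estimator: predict y at x as a Gaussian-kernel
    weighted average of training targets (no parametric cost model)."""
    w = [math.exp(-((x - xi) / bandwidth) ** 2) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Toy data: market impact cost growing with normalized order size.
sizes = [1.0, 2.0, 3.0, 4.0]
costs = [0.10, 0.19, 0.31, 0.42]
est = kernel_regress(sizes, costs, 2.5)   # interpolates between neighbours
```

A parametric alternative such as the I-star model would instead fix a functional form up front; the nonparametric fit adapts its shape to whatever the transaction data show.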
Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator
DEFF Research Database (Denmark)
Jensen, Bogi Bech; Abrahamsen, Asger Bech
2011-01-01
This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way of validating finite element (FE) simulations and gaining a better understanding of HTS machines. 3D FE simulations of the machine are compared to measured current vs. voltage (IV) curves for the tape on its own. It is validated that this method can be used to predict the critical current of the HTS tape installed in the machine. The measured torque as a function of rotor position is also reproduced by the 3D FE model.
An abstract machine model of dynamic module replacement
Walton, Chris; Kırlı, Dilsun; Gilmore, Stephen
2000-01-01
In this paper we define an abstract machine model for the mλ typed intermediate language. This abstract machine is used to give a formal description of the operation of run-time module replacement for the programming language Dynamic ML. The essential technical device which we employ for module replacement is a modification of two-space copying garbage collection. We show how the operation of module replacement could be applied to other garbage-collected languages such as Java.
Programming and machining of complex parts based on CATIA solid modeling
Zhu, Xiurong
2017-09-01
Complex parts are designed using CATIA solid modeling, programming and machining simulation, illustrating the importance of programming and process planning in the field of CNC machining. In the design process, a deep analysis of the working principle is made first, and then the dimensions are designed so that the dimension chains are consistent with one another. Back-calculation and a variety of other methods are then used to determine the final dimensions of the parts. In selecting the part material, careful study and repeated testing led to the final choice of 6061 aluminum alloy. According to the actual situation at the machining site, various factors in the machining process must be considered comprehensively. The simulation should be based on the actual machining process, not only on the shape. The approach can serve as a reference for machining.
Modelling machine ensembles with discrete event dynamical system theory
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. A local model, from the perspective of DEDS theory, is described by the following: a set of system and transition states; an event alphabet that portrays actions that take a submachine from one state to another; an initial system state; a partial function that maps the current state and event to the next state; and the time required for each event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that they can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or by implementing a feedback DEDS controller (closed-loop control).
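The five ingredients of a DEDS local model listed in the abstract above (states, event alphabet, partial transition function, initial state, event durations) map directly onto a small data structure. The drilling submachine below is a hypothetical example, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class LocalModel:
    """A DEDS local model: states, event alphabet, a partial transition
    function, an initial state, and the time each event requires."""
    states: set
    alphabet: set
    delta: dict       # (state, event) -> next state; partial map
    initial: str
    duration: dict    # event -> time required

    def run(self, events):
        """Apply an event sequence; return (final state, total time).
        A KeyError signals an event undefined in the current state."""
        state, t = self.initial, 0.0
        for e in events:
            state = self.delta[(state, e)]
            t += self.duration[e]
        return state, t

# Hypothetical drilling submachine.
drill = LocalModel(
    states={"idle", "drilling"},
    alphabet={"start", "stop"},
    delta={("idle", "start"): "drilling", ("drilling", "stop"): "idle"},
    initial="idle",
    duration={"start": 1.0, "stop": 0.5},
)
state, t = drill.run(["start", "stop"])
```

A global model would run several such local models in parallel, with shared events synchronizing the submachines.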
Component based modelling of piezoelectric ultrasonic actuators for machining applications
International Nuclear Information System (INIS)
Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V
2013-01-01
Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole-system model is assembled by aggregating the component models. System parameters are identified using a finite element technique, and the model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance
Learning About Climate and Atmospheric Models Through Machine Learning
Lucas, D. D.
2017-12-01
From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Energy Technology Data Exchange (ETDEWEB)
Egolf, P. W.; Gonin, C. [University of Applied Sciences of Western Switzerland, HEIG-VD, Yverdon-les Bains (Switzerland); Kitanovski, A. [University of Ljubljana, Ljubljana (Slovenia)
2010-03-15
This final report for the Swiss Federal Office of Energy (SFOE) presents the results of a feasibility study made concerning magnetic cooling and refrigeration machines. This report presents a comprehensive thermodynamic and economic analysis of applications of rotary magnetic chillers. The study deals with magnetic chillers based on permanent magnets and superconducting magnets, respectively. The numerical design of permanent magnet assemblies with different magnetic flux densities is discussed. The authors note that superconducting magnetic chillers are feasible only in large-scale applications with over 1 MW of cooling power. This report describes new ideas for magnetic refrigeration technologies, which go beyond the state of the art. They show potential for a substantial reduction of costs and further improvements in efficiency. Rotary magnetic liquid chillers with 'wavy' structures and using micro tubes are discussed, as are superconducting magnetic chillers and future magneto-caloric technologies.
Twin support vector machines models, extensions and applications
Jayadeva; Chandra, Suresh
2017-01-01
This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.
Runtime Optimizations for Tree-Based Machine Learning Models
N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)
2014-01-01
Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression
Comparative study of Moore and Mealy machine models adaptation ...
African Journals Online (AJOL)
Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures and, automata models are among the computational models used in design and construction of industrial processes. The production process of the popular African Black Soap ...
Comparative analysis of various methods for modelling permanent magnet machines
Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.
2017-01-01
In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air
Repository simulation model: Final report
International Nuclear Information System (INIS)
1988-03-01
This report documents the application of computer simulation for the design analysis of the nuclear waste repository's waste handling and packaging operations. The Salt Repository Simulation Model was used to evaluate design alternatives during the conceptual design phase of the Salt Repository Project. Code development and verification was performed by the Office of Nuclear Waste Isolation (ONWI). The focus of this report is to relate the experience gained during the development and application of the Salt Repository Simulation Model to future repository design phases. Design of the repository's waste handling and packaging systems will require sophisticated analysis tools to evaluate complex operational and logistical design alternatives. Selection of these design alternatives in the Advanced Conceptual Design (ACD) and License Application Design (LAD) phases must be supported by analysis to demonstrate that the repository design will cost-effectively meet DOE's mandated emplacement schedule and that uncertainties in the performance of the repository's systems have been objectively evaluated. Computer simulation of repository operations will provide future repository designers with data and insights that no other form of analysis can provide. 6 refs., 10 figs
Directory of Open Access Journals (Sweden)
Pooyan Vahidi Pashsaki
2016-06-01
Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting-head B-axis, with a rotary A-axis table on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. Each of these 43 error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is governed by the position of the tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process consists of detecting the present tool path, analysing the geometrical errors of the RTTTR five-axis CNC machine tool, translating the current positions of the components to compensated positions using the kinematic error model, converting the newly created positions to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
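The homogeneous-transformation chaining in the abstract above can be illustrated in a reduced planar (SE(2)) setting: an ideal axis move is composed with small error transforms, and the TCP deviation is the difference between the actual and ideal positions. The two error terms and their magnitudes below are illustrative stand-ins for the paper's 43 components.

```python
import math

def compose(a, b):
    """Chain two planar rigid transforms (theta, x, y): apply a, then b
    expressed in a's frame -- the SE(2) analogue of multiplying 4x4
    homogeneous transformation matrices."""
    ta, xa, ya = a
    tb, xb, yb = b
    c, s = math.cos(ta), math.sin(ta)
    return (ta + tb, xa + c * xb - s * yb, ya + s * xb + c * yb)

def apply(t, p):
    """Map point p through transform t."""
    th, x, y = t
    c, s = math.cos(th), math.sin(th)
    return (x + c * p[0] - s * p[1], y + s * p[0] + c * p[1])

# Ideal X move of 100 mm vs. the same move with a 1e-4 rad angular error
# and a 0.02 mm positioning error (hypothetical values).
ideal = (0.0, 100.0, 0.0)
actual = compose(compose(ideal, (1e-4, 0.0, 0.0)), (0.0, 0.02, 0.0))
ex = apply(actual, (0.0, 0.0))[0] - apply(ideal, (0.0, 0.0))[0]
ey = apply(actual, (0.0, 0.0))[1] - apply(ideal, (0.0, 0.0))[1]
```

Compensation then amounts to inverting this composed error at each tool-path point before G-code generation.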
Analytical model for Stirling cycle machine design
Energy Technology Data Exchange (ETDEWEB)
Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)
2010-10-15
In order to further study the promising free-piston Stirling engine architecture, there is a need for an analytical thermodynamic model which could be used in a dynamical analysis for preliminary design. To aim at more realistic values, the model has to take into account the heat losses and irreversibilities of the engine. An analytical model which encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)
Neural Machine Translation with Recurrent Attention Modeling
Yang, Zichao; Hu, Zhiting; Deng, Yuntian; Dyer, Chris; Smola, Alex
2016-01-01
Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relat...
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which gives the algorithm low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
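The weighted-Euclidean-distance idea in the abstract above can be shown in a single SOM training step: find the best-matching unit under a per-feature weighting, then pull it toward the sample. This sketch omits IISOM's heuristic initialization, neighborhood function, and incremental search; the feature weights and metrics are hypothetical.

```python
import math

def wed(x, w, feat_w):
    """Weighted Euclidean distance between input x and neuron weights w."""
    return math.sqrt(sum(fw * (xi - wi) ** 2
                         for xi, wi, fw in zip(x, w, feat_w)))

def som_step(neurons, x, feat_w, lr=0.5):
    """One simplified SOM step: pick the best-matching unit (BMU) under
    the weighted distance and move only it toward the sample."""
    bmu = min(range(len(neurons)), key=lambda i: wed(x, neurons[i], feat_w))
    neurons[bmu] = [wi + lr * (xi - wi) for wi, xi in zip(neurons[bmu], x)]
    return bmu

neurons = [[0.0, 0.0], [1.0, 1.0]]
feat_w = [1.0, 0.2]   # e.g. weight a CPU metric over a noisier metric
bmu = som_step(neurons, [0.9, 0.1], feat_w)
```

With these weights the sample [0.9, 0.1] matches the second neuron (the heavily weighted first feature dominates), which then moves halfway toward the sample.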
Innovative model of business process reengineering at machine building enterprises
Nekrasov, R. Yu; Tempel, Yu A.; Tempel, O. A.
2017-10-01
The paper provides consideration of business process reengineering viewed as a managerial innovation accepted by present-day machine building enterprises, as well as ways to improve its procedure. A developed innovative model of reengineering measures is described and is based on the process approach and other principles of company management.
Online State Space Model Parameter Estimation in Synchronous Machines
Directory of Open Access Journals (Sweden)
Z. Gallehdari
2014-06-01
The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.
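Online parameter estimation of the kind evaluated in the abstract above is commonly done with recursive least squares (RLS), which refines the estimate as each new measurement arrives. The scalar single-parameter sketch below is a generic illustration, not the paper's (unspecified) estimator.

```python
def rls_scalar(xs, ys, lam=1.0):
    """Scalar recursive least squares: online estimate of theta in
    y = theta * x, with forgetting factor lam (1.0 = no forgetting)."""
    theta, P = 0.0, 1e6   # flat prior: large initial covariance
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)       # gain
        theta += k * (y - theta * x)        # correct with the residual
        P = (P - k * x * P) / lam           # shrink covariance
    return theta

# Noise-free toy data generated by y = 2x; the estimate converges to 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
theta = rls_scalar(xs, ys)
```

A state-space machine model would use the vector/matrix form of the same recursion, with lam < 1 letting the estimator track operating-condition changes.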
Assessing Implicit Knowledge in BIM Models with Machine Learning
DEFF Research Database (Denmark)
Krijnen, Thomas; Tamke, Martin
2015-01-01
architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.
Cutting force model for high speed machining process
International Nuclear Information System (INIS)
Haber, R. E.; Jimenez, J. E.; Jimenez, A.; Lopez-Coronado, J.
2004-01-01
This paper presents cutting-force-based models able to describe a high speed machining process. The model considers the cutting force as the output variable, essential for the physical processes taking place in high speed machining. Moreover, this paper shows the mathematical development to derive the integral-differential equations, and the algorithms implemented in MATLAB to predict the cutting force in real time. MATLAB is a software tool for doing numerical computations with matrices and vectors; it can also display information graphically and includes many toolboxes for several research and application areas. Two end mill shapes are considered (i.e. cylindrical and ball end mills) for real-time implementation of the developed algorithms. The developed models are validated in slot milling operations. The results corroborate the importance of the cutting force variable for predicting tool wear in high speed machining operations. The developed models are the starting point for future work related to vibration analysis, process stability and dimensional surface finish in high speed machining processes. (Author) 19 refs
Modeling RHIC using the standard machine formal accelerator description
International Nuclear Information System (INIS)
Pilat, F.; Trahern, C.G.; Wei, J.
1997-01-01
The Standard Machine Format (SMF) is a structured description of accelerator lattices which supports both the hierarchy of beam lines and generic lattice objects as well as those deviations (field errors, alignment errors, etc.) associated with each component of the as-installed machine. In this paper we discuss the use of SMF to describe the Relativistic Heavy Ion Collider (RHIC) as well as the ancillary data structures (such as field quality measurements) that are necessarily incorporated into the RHIC SMF model. Future applications of SMF are outlined, including its use in the RHIC operational environment
Control of discrete event systems modeled as hierarchical state machines
Brave, Y.; Heymann, M.
1991-01-01
The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
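The reachability test central to the abstract above can be sketched, for a flat state machine, as a breadth-first search over the transition relation. The point of the AHSM method is precisely to avoid flattening the hierarchy, so this sketch shows only the baseline the paper improves on; the toy machine is hypothetical.

```python
from collections import deque

def reachable(transitions, start):
    """Breadth-first reachability over a flat state machine given as
    {state: iterable of successor states}; returns all reachable states."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for nxt in transitions.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy machine: "error" has transitions but is unreachable from "init".
machine = {"init": ["run"], "run": ["done", "init"], "error": ["init"]}
states = reachable(machine, "init")
```

On a hierarchical model, the flat transition relation can be exponentially larger than the AHSM description, which is why exploiting the hierarchy directly pays off in control synthesis.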
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians reduce the number of false decisions; they try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Of the 900 patients, 803 were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN technique provided poor performance (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest (TRF) model, a rule-based classification model, was the best model with the highest level of
Latent domain models for statistical machine translation
Hoàng, C.
2017-01-01
A data-driven approach to model translation suffers from the data mismatch problem and demands domain adaptation techniques. Given parallel training data originating from a specific domain, training an MT system on the data would result in a rather suboptimal translation for other domains. But does
Global ocean modeling on the Connection Machine
International Nuclear Information System (INIS)
Smith, R.D.; Dukowicz, J.K.; Malone, R.C.
1993-01-01
The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow
A comparative study of machine learning models for ethnicity classification
Trivedi, Advait; Bessie Amali, D. Geraldine
2017-11-01
This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem whose use cases extend to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module that efficiently captures ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.
Modeling Geomagnetic Variations using a Machine Learning Framework
Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.
2017-12-01
We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.
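Pipelines of this kind typically turn a measurement series into supervised (input, target) pairs by lagging. A minimal sketch of that step, with an invented toy series (the OMNI-based feature engineering in STING is surely richer):

```python
def make_lag_features(series, n_lags, horizon):
    """Build (X, y) pairs: each X row holds the last n_lags values and y is
    the value `horizon` steps ahead — a common time-series forecasting setup."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return X, y

series = [1, 2, 3, 4, 5, 6, 7]
X, y = make_lag_features(series, n_lags=3, horizon=2)
print(X[0], y[0])  # → [1, 2, 3] 5
```

The resulting pairs can then be fed to any of the learners mentioned in the abstract (recurrent networks, support vector machines).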
A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology
Energy Technology Data Exchange (ETDEWEB)
Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center
2017-05-31
The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics that were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
Sensor guided control and navigation with intelligent machines. Final technical report
Energy Technology Data Exchange (ETDEWEB)
Ghosh, Bijoy K.
2001-03-26
This item constitutes the final report on "Visionics: An integrated approach to analysis and design of intelligent machines." The report discusses a dynamical systems approach to problems in robust control of possibly time-varying linear systems, problems in vision and visually guided control, and, finally, applications of these control techniques to intelligent navigation with a mobile platform. Robust design of a controller for a time-varying system essentially deals with the problem of synthesizing a controller that can adapt to sudden changes in the parameters of the plant and can maintain stability. The approach presented is to design a compensator that simultaneously stabilizes each and every possible mode of the plant as the parameters undergo sudden and unexpected changes. Such changes can in fact be detected by a visual sensor and, hence, visually guided control problems are studied as a natural consequence. The problem here is to detect parameters of the plant and maintain stability in the closed loop using a CCD camera as a sensor. The main result discussed in the report is the role of perspective systems theory, which was developed in order to analyze such a detection and control problem. The robust control algorithms and the visually guided control algorithms are applied in the context of a PUMA 560 robot arm control where the goal is to visually locate a moving part on a mobile turntable. Such problems are of paramount importance in manufacturing with a certain lack of structure. Sensor guided control problems are extended to problems in robot navigation using a NOMADIC mobile platform with a CCD camera and a laser range finder as sensors. The localization and map building problems are studied with the objective of navigation in an unstructured terrain.
Machine learning modelling for predicting soil liquefaction susceptibility
Directory of Open Access Journals (Sweden)
P. Samui
2011-01-01
This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses a Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count (N1)60 and the cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters, (N1)60 and peak ground acceleration (amax/g), for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
Support vector machine based battery model for electric vehicles
International Nuclear Information System (INIS)
Wang Junping; Chen Quanshi; Cao Binggang
2006-01-01
The support vector machine (SVM) is a novel type of learning machine based on statistical learning theory that can map a nonlinear function successfully. As a battery is a nonlinear system, it is difficult to establish the relationship between the load voltage and the current under different temperatures and states of charge (SOC). The SVM is used to model the battery's nonlinear dynamics in this paper. Tests are performed on an 80 Ah Ni/MH battery pack with the Federal Urban Driving Schedule (FUDS) cycle to set up the SVM model. Compared with the Nernst and Shepherd combined model, the SVM model can simulate the battery dynamics better with small amounts of experimental data. The maximum relative error is 3.61%.
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Energy Technology Data Exchange (ETDEWEB)
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
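The iterate the abstract describes — estimate states, then re-estimate parameters from those states, and repeat — can be sketched on a toy scalar system. This is a deliberate simplification: a plain Kalman filter stands in for the EKF, the M-step ignores state-estimate covariances, and the system and noise levels are invented.

```python
import random

def kalman_filter(ys, a, q=0.01, r=0.0025):
    """Filtered state estimates for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t."""
    x, p = 0.0, 1.0
    states = []
    for y in ys:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                         # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p     # update
        states.append(x)
    return states

def em_calibrate(ys, a0=0.5, iters=20):
    """Alternate state estimation (E-step, simplified to a filter) with a
    least-squares re-estimate of the dynamics parameter a (M-step)."""
    a = a0
    for _ in range(iters):
        xs = kalman_filter(ys, a)
        num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
        den = sum(xs[t - 1] ** 2 for t in range(1, len(xs)))
        a = num / den
    return a

# Synthetic measurements from a system whose true parameter is a = 0.9
rng = random.Random(1)
true_a, x, ys = 0.9, 1.0, []
for _ in range(500):
    x = true_a * x + rng.gauss(0, 0.1)
    ys.append(x + rng.gauss(0, 0.05))
print(round(em_calibrate(ys), 2))
```

With the covariance terms dropped the estimate is biased, but on low-noise data it lands near the true value; the paper's full method avoids this shortcut.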
Customer requirement modeling and mapping of numerical control machine
Directory of Open Access Journals (Sweden)
Zhongqi Sheng
2015-10-01
In order to better obtain information about customer requirements and develop products that meet them, it is necessary to systematically analyze and handle the customer requirements. This article takes the product-service system of a numerical control machine as its research object and studies customer requirement modeling and mapping oriented toward configuration design. It introduces the concept of the requirement unit, expounds the customer requirement decomposition rules, and establishes a customer requirement model; it builds the house of quality using quality function deployment and confirms the weights of the technical features of product and service; it explores the relevance rules between data using rough set theory, establishes a rule database, and solves for the target values of the technical features of the product. Using an economical turning-center series numerical control machine as an example, it verifies the rationality of the proposed customer requirement model.
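The house-of-quality step can be sketched as an importance-weighted aggregation of the requirement-to-feature relationship matrix; all ratings below are illustrative, not from the article:

```python
# Customer importance ratings (one per requirement) and a relationship
# matrix mapping each requirement (row) to technical features (columns).
# The 9/3/1 relationship strengths follow the usual QFD convention;
# every number here is invented for illustration.
importance = [5, 3, 4]
relations = [
    [9, 3, 0],   # requirement 1 vs features A, B, C
    [1, 9, 3],   # requirement 2
    [0, 3, 9],   # requirement 3
]

# Raw weight of each feature = importance-weighted column sum.
raw = [sum(importance[i] * relations[i][j] for i in range(len(importance)))
       for j in range(len(relations[0]))]
total = sum(raw)
weights = [round(w / total, 3) for w in raw]
print(raw)      # → [48, 54, 45]
print(weights)
```

The normalized weights rank the technical features; the article goes further, feeding these into rough-set rule mining.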
Energy Technology Data Exchange (ETDEWEB)
Canat, S.
2005-07-15
The induction machine is the most widespread electrical machine in industry. Its traditional modeling does not take into account the eddy currents in the rotor bars, which nevertheless induce strong variations of both the resistance and the inductance of the rotor. This diffusive phenomenon, called 'skin effect', can be modeled by a compact transfer function using fractional derivatives (non-integer order). This report theoretically analyzes the electromagnetic phenomenon on a single rotor bar before approaching the rotor as a whole. This analysis is confirmed by the results of finite element calculations of the magnetic field, exploited to identify a fractional-order model of the induction machine (Levenberg-Marquardt identification method). Then, the model is confronted with an identification from experimental results. Finally, an automatic method is carried out to approximate the dynamic model by an integer-order transfer function over a frequency band. (author)
Building Better Ecological Machines: Complexity Theory and Alternative Economic Models
Directory of Open Access Journals (Sweden)
Jess Bier
2016-12-01
Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent of these reimaginings: agent-based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.
Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.
Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean
2018-04-26
Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incidences and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to, or outperformed, deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
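Kappa and MCC, the two chance-corrected metrics quoted above, come straight from the binary confusion matrix. A hedged sketch with illustrative counts (not the paper's data):

```python
import math

def kappa_mcc(tp, fp, fn, tn):
    """Cohen's kappa and Matthews correlation coefficient for a 2x2
    confusion matrix; counts passed in are illustrative."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)  # chance
    kappa = (po - pe) / (1 - pe)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return kappa, mcc

k, m = kappa_mcc(tp=20, fp=30, fn=2, tn=101)
print(round(k, 2), round(m, 2))
```

Both metrics stay informative on imbalanced screens like these, where raw accuracy can look deceptively high.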
Modelling of destructive ability of water-ice-jet while machine processing of machine elements
Directory of Open Access Journals (Sweden)
Burnashov Mikhail
2017-01-01
This paper presents a classification of the most common contaminants appearing on the surfaces of machine elements after long-term service. The existing well-known surface cleaning methods are described and analyzed within the framework of this paper. The article is intended to provide the reader with an understanding of the process of cleaning and removing contamination from machine element surfaces by means of a water-ice jet with particles prepared beforehand, as well as of the process of water-ice-jet formation. The paper describes the advantages of this method, such as low cost, wastelessness, high quality of the processed surface, and minimal harmful impact upon the environment, which make it differ radically from formerly known methods. A scheme of the interaction between the surface and an ice particle is presented. A thermo-physical model of the destruction of contaminants by a water-ice-jet cleaning technology was developed on its basis. The thermo-physical model allows the processing modes and the parameters of the water-ice jet to be set on a scientifically substantiated, well-grounded basis.
Numerical modeling and optimization of machining duplex stainless steels
Directory of Open Access Journals (Sweden)
Rastee D. Koyee
2015-01-01
The shortcomings of analytical and empirical machining models must be overcome if industry demands are to be fulfilled. Three-dimensional finite element modeling (FEM) introduces an attractive alternative to bridge the gap between purely empirical and fundamental scientific quantities and fulfill industry needs. However, the challenging aspects which hinder the successful adoption of FEM in the machining sector of the manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model elastic–viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft–Latham critical damage value, percentage reduction in flow stress, and Taylor–Quinney coefficient, Taguchi-VIKOR coupled with a Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between the experimentally measured performances, such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology in defining the optimal set of controllable parameters for turning simulations is introduced, but also
Impact of Model Detail of Synchronous Machines on Real-time Transient Stability Assessment
DEFF Research Database (Denmark)
Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Østergaard, Jacob
2013-01-01
In this paper, it is investigated how detailed the model of a synchronous machine needs to be in order to assess transient stability using a Single Machine Equivalent (SIME). The results will show how the stability mechanism and the stability assessment are affected by the model detail. In order...... of the machine models is varied. Analyses of the results suggest that a 4th-order model may be sufficient to represent synchronous machines in transient stability studies....
Earth-moving equipment as base machines in forest work. Final report of an NSR project
Energy Technology Data Exchange (ETDEWEB)
Johansson, Jerry [ed.
1997-12-31
Excavators have been used for forest draining for a long time in the Nordic countries. Only during the 1980s they were introduced as base machines for other forest operations, such as mounding, processing, harvesting, and road construction and road maintenance. Backhoe loaders were introduced in forestry at a somewhat later stage and to a smaller degree. The number of this type of base machines in forestry is so far small and is increasing very slowly. The NSR project 'Earth moving equipment as base machines in forest work' started in 1993 and the project ended in 1995. The objective of the project was to obtain an overall picture of this type of machines up to a point where the logs are at landing site, ready for transportation to the industry. The project should cover as many aspects as possible. In order to obtain this picture, the main project was divided into sub projects. The sub projects separately described in this volume are (1) Excavators in ditching operations and site preparation, (2) Backhoe loaders in harvesting operations, (3) Excavators in wood cutting operations, (4) Tracked excavators in forestry operations, (5) Crawler versus wheeled base machines for single-grip harvester, and (6) Soil changes - A comparison between a wheeled and a tracked forest machine
Credit Risk Analysis Using Machine and Deep Learning Models
Directory of Open Access Journals (Sweden)
Peter Martey Addo
2018-04-01
Due to the advanced technology associated with Big Data, data availability and computing power, most banks and lending institutions are renewing their business models. Credit risk prediction, monitoring, model reliability and effective loan processing are key to decision-making and transparency. In this work, we build binary classifiers based on machine and deep learning models on real data to predict loan default probability. The top 10 important features from these models are selected and then used in the modeling process to test the stability of the binary classifiers by comparing their performance on separate data. We observe that the tree-based models are more stable than the models based on multilayer artificial neural networks. This raises several questions about the intensive use of deep learning systems in enterprises.
Modeling Music Emotion Judgments Using Machine Learning Methods
Directory of Open Access Journals (Sweden)
Naresh N. Vempala
2018-01-01
Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
Artificial emotional model based on finite state machine
Institute of Scientific and Technical Information of China (English)
MENG Qing-mei; WU Wei-guo
2008-01-01
According to basic emotional theory, an artificial emotional model based on the finite state machine (FSM) was presented. In the finite state machine model of emotion, the emotional space includes the basic emotional space and multiple emotional spaces. The emotion-switching diagram was defined and the transition function was developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform, and the model includes three subsystems: input, emotion and behavior. In the emotional subsystem, the responses of different personalities to external stimuli are described by defining a personal space. The model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to the other basic emotions. The simulation results are shown to correspond to the emotion-switching behavior of human beings.
Inverse Analysis and Modeling for Tunneling Thrust on Shield Machine
Directory of Open Access Journals (Sweden)
Qian Zhang
2013-01-01
With the rapid development of sensor and detection technologies, measured data analysis plays an increasingly important role in the design and control of heavy engineering equipment. This paper proposes a method for inverse analysis and modeling based on massive on-site measured data, in which dimensional analysis and data mining techniques are combined. The method was applied to the modeling of the tunneling thrust on shield machines, and an explicit expression for thrust prediction was established. Combined with on-site data from a tunneling project in China, the inverse identification of model coefficients was carried out using the multiple regression method. The model residual was analyzed by statistical methods. By comparing the on-site data and the model-predicted results in two other projects with different tunneling conditions, the feasibility of the model was discussed. The work may provide a scientific basis for the rational design and control of shield tunneling machines, and also a new way of analyzing massive on-site data from complex engineering systems with nonlinear, multivariable, time-varying characteristics.
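The inverse identification of coefficients by multiple regression amounts to ordinary least squares. A self-contained sketch via the normal equations, on synthetic "thrust" data (the actual model form and field data are in the paper, not here):

```python
def ols(X, y):
    """Ordinary least squares: solve (X^T X) beta = X^T y by Gaussian
    elimination with partial pivoting. X is a list of feature rows."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# Synthetic, noise-free data generated from thrust = 2.0*f1 + 0.5*f2,
# so OLS should recover exactly those coefficients.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [5.0, 2.0]]
y = [2.0 * a + 0.5 * b for a, b in X]
print([round(v, 3) for v in ols(X, y)])  # → [2.0, 0.5]
```

On real, noisy tunneling data the residual analysis the paper describes then checks whether the recovered expression generalizes.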
A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping
Directory of Open Access Journals (Sweden)
Wang Yan
2014-01-01
The main challenge that marine heterogeneous data integration faces is the problem of accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judging. Finally, a data mapping comparison experiment on East China Sea observing data confirms the effectiveness of the model and shows the multianalyzer's clear improvement in mapping error rate.
Machining of Metal Matrix Composites
2012-01-01
Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...
Process acceptance and adjustment techniques for Swiss automatic screw machine parts. Final report
International Nuclear Information System (INIS)
Robb, J.M.
1976-01-01
Product tolerance requirements for small, cylindrical piece parts produced on Swiss automatic screw machines have progressed to the reliability limits of inspection equipment. The miniature size, configuration, and tolerance requirements (±0.0001 in., 0.00254 mm) of these parts preclude the use of screening techniques to accept product or adjust processes during setup and production runs; therefore, existing means of product acceptance and process adjustment must be refined or new techniques must be developed. The purpose of this endeavor has been to determine the benefits gained through the implementation of a process acceptance technique (PAT) for Swiss automatic screw machine processes. PAT is a statistical approach developed for the purpose of accepting product and centering processes for parts produced by selected, controlled processes. Through this endeavor a determination has been made of the conditions under which PAT can benefit a controlled process, and of some specific types of screw machine processes to which PAT could be applied. However, it was also determined that PAT, if used indiscriminately, may become a record-keeping burden when applied to more than one dimension at a given machining operation.
Multiphysics Modeling of a Permanent Magnet Synchronous Machine
Directory of Open Access Journals (Sweden)
MARTIS Claudia
2012-10-01
This paper analyzes noise and vibration in PMSMs. There are three types of vibration in electrical machines: electromagnetic, mechanical and aerodynamic. Electromagnetic forces are the main cause of noise and vibration in PMSMs. It is very important to calculate precisely the natural frequencies of the stator system. If one radial force (radial forces being the main cause of electromagnetic vibration) has a frequency close to the natural frequency of the stator system for the same order of vibrational mode, then this force can produce dangerous vibration in the stator system. The natural frequencies for the stator system of a PMSM have been calculated. Finally, a structural analysis has been made, pointing out the radial displacement and stress for the chosen PMSM.
Product Quality Modelling Based on Incremental Support Vector Machine
International Nuclear Information System (INIS)
Wang, J; Zhang, W; Qin, B; Shi, W
2012-01-01
The incremental support vector machine (ISVM) is a new learning method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds the specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but can also preserve the important samples. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
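The distance-based pruning of margin vectors can be sketched for a linear hyperplane; the hyperplane, threshold, and points below are illustrative, and the real MISVM works from the KKT conditions of the trained SVM rather than from raw coordinates:

```python
import math

def distance_to_hyperplane(x, w, b):
    """Perpendicular distance of point x to the hyperplane w·x + b = 0."""
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / math.hypot(*w)

def prune_margin_vectors(points, w, b, max_dist):
    """Keep only candidate margin vectors that stay close to the decision
    boundary; the survivors would join the SVs in the incremental update."""
    return [x for x in points if distance_to_hyperplane(x, w, b) <= max_dist]

w, b = (3.0, 4.0), -5.0            # hyperplane 3x + 4y - 5 = 0, ||w|| = 5
candidates = [(1.0, 0.5), (3.0, 3.0), (0.0, 1.25)]
kept = prune_margin_vectors(candidates, w, b, max_dist=0.5)
print(kept)  # → [(1.0, 0.5), (0.0, 1.25)]
```

Points far from the boundary contribute little to the next decision function, which is the intuition behind discarding them.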
MODEL RESEARCH OF THE ACTIVE VIBROISOLATION OF MACHINE CABS
Directory of Open Access Journals (Sweden)
Jerzy MARGIELEWICZ
2014-03-01
Computer simulations of a mechatronic model of a bridge crane were carried out in order to evaluate theoretically the possibility of eliminating the mechanical vibrations that affect the operator's cab of the driven machine. The model studies used fixed-value control, with the vertical displacement of the cab selected as the controlled variable. A rheological model of the operator's body was also included in the research model. Four overhead cranes with a lifting capacity of 50 t were examined; they are classified, in accordance with the European Union directive concerning the design of cranes, into the four HC stiffness classes. The active vibration isolation system, which distinguishes two negative feedback loops, eliminates the mechanical vibration transmitted to the operator very well.
Electric machines modeling, condition monitoring, and fault diagnosis
Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun
2012-01-01
With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement more effective fault diagnosis and condition monitoring systems.
Use of machine learning techniques for modeling of snow depth
Directory of Open Access Journals (Sweden)
G. V. Ayzel
2017-01-01
Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of modeling snow cover depth is solved using both available databases of hydro-meteorological observations and easily accessible scientific software, which allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and of its melting. The purposeful character of the learning process for gradient-boosting models, their ensemble character, and the use of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
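An illustrative sketch (not the authors' code) of gradient boosting over decision trees fitted to daily meteorology; the feature set, the synthetic "snow depth" rule, and all numbers are invented for the example:

```python
# Gradient boosting regression of snow depth on daily meteorological
# features, using synthetic data as a stand-in for station observations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1500
temp = rng.normal(-2, 8, n)           # daily mean air temperature, degC
precip = rng.gamma(2.0, 2.0, n)       # daily precipitation, mm
doy = rng.integers(1, 366, n)         # day of year
# toy depth: accumulates when cold and wet, zero otherwise, plus noise
depth = np.where(temp < 0, precip * (-temp) * 0.5, 0.0) + rng.normal(0, 1, n)

X = np.column_stack([temp, precip, doy])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
print("R^2 on held-out days:", round(gbm.score(X_te, y_te), 3))
```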
Advanced Machine Learning Emulators of Radiative Transfer Models
Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.
2017-12-01
Physically-based model inversion methodologies build on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a radiative transfer model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become a natural alternative. Emulators are statistical constructs able to approximate an RTM at a fraction of the computational cost, while also providing an estimate of uncertainty and estimates of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the careful design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and in the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
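A toy emulation sketch in the spirit of the GP approach above. The acquisition rule here is plain maximum predictive variance, a deliberate simplification of AGAPE's density/flatness criterion, and the "costly model" is an invented stand-in function, not an actual RTM:

```python
# Sequentially build a Gaussian process emulator of an expensive model,
# adding design points where the emulator is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def costly_model(x):                  # stand-in for an RTM run
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.0], [1.0], [2.0]])   # small initial design
y = costly_model(X).ravel()
grid = np.linspace(0, 2, 200).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
for _ in range(10):                   # sequentially enrich the design
    gp.fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std)]      # sample where uncertainty is largest
    X = np.vstack([X, x_new])
    y = np.append(y, costly_model(x_new))

err = np.max(np.abs(gp.fit(X, y).predict(grid) - costly_model(grid).ravel()))
print("max emulation error on the grid:", err)
```

After a handful of adaptively chosen runs the emulator reproduces the function closely, which is the point of emulation: each `costly_model` call stands in for an RTM run that would otherwise dominate the computation.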
Process Approach for Modeling of Machine and Tractor Fleet Structure
Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.; Tsybina, Y. S.
2018-05-01
The existing software complexes for modelling machine and tractor fleet structure are mostly aimed at solving an optimization task. However, their creators choose a single optimization criterion, incorporate it in the software, and argue why it is the best, without giving the decision maker the opportunity to choose a criterion for their own enterprise. To analyze the "bottlenecks" of machine and tractor fleet modelling, the authors of this article created a process model in which the plan for using machinery is adjusted by searching through alternative technologies. As a result, the following recommendations for software complex development have been worked out: introduce a database of alternative technologies; allow the user to change the timing of operations, even beyond the allowable limits, in which case the incurred loss is calculated; make the solution of an optimization task optional and, where it is necessary, let the user choose the optimization criterion; and introduce a graphical display of the annual complex of works sufficient for the development and adjustment of a business strategy.
Applications and modelling of bulk HTSs in brushless ac machines
International Nuclear Information System (INIS)
Barnes, G.J.
2000-01-01
The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation. (author)
Modeling the Swift Bat Trigger Algorithm with Machine Learning
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2016-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41/−0.23) Gpc^−3 yr^−1, with power-law indices of n_1 ≈ 1.7 (+0.6/−0.5) and n_2 ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/−3.2). This methodology improves upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
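A hedged sketch of the classifier-as-trigger-emulator idea: a random forest is trained on a fabricated GRB sample (flux and redshift mapped to a detected flag) and then used to read off detection efficiency per redshift bin. None of the features, thresholds, or numbers come from the paper:

```python
# Random-forest stand-in for an expensive trigger simulation, plus a
# per-redshift-bin detection efficiency computed from its predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000
z = rng.uniform(0.1, 10, n)                  # redshift
flux = rng.lognormal(0, 1, n) / (1 + z)      # toy observed flux
detected = (flux + rng.normal(0, 0.1, n) > 0.3).astype(int)

X = np.column_stack([flux, z])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:4000], detected[:4000])
acc = clf.score(X[4000:], detected[4000:])
print("held-out accuracy:", round(acc, 3))

# detection efficiency as a function of redshift, from model predictions
bins = np.linspace(0, 10, 6)
pred = clf.predict(X)
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (z >= lo) & (z < hi)
    print(f"z in [{lo:.0f},{hi:.0f}): efficiency {pred[m].mean():.2f}")
```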
Modeling the Swift BAT Trigger Algorithm with Machine Learning
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of η_0 ≈ 0.48 (+0.41/−0.23) Gpc^−3 yr^−1, with power-law indices of η_1 ≈ 1.7 (+0.6/−0.5) and η_2 ≈ −5.9 (+5.7/−0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/−3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
Improving Language Models in Speech-Based Human-Machine Interaction
Directory of Open Access Journals (Sweden)
Raquel Justo
2013-02-01
This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. To do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Model (AM) on the improvement due to the Language Models has also been explored. Finally, hierarchical Language Models have been successfully employed in a language understanding task, as shown in an additional series of experiments.
International Nuclear Information System (INIS)
Du, Z C; Lv, C F; Hong, M S
2006-01-01
A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of 3-axis NC machine tools. However, according to our theoretical analysis, the squareness error among the different guide ways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is a comprehensive function of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution-singularity shortcoming of traditional error component identification methods, a new multi-step identification method using cross grid encoder measurement is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least squares method (LSM) as the machine performs linear motion in the three orthogonal planes: the XOY, XOZ and YOZ planes. Secondly, the circular error tracks are measured as the machine performs circular motion in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational errors are identified by LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on a 3-axis CNC vertical machining centre, the Cincinnati 750 Arrow. All 21 error components were successfully measured by this method. The research shows that the multi-step modelling and identification method is very suitable for on-machine measurement.
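A hedged sketch of the least-squares identification step: measured deviations are stacked into one linear system and solved for the error parameters. The 3-parameter model (two offsets plus one squareness term) and all numbers are invented for clarity; the real method identifies 21 components:

```python
# Least-squares identification of a tiny geometric error model
# e = A @ p from simulated measurements along a line in the XOY plane.
import numpy as np

rng = np.random.default_rng(3)
true_p = np.array([0.02, -0.01, 5e-5])   # x-offset (mm), y-offset (mm), squareness (rad)

y = np.linspace(0, 300, 60)              # commanded y positions, mm

# measured x deviation: offset + squareness coupling with y (+ noise)
e_x = true_p[0] + true_p[2] * y + rng.normal(0, 1e-3, y.size)
# measured y deviation: offset only (+ noise)
e_y = true_p[1] + rng.normal(0, 1e-3, y.size)

# stack both observation sets into one system A p = e
A = np.zeros((2 * y.size, 3))
A[: y.size, 0] = 1.0          # x-offset affects e_x
A[: y.size, 2] = y            # squareness couples y into e_x
A[y.size :, 1] = 1.0          # y-offset affects e_y
e = np.concatenate([e_x, e_y])

p_hat, *_ = np.linalg.lstsq(A, e, rcond=None)
print("estimated parameters:", p_hat)
```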
Machine learning based switching model for electricity load forecasting
Energy Technology Data Exchange (ETDEWEB)
Fan, Shu; Lee, Wei-Jen [Energy Systems Research Center, The University of Texas at Arlington, 416 S. College Street, Arlington, TX 76019 (United States); Chen, Luonan [Department of Electronics, Information and Communication Engineering, Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-0013 (Japan)
2008-06-15
In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques: Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. Firstly, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma. (author)
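A sketch of the two-stage switching idea: an unsupervised clustering step, with KMeans used here as a simple stand-in for Bayesian clustering by dynamics, followed by one SVR per cluster. The load series and all parameters are synthetic:

```python
# Cluster daily load windows by their dynamics, then train one SVR
# per cluster; prediction routes each new window to its cluster's SVR.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(7)
hours = np.arange(24 * 60)                       # 60 synthetic days
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# windows of the last 24 hours predict the next hour's load
W = 24
X = np.array([load[i : i + W] for i in range(load.size - W)])
y = load[W:]

# stage 1: unsupervised clustering of the windows
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_

# stage 2: one supervised SVR per cluster
models = {k: SVR(C=10.0).fit(X[labels == k], y[labels == k]) for k in range(3)}

x_new = X[-1:]
k = km.predict(x_new)[0]
print("next-hour forecast:", models[k].predict(x_new)[0])
```

The switching architecture lets each SVR specialize on one regime of the non-stationary series instead of one model fitting all regimes at once.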
Control volume based modelling of compressible flow in reciprocating machines
DEFF Research Database (Denmark)
Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik
2004-01-01
An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons, but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.
Coal demand prediction based on a support vector machine model
Energy Technology Data Exchange (ETDEWEB)
Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering
2007-01-15
A forecasting model for the coal demand of China using support vector regression was constructed. With a selected embedding dimension, output vectors and input vectors were constructed from the coal demand of China from 1980 to 2002. After comparison with the linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output is proposed. Compared with a predictor based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. Finally, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
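A sketch of the embedding construction described above: with embedding dimension m, each input vector holds m consecutive annual values and the output is the following year's demand. The series values, the embedding dimension, and the SVR parameters are all invented for illustration:

```python
# Sliding-window embedding of an annual demand series, fitted with an
# RBF-kernel SVR, then used for a one-step-ahead forecast.
import numpy as np
from sklearn.svm import SVR

demand = np.array([0.60, 0.62, 0.66, 0.71, 0.77, 0.81, 0.86, 0.89,
                   0.93, 0.99, 1.02, 1.05, 1.08, 1.12, 1.19, 1.26,
                   1.30, 1.32, 1.25, 1.22, 1.24, 1.30, 1.38])  # toy units

m = 3                                   # embedding dimension
X = np.array([demand[i : i + m] for i in range(demand.size - m)])
y = demand[m:]

svr = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)

# one-step-ahead forecast from the last m observed years
x_last = demand[-m:].reshape(1, -1)
print("next-year demand forecast:", svr.predict(x_last)[0])
```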
DEFF Research Database (Denmark)
De Chiffre, Leonardo; Hansen, Hans Nørgaard; Morace, Renata Erica
2005-01-01
It can be expected that the optomechanical hole plates can be calibrated using the DKD procedure with an uncertainty in the range between 0.5 µm and 2 µm. Using the hole plate, it is possible to compare the performance of measurements obtained using optical and mechanical CMMs. Optical CMM measurements can be divided in two groups: a group leading to deviations larger than 2 µm, and a group with deviations that are comparable to those using mechanical machines. All but one laboratory could perform reversal measurements. Transfer of traceability was established as follows: 8 using gauge blocks, 2 laser interferometers, 1 zerodur hole plate, 2 callipers, and 1 quartz standard. Out of the 23 measurement campaigns, 5 optical and 2 mechanical machines were not provided with establishment of traceability. The optomechanical hole plate is a suitable reference artefact providing traceability of CMMs, in particular ...
The Abstract Machine Model for Transaction-based System Control
Energy Technology Data Exchange (ETDEWEB)
Chassin, David P.
2003-01-31
Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system is the analog of a particle in thermodynamic theory. These machines permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types which, when deployed in very large numbers and operated as a system of many interacting markets, yield stable and optimal control of the thermodynamic system.
Subspace identification of Hammerstein models using support vector machines
International Nuclear Information System (INIS)
Al-Dhaifallah, Mujahed
2011-01-01
System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. The Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off, as it can represent some dynamic nonlinear systems very accurately while remaining quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a priori structural information. Furthermore, there are well established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace identification algorithms for Hammerstein systems based on SVM regression.
Hidden physics models: Machine learning of nonlinear partial differential equations
Raissi, Maziar; Karniadakis, George Em
2018-03-01
While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
Error modeling for surrogates of dynamical systems using machine learning
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-12-01
A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed `error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a `local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
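A hedged sketch of the error-model idea: a random forest maps cheap "error indicators" produced by a surrogate to the surrogate's error in a QoI, and the prediction is then used as an additive correction. The surrogate, the indicators, and the "true" errors are all synthetic stand-ins:

```python
# Regression from inexpensive error indicators to surrogate-model error,
# with use (1) from the abstract: correcting the surrogate QoI prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
n = 2000
# two toy indicators, e.g. a residual norm and a time-step size
residual = rng.lognormal(0, 0.5, n)
dt = rng.uniform(0.01, 0.1, n)
# synthetic "true" surrogate error correlated with the indicators
error = 0.8 * residual * dt + rng.normal(0, 0.005, n)

X = np.column_stack([residual, dt])
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X[:1500], error[:1500])
r2 = reg.score(X[1500:], error[1500:])
print("held-out R^2 of the error model:", round(r2, 3))

# use 1: correct the surrogate QoI prediction by the predicted error
qoi_surrogate = 1.0                          # invented surrogate prediction
qoi_corrected = qoi_surrogate + reg.predict(X[:1])[0]
print("corrected QoI:", qoi_corrected)
```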
Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei
2017-02-01
Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the "derivation cohort" to develop the dose-prediction algorithm, while the remaining 20% constituted the "validation cohort" used to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances compared. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
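A hedged sketch of the best-performing model class named above, a regression tree predicting a stable dose from a few covariates. The covariates, effect sizes, the "ideal rate" tolerance, and all data are invented; they are not the study's variables or results:

```python
# Regression tree for dose prediction on synthetic covariates, evaluated
# by an "ideal rate": fraction of predictions within 20% of actual dose.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1000
genotype = rng.integers(0, 3, n)        # hypothetical genotype, coded 0/1/2
weight = rng.normal(65, 10, n)          # kg
hematocrit = rng.normal(0.40, 0.05, n)
dose = (1.0 + 0.8 * genotype + 0.02 * weight - 2.0 * hematocrit
        + rng.normal(0, 0.3, n))        # toy stable dose, mg/day

X = np.column_stack([genotype, weight, hematocrit])
X_tr, X_te, y_tr, y_te = train_test_split(X, dose, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X_tr, y_tr)

pred = tree.predict(X_te)
ideal = np.mean(np.abs(pred - y_te) / y_te <= 0.20)
print("ideal rate on validation set:", round(ideal, 2))
```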
Xu, Xueping; Han, Qinkai; Chu, Fulei
2018-03-01
The electromagnetic vibration of electrical machines with an eccentric rotor has been extensively investigated, but magnetic saturation has often been neglected. Moreover, rub impact between the rotor and stator is inevitable when the amplitude of the rotor vibration exceeds the air gap. This paper aims to propose a general electromagnetic excitation model for electrical machines. First, a general model which takes magnetic saturation and rub impact into consideration is proposed and validated against the finite element method and the literature. The dynamic equations of a Jeffcott rotor system with electromagnetic excitation and mass imbalance are presented. Then, the effects of pole-pair number and rubbing parameters on vibration amplitude are studied and approaches for restraining the amplitude are put forward. Finally, the influences of mass eccentricity, resultant magnetomotive force (MMF), stiffness coefficient, damping coefficient, contact stiffness and friction coefficient on the stability of the rotor system are investigated through Floquet theory. An amplitude jumping phenomenon is observed in a synchronous generator for different pole-pair numbers. Changes of the design parameters can alter the stability states of the rotor system, and the range of parameter values forms the zone of stability, which provides helpful guidance for the design and application of electrical machines.
Attacking Machine Learning models as part of a cyber kill chain
Nguyen, Tam N.
2017-01-01
Machine learning is gaining popularity in the network security domain as many more network-enabled devices get connected, as malicious activities become stealthier, and as new technologies like Software Defined Networking emerge. Compromising a machine learning model is a desirable goal for attackers; in fact, spammers have been quite successful at getting through machine learning enabled spam filters for years. While previous work has been done on adversarial machine learning, none has been considered within...
Machine learning, computer vision, and probabilistic models in jet physics
CERN. Geneva; NACHMAN, Ben
2015-01-01
In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...
Estimating the complexity of 3D structural models using machine learning methods
Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques
2016-04-01
Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards or in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate the actual 3D model at a given precision without error using machine learning algorithms.
A Reference Model for Virtual Machine Launching Overhead
Energy Technology Data Exchange (ETDEWEB)
Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young
2016-07-01
Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
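As an illustration of what such a launch overhead reference model might look like, a simple linear fit of overhead against resource utilization can be sketched in a few lines. The utilization and overhead numbers below are invented for illustration, not FermiCloud measurements, and the host names are hypothetical.

```python
import numpy as np

# Hypothetical operational samples: (CPU utilization, I/O utilization) at
# launch time, and the observed VM launch overhead in seconds.
X = np.array([
    [0.10, 0.05], [0.30, 0.10], [0.50, 0.20],
    [0.70, 0.40], [0.85, 0.60], [0.95, 0.80],
])
overhead = np.array([22.0, 25.0, 31.0, 40.0, 52.0, 68.0])

# Fit a linear reference model: overhead ~ b0 + b1*cpu + b2*io
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, overhead, rcond=None)

def predict_overhead(cpu, io):
    """Predicted launch overhead (s) for given resource utilizations."""
    return coef[0] + coef[1] * cpu + coef[2] * io

# A cloud bursting scheduler could prefer the host with the smallest
# predicted overhead at launch time.
hosts = {"hostA": (0.20, 0.10), "hostB": (0.90, 0.70)}
best = min(hosts, key=lambda h: predict_overhead(*hosts[h]))
```

The design choice here is the one the abstract motivates: because the overhead is not a constant, the scheduler consults a fitted model of overhead versus utilization rather than a fixed estimate.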
Modelling open pit shovel-truck systems using the Machine Repair Model
Energy Technology Data Exchange (ETDEWEB)
Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeaver Mine
2007-08-15
Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimations of fleet sizes due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is further applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable model compared to off-the-shelf generic software because it is easily modelled in Microsoft Excel, a software platform that most mines already use.
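As a sketch of the underlying idea, the classic Machine Repair Model (a finite-source M/M/c queue) can be evaluated in a few lines, with trucks playing the role of the "machines" and shovels the role of the servers. The rates and fleet sizes below are hypothetical, not the paper's case-study values.

```python
from math import comb, factorial

def machine_repair_probs(n_trucks, n_shovels, lam, mu):
    """Steady-state probabilities of the finite-source M/M/c queue
    (Machine Repair Model): n_trucks sources each 'failing' (arriving at a
    shovel) at rate lam, served by n_shovels servers at rate mu each."""
    r = lam / mu
    weights = []
    for n in range(n_trucks + 1):
        if n <= n_shovels:
            w = comb(n_trucks, n) * r**n
        else:
            w = (comb(n_trucks, n) * factorial(n)
                 / (factorial(n_shovels) * n_shovels**(n - n_shovels)) * r**n)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

def shovel_utilisation(probs, n_shovels):
    """Expected fraction of shovel capacity in use."""
    return sum(min(n, n_shovels) * p for n, p in enumerate(probs)) / n_shovels
```

Sweeping the truck count and reading off shovel utilisation (or truck waiting time) is the kind of fleet-sizing calculation that, as the paper notes, fits comfortably in a spreadsheet.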
Guillaume, Ludovic; Legros, Arnaud; Quoilin, Sylvain; Declaye, Sébastien; Lemort, Vincent
2013-01-01
This paper aims at helping designers of waste heat recovery organic (or non-organic) Rankine cycles on internal combustion engines to best select the expander among the piston, scroll and screw machines, and the working fluids among R245fa, ethanol and water. The first part of the paper presents the technical constraints inherent to each machine through a state of the art of the three technologies. The second part of the paper deals with the modeling of such expanders. Finally, in the last pa...
Experimental program based on a High Beta Q Machine. Final report, 1 May 1978-30 September 1980
International Nuclear Information System (INIS)
Ribe, F.L.
1980-07-01
This report summarizes work done in designing and constructing the High Beta Q Machine from the inception of the work in May 1978 until the present time. It is a 3-m long, low-compression theta pinch with a 22-cm-diameter segmented compression coil with a minimum axial periodicity length of 10 cm. This capability of driving the machine as a simple, low-density theta pinch, and also of independently applying periodic magnetic fields before or after formation of the plasma column, gives the device considerable flexibility. Reported here is the construction and testing of the machine, development of its diagnostics and initial measurements of the plasma at early times in the duration of the crowbarred magnetic field. The experimental effort has been paralleled by theoretical work to model the diffuse profile, collisionless plasma in its response to the periodic RF magnetic fields. The model chosen is the Freidberg-Pearlstein Vlasov-fluid model which provides an MHD-like description but with accounting of ion kinetic effects over diffuse equilibrium profiles. A computer code has been developed to accurately calculate the resistive response of the plasma column, giving the power absorption by ion Landau damping and more recently, ion-cyclotron damping
Temperature Buffer Test. Final THM modelling
International Nuclear Information System (INIS)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan; Ledesma, Alberto; Jacinto, Abel
2012-01-01
The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behaviour of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Temperature Buffer Test. Final THM modelling
Energy Technology Data Exchange (ETDEWEB)
Aakesson, Mattias; Malmberg, Daniel; Boergesson, Lennart; Hernelind, Jan [Clay Technology AB, Lund (Sweden); Ledesma, Alberto; Jacinto, Abel [UPC, Universitat Politecnica de Catalunya, Barcelona (Spain)
2012-01-15
The Temperature Buffer Test (TBT) is a joint project between SKB and ANDRA, supported by ENRESA (modelling) and DBE (instrumentation), which aims at improving the understanding and modelling of the thermo-hydro-mechanical behaviour of buffers made of swelling clay submitted to high temperatures (over 100 deg C) during the water saturation process. The test has been carried out in a KBS-3 deposition hole at Aespoe HRL. It was installed during the spring of 2003. Two heaters (3 m long, 0.6 m diameter) and two buffer arrangements have been investigated: the lower heater was surrounded by bentonite only, whereas the upper heater was surrounded by a composite barrier, with a sand shield between the heater and the bentonite. The test was dismantled and sampled during the winter of 2009/2010. This report presents the final THM modelling, which was resumed subsequent to the dismantling operation. The main part of this work has been numerical modelling of the field test. Three different modelling teams have presented several model cases for different geometries and different degrees of process complexity. Two different numerical codes, CODE_BRIGHT and Abaqus, have been used. The modelling performed by UPC-Cimne using CODE_BRIGHT has been divided into three subtasks: i) analysis of the response observed in the lower part of the test, by inclusion of a number of considerations: (a) the use of the Barcelona Expansive Model for MX-80 bentonite; (b) updated parameters in the vapour diffusive flow term; (c) the use of a non-conventional water retention curve for MX-80 at high temperature; ii) assessment of a possible relation between the cracks observed in the bentonite blocks in the upper part of TBT, and the cycles of suction and stresses registered in that zone at the start of the experiment; and iii) analysis of the performance, observations and interpretation of the entire test. It was however not possible to carry out a full THM analysis until the end of the test due to
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Modeling of tool path for the CNC sheet cutting machines
Petunin, Aleksandr A.
2015-11-01
In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution of the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
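The discrete-optimization view can be illustrated with a plain travelling-salesman sketch over tool pierce points, solved exactly by Held-Karp dynamic programming. This toy version omits the additional precedence and megalopolis constraints of the Chentsov GTSP model; the point set is hypothetical.

```python
from itertools import combinations

def held_karp(dist):
    """Length of the shortest closed tour over all points, starting at
    point 0, by Held-Karp dynamic programming.  dist is a full distance
    matrix; dp[(mask, j)] is the shortest path visiting the set `mask`
    of points and ending at point j."""
    n = len(dist)
    dp = {(1 << 0, 0): 0.0}
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev_mask = mask ^ (1 << j)
                dp[(mask, j)] = min(
                    dp[(prev_mask, k)] + dist[k][j]
                    for k in [0] + list(subset)
                    if k != j and (prev_mask, k) in dp
                )
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

Exact dynamic programming of this kind is only tractable for small instances, which is why the structure of real cutting problems (grouping pierce points into megalopolises) matters for the full GTSP formulation.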
A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time
Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong
2013-06-01
In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. It is assumed that a failed service machine after repair will not be 'as good as new', and that the spare service machine for replacement is only available by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under such assumptions, we apply the matrix-analytic method to develop the steady-state probabilities of the system, and then we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and the direct search method is implemented to determine the optimal value of N for minimising the average cost rate.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.
Cuperlovic-Culf, Miroslava
2018-01-11
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
International Nuclear Information System (INIS)
Mondelin, A.
2012-01-01
During machining, extreme conditions of pressure, temperature and strain appear in the cutting zone. In this thermo-mechanical context, the link between the cutting conditions (cutting speed, lubrication, feed rate, wear, tool coating...) and the machined surface integrity represents a major scientific target. This PhD study is part of a global project called MIFSU (Modeling of the Integrity and Fatigue resistance of Machining Surfaces) and it focuses on the finish turning of 15-5PH (a martensitic stainless steel used for parts of helicopter rotors). Firstly, the material behavior has been studied in order to provide data for machining simulations. Stress-free dilatometry tests were conducted to obtain the austenitization kinetics of 15-5PH steel at high heating rates (up to 11,000 degrees C/s). Then, the parameters of the Leblond metallurgical model were calibrated. In addition, dynamic compression tests (dε/dt ranging from 0.01 to 80 s⁻¹ and ε ≥ 1) were performed to calibrate a strain-rate-dependent elasto-plasticity model (for high strains). These tests also helped to highlight dynamic recrystallization phenomena and their influence on the flow stress of the material. Thus, a recrystallization model has also been implemented. In parallel, a numerical model for the prediction of machined surface integrity has been constructed. This model is based on a methodology called 'hybrid' (developed during the PhD thesis of Frederic Valiorgue for AISI 304L steel). The method consists in replacing tool and chip modeling by equivalent loadings (obtained experimentally). A calibration step for these loadings has been carried out using orthogonal cutting and friction tests (with sensitivity studies of machining forces, friction and heat partition coefficients with respect to cutting parameter variations). Finally, numerical simulation predictions of microstructural changes (austenitization and dynamic recrystallization) and residual stresses have been successfully compared with
Energy Technology Data Exchange (ETDEWEB)
Guerette, D.
2009-07-01
This document presented a detailed mathematical explanation and validation of the steps leading to the development of an asynchronous squirrel-cage machine. The MatLab/Simulink software was used to model a wind turbine at variable high speeds. The asynchronous squirrel-cage machine is an electromechanical system coupled to a magnetic circuit. The resulting electromagnetic circuit can be represented as a set of resistances, leakage inductances and mutual inductances. Different models were used for a comparison study, including the Munteanu, Boldea, Wind Turbine Blockset, and SimPowerSystem. MatLab/Simulink modeling results were in good agreement with the results from other comparable models. Simulation results were in good agreement with analytical calculations. 6 refs, 2 tabs, 9 figs.
Developing a PLC-friendly state machine model: lessons learned
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2014-07-01
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we
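In a general-purpose language, the coarse-grained state machine idea can be sketched as an explicit transition table. The state and event names below are hypothetical; the actual model in the paper is implemented in IEC 61131-3 languages and exposed over OPC UA, not in Python.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    INITIALIZING = auto()
    OPERATIONAL = auto()
    ERROR = auto()

# Coarse-grained transition table: (current state, event) -> next state.
# It does not try to capture every possible state of the device, only the
# high-level structure a supervisory layer needs to see.
TRANSITIONS = {
    (State.IDLE, "init"): State.INITIALIZING,
    (State.INITIALIZING, "done"): State.OPERATIONAL,
    (State.OPERATIONAL, "fault"): State.ERROR,
    (State.ERROR, "reset"): State.IDLE,
}

class Device:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, event):
        """Apply an event; unknown (state, event) pairs leave the state
        unchanged, mirroring a cyclic PLC task that simply holds its state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
```

A table-driven design like this maps naturally onto an OPC UA interface: the current state becomes a readable variable and the events become callable methods.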
Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms
Directory of Open Access Journals (Sweden)
Krzysztof Gajowniczek
2017-10-01
Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting is supposed to play a key role in reducing generation costs and ensuring the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
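The two quantitative definitions used above, a peak as a load value at or above the 99th percentile, and MAPE as the accuracy metric, can be sketched directly. The load series below is synthetic, not Polish power-system data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic hourly load series (MW) with occasional injected demand peaks.
load = rng.normal(1000.0, 100.0, size=1000)
load[::100] += 400.0

# Stage 1 labeling: a peak is a load at or above the 99th percentile,
# matching the paper's definition of an electricity peak.
threshold = np.percentile(load, 99)
is_peak = load >= threshold

def mape(actual, forecast):
    """Mean absolute percentage error, the paper's accuracy metric."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```

In the two-stage scheme, the peak labels produced in stage 1 become an extra input feature for the stage 2 forecasting model, whose output is then scored with MAPE.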
Modeling the Virtual Machine Launching Overhead under Fermicloud
Energy Technology Data Exchange (ETDEWEB)
Garzoglio, Gabriele [Fermilab; Wu, Hao [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Bernabeu, Gerard [Fermilab; Noh, Seo-Young [KISTI, Daejeon
2014-11-12
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data we have obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
International Nuclear Information System (INIS)
Goldberg, L.F.
1990-08-01
The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and second, to adapt a two-dimensional fluid mechanics code which had been developed for simulating high Rayleigh number buoyant cavity flows to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.
An improved modelling of asynchronous machine with skin-effect ...
African Journals Online (AJOL)
The conventional method of analysis of asynchronous machines fails to give accurate results, especially when the machine is operated at high rotor frequency. At high rotor frequency, skin effect dominates, causing the rotor impedance to be frequency dependent. This paper therefore presents an improved method of ...
Modelling and Simulation of a Synchronous Machine with Power Electronic Systems
DEFF Research Database (Denmark)
Chen, Zhe; Blaabjerg, Frede
2005-01-01
This paper reports the modelling and simulation of a synchronous machine with a power electronic interface in a direct phase model. The implementation of a direct phase model of synchronous machines in MATLAB/SIMULINK is presented. The power electronic system associated with the synchronous machine is modelled in SIMULINK as well. The resulting model can more accurately represent non-ideal situations such as non-symmetrical parameters of the electrical machines and unbalance conditions. The model may be used for both steady-state and large-signal dynamic analysis. This is particularly useful in systems where a detailed study is needed in order to assess the overall system stability. Simulation studies are performed under various operating conditions. It is shown that the developed model could be used for studies of various applications of synchronous machines such as in renewable and DG...
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
Energy Technology Data Exchange (ETDEWEB)
Reddy, M Mohan; Gorin, Alexander [School of Engineering and Science, Curtin University of Technology, Sarawak (Malaysia); Abou-El-Hossein, K A, E-mail: mohan.m@curtin.edu.my [Mechanical and Aeronautical Department, Nelson Mandela Metropolitan University, Port Elegebeth, 6031 (South Africa)
2011-02-15
Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important for obtaining a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate, using a micro end-milling operation.
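A common functional form for such predictive models is a power law in the cutting parameters, fitted by least squares in log space. The trial data below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical end-milling trials: cutting speed v (m/min), feed rate f
# (mm/tooth), and measured surface roughness Ra (um).
v  = np.array([20.0, 20.0, 40.0, 40.0, 60.0, 60.0])
f  = np.array([0.01, 0.03, 0.01, 0.03, 0.01, 0.03])
Ra = np.array([0.42, 0.78, 0.35, 0.66, 0.30, 0.58])

# Fit the power-law form Ra = C * v^a * f^b by linear least squares in
# log space: ln Ra = ln C + a*ln v + b*ln f.
A = np.column_stack([np.ones_like(v), np.log(v), np.log(f)])
(lnC, a, b), *_ = np.linalg.lstsq(A, np.log(Ra), rcond=None)
C = np.exp(lnC)

def predict_ra(speed, feed):
    """Predicted roughness (um) at a given speed and feed rate."""
    return C * speed**a * feed**b
```

With data of this shape the fitted exponents capture the usual trends, roughness falling with cutting speed and rising with feed rate, which is the kind of relationship a speed/feed predictive model encodes.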
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
International Nuclear Information System (INIS)
Reddy, M Mohan; Gorin, Alexander; Abou-El-Hossein, K A
2011-01-01
Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important for obtaining a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate, using a micro end-milling operation.
Modeling and simulation of five-axis virtual machine based on NX
Li, Xiaoda; Zhan, Xianghui
2018-04-01
Virtual technology is playing a growing role in the machinery manufacturing industry. In this paper, Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that the virtual simulation can be carried out without loss of accuracy. How to use the machine builder of the CAM module to define the kinematic chain and the machine components is described. The virtual machine simulation can provide users with alarm information about tool collision and overcutting during the process, and can evaluate and forecast the rationality of the technological process.
Crystal structure representations for machine learning models of formation energies
Energy Technology Data Exchange (ETDEWEB)
Faber, Felix [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Lindmaa, Alexander [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden; von Lilienfeld, O. Anatole [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Avenue Lemont Illinois 60439; Armiento, Rickard [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden
2015-04-20
We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.
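The regression machinery used in this record, kernel ridge regression with a Laplacian kernel under the Manhattan (L1) norm, can be sketched in a few lines. The feature vectors and target energies below are synthetic stand-ins, not the Materials Project data, and the hyperparameter values are illustrative:

```python
import numpy as np

def laplacian_kernel(X, Y, sigma):
    # pairwise Manhattan (L1) distances between rows of X and rows of Y
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.exp(-d / sigma)

# synthetic stand-ins for crystal descriptors and formation energies
rng = np.random.default_rng(0)
X_train = rng.random((60, 8)) * 5.0
y_train = np.sin(X_train.sum(axis=1))

# kernel ridge regression: solve (K + lambda*I) alpha = y
sigma, lam = 2.0, 1e-8
K = laplacian_kernel(X_train, X_train, sigma)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

# predict "formation energies" for unseen structures
X_test = rng.random((5, 8)) * 5.0
y_pred = laplacian_kernel(X_test, X_train, sigma) @ alpha
```

With a small ridge term the model interpolates the training set almost exactly; generalization then hinges on the representation, which is the point the abstract makes.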
Kelouaz, Moussa; Ouazir, Youcef; Hadjout, Larbi; Mezani, Smail; Lubin, Thiery; Berger, Kévin; Lévêque, Jean
2018-05-01
In this paper a new superconducting inductor topology intended for a synchronous machine is presented. The studied machine has a standard 3-phase armature and a new kind of 2-pole inductor (claw-pole structure) excited by two coaxial superconducting coils. The air-gap spatial variation of the radial flux density is obtained by inserting a superconducting bulk, which deflects the magnetic field produced by the coils. The complex geometry of this inductor usually requires 3D finite element (FEM) analysis. However, to avoid the long computational time inherent in 3D FEM, we propose in this work an alternative model, which uses a 3D meshed reluctance network. The results obtained with the developed model are compared to 3D FEM computations as well as to measurements carried out on a laboratory prototype. Finally, a 3D FEM study of the shielding properties of the superconducting screen demonstrates the suitability of using a diamagnetic-like model of the superconducting screen.
Energy Technology Data Exchange (ETDEWEB)
Licht, R.H.; Ramanath, S.; Simpson, M.; Lilley, E.
1996-02-01
Norton Company successfully completed the 16-month Phase I technical effort to define requirements, design, develop, and evaluate a next-generation grinding wheel for cost-effective cylindrical grinding of advanced ceramics. This program was a cooperative effort involving three Norton groups representing a superabrasive grinding wheel manufacturer, a diamond film manufacturing division and a ceramic research center. The program was divided into two technical tasks, Task 1, Analysis of Required Grinding Wheel Characteristics, and Task 2, Design and Prototype Development. In Task 1 we performed a parallel path approach with Superabrasive metal-bond development and the higher technical risk, CVD diamond wheel development. For the Superabrasive approach, Task 1 included bond wear and strength tests to engineer bond-wear characteristics. This task culminated in a small-wheel screening test plunge grinding sialon disks. In Task 2, an improved Superabrasive metal-bond specification for low-cost machining of ceramics in external cylindrical grinding mode was identified. The experimental wheel successfully ground three types of advanced ceramics without the need for wheel dressing. The spindle power consumed by this wheel during test grinding of NC-520 sialon is as much as 30% lower compared to a standard resin bonded wheel with 100 diamond concentration. The wheel wear with this improved metal bond was an order of magnitude lower than the resin-bonded wheel, which would significantly reduce ceramic grinding costs through fewer wheel changes for retruing and replacements. Evaluation of ceramic specimens from both Tasks 1 and 2 tests for all three ceramic materials did not show evidence of unusual grinding damage. The novel CVD-diamond-wheel approach was incorporated in this program as part of Task 1. The important factors affecting the grinding performance of diamond wheels made by CVD coating preforms were determined.
Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.
Suppes, Patrick
This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…
A self-calibrating robot based upon a virtual machine model of parallel kinematics
DEFF Research Database (Denmark)
Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard
2016-01-01
A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from...... a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows...
Directory of Open Access Journals (Sweden)
B. V. Phung
2017-01-01
Full Text Available The subject of this research is a new type of multirip saw machine with circular reciprocating saw blades. This machine has a number of advantages over other machines of similar purpose. The paper presents an overview of different types of sawing equipment and describes the basic characteristics of the machine under investigation. Managing the lifecycle of the considered machine in a unified information space is necessary to improve quality and competitiveness in the current production environment. Throughout this lifecycle all the members, namely designers, technologists, customers, etc., tend to optimize the overall machine design as much as possible. However, this is not always achievable; at the boundaries between phases, mismatched and even conflicting requirements arise. For example, improving the mass characteristics can lead to poor stability and rigidity of the saw blade, while increasing the rotation frequency of the machine motor to raise output reduces the stability of the saw blades, and so on. In order to provide a coherent framework for collaboration between the members of the lifecycle, the article presents a technique for constructing a mathematical model that combines all the different members' requirements in a unified information model. The article also analyzes the kinematic, dynamic and technological characteristics of the machine, and describes in detail all the controlled parameters, functional constraints and quality criteria of the machine under consideration. Analytical relationships express the functional constraints and quality criteria as functions of the controlled parameters. The proposed algorithm allows fast and exact calculation of all the functional constraints and quality criteria of the machine for a given vector of the control
Energy Technology Data Exchange (ETDEWEB)
Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)
2016-07-05
Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new models have high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF) and back propagation (BP) neural networks and the support vector machine (SVM) model can be used for gas dispersion prediction. However, the predictions from these network models, which take too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The predictions from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
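The classic Gaussian dispersion model that the Gaussian-MLA hybrid builds on can be sketched directly. The ground-reflection form below is standard; the Briggs-style rural dispersion coefficients are a common textbook choice assumed here, not values taken from the paper:

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h):
    """Ground-reflected Gaussian plume concentration.

    q: emission rate (g/s); u: wind speed (m/s); h: source height (m);
    x downwind, y crosswind, z vertical coordinates (m).
    Dispersion coefficients: illustrative Briggs rural class-D power laws.
    """
    sy = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sz = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return (q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * (np.exp(-(z - h)**2 / (2.0 * sz**2))
               + np.exp(-(z + h)**2 / (2.0 * sz**2))))

# ground-level centerline concentration 500 m downwind, 2 m/s wind
c = gaussian_plume(q=10.0, u=2.0, x=500.0, y=0.0, z=0.0, h=30.0)
```

In the hybrid scheme described above, an ML model corrects or replaces parts of this forward model, and PSO then searches source parameters (q, h, location) that best reproduce observed concentrations.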
Modelling tick abundance using machine learning techniques and satellite imagery
DEFF Research Database (Denmark)
Kjær, Lene Jung; Korslund, L.; Kjelland, V.
satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....
Quasilinear Extreme Learning Machine Model Based Internal Model Control for Nonlinear Process
Directory of Open Access Journals (Sweden)
Dazi Li
2015-01-01
Full Text Available A new strategy for internal model control (IMC) is proposed using a regression algorithm of a quasilinear model with an extreme learning machine (QL-ELM). For chemical processes with nonlinearity, the learning procedure for the internal model and the inverse model is derived. The proposed QL-ELM is constructed as a linear ARX model with a complicated nonlinear coefficient. It shows good approximation ability and fast convergence. The complicated coefficients are separated into two parts. The linear part is determined by recursive least squares (RLS), while the nonlinear part is identified through the extreme learning machine. The parameters of the linear part and the output weights of the ELM are estimated iteratively. The proposed internal model control is applied to a CSTR process. The effectiveness and accuracy of the proposed method are extensively verified through numerical results.
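The extreme learning machine at the core of QL-ELM trains only the output layer: hidden weights are random and fixed, and output weights come from a single least-squares solve. A minimal numpy sketch on synthetic data (not the authors' implementation or process model):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy nonlinear regression data standing in for process measurements
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

# ELM: random, untrained hidden layer + analytic output weights
n_hidden = 100
W = rng.standard_normal((2, n_hidden))   # random input weights (fixed)
b = rng.standard_normal(n_hidden)        # random biases (fixed)
H = np.tanh(X @ W + b)                   # hidden-layer activations

# output weights via least squares: the only "training" step
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because only `beta` is fitted, training is one linear solve, which is the "fast convergence" the abstract refers to.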
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One application of modern technology is CNC machining; turning is one of the machining processes that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
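One of the baselines named here, naïve Bayes spam classification, can be sketched compactly; the six-document corpus below is a made-up toy, not data from the study:

```python
from collections import Counter
import math

# tiny labelled corpus (hypothetical); 1 = spam, 0 = ham
docs = [("win money now", 1), ("cheap money offer", 1),
        ("meeting at noon", 0), ("project meeting notes", 0),
        ("win cheap offer now", 1), ("lunch at noon", 0)]

spam_words = Counter(w for d, y in docs if y == 1 for w in d.split())
ham_words = Counter(w for d, y in docs if y == 0 for w in d.split())
vocab = set(spam_words) | set(ham_words)
n_spam = sum(spam_words.values())
n_ham = sum(ham_words.values())
p_spam = sum(y for _, y in docs) / len(docs)

def spam_score(text):
    # log-odds of spam vs ham with Laplace (add-one) smoothing
    log_odds = math.log(p_spam / (1.0 - p_spam))
    for w in text.split():
        pw_s = (spam_words[w] + 1) / (n_spam + len(vocab))
        pw_h = (ham_words[w] + 1) / (n_ham + len(vocab))
        log_odds += math.log(pw_s / pw_h)
    return log_odds
```

A positive score classifies the text as spam. Methods of this kind need many labelled examples to estimate the word probabilities well, which is exactly the data hunger the cognitive-bias models above aim to reduce.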
Directory of Open Access Journals (Sweden)
Qiang Shang
Full Text Available Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China, and the SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
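The SSA denoising step (embed the series in a trajectory matrix, truncate its SVD, diagonally average back to a series) can be sketched as follows; the window length, rank and toy series are illustrative choices, not values from the paper:

```python
import numpy as np

def ssa_denoise(x, window, rank):
    # 1) embed the series into a Hankel (trajectory) matrix
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])
    # 2) keep only the leading singular components
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # 3) diagonal averaging (Hankelisation) back to a 1-D series
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):
        out[j:j + window] += H_r[:, j]
        cnt[j:j + window] += 1
    return out / cnt

t = np.linspace(0.0, 4.0 * np.pi, 400)
clean = np.sin(t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(400)
denoised = ssa_denoise(noisy, window=60, rank=2)
```

A pure sinusoid occupies only two singular components of the trajectory matrix, so `rank=2` recovers it while discarding most of the noise; the denoised series is what SSA-KELM would pass on to the KELM predictor.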
International Nuclear Information System (INIS)
Shi Chunsheng; Meng Dapeng
2011-01-01
The prediction indices for supply risk are developed based on factor identification in the nuclear equipment manufacturing industry. A supply risk prediction model is established with support vector machine and decision tree methods, based on an investigation of 3 important nuclear power equipment manufacturing enterprises and 60 suppliers. A final case study demonstrates that the combination model is better than a single prediction model, and demonstrates the feasibility and reliability of this model, which provides a method to evaluate suppliers and measure supply risk. (authors)
Developing robust arsenic awareness prediction models using machine learning algorithms.
Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet
2018-04-01
Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and its associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals' having higher social network positively contributed to arsenic awareness in the communities. Results indicated that both the SVM and the RF algorithms outperformed at overall prediction of arsenic awareness-a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a
Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration
Directory of Open Access Journals (Sweden)
Benjamin Montavon
2018-02-01
Full Text Available Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary methods of compensation to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows for fast and automated online measurements of the axes’ motion errors and thermal conditions with comparable accuracy, lower cost, and smaller dimensions as compared to state-of-the-art optical measuring instruments for offline machine tool calibration. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.
Towards an automatic model transformation mechanism from UML state machines to DEVS models
Directory of Open Access Journals (Sweden)
Ariel González
2015-08-01
Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) either to build or to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and the control system of an elevator are presented.
Modelling injection moulding machines for micro manufacture applications through functional analysis
DEFF Research Database (Denmark)
Fantoni, G.; Tosello, Guido; Gabelloni, D.
2012-01-01
The paper presents the analysis of an injection moulding machine using functional analysis to identify both its critical components and possible working problems when such a machine is employed for the production of polymer-based micro products. The step-by-step procedure starts from the study...... of the process phases of a machine and then it employs functional analysis to decompose the phases and attributes functions to part features. Part features are subsequently analyzed to understand the causal chains bringing either to the desired behaviour or to failures to avoid. The assessment of the design...... solution is finally performed by gathering quantitative data from experiments. The case study investigates the design motivations and functional drivers of a micro injection moulding machine. The analysis allows identifying the correlations between failures and advantages with the design of the machine...
Fast algorithms for transport models. Final report
International Nuclear Information System (INIS)
Manteuffel, T.A.
1994-01-01
This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
Underlying finite state machine for the social engineering attack detection model
CSIR Research Space (South Africa)
Mouton, Francois
2017-08-01
Full Text Available one to have a clearer overview of the mental processing performed within the model. While the current model provides a general procedural template for implementing detection mechanisms for social engineering attacks, the finite state machine provides a...
Machine learning in updating predictive models of planning and scheduling transportation projects
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption has been found using analysis of variance. The validity of the developed empirical model is confirmed through confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
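A second-order response surface of the kind described here can be fitted by ordinary least squares. The turning "experiments" below are synthetic, generated from an assumed quadratic law plus noise, not the AISI 6061 measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic turning experiments: cutting speed v (m/min), feed f (mm/rev)
v = rng.uniform(100.0, 300.0, 30)
f = rng.uniform(0.05, 0.30, 30)
# assumed underlying power law (kW), purely illustrative, plus noise
P = (0.5 + 0.004 * v + 2.0 * f + 1e-5 * v**2 + 3.0 * f**2
     + 0.01 * v * f + 0.02 * rng.standard_normal(30))

# second-order response surface: P ~ b0 + b1 v + b2 f + b3 v^2 + b4 f^2 + b5 v f
A = np.column_stack([np.ones_like(v), v, f, v**2, f**2, v * f])
coef, *_ = np.linalg.lstsq(A, P, rcond=None)
residual = P - A @ coef
```

The fitted `coef` gives the empirical model; minimizing it over the feasible (v, f) window is then the "minimum power consumption criterion" step, which the paper handles with a desirability function.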
Fishery landing forecasting using EMD-based least square support vector machine models
Shabri, Ani
2015-05-01
In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically for modeling fishery landings, which form highly nonlinear, non-stationary and seasonal time series that can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landings is obtained by aggregating all the forecasting results of the sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.
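The LSSVM component can be written down directly from Suykens' dual formulation, a single linear system instead of a quadratic program. The monthly series below is a synthetic seasonal stand-in for the fishery landing data, and the lag structure and hyperparameters are illustrative:

```python
import numpy as np

def lssvm_fit(X, y, gamma, sigma):
    # RBF kernel matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * sigma**2))
    n = len(y)
    # Suykens' LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights a

def lssvm_predict(X_new, X, b, a, sigma):
    d2 = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ a + b

# synthetic monthly "landings": trend-free seasonal cycle
t = np.arange(120, dtype=float)
series = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 12.0)

# one-step-ahead task from two lagged values
X = np.column_stack([series[:-2], series[1:-1]])
y = series[2:]
b, a = lssvm_fit(X, y, gamma=100.0, sigma=2.0)
y_hat = lssvm_predict(X, X, b, a, sigma=2.0)
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

In the full EMD-LSSVM pipeline, one such model is fitted per EMD sub-series and the sub-forecasts are summed.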
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identical machines, a low-utilization characteristic and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solver method. We use fixed delivery time as the main constraint and different processing times for each job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery time is used as a constraint.
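For intuition about the objective, total tardiness on a toy single-machine instance can be evaluated by brute force over job orders (the paper's actual model is an ILP over non-identical machines solved by branch and bound; this sketch only illustrates the tardiness measure, and the instance data are invented):

```python
from itertools import permutations

# toy single-machine instance: (processing_time, due_date) per job
jobs = [(4, 5), (2, 6), (6, 11), (3, 8)]

def total_tardiness(order):
    t, tard = 0, 0
    for p, d in order:
        t += p                      # completion time of this job
        tard += max(0, t - d)       # tardiness = max(0, completion - due)
    return tard

# exhaustive search over the 4! = 24 sequences
best = min(permutations(jobs), key=total_tardiness)
```

Branch and bound explores this same permutation tree but prunes subtrees whose lower bound already exceeds the best solution found, which is what makes the ILP formulation tractable at realistic sizes.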
Salameh , Farah; Picot , Antoine; Chabert , Marie; Maussion , Pascal
2017-01-01
International audience; This paper describes an original statistical approach for the lifespan modeling of electric machine insulation materials. The presented models aim to study the effect of three main stress factors (voltage, frequency and temperature) and their interactions on the insulation lifespan. The proposed methodology is applied to two different insulation materials tested in partial discharge regime. Accelerated ageing tests are organized according to experimental optimization m...
International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines
Belyaev, Alexander; Krommer, Michael
2017-01-01
The papers in this volume present and discuss the frontiers in the mechanics of controlled machines and structures. They are based on papers presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines held in Vienna in September 2015. The workshop continues a series of international workshops held in Linz (2008) and St. Petersburg (2010).
Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.
2016-12-01
Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on the one hand, and the highly nonlinear runoff response mechanism on the other, make it difficult to transpose what has been learnt at one specific hillslope to another. Making reliable predictions of runoff appearance or river flow for a given hillslope is therefore a challenge. Classic model calibration (based on inverse-problem techniques) must be repeated for each specific hillslope and requires calibration data, which is impractical when applied to thousands of cases. Here we propose a novel modeling framework that couples process-based models with a data-based approach. First, we develop a mechanistic model, based on the hillslope storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second, we set up a model database representing thousands of non-calibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations) and different hydrologic forcing terms (i.e. different infiltration chronicles). We then use this model library to train a machine learning model on this physically based database. Machine learning model performance is assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning outputs with mechanistic outputs). Finally, we use the machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.
A 3D finite element model for the vibration analysis of asymmetric rotating machines
Energy Technology Data Exchange (ETDEWEB)
Prabel, B.; Combescure, D. [CEA Saclay, DEN, DM2S, SEMT, DYN, F-91191 Gif Sur Yvette (France); Lazarus, A. [Ecole Polytech, Mecan Solides Lab, F-91128 Palaiseau (France)
2010-07-01
This paper suggests a 3D finite element method based on the modal theory in order to analyse linear periodically time-varying systems. Presentation of the method is given through the particular case of asymmetric rotating machines. First, Hill governing equations of asymmetric rotating oscillators with two degrees of freedom are investigated. These differential equations with periodic coefficients are solved with classic Floquet theory leading to parametric quasi-modes. These mathematical entities are found to have the same fundamental properties as classic Eigenmodes, but contain several harmonics possibly responsible for parametric instabilities. Extension to the vibration analysis (stability, frequency spectrum) of asymmetric rotating machines with multiple degrees of freedom is achieved with a fully 3D finite element model including stator and rotor coupling. Due to Hill expansion, the usual degrees of freedom are duplicated and associated with the relevant harmonic of the Floquet solutions in the frequency domain. Parametric quasi-modes as well as steady-state response of the whole system are ingeniously computed with a component-mode synthesis method. Finally, experimental investigations are performed on a test rig composed of an asymmetric rotor running on non-isotropic supports. Numerical and experimental results are compared to highlight the potential of the numerical method. (authors)
Absorption machines for heating and cooling in future energy systems - Final report
Energy Technology Data Exchange (ETDEWEB)
Tozer, R.; Gustafsson, M.
2000-12-15
safety etc. The lack of trained maintenance people is a major concern. A brief survey of R&D activities is given in Chapter 5. In Chapter 6, Future Opportunities, an analysis of the market factors shows that the market pull favours sorption technologies in different ways. A chart shows, in decreasing order of market pull, the following technologies: Direct-fired or boiler-driven absorption chillers; and Absorption chillers driven by waste heat, heat recovery or combined heat and power (CHP) systems. Far less prominent are: Absorption heat pumps; then Absorption heat transformers; and finally Adsorption chillers. The chart shows that environmental benefit is inversely proportional to the market pull and share of sorption technologies. The main market barriers are considered to be the relatively high first costs of absorption plant, and the lack of knowledge of sorption technology by technicians, engineers, and professionals. In practice, different technologies are found to be most suitable for different countries, mainly depending on their energy infrastructure and particularly on how the country's electricity is produced. Recommendations are made regarding policies to promote sorption technology (where it is environmentally beneficial), and with regard to future R&D. In Chapter 7, Conclusions, it is emphasised that the application of sorption technology is not in all cases the best choice for the environment. Detailed conclusions are that: Direct-fired chillers should be phased out slowly, in the short and medium term; Encouragement should be given to absorption and adsorption chillers using waste heat, heat recovery or applied heat; Sorption chillers applied to CHP systems have an existing market pull, and benefit the environment. However, the overall efficiency has to be relatively high with respect to each nation's power production, throughout the life of the system; Absorption heat pumps (including reversible heat pumps) will be available on the market in
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
The Effect of Unreliable Machine for Two Echelons Deteriorating Inventory Model
Directory of Open Access Journals (Sweden)
I Nyoman Sutapa
2014-01-01
Many researchers have developed two-echelon supply chain models; however, only a few of them consider deteriorating items and an unreliable machine. In this paper, we develop a deteriorating inventory model for a two-echelon supply chain with an unreliable machine. The machine's unreliable time is assumed to be uniformly distributed. The model is solved using a simple heuristic, since a closed-form model cannot be derived. A numerical example is used to show how the model works. A sensitivity analysis is conducted to show the effect of different lost-sales costs in the model. The results show that increasing the lost-sales cost increases both the manufacturer's and the buyer's costs; however, the buyer's total cost increases more than the manufacturer's total cost as the manufacturer's machine becomes more unreliable.
International Nuclear Information System (INIS)
El-Berry, A.; El-Berry, A.; Al-Bossly, A.
2010-01-01
In machining operations, the quality of the surface finish is an important requirement for many workpieces. Thus, it is very important to optimize cutting parameters to control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters during the turning process. In the development of predictive models, the cutting parameters of feed, cutting speed and depth of cut are considered as model variables. For this purpose, this study focuses on comparing various machining experiments using a CNC vertical machining center with aluminum 6061 workpieces. Multiple regression models are used to predict the surface roughness in the different experiments.
Fleet replacement modeling : final report, July 2009.
2009-07-01
This project focused on two interrelated areas in equipment replacement modeling for fleets. The first area was research-oriented and addressed a fundamental assumption in engineering economic replacement modeling that all assets providing a similar ...
Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.
Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi
2017-10-20
Quality prediction of virtual views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.
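The regression step of such a pipeline, mapping distortion features to a PSNR value, can be sketched with a support vector machine. The features and coefficients below are synthetic stand-ins, not the paper's data or its exact model:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in: distortion-related features -> virtual-view PSNR (dB).
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 4))  # e.g. depth error, tolerance margin, etc.
y = 40 - 12 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 0.5, 200)

# Train on the first 150 samples, evaluate prediction error on the rest.
svr = SVR(kernel="rbf", C=10.0).fit(X[:150], y[:150])
pred = svr.predict(X[150:])
rmse = float(np.sqrt(np.mean((pred - y[150:]) ** 2)))
print(round(rmse, 2))
```

In the paper the SVM is trained on DDT-model outputs rather than raw features; this sketch only shows the generic feature-to-PSNR regression shape.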
Mathematical model of five-phase induction machine
Czech Academy of Sciences Publication Activity Database
Schreier, Luděk; Bendl, Jiří; Chomát, Miroslav
2011-01-01
Roč. 56, č. 2 (2011), s. 141-157 ISSN 0001-7043 R&D Projects: GA ČR GA102/08/0424 Institutional research plan: CEZ:AV0Z20570509 Keywords : five-phase induction machines * symmetrical components * spatial wave harmonics Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering
Final Project Report Load Modeling Transmission Research
Energy Technology Data Exchange (ETDEWEB)
Lesieutre, Bernard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bravo, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yinger, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chassin, Dave [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Ning [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hiskens, Ian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Venkataramanan, Giri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-03-31
The research presented in this report primarily focuses on improving power system load models to better represent their impact on system behavior. The previous standard load model fails to capture the delayed voltage recovery events that are observed in the Southwest and elsewhere. These events are attributed to stalled air conditioner units after a fault. To gain a better understanding of their role in these events and to guide modeling efforts, typical air conditioner units were tested in laboratories. Using data obtained from these extensive tests, new load models were developed to match air conditioner behavior. An air conditioner model is incorporated in the new WECC composite load model. These models are used in dynamic studies of the West and can impact power transfer limits for California. Unit-level and system-level solutions are proposed as potential solutions to the delayed voltage recovery problem.
Energy Technology Data Exchange (ETDEWEB)
Herrero Alvarez, J; Diaz Diaz, J; Diaz Diaz, J L
1972-07-01
A remote arc welding machine, wholly transistorized, has been constructed for use in a hot cell of 1.000 Cu. This work presents the different parts of the equipment and a description of its electronics. Finally, some final-preparation tasks are shown, such as the ending of irradiation capsules, thermocouple welding, and stainless steel cover welding. The corresponding welding programs are quoted for these types of welding. (Author)
Mathematical models for atmospheric pollutants. Final report
International Nuclear Information System (INIS)
Drake, R.L.; Barrager, S.M.
1979-08-01
The present and likely future roles of mathematical modeling in air quality decisions are described. The discussion emphasizes models and air pathway processes rather than the chemical and physical behavior of specific anthropogenic emissions. Summarized are the characteristics of various types of models used in the decision-making processes. Specific model subclasses are recommended for use in making air quality decisions that have site-specific, regional, national, or global impacts. The types of exposure and damage models that are currently used to predict the effects of air pollutants on humans, other animals, plants, ecosystems, property, and materials are described. The aesthetic effects of odor and visibility and the impact of pollutants on weather and climate are also addressed. Technical details of air pollution meteorology, chemical and physical properties of air pollutants, solution techniques, and air quality models are discussed in four appendices bound in separate volumes
Establishment of tunnel-boring machine disk cutter rock-breaking model from energy perspective
Directory of Open Access Journals (Sweden)
Liwei Song
2015-12-01
As the most important cutting tools in the tunnel-boring machine tunneling construction process, V-type disk cutters and their rock-breaking mechanism have been researched by many scholars all over the world. Adopting the finite element method, this article focuses on the interaction between V-type disk cutters and intact rock, carrying out microscopic parameter analysis in several steps. First, the stress model of rock breaking was established through analysis of the V-type disk cutter's motion trajectory. Second, based on the incremental theorem of elastic-plastic theory, a strain model of the relative changes in rock displacement during the breaking process was created. Energy transfer rules during rock breaking were then analyzed according to the principle of admissible work, an energy method of elastic-plastic theory, with the rock-breaking force of the V-type disk cutter regarded as the external force on the rock system. Finally, taking the rock system as the reference object, a total potential energy equivalent model of the rock system was derived to obtain the forces in the three directions acting on the V-type disk cutter during the rock-breaking process. The derived model, which has been proved effective and scientific through comparisons with some original force models and through comparative analysis with experimental data, also initiates a new research strategy that takes the view of micro elastic-plastic theory to study the rock-breaking mechanism.
Non-linear hybrid control oriented modelling of a digital displacement machine
DEFF Research Database (Denmark)
Pedersen, Niels Henrik; Johansen, Per; Andersen, Torben O.
2017-01-01
Proper feedback control of digital fluid power machines (pressure, flow, torque or speed control) requires a control oriented model, from which the system dynamics can be analyzed, stability can be proven and design criteria can be specified. The development of control oriented models for hydraulic Digital Displacement Machines (DDM) is complicated by non-smooth machine behavior, where the dynamics comprises analog, digital and non-linear elements. For a full stroke operated DDM the power throughput is altered in discrete levels based on the ratio of activated pressure chambers. In this paper, a control oriented hybrid model is established, which combines the continuous non-linear pressure chamber dynamics and the discrete shaft position dependent activation of the pressure chambers. The hybrid machine model is further extended to describe the dynamics of a Digital Fluid Power...
Modelling of human-machine interaction in equipment design of manufacturing cells
Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming
2017-08-01
This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.
Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback
Jung–Min Yang
2016-01-01
Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...
Directory of Open Access Journals (Sweden)
Qiang Shang
2016-08-01
Short-term traffic flow prediction is an important part of intelligent transportation systems research and applications. To further improve the accuracy of short-term traffic flow prediction, a novel hybrid prediction model (multivariate phase space reconstruction combined with a combined kernel function least squares support vector machine) is proposed. The C-C method is used to determine the optimal time delay and the optimal embedding dimension of the traffic variables' (flow, speed, and occupancy) time series for phase space reconstruction. The G-P method is selected to calculate the correlation dimension of the attractor, an important index for judging the chaotic characteristics of the traffic variables' series. The optimal input form of the combined kernel function least squares support vector machine model is determined by multivariate phase space reconstruction, and the model's parameters are optimized by a particle swarm optimization algorithm. Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The experimental results suggest that the new proposed model yields better predictions than similar models (combined kernel function least squares support vector machine, multivariate phase space reconstruction with generalized kernel function least squares support vector machine, and phase space reconstruction with combined kernel function least squares support vector machine), which indicates that the new proposed model exhibits stronger prediction ability and robustness.
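The phase space reconstruction step (delay embedding) can be sketched in a few lines. In the paper the embedding dimension and time delay come from the C-C method; here they are fixed by hand for illustration:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Phase space reconstruction: map a 1-D series into dim-dimensional
    delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] (Takens embedding)."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

# Synthetic stand-in for a traffic flow series.
flow = np.sin(np.linspace(0, 20, 200))
X = delay_embed(flow, dim=3, tau=2)
print(X.shape)  # each row is one point in the reconstructed phase space
```

For the multivariate case, the embedded matrices of flow, speed and occupancy would be concatenated column-wise to form the model input.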
Rotating magnetizations in electrical machines: Measurements and modeling
Directory of Open Access Journals (Sweden)
Andreas Thul
2018-05-01
This paper studies the magnetization process in electrical steel sheets for rotational magnetizations as they occur in the magnetic circuit of electrical machines. A four-pole rotational single sheet tester is used to generate the rotating magnetic flux inside the sample. A field-oriented control scheme is implemented to improve the control performance. The magnetization process of different non-oriented materials is analyzed and compared.
Behavioral Modeling for Mental Health using Machine Learning Algorithms.
Srividya, M; Mohanavalli, S; Bhalaji, N
2018-04-03
Mental health is an indicator of the emotional, psychological and social well-being of an individual. It determines how an individual thinks, feels and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems that lead to mental illness, such as stress, social anxiety, depression, obsessive compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to determine the onset of mental illness in order to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms such as support vector machines, decision trees, naïve Bayes classifier, K-nearest neighbor classifier and logistic regression to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups such as high school students, college students and working professionals were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms to the target groups and also suggests directions for future work.
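The cluster-then-classify pipeline described above can be sketched as follows. The questionnaire responses are synthetic stand-ins, and logistic regression stands in for the full set of classifiers compared in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for questionnaire responses (rows = respondents).
rng = np.random.default_rng(42)
responses = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(3, 1, (100, 8))])

# Step 1: unsupervised clustering assigns provisional labels to respondents.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses)

# Step 2: the cluster labels train a supervised classifier for new respondents.
X_tr, X_te, y_tr, y_te = train_test_split(responses, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

The validation-by-Mean-Opinion-Score step from the paper is omitted here; it would sit between the two stages as a sanity check on the cluster labels.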
Rotating magnetizations in electrical machines: Measurements and modeling
Thul, Andreas; Steentjes, Simon; Schauerte, Benedikt; Klimczyk, Piotr; Denke, Patrick; Hameyer, Kay
2018-05-01
This paper studies the magnetization process in electrical steel sheets for rotational magnetizations as they occur in the magnetic circuit of electrical machines. A four-pole rotational single sheet tester is used to generate the rotating magnetic flux inside the sample. A field-oriented control scheme is implemented to improve the control performance. The magnetization process of different non-oriented materials is analyzed and compared.
Respiratory trace deposition models. Final report
International Nuclear Information System (INIS)
Yeh, H.C.
1980-03-01
Respiratory tract characteristics of four mammalian species (human, dog, rat and Syrian hamster) were studied using replica lung casts. An in situ casting technique was developed for making the casts. Based on an idealized branch model, over 38,000 records of airway segment diameters, lengths, branching angles and gravity angles were obtained from measurements of two humans, two Beagle dogs, two rats and one Syrian hamster. From examination of the trimmed casts and morphometric data, it appeared that the structure of the human airway is closer to dichotomous, whereas for the dog, rat and hamster it is monopodial. Flow velocity in the trachea and major bronchi of living Beagle dogs was measured using an implanted, subminiaturized, heated-film anemometer. A physical model was developed to simulate the regional deposition characteristics proposed by the Task Group on Lung Dynamics of the ICRP. Various simulation modules for the nasopharyngeal (NP), tracheobronchial (TB) and pulmonary (P) compartments were designed and tested. Three types of monodisperse aerosols were developed for animal inhalation studies. Fifty Syrian hamsters and 50 rats were exposed to five different sizes of monodisperse fused aluminosilicate particles labeled with ¹⁶⁹Yb. Anatomical lung models were developed for the four species (human, Beagle dog, rat and Syrian hamster) based on detailed morphometric measurements of replica lung casts. Emphasis was placed on developing a lobar typical-path lung model and a modeling technique that could be applied to various mammalian species. A set of particle deposition equations for deposition caused by inertial impaction, sedimentation and diffusion was developed. Theoretical models of particle deposition were developed based on these equations and on the anatomical lung models.
Equivalent model of a dually-fed machine for electric drive control systems
Ostrovlyanchik, I. Yu; Popolzin, I. Yu
2018-05-01
The article shows that the mathematical model of a dually-fed machine is complicated by the presence of a controlled voltage source in the rotor circuit. To obtain the mathematical model, the method of a generalized two-phase electric machine is applied, and a rotating orthogonal coordinate system associated with the representing vector of the stator current is chosen. In the chosen coordinate system, the differential equations of electric equilibrium for the windings of the generalized machine (the Kirchhoff equations) are written in operator form, together with the expression for the torque, which determines the electromechanical energy conversion in the machine. The equations are transformed so that they connect the winding currents that determine the machine's torque with the voltages on these windings. A structural diagram of the machine corresponds to the written equations. Based on these equations and the accepted assumptions, expressions were obtained for balancing the EMF of the windings, and on the basis of these expressions an equivalent mathematical model of a dually-fed machine is proposed, convenient for use in electric drive control systems.
Prediction Model of Machining Failure Trend Based on Large Data Analysis
Li, Jirong
2017-12-01
Mechanical machining has high complexity, strong coupling, and many control factors, so the machining process is prone to failure. To improve the accuracy of fault detection for large mechanical equipment, this paper studies fault trend prediction in machining and establishes a machining fault trend prediction model based on fault data. Genetic-algorithm-based K-means clustering is used to process the machining data and to extract features reflecting the correlation dimension of faults. The spectral characteristics of abnormal vibrations arising during the machining of complex mechanical parts are analyzed, and features are extracted using multi-component spectral decomposition and Hilbert-based empirical mode decomposition. The decomposition results are used to build the database of an intelligent expert system, which is combined with big data analysis methods to realize machining fault trend prediction. The simulation results show that this method predicts fault trends in mechanical machining accurately and judges faults in the machining process well, so it has good application value for analysis and fault diagnosis in the machining process.
A comparison of machine learning and Bayesian modelling for molecular serotyping.
Newton, Richard; Wernisch, Lorenz
2017-08-11
Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprise samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data for serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set, the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data, the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological...
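The construction of artificial mixture training data can be illustrated by combining raw single-serotype arrays. The max-combination rule below is one plausible choice of mixing rule for illustration, not necessarily the authors' procedure:

```python
import numpy as np

# Hypothetical single-serotype array intensities (one value per probe).
rng = np.random.default_rng(7)
serotype_a = rng.uniform(0, 1, 50)
serotype_b = rng.uniform(0, 1, 50)

def synthetic_mixture(raw_a, raw_b, weight=0.5):
    """Create an artificial two-serotype training sample from raw
    single-serotype arrays: each probe takes the stronger (weighted)
    signal of the two serotypes, mimicking co-hybridization."""
    return np.maximum(weight * raw_a, (1 - weight) * raw_b)

mix = synthetic_mixture(serotype_a, serotype_b)
print(mix.shape)
```

Iterating this over pairs (and higher-order tuples) of single-serotype arrays yields a mixture training set without requiring physical mixed samples.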
Multi products single machine economic production quantity model with multiple batch size
Directory of Open Access Journals (Sweden)
Ata Allah Taleizadeh
2011-04-01
In this paper, a multi-product single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost items such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, the extended cutting plane method and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and evaluate the performance of the proposed model.
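The shared-cycle idea can be illustrated with the textbook common-cycle multi-product EPQ formula. The cost figures below are hypothetical, and the paper's full mixed-integer model additionally handles discrete delivery and transportation costs:

```python
from math import sqrt

# Hypothetical data for three products on one machine:
# demand D, production rate P, setup cost A, holding cost h (per unit per year).
products = [
    {"D": 1200, "P": 6000, "A": 50.0, "h": 2.0},
    {"D": 800,  "P": 4000, "A": 80.0, "h": 1.5},
    {"D": 500,  "P": 5000, "A": 60.0, "h": 3.0},
]

# Classic common-cycle solution: one cycle length T shared by all products,
# T* = sqrt(2 * sum(A_i) / sum(h_i * D_i * (1 - D_i / P_i))).
num = 2 * sum(p["A"] for p in products)
den = sum(p["h"] * p["D"] * (1 - p["D"] / p["P"]) for p in products)
T = sqrt(num / den)

# Feasibility check: total fraction of machine time needed must stay below 1.
utilization = sum(p["D"] / p["P"] for p in products)
print(round(T, 3), round(utilization, 3))
```

Each product's lot size then follows as D_i * T, which is the continuous baseline that the paper's discrete-delivery model refines.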
Czernecki, Bartosz; Nowosad, Jakub; Jabłońska, Katarzyna
2018-04-01
Changes in the timing of plant phenological phases are important proxies in contemporary climate research. However, most of the commonly used traditional phenological observations do not give any coherent spatial information. While consistent spatial data can be obtained from airborne sensors and preprocessed gridded meteorological data, not many studies robustly benefit from these data sources. Therefore, the main aim of this study is to create and evaluate different statistical models for reconstructing and predicting phenological phases, and for improving the quality of phenological monitoring, with the use of satellite and meteorological products. A quality-controlled dataset of 13 BBCH plant phenophases in Poland was collected for the period 2007-2014. For each phenophase, statistical models were built using the most commonly applied regression-based machine learning techniques, such as multiple linear regression, the lasso, principal component regression, generalized boosted models, and random forests. The quality of the models was estimated using k-fold cross-validation. The results showed varying potential for coupling meteorologically derived indices with remote sensing products in phenological modeling; however, applying both data sources improves the models' accuracy by 0.6 to 4.6 days in terms of RMSE. Robust prediction of early phenological phases is mostly related to meteorological indices, whereas for autumn phenophases there is a stronger information signal provided by satellite-derived vegetation metrics. Choosing a specific set of predictors and applying robust preprocessing procedures is more important for the final results than the selection of a particular statistical model. The average RMSE for the best models across all phenophases is 6.3 days, while individual RMSEs vary seasonally from 3.5 to 10 days. The models give a reliable proxy for ground observations, with RMSE below 5 days for early spring and late spring phenophases. For...
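The k-fold comparison of regression-based learners can be sketched with scikit-learn. The predictors and response below are synthetic stand-ins for the meteorological indices, vegetation metrics and phenophase onset dates used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: predictors (e.g. degree days, precipitation, NDVI)
# and a response in day-of-year of phenophase onset.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 120 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, 300)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
results = {}
for name, model in models.items():
    # 5-fold cross-validated RMSE, the same evaluation metric as the study.
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    results[name] = rmse
    print(name, round(rmse, 1))
```

On real phenological data the ranking of learners would differ; the point is only the shape of the model-comparison loop under a common RMSE criterion.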
Final model of multicriterion evaluation of animal welfare
DEFF Research Database (Denmark)
Bonde, Marianne; Botreau, R; Bracke, MBM
One major objective of Welfare Quality® is to propose harmonized methods for the overall assessment of animal welfare on farm and at slaughter that are science based and meet societal concerns. Welfare is a multidimensional concept and its assessment requires measures of different aspects. Welfare Quality® proposes a formal evaluation model whereby the data on animals or their environment are transformed into value scores that reflect compliance with 12 subcriteria and 4 criteria of good welfare. Each animal unit is then allocated to one of four categories: excellent welfare, enhanced welfare, acceptable welfare and not classified. This evaluation model is tuned according to the views of experts from animal and social sciences, and stakeholders.
A modeling method for hybrid energy behaviors in flexible machining systems
International Nuclear Information System (INIS)
Li, Yufeng; He, Yan; Wang, Yan; Wang, Yulin; Yan, Ping; Lin, Shenlong
2015-01-01
Increasing environmental and economic pressures have led to great concerns regarding the energy consumption of machining systems. Understanding the energy behaviors of flexible machining systems is a prerequisite for improving their energy efficiency. This paper proposes a modeling method to predict energy behaviors in flexible machining systems. The hybrid energy behaviors not only depend on the technical specifications of machine tools and workpieces, but are also significantly affected by individual production scenarios. In the method, hybrid energy behaviors are decomposed into structure-related, state-related, process-related and assignment-related energy behaviors. The modeling method for the hybrid energy behaviors is based on Colored Timed Object-oriented Petri Nets (CTOPN). The former two types of energy behaviors are modeled by constructing the structure of the CTOPN, whilst the latter two types are simulated by applying colored tokens and associated attributes. Machining of two workpieces in the experimental workshop was undertaken to verify the proposed modeling method. The results showed that the method can provide multi-perspective transparency on energy consumption related to machine tools, workpieces and production management, and is particularly suitable for flexible manufacturing systems, where frequent changes in machining systems are encountered. - Highlights: • Energy behaviors in flexible machining systems are modeled in this paper. • Hybrid characteristics of energy behaviors are examined from multiple viewpoints. • Flexible modeling method CTOPN is used to predict the hybrid energy behaviors. • This work offers a multi-perspective transparency on energy consumption.
DEFF Research Database (Denmark)
Chen, Zhe; Blaabjerg, Frede; Iov, Florin
2005-01-01
A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of the machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the compl...... synchronous machine and the implemented model could be used for studies of various applications of synchronous machines including in renewable and DG generation systems....
Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann
2003-01-01
Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
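The decision tree induction mentioned above can be illustrated in its simplest form, a depth-1 tree (decision stump) induced over a single score; the curriculum scores and pass/fail labels below are hypothetical, not the study's CDL data.

```python
import numpy as np

def best_stump(x, y):
    """Induce a depth-1 decision tree (stump): find the threshold on a single
    score that best separates pass (1) from fail (0) outcomes."""
    best_t, best_acc = float(x[0]), 0.0
    for t in np.sort(x):
        pred = (x >= t).astype(int)   # predict "pass" at or above the threshold
        acc = float(np.mean(pred == y))
        if acc > best_acc:
            best_acc, best_t = acc, float(t)
    return best_t, best_acc

# Hypothetical curriculum scores and pass/fail labels.
scores = np.array([55, 60, 62, 70, 75, 80, 85, 90], dtype=float)
passed = np.array([0, 0, 0, 1, 1, 1, 1, 1])
t, acc = best_stump(scores, passed)
print(t, acc)  # → 70.0 1.0
```

Full tree induction repeats this search recursively on each side of the split; genetic programming instead evolves whole classification expressions.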
Jain, Madhu; Meena, Rakesh Kumar
2018-03-01
A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to repair duty when a pre-specified backlog of failed machines has built up: the first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS). The numerical results obtained by the Runge-Kutta approach are validated against computational results generated by ANFIS.
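The Runge-Kutta treatment of such a Markov machine-repair model can be sketched as below; this is a toy single-server birth-death chain with illustrative rates, not the paper's bi-level two-server model, integrated with classical fourth-order Runge-Kutta.

```python
import numpy as np

# Generator matrix Q for a toy machine-repair birth-death chain
# (state = number of failed machines; rates are illustrative only).
lam, mu, N = 0.5, 1.0, 3
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = (N - n) * lam     # another working machine fails
    if n > 0:
        Q[n, n - 1] = mu                # the server repairs one machine
    Q[n, n] = -Q[n].sum()               # rows of a generator sum to zero

def rk4_step(p, h):
    """One classical fourth-order Runge-Kutta step for dp/dt = p Q."""
    k1 = p @ Q
    k2 = (p + h / 2 * k1) @ Q
    k3 = (p + h / 2 * k2) @ Q
    k4 = (p + h * k3) @ Q
    return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.0, 0.0, 0.0])      # start with all machines working
for _ in range(2000):                    # integrate toward steady state
    p = rk4_step(p, 0.01)
mean_queue = float(np.dot(p, np.arange(N + 1)))
print(round(p.sum(), 6), round(mean_queue, 3))
```

Because each row of Q sums to zero, the probability mass is conserved by the integration, which is a useful sanity check on the governing equations.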
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The aim was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine (WSVM) model is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel function performed better than RBF, while the WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
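The wavelet-analysis half of such a hybrid can be illustrated with the simplest case, a single-level Haar discrete wavelet transform; the numbers below are arbitrary stand-ins for a monthly arrivals series, and a real WSVM would feed the resulting sub-series into an SVM.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: split a series into
    a low-frequency approximation and a high-frequency detail signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs the original series."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

series = np.array([120.0, 130.0, 128.0, 150.0, 160.0, 155.0, 170.0, 180.0])
a, d = haar_dwt(series)
print(np.allclose(haar_idwt(a, d), series))  # → True
```

Perfect reconstruction is what makes the decomposition safe to use as a preprocessing step: nothing in the series is lost, only reorganized by frequency.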
Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip
2015-11-01
Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules which are mostly unknown; an intelligent algorithm is therefore needed to avoid resource-intensive, lab-based trial-and-error experiments when determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To make the problem tractable, the overexpression level was categorized as low, medium and high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from related literature. Then, a representative subset of the generated features was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the small, imbalanced multi-class dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar
2017-12-01
Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.
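The area under the ROC curve (AUC) used above to rank the models can be computed directly from its rank (Mann-Whitney) formulation; the susceptibility scores and gully/non-gully labels below are hypothetical.

```python
def auc(scores, labels):
    """AUC via the rank formulation: the probability that a randomly chosen
    positive cell is scored higher than a randomly chosen negative cell."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical susceptibility scores for gully (1) / non-gully (0) cells.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]
print(round(auc(scores, labels), 4))  # → 0.8889
```

An AUC of 0.5 is chance level and 1.0 is perfect ranking, which is why values "well above 0.9" indicate excellent discrimination between gully and non-gully cells.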
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade of a hypervelocity grinding wheel may be damaged when the rotation rate of the machine spindle is too high, and may then fly out; as a projectile, its speed can severely endanger personnel in the field. A critical-thickness model for the protective plate of a high-speed machine is studied in this paper. For ease of analysis, the shapes of possible impact objects flying from the high-speed machine are simplified into sharp-nose, ball-nose and flat-nose models, whose front-end shapes represent point, line and surface contact, respectively. Impact analysis based on the Johnson-Cook (J-C) model is performed for low-carbon steel plates of different thicknesses. A critical-thickness computational model for the protective plate of a high-speed machine is established from the damage characteristics of the thin plate, relating plate thickness to the mass, shape, size and impact speed of the impact object. An air cannon is used for impact tests, and the accuracy of the model is validated. This model can guide selection of the thickness of the single-layer outer protective plate of a high-speed machine.
Bayesian networks modeling for thermal error of numerical control machine tools
Institute of Scientific and Technical Information of China (English)
Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN
2008-01-01
The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior in a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method describes causal relationships among factors inducing thermal deformation by graph theory and estimates the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method can adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.
Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools
Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu
2018-03-01
Thermal error is the main factor affecting the accuracy of precision machining. Through experiments, this paper studies thermal error testing and intelligent modeling for the spindle of vertical high-speed CNC machine tools, a current focus of research on machine tool thermal error. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inversion prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.
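The principal component analysis step used above to reduce the temperature measuring points can be sketched via the SVD; the sensor readings below are synthetic (one dominant heat source plus noise), not the paper's measurements.

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD: project correlated sensor readings onto a few
    uncorrelated components, and report each component's variance share."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    ratio = var / var.sum()
    return Xc @ Vt[:n_components].T, ratio

# Synthetic stand-in for 7 correlated spindle temperature sensors driven
# by a single dominant heat source.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = t @ rng.normal(size=(1, 7)) + 0.05 * rng.normal(size=(100, 7))
Z, ratio = pca(X, 2)
print(Z.shape, round(float(ratio[0]), 2))
```

When one heat source dominates, the first component captures almost all the variance, which justifies feeding only a few components (rather than all 7 sensors) into the neural network.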
Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods
Shan, Min
2017-01-01
With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted for churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining models of churn prediction and apply them on fitness industry. Forward selection is chosen as feature selection methods. Support Vector ...
Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.
2016-12-01
A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age were assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM) was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60 year old water. Twenty-two variables were selected for the final model and final model errors for log-transformed hold-out data were R squared of 0.45 and root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model and when included decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60 year old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in
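The gradient boosting machine (GBM) principle used above can be shown in a hand-rolled miniature: stumps fitted sequentially to residuals under squared loss. The 1-D sine data are a toy stand-in, not the nitrate dataset.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump - the weak learner inside a GBM."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

def gbm(x, y, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: repeatedly fit stumps to the
    current residuals and add a damped correction to the prediction."""
    pred = np.full_like(y, y.mean())
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + 0.1 * rng.normal(size=200)
pred = gbm(x, y)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
print(round(rmse, 3))
```

Each round provably lowers the training error, and summing a stump's importance over rounds is essentially how GBM produces the predictor-variable ranking the study used for screening.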
Magana-Mora, Arturo
2017-04-29
Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often result in extremely intricate ML models. Frequently, these models may have poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plant genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Moreover, results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve
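The GA-based wrapper for feature-subset selection described above can be sketched minimally; the fitness function here is a toy stand-in (rewarding three hypothetical informative features and penalizing subset size) for the cross-validated accuracy a real wrapper would compute.

```python
import random

def fitness(mask):
    """Toy fitness standing in for cross-validated accuracy: reward the
    (hypothetical) informative features {0, 3, 5}, penalise subset size."""
    useful = {0, 3, 5}
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & useful) - 0.1 * len(chosen)

def ga_select(n_feat=8, pop_size=20, gens=30, seed=4):
    """Minimal GA: elitist selection, one-point crossover, bit-flip mutation."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rnd.sample(parents, 2)
            cut = rnd.randrange(1, n_feat)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rnd.random() < 0.2:                # occasional bit-flip mutation
                i = rnd.randrange(n_feat)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga_select()
print(best, round(fitness(best), 2))
```

Elitism guarantees the best subset found so far is never lost, so fitness is non-decreasing across generations, the property that makes such wrappers practical despite the exponential search space.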
DEFF Research Database (Denmark)
Andersen, Thomas Timm; Amor, Heni Ben; Andersen, Nils Axel
2015-01-01
and separate. In this paper, we present a data-driven methodology for separating and modelling inherent delays during robot control. We show how both actuation and response delays can be modelled using modern machine learning methods. The resulting models can be used to predict the delays as well...
Snyder, Robin M.
2015-01-01
The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced to couple the rotor MEC with the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computing time, while achieving approximately the same accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
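The progressive-sampling idea above can be sketched as a skeleton: score all candidates cheaply on small samples, keep only the most promising, and re-score survivors on larger samples. The evaluator here is a dummy (true error plus sampling noise that shrinks with sample size), standing in for real cross-validation; it is not the paper's Bayesian optimization.

```python
import random

def progressive_select(configs, evaluate, sizes=(100, 400, 1600), keep=0.5, seed=0):
    """Progressive sampling: halve the candidate pool at each sample size."""
    rnd = random.Random(seed)
    alive = list(configs)
    for n in sizes:
        # Re-score survivors on a larger sample; noise shrinks as n grows.
        scored = sorted(alive, key=lambda c: evaluate(c, n, rnd))
        alive = scored[: max(1, int(len(scored) * keep))]
    return alive[0]

# Hypothetical hyper-parameter candidates; pretend 0.3 is the best setting.
def evaluate(c, n, rnd):
    true_error = abs(c - 0.3)
    return true_error + rnd.gauss(0, 1.0 / n ** 0.5)

candidates = [0.01, 0.1, 0.3, 0.5, 1.0]
best = progressive_select(candidates, evaluate, seed=1)
print(best)
```

Because weak candidates are eliminated before the expensive large-sample evaluations, total cost grows far more slowly than evaluating every candidate on the full data set.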
A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model
Directory of Open Access Journals (Sweden)
Yanbing Liu
2014-01-01
Aimed at resolving the imbalance of resources and workloads at data centers, and the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance among virtual machines, promoting improved utilization of resources in the entire data center.
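The source/VM/destination selection step of such a workload-aware strategy can be sketched with plain Python; the host data, field names and thresholds below are hypothetical, and a real implementation would use predicted (not instantaneous) workloads to avoid reacting to momentary peaks.

```python
def choose_migration(hosts, upper=0.8):
    """Workload-aware migration sketch: pick an overloaded source host, its
    heaviest VM, and the least-loaded target that can absorb that VM."""
    load = lambda h: sum(h["vms"].values())
    overloaded = [h for h in hosts if load(h) > upper]
    if not overloaded:
        return None                               # nothing to migrate
    src = max(overloaded, key=load)               # most overloaded source
    vm = max(src["vms"], key=src["vms"].get)      # heaviest VM on the source
    targets = [h for h in hosts
               if h is not src and load(h) + src["vms"][vm] < upper]
    if not targets:
        return None                               # no host can absorb the VM
    dst = min(targets, key=load)                  # least-loaded feasible target
    return src["name"], vm, dst["name"]

hosts = [
    {"name": "h1", "vms": {"vm1": 0.5, "vm2": 0.4}},   # overloaded (0.9)
    {"name": "h2", "vms": {"vm3": 0.1}},
    {"name": "h3", "vms": {"vm4": 0.3}},
]
print(choose_migration(hosts))  # → ('h1', 'vm1', 'h2')
```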
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning technique known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
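The reservoir computing component mentioned above can be illustrated with a minimal echo state network: a fixed random recurrent network whose only trained part is a linear readout. This is a toy one-step sine predictor (dimensions and ridge parameter are illustrative), not the paper's hybrid forecaster.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res = 200
Win = rng.uniform(-0.5, 0.5, (n_res, 1))          # fixed input weights
W = rng.normal(size=(n_res, n_res))               # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(u):
    """Drive the fixed random recurrent network; collect its states."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(Win[:, 0] * ut + W @ x)
        states.append(x.copy())
    return np.array(states)

t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
S = run_reservoir(u[:-1])          # reservoir states driven by the input
target = u[1:]                     # one-step-ahead prediction target
# Only the linear readout is trained (ridge regression) - the core of
# reservoir computing; the recurrent weights are never touched.
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
rmse = float(np.sqrt(np.mean((S @ Wout - target) ** 2)))
print(round(rmse, 4))
```

In the paper's hybrid scheme, the knowledge-based model's (imperfect) forecast would be fed to the reservoir alongside the measurements, letting the readout learn to correct the mechanistic model's errors.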
Directory of Open Access Journals (Sweden)
Jian Chai
2015-01-01
This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.
A rule-based approach to model checking of UML state machines
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2017-08-01
Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by subdomain technique. • The magnetic scalar potential on rotor surface is modeled as trapezoidal distribution.
Novel Simplified Model for Asynchronous Machine with Consideration of Frequency Characteristic
Directory of Open Access Journals (Sweden)
Changchun Cai
2014-01-01
The frequency characteristic of electric equipment should be considered in the digital simulation of power systems. The traditional asynchronous machine third-order transient model excludes not only the stator transient but also the frequency characteristics, thus limiting the model's range of application and resulting in large errors under some special conditions. Based on the physical equivalent circuit and the Park model for asynchronous machines, this study proposes a novel asynchronous machine third-order transient model with consideration of the frequency characteristic. In the new definitions of variables, the voltages behind the reactance are redefined as linear equations of the flux linkages. In this way, the rotor voltage equation is not associated with the derivative terms of frequency. However, the derivative terms of frequency should not always be ignored in the application of the traditional third-order transient model. Compared with the traditional third-order transient model, the novel simplified third-order transient model with consideration of the frequency characteristic is more accurate without increasing the order or complexity. Simulation results show that the novel third-order transient model for the asynchronous machine is suitable, effective, and more accurate than the widely used traditional simplified third-order transient model under some special conditions with drastic frequency fluctuations.
Directory of Open Access Journals (Sweden)
Weide Li
2017-05-01
Full Text Available Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve a satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a low-frequency main signal and some detailed signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, the ELM is used to capture the non-linear relationship between these variables and produce the final prediction of the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM) and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the model proposed in this paper improves the accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region; accurate prediction there is of significant practical value.
Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L
2017-08-29
To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care
Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators
Directory of Open Access Journals (Sweden)
Ilias Ouachtouk
2016-01-01
Full Text Available Squirrel cage induction machines are among the most commonly used electrical drives, but like any other machine they are vulnerable to faults, and rotor faults are widespread among induction machine failures. This paper focuses on the detection of the broken rotor bar fault using multiple indicators: diagnostics of asynchronous machine rotor faults can be accomplished by analysing anomalies in local machine variables such as torque, magnetic flux, stator current and the neutral voltage signature. The aim of this research is to summarize the existing models, to develop new models of squirrel cage induction motors that take the neutral voltage into consideration, and to study the effect of broken rotor bars on electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing simulation and experimental results. The results obtained show the effectiveness of the model and enable the detection and diagnosis of these defects.
Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model
Energy Technology Data Exchange (ETDEWEB)
Sahragard, Nasrolah; Ramli, Abdul Rahman B [Institute of Advanced Technology, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Marhaban, Mohammad Hamiruce [Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Mansor, Shattri B, E-mail: sahragard@yahoo.com [Department of Civil Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia)
2011-02-15
Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, particularly those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light: it does not account for ambient light, for light reflected off the ground, or for context-specific information. A previously developed color model was tested only for a few geographical locations in North America, and its validity elsewhere in the world is questionable. Moreover, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in normalized HSV color space could be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed, suited to the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model with the CIE daylight model is discussed. The colors of matte and specular surfaces are estimated using the developed color model and a surface reflection function. The results are shown to be highly reliable.
Modeling human-machine interactions for operations room layouts
Hendy, Keith C.; Edwards, Jack L.; Beevis, David
2000-11-01
The LOCATE layout analysis tool was used to analyze three preliminary configurations for the Integrated Command Environment (ICE) of a future USN platform. LOCATE develops a cost function reflecting the quality of all human-human and human-machine communications within a workspace. This proof-of-concept study showed little difference between the efficacy of the preliminary designs selected for comparison. This was thought to be due to the limitations of the study, which included the assumption of similar size for each layout and a lack of accurate measurement data for various objects in the designs, due largely to their notional nature. Based on these results, the USN offered an opportunity to conduct a LOCATE analysis using more appropriate assumptions. A standard crew was assumed, and subject matter experts agreed on the communications patterns for the analysis. Eight layouts were evaluated with the concepts of coordination and command factored into the analysis. Clear differences between the layouts emerged. The most promising design was refined further by the USN, and a working mock-up built for human-in-the-loop evaluation. LOCATE was applied to this configuration for comparison with the earlier analyses.
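The kind of communications cost function such a layout analysis develops can be sketched generically. The form below (communication importance times Euclidean distance, summed over operator pairs) is an illustrative assumption, not LOCATE's actual formulation, and the console names and weights are made up:

```python
import math

def layout_cost(positions, comm_weight):
    """Toy layout cost in the spirit of a LOCATE-style analysis (assumed form):
    sum of communication importance x Euclidean distance over all operator pairs."""
    cost = 0.0
    for (a, b), w in comm_weight.items():
        ax, ay = positions[a]
        bx, by = positions[b]
        cost += w * math.hypot(ax - bx, ay - by)
    return cost

# two candidate layouts for three consoles; heavier weight = more frequent comms
weights = {("CO", "TAO"): 5.0, ("TAO", "EW"): 2.0, ("CO", "EW"): 1.0}
layout_a = {"CO": (0, 0), "TAO": (1, 0), "EW": (2, 0)}
layout_b = {"CO": (0, 0), "TAO": (4, 0), "EW": (5, 0)}
```

Under such a cost, layout_a is preferred because it places the most frequently communicating pair closest together.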
Magnetic saturation in semi-analytical harmonic modeling for electric machine analysis
Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.
2016-01-01
A semi-analytical method based on the harmonic modeling (HM) technique is presented for the analysis of the magneto-static field distribution in the slotted structure of rotating electric machines. In contrast to the existing literature, the proposed model does not require the assumption of infinite
Advanced induction machine model in phase coordinates for wind turbine applications
DEFF Research Database (Denmark)
Fajardo, L.A.; Iov, F.; Hansen, Anca Daniela
2007-01-01
In this paper an advanced phase coordinates squirrel cage induction machine model with time varying electrical parameters affected by magnetic saturation and rotor deep bar effects, is presented. The model uses standard data sheet for characterization of the electrical parameters, it is developed...
Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K
2015-01-01
Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk of developing HCC. PMID:24169273
Directory of Open Access Journals (Sweden)
Morvam dos Santos Netto
2014-11-01
Full Text Available Machining and industrial maintenance services include the repair (corrective maintenance) of equipment, activities involving the assembly and disassembly of equipment, fault diagnosis, machining operations, forming operations, welding processes, and the assembly and testing of equipment. This article proposes a model for assessing the quality of services provided by small machining and industrial maintenance companies, since there is a gap in the literature on this issue and because of the importance of small service companies in the socio-economic development of the country. The model is an adaptation of the SERVQUAL instrument, and the criteria determining service quality are designed according to the service cycle of a typical small machining and industrial maintenance company. In this sense, the Moments of Truth were considered in the preparation of two separate questionnaires. The first questionnaire contains 24 statements that reflect the expectations of customers, and the second contains 24 statements that measure perceptions of service performance. An additional item was included in each questionnaire to assess, respectively, the overall expectation about the services and the overall company performance. It is thus a model that considers the interfaces of the client/supplier relationship, the peculiarities of the machining and industrial maintenance service sector, and the company size.
A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction
Energy Technology Data Exchange (ETDEWEB)
Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-27
Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.
Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel
Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania
2007-05-01
Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly as it affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing since they affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate the model, the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The results obtained permit the conclusion that, in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced, and machining with honed tools having large cutting edge radii produces better results than with chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.
Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine
Directory of Open Access Journals (Sweden)
Hang-cheong Wong
2012-01-01
Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented on a real car for testing. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller used in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.
MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES
Directory of Open Access Journals (Sweden)
S. M. Aleksankov
2015-11-01
Full Text Available Subject of Research. The processes of live migration without shared storage with the pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology: it enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines to physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «move» migration) was used, which entails stopping the virtual machine's execution while it is copied to another physical server and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with the pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and service unavailability during migration. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services when migrating with technologies such as live migration with the pre-copy approach without shared storage, and move migration. The latest works assessing service unavailability time and migration time using live migration without shared storage describe experimental results from which general conclusions can be drawn about how these times change, but which do not allow their values to be predicted. Practical Significance. The proposed models can be used to predict migration time and service unavailability time, for example, when carrying out preventive or emergency work on the physical nodes in data centres.
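A minimal analytical pre-copy model in this spirit can be sketched as the common textbook recurrence (not necessarily the paper's exact equations): each round retransmits the memory dirtied during the previous round, and downtime is the final stop-and-copy of the residual dirty set. All parameter values below are illustrative:

```python
def precopy_migration(mem_mb, dirty_rate, bandwidth, max_rounds=30, stop_mb=8.0):
    """Analytical pre-copy live-migration model (assumed textbook form).
    mem_mb: VM memory (MB); dirty_rate: MB/s dirtied; bandwidth: MB/s link speed."""
    total_time = 0.0
    v = mem_mb                      # data to send this round (round 0: full RAM)
    for _ in range(max_rounds):
        t = v / bandwidth           # time to transmit this round
        total_time += t
        v = dirty_rate * t          # memory dirtied meanwhile -> next round's payload
        if v <= stop_mb:
            break
    downtime = v / bandwidth        # VM paused only for the final copy
    return total_time + downtime, downtime

# 4 GB VM, 100 MB/s dirty rate, 1000 MB/s link
total_time, downtime = precopy_migration(4096, 100, 1000)
```

The contrast with «move» migration falls out directly: move migration's downtime is the full transfer (mem_mb / bandwidth), whereas pre-copy shrinks downtime to the last round's residual at the cost of retransmitting dirtied pages.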
International Nuclear Information System (INIS)
Stern, R.E.; Song, J.; Work, D.B.
2017-01-01
The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99%, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machine and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99%, and are 1–2 orders of magnitude faster than MCS.
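The exact connectivity check that the surrogate models replace can be sketched as a plain Monte Carlo estimator with a BFS test per sampled failure realization. The toy ring network and failure probability below are illustrative assumptions, not the paper's data:

```python
import random
from collections import deque

def connected(n_nodes, edges, up, s, t):
    """BFS source-terminal connectivity over the surviving edges."""
    adj = [[] for _ in range(n_nodes)]
    for i, (u, v) in enumerate(edges):
        if up[i]:
            adj[u].append(v)
            adj[v].append(u)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def mc_disconnection_prob(n_nodes, edges, p_fail, s, t, n_samples=20000, seed=0):
    """Estimate P(s and t disconnected) by sampling independent edge failures."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n_samples):
        up = [rng.random() >= p_fail for _ in edges]
        if not connected(n_nodes, edges, up, s, t):
            fails += 1
    return fails / n_samples

# 4-node ring: two disjoint 2-edge paths between s=0 and t=2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p = mc_disconnection_prob(4, edges, p_fail=0.1, s=0, t=2)
```

For this ring the exact disconnection probability is (1 - 0.9²)² = 0.0361, so the estimate should land nearby; replacing `connected` with a trained classifier is precisely where the paper's speedup comes from.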
Comparison of three types of models for the prediction of final academic achievement
Directory of Open Access Journals (Sweden)
Silvana Gasar
2002-12-01
Full Text Available For efficient prevention of inappropriate secondary school choices, and thereby of academic failure, school counselors need a tool for predicting an individual pupil's final academic achievement. Using data mining techniques on a database of pupils, together with expert modeling, we developed several models for predicting final academic achievement in an individual high school educational program. For data mining, we used statistical analyses, clustering and two machine learning methods: classification decision trees and hierarchical decision models. Using the expert system shell DEX, an expert system based on a hierarchical multi-attribute decision model was developed manually. All the models were validated and evaluated from the viewpoint of their applicability. The predictive accuracy of the DEX models and the decision trees was equal and very satisfying, as it reached the predictive accuracy of an experienced counselor. Considering the efficiency of and difficulties in developing the models, and the relatively rapid changes in our education system, we propose that decision trees be used in the further development of predictive models.
Improving wave forecasting by integrating ensemble modelling and machine learning
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
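One simple learning-aggregation rule of the kind described, with weights inversely proportional to each member's past mean-squared error, can be sketched as follows (the paper's actual aggregation scheme may differ; the forecasts and error values are illustrative):

```python
def aggregate(forecasts, past_errors):
    """Combine ensemble-member forecasts into one best-estimate prediction,
    weighting each member by the inverse of its past mean-squared error."""
    inv = [1.0 / (e + 1e-9) for e in past_errors]  # small epsilon guards against e == 0
    s = sum(inv)
    weights = [w / s for w in inv]                 # normalize so weights sum to 1
    return sum(w * f for w, f in zip(weights, forecasts)), weights

# three members forecasting significant wave height (m); member 0 was historically best
forecast, weights = aggregate([1.9, 2.4, 3.0], past_errors=[0.04, 0.25, 0.60])
```

The aggregated value is pulled toward the historically most reliable member, which is the mechanism by which the weighted ensemble outperforms any individual member.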
Design ensemble machine learning model for breast cancer diagnosis.
Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei
2012-10-01
In this paper, we classify breast cancer from medical diagnostic data. Information gain was adopted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN) and quadratic classifier (QC) schemes were developed for classification, each as a single model as well as in their associated ensemble forms. In addition, a combined ensemble model incorporating all three schemes was constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model attains the highest classification accuracy for breast cancer among all the models.
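A minimal sketch of the ensemble combination step is plain majority voting over the single-model outputs (one common way to combine classifiers; the labels and the three-model setup below are illustrative, not the paper's exact scheme):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from several single models (e.g. NF, KNN, QC)
    by unweighted majority vote; ties resolve to the first-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# three single-model diagnoses for one case
label = majority_vote(["malignant", "benign", "malignant"])
```

Majority voting helps because uncorrelated errors of the individual models tend to be outvoted, which is the usual explanation for ensembles outperforming their members.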
LINEAR KERNEL SUPPORT VECTOR MACHINES FOR MODELING PORE-WATER PRESSURE RESPONSES
Directory of Open Access Journals (Sweden)
KHAMARUZAMAN W. YUSOF
2017-08-01
Full Text Available Pore-water pressure responses are vital in many aspects of slope management, design and monitoring. Their measurement, however, is difficult, expensive and time consuming, and studies on their prediction are lacking. A support vector machine with a linear kernel was used here to predict the response of pore-water pressure to rainfall. Pore-water pressure response data were collected from a slope instrumentation program. Support vector machine meta-parameter calibration and model development were carried out using grid search and k-fold cross validation. The mean square error of the model on the scaled test data is 0.0015 and the coefficient of determination is 0.9321. Although the pore-water pressure response to rainfall is a complex nonlinear process, a linear-kernel support vector machine can be employed where some accuracy can be traded for computational simplicity and speed.
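The grid-search/k-fold calibration procedure can be sketched generically. The fold construction and toy error surface below are illustrative assumptions, with the actual SVM fit left as a caller-supplied function:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds (no shuffling, for clarity)."""
    fold, extra, out, start = n // k, n % k, [], 0
    for i in range(k):
        size = fold + (1 if i < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        out.append((train, test))
        start += size
    return out

def grid_search(params, cv_error, n=20, k=5):
    """Pick the meta-parameter with the lowest mean k-fold error.
    cv_error(p, train, test) is supplied by the caller (e.g. fit a linear-kernel
    SVM with cost parameter p on train, return its error on test)."""
    folds = kfold_indices(n, k)
    scores = {p: sum(cv_error(p, tr, te) for tr, te in folds) / len(folds)
              for p in params}
    return min(scores, key=scores.get)

# toy error surface where the hypothetical cost parameter C = 1.0 is best
best = grid_search([0.1, 1.0, 10.0], lambda p, tr, te: (p - 1.0) ** 2)
```

In practice the grid would span the SVM cost (and epsilon) values and `cv_error` would wrap the actual training routine; the skeleton above only shows the selection loop.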
Static Object Detection Based on a Dual Background Model and a Finite-State Machine
Directory of Open Access Journals (Sweden)
Heras Evangelio Rubén
2011-01-01
Full Text Available Detecting static objects in video sequences has high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the means for interpreting the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and it can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated on several public datasets.
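The pixel-level logic can be sketched as a look-up table over the foreground flags of the two background subtracters. The table below is an illustrative reconstruction of the idea, not necessarily the paper's exact state machine:

```python
# Pixel interpretation driven by two background models with different learning rates.
# The fast-adapting model absorbs stopped objects quickly; the slow-adapting one
# keeps flagging them, so (fast=background, slow=foreground) marks a static candidate.
BACKGROUND, MOVING, STATIC_CANDIDATE, UNCOVERED = range(4)

def classify(fast_fg, slow_fg):
    """Look-up table over (fast model foreground?, slow model foreground?)."""
    table = {
        (False, False): BACKGROUND,        # both agree: scene background
        (True,  True):  MOVING,            # both flag it: moving object
        (False, True):  STATIC_CANDIDATE,  # absorbed by fast model only: possibly abandoned
        (True,  False): UNCOVERED,         # a removed object revealed the background
    }
    return table[(fast_fg, slow_fg)]

state = classify(False, True)
```

Because the whole decision is a four-entry table, its per-pixel cost is negligible, which matches the abstract's claim about the look-up-table implementation.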
A Novel Machine Learning Strategy Based on Two-Dimensional Numerical Models in Financial Engineering
Directory of Open Access Journals (Sweden)
Qingzhen Xu
2013-01-01
Full Text Available Machine learning is the most commonly used technique for addressing larger and more complex tasks by analyzing the most relevant information already present in databases. In order to better predict the future trend of the index, this paper proposes a two-dimensional numerical model for machine learning to simulate a major U.S. stock market index, and uses a nonlinear implicit finite-difference method to find numerical solutions of the two-dimensional simulation model. The proposed machine learning method uses partial differential equations to predict the stock market and can be used extensively to accelerate large-scale data processing on historical databases. The experimental results show that the proposed algorithm reduces the prediction error and improves forecasting precision.
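As a building block of such implicit finite-difference schemes, a one-dimensional backward-Euler diffusion step with the Thomas tridiagonal solve can be sketched as follows (the paper's model is two-dimensional and nonlinear; this is only the simplest illustrative case):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u, r):
    """One backward-Euler step of u_t = u_xx with fixed (Dirichlet) ends,
    r = dt/dx^2; the implicit scheme is unconditionally stable, so large
    time steps are allowed — the reason such methods suit long simulations."""
    n = len(u)
    a = [-r] * n; b = [1 + 2 * r] * n; c = [-r] * n
    a[0] = c[0] = 0.0; b[0] = 1.0
    a[-1] = c[-1] = 0.0; b[-1] = 1.0
    return thomas(a, b, c, list(u))

u_next = implicit_step([0.0, 0.0, 1.0, 0.0, 0.0], 1.0)
```

Each implicit step reduces to one tridiagonal solve per grid line; a 2D scheme applies the same solve alternately along each coordinate direction.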
Effect of power quality on windings temperature of marine induction motors. Part I: Machine model
Energy Technology Data Exchange (ETDEWEB)
Gnacinski, P. [Gdynia Maritime Univ., Dept. of Ship Electrical Power Engineering, Morska Str. 83, 81-225 Gdynia (Poland)
2009-10-15
Marine induction machines are exposed to various power quality disturbances appearing simultaneously in ship power systems: frequency and voltage rms value deviation, voltage unbalance and voltage waveform distortions. As a result, marine induction motors can be seriously overheated due to lowered supply voltage quality. Improvement of the protection of marine induction machines requires an appropriate method of power quality assessment and modification of the power quality regulations of ship classification societies. This paper presents an analytical model of an induction cage machine supplied with voltage of lowered quality, used in part II of the work (effect of power quality on windings temperature of marine induction motors. Part II. Results of investigations and recommendations for related regulations) for power quality assessment in ship power systems, and for justification of the new power quality regulations proposal. The presented model is suitable for implementation in an on-line measurement system. (author)
Photon beam modelling with Pinnacle3 Treatment Planning System for a Rokus M Co-60 Machine
International Nuclear Information System (INIS)
Dulcescu, Mihaela; Murgulet Cristian
2008-01-01
The basic relationships of the convolution/superposition dose calculation technique are reviewed, and a modelling technique that can be used for obtaining a satisfactory beam model for a commercially available convolution/superposition-based treatment planning system is described. A fluence energy spectrum for a Co-60 treatment machine obtained from a Monte Carlo simulation was used for modelling the fluence spectrum for a Rokus M machine. In order to achieve this model we measured the depth dose distribution and the dose profiles with a Wellhofer dosimetry system. The primary fluence was iteratively modelled by comparing the computed depth dose curves and beam profiles with the depth dose curves and crossbeam profiles measured in a water phantom. The objective of beam modelling is to build a model of the primary fluence that the patient is exposed to, which can then be used for the calculation of the dose deposited in the patient. (authors)
Evaluation of discrete modeling efficiency of asynchronous electric machines
Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna
2011-01-01
This paper considers the problem of constructing effective mathematical macromodels in state-variable form for asynchronous motor transient analysis. These macromodels are compared with traditional mathematical models of asynchronous motors, including the models built into the MATLAB/Simulink software, and their efficiency is analyzed.
A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning
Directory of Open Access Journals (Sweden)
Shang Bo-Wen
2016-01-01
Continuous prediction is widely used in broad communities, from social applications to business, and machine learning is an important method for this problem. When we use machine learning for prediction, we fit the model on the data in the training set and estimate the distribution of the data in the test set. But when we use machine learning for continuous prediction, new data arrive as time goes by and are used to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed, since it reduces the accuracy of the prediction. The main contribution of this article is using the new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how the data flow among the test set, training set, validation set and garbage set, thereby improving the accuracy of prediction. As the data set changes, the best machine learning model changes as well. We design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and obtain a better result than the traditional method of adding all data to the data set, and our hybrid voting algorithm yields a better prediction result than the average accuracy of the other prediction models.
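The article's seven concrete models and its exact weighting scheme are not spelled out in the abstract; the sketch below shows, with made-up toy models, the general idea of a validation-weighted voting ensemble of the kind described. All names and the toy rules are illustrative.

```python
def validation_weights(models, X_val, y_val):
    """Weight each model by its accuracy on the validation set,
    so that better-performing models get more say in the final vote."""
    weights = []
    for predict in models:
        correct = sum(predict(x) == y for x, y in zip(X_val, y_val))
        weights.append(correct / len(y_val))
    return weights

def weighted_vote(models, weights, x):
    """Combine predictions by summing the weight behind each label."""
    scores = {}
    for predict, w in zip(models, weights):
        label = predict(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# toy models: three weak rules for classifying a number as 'big' or 'small'
models = [
    lambda x: 'big' if x > 5 else 'small',      # accurate rule
    lambda x: 'big' if x > 9 else 'small',      # biased rule
    lambda x: 'big',                            # constant rule
]
X_val = list(range(11))
y_val = ['big' if x > 5 else 'small' for x in X_val]
w = validation_weights(models, X_val, y_val)
prediction = weighted_vote(models, w, 7)        # the accurate rule dominates
```

Because the weights come from the validation set rather than the training set, a model that has gone stale under distribution change automatically loses influence in the vote.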
Bilalic, Rusmir
A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.
State Machine Modeling of the Space Launch System Solid Rocket Boosters
Harris, Joshua A.; Patterson-Hine, Ann
2013-01-01
The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational e orts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and o -nominal system states and command sequences.
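The finite-state-machine idea described above can be sketched minimally as follows; the states, events and transition table below are invented for illustration and are not the actual SLS Solid Rocket Booster logic.

```python
class StateMachine:
    """Minimal event-driven finite state machine: a transition table maps
    (current state, event) pairs to next states; anything else is illegal."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions      # {(state, event): next_state}
        self.history = [initial]

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            # an undesirable/illegal sequence is caught rather than executed
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        self.history.append(self.state)
        return self.state

# hypothetical booster ignition states, for illustration only
booster = StateMachine('SAFED', {
    ('SAFED', 'arm'): 'ARMED',
    ('ARMED', 'ignite'): 'IGNITED',
    ('ARMED', 'safe'): 'SAFED',             # abort path back to safe
})
booster.fire('arm')
booster.fire('ignite')
```

Walking command sequences through such a table is exactly the kind of "what-if" check the abstract describes: a nominal sequence traces a valid path, while an off-nominal one raises before an unsafe transition is taken.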
Empirical model for estimating the surface roughness of machined ...
African Journals Online (AJOL)
Michael Horsfall
one of the most critical quality measure in mechanical products. In the ... Keywords: cutting speed, centre lathe, empirical model, surface roughness, Mean absolute percentage deviation ... The factors considered were work piece properties.
Credit Risk Analysis using Machine and Deep Learning models
Addo , Peter ,; Guegan , Dominique; Hassani , Bertrand
2018-01-01
Working paper URL: https://centredeconomiesorbonne.univ-paris1.fr/documents-de-travail-du-ces/; Documents de travail du Centre d'Economie de la Sorbonne 2018.03 - ISSN: 1955-611X; Due to the technology associated with Big Data, data availability and computing power, most banks and lending financial institutions are renewing their business models. Credit risk prediction, monitoring, model reliability and effective loan processing are key to decision making and transparency. In...
OPERATING OF MOBILE MACHINE UNITS SYSTEM USING THE MODEL OF MULTICOMPONENT COMPLEX MOVEMENT
Directory of Open Access Journals (Sweden)
A. Lebedev
2015-07-01
To solve problems in operating systems of mobile machine units, the use of complex multicomponent (composite) movement physical models is proposed. The proposed method can be implemented by creating automatic systems that control fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.
Operating of mobile machine units system using the model of multicomponent complex movement
A. Lebedev; R. Kaidalov; N. Artiomov; M. Shulyak; M. Podrigalo; D. Abramov; D. Klets
2015-01-01
To solve problems in operating systems of mobile machine units, the use of complex multicomponent (composite) movement physical models is proposed. The proposed method can be implemented by creating automatic systems that control fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.
Model of large scale man-machine systems with an application to vessel traffic control
Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.
1989-01-01
Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the
A comparative study of machine learning classifiers for modeling travel mode choice
Hagenauer, J; Helbich, M
2017-01-01
The analysis of travel mode choice is an important task in transportation planning and policy making in order to understand and predict travel demands. While advances in machine learning have led to numerous powerful classifiers, their usefulness for modeling travel mode choice remains largely
Modelling and optimization of a permanent-magnet machine in a flywheel
Holm, S.R.
2003-01-01
This thesis describes the derivation of an analytical model for the design and optimization of a permanent-magnet machine for use in an energy storage flywheel. A prototype of this flywheel is to be used as the peak-power unit in a hybrid electric city bus. The thesis starts by showing the
Static stiffness modeling of a novel hybrid redundant robot machine
International Nuclear Information System (INIS)
Li Ming; Wu Huapeng; Handroos, Heikki
2011-01-01
This paper presents a modeling method to study the stiffness of a hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking account of the deformations of both hydraulic limbs and joints; the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical and the deformation results of the robot workspace under certain external load are presented.
International Nuclear Information System (INIS)
Miao, Qiang; Huang, Hong Zhong; Fan, Xianfeng
2007-01-01
Condition classification is an important step in machinery fault detection, which is a problem of pattern recognition. Currently, there are many techniques in this area, and the purpose of this paper is to investigate two popular recognition techniques, namely the hidden Markov model and the support vector machine. At the beginning, we briefly introduce the procedure of feature extraction and the theoretical background of this paper. A comparison experiment was conducted for gearbox fault detection, and the analysis results from this work show that the support vector machine has better classification performance in this area.
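The paper's gearbox features and kernel choices are not given in the abstract; as a hypothetical sketch of the support-vector-machine side of the comparison, the code below trains a minimal linear SVM with a Pegasos-style sub-gradient loop on synthetic two-class "fault vs. normal" features. The dataset, hyperparameters and function names are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal Pegasos-style sub-gradient training of a linear SVM.
    Labels y must be in {-1, +1}; returns weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                # decaying step size
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                        # hinge-loss sub-gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                 # only regularization pull
                w = (1 - eta * lam) * w
    return w, b

# toy 'normal' vs 'fault' feature clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
```

Real vibration features are of course higher-dimensional and not linearly separable, which is where the kernelized SVMs compared in the paper come in.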
Big data - modelling of midges in Europa using machine learning techniques and satellite imagery
DEFF Research Database (Denmark)
Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik
2017-01-01
coordinates of each trap, start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month...... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability...
DEFF Research Database (Denmark)
Rasmussen, Jens
1986-01-01
and subjective preferences. For design of man-machine systems in process control, a framework has been developed in terms of separate representation of the problem domain, the decision task, and the information processing strategies required. The author analyzes the application of this framework to a number......For systematic and computer-aided design of man-machine systems, a consistent framework is needed, i. e. , a set of models which allows the selection of system characteristics which serve the individual user not only to satisfy his goal, but also to select mental processes that match his resources...
Designing Closed-Loop Brain-Machine Interfaces Using Model Predictive Control
Directory of Open Access Journals (Sweden)
Gautam Kumar
2016-06-01
Brain-machine interfaces (BMIs) are broadly defined as systems that establish direct communications between living brain tissue and external devices, such as artificial arms. By sensing and interpreting neuronal activities to actuate an external device, BMI-based neuroprostheses hold great promise in rehabilitating motor disabled subjects, such as amputees. In this paper, we develop a control-theoretic analysis of a BMI-based neuroprosthetic system for a voluntary single joint reaching task in the absence of visual feedback. Using synthetic data obtained through the simulation of an experimentally validated psycho-physiological cortical circuit model, both the Wiener filter and the Kalman filter based linear decoders are developed. We analyze the performance of both decoders in the presence and in the absence of natural proprioceptive feedback information. By performing simulations, we show that the performance of both decoders degrades significantly in the absence of the natural proprioception. To recover the performance of these decoders, we propose two problems, namely tracking the desired position trajectory and tracking the firing rate trajectory of neurons which encode the proprioception, in the model predictive control framework to design optimal artificial sensory feedback. Our results indicate that while the position trajectory based design can only recover the position and velocity trajectories, the firing rate trajectory based design can recover the performance of the motor task along with the recovery of firing rates in other cortical regions. Finally, we extend our design by incorporating a network of spiking neurons and designing artificial sensory feedback in the form of a charge balanced biphasic stimulating current.
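A Kalman-filter decoder of the kind evaluated above can be sketched as follows; the state-space matrices, the one-dimensional toy task, and all names below are illustrative assumptions, not the paper's cortical model.

```python
import numpy as np

def kalman_decode(z, A, W, H, Q, x0, P0):
    """Standard Kalman filter: estimate the latent state x_t (e.g. limb
    position/velocity) from observations z_t (e.g. neural firing rates).
    Model: x_t = A x_{t-1} + w, w ~ N(0, W);  z_t = H x_t + q, q ~ N(0, Q)."""
    x, P = x0, P0
    estimates = []
    for zt in z:
        # predict step
        x = A @ x
        P = A @ P @ A.T + W
        # update step with Kalman gain
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)
        x = x + K @ (zt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# toy 1-D task: noisy observations of a constant-velocity movement
A = np.array([[1.0, 1.0], [0.0, 1.0]])     # state = (position, velocity)
W = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                 # only position is observed
Q = np.array([[0.5]])
rng = np.random.default_rng(0)
true_pos = np.arange(30, dtype=float)      # unit velocity
z = true_pos[:, None] + rng.normal(0, 0.7, (30, 1))
est = kalman_decode(z, A, W, H, Q, x0=np.zeros(2), P0=np.eye(2))
```

Even though only position-like observations are fed in, the filter also recovers the unobserved velocity component, which is the property that makes it attractive as a BMI decoder.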
Prediction of ttt curves of cold working tool steels using support vector machine model
Pillai, Nandakumar; Karthikeyan, R., Dr.
2018-04-01
Cold working tool steels are high carbon steels with metallic alloying additions that impart higher hardenability, abrasion resistance and less distortion in quenching. The microstructural changes occurring in tool steel during heat treatment are of great importance, as the final properties of the steel depend upon the changes that occur during the process. To obtain the desired performance, the alloy constituents and their proportions play a vital role, as the steel transformation itself is complex in nature and depends very much upon time and temperature. Proper treatment can deliver satisfactory results; at the same time, process deviation can completely spoil the results. Knowing the time-temperature transformation (TTT) behaviour of the phases is therefore critical, and it varies for each steel type depending upon its constituents and their proportion range. To obtain adequate post-heat-treatment properties, the percentage of retained austenite should be low and the metallic carbides obtained should be fine in nature. A support vector machine is a computational model which can learn from observed data and use it to predict or solve problems using a mathematical model. A back propagation feedback network is created and trained for further solutions. Points on the known TTT transformation curves are used to plot the curves for different materials. These data are used for training to predict TTT curves for other steels having similar alloying constituents but different proportion ranges. The proposed methodology can be used for prediction of TTT curves for cold working steels and for prediction of phases for different heat treatment methods.
Klocke, F.; Herrig, T.; Zeis, M.; Klink, A.
2017-10-01
Combining the working principle of electrochemical machining (ECM) with a universal rotating tool, like a wire, could address many challenges of the classical ECM sinking process. Such a wire-ECM process could machine 2.5-dimensional geometries, like fir tree slots in turbine discs, flexibly and efficiently. Nowadays, the established manufacturing technologies for slotting turbine discs are broaching and wire electrical discharge machining (wire EDM). Nevertheless, high requirements on the surface integrity of turbine parts demand cost-intensive process development and, in the case of wire EDM, trim cuts to reduce the heat-affected rim zone. Due to its process-specific advantages, ECM is an attractive alternative manufacturing technology and has become more and more relevant for sinking applications within the last few years. But ECM also faces high costs for process development and complex electrolyte flow devices. In the past, few studies dealt with the development of a wire-ECM process to meet these challenges. However, previous concepts of wire ECM were only suitable for micro machining applications; due to insufficient flushing concepts, the application of the process to machining macro geometries failed. Therefore, this paper presents the modeling and simulation of a new flushing approach for process assessment. The suitability of a rotating structured wire electrode in combination with axial flushing for electrodes with high aspect ratios is investigated and discussed.
Mathematical Model of Lifetime Duration at Insulation of Electrical Machines
Directory of Open Access Journals (Sweden)
Mihaela Răduca
2009-10-01
This paper presents a mathematical model of the lifetime of hydro generator stator winding insulation when damage regimes can appear in the hydro generator. The lifetime is estimated by taking into account scheduled and unscheduled revisions, through the introduction and correlation of newly defined notions.
Modelling rollover behaviour of exacavator-based forest machines
M.W. Veal; S.E. Taylor; Robert B. Rummer
2003-01-01
This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...
Syntactic discriminative language model rerankers for statistical machine translation
Carter, S.; Monz, C.
2011-01-01
This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical
Modelling of Moving Coil Actuators in Fast Switching Valves Suitable for Digital Hydraulic Machines
DEFF Research Database (Denmark)
Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller
2015-01-01
an estimation of the eddy currents generated in the actuator yoke upon current rise, as they may have significant influence on the coil current response. The analytical model facilitates fast simulation of the transient actuator response opposed to the transient electro-magnetic finite element model which......The efficiency of digital hydraulic machines is strongly dependent on the valve switching time. Recently, fast switching have been achieved by using a direct electromagnetic moving coil actuator as the force producing element in fast switching hydraulic valves suitable for digital hydraulic...... machines. Mathematical models of the valve switching, targeted for design optimisation of the moving coil actuator, are developed. A detailed analytical model is derived and presented and its accuracy is evaluated against transient electromagnetic finite element simulations. The model includes...
Identification and non-integer order modelling of synchronous machines operating as generator
Directory of Open Access Journals (Sweden)
Szymon Racewicz
2012-09-01
This paper presents an original mathematical model of a synchronous generator using derivatives of fractional order. In contrast to classical models composed of a large number of R-L ladders, it comprises half-order impedances, which enable the accurate description of the electromagnetic induction phenomena in a wide frequency range, while minimizing the order and number of model parameters. The proposed model takes into account the skin effect in damper cage bars, the effects of eddy currents in rotor solid parts, and the saturation of the machine magnetic circuit. The half-order transfer functions used for modelling these phenomena were verified by simulation of ferromagnetic sheet impedance using the finite element method. The analysed machine's parameters were identified on the basis of SSFR (StandStill Frequency Response) characteristics measured on a gradually magnetised synchronous machine.
Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system
International Nuclear Information System (INIS)
Leung, P.W.; Oppenheimer, P.E.
1992-01-01
An implementation of the Lanczos algorithm for the exact diagonalization of the two-dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. This implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. Each iteration takes about 2.4 min on a 64K CM-2. This implementation is scalable: running it on a bigger machine with more processors speeds up the process. The performance analysis of this implementation is presented and its advantages and disadvantages are discussed.
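The kernel of the Lanczos method the paper implements can be sketched serially as follows. This toy version stores the matrix densely and checks against full diagonalization; the real Hubbard implementation replaces `H @ v` with an on-the-fly matrix-vector product that never stores the matrix, as the abstract notes. All sizes here are illustrative.

```python
import numpy as np

def lanczos_ground_state(H, m=50, seed=0):
    """Plain Lanczos iteration: build an m-step tridiagonal approximation
    of a symmetric matrix H and return its lowest Ritz value
    (the ground-state energy estimate)."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)                    # random normalized start vector
    alphas, betas = [], []
    v_prev, beta = np.zeros(n), 0.0
    for _ in range(min(m, n)):
        w = H @ v - beta * v_prev             # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:                      # invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    k = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# toy check against dense diagonalization of a random symmetric 'Hamiltonian'
rng = np.random.default_rng(1)
M = rng.normal(size=(200, 200))
H = (M + M.T) / 2
e0 = lanczos_ground_state(H, m=80)
exact = np.linalg.eigvalsh(H)[0]
```

Only a few length-n vectors are ever held at once, which is why the method scales to Hilbert spaces of tens of millions of states when the vectors are distributed across processors.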
Modeling of thermal spalling during electrical discharge machining of titanium diboride
International Nuclear Information System (INIS)
Gadalla, A.M.; Bozkurt, B.; Faulk, N.M.
1991-01-01
Erosion in electrical discharge machining has been described as occurring by melting and flushing the liquid formed. Recently, however, thermal spalling was reported as the mechanism for machining refractory materials with low thermal conductivity and high thermal expansion. The process is described in this paper by a model based on a ceramic surface exposed to a constant circular heating source which supplies a constant flux over the pulse duration. The calculations were based on TiB2 mechanical properties along the a and c directions. Theoretical predictions were verified by machining hexagonal TiB2. Large flakes of TiB2 with sizes close to the grain size and maximum thickness close to the predicted values were collected, together with spherical particles of Cu and Zn eroded from the cutting wire. The cut surfaces consist of cleavage planes sometimes contaminated with Cu, Zn, and impurities from the dielectric fluid.
The development of fully dynamic rotating machine models for nuclear training simulators
International Nuclear Information System (INIS)
Birsa, J.J.
1990-01-01
Prior to beginning the development of an enhanced set of electrical plant models for several nuclear training simulators, an extensive literature search was conducted to evaluate and select rotating machine models for use on these simulators. These models include the main generator, diesel generators, in-plant electric power distribution and off-site power. From the results of this search, various models were investigated and several were selected for further evaluation. Several computer studies were performed on the selected models in order to determine their suitability for use in a training simulator environment. One surprising result of this study was that a number of established, classical models could not be made to reproduce actual plant steady-state data over the range necessary for a training simulator. This evaluation process and its results are presented in this paper. Various historical, as well as contemporary, electrical models of rotating machines are discussed. Specific criteria for selection of rotating machine models for training simulator use are presented.
Kral, C.; Haumer, A.; Bogomolov, M.D.; Lomonova, E.
2012-01-01
This paper proposes a multi domain physical model of permanent magnet synchronous machines, considering electrical, magnetic, thermal and mechanical effects. For each component of the model, the main wave as well as lower and higher harmonic wave components of the magnetic flux and the magnetic
MATHEMATICAL MODEL FOR THE STUDY AND DESIGN OF A ROTARY-VANE GAS REFRIGERATION MACHINE
Directory of Open Access Journals (Sweden)
V. V. Trandafilov
2016-08-01
This paper presents a mathematical model for calculating the main parameters of the operating cycle of a rotary-vane gas refrigerating machine that affect installation, machine control, and the working processes occurring in it under the specified criteria. A procedure and a graphical method for the rotary-vane gas refrigerating machine (RVGRM) are proposed. A parametric study of the influence of the main geometric and temperature variables on the thermal behavior of the system is presented. The model considers the polytropic index for the compression and expansion in the chamber. Graphs of the pressure and temperature in the chamber as functions of the angle of rotation of the output shaft are given. The possibility of including a regenerative heat exchanger in the cycle is assessed, and the change in the coefficient of performance of the machine after adding the regenerative heat exchanger to the cycle is analyzed. It is shown that installing a regenerator in the RVGRM cycle increases the COP by more than 30%. The simulation results show that the proposed model can be used to design and optimize a Stirling gas refrigerator.
Washington State Nursing Home Administrator Model Curriculum. Final Report.
Cowan, Florence Kelly
The course outlines presented in this final report comprise a proposed Fort Steilacoom Community College curriculum to be used as a statewide model two-year associate degree curriculum for nursing home administrators. The eight courses described are introduction to nursing home administration, financial management of nursing homes, nursing home…
Final Report Fermionic Symmetries and Self consistent Shell Model
International Nuclear Information System (INIS)
Zamick, Larry
2008-01-01
In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.
Model validation studies of solar systems, Phase III. Final report
Energy Technology Data Exchange (ETDEWEB)
Lantz, L.J.; Winn, C.B.
1978-12-01
Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.
Calculation of extreme wind atlases using mesoscale modeling. Final report
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Badger, Jake
This is the final report of the project PSO-10240 "Calculation of extreme wind atlases using mesoscale modeling". The overall objective is to improve the estimation of extreme winds by developing and applying new methodologies to confront the many weaknesses in the current methodologies as explai...
Photovoltaic subsystem marketing and distribution model: programming manual. Final report
Energy Technology Data Exchange (ETDEWEB)
1982-07-01
Complete documentation of the marketing and distribution (M and D) computer model is provided. The purpose is to estimate the costs of selling and transporting photovoltaic solar energy products from the manufacturer to the final customer. The model adjusts for the inflation and regional differences in marketing and distribution costs. The model consists of three major components: the marketing submodel, the distribution submodel, and the financial submodel. The computer program is explained including the input requirements, output reports, subprograms and operating environment. The program specifications discuss maintaining the validity of the data and potential improvements. An example for a photovoltaic concentrator collector demonstrates the application of the model.
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
Directory of Open Access Journals (Sweden)
Wm M. Wood
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
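The paper's actual spectrum formula is not reproduced in the abstract; purely as an illustration of how a shot-integrated spectrum can be built from measured waveforms, the sketch below time-integrates a thick-target Kramers-like yield under the endpoint set by the instantaneous voltage. The pulse shape, amplitudes and the simple yield law are assumptions, not the Cygnus model.

```python
import numpy as np

def kramers_spectrum(V, I, dt, n_bins=100):
    """Crude shot-integrated bremsstrahlung spectrum from sampled voltage
    V(t) [MV] and current I(t) [kA], assuming a Kramers-like yield
    dN/dE ∝ I * (V - E) for photon energies E below the instantaneous
    endpoint V(t). Arbitrary units; all constants folded into one factor."""
    e_max = V.max()
    edges = np.linspace(0.0, e_max, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    spectrum = np.zeros(n_bins)
    for v, i in zip(V, I):
        mask = centers < v                 # no photons above the endpoint
        spectrum[mask] += i * (v - centers[mask]) * dt
    return centers, spectrum

# synthetic 50 ns half-sine pulse (amplitudes are illustrative)
t = np.linspace(0.0, 50e-9, 200)
V = 2.25 * np.sin(np.pi * t / 50e-9)       # MV
I = 60.0 * np.sin(np.pi * t / 50e-9)       # kA
E, dN = kramers_spectrum(V, I, dt=t[1] - t[0])
```

Feeding in the digitized V(t) and I(t) from each shot and refitting one overall scale factor against transmission data is the kind of laptop-speed, per-shot workflow the abstract motivates.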
Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F
2014-06-01
To ensure appropriate soundscape management in urban environments, the urban-planning authorities need a range of tools that enable such a task to be performed. An essential step during the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. In this sense, it has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step to evaluate it, providing a basis for designing or adapting it to match people's expectations as well. In this sense, this work proposes a model for automatic classification of urban soundscapes. This model is intended for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria. Thus, this classification model is proposed to be used as a tool for a comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing model classification. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
Wood, Wm M.
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv
2007-04-01
In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and the temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness ranges between 45 and 60 HRC. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
International Nuclear Information System (INIS)
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, Jose C.; Shivpuri, Rajiv
2007-01-01
In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and the temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness ranges between 45 and 60 HRC. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
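The abstracts above do not give the calibrated model form. As a hedged sketch, a Johnson-Cook-style flow stress with a hypothetical hardness-dependent yield term illustrates the idea of making material behavior a function of hardness; every coefficient below is an assumption for illustration, not a fitted value from the paper.

```python
import math

def flow_stress(strain, strain_rate, T, hrc,
                strain_rate_ref=1.0, T_room=20.0, T_melt=1480.0):
    """Johnson-Cook-style flow stress (MPa) with a hardness-dependent yield term.

    The linear hardness scaling over 45-60 HRC and all coefficients are
    hypothetical stand-ins, not the calibrated AISI H13 model.
    """
    A = 500.0 + 20.0 * (hrc - 45.0)    # yield term grows with hardness (assumed)
    B, n = 250.0, 0.3                  # strain hardening (assumed)
    C = 0.02                           # strain-rate sensitivity (assumed)
    m = 1.0                            # thermal softening exponent (assumed)
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(max(strain_rate, 1e-9) / strain_rate_ref))
            * (1.0 - max(T_star, 0.0) ** m))

# Harder workpiece -> higher flow stress at identical cutting conditions:
print(flow_stress(0.5, 100.0, 500.0, hrc=45))
print(flow_stress(0.5, 100.0, 500.0, hrc=60))
```

In an FE machining simulation, a function like this would be evaluated at every integration point, with hardness as an extra state parameter alongside strain, strain rate and temperature.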
Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings
Directory of Open Access Journals (Sweden)
Young Min Kim
2016-06-01
The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first-principles-based simulation model for existing buildings, mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert's effort, a data-driven (so-called "inverse") approach has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller's energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. The paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the results of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP, and that GP is the most sensitive to the training period among the three models.
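A hedged sketch of such a cross-comparison with scikit-learn. The input drivers (outdoor temperature, humidity, part-load ratio) and the synthetic consumption function are assumptions for illustration, not the BEMS data from the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Hypothetical drivers of chiller energy use: outdoor temperature (degC),
# relative humidity (%), part-load ratio (0-1) -- assumed, for illustration.
X = rng.uniform([15.0, 30.0, 0.2], [35.0, 90.0, 1.0], size=(400, 3))
# Synthetic consumption: a smooth function of the drivers plus noise (assumed).
y = 50 + 3.0 * X[:, 0] + 0.2 * X[:, 1] + 40.0 * X[:, 2] + rng.normal(0, 2, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0)),
    "GP": make_pipeline(StandardScaler(), GaussianProcessRegressor(random_state=0)),
}
results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    results[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: test RMSE = {results[name]:.2f}")
```

Timing each `fit` call in such a loop is also how the computation-time disadvantage of one model family relative to the others would show up in practice.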
Component simulation in problems of calculated model formation of automatic machine mechanisms
Directory of Open Access Journals (Sweden)
Telegin Igor
2017-01-01
The paper deals with the application of the component simulation method to the automated formation of mechanical system models, with the further possibility of their CAD realization. The purpose of the investigations is to automate the formation of CAD models of high-speed mechanisms in automatic machines and to analyze the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers the formalization of the stages in forming a computer model of the cutting mechanism of the AV1818 cold-stamping automatic machine, and methods for computing its parameters on the basis of its solid-state model.
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study in which the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
Component simulation in problems of calculated model formation of automatic machine mechanisms
Telegin Igor; Kozlov Alexander; Zhirkov Alexander
2017-01-01
The paper deals with the application of the component simulation method to the automated formation of mechanical system models, with the further possibility of their CAD-realization. The purpose of the investigations mentioned consists in the automation of the CAD-model formation of high-speed mechanisms in automatic machines and in the analysis of dynamic processes occurring in their units taking into account their elasto-inertial properties, power dissipation, gap...
Jian-ping Wen; Chuan-wei Zhang
2015-01-01
In order to improve the energy utilization rate of a battery-powered electric vehicle (EV) using a brushless DC machine (BLDCM), the model of the braking current generated by regenerative braking and the control method are discussed. On the basis of the equivalent circuit of the BLDCM during the regenerative braking period, the mathematical model of the braking current is established. By using an extended state observer (ESO) to observe the actual braking current and the unknown disturbances of the regenerative braking system, ...
Direct Drive Synchronous Machine Models for Stability Assessment of Wind Farms
Energy Technology Data Exchange (ETDEWEB)
Poeller, Markus; Achilles, Sebastian [DIgSILENT GmbH, Gomaringen (Germany)
2003-11-01
The increasing size of wind farms requires power system stability analysis including dynamic wind generator models. For turbines above 1 MW, doubly-fed induction machines are the most widely used concept. However, especially in Germany, direct-drive wind generators based on converter-driven synchronous generator concepts have reached considerable market penetration. This paper presents converter-driven synchronous generator models of various orders that can be used for simulating transients and dynamics over a very wide time range.
Directory of Open Access Journals (Sweden)
R. I. Mustafayev
2012-01-01
The paper presents a methodology for the mathematical modeling of a power system (or part of it) jointly operated with wind power plants (stations) that contain doubly-fed asynchronous machines used as generators. The essence and advantage of the methodology is that it efficiently mates the equations of the doubly-fed asynchronous machines, written in axes that rotate with the machine rotor speed, with the equations of the external electric power system, written in synchronously rotating axes.
Characteristics determination of Tanka X-ray Diagnostic Machine Model RTO-125
International Nuclear Information System (INIS)
Trijoko, Susetyo; Nasukha; Suyati; Nugroho, Agung.
1993-01-01
Characteristics determination of the Tanka X-ray diagnostic machine model RTO-125. The characteristics of an X-ray machine used for examining patients should be known. The characteristics studied in this paper include: X-ray beam profile, coincidence of the light field with the radiation field, peak voltage, radiation quality, stability of exposures, and linearity of exposures against time. The beam profile and radiation-field alignment were determined using X-ray film. A Wisconsin kVp test cassette was used to measure peak voltage. The quality of the radiation, represented by the half-value layer (HVL), was measured using an aluminium step-wedge. Stability and linearity of exposures were measured using an ionization chamber detector having an air volume of 40 cc. The results of this study were documented for the TANKA X-ray machine model RTO-125 of PSPKR BATAN, and the method of this study can be applied to X-ray diagnostic machines in general. (authors). 6 refs., 2 tabs., 6 figs
Energy Technology Data Exchange (ETDEWEB)
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard
2015-08-24
This paper presents a nonlinear analytical model of a novel double-sided flux-concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) method. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through the different parts of the machine, including the air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective TFM designs compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and the resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to agree, with less than 5% error, while reducing the computation time by a factor of 25.
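The series-parallel flux-tube idea reduces to network algebra on reluctances: each tube contributes R = l / (mu * A), series tubes add, parallel tubes combine like resistors, and flux = MMF / R_total. The fragment below is a generic magnetic-equivalent-circuit sketch with illustrative geometry, not the TFM model of the paper.

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a flux tube of given length and cross-section."""
    return length_m / (MU0 * mu_r * area_m2)

def series(*rels):
    return sum(rels)

def parallel(*rels):
    return 1.0 / sum(1.0 / r for r in rels)

# A toy path: a stator iron tube in series with two parallel air gaps.
# All dimensions and the MMF are assumed values for illustration.
R_iron = reluctance(0.10, 4e-4, mu_r=4000.0)  # 100 mm iron path
R_gap = reluctance(0.001, 4e-4)               # 1 mm air gap
R_total = series(R_iron, parallel(R_gap, R_gap))

mmf = 500.0  # ampere-turns (assumed)
flux = mmf / R_total
print(f"total reluctance = {R_total:.3e} A/Wb, flux = {flux:.3e} Wb")
```

Saturation, as in the paper's model, would be handled by making `mu_r` of the iron tubes depend on the flux density and iterating to convergence.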
Using cognitive modeling to improve the man-machine interface
International Nuclear Information System (INIS)
Newton, R.A.; Zyduck, R.C.; Johnson, D.R.
1982-01-01
A group of utilities from the Westinghouse Owners Group was formed in early 1980 to examine the interface requirements and to determine how they could be implemented. The products available from the major vendors were examined early in 1980 and judged not to be completely applicable. The utility group then decided to develop its own specifications for a Safety Assessment System (SAS) and, later in 1980, contracted with a company to develop the system, prepare the software and demonstrate the system on a simulator. The resulting SAS is a state-of-the-art system targeted for implementation on pressurized water reactor nuclear units. It has been designed to provide control room operators with centralized and easily understandable information from a computer-based data and display system. This paper gives an overview of the SAS plus a detailed description of one of its functional areas, called AIDS. The AIDS portion of SAS is an advanced concept which uses cognitive modeling of the operator as the basis for its design.
Young, Sean Gregory
The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.
Directory of Open Access Journals (Sweden)
Zheng Chang
2015-01-01
Based on traditional machine vision recognition technology and traditional artificial neural networks for body movement trajectories, this paper identifies the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) and the extreme learning machine (constructed as the classification artificial neural network of body movements), the paper applies the method to machine vision of the body movement trajectory. In detail, the paper gives a detailed introduction to the algorithm and realization scheme of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine. Finally, by comparing the results of the recognition experiments, it verifies that body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine achieves a more accurate recognition rate and better robustness.
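The extreme learning machine itself is compact: a random, fixed hidden layer plus output weights solved in closed form by least squares. The sketch below uses toy two-class "feature" data as a stand-in for the invariant-moment eigenvectors of the paper; the feature dimensions and class structure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y_onehot, n_hidden=50):
    """Fit an extreme learning machine: random input weights, pinv output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot          # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy "trajectory feature" data: two well-separated classes in 7 dimensions.
X0 = rng.normal(0.0, 0.5, size=(80, 7))
X1 = rng.normal(2.0, 0.5, size=(80, 7))
X = np.vstack([X0, X1])
y = np.array([0] * 80 + [1] * 80)
Y = np.eye(2)[y]  # one-hot targets

W, b, beta = elm_fit(X, Y)
acc = np.mean(elm_predict(X, W, b, beta) == y)
print("training accuracy:", acc)
```

Because only `beta` is solved for, training is a single pseudo-inverse rather than iterative backpropagation, which is the speed advantage ELMs are known for.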
Parallel Phase Model: a programming model for high-end parallel machines with manycores.
Energy Technology Data Exchange (ETDEWEB)
Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian
2009-04-01
This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
Sensitivity analysis of machine-learning models of hydrologic time series
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
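The perturbation procedure generalizes to any black-box model: perturb one forcing series and divide the change in the response series by the perturbation. The "model" below is a toy stand-in for the MWA-ANN models, with assumed rainfall and pumping forcings; only the procedure itself is from the abstract.

```python
import numpy as np

def model(rainfall, pumping):
    # Toy hydrologic response: level rises with a 30-step moving window average
    # of rainfall and falls with groundwater pumping (illustrative stand-in).
    kernel = np.ones(30) / 30.0
    return 10 + 0.5 * np.convolve(rainfall, kernel, mode="same") - 0.3 * pumping

def sensitivity(model, inputs, name, delta=1.0):
    """Forcing-response sensitivity: d(response)/d(input) by finite perturbation."""
    base = model(**inputs)
    perturbed = dict(inputs, **{name: inputs[name] + delta})
    return (model(**perturbed) - base) / delta

rng = np.random.default_rng(2)
inputs = {
    "rainfall": rng.gamma(2.0, 2.0, 365),   # assumed daily rainfall series
    "pumping": rng.uniform(0.0, 5.0, 365),  # assumed daily groundwater use
}

s_rain = sensitivity(model, inputs, "rainfall")
s_pump = sensitivity(model, inputs, "pumping")
print("mean sensitivity to rainfall:", s_rain.mean())
print("mean sensitivity to pumping:", s_pump.mean())
```

The output of `sensitivity` is itself a time series, so temporal variations (e.g. drought-period increases, as reported in the abstract) can be read directly from it.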
International Nuclear Information System (INIS)
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-01-01
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed "error indicators" (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a "local" regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
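A hedged sketch of the first use of such a framework (error correction): regress the surrogate's error in a quantity of interest on cheap features, then add the predicted error back to the surrogate prediction. The high-fidelity and surrogate "models" and the two error indicators below are toy stand-ins, not the subsurface-flow models of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def high_fidelity(p):   # expensive "truth" (toy stand-in)
    return np.sin(p) + 0.1 * p ** 2

def surrogate(p):       # cheap approximation with structured error (toy)
    return np.sin(p)

# Training set: parameter instances at which BOTH models are simulated.
p_train = rng.uniform(-3, 3, 200)
error = high_fidelity(p_train) - surrogate(p_train)

# Error indicators (features): here just the parameter and the surrogate
# output; in practice these would be residual norms and similar quantities.
features = np.column_stack([p_train, surrogate(p_train)])
error_model = RandomForestRegressor(n_estimators=100, random_state=0)
error_model.fit(features, error)

# Correct the surrogate prediction at new parameter instances.
p_test = rng.uniform(-3, 3, 50)
f_test = np.column_stack([p_test, surrogate(p_test)])
corrected = surrogate(p_test) + error_model.predict(f_test)

raw_err = np.abs(high_fidelity(p_test) - surrogate(p_test)).mean()
cor_err = np.abs(high_fidelity(p_test) - corrected).mean()
print(f"mean |error|: raw = {raw_err:.4f}, corrected = {cor_err:.4f}")
```

The paper's framework adds two refinements this sketch omits: time dependence of the error and a locality step (classification or clustering) before fitting local regressors.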
Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.
Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
2018-02-22
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, and showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
A one-dimensional Q-machine model taking into account charge-exchange collisions
International Nuclear Information System (INIS)
Maier, H.; Kuhn, S.
1992-01-01
The Q-machine is a nontrivial bounded plasma system which is excellently suited not only for fundamental plasma physics investigations but also for the development and testing of new theoretical methods for modeling such systems. However, although Q-machines have now been around for over thirty years, it appears that there exist no comprehensive theoretical models taking into account their considerable geometrical and physical complexity with a reasonable degree of self-consistency. In the present context we are concerned with the low-density, single-emitter Q-machine, for which the most widely used model is probably the (one-dimensional) "collisionless plane-diode model", which was originally developed for thermionic diodes. Although the validity of this model is restricted to certain "axial" phenomena, we consider it a suitable starting point for extensions of various kinds. While a generalization to two-dimensional geometry (with still collisionless plasma) is being reported elsewhere, the present work represents a first extension to collisional plasma (with still one-dimensional geometry). (author) 12 refs., 2 figs
Law machines: scale models, forensic materiality and the making of modern patent law.
Pottage, Alain
2011-10-01
Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.
International Nuclear Information System (INIS)
Liu, Hui; Tian, Hong-qi; Li, Yan-fei
2015-01-01
Highlights:
• A hybrid architecture is proposed for wind speed forecasting.
• Four algorithms are used for the multi-scale decomposition of the wind speed.
• Extreme learning machines are employed for the wind speed forecasting.
• All the proposed hybrid models can generate accurate results.
Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition, and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate by how much these mainstream signal decomposing algorithms improve the Extreme Learning Machines in multiple-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results. At the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, in all the step predictions; and (4) the proposed algorithms are effective in accurate wind speed prediction.
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves the relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources (for example, the UK Met Office operates the largest operational High Performance Computer in Europe), and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than the full simulation. This is the motivation for a technique called model emulation: the idea is to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options are presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques are discussed.
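Model emulation in miniature: fit a fast statistical model to input-output pairs of an expensive computation, then evaluate the emulator in its place. The "physics" function below is a toy stand-in, not a Unified Model parametrization, and the input ranges are assumed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

def physics(T, q):
    # Pretend this is an expensive sub-grid scheme: a smooth response to
    # temperature (K) and a humidity-like variable (0-1). Purely illustrative.
    return np.tanh(0.1 * (T - 273.0)) * q

# Generate training data by running the "expensive" scheme offline.
X = rng.uniform([250.0, 0.0], [310.0, 1.0], size=(2000, 2))
y = physics(X[:, 0], X[:, 1])

emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
emulator.fit(X, y)

# At forecast time, the emulator replaces the scheme.
X_new = rng.uniform([250.0, 0.0], [310.0, 1.0], size=(200, 2))
mae = np.abs(emulator.predict(X_new) - physics(X_new[:, 0], X_new[:, 1])).mean()
print(f"emulator mean absolute error: {mae:.4f}")
```

The trade-off mirrors the one in the abstract: the emulator is much cheaper per call than the original scheme, at the price of a (measurable) approximation error.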
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and their integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
Directory of Open Access Journals (Sweden)
Roque Calvo
2016-09-01
Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and measurement uncertainty difficult. As a result, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and integrates it into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the key measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement and integrate easily into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
DEFF Research Database (Denmark)
Puthumana, Govindan
2017-01-01
conductivity and high strength, making it extremely difficult to machine. Micro-Electrical Discharge Machining (Micro-EDM) is a non-conventional method that has the potential to overcome these restrictions for the machining of Inconel 718. Response Surface Method (RSM) was used for modelling the tool Electrode Wear...
DEFF Research Database (Denmark)
Sabuncu, Mert R.; Van Leemput, Koen
2012-01-01
This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...
DEFF Research Database (Denmark)
Knudsen, Hans
1995-01-01
A model of the 2×3-phase synchronous machine is presented using a new transformation based on the eigenvectors of the stator inductance matrix. The transformation fully decouples the stator inductance matrix, and this leads to an equivalent diagram of the machine with no mutual couplings...
Traffic congestion forecasting model for the INFORM System. Final report
Energy Technology Data Exchange (ETDEWEB)
Azarm, A.; Mughabghab, S.; Stock, D.
1995-05-01
This report describes a computerized traffic forecasting model, developed by Brookhaven National Laboratory (BNL) for a portion of the Long Island INFORM Traffic Corridor. The model has gone through a testing phase and is currently able to make accurate traffic predictions up to one hour ahead. The model will eventually take on-line traffic data from the INFORM system roadway sensors and project future traffic patterns, allowing operators at the New York State Department of Transportation (D.O.T.) INFORM Traffic Management Center to manage traffic more effectively. It can also form the basis of a travel information system. The BNL computer model developed for this project is called ATOP, for Advanced Traffic Occupancy Prediction. The various modules of the ATOP code are currently written in Fortran and run on PCs (Pentium machines) faster than real time for the section of the INFORM corridor under study. The ATOP code currently contains the following routines: statistical forecasting of traffic flow and occupancy using historical data for similar days and times (long-term knowledge) and recent information from the past hour (short-term knowledge); estimation of the empirical relationships between traffic flow and occupancy using long- and short-term information; mechanistic interpolation using macroscopic traffic models, based on the forecasted traffic flow and occupancy (item 1) and the empirical relationships (item 2), for the specific highway configuration at the time of simulation (construction, lane closure, etc.); and a statistical routine for detection and classification of anomalies and their impact on highway capacity, which is fed back to the previous items.
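The first ATOP routine, blending long-term historical knowledge with short-term readings from the past hour, can be illustrated with a simple weighted combination; the weighting scheme and numbers here are assumptions for illustration, not the actual BNL algorithm.

```python
def forecast_occupancy(historical_profile, recent, horizon, alpha=0.6):
    """Blend long-term knowledge (an occupancy profile for similar days and
    times) with short-term knowledge (sensor readings from the past hour).

    historical_profile: expected occupancy for each future step (length >= horizon)
    recent: most recent sensor readings, oldest first
    alpha: weight on the historical profile (illustrative value)
    """
    # Short-term component: persist the latest level plus its linear trend.
    level = recent[-1]
    trend = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    out = []
    for h in range(1, horizon + 1):
        short = level + trend * h
        out.append(alpha * historical_profile[h - 1] + (1 - alpha) * short)
    return out
```

With flat recent readings the forecast relaxes toward the historical profile; with a strong recent trend the short-term term dominates the near horizon.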
A Collaboration Model for Community-Based Software Development with Social Machines
Directory of Open Access Journals (Sweden)
Dave Murray-Rust
2016-02-01
Full Text Available Crowdsourcing is generally used for tasks with minimal coordination, providing limited support for dynamic reconfiguration. Modern systems, exemplified by social machines, are subject to continual flux in both the client and development communities and their needs. To support crowdsourcing of open-ended development, systems must dynamically integrate human creativity with machine support. While workflows can be used to handle structured, predictable processes, they are less suitable for social machine development and its attendant uncertainty. We present models and techniques for coordination of human workers in crowdsourced software development environments. We combine the Social Compute Unit, a model of ad-hoc human worker teams, with versatile coordination protocols expressed in the Lightweight Social Calculus. This allows us to combine coordination and quality constraints with dynamic assessments of end-user desires, dynamically discovering and applying development protocols.
Energy Technology Data Exchange (ETDEWEB)
Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George
2017-04-21
The absence of a robust and unified theory of cyber dynamics presents both challenges and opportunities for using machine-learning-based, data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. To be operationally beneficial, machine-learning-based cybersecurity models need the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are explored further in this chapter. Bayesian networks and hidden Markov models are introduced as examples of widely used data-driven classification/modeling strategies.
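As a concrete instance of the hidden Markov models mentioned above, the forward algorithm computes the probability of an observation sequence under the model; the two-state "normal"/"compromised" host model below is a toy illustration with invented numbers, not an example from the chapter.

```python
def hmm_forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence) under an HMM.

    pi[i]: initial state probabilities; A[i][j]: transition probabilities;
    B[i][o]: emission probabilities; obs: sequence of observation indices.
    """
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    # Propagate: sum over predecessor states, then emit.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)

# Toy model: state 0 = normal host, state 1 = compromised host;
# observation 0 = quiet traffic, observation 1 = noisy traffic.
pi = [0.9, 0.1]
A = [[0.95, 0.05], [0.20, 0.80]]
B = [[0.8, 0.2], [0.3, 0.7]]
p = hmm_forward(pi, A, B, [0, 0, 1])
```

Comparing such sequence likelihoods under competing models (normal vs. compromised) is one simple data-driven classification strategy of the kind the chapter discusses.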
Pak Kin Wong; Hang Cheong Wong; Chi Man Vong; Tong Meng Iong; Ka In Wong; Xianghui Gao
2015-01-01
Effective air-ratio control is desirable to maintain the best engine performance. However, traditional air-ratio control assumes the lambda sensor located at the tail pipe works properly and relies strongly on the air-ratio feedback signal measured by the lambda sensor. When the sensor is warming up during cold start or under failure, the traditional air-ratio control no longer works. To address this issue, this paper utilizes an advanced modelling technique, kernel extreme learning machine (...
Quantitative chemogenomics: machine-learning models of protein-ligand interaction.
Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena
2011-01-01
Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific for each step in this kind of data-driven modeling will be discussed. © 2011 Bentham Science Publishers
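The core data preparation step in proteochemometrics, assembling rows that concatenate protein descriptors, ligand descriptors, and a measured interaction value, can be sketched as follows; all names, descriptors, and numbers are hypothetical.

```python
def build_qmspr_rows(proteins, ligands, interactions):
    """Assemble a proteochemometric data matrix: each row concatenates the
    protein descriptor vector, the ligand descriptor vector, and the
    experimentally measured interaction value for that pair.
    """
    rows = []
    for (p, l), y in interactions.items():
        rows.append(proteins[p] + ligands[l] + [y])
    return rows

# Hypothetical descriptors: proteins by two sequence-derived features,
# ligands by molecular mass and logP; interaction value e.g. pKi.
proteins = {"kinaseA": [0.3, 1.2], "kinaseB": [0.7, 0.4]}
ligands = {"mol1": [210.0, 2.1], "mol2": [305.5, 3.4]}
interactions = {("kinaseA", "mol1"): 6.2, ("kinaseB", "mol2"): 4.8}
rows = build_qmspr_rows(proteins, ligands, interactions)
```

Because every row spans both protein and ligand space, a single model trained on such a matrix can interpolate to unseen protein-ligand combinations, which is the stated aim of the QMSPR approach.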
Advancing Control for Shield Tunneling Machine by Backstepping Design with LuGre Friction Model
Directory of Open Access Journals (Sweden)
Haibo Xie
2014-01-01
Full Text Available Shield tunneling machines are widely applied in underground tunnel construction. The shield machine is a complex machine with large momentum and an ultralow advancing speed. Working conditions underground are complicated and unpredictable, which makes controlling the advancing speed difficult. This paper focuses on advancing motion control along the desired tunnel axis. A three-state dynamic model was established that considers the unknown front-face earth pressure force and the unknown friction force. The LuGre friction model was introduced to describe the friction force. A backstepping design was then proposed to make the tracking error converge to zero. For comparison, a controller without the LuGre model was also designed. Tracking simulations of speed regulation, and simulations in which the front-face earth pressure changed, were carried out to show the transient performance of the proposed controller. The results indicate that the controller has good tracking performance even under changing geological conditions. Speed regulation experiments were carried out to validate the controllers.
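The LuGre model referenced above describes friction through an internal bristle-deflection state z. A minimal constant-velocity simulation, with illustrative parameter values rather than the paper's identified ones, looks like this:

```python
import math

def lugre_force(v, T=2.0, dt=1e-4, s0=1e4, s1=100.0, s2=0.4,
                Fc=1.0, Fs=1.5, vs=0.01):
    """Simulate the LuGre friction model at constant velocity v and return
    the final friction force. Parameter values are illustrative only.

        dz/dt = v - |v| z / g(v),  g(v) = (Fc + (Fs - Fc) exp(-(v/vs)^2)) / s0
        F = s0 z + s1 dz/dt + s2 v
    """
    z = 0.0
    for _ in range(int(T / dt)):
        g = (Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)) / s0
        dz = v - abs(v) * z / g          # bristle state dynamics
        z += dz * dt                     # explicit Euler step
        F = s0 * z + s1 * dz + s2 * v    # total friction force
    return F
```

At constant velocity the state settles at z = g(v), so the force approaches the classical Stribeck-plus-viscous curve, which is why LuGre captures both stiction and sliding friction in one model.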
Study on intelligent processing system of man-machine interactive garment frame model
Chen, Shuwang; Yin, Xiaowei; Chang, Ruijiang; Pan, Peiyun; Wang, Xuedi; Shi, Shuze; Wei, Zhongqian
2018-05-01
A man-machine interactive garment frame model intelligent processing system is studied in this paper. The system consists of several sensor devices, a voice processing module, mechanical moving parts, and a centralized data acquisition device. The sensor devices collect information on the environmental changes caused by a body approaching the clothes frame model; the data acquisition device collects the information sensed by the sensor devices; the voice processing module performs speaker-independent speech recognition to achieve human-machine interaction; and the mechanical moving parts make the corresponding mechanical responses to the information processed by the data acquisition device. There is a one-way connection between the sensor devices and the data acquisition device, a two-way connection between the data acquisition device and the voice processing module, and a one-way connection from the data acquisition device to the mechanical moving parts. The intelligent processing system can judge whether it needs to interact with the customer, realizing man-machine interaction in place of the current rigid frame model.
Fang, Xingang; Bagui, Sikha; Bagui, Subhash
2017-08-01
The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structural-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need the understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK 5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK 5) target suggested a feasible descriptor/model selection strategy on similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Didar, Tohid Fatanat; Dolatabadi, Ali; Wüthrich, Rolf
2008-01-01
Spark-assisted chemical engraving (SACE) is an unconventional micro-machining technology based on electrochemical discharge used for micro-machining nonconductive materials. SACE 2D micro-machining with constant speed was used to machine micro-channels in glass. Parameters affecting the quality and geometry of the micro-channels machined by SACE technology with constant velocity were presented and the effect of each of the parameters was assessed. The effect of chemical etching on the geometry of micro-channels under different machining conditions has been studied, and a model is proposed for characterization of the micro-channels as a function of machining voltage and applied speed
Developing a dengue forecast model using machine learning: A case study in China.
Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-10-01
In China, dengue remains an important public health issue, with expanded affected areas and increased incidence in recent years. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with the other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
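The RMSE and R-squared measures used above to compare candidate models are straightforward to compute; minimal pure-Python versions (equivalent in intent to standard implementations such as scikit-learn's metrics) are:

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot
```

A lower RMSE and an R-squared closer to 1 indicate a better fit, which is the criterion by which the SVR model was preferred over the other candidates.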
Are there intelligent Turing machines?
Bátfai, Norbert
2015-01-01
This paper introduces a new computing model based on cooperation among Turing machines, called orchestrated machines. Like universal Turing machines, orchestrated machines are designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...
Multifrequency spiral vector model for the brushless doubly-fed induction machine
DEFF Research Database (Denmark)
Han, Peng; Cheng, Ming; Zhu, Xinkai
2017-01-01
This paper presents a multifrequency spiral vector model for both steady-state and dynamic performance analysis of the brushless doubly-fed induction machine (BDFIM) with a nested-loop rotor. Winding function theory is first employed to give a full analytical picture of the inductance characteristics, revealing the underlying relationship between harmonic components of stator-rotor mutual inductances and the airgap magnetic field distribution. Different from existing vector models, which only model the fundamental components of mutual inductances, the proposed vector model takes into consideration the low-order space harmonic coupling by incorporating nonsinusoidal inductances into the modeling process. A new model order reduction approach is then proposed to transform the nested-loop rotor into an equivalent single-loop one. The effectiveness of the proposed modelling method is verified by 2D...
Language Model Adaptation Using Machine-Translated Text for Resource-Deficient Languages
Directory of Open Access Journals (Sweden)
Sadaoki Furui
2009-01-01
Full Text Available Text corpus size is an important issue when building a language model (LM). This is a particularly important issue for languages where little data is available. This paper introduces an LM adaptation technique to improve an LM built using a small amount of task-dependent text with the help of a machine-translated text corpus. Icelandic speech recognition experiments were performed using data machine translated (MT) from English to Icelandic on a word-by-word and sentence-by-sentence basis. LM interpolation using the baseline LM and an LM built from either word-by-word or sentence-by-sentence translated text reduced the word error rate significantly when manually obtained utterances used as a baseline were very sparse.
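The LM interpolation step described above is, at its simplest, a per-word convex combination of two probability distributions; the sketch below uses made-up unigram probabilities and an arbitrary interpolation weight (in practice the weight is tuned on held-out data).

```python
def interpolate_lm(p_task, p_mt, lam=0.7):
    """Linearly interpolate two unigram LMs given as dicts word -> probability.

    lam weights the small task-dependent LM; (1 - lam) weights the LM built
    from machine-translated text. The value 0.7 is an arbitrary example.
    """
    vocab = set(p_task) | set(p_mt)
    return {w: lam * p_task.get(w, 0.0) + (1 - lam) * p_mt.get(w, 0.0)
            for w in vocab}

# Hypothetical unigram distributions (not from the Icelandic experiments).
p_task = {"vedur": 0.6, "gott": 0.4}
p_mt = {"vedur": 0.3, "gott": 0.3, "slaemt": 0.4}
p = interpolate_lm(p_task, p_mt)
```

Note that words unseen in the sparse task-dependent LM (here "slaemt") still receive probability mass from the machine-translated corpus, which is exactly why the interpolation helps when baseline data is sparse.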
Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris
2015-04-01
The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
Directory of Open Access Journals (Sweden)
Ickwon Choi
2015-04-01
Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
International Nuclear Information System (INIS)
Leo Kwee Wah; Lojius Lombigit; Abu Bakar Mhd Ghazali; Muhamad Zahidee Taat; Ayub Mohamed; Chong Foh Yoong
2006-01-01
An electronic model of an electron beam machine (EBM) is designed to simulate the control system of the Nissin EBM located at Block 43 of the MINT complex, Jalan Dengkil, with a maximum output of 3 MeV, 30 mA, using Programmable Automation Controllers (PACs). The model operates like the real EBM system: all start-up, interlocking and stopping procedures are fully followed. It also involves formulating mathematical models to relate certain outputs to the input parameters, using data from actual operation of the EB machine. The simulation involves a PAC system consisting of digital and analogue input/output modules. The program code is written using LabVIEW software (real-time version) on a PC and then downloaded into the PAC's stand-alone memory. All 23 interlocking signals required by the EB machine are manually controlled by mechanical switches and represented by LEDs. The EB parameters are manually controlled by potentiometers and displayed on analogue and digital meters. All these signals are then interfaced to the PC via the Wi-Fi wireless communication built into the PAC controller. The program is developed in accordance with the specifications and requirements of the original EB system and displays them on the panel of the model and also on the PC monitor. All possible faults arising from human error and from hardware and software malfunctions, including worst-case conditions, will be tested, evaluated and addressed. We hope that the performance of our model complies with the requirements of operating the EB machine. It is also hoped that this electronic model can replace the original PC interfacing currently utilized in the Nissin EBM in the near future. The system can also be used to study fault tolerance analysis and automatic re-configuration for advanced control of the EB system. (Author)
Directory of Open Access Journals (Sweden)
Lucky eMehra
2016-03-01
Full Text Available Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms, namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early
Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S
2016-01-01
Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of
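The random forest approach that performed best above can be approximated in miniature by a bootstrap ensemble of one-split decision stumps with majority voting; the toy pre-planting data below (latitude and a wheat-residue score) is invented for illustration and is not from the study.

```python
import random

def fit_stump(data):
    """Pick the (feature, threshold, sign) split minimising misclassification."""
    best = None
    for f in range(len(data[0][0])):
        for x, _ in data:
            t = x[f]
            for sign in (1, -1):
                pred = [1 if sign * (xi[f] - t) > 0 else 0 for xi, _ in data]
                err = sum(p != y for p, (_, y) in zip(pred, data))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    return best[1:]

def fit_forest(data, n_trees=15, seed=0):
    """Random-forest-style ensemble: stumps fit on bootstrap resamples."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]   # bootstrap sample
        stumps.append(fit_stump(boot))
    def predict(x):
        votes = sum(1 if s * (x[f] - t) > 0 else 0 for f, t, s in stumps)
        return 1 if votes * 2 > len(stumps) else 0
    return predict

# Toy cases: (latitude, wheat-residue score) -> high late-season SNB risk?
data = [((34.0, 0.1), 0), ((34.5, 0.2), 0), ((35.0, 0.1), 0),
        ((36.0, 0.8), 1), ((36.5, 0.9), 1), ((36.2, 0.7), 1)]
clf = fit_forest(data)
```

Bootstrap resampling decorrelates the individual learners, so the majority vote is more stable than any single stump; full random forests additionally subsample features at each split.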
Directory of Open Access Journals (Sweden)
Grzegorz SZALA
2014-03-01
Full Text Available In this paper an attempt is made to analyse models of fatigue life curves applicable to calculations of the fatigue life of machine elements. The analysis is limited to fatigue life curves in the stress approach, covering cyclic stresses from the ranges of low cycle fatigue (LCF), high cycle fatigue (HCF), the fatigue limit (FL) and giga-cycle fatigue (GCF) appearing in the loading spectrum at the same time. Chosen models of the analysed fatigue life curves are illustrated with test results for steel and aluminium alloys.
Directory of Open Access Journals (Sweden)
Jian-ping Wen
2015-01-01
Full Text Available In order to improve the energy utilization rate of a battery-powered electric vehicle (EV) using a brushless DC machine (BLDCM), the model of the braking current generated by regenerative braking and its control method are discussed. On the basis of the equivalent circuit of the BLDCM during the regenerative braking period, the mathematical model of the braking current is established. By using an extended state observer (ESO) to observe the actual braking current and the unknown disturbances of the regenerative braking system, an auto-disturbance rejection controller (ADRC) for controlling the braking current is developed. Experimental results show that the proposed method gives better recovery efficiency and is robust to disturbances.
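An extended state observer of the kind described above treats the unknown disturbance as an extra state to be estimated alongside the measured one. A minimal discrete-time linear ESO for a first-order plant, with illustrative gains rather than the paper's tuned values, can be sketched as:

```python
def eso_estimate(y_meas, dt=0.001, b1=60.0, b2=900.0):
    """Linear extended state observer for a first-order system
        dy/dt = d(t) + u,   with u = 0 here.
    z1 tracks the measured output y; z2 tracks the lumped unknown
    disturbance d. Gains b1, b2 are illustrative (in ADRC they are tuned
    to a chosen observer bandwidth).
    """
    z1, z2 = 0.0, 0.0
    for y in y_meas:
        e = z1 - y                 # observation error
        z1 += dt * (z2 - b1 * e)   # state estimate update
        z2 += dt * (-b2 * e)       # disturbance estimate update
    return z1, z2

# Simulated measurement: a constant unknown disturbance d = 2.0 drives y.
d, dt = 2.0, 0.001
y_meas = [d * k * dt for k in range(2000)]   # y(t) = 2t over 2 s
z1, z2 = eso_estimate(y_meas, dt)
```

Once z2 converges to the disturbance, an ADRC simply subtracts the estimate from the control signal, which is what makes the braking-current loop robust to the unknown disturbances mentioned in the abstract.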
Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En
2015-06-01
Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
A hybrid prognostic model for multistep ahead prediction of machine condition
Roulias, D.; Loutas, T. H.; Kostopoulos, V.
2012-05-01
Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, over several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.
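The split-then-model idea above (cluster the degradation indicator into operating regions, then fit a model per region) can be sketched with per-region least-squares lines standing in for the FFANNs; the breakpoint and data are invented for illustration.

```python
def fit_piecewise(ts, breakpoints):
    """Fit a separate least-squares line to each operating region of a
    degradation indicator. Breakpoints are assumed known here; in the
    paper they come from a data clustering scheme.
    """
    models = []
    bounds = [0] + breakpoints + [len(ts)]
    for a, b in zip(bounds, bounds[1:]):
        xs, ys = list(range(a, b)), ts[a:b]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        models.append((slope, my - slope * mx, a, b))
    return models

def predict(models, x):
    """Evaluate the piecewise model; extrapolate with the last region."""
    for slope, icept, a, b in models:
        if x < b:
            return slope * x + icept
    slope, icept, _, _ = models[-1]
    return slope * x + icept

# Toy indicator: slow wear, then accelerated wear after sample 5.
ts = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 2.0, 2.5, 3.0, 3.5]
models = fit_piecewise(ts, [5])
```

Extrapolating the final region's model is the multistep-ahead prediction step: the last fitted trend is projected forward to estimate the future degradation level.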
Directory of Open Access Journals (Sweden)
Lei Jia
Thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can help guide the decision-making process in protein design via mutagenesis. An in silico point-mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database containing thousands of experimentally measured thermostability values for protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were used to build thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, and K nearest neighbor) and partial least squares regression were used to build the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse-mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.
Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto
2010-03-01
This paper describes field tests carried out on a driving simulator to validate the algorithms and correlations of two dynamic parameters, driving task demand and driver distraction, which are able to predict drivers' intentions. These parameters belong to the driver model developed in the AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data were collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically adaptive neuro-fuzzy inference systems (ANFIS) and artificial neural networks (ANN). Two models of task demand and distraction have been developed, one for each technique. The paper provides an overview of the driver model, describes the task demand and distraction modelling, and reports the tests conducted to validate these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique was carried out; for distraction in particular, promising results (low prediction errors) were obtained with the artificial neural network.
Predicting Freeway Work Zone Delays and Costs with a Hybrid Machine-Learning Model
Directory of Open Access Journals (Sweden)
Bo Du
2017-01-01
A hybrid machine-learning model, integrating an artificial neural network (ANN) and a support vector machine (SVM) model, is developed to predict spatiotemporal delays, subject to road geometry, number of lane closures, and work zone duration, in different periods of a day and days of the week. The model is very user-friendly, requiring minimal input from users, and can predict the delays caused by a work zone at any location on a New Jersey freeway. To this end, large amounts of data from different sources were collected to establish the relationship between the model inputs and outputs. A comparative analysis indicates that the proposed model outperforms the others, yielding the lowest root mean square error (RMSE). The proposed hybrid model can be used to calculate contractor penalties for cost overruns as well as incentive reward schedules for early work completion. Additionally, it can assist work zone planners in determining the best start and end times of a work zone for developing and evaluating traffic mitigation and management plans.
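The comparison metric used above, root mean square error, is easy to make concrete. The delay values below are made-up stand-ins, not the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

observed = [10.0, 12.0, 15.0, 11.0]   # hypothetical observed delays (minutes)
model_a  = [ 9.0, 12.5, 14.0, 11.5]   # predictions from one candidate model
model_b  = [13.0,  9.0, 18.0,  8.0]   # predictions from another
print(rmse(observed, model_a) < rmse(observed, model_b))  # → True: model A wins
```

The model with the lowest RMSE against held-out observations is preferred, as in the paper's comparative analysis.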
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is widely used in industrial sectors such as aeronautics, energy generation, petrochemicals and biomedicine. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates two material constitutive models, the Power Law (PL) and Johnson-Cook (JC) models, to mimic the mechanical behaviour of Ti6Al4V. It investigates cutting temperatures, cutting forces, stresses and plastic strains for different PL and JC material models and associated parameters. In addition, the numerical study integrates different cutting tool rake angles into the machining simulations. The simulated results will be useful for drawing conclusions to improve the overall machining performance of Ti6Al4V.
Ratner, Bruce
2011-01-01
The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has
Energy Technology Data Exchange (ETDEWEB)
Bruckmann, Tobias; Brandt, Thorsten [mercatronics GmbH, Duisburg (Germany)
2009-12-17
The development of new functions for machines operating underground often requires a prolonged and cost-intensive test phase. Precisely the development of complex functions, as found in operator assistance systems, for example, is highly iterative. If a physical prototype is required for each iteration step of the development, development costs naturally increase rapidly. Virtual prototypes and simulators based on mathematical models of the machine offer an alternative in this case. The article describes the basic principles for modelling the kinematics of underground machines. (orig.)
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
Energy Technology Data Exchange (ETDEWEB)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
2011-07-27
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb, and these angles are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA, which is used to simulate and evaluate the operators' comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.
Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.
2017-02-01
We developed a flare prediction model using machine learning, optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data from the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ~60 features were extracted together with their time differentials, including magnetic neutral lines, the current helicity, UV brightening, and the flare history. After standardizing the feature database, we fully shuffled it and randomly separated it into two sets for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. k-NN showed the highest performance among the three algorithms. The ranking of feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
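The winning classifier (k-NN) and the quoted prediction score (the true skill statistic, TSS) can both be sketched in a few lines. This is an illustrative toy with synthetic two-dimensional features, not the study's solar data or code:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority-vote k-nearest-neighbour prediction (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

def true_skill_statistic(y_true, y_pred):
    """TSS = hit rate - false alarm rate, for binary labels {0, 1}."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / (tp + fn) - fp / (fp + tn)

rng = np.random.default_rng(0)
# Two synthetic feature clusters standing in for quiet (0) and flaring (1) ARs
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.permutation(200)            # fully shuffle before splitting
train, test = idx[:150], idx[150:]
y_hat = knn_predict(X[train], y[train], X[test], k=5)
print(round(true_skill_statistic(y[test], y_hat), 2))
```

TSS ranges from -1 to 1, with 1 meaning perfect discrimination; on this well-separated toy data the score comes out close to 1.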
Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model
Directory of Open Access Journals (Sweden)
Shaojiang Dong
2014-01-01
Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. First, features are extracted by time-domain and time-frequency-domain methods. Because the extracted original features are high-dimensional and include superfluous information, the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, with Cao's method used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.
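The Markov step can be illustrated in isolation: quantize the degradation indicator into discrete health states, estimate a transition matrix from the observed state sequence, and use the most likely successor state to refine a point forecast. The states and thresholds below are our own toy assumptions, not the paper's:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a Markov transition matrix from a discrete state sequence."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    # Normalise rows to probabilities (uniform prior for unvisited states)
    return np.where(rows > 0, T / np.maximum(rows, 1), 1.0 / n_states)

# A monotonically degrading indicator quantized into 3 health states
states = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
T = transition_matrix(states, 3)
next_state = int(np.argmax(T[1]))  # most likely successor of state 1
print(next_state)  # → 1 (the mid-degradation state tends to persist)
```

In the paper's pipeline, such a transition model corrects the SVM's raw forecast toward the statistically likely degradation path.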
Hemodynamic modelling of BOLD fMRI - A machine learning approach
DEFF Research Database (Denmark)
Jacobsen, Danjal Jakup
2007-01-01
This Ph.D. thesis concerns the application of machine learning methods to hemodynamic models for BOLD fMRI data. Several such models have been proposed by different researchers, and they have in common a basis in physiological knowledge of the hemodynamic processes involved in the generation of the BOLD signal. The BOLD signal is modelled as a non-linear function of underlying, hidden (non-measurable) hemodynamic state variables. The focus of this thesis work has been to develop methods for learning the parameters of such models, both in their traditional formulation and in a state space formulation. In the latter, noise enters at the level of the hidden states as well as in the BOLD measurements themselves. A framework has been developed to allow approximate posterior distributions of model parameters to be learned from real fMRI data. This is accomplished with Markov chain Monte Carlo…
Energy Technology Data Exchange (ETDEWEB)
Morton, April M [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; McManamay, Ryan A [ORNL
2017-01-01
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information on residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress toward better understanding the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling that merges a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model is applicable to a wide range of regions and accounts for errors in input data sources.
A novel improved fuzzy support vector machine based stock price trend forecast model
Wang, Shuheng; Li, Guohao; Bao, Yifan
2018-01-01
Application of the fuzzy support vector machine to stock price forecasting. The support vector machine is a machine learning method proposed in the 1990s that handles classification and regression problems very successfully. Due to its excellent learning performance, the technique has become a hot research topic in the field of machine learning and has been successfully applied in many fields. However, as a new technology, there are many limitations to support...
Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D
2014-03-25
A new algorithm has been developed to enable the interpretation of black box models. The algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. It has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query, and an output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is incurred, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models were built using multiple learning algorithms, including support vector machine and random forest, on public Ames mutagenicity data with a variety of fingerprint descriptors. These models achieved good performance in both internal and external validation, with accuracies around 82%, and were used to evaluate the interpretation algorithm. The interpretations revealed close links with understood mechanisms of Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical data set investigation and knowledge extraction/human SAR development.
Modeling and Designing of A Nonlineartemperature-Humidity Controller Using Inmushroom-Drying Machine
Wu, Xiuhua; Luo, Haiyan; Shi, Minhui
The drying of many kinds of farm produce in a closed room, as in a mushroom-drying machine, is generally a complicated nonlinear, time-delay process in which temperature and humidity are the main controlled variables. Accurate control of temperature and humidity is a long-standing problem, and building an accurate mathematical model of how the two vary is difficult but very important. In this paper, a mathematical model is put forward after considering many aspects and analyzing the actual working conditions. From the model it can be seen that the changes of temperature and humidity in the drying machine are not simply linear but form an affine nonlinear process. Controlling this process exactly is the key factor influencing the quality of the dried mushrooms. Differential geometry theories and methods are used to analyze and solve the model of these small-environment variables, and finally a nonlinear controller satisfying the optimal quadratic performance index is designed. It is shown to be more feasible and practical than conventional control.
Directory of Open Access Journals (Sweden)
Daqing Zhang
2015-01-01
The blood-brain barrier (BBB) is a highly complex physical barrier determining which substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method widely used in QSAR studies. For a successful SVM model, the kernel parameters and the feature subset selection are the most important factors affecting prediction accuracy. In most studies they are treated as two independent problems, but it has been shown that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models; optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is therefore a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among these properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it.
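The feature-subset half of the GA idea can be sketched with a toy mutation-and-selection loop over binary feature masks; fitness is the negative validation error of a plain least-squares fit on the selected columns. Everything here (the synthetic data, the subset penalty, the use of least squares in place of SVM regression, the omission of kernel-parameter genes) is our simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, informative = 200, 8, 3
X = rng.normal(size=(n, d))
w_true = np.zeros(d); w_true[:informative] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.normal(scale=0.1, size=n)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

def fitness(mask):
    """Negative validation MSE of least squares on the selected features."""
    if mask.sum() == 0:
        return -np.inf
    cols = np.flatnonzero(mask)
    w, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    err = np.mean((X_va[:, cols] @ w - y_va) ** 2)
    return -err - 0.01 * mask.sum()        # small penalty for large subsets

pop = rng.integers(0, 2, (20, d))          # population of binary feature masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # keep the fitter half
    children = parents[rng.integers(0, 10, 10)].copy()
    children[rng.random(children.shape) < 0.1] ^= 1  # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print(best.tolist())
```

On this synthetic problem the evolved mask tends to select the three informative features and drop the noise columns, mirroring the paper's claim that joint optimization finds compact, accurate subsets.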
Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman
2017-11-14
This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms, Random Forest regression, Stochastic gradient boosting, Multivariate adaptive regression splines and Gaussian processes regression, has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian processes regression are comparable and outperform the other studied methods. Preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian processes regression algorithms have good predictive ability, as proved by external evaluation on the test set, with Q²ext = 0.812 and Q²ext = 0.830, respectively. The obtained models can be used for in silico screening of virtual libraries in the same chemical domain to find new antitrypanosomal agents. A thorough analysis of descriptor influence in the QSAR models and interpretation of their chemical meaning highlights a number of structure-activity relationships. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
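The external-predictivity statistic quoted above is commonly defined as Q²ext = 1 - Σ(y - ŷ)² / Σ(y - ȳ_train)², where ȳ_train is the mean activity of the training set. A quick check on made-up numbers (our own illustration):

```python
def q2_ext(y_test, y_pred, y_train_mean):
    """External Q2: 1 - PRESS over sum of squares about the training mean."""
    press = sum((t - p) ** 2 for t, p in zip(y_test, y_pred))
    ss = sum((t - y_train_mean) ** 2 for t in y_test)
    return 1.0 - press / ss

y_test = [5.1, 6.0, 4.2, 7.3]   # hypothetical observed activities
y_pred = [5.0, 6.2, 4.5, 7.0]   # hypothetical model predictions
print(round(q2_ext(y_test, y_pred, y_train_mean=5.5), 3))  # → 0.957
```

Values near 1 indicate that test-set predictions are far better than simply guessing the training mean, which is the sense in which the paper's 0.812 and 0.830 signal good external predictive ability.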
Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna
2018-05-01
As is well known, the armature current leads the back electromotive force (back-EMF) under load in the interior permanent magnet (PM) machine. This leading armature current produces a demagnetizing field, which can easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and to take demagnetization into consideration at the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, enabling a more precise model of the flux path in the rotor pole shoe. The working point of each PM under each magnetic pole can be calculated accurately by the improved equivalent magnetic network model, and the calculated results are compared with those obtained by FEM. The effects of the d-axis and q-axis components of the armature current, the air-gap length, and the flux barrier size on the working points of the PMs are analyzed with the improved model.
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
Machine learning of frustrated classical spin models. I. Principal component analysis
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If successful, this could be applied, for instance, to analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by classical Monte Carlo simulation of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of the different orders in different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
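The benchmark idea can be sketched directly: run PCA (via SVD) on raw spin configurations and check that the leading components separate ordered from disordered samples. The toy planar-spin data below is our own stand-in, not the paper's frustrated-lattice Monte Carlo data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites = 50
# "Ordered" samples: site angles clustered around a random global direction;
# "disordered" samples: angles uniform on [0, 2*pi)
ordered = np.stack([rng.normal(rng.uniform(0, 2 * np.pi), 0.2, n_sites)
                    for _ in range(100)])
disordered = rng.uniform(0, 2 * np.pi, (100, n_sites))
angles = np.vstack([ordered, disordered])

# Represent each XY spin by its (cos, sin) components and centre the data
data = np.hstack([np.cos(angles), np.sin(angles)])
data -= data.mean(axis=0)
U, S, Vt = np.linalg.svd(data, full_matrices=False)
pc = data @ Vt[:2].T            # projection onto the first two components

# Ordered samples carry a large net magnetization, so they sit far from the
# origin in the leading-PC plane; disordered ones cluster near it
r = np.linalg.norm(pc, axis=1)
print(r[:100].mean() > r[100:].mean())  # → True
```

The first two principal components here play the role of the magnetization order parameter, which is the kind of agreement with prior knowledge the paper uses as its benchmark.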
Application of heuristic and machine-learning approach to engine model calibration
Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.
1993-03-01
Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
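A schematic reconstruction (not the actual ICALIB code) of the heuristic search described above: coordinate-wise hill-climbing with a step size that is halved after each pass, playing the role of the shrinking tolerance window used for incremental solution refinement.

```python
def calibrate(objective, params, step=1.0, passes=8):
    """Minimise objective by hill-climbing one parameter at a time."""
    params = list(params)
    for _ in range(passes):
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                # Keep stepping in this direction while the objective improves
                while objective(trial) < objective(params) - 1e-12:
                    params = trial
                    trial = params.copy()
                    trial[i] += delta
        step *= 0.5   # tighten the window: refine the solution incrementally
    return params

# Toy "engine model error", minimised at (3, -2)
err = lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2
p = calibrate(err, [0.0, 0.0])
print([round(x, 2) for x in p])  # → [3.0, -2.0]
```

The paper's target-parameter ordering heuristics would correspond to choosing which index `i` to visit first; here the order is simply left to right.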
A Model-based Analysis of Impulsivity Using a Slot-Machine Gambling Paradigm
Directory of Open Access Journals (Sweden)
Saee ePaliwal
2014-07-01
Impulsivity plays a key role in decision-making under uncertainty and is a significant contributor to problem and pathological gambling. Standard assessments of impulsivity by questionnaire, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept; which of these facets shape gambling behavior remains unclear. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases, machine switches, casino switches and double-ups. Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e. the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to impulsivity. These novel indices of gambling mechanisms, unmasked during actual play, may be useful for online prevention measures for at-risk players and for future assessments of pathological gambling.
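The Rescorla-Wagner baseline model used in the comparison above has a very compact core: the win-probability estimate moves toward each outcome by a prediction error scaled with a fixed learning rate. A minimal sketch (parameter values are illustrative, not the paper's fits):

```python
def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
    # Rescorla-Wagner update: v <- v + alpha * (outcome - v),
    # i.e. a prediction error scaled by a fixed learning rate alpha.
    v, trace = v0, []
    for o in outcomes:
        v += alpha * (o - v)
        trace.append(v)
    return trace

# A win streak raises the estimated win probability; losses pull it back.
trace = rescorla_wagner([1, 1, 1, 0, 0, 0, 0, 0])
```

The HGF generalizes this by making the effective learning rate itself depend on hierarchically tracked uncertainty, which is what the two winning free parameters modulate.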
Research on Dynamic Modeling and Application of Kinetic Contact Interface in Machine Tool
Directory of Open Access Journals (Sweden)
Dan Xu
2016-01-01
A method combining theoretical analysis and experiment is presented to obtain the equivalent dynamic parameters of a linear guideway through four steps: statics analysis, vibration mode analysis, dynamic experiment, and parameter identification. The dynamic modeling of the linear guideway is thus studied synthetically. Based on contact mechanics and elastic mechanics, the mathematical vibration model and the expressions for the basic mode frequencies are deduced. Then, the equivalent stiffness and damping of the guideway are obtained by means of a single-degree-of-freedom mode-fitting method. The investigation is applied to a gantry-type machining center; comparison with the simulation model and experimental results validates both the availability and the correctness of the approach.
Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted in eight sequential runs, each replicated three times. The TSK fuzzy predictive model achieved an accuracy of 99%, which suggests that it is a suitable and practical method for the non-linear laser lathing process.
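The core of Sugeno-type inference is a firing-strength-weighted average of rule consequents. A minimal zero-order sketch with one input and two invented rules (the memberships and consequents below are illustrative, not the paper's rule base):

```python
import math

def gauss(x, c, s):
    # Gaussian membership function centered at c with width s.
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def tsk_taper(feed_rate):
    # Zero-order Sugeno inference with two illustrative rules:
    #   IF feed is LOW  THEN taper = 0.2
    #   IF feed is HIGH THEN taper = 0.8
    # Output = firing-strength-weighted average of the rule consequents.
    w_low = gauss(feed_rate, c=100.0, s=40.0)
    w_high = gauss(feed_rate, c=300.0, s=40.0)
    return (w_low * 0.2 + w_high * 0.8) / (w_low + w_high)

y_low, y_mid, y_high = tsk_taper(100.0), tsk_taper(200.0), tsk_taper(300.0)
```

First-order TSK models replace the constant consequents with linear functions of the inputs; the weighted-average defuzzification is the same.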
Modeling and prediction of human word search behavior in interactive machine translation
Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na
2017-12-01
Interactive machine translation, a form of computer-aided translation, reduces repetitive, mechanical manual operations through a variety of methods, thereby improving translation efficiency, and it plays an important role in practical translation work. In this paper, we take the behavior of users frequently searching for words during the translation process as the research object, and recast this behavior as a translation selection problem under the current translation context. The paper presents a prediction model that makes comprehensive use of an alignment model, a translation model and a language model of word-searching behavior. It achieves highly accurate prediction of word searches and reduces the switching between mouse and keyboard operations during the user's translation process.
Directory of Open Access Journals (Sweden)
Maíra Nunes Piveta
2016-06-01
Organizations operate in a globalized, competitive environment that is constantly transforming and advancing. Against this backdrop, the Alpha company, which sells coffee vending machines and supplies, adopted a strategy of approaching its final consumers. The overall objective of this study was therefore to identify and assess the degree of satisfaction of Alpha's consumers with respect to its products and drinks and to its digital marketing in social media. The research can be classified as descriptive in nature, with a quantitative approach, implemented through a survey instrument applied to a sample of 120 Alpha consumers at points of sale. The survey results indicated that, in general, consumer satisfaction with Alpha products is high, except with regard to price and quantity of sugar. It was also observed that the reach of Alpha's digital marketing actions is low, as evidenced by low satisfaction scores explained by a lack of consumer awareness. Finally, opportunities and suggestions for improvement were presented.
Directory of Open Access Journals (Sweden)
Chernetska Olga V.
2016-11-01
The article discloses the content of the definition of “information support” and identifies basic approaches to the interpretation of this economic category. The main purpose of information support for managing enterprise investment attractiveness is determined, and its key components are studied. The main types of automated information systems for managing the investment attractiveness of enterprises are identified and characterized, and the basic computer programs for assessing the level of investment attractiveness of enterprises are considered. A model of information support for managing the investment attractiveness of machine-building enterprises is developed.
Modeling of Residual Stress and Machining Distortion in Aerospace Components (PREPRINT)
2010-03-01
Directory of Open Access Journals (Sweden)
Mata-Cabrera Francisco
2013-10-01
Polyetheretherketone (PEEK) composites belong to a group of high-performance thermoplastic polymers and are widely used in structural components. To improve the mechanical and tribological properties, short fibers are added as reinforcement to the material. Given its functional properties and potential applications, it is important to investigate the machinability of non-reinforced PEEK (PEEK), PEEK reinforced with 30% carbon fibers (PEEK CF30), and PEEK reinforced with 30% glass fibers (PEEK GF30) to determine the optimal conditions for the manufacture of parts. The present study establishes the relationship between the cutting conditions (cutting speed and feed rate) and the roughness parameters (Ra, Rt, Rq, Rp) by developing second-order mathematical models. The experiments were planned as a full factorial design of experiments, and an analysis of variance was performed to check the adequacy of the models, confirming that the derived models can predict the roughness parameters within the ranges of parameters investigated in the experiments. The experimental results show that the most influential cutting parameter is the feed rate and, furthermore, that glass fiber reinforcement produces worse machinability.
Van Esbroeck, Alexander; Rubinfeld, Ilan; Hall, Bruce; Syed, Zeeshan
2014-11-01
To investigate the use of machine learning to empirically determine the risk of individual surgical procedures and to improve surgical models with this information. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) data from 2005 to 2009 were used to train support vector machine (SVM) classifiers to learn the relationship between textual constructs in current procedural terminology (CPT) descriptions and mortality, morbidity, Clavien 4 complications, and surgical-site infections (SSI) within 30 days of surgery. The procedural risk scores produced by the SVM classifiers were validated on data from 2010 in univariate and multivariate analyses. The procedural risk scores produced by the SVM classifiers achieved moderate-to-high levels of discrimination in univariate analyses (area under receiver operating characteristic curve: 0.871 for mortality, 0.789 for morbidity, 0.791 for SSI, 0.845 for Clavien 4 complications). Addition of these scores also substantially improved multivariate models comprising patient factors and previously proposed correlates of procedural risk (net reclassification improvement and integrated discrimination improvement: 0.54 and 0.001 for mortality, 0.46 and 0.011 for morbidity, 0.68 and 0.022 for SSI, 0.44 and 0.001 for Clavien 4 complications; P risk for individual procedures. This information can be measured in an entirely data-driven manner and substantially improves multifactorial models to predict postoperative complications.
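The idea of learning a linear risk score from bag-of-words features of procedure descriptions can be sketched without any external library. The toy "CPT-like" strings and labels below are invented, and a Pegasos-style subgradient solver stands in for whatever SVM implementation the study used:

```python
import numpy as np

# Toy stand-in for CPT procedure descriptions labeled by adverse outcome.
docs = [("open heart bypass graft", 1), ("skin lesion removal", 0),
        ("liver resection major", 1), ("nail trim", 0),
        ("lung lobectomy open", 1), ("ear wax removal", 0)]

vocab = sorted({w for text, _ in docs for w in text.split()})

def bow(text):
    # Bag-of-words vector over the toy vocabulary.
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab.index(w)] += 1.0
    return v

X = np.array([bow(t) for t, _ in docs])
y = np.array([2 * lbl - 1 for _, lbl in docs])   # labels in {-1, +1}

# Pegasos-style subgradient training of a linear SVM (hinge loss + L2).
rng = np.random.default_rng(0)
w, lam = np.zeros(X.shape[1]), 0.01
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)
    margin = y[i] * (X[i] @ w)
    w *= (1 - eta * lam)
    if margin < 1:
        w += eta * y[i] * X[i]

scores = X @ w    # higher score = higher predicted procedural risk
```

The learned scores can then be fed as an extra covariate into a multivariate risk model, which is how the study reports its reclassification improvements.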
Directory of Open Access Journals (Sweden)
Gusfan Halik
2015-01-01
Climate change has significant impacts on precipitation patterns, causing variation in reservoir inflow. Indonesian hydrologists currently predict reservoir inflow according to technical guideline Pd-T-25-2004-A. This guideline does not consider climate variables directly, resulting in significant deviation from observations. This research predicts reservoir inflow using statistical downscaling (SD) of General Circulation Model (GCM) outputs, obtained from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) Reanalysis. A new hybrid SD model, the Wavelet Support Vector Machine (WSVM), is proposed; it combines Multiscale Principal Component Analysis (MSPCA) with nonlinear Support Vector Machine regression. The model was validated at Sutami Reservoir, Indonesia. Training and testing were carried out using data from 1991–2008 and 2008–2012, respectively. The results showed that MSPCA extracts the data better than PCA, and that WSVM generates better reservoir inflow predictions than the technical guideline. Moreover, WSVM was also applied to future reservoir inflow prediction based on GCM ECHAM5 under the SRES A1B scenario.
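The two-stage downscaling pipeline (compress the large predictor field, then regress nonlinearly) can be sketched with synthetic data. Everything here is a stand-in: the "GCM field" is random, plain PCA replaces MSPCA, and RBF kernel ridge regression replaces SVM regression:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for GCM predictor fields (days x grid points) whose
# variability is driven by one latent signal; "inflow" is a nonlinear
# function of that signal (illustrative, not NCEP/NCAR data).
n, p = 200, 30
signal = rng.normal(size=n)
X = np.outer(signal, rng.normal(size=p)) + 0.1 * rng.normal(size=(n, p))
y = np.sin(signal) + 0.05 * rng.normal(size=n)

# Step 1: compress the predictor field with PCA.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[:3].T
z /= z[:, 0].std()                      # scale by the leading component

# Step 2: RBF kernel ridge regression (a stand-in for SVM regression).
def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

tr, te = np.arange(150), np.arange(150, 200)
K = rbf(z[tr], z[tr])
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(tr)), y[tr])
pred = rbf(z[te], z[tr]) @ alpha
corr = np.corrcoef(pred, y[te])[0, 1]
```

MSPCA would additionally decompose each predictor series across wavelet scales before the PCA step, which is what gives the hybrid model its edge in the paper.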
Modeling PM2.5 Urban Pollution Using Machine Learning and Selected Meteorological Parameters
Directory of Open Access Journals (Sweden)
Jan Kleine Deters
2017-01-01
Outdoor air pollution causes millions of premature deaths annually, mostly due to anthropogenic fine particulate matter (PM2.5). Quito, the capital city of Ecuador, is no exception in exceeding healthy pollution levels. In addition to the impact of urbanization, motorization, and rapid population growth, particulate pollution is modulated by meteorological factors and geophysical characteristics, which complicate the implementation of the most advanced weather forecast models. This paper therefore proposes a machine learning approach, based on six years of meteorological and pollution data, to predict PM2.5 concentrations from wind (speed and direction) and precipitation levels. The results of the classification model show high reliability in distinguishing high (>25 µg/m3) and low (<10 µg/m3) from moderate (10–25 µg/m3) concentrations of PM2.5. A regression analysis suggests that PM2.5 is better predicted when the climatic conditions become more extreme (strong winds or high levels of precipitation). The high correlation between estimated and real data in a time-series analysis during the wet season confirms this finding. The study demonstrates that statistical models based on machine learning are relevant for predicting PM2.5 concentrations from meteorological data.
Modelling Water Stress in a Shiraz Vineyard Using Hyperspectral Imaging and Machine Learning
Directory of Open Access Journals (Sweden)
Kyle Loggenberg
2018-01-01
The detection of water stress in vineyards plays an integral role in sustaining high-quality grapes and preventing devastating crop losses. Hyperspectral remote sensing combined with machine learning provides a practical means for modelling vineyard water stress. In this study, we applied two ensemble learners, random forest (RF) and extreme gradient boosting (XGBoost), to discriminate stressed from non-stressed Shiraz vines using terrestrial hyperspectral imaging. Additionally, we evaluated the utility of a spectral subset of wavebands, derived using RF mean decrease in accuracy (MDA) and XGBoost gain. Our results show that both ensemble learners can effectively analyse the hyperspectral data. When using all wavebands (p = 176), RF produced a test accuracy of 83.3% (kappa, KHAT = 0.67), and XGBoost a test accuracy of 80.0% (KHAT = 0.60). Using the subset of wavebands (p = 18) produced slight increases in accuracy, ranging from 1.7% to 5.5%, for both RF and XGBoost. We further investigated the effect of smoothing the spectral data with the Savitzky-Golay filter; the results indicated that the filter reduced model accuracies (by 0.7% to 3.3%). These results demonstrate the feasibility of terrestrial hyperspectral imagery and machine learning for a semi-automated vineyard water stress modelling framework.
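The waveband-subset idea rests on accuracy-based importance scoring: permute one band at a time and measure how much test accuracy drops. A self-contained sketch with invented "hyperspectral" data and a nearest-centroid classifier standing in for the ensemble learners (RF's MDA is the same idea computed over out-of-bag samples across trees):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic samples: 20 wavebands, of which only bands 3 and 12
# respond to the stressed/non-stressed label (invented data).
n, p = 300, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[:, 3] += 2.5 * y
X[:, 12] -= 2.5 * y

tr, te = np.arange(200), np.arange(200, 300)

def centroid_acc(Xtr, ytr, Xte, yte):
    # Nearest-centroid classifier: a lightweight stand-in for RF/XGBoost.
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

base = centroid_acc(X[tr], y[tr], X[te], y[te])

# Permutation importance: shuffle one band at a time (averaged over a
# few permutations) and record the drop in test accuracy.
drops = []
for j in range(p):
    d = 0.0
    for _ in range(5):
        Xp = X[te].copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        d += base - centroid_acc(X[tr], y[tr], Xp, y[te])
    drops.append(d / 5)
```

Ranking `drops` and keeping the top bands mirrors how the study reduced 176 wavebands to 18.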
Directory of Open Access Journals (Sweden)
Peek Andrew S
2007-06-01
Background: RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence through a variety of possible methods and pathways. To dissect the factors that produce effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results: Eight feature mapping methods were compared in their ability to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide-strand-to-passenger-strand sequence thermodynamics. The factors that contribute least, but are still predictive of efficacy, are measures of intramolecular guide strand secondary structure and target strand secondary structure; of these, the site of the 5' most base of the guide strand is the most informative. Conclusion: The capacity of specific feature mapping methods to build predictive models of RNAi activity suggests the relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased agreement between predicted and observed activities, both within datasets by cross-validation and between independently collected RNAi activity datasets. Feature filtering should be approached carefully: it is possible to reduce feature set size without substantially reducing model performance, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid
Microwave modeling of laser plasma interactions. Final report
International Nuclear Information System (INIS)
1983-08-01
For large laser fusion targets and nanosecond pulse lengths, stimulated Brillouin scattering (SBS) and self-focusing are expected to be significant problems. The goal of the contractual effort was to examine certain aspects of these physical phenomena in a wavelength regime (lambda approx. 5 cm) more amenable to detailed diagnostics than that characteristic of laser fusion (lambda approx. 1 micron). The effort was to include the design, fabrication and operation of a suitable experimental apparatus. In addition, collaboration continued with Dr. Neville Luhmann and his associates at UCLA, and with Dr. Curt Randall of LLNL, on analysis and modelling of the UCLA experiments. Design and fabrication of the TRW experiment are described under ''Experiment Design'' and ''Experimental Apparatus''. The design goals for the key elements of the experimental apparatus were met, but final integration and operation of the experiment were not accomplished. Some theoretical considerations on the interaction between stimulated Brillouin scattering and self-focusing are also presented.
Use of models and mockups in verifying man-machine interfaces
International Nuclear Information System (INIS)
Seminara, J.L.
1985-01-01
The objective of human factors engineering is to tailor the design of facilities and equipment systems to match the capabilities and limitations of the personnel who will operate and maintain the system. This optimization of the man-machine interface is undertaken to enhance the prospects for safe, reliable, timely, and error-free human performance in meeting system objectives. To ensure the eventual success of a complex man-machine system, it is important to systematically and progressively test and verify the adequacy of man-machine interfaces from initial design concepts to system operation. Human factors specialists employ a variety of methods to evaluate the quality of the human-system interface, including: (1) reviews of two-dimensional drawings using appropriately scaled transparent overlays of personnel spanning the anthropometric range, considering clothing and protective gear encumbrances; (2) use of articulated, scaled, plastic templates or manikins overlaid on equipment or facility drawings; (3) development of computerized manikins in computer-aided design approaches; (4) use of three-dimensional scale models to better conceptualize work stations, control rooms or maintenance facilities; (5) full- or half-scale mockups of system components to evaluate operator/maintainer interfaces; (6) part- or full-task dynamic simulation of operator or maintainer tasks and interactive system responses; and (7) laboratory and field research to establish human performance capabilities with alternative system design concepts or configurations. Of these design verification methods, this paper considers only the use of models and mockups in the design process.
Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang
2018-06-01
Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanisms and the conservation of matter in the sintering process. Because the process is simplified in the mechanism models, they cannot describe high nonlinearity, and errors are therefore inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
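The ELM itself is compact enough to sketch in full: input weights and biases are drawn at random and left fixed, and only the output layer is trained, by linear least squares. The regression target below is an invented stand-in for a sinter-quality index, not plant data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression target standing in for a sinter-quality index.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Extreme learning machine: random hidden layer, least-squares output layer.
W_in = rng.normal(size=(2, 50))        # fixed random input weights
b = rng.normal(size=50)                # fixed random biases
H = np.tanh(X @ W_in + b)              # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # the only trained layer

pred = H @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because training reduces to one least-squares solve, ELMs are fast to refit, which is what makes them attractive for the time-weighted error-compensation scheme described above (and for the reduced-order-modeling closure in the San and Maulik entry below).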
Estimation of the applicability domain of kernel-based machine learning models for virtual screening
Directory of Open Access Journals (Sweden)
Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas
2010-03-01
Background: The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results: We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemical space in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set for the model to be applied successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if the half of the molecules with the lowest applicability scores is omitted from the screening.
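One simple kernel-based applicability score is the maximum kernel similarity of a query compound to any training compound; the paper's formulations are more refined, but the sketch below illustrates the thresholding mechanics on invented descriptor vectors:

```python
import numpy as np

rng = np.random.default_rng(5)

# Training compounds as descriptor vectors; the screening set is half
# in-domain (similar distribution) and half out-of-domain (shifted).
train = rng.normal(0, 1, size=(100, 8))
screen_in = rng.normal(0, 1, size=(50, 8))
screen_out = rng.normal(6, 1, size=(50, 8))
screen = np.vstack([screen_in, screen_out])

def rbf_kernel(a, b, gamma=0.1):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Applicability score: maximum kernel similarity to any training compound.
# Low scores flag structures too dissimilar for a reliable prediction.
scores = rbf_kernel(screen, train).max(axis=1)
in_domain = scores > 0.05
```

Discarding the low-score half of a screening set before ranking is exactly the filtering step whose benefit the study quantifies.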
Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu
2016-01-01
This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functionality is affected by abnormal changes in temperature and vibration values, resulting in machine failures and breakdowns, poor product quality, inability to meet customer demand, and poor inventory control, among other problems. The work entails the use of temperature and vibration sensors as monitor...
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.
Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E
2016-10-26
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior remains an open research question. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB as a function of operating conditions such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of the GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
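The optimization half of the pipeline (search the predictor space of a fitted response model for the maximum output) can be sketched with simulated annealing. The "fitted model" below is an invented unimodal surface with a peak near 37 °C and pH 7; the values are illustrative, not from the paper:

```python
import math
import random

random.seed(2)

def fitted_response(temp_c, ph):
    # Stand-in for a learned GOB response surface with a peak near
    # 37 degrees C and pH 7 (illustrative values, not from the paper).
    return math.exp(-((temp_c - 37.0) / 10.0) ** 2 - ((ph - 7.0) / 1.5) ** 2)

def anneal(x0=(20.0, 5.0), t0=1.0, cooling=0.995, iters=3000):
    # Simulated annealing over the predictor space to maximize the
    # modeled output signal; keeps the best point ever visited.
    cur, cur_val = x0, fitted_response(*x0)
    best, best_val, t = cur, cur_val, t0
    for _ in range(iters):
        cand = (cur[0] + random.gauss(0, 1.0), cur[1] + random.gauss(0, 0.2))
        val = fitted_response(*cand)
        if val > cur_val or random.random() < math.exp((val - cur_val) / t):
            cur, cur_val = cand, val        # Metropolis acceptance rule
        if val > best_val:
            best, best_val = cand, val
        t *= cooling
    return best, best_val

(best_temp, best_ph), best_val = anneal()
```

In the study, the surface being searched is the trained regression model itself, and a genetic algorithm is run as an alternative global optimizer.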
Eliseev, A. V.; Sitov, I. S.; Eliseev, S. V.
2018-03-01
The article develops a methodological basis for constructing mathematical models of vibratory technological machines. An approach is proposed that makes it possible to place a vibration table in a specific mode providing conditions for dynamic damping of oscillations in the zone where the vibration exciter is mounted, while maintaining the specified vibration parameters in the working zone of the vibration table. The aim of the work is to develop methods of mathematical modeling oriented to technological processes with long cycles. Technologies of structural mathematical modeling are used, with structural schemes, transfer functions and amplitude-frequency characteristics. The concept of the work is to test the possibility of combining reduced loads on the working components of the vibration exciter with sufficiently wide limits on varying the parameters of the vibrational field.
Extreme learning machine for reduced order modeling of turbulent geophysical flows
San, Omer; Maulik, Romit
2018-04-01
We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique
Directory of Open Access Journals (Sweden)
J. Belhadj
2006-03-01
In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. They span multiple domains and multiple time scales and present very strong internal and external couplings, with nonlinearity, characterized by a high model order. The classical study with an analytic model is difficult to manipulate and is limited to certain performances. In this study, a “systemic approach” to designing these kinds of systems is presented, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out by Bond Graph, and the results are discussed to show the performance of the methodology.
Traditional machining processes research advances
2015-01-01
This book collects several examples of research in machining processes. Chapter 1 provides information on polycrystalline diamond tool material and its emerging applications. Chapter 2 is dedicated to the analysis of orthogonal cutting experiments using diamond-coated tools with force and temperature measurements. Chapter 3 describes the estimation of cutting forces and tool wear using modified mechanistic models in high performance turning. Chapter 4 contains information on cutting under gas shields for industrial applications. Chapter 5 is dedicated to the machinability of magnesium and its alloys. Chapter 6 provides information on grinding science. Finally, chapter 7 is dedicated to flexible integration of shape and functional modelling of machine tool spindles in a design framework.
Eroglu, Duygu Yilmaz; Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously, with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that accomplish the adaptation of local search results into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
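The random-key chromosome idea mentioned above can be sketched as follows. This is a hypothetical decoding scheme for illustration only (GAspLA's actual encoding may differ): each job receives one key in [0, n_machines), whose integer part selects a machine and whose fractional part orders the jobs assigned to that machine.

```python
# Minimal random-key decoding for a parallel-machine schedule (illustrative only).
def decode(keys, n_machines):
    schedule = {m: [] for m in range(n_machines)}
    for job, key in enumerate(keys):
        machine = int(key)                      # integer part -> machine assignment
        schedule[machine].append((key - machine, job))
    # fractional part -> processing order on each machine
    return {m: [job for _, job in sorted(pairs)] for m, pairs in schedule.items()}

keys = [0.42, 1.87, 0.11, 1.23, 0.99]           # one key per job, 2 machines
schedule = decode(keys, 2)                      # {0: [2, 0, 4], 1: [3, 1]}
```

A genetic algorithm then only has to mutate and recombine the flat key vector; every key vector decodes to a feasible schedule, which is the property that makes random keys attractive.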
Machine Learning Techniques for Modelling Short Term Land-Use Change
Directory of Open Access Journals (Sweden)
Mileva Samardžić-Petrović
2017-11-01
Full Text Available The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
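The information gain ranking technique mentioned above follows directly from its definition: an attribute's gain is the label entropy minus the entropy remaining after partitioning on that attribute. A minimal sketch on invented LUC-style data (the attribute names and labels are illustrative, not from the study):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """Entropy of the labels minus the expected entropy after splitting."""
    n = len(labels)
    by_value = {}
    for a, y in zip(attribute, labels):
        by_value.setdefault(a, []).append(y)
    remainder = sum(len(part) / n * entropy(part) for part in by_value.values())
    return entropy(labels) - remainder

# Toy data: two candidate attributes for a binary "cell changed" label
changed  = [1, 1, 0, 0, 1, 0]
slope    = ["low", "low", "high", "high", "low", "high"]   # perfectly informative here
distance = ["near", "far", "near", "far", "near", "far"]   # uninformative here

attrs = {"slope": slope, "distance": distance}
ranking = sorted(attrs, key=lambda name: information_gain(attrs[name], changed),
                 reverse=True)   # -> ["slope", "distance"]
```

Ranking all candidate attributes this way and dropping the weakest is the usual first step before recursive attribute elimination.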
Modeling a ground-coupled heat pump system by a support vector machine
Energy Technology Data Exchange (ETDEWEB)
Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)
2008-08-15
This paper reports on a modeling study of ground-coupled heat pump (GCHP) system performance (COP) using a support vector machine (SVM) method. A GCHP system is a multi-variable system that is hard to model by conventional methods. As regards the SVM, it has a superior capability for generalization, and this capability is independent of the dimensionality of the input data. In this study, an SVM-based method was adopted to model the GCHP system efficiently. The Lin-kernel SVM method was quite efficient for modeling purposes and did not require prior knowledge about the system. The performance of the proposed methodology was evaluated by using several statistical validation parameters. It is found that the root-mean-squared (RMS) value is 0.002722, the coefficient of multiple determination (R^2) value is 0.999999, the coefficient of variation (cov) value is 0.077295, and the mean error function (MEF) value is 0.507437 for the proposed Lin-kernel SVM method. The optimum parameters of the SVM method were determined by using a greedy search algorithm. This search algorithm was effective for obtaining the optimum parameters. The simulation results show that the SVM is a good method for predicting the COP of the GCHP system. The computation of the SVM model is faster than that of other machine learning techniques (artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS)) because there are fewer free parameters and only the support vectors (a fraction of all data) are used in the generalization process. (author)
Limits, modeling and design of high-speed permanent magnet machines
Borisavljevic, A.
2011-01-01
There is a growing number of applications that require fast-rotating machines; motivation for this thesis comes from a project in which downsized spindles for micro-machining have been researched (TU Delft Microfactory project). The thesis focuses on analysis and design of high-speed PM machines and
Large-scale ligand-based predictive modelling using support vector machines.
Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola
2016-01-01
The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
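The scaling argument above — linear SVM implementations reach sizes where kernel methods become infeasible — can be illustrated with a simpler stand-in (ordinary linear least squares versus RBF kernel ridge regression, not LIBLINEAR or libsvm themselves): the linear model only solves a d x d system, while the kernel method must build and solve an n x n Gram system that grows quadratically with the number of training structures.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

# Linear model: d x d normal equations -- cost grows with d, not with n.
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(d), X.T @ y)
lin_mse = float(np.mean((X @ w - y) ** 2))

# RBF kernel ridge: needs the full n x n Gram matrix -- cost and memory grow with n^2.
gamma, lam = 0.05, 1e-3
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)
alpha = np.linalg.solve(K + lam * np.eye(n), y)
rbf_mse = float(np.mean((K @ alpha - y) ** 2))
```

At n = 500 both are instant; at n = 1.2 million the Gram matrix alone would need terabytes, which is the regime where a linear solver such as LIBLINEAR remains practical.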
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which makes it possible to perform LES runs very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES runs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
Directory of Open Access Journals (Sweden)
Xianhong Li
2013-01-01
Full Text Available A general nonlinear time-varying (NLTV) dynamic model and a linear time-varying (LTV) dynamic model are presented for the shield tunnel boring machine (TBM) cutterhead driving system. Different gear backlashes, mesh damping, and transmission errors are considered in the NLTV dynamic model. The corresponding multiple-input and multiple-output (MIMO) state space models are also presented. Through analysis of the linear dynamic model, the optimal reducer ratio (ORR) and optimal transmission ratio (OTR) are obtained for the shield TBM cutterhead driving system. The NLTV and LTV dynamic models are numerically simulated, and the effects of physical parameters under various conditions of the NLTV dynamic model are analyzed. Physical parameters such as the load torque, gear backlash and transmission error, gear mesh stiffness and damping, pinion inertia and damping, large gear inertia and damping, and motor rotor inertia and damping are investigated in detail to analyze their effects on the dynamic response and performance of the shield TBM cutterhead driving system. Some preliminary approaches are proposed to improve the dynamic performance of the cutterhead driving system, and the dynamic models will provide a foundation for the shield TBM cutterhead driving system's cutterhead fault diagnosis, motion control, and torque synchronous control.
Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.
2016-11-01
With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
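The Extra-Trees step above — mapping local environmental characteristics to calibrated model parameters — can be caricatured with a tiny ensemble of extremely randomized stumps (random feature, random threshold, leaf means). This is a toy stand-in for a full Extra-Trees regressor; the two "environmental features" and the rs,min-like target are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_random_stump(X, y):
    """Extra-Trees-style randomness: random feature and random threshold."""
    f = int(rng.integers(X.shape[1]))
    t = rng.uniform(X[:, f].min(), X[:, f].max())
    left = X[:, f] <= t
    l_mean = y[left].mean() if left.any() else y.mean()
    r_mean = y[~left].mean() if (~left).any() else y.mean()
    return f, t, l_mean, r_mean

def stump_predict(stump, X):
    f, t, l_mean, r_mean = stump
    return np.where(X[:, f] <= t, l_mean, r_mean)

# Toy "sites": two environmental features -> one calibrated parameter
X = rng.uniform(size=(300, 2))
y = 40.0 + 100.0 * (X[:, 0] > 0.5)      # hypothetical rs_min-like parameter

stumps = [fit_random_stump(X, y) for _ in range(200)]
y_hat = np.mean([stump_predict(s, X) for s in stumps], axis=0)
mse = float(np.mean((y - y_hat) ** 2))   # well below predicting the global mean
```

Averaging many such weak, randomized learners is what lets the fitted model generalize the calibrated parameter sets to unmonitored 5 km grid cells.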
International Nuclear Information System (INIS)
Combescure, D.; Lazarus, A.
2008-01-01
This paper is aimed at presenting refined finite element modelling used for dynamic analysis of large rotating machines. The first part shows an equivalence between several levels of modelling: firstly, models made of beam elements and rigid disc with gyroscopic coupling representing the position of the rotating shaft in an inertial frame; secondly full three-dimensional (3D) or 3D shell models of the rotor and the blades represented in the rotating frame and finally two-dimensional (2D) Fourier model for both rotor and stator. Simple cases are studied to better understand the results given by analysis performed using a rotating frame and the equivalence with the standard calculations with beam elements. Complete analysis of rotating machines can be performed with models in the frames best adapted for each part of the structure. The effects of several defects are analysed and compared with this approach. In the last part of the paper, the modelling approach is applied to the analysis of the large rotating shaft part of the power conversion unit of the GT-MHR nuclear reactor. (authors)
EXPERIMENTS AND COMPUTATIONAL MODELING OF PULVERIZED-COAL IGNITION; FINAL
International Nuclear Information System (INIS)
Samuel Owusu-Ofori; John C. Chen
1999-01-01
Under typical conditions of pulverized-coal combustion, which is characterized by fine particles heated at very high rates, there is currently a lack of certainty regarding the ignition mechanism of bituminous and lower-rank coals as well as the ignition rate of reaction. Furthermore, there have been no previous studies aimed at examining these factors under various experimental conditions, such as particle size, oxygen concentration, and heating rate. Finally, there is a need to improve current mathematical models of ignition to realistically and accurately depict the particle-to-particle variations that exist within a coal sample. Such a model is needed to extract useful reaction parameters from ignition studies, and to interpret ignition data in a more meaningful way. The authors propose to examine fundamental aspects of coal ignition through (1) experiments to determine the ignition temperature of various coals by direct measurement, and (2) modeling of the ignition process to derive rate constants and to provide a more insightful interpretation of data from ignition experiments. The authors propose to use a novel laser-based ignition experiment to achieve their first objective. Laser-ignition experiments offer the distinct advantage of easy optical access to the particles because of the absence of a furnace or radiating walls, and thus permit direct observation and particle temperature measurement. The ignition temperature of different coals under various experimental conditions can therefore be easily determined by direct measurement using two-color pyrometry. The ignition rate constants, when the ignition occurs heterogeneously, and the particle heating rates will both be determined from analyses based on these measurements.
Modeling x-ray data for the Saturn z-pinch machine
International Nuclear Information System (INIS)
Matuska, W.; Peterson, D.; Deeney, C.; Derzon, M.
1997-01-01
A wealth of XRD and time-dependent x-ray imaging data exist for the Saturn z-pinch machine, where the load is either a tungsten wire array or a tungsten wire array which implodes onto a SiO2 foam. These pinches have also been modeled with a 2-D RMHD Eulerian computer code. In this paper the authors start with the 2-D Eulerian results to calculate time- and spatially-dependent spectra using both LTE and NLTE models. Then, using response functions, these spectra are converted to XRD currents and camera images, which are quantitatively compared with the data. Through these comparisons, areas of good and poorer agreement are determined, and areas are identified where the 2-D Eulerian code should be improved.
Using fuzzy models in machining control system and assessment of sustainability
Grinek, A. V.; Boychuk, I. P.; Dantsevich, I. M.
2018-03-01
A fuzzy model describing the complex relationship between the optimum velocity and the temperature-strength state in the cutting zone during machining is proposed. The fuzzy-logical inference makes it possible to determine the processing speed that ensures, from the standpoint of surface layer quality, an effective temperature in the cutting zone and a maximum allowable cutting force. A scheme for stabilizing the temperature-strength state in the cutting zone using a nonlinear fuzzy PD-controller is proposed. The stability of the nonlinear system is estimated by means of a grapho-analytical realization of the harmonic balance method and by modeling in MATLAB.
Directory of Open Access Journals (Sweden)
Zhijian Liu
2017-07-01
Full Text Available Indoor airborne culturable bacteria are sometimes harmful to human health. Therefore, a quick estimation of their concentration is particularly necessary. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to provide a quick estimation of the concentration of indoor airborne culturable bacteria with only the inputs of several measurable indoor environmental indicators, including indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO2 concentration. Our results show that a general regression neural network (GRNN) model can provide a quick and decent estimation based on model training and testing using an experimental database with 249 data groups.
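A GRNN is essentially a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator), so it can be sketched in a few lines. The indicator features and "bacteria level" target below are synthetic stand-ins, not the paper's 249-group database, and the check is an in-sample smoke test only.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network: Gaussian-weighted average of targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(3)
# Hypothetical standardized indicators: [PM2.5, PM10, temperature, RH, CO2]
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=100)  # toy "bacteria" level

y_hat = grnn_predict(X, y, X, sigma=0.5)
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

The single smoothing parameter sigma is the only quantity to tune, which is why GRNNs are attractive when a quick estimator must be built from a small measured database.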
Mathematically modelling the power requirement for a vertical shaft mowing machine
Directory of Open Access Journals (Sweden)
Jorge Simón Pérez de Corcho Fuentes
2008-09-01
Full Text Available This work describes a mathematical model for determining the power demand of a vertical shaft mowing machine, particularly taking into account the influence of speed on cutting power, which is different from that of other models of mowers. The influence of the apparatus' rotation and translation speeds was simulated in determining power demand. The results showed that no changes in cutting power were produced by varying the knives' angular speed (if translation speed was constant), while cutting power increased if translation speed was increased. Variations in angular speed, however, influenced other parameters determining total power demand. Determining this vertical shaft mower's cutting pattern led to obtaining good crop stubble quality at the mower's lower rotation speed, hence reducing total energy requirements.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Differently from existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
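The core of a weighted multiple-kernel ELM — a convex combination of base kernels plugged into the kernel-ELM solve beta = (I/C + K)^{-1} y — can be sketched as follows. Here the mixing weights mu are fixed by hand rather than optimized by QPSO, and the two-class "e-nose" data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(A, B, degree=2, c=1.0):
    return (A @ B.T + c) ** degree

# Synthetic two-class data in 2-D (ring-shaped decision boundary)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 2.0, 1.0, -1.0)

# Weighted composite kernel; in the paper the weights mu are tuned by QPSO
mu = (0.7, 0.3)
K = mu[0] * gaussian_kernel(X, X) + mu[1] * poly_kernel(X, X)

# Kernel ELM output weights: beta = (I/C + K)^{-1} y
C = 10.0
beta = np.linalg.solve(np.eye(len(X)) / C + K, y)

pred = np.sign(K @ beta)
accuracy = float(np.mean(pred == y))
```

Because a weighted sum of positive semi-definite kernels is again a valid kernel, the same closed-form KELM solve applies regardless of how many base kernels are mixed.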
Directory of Open Access Journals (Sweden)
Ladeesh V. G.
2017-01-01
Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique, which combines the grinding action of an abrasive tool and the thermal effects of electrochemical discharges to remove material from the workpiece for producing complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and develops a mathematical model for the surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency, and tool feed rate are identified as the significant factors controlling the surface roughness of the channels produced by G-ECDM. A mathematical model was developed for surface roughness by considering the grinding action and the thermal effects of electrochemical discharges in material removal. Experiments are conducted to validate the model, and the results obtained are in good agreement with those predicted by the model.
Directory of Open Access Journals (Sweden)
Subburaj Ramasamy
2017-01-01
Full Text Available Reliability is one of the quantifiable software quality attributes. Software Reliability Growth Models (SRGMs) are used to assess the reliability achieved at different times of testing. Traditional time-based SRGMs may not be accurate enough in all situations where test effort varies with time. To overcome this lacuna, test effort was used instead of time in SRGMs. In the past, finite test effort functions were proposed, which may not be realistic as, at infinite testing time, test effort will be infinite. Hence in this paper, we propose an infinite test effort function in conjunction with a classical Nonhomogeneous Poisson Process (NHPP) model. We use an Artificial Neural Network (ANN) for training the proposed model with software failure data. Here it is possible to get a large set of weights for the same model that describe the past failure data equally well. We use a machine learning approach to select the appropriate set of weights for the model, which will describe both the past and the future data well. We compare the performance of the proposed model with existing models using practical software failure data sets. The proposed log-power TEF-based SRGM describes all types of failure data equally well, improves the accuracy of parameter estimation more than existing TEFs, and can be used for software release time determination as well.
Unsteady aerodynamic modeling at high angles of attack using support vector machines
Directory of Open Access Journals (Sweden)
Wang Qing
2015-06-01
Full Text Available Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack presents great difficulties in model structure determination and parameter estimation due to limited understanding of the flow mechanism. Support vector machines (SVMs), based on statistical learning theory, provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to the field of high angle-of-attack unsteady aerodynamic modeling. After a review of SVMs, several issues associated with unsteady aerodynamic modeling by use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables, and the determination of SVM parameters. Least squares SVM (LS-SVM) models are set up from certain dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, which indicates the satisfying learning and generalization performance of LS-SVMs.
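LS-SVM regression, as used above, replaces the standard SVM quadratic program with a single linear system in the bias b and the dual variables alpha. A minimal sketch on synthetic data (the two inputs stand in for quantities like angle of attack and pitch rate; this is not the paper's wind-tunnel data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic response depending nonlinearly on two inputs
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = np.sin(2.0 * X[:, 0]) + 0.3 * X[:, 1] ** 2

def rbf(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# LS-SVM dual problem reduces to one linear system:
#   [ 0        1^T       ] [  b  ]   [ 0 ]
#   [ 1   K + I/gamma_reg] [alpha] = [ y ]
gamma_reg = 100.0
n = len(X)
K = rbf(X, X)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma_reg
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

y_hat = K @ alpha + b
mse = float(np.mean((y - y_hat) ** 2))
```

The equality constraints that define LS-SVM make every training point a support vector, which trades the sparsity of a classical SVM for a direct linear-algebra solve.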
Support vector machine-based open crop model (SBOCM): Case of rice production in China
Directory of Open Access Journals (Sweden)
Ying-xue Su
2017-03-01
Full Text Available Existing crop models produce unsatisfactory simulation results and are operationally complicated. The present study, however, demonstrated the unique advantages of statistical crop models for large-scale simulation. Using rice as the research crop, a support vector machine-based open crop model (SBOCM) was developed by integrating developmental stage and yield prediction models. Basic geographical information obtained by surface weather observation stations in China and the 1:1000000 soil database published by the Chinese Academy of Sciences were used. Based on the principle of scale compatibility of modeling data, an open reading frame was designed for the dynamic daily input of meteorological data and output of rice development and yield records. This was used to generate rice developmental stage and yield prediction models, which were integrated into the SBOCM system. The parameters, methods, error resources, and other factors were analyzed. Although not a crop physiology simulation model, the proposed SBOCM can be used for perennial simulation and one-year rice predictions within certain scale ranges. It is convenient for data acquisition, regionally applicable, parametrically simple, and effective for multi-scale factor integration. It has the potential for future integration with extensive social and economic factors to improve the prediction accuracy and practicability.
Directory of Open Access Journals (Sweden)
Rachid Darnag
2017-02-01
Full Text Available Support vector machines (SVMs) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; also, the results reveal the superiority of the SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationships was evaluated.
Model design and simulation of automatic sorting machine using proximity sensor
Directory of Open Access Journals (Sweden)
Bankole I. Oladapo
2016-09-01
Full Text Available The automatic sorting system has been reported to be complex and a global problem. This is because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated sorting system based on a conveyor belt. The developed automated sorting machine is able to incorporate flexibility and separate classes of non-ferrous metal objects, and at the same time move objects automatically to the basket as defined by the regulation of the Programmable Logic Controller (PLC), with a capacitive proximity sensor to detect a value range of objects. The results obtained show that plastic, wood, and steel were sorted into their respective and correct positions with average sorting times of 9.903 s, 14.072 s and 18.648 s respectively. The proposed developed model of this research could be adopted at any institution or industry whose practices are based on mechatronics engineering systems. This is to guide the industrial sector in the sorting of objects and to serve as a teaching aid to institutions, and hence produce the list of classified materials according to the enabled sorting program commands.
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention, within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License (https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva).
DEFF Research Database (Denmark)
Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.
1997-01-01
Turing showed the existence of a model universal for the set of Turing machines in the sense that given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...
Directory of Open Access Journals (Sweden)
Eleonora Carletti
2016-11-01
Full Text Available It is well-known that the reduction of noise levels is not strictly linked to the reduction of noise annoyance. Even earthmoving machine manufacturers are facing the problem of customer complaints concerning the noise quality of their machines with increasing frequency. Unfortunately, all the studies geared to the understanding of the relationship between multidimensional characteristics of noise signals and the auditory perception of annoyance require repeated sessions of jury listening tests, which are time-consuming. In this respect, an annoyance prediction model was developed for compact loaders to assess the annoyance sensation perceived by operators at their workplaces without repeating the full sound quality assessment but using objective parameters only. This paper aims at verifying the feasibility of the developed annoyance prediction model when applied to other kinds of earthmoving machines. For this purpose, an experimental investigation was performed on five earthmoving machines, different in type, dimension, and engine mechanical power, and the annoyance predicted by the numerical model was compared to the annoyance given by subjective listening tests. The results were evaluated by means of the squared value of the correlation coefficient, R2, and they confirm the possible applicability of the model to other kinds of machines.
Brillante, Luca; Mathieu, Olivier; Lévêque, Jean; Bois, Benjamin
2016-01-01
In a climate change scenario, successful modeling of the relationships between plant, soil, and meteorology is crucial for sustainable agricultural production, especially for perennial crops. Grapevines (Vitis vinifera L. cv Chardonnay) located in eight experimental plots (Burgundy, France) along a hillslope were monitored weekly for 3 years for leaf water potentials, both at predawn (Ψpd) and at midday (Ψstem). The water stress experienced by grapevine was modeled as a function of meteorological data (minimum and maximum temperature, rainfall) and soil characteristics (soil texture, gravel content, slope) by a gradient boosting machine. Model performance was assessed by comparison with carbon isotope discrimination (δ13C) of grape sugars at harvest and by the use of a test set. The developed models reached outstanding prediction performance (RMSE < 0.08 MPa for Ψstem and < 0.06 MPa for Ψpd), comparable to measurement accuracy. Model predictions at a daily time step improved the correlation with δ13C data with respect to the observed trend at a weekly time scale. The role of each predictor in these models is described in order to understand how temperature, rainfall, soil texture, gravel content, and slope affect grapevine water status in the studied context. This work proposes a straightforward strategy to simulate plant water stress in field conditions, at a local scale, to investigate ecological relationships in the vineyard, and to adapt cultural practices to future conditions.
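A full gradient boosting machine is beyond a short sketch, but the core idea behind the model used above, an additive ensemble of weak learners each fitted to the current residuals, can be shown with decision stumps. The (temperature, rainfall) → water-potential data below are invented for the sketch, not the study's measurements:

```python
def fit_stump(X, y):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - lm) ** 2 for yi in left)
                   + sum((yi - rm) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    _, j, t, lm, rm = best
    return lambda row: lm if row[j] <= t else rm

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    """Additive model of stumps fitted to residuals (squared loss)."""
    base = sum(y) / len(y)
    stumps = []
    pred = [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(X, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(row) for pi, row in zip(pred, X)]
    return lambda row: base + lr * sum(s(row) for s in stumps)

# Toy analogue: predict a "water potential" (MPa) from (max temp, rainfall)
X = [[25, 0], [30, 0], [35, 5], [28, 20], [33, 2], [22, 15]]
y = [-0.4, -0.7, -0.6, -0.2, -0.65, -0.15]
model = gradient_boost(X, y)
```

Libraries such as those used in the paper add regularization, deeper trees, and shrinkage schedules, but the residual-fitting loop is the same.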
Modeling and Dynamic Analysis of Cutterhead Driving System in Tunnel Boring Machine
Directory of Open Access Journals (Sweden)
Wei Sun
2017-01-01
Failure of the cutterhead driving system (CDS) of a tunnel boring machine (TBM) often occurs under shock and vibration conditions. To investigate the dynamic characteristics and further reduce system vibration, an electromechanical coupling model of the CDS is established which includes a model of the direct torque control (DTC) system for the three-phase asynchronous motor and a purely torsional dynamic model of the multistage gear transmission system. The proposed DTC model provides driving torque just as the practical inverter motor operates, so that the influence of motor operating behavior is not erroneously estimated. Moreover, nonlinear gear meshing factors, such as time-variant mesh stiffness and transmission error, are included in the dynamic model. Based on the established nonlinear model of the CDS, vibration modes can be classified into three types: rigid motion mode, rotational vibration mode, and planet vibration mode. Moreover, dynamic responses under the actual driving torque and an idealized equivalent torque are compared, which reveals that the ripple of the actual driving torque aggravates the vibration of the gear transmission system. An influence index of torque ripple is proposed to show that system vibration increases with torque ripple. This study provides useful guidelines for the antivibration design and motor control of the CDS in TBMs.
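The finding that torque ripple aggravates gear-train vibration can be illustrated with a reduced two-inertia torsional model driven by a rippled torque. All parameter values below are invented for the sketch and do not come from the paper's CDS model:

```python
import math

def simulate(ripple_amp, steps=20000, dt=1e-4):
    """Two-inertia torsional model: motor -> elastic shaft -> cutterhead.
    Returns the peak shaft twist (rad) over the last quarter of the run,
    i.e. after the start-up transient has decayed."""
    J1, J2 = 0.5, 5.0      # motor and cutterhead inertias (kg*m^2), invented
    k, c = 2.0e4, 5.0      # shaft stiffness (N*m/rad) and damping, invented
    T_load = 100.0         # constant cutting load torque (N*m)
    f_ripple = 33.4        # Hz, chosen near this toy system's resonance
    th1 = th2 = w1 = w2 = 0.0
    peak = 0.0
    for i in range(steps):
        t = i * dt
        # mean driving torque plus a ripple component (e.g. from the inverter)
        T_drive = T_load + ripple_amp * math.sin(2 * math.pi * f_ripple * t)
        twist = th1 - th2
        T_shaft = k * twist + c * (w1 - w2)
        # semi-implicit Euler step
        w1 += dt * (T_drive - T_shaft) / J1
        w2 += dt * (T_shaft - T_load) / J2
        th1 += dt * w1
        th2 += dt * w2
        if i > 3 * steps // 4:
            peak = max(peak, abs(twist))
    return peak

smooth = simulate(ripple_amp=0.0)    # idealized, ripple-free driving torque
rippled = simulate(ripple_amp=20.0)  # driving torque with ripple
```

Comparing the two runs reproduces the qualitative conclusion: the rippled torque sustains a much larger steady-state twist oscillation than the idealized equivalent torque.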
Hidden Markov models and other machine learning approaches in computational molecular biology
Energy Technology Data Exchange (ETDEWEB)
Baldi, P. [California Inst. of Tech., Pasadena, CA (United States)
1995-12-31
This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, hidden Markov models are one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of hidden Markov models and how to apply them to problems in molecular biology.
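The forward algorithm, the basic inference step for hidden Markov models of the kind discussed in the tutorial, computes the total probability of an observation sequence by summing over all hidden state paths. The two-state "coding/noncoding" model below is a toy with invented probabilities, not a model from the tutorial:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under an HMM, summed over all hidden state paths."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Toy two-state HMM over a DNA alphabet (all probabilities illustrative)
states = ("coding", "noncoding")
start_p = {"coding": 0.5, "noncoding": 0.5}
trans_p = {"coding": {"coding": 0.9, "noncoding": 0.1},
           "noncoding": {"coding": 0.2, "noncoding": 0.8}}
emit_p = {"coding": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},
          "noncoding": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}
p = forward("CGGA", states, start_p, trans_p, emit_p)
```

Real sequence-analysis HMMs add many states (match/insert/delete) and work in log space to avoid underflow, but the recursion is the same.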
Machine Learning-based discovery of closures for reduced models of dynamical systems
Pan, Shaowu; Duraisamy, Karthik
2017-11-01
Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
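The trapezoidal approximation of the convolution (memory) term mentioned above can be sketched directly. The exponential kernel and constant history below are assumed examples with a known closed-form integral, not the paper's learned closure:

```python
import math

def memory_term(kernel, history, dt):
    """Trapezoidal approximation of the convolution
    integral_0^T K(T - s) u(s) ds over the stored history u(s)."""
    n = len(history)
    total = 0.0
    for i in range(n):
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoidal end weights
        total += w * kernel((n - 1 - i) * dt) * history[i]
    return total * dt

# Check against an integral with a known closed form:
# K(t) = exp(-t), u(s) = 1  =>  integral_0^T exp(-(T - s)) ds = 1 - exp(-T)
dt, T = 0.001, 2.0
hist = [1.0] * (int(T / dt) + 1)
approx = memory_term(lambda t: math.exp(-t), hist, dt)
exact = 1.0 - math.exp(-T)
```

In the closure setting, `history` would hold past values of the resolved state and `kernel` would be the learned memory kernel; the number of stored points is exactly the "temporal length of memory" hyperparameter the abstract mentions.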
Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.
Cong, W L; Pei, Z J; Sun, X; Zhang, C L
2014-02-01
Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model.
Teunter, RH; Haneveld, WKK
1998-01-01
When the service department of a company selling machines stops producing and supplying spare parts for certain machines, customers are offered an opportunity to place a so-called final order for these spare parts. We focus on one customer with one machine. The customer plans to use this machine up
Energy Technology Data Exchange (ETDEWEB)
Kessinger, Glen Frank; Nelson, Lee Orville; Grandy, Jon Drue; Zuck, Larry Douglas; Kong, Peter Chuen Sun; Anderson, Gail
1999-08-01
The purpose of LDRD #2349, Characterize and Model Final Waste Formulations and Offgas Solids from Thermal Treatment Processes, was to develop a set of tools that would allow the user, based on the chemical composition of a waste stream to be immobilized, to predict the durability (leach behavior) of the final waste form and the phase assemblages present in the final waste form. The objectives of the project were:
• investigation, testing, and selection of a thermochemical code
• development of an auxiliary thermochemical database
• synthesis of materials for leach testing
• collection of leach data
• use of leach data for leach model development
• thermochemical modeling
The progress toward completion of these objectives and a discussion of the work that needs to be completed to arrive at a logical finishing point for this project are presented.
Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling
Fayed, Hassan E.; Ragab, Saad
2015-01-01
A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by Zaitchik-2010 model are used for the calculation of rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.
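The first-order rate equation at the heart of the kinetics model above can be sketched directly. The factorization of the rate constant into collision, adhesion, and stabilization probabilities follows the abstract, while all numerical values are invented for illustration:

```python
import math

def rate_constant(collision_freq, p_collision, p_adhesion, p_stabilization):
    """First-order flotation rate constant from the particle-bubble event
    frequency and the probabilities of collision, adhesion, stabilization."""
    return collision_freq * p_collision * p_adhesion * p_stabilization

def recovery(t, k, r_inf=1.0):
    """First-order recovery: R(t) = R_inf * (1 - exp(-k t))."""
    return r_inf * (1.0 - math.exp(-k * t))

# Illustrative numbers only (the real model derives these from the CFD
# dissipation-rate and air-fraction fields):
k = rate_constant(collision_freq=50.0, p_collision=0.1,
                  p_adhesion=0.6, p_stabilization=0.9)
r_at_1s = recovery(1.0, k)
```

In the paper's pipeline, `k` varies spatially with bubble size, particle size, and turbulence; averaging `recovery` over the cell volume gives the figure of merit described in the abstract.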
DEFF Research Database (Denmark)
Li, Qiyuan; Jorgensen, Flemming Steen; Oprea, Tudor
2008-01-01
and diverse library of 495 compounds. The models combine pharmacophore-based GRIND descriptors with a support vector machine (SVM) classifier in order to discriminate between hERG blockers and nonblockers. Our models were applied at different thresholds from 1 to 40 mu m and achieved an overall accuracy up...
Directory of Open Access Journals (Sweden)
Gabere MN
2016-06-01
Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved prediction accuracies of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
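The mRMR step above greedily picks features that are relevant to the class while penalizing redundancy with features already chosen. A minimal sketch, in which absolute Pearson correlation stands in for the mutual information used in the real method and the "gene" data are invented:

```python
def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x) ** 0.5
    syy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sxx * syy) if sxx and syy else 0.0

def mrmr(features, labels, k):
    """Greedy minimum-redundancy maximum-relevance selection.
    `features` maps name -> value list; correlation stands in for the
    mutual information terms to keep the sketch dependency-free."""
    selected = []
    candidates = dict(features)
    while len(selected) < k and candidates:
        def score(name):
            relevance = abs(corr(candidates[name], labels))
            if not selected:
                return relevance
            redundancy = sum(abs(corr(candidates[name], features[s]))
                             for s in selected) / len(selected)
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        del candidates[best]
    return selected

labels = [0, 0, 0, 1, 1, 1]
features = {
    "g1": [1, 2, 1, 8, 9, 8],   # strongly class-correlated
    "g2": [1, 2, 1, 8, 9, 9],   # near-duplicate of g1 (redundant)
    "g3": [5, 1, 4, 2, 6, 3],   # weakly informative but non-redundant
}
picked = mrmr(features, labels, k=2)
```

The redundancy penalty is what keeps the near-duplicate of the first pick out of the selected set, which is the point of mRMR over simple relevance ranking.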
This presentation will outline how data were collected on how chemicals are used in products, how models were built using these data to predict how chemicals can be used in products, and, finally, how combining this information with Tox21 in vitro assays can be used to rapidly scre...
The Machine / Job Features Mechanism
Energy Technology Data Exchange (ETDEWEB)
Alef, M. [KIT, Karlsruhe; Cass, T. [CERN; Keijser, J. J. [NIKHEF, Amsterdam; McNab, A. [Manchester U.; Roiser, S. [CERN; Schwickerath, U. [CERN; Sfiligoi, I. [Fermilab
2017-11-22
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016, which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
International Nuclear Information System (INIS)
Lv, You; Liu, Jizhen; Yang, Tingting; Zeng, Deliang
2013-01-01
Real operation data of power plants tend to be concentrated in some local areas because of the operators’ habits and the control system design. In this paper, a novel least squares support vector machine (LSSVM)-based ensemble learning paradigm is proposed to predict the NOx emission of a coal-fired boiler using real operation data. In view of the plant data characteristics, a soft fuzzy c-means (FCM) clustering algorithm is proposed to decompose the original data and guarantee the diversity of the individual learners. Subsequently, a base LSSVM is trained on each individual subset to solve its subtask. Finally, partial least squares (PLS) is applied as the combination strategy to eliminate the collinear and redundant information of the base learners. Considering that the fuzzy membership also has an effect on the ensemble output, the membership degree is added as one of the variables of the combiner. A single LSSVM and other ensemble models using different decomposition and combination strategies are also established for comparison. The results show that the new soft FCM-LSSVM-PLS ensemble method can predict NOx emission accurately. Besides, because of the divide-and-conquer frame, the total time consumed in searching the parameters and in training also decreases markedly. Highlights:
• A novel LSSVM ensemble model to predict NOx emissions is presented.
• LSSVM is used as the base learner and PLS is employed as the combiner.
• The model is applied to process data from a 660 MW coal-fired boiler.
• The generalization ability of the model is enhanced.
• The time consumed in training and in searching the parameters decreases sharply.
Directory of Open Access Journals (Sweden)
Antonio Blanco-Oliver
2014-10-01
Despite the leading role that micro-entrepreneurship plays in economic development, and the high failure rate of microenterprise start-ups in their early years, very few studies have designed financial distress models to detect the financial problems of micro-entrepreneurs. Moreover, due to a lack of research, nothing is known about whether non-financial information and nonparametric statistical techniques improve the predictive capacity of these models. Therefore, this paper provides an innovative financial distress model specifically designed for microenterprise start-ups via support vector machines (SVMs) that employs financial, non-financial, and macroeconomic variables. Based on a sample of almost 5,500 micro-entrepreneurs from a Peruvian Microfinance Institution (MFI), our findings show that the introduction of non-financial information related to the zone in which the entrepreneurs live and situate their business, the duration of the MFI-entrepreneur relationship, the number of loans granted by the MFI in the last year, the loan destination, and the opinion of experts on the probability that microenterprise start-ups may experience financial problems significantly increases the accuracy performance of our financial distress model. Furthermore, the results reveal that the models that use SVMs outperform those which employ traditional logistic regression (LR) analysis.
Assessing biomass of diverse coastal marsh ecosystems using statistical and machine learning models
Mo, Yu; Kearney, Michael S.; Riter, J. C. Alexis; Zhao, Feng; Tilley, David R.
2018-06-01
The importance and vulnerability of coastal marshes necessitate effective ways to closely monitor them. Optical remote sensing is a powerful tool for this task, yet its application to diverse coastal marsh ecosystems consisting of different marsh types is limited. This study samples spectral and biophysical data from freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops statistical and machine learning models to assess the marshes' biomass with combined ground, airborne, and spaceborne remote sensing data. It is found that linear models derived from NDVI and EVI are most favorable for assessing Leaf Area Index (LAI) using multispectral data (R2 = 0.7 and 0.67, respectively), and the random forest models are most useful in retrieving LAI and Aboveground Green Biomass (AGB) using hyperspectral data (R2 = 0.91 and 0.84, respectively). It is also found that marsh type and plant species significantly impact the linear model development (P biomass of Louisiana's coastal marshes using various optical remote sensing techniques, and highlights the impacts of the marshes' species composition on the model development and the sensors' spatial resolution on biomass mapping, thereby providing useful tools for monitoring the biomass of coastal marshes in Louisiana and diverse coastal marsh ecosystems elsewhere.
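The NDVI-based linear LAI models favored in the study take a simple form: an index computed from red and near-infrared reflectance, fed into a fitted line. A sketch with an invented slope and intercept (not the paper's fitted coefficients):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def lai_linear(ndvi_value, slope=6.0, intercept=-0.5):
    """Linear LAI model of the kind fit in the study; the slope and
    intercept here are invented placeholders, not the paper's values."""
    return slope * ndvi_value + intercept

# Hypothetical marsh pixel reflectances
v = ndvi(nir=0.45, red=0.08)
lai = lai_linear(v)
```

As the abstract notes, such coefficients differ by marsh type and species, which is why separate fits (or the random forest models) were needed across the freshwater-to-saline gradient.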
Unsupervised machine learning account of magnetic transitions in the Hubbard model
Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan
2018-01-01
We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
FACT. Streamed data analysis and online application of machine learning models
Energy Technology Data Exchange (ETDEWEB)
Bruegge, Kai Arno; Buss, Jens [Technische Universitaet Dortmund (Germany). Astroteilchenphysik; Collaboration: FACT-Collaboration
2016-07-01
Imaging Atmospheric Cherenkov Telescopes (IACTs) like FACT produce a continuous flow of data during measurements. Analyzing the data in near real time is essential for monitoring sources. One major task of a monitoring system is to detect changes in the gamma-ray flux of a source, and to alert other experiments if some predefined limit is reached. In order to calculate the flux of an observed source, it is necessary to run an entire data analysis process including calibration, image cleaning, parameterization, signal-background separation and flux estimation. Software built on top of a data streaming framework has been implemented for FACT and generalized to work with the data acquisition framework of the Cherenkov Telescope Array (CTA). We present how the streams-framework is used to apply supervised machine learning models to an online data stream from the telescope.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
International Nuclear Information System (INIS)
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-01-01
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost, accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
Reliability enumeration model for the gear in a multi-functional machine
Nasution, M. K. M.; Ambarita, H.
2018-02-01
The angle and direction of motion play an important role in the ability of a multifunctional machine to perform the task with which it is charged. The movement can be a rotational action, in which the rotation is achieved by connecting the generator to the hand through a hinge formed from two rounded surfaces. The rotation of the entire arm can be carried out through the interconnection between two surfaces having a jagged ring. This link changes with the angle of motion, and each tooth of the serration plays a part in the success of this process; therefore a robust hand measurement model is established based on canonical provisions.
Machine learning based cloud mask algorithm driven by radiative transfer modeling
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
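In miniature, a classifier driven by radiative transfer modeling means training a network on simulated radiances rather than hand-tuned thresholds. The sketch below uses a single logistic unit (the simplest neural network) on synthetic two-channel "radiances"; all values are invented stand-ins, not MODIS data or actual radiative transfer output:

```python
import math, random

def train_logistic(samples, labels, epochs=500, lr=0.5):
    """One logistic unit trained by gradient descent on cross-entropy."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # cross-entropy gradient factor
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def predict(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    return predict

# Synthetic "simulated radiance" training set: in this toy, cloudy pixels
# are brighter in channel 1 and darker in channel 2 (invented statistics)
random.seed(0)
clear = [[random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)] for _ in range(50)]
cloudy = [[random.gauss(0.7, 0.05), random.gauss(0.3, 0.05)] for _ in range(50)]
model = train_logistic(clear + cloudy, [0] * 50 + [1] * 50)
```

The real algorithm uses a multi-layer network and an extensive radiative-transfer-generated training set spanning surface types, which is what gives it the reported robustness over snow and ice.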
Data on Support Vector Machines (SVM) model to forecast photovoltaic power
Directory of Open Access Journals (Sweden)
M. Malvoni
2016-12-01
The data concern the photovoltaic (PV) power forecasted by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled “Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data” (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015) [1]. The quadratic Renyi entropy criterion together with principal component analysis (PCA) is applied to the Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead and for different data reduction sizes are provided in the Supplementary material.
Hornbrook, Mark C; Goshen, Ran; Choman, Eran; O'Keeffe-Rosetti, Maureen; Kinar, Yaron; Liles, Elizabeth G; Rust, Kristal C
2017-10-01
Machine learning tools identify patients with blood counts indicating a greater likelihood of colorectal cancer and warranting colonoscopy referral. Our objective was to validate a machine learning colorectal cancer detection model on a US community-based insured adult population. Eligible colorectal cancer cases (439 females, 461 males) with complete blood counts before diagnosis were identified from Kaiser Permanente Northwest Region's Tumor Registry. Control patients (n = 9108) were randomly selected from KPNW's population who had no cancers, received ≥1 blood count, had continuous enrollment from 180 days prior to the blood count through 24 months after the count, and were aged 40-89. For each control, one blood count was randomly selected as the pseudo-colorectal cancer diagnosis date for matching to cases, and assigned a "calendar year" based on the count date. For each calendar year, 18 controls were randomly selected to match the general enrollment's 10-year age groups and lengths of continuous enrollment. Prediction performance was evaluated by area under the curve, specificity, and odds ratios. The area under the receiver operating characteristic curve for detecting colorectal cancer was 0.80 ± 0.01. At 99% specificity, the odds ratio for association of a high-risk detection score with colorectal cancer was 34.7 (95% CI 28.9-40.4). The detection model had the highest accuracy in identifying right-sided colorectal cancers. ColonFlag® identifies individuals with tenfold higher risk of undiagnosed colorectal cancer at curable stages (0/I/II), flags colorectal tumors 180-360 days prior to usual clinical diagnosis, and is more accurate at identifying right-sided (compared to left-sided) colorectal cancers.
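The reported area under the ROC curve equals the probability that a randomly chosen case outscores a randomly chosen control (the Mann-Whitney formulation), which can be computed directly. The risk scores below are hypothetical, not ColonFlag outputs:

```python
def auc(case_scores, control_scores):
    """AUC = P(random case score > random control score),
    counting ties as one half (Mann-Whitney formulation)."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.75, 0.6]        # hypothetical scores of cancer cases
controls = [0.7, 0.4, 0.3, 0.2, 0.1] # hypothetical scores of controls
a = auc(cases, controls)
```

An AUC of 0.80 as in the study means a case outscores a control 80% of the time; the operating threshold (here, the 99%-specificity cutoff) is then chosen separately to trade sensitivity against false referrals.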
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.
2012-08-01
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It has been concluded that the errors decrease after size reduction and that the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
Numerical modelling of micro-machining of f.c.c. single crystal: Influence of strain gradients
Demiral, Murat
2014-11-01
Micro-machining processes are becoming increasingly important with the continuous miniaturization of components used in various fields, from military to civilian applications. To characterise the underlying micromechanics, a 3D finite-element model of orthogonal micro-machining of f.c.c. single crystal copper was developed. The model was implemented in the commercial software ABAQUS/Explicit employing a user-defined subroutine VUMAT. Strain-gradient crystal-plasticity and conventional crystal-plasticity theories were used to demonstrate the influence of pre-existing and evolved strain gradients on the cutting process for different combinations of crystal orientations and cutting directions. Crown Copyright © 2014.
Directory of Open Access Journals (Sweden)
Kyle A McQuisten
2009-10-01
Full Text Available Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their abilities to induce knockdown. There is some growing agreement on which techniques produce maximally predictive models, and yet there is little consensus on methods to compare among predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs) and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3×5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework for comparing models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are
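The statistical comparison of learning techniques can be illustrated with a one-way ANOVA over per-fold cross-validation scores, a simplification of the paper's full 3×5 factorial design. The dataset, the three model families and the fold count below are illustrative stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for an siRNA dataset (binary label: effective or not)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

scores = {
    "GLM": cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10),
    "ANN": cross_val_score(MLPClassifier(hidden_layer_sizes=(20,),
                                         max_iter=1000, random_state=0),
                           X, y, cv=10),
    "SVM": cross_val_score(SVC(), X, y, cv=10),
}
# One-way ANOVA on per-fold accuracies: does the choice of technique
# explain a significant share of the variance in predictive performance?
stat, p = f_oneway(*scores.values())
print(f"F = {stat:.2f}, p = {p:.3f}")
```

Treating per-fold scores as replicates is the key device that lets standard ANOVA machinery discriminate among modeling approaches.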
Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models
Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus
2017-04-01
Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in simulations of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry plains of Iraq, known as the main origin of recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: the Linear regression Model (LM), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features, including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation and the 9-month SPEI drought index, are selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLAs' predictions. The data set was divided
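The RFE step named above can be sketched with scikit-learn on synthetic data; the base estimator, candidate feature count and data are assumptions, with nine features retained as in the study.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: 15 candidate environmental predictors, 5 informative
X, y = make_regression(n_samples=200, n_features=15, n_informative=5,
                       noise=0.1, random_state=0)

# RFE refits the model and drops the weakest feature each round
# until nine predictors remain, as in the study's feature selection
selector = RFE(LinearRegression(), n_features_to_select=9, step=1).fit(X, y)
kept = np.flatnonzero(selector.support_)
print("kept feature indices:", kept)
```

The retained subset would then feed each of the four MLAs, so every learner works from the same reduced predictor set.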
Pontier, Matthijs
2015-01-01
The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...
Modeling and control of PEMFC based on least squares support vector machines
International Nuclear Information System (INIS)
Li Xi; Cao Guangyi; Zhu Xinjian
2006-01-01
The proton exchange membrane fuel cell (PEMFC) is one of the most important power supplies. The operating temperature of the stack is an important controlled variable, which impacts the performance of the PEMFC. In order to improve the generating performance of the PEMFC, prolong its life and guarantee the safety, credibility and low cost of the PEMFC system, it must be controlled efficiently. A nonlinear predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented in this paper for a family of complex systems with severe nonlinearity, such as the PEMFC. The nonlinear offline model of the PEMFC is built using an LS-SVM model with a radial basis function (RBF) kernel so as to implement nonlinear predictive control of the plant. During PEMFC operation, the offline model is linearized at each sampling instant, and the generalized predictive control (GPC) algorithm is applied to the predictive control of the plant. Experimental results demonstrate the effectiveness and advantages of this approach.
Model of Peatland Vegetation Species using HyMap Image and Machine Learning
Dayuf Jusuf, Muhammad; Danoedoro, Projo; Muljo Sukojo, Bangun; Hartono
2017-12-01
Species Tumih/Parepat (Combretocarpus rotundatus (Miq.) Danser, family Anisophylleaceae) and Meranti (Shorea belangerang, Shorea teysmanniana Dyer ex Brandis, family Dipterocarpaceae) form a group for vegetation species distribution modelling. These pioneer species are predicted to be indicators of succession in the restoration of tropical peatland ecosystems, which are characteristic and extremely fragile (unique) in the endemic hotspot of Sundaland. Climate change projections and conservation planning are hot topics of current discussion, including the analysis of alternative approaches and the development of combinations of species projection modelling algorithms through geospatial information system technology. The modelling approach addresses the vegetation-level research problem with a hybrid machine learning method based on wavelets and artificial neural networks. Field data are used as a reference collection of natural resource field samples and for biodiversity assessment. The ANN testing and training data set, at 28 iterations, achieved an MSE of 0.0867, smaller than that of the ANN training data, above 50%, with a spectral accuracy of 82.1%. Identification of the sample point positions of the Tumih/Parepat vegetation species using the HyMap image is good enough; at the least, the modelled species distribution design can reach the target of this study. A computational validation rate above 90% shows that the calculation can be considered reliable.
DEFF Research Database (Denmark)
Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin
2013-01-01
such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...
Rohrbach, F; Vesztergombi, G
1997-01-01
In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and memory-to-processor interface bandwidth. The IRAM initiative could be the answer, by putting the Processor In Memory (PIM). Starting from the massively parallel processing concept, one reached a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN [kuala] can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.
Luo, Gang
2017-01-01
For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a non-trivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced, potential uses of them, with the goal of inspiring future research on this topic. PMID:29177022
He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong
2015-09-01
In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM): the intractable problem of determining the optimal number of hidden layer neurons, and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are randomly assigned, and the nonlinear transformation of the independent variables is obtained from the output of the hidden layer neurons. In particular, the original input variables are regarded as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which removes the collinearity among the independent variables with respect to the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs can be achieved using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne
2016-04-01
Existing evidence suggests that ambient ultrafine particles (UFPs) may have adverse health effects. We developed a regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development: standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations, whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Ransom, Katherine M.; Nolan, Bernard T.; Traum, Jonathan A.; Faunt, Claudia; Bell, Andrew M.; Gronberg, Jo Ann M.; Wheeler, David C.; Zamora, Celia; Jurgens, Bryant; Schwarz, Gregory E.; Belitz, Kenneth; Eberts, Sandra; Kourakos, George; Harter, Thomas
2017-01-01
Intense demand for water in the Central Valley of California and related increases in groundwater nitrate concentration threaten the sustainability of the groundwater resource. To assess contamination risk in the region, we developed a hybrid, non-linear, machine learning model within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface. A database of 145 predictor variables representing well characteristics, historical and current field- and landscape-scale nitrogen mass balances, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6000 private supply and public supply wells previously measured for nitrate and located throughout the study area. The boosted regression tree (BRT) method was used to screen and rank variables to predict nitrate concentration at the depths of domestic and public well supplies. The novel approach included as predictor variables outputs from existing physically based models of the Central Valley. The top five most important predictor variables included two oxidation/reduction variables (probability of manganese concentration to exceed 50 ppb and probability of dissolved oxygen concentration to be below 0.5 ppm), field-scale adjusted unsaturated zone nitrogen input for the 1975 time period, average difference between precipitation and evapotranspiration during the years 1971–2000, and 1992 total landscape nitrogen input. Twenty-five variables were selected for the final model for log-transformed nitrate. In general, increasing probability of anoxic conditions and increasing precipitation relative to potential evapotranspiration had a corresponding decrease in nitrate concentration predictions. Conversely, increasing 1975 unsaturated zone nitrogen leaching flux and 1992 total landscape nitrogen input had an increasing relative
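The BRT screening-and-ranking step can be sketched with scikit-learn's gradient-boosted trees on synthetic data; the predictor set, sample size and tree count are illustrative, not the study's database.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in: 20 candidate predictors for log-transformed nitrate
X, y = make_regression(n_samples=500, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)

# Boosted regression trees, then rank predictors by relative importance,
# mimicking the screening of 145 variables down to a short list
brt = GradientBoostingRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(brt.feature_importances_)[::-1]
top5 = ranking[:5]
print("top 5 predictors:", top5)
```

The importance scores sum to one, so the ranking directly supports statements like "the top five most important predictor variables" in the abstract.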
Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu
2016-03-01
This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts, whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures and breakdowns, poor product quality, inability to meet customers' demand, and poor inventory control, among others. The work uses temperature and vibration sensors as monitoring probes programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type-K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive machine information transmitted from the workstation), working cooperatively for effective functioning. The resulting hardware was calibrated, tested through model verification, and validated using a least-squares and regression analysis approach on data read from the gearboxes of extruding and cutting machines used for polyethylene bag production. The results confirmed the correlation existing between time, vibration and temperature, reflecting the effective formulation of the developed concept.
Directory of Open Access Journals (Sweden)
Abazar Solgi
2017-06-01
Full Text Available Introduction: Chemical pollution of surface water is one of the serious issues that threaten water quality. This is all the more important when surface waters are used for human drinking supply. One of the key parameters used to measure water pollution is BOD. Because many variables affect the water quality parameters, and a complex nonlinear relationship is established between them, conventional methods cannot solve the problem of quality management of water resources. For years, artificial intelligence methods have been used for the prediction of nonlinear time series, and good performance has been reported. Recently, the wavelet transform, a signal processing method, has shown good performance in hydrological modeling and is widely used. Extensive research has been carried out globally on the use of Artificial Neural Network and Adaptive Neural Fuzzy Inference System models to forecast BOD, but the support vector machine has not yet been extensively studied. For this purpose, this study evaluated the ability of the support vector machine to predict the monthly BOD parameter based on the available data: temperature, river flow, DO and BOD. Materials and Methods: The SVM was introduced in 1992 by Vapnik, a Russian mathematician. The method is built on statistical learning theory, and in recent years its use has received considerable attention. SVMs have been used in applications such as handwriting recognition and face recognition, with good results. The linear SVM is the simplest type of SVM; it consists of a hyperplane that separates the positive and negative data sets with maximum distance. The suitable separator has the maximum distance from each of the two data sets, so for this machine, whose output is a group label (here -1 or +1), the aim is to obtain the maximum distance between the categories; this is interpreted as having a maximum margin. The wavelet transform is one of the methods of mathematical science whose main idea was
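The maximum-margin hyperplane described in the abstract can be illustrated with a small synthetic example; the cluster locations and the large C value (approximating a hard margin) are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Two separable clusters labelled -1/+1, as in the abstract's description
X = np.vstack([rng.normal(-2, 0.5, size=(50, 2)),
               rng.normal(+2, 0.5, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

# A linear SVM places the hyperplane with maximum margin between classes
svm = SVC(kernel="linear", C=1e3).fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]
margin = 2.0 / np.linalg.norm(w)  # geometric width of the separating band
print(f"margin width: {margin:.2f}")
```

The quantity `2/||w||` is exactly the "maximum distance between the categories" the abstract refers to: maximizing the margin is equivalent to minimizing `||w||` subject to correct classification.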
An Overall Perspective of Machine Translation with its Shortcomings
Directory of Open Access Journals (Sweden)
Alireza Akbari
2014-01-01
Full Text Available The demand for language translation has strikingly increased recently due to cross-cultural communication and the exchange of information. In order to communicate well, text should be translated correctly and completely in each field, such as legal documents, technical texts, scientific texts, publicity leaflets, and instructional materials. In this connection, machine translation is of great importance in translation. The term “Machine Translation” was first proposed by George Artsrouni and Smirnov Troyanski (1933), who proposed designs for a translation storage device based on paper tape. This paper sought to investigate an overall perspective of Machine Translation models and their metrics in detail. Finally, it scrutinized the shortcomings of Machine Translation.
Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris
2017-07-01
While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
International Nuclear Information System (INIS)
Ainslie, Mark D; Yuan Weijia; Flack, Timothy J; Coombs, Timothy A; Rodriguez-Zermeno, Victor M; Hong Zhiyong
2011-01-01
AC loss can be a significant problem for any applications that utilize or produce an AC current or magnetic field, such as an electric machine. The authors investigate the electromagnetic properties of high temperature superconductors with a particular focus on the AC loss in superconducting coils made from YBCO coated conductors for use in an all-superconducting electric machine. This paper presents an improved 2D finite element model for the cross-section of such coils, based on the H formulation. The model is used to calculate the transport AC loss of a racetrack-shaped coil using constant and magnetic field-dependent critical current densities, and the inclusion and exclusion of a magnetic substrate, as found in RABiTS (rolling-assisted biaxially textured substrate) YBCO coated conductors. The coil model is based on the superconducting stator coils used in the University of Cambridge EPEC Superconductivity Group's all-superconducting permanent magnet synchronous motor design. To validate the modeling results, the transport AC loss of a stator coil is measured using an electrical method based on inductive compensation by means of a variable mutual inductance. Finally, the implications of the findings on the performance of the motor are discussed.
Modeling workflow to design machine translation applications for public health practice.
Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin
2015-02-01
Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.
Cappi, R; Martini, M; Métral, Elias; Métral, G; Steerenberg, R; Müller, A S
2003-01-01
The CERN Proton Synchrotron machine is built using combined-function magnets. The control of the linear tune as well as the chromaticity in both planes is achieved by means of special coils added to the main magnets, namely two pole-face windings and one figure-of-eight loop. As a result, the overall magnetic field configuration is rather complex, not to mention the saturation effects induced at top energy. For these reasons a linear model of the PS main magnet does not provide sufficient precision to model particle dynamics. On the other hand, a sophisticated optical model is the key element for the foreseen intensity upgrade and, in particular, for the novel extraction mode based on adiabatic capture of beam particles inside stable islands in transverse phase space. A solution was found by performing accurate measurements of the nonlinear tune as a function of both amplitude and momentum offset, so as to extract both the linear and nonlinear properties of the lattice. In this paper the measurement results are present...
Directory of Open Access Journals (Sweden)
Kryukov Igor Yu.
2017-01-01
Full Text Available The present article is devoted to the development of a mathematical model describing the thermal state and crystallization process of a rectangular cross-section blank during continuous extraction from a horizontal continuous casting machine (HCCM). The developed model takes into account the heat-transfer properties of non-ferrous metal teeming; its temperature on entry to the casting mold; and the cooling conditions of the blank in the carbon molds in the presence of a copper water cooler. In addition, the asymmetry of heat exchange between the blank's head and drag at the mold, arising from fluid contraction and the features of the horizontal casting mold, has been considered. The developed mathematical model allows determination of the following characteristics of the crystallizing blank with respect to time: the temperature pattern under different operating regimes of the HCCM; the boundaries of the solid, two-phase and liquid fields; and the variation of the blank's thickness due to shrinkage of the ingot material.
MODEL OF THE QUALITY MANAGEMENT SYSTEM OF A MACHINE TOOL COMPANY
Directory of Open Access Journals (Sweden)
Катерина Вікторівна КОЛЕСНІКОВА
2016-02-01
Full Text Available The development of models and methods that would improve the competitive position of enterprises by improving management processes is an important task of project management. The lack of project management within information technology, and of continuous improvement of methods for managing the environment, interaction, community, value and trust, based on the strategic objectives of enterprises and on models that take into account the relationships of the system, results in significant material and resource costs. In the current work the quality management system of the machine-tool company HC MIKRON® is improved, and it is shown that the introduction of new processes for critical analysis of product requirements, support of products delivered to consumers, and formation of a system of responsibility, division of responsibilities and reporting (according to ISO 9001:2009) is an important, scientifically grounded step towards improving the level of technological maturity and the structural modernization of enterprise management. For the improved structure, an analysis model is built and tested for the property of ergodicity, as a condition of efficiency of the new quality management system.
Modeling the Pan-Arctic terrestrial and atmospheric water cycle. Final report
International Nuclear Information System (INIS)
Gutowski, W.J. Jr.
2001-01-01
This report describes results of DOE grant DE-FG02-96ER61473 to Iowa State University (ISU). Work on this grant was performed at Iowa State University and at the University of New Hampshire in collaboration with Dr. Charles Vorosmarty and fellow scientists at the University of New Hampshire's (UNH's) Institute for the Study of the Earth, Oceans, and Space, a subcontractor to the project. Research performed for the project included development, calibration and validation of a regional climate model for the pan-Arctic, modeling river networks, extensive hydrologic database development, and analyses of the water cycle, based in part on the assembled databases and models. Details appear in publications produced from the grant
Machine listening intelligence
Cella, C. E.
2017-05-01
This manifesto paper will introduce machine listening intelligence, an integrated research framework for acoustic and musical signals modelling, based on signal processing, deep learning and computational musicology.
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
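The offline coupling described above feeds a physics-based model's simulated flows into a machine learning post-processor trained against observations. A minimal sketch of the idea follows, using a closed-form least-squares corrector as a stand-in for the paper's ANN; all flow values are invented, not GBM data.

```python
# Offline-coupling sketch: a post-processor learns to map a physics-based
# model's simulated flows onto observed flows, improving extremes. The paper
# uses an ANN; a least-squares corrector stands in to keep the example minimal.

def fit_corrector(simulated, observed):
    """Fit observed ~ a * simulated + b by ordinary least squares."""
    n = len(simulated)
    mx = sum(simulated) / n
    my = sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in simulated)
    sxy = sum((x - mx) * (y - my) for x, y in zip(simulated, observed))
    a = sxy / sxx
    b = my - a * mx
    return a, b

if __name__ == "__main__":
    sim = [100.0, 400.0, 900.0, 2500.0, 4000.0]   # hypothetical SWAT output (m3/s)
    obs = [1.3 * q + 50.0 for q in sim]           # "observed" peaks run higher
    a, b = fit_corrector(sim, obs)
    corrected_peak = a * 5000.0 + b               # post-processed extreme flow
    print(a, b, corrected_peak)                   # slope ~ 1.3, intercept ~ 50
```

In the paper's setting, the same train-on-residuals idea is applied with an ANN on de-biased daily flows, which can capture the nonlinear errors a linear corrector cannot.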
Directory of Open Access Journals (Sweden)
Man Zhu
2017-03-01
Full Text Available Determination of ship maneuvering models is a tough task of ship maneuverability prediction. Among the prime approaches to estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as the recursive least squares method (RLS), are employed for on-line identification of ship maneuvering models. However, this method depends strongly on the objects of study and on the initial values of the identified parameters. To overcome this, an intelligent technique, i.e., support vector machines (SVM), is first used to estimate initial values of the identified parameters with finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used, respectively, to filter the data corrupted by noise. The choice of the sample number for SVM to decide initial values of the identified parameters is extensively discussed and analyzed. With de-noised motion data as input-output training samples, parameters of ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between identification results and true parameter values demonstrates that both the identified ship maneuvering models from RLS and SVM-RLS are in reasonable agreement with the simulated motions of the ship, and that increasing the sample size for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and best convergence.
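The recursive least squares method at the core of this identification scheme updates its parameter estimates one sample at a time from the prediction error. A self-contained sketch follows; the 2-parameter linear model and the data are hypothetical illustrations, not a ship maneuvering model.

```python
# Recursive least squares (RLS): on-line estimation of theta in y = phi . theta.
# P is the covariance-like matrix, initialised large so early samples dominate;
# the forgetting factor is fixed at 1 (no discounting of old data).

def rls_identify(samples, n=2, p0=1e6):
    """Estimate theta from streaming (phi, y) pairs."""
    theta = [0.0] * n
    P = [[p0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for phi, y in samples:
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
        K = [v / denom for v in Pphi]                        # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(n))   # prediction error
        theta = [theta[i] + K[i] * err for i in range(n)]    # parameter update
        P = [[P[i][j] - K[i] * Pphi[j] for j in range(n)] for i in range(n)]
    return theta

if __name__ == "__main__":
    # Noiseless data generated from true parameters [2.0, -3.0].
    data = [([float(x), float(x * x % 7)], 2.0 * x - 3.0 * (x * x % 7))
            for x in range(1, 20)]
    print([round(v, 3) for v in rls_identify(data)])  # converges toward [2.0, -3.0]
```

The abstract's SVM step addresses exactly the weakness visible here: with a poor initial `theta` or `P`, early RLS estimates can wander, so SVM-derived initial values speed and stabilise convergence.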
Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling
Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.
2016-01-01
The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAMs have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAMs are highly dependent on the training sets and inherently on the genera...
Heddam, Salim; Kisi, Ozgur
2018-04-01
In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T) are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality variables data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The water quality data selected consisted of daily measured of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI cfs), are used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.
Simulation model for man-machine systems in nuclear power plants. Vol. 4
Energy Technology Data Exchange (ETDEWEB)
Falk, A
1981-01-01
At first, volume 4 was to describe the EDP-bound simulation model in the final programme version. This problem was tackled as a development project, however, it could not be solved because of the difficult time schedule and the commissioning of the new computing centre at the Fachhochschule Flensburg. The assignment described in volume 3, chapter 2, already contains the optimization stages planned for 1981 but still uses a special programming language 'Prozess-FORTRAN' for older digital computers. That is why the development of a new version of the programming system in Standard-FORTRAN should be postponed. It should also include a more comprehensive practical data bank for nuclear power plants. The present short report as well as the following more comprehensive final report lay more emphasis on the presentation of a functional range of application of the simulation model.
Gross, Charles A
2006-01-01
BASIC ELECTROMAGNETIC CONCEPTS: Basic Magnetic Concepts; Magnetically Linear Systems: Magnetic Circuits; Voltage, Current, and Magnetic Field Interactions; Magnetic Properties of Materials; Nonlinear Magnetic Circuit Analysis; Permanent Magnets; Superconducting Magnets; The Fundamental Translational EM Machine; The Fundamental Rotational EM Machine; Multiwinding EM Systems; Leakage Flux; The Concept of Ratings in EM Systems; Summary; Problems. TRANSFORMERS: The Ideal n-Winding Transformer; Transformer Ratings and Per-Unit Scaling; The Nonideal Three-Winding Transformer; The Nonideal Two-Winding Transformer; Transformer Efficiency and Voltage Regulation; Practical Considerations; The Autotransformer; Operation of Transformers in Three-Phase Environments; Sequence Circuit Models for Three-Phase Transformer Analysis; Harmonics in Transformers; Summary; Problems. BASIC MECHANICAL CONSIDERATIONS: Some General Perspectives; Efficiency; Load Torque-Speed Characteristics; Mass Polar Moment of Inertia; Gearing; Operating Modes; Translational Systems; A Comprehensive Example: The Elevator; P...
Distress modeling for DARWin-ME : final report.
2013-12-01
Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...
Energy Technology Data Exchange (ETDEWEB)
Feidt, M. [Universite Henri Poincare - Nancy-1, 54 - Nancy (France)
2003-10-01
The machines presented in this article are not the common reverse cycle machines. They use some systems based on different physical principles which have some consequences on the analysis of cycles: 1 - permanent gas machines (thermal separators, pulse gas tube, thermal-acoustic machines); 2 - phase change machines (mechanical vapor compression machines, absorption machines, ejection machines, adsorption machines); 3 - thermoelectric machines (thermoelectric effects, thermodynamic model of a thermoelectric machine). (J.S.)
Machine Shop Grinding Machines.
Dunn, James
This curriculum manual is one in a series of machine shop curriculum manuals intended for use in full-time secondary and postsecondary classes, as well as part-time adult classes. The curriculum can also be adapted to open-entry, open-exit programs. Its purpose is to equip students with basic knowledge and skills that will enable them to enter the…
Numerical modelling of micro-machining of f.c.c. single crystal: Influence of strain gradients
Demiral, Murat; Roy, Anish; El Sayed, Tamer S.; Silberschmidt, Vadim V.
2014-01-01
A model of orthogonal micro-machining of f.c.c. single crystal copper was developed. The model was implemented in the commercial software ABAQUS/Explicit employing a user-defined subroutine VUMAT. Strain-gradient crystal-plasticity and conventional crystal...
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
DEFF Research Database (Denmark)
Kjær, Lene Jung; Korslund, L.; Kjelland, V.
…30 sites (forests and meadows) in each of Denmark, southern Norway and south-eastern Sweden. At each site we measured presence/absence of ticks, and used the data obtained along with environmental satellite images to run Boosted Regression Tree machine learning algorithms to predict overall spatial… and Sweden), areas with high population densities tend to overlap with these zones. Machine learning techniques allow us to predict for larger areas without having to perform extensive sampling all over the region in question, and we were able to produce models and maps with high predictive value. The results…
International Nuclear Information System (INIS)
Zhang, Hao H.; D'Souza, Warren D.; Shi Leyuan; Meyer, Robert R.
2009-01-01
Purpose: To predict organ-at-risk (OAR) complications as a function of dose-volume (DV) constraint settings without explicit plan computation in a multiplan intensity-modulated radiotherapy (IMRT) framework. Methods and Materials: Several plans were generated by varying the DV constraints (input features) on the OARs (multiplan framework), and the DV levels achieved by the OARs in the plans (plan properties) were modeled as a function of the imposed DV constraint settings. OAR complications were then predicted for each of the plans by using the imposed DV constraints alone (features) or in combination with modeled DV levels (plan properties) as input to machine learning (ML) algorithms. These ML approaches were used to model two OAR complications after head-and-neck and prostate IMRT: xerostomia, and Grade 2 rectal bleeding. Two-fold cross-validation was used for model verification and mean errors are reported. Results: Errors for modeling the achieved DV values as a function of constraint settings were 0-6%. In the head-and-neck case, the mean absolute prediction error of the saliva flow rate normalized to the pretreatment saliva flow rate was 0.42% with a 95% confidence interval of (0.41-0.43%). In the prostate case, an average prediction accuracy of 97.04% with a 95% confidence interval of (96.67-97.41%) was achieved for Grade 2 rectal bleeding complications. Conclusions: ML can be used for predicting OAR complications during treatment planning allowing for alternative DV constraint settings to be assessed within the planning framework.
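The workflow above amounts to learning a map from imposed dose-volume constraint settings to a predicted complication outcome, so that alternative settings can be screened without recomputing plans. A toy sketch follows, with a k-nearest-neighbour classifier standing in for the paper's ML algorithms; all constraint values and labels are invented, not clinical data.

```python
# k-NN stand-in for the paper's ML step: features are imposed DV constraint
# settings, the label is whether the OAR complication occurred. The feature
# names and numbers below are hypothetical.

def knn_predict(train, x, k=3):
    """Majority vote among the k training points nearest to x (squared Euclidean)."""
    ranked = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

if __name__ == "__main__":
    # (mean-dose constraint in Gy, V70 constraint in %) -> complication (1) or not (0)
    train = [((20, 10), 0), ((22, 12), 0), ((24, 11), 0),
             ((34, 25), 1), ((36, 28), 1), ((38, 24), 1)]
    print(knn_predict(train, (21, 11)))  # near the low-constraint cluster -> 0
    print(knn_predict(train, (37, 26)))  # near the high-constraint cluster -> 1
```

The paper additionally feeds the modeled achieved DV levels in as features alongside the imposed constraints, which the same interface accommodates by widening the feature tuple.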
Tang, Jiajing; Yang, Xiaodong
2017-09-01
A novel thermo-hydraulic coupling model was proposed in this study to investigate crater formation in electrical discharge machining (EDM). The temperature distribution of the workpiece material was included, and the crater formation process was explained from the perspective of the hydrodynamic characteristics of the molten region. To better track the morphology of the crater and the movement of debris, the level-set method was introduced. Simulation results showed that the crater appears shortly after the ignition of the discharge; the molten material is removed by vaporizing in the initial stage and then by splashing. The driving force for the detachment of debris in the splashing removal stage comes from the extremely large pressure difference in the upper part of the molten region, and the morphology of the crater is also influenced by the shearing flow of molten material. It was found that the removal ratio of molten material is only about 7.63% under the studied conditions, leaving most of it to form the re-solidified layer on the surface of the crater. The size of the crater reaches its maximum at the end of the discharge duration and then experiences a slight reduction because of the reflux of molten material after the discharge. The results of single pulse discharge experiments showed that the morphologies and sizes of the simulated and actual craters are in good agreement, verifying the feasibility of the proposed thermo-hydraulic coupling model in explaining the mechanisms of crater formation in EDM.
Functional components produced by multi-jet modelling combined with electroforming and machining
Directory of Open Access Journals (Sweden)
Baier, Oliver
2014-08-01
Full Text Available In fuel cell technology, certain components are used that are responsible for guiding liquid media. When these components are produced by conventional manufacturing, there are often sealing issues, and trouble- and maintenance-free deployment cannot be ensured. Against this background, a new process combination has been developed in a joint project between the University of Duisburg-Essen, the Center for Fuel Cell Technology (ZBT), and the company Galvano-T electroplating forming GmbH. The approach is to combine multi-jet modelling (MJM), electroforming and milling in order to produce a defined external geometry. The wax models are generated on copper base plates and copper-coated to the desired thickness. Following this, the undefined electroplated surfaces are machined to achieve the desired dimensions, and the wax is melted out. This paper presents, first, how this process is technically feasible, then describes how the MJM on a 3D Systems ThermoJet was adapted to stabilise the process. In the AiF-sponsored ZIM project, existing limits and possibilities are shown and different approaches to electroplating are investigated. This paper explores whether or not activation of the wax structure by a conductive initial layer is required. Using the described process chain, different parts were built: a heat exchanger, a vaporiser, and a reformer (in which pellets were integrated in an intermediate step). In addition, multiple-layer parts with different functions were built by repeating the process combination several times.
CloudLM: a Cloud-based Language Model for Machine Translation
Directory of Open Access Journals (Sweden)
Ferrández-Tordera Jorge
2016-04-01
Full Text Available Language models (LMs) are an essential element in statistical approaches to natural language processing for tasks such as speech recognition and machine translation (MT). The advent of big data leads to the availability of massive amounts of data to build LMs, and in fact, for the most prominent languages, using current techniques and hardware, it is not feasible to train LMs with all the data available nowadays. At the same time, it has been shown that the more data is used for an LM the better the performance, e.g. for MT, without any indication yet of reaching a plateau. This paper presents CloudLM, an open-source cloud-based LM intended for MT, which allows querying of distributed LMs. CloudLM relies on Apache Solr and provides the functionality of state-of-the-art language modelling (it builds upon KenLM), while allowing queries against massive LMs (as the use of local memory is drastically reduced), at the expense of slower decoding speed.
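The kind of query such a service answers can be illustrated with a toy in-memory n-gram model; the sketch below uses a bigram model with "stupid backoff" scoring. The corpus and backoff weight are illustrative only; CloudLM itself builds on KenLM over Apache Solr rather than anything like this.

```python
# Toy bigram language model with stupid-backoff scoring: if the bigram was
# seen, return its relative frequency; otherwise back off to a discounted
# unigram probability. Illustrative of the per-n-gram lookups an LM serves.
from collections import Counter

def build_lm(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def score(prev, word, alpha=0.4):
        if bigrams[(prev, word)]:
            return bigrams[(prev, word)] / unigrams[prev]  # seen bigram
        return alpha * unigrams[word] / total              # back off to unigram
    return score

if __name__ == "__main__":
    score = build_lm("the cat sat on the mat the cat ran".split())
    print(score("the", "cat"))   # seen bigram: 2/3
    print(score("mat", "ran"))   # unseen: 0.4 * (1/9)
```

A distributed LM replaces the in-memory `Counter` lookups with remote index queries, which is exactly the memory-for-latency trade-off the abstract describes.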
The Use of an Acellular Oxygen Carrier in a Human Liver Model of Normothermic Machine Perfusion.
Laing, Richard W; Bhogal, Ricky H; Wallace, Lorraine; Boteon, Yuri; Neil, Desley A H; Smith, Amanda; Stephenson, Barney T F; Schlegel, Andrea; Hübscher, Stefan G; Mirza, Darius F; Afford, Simon C; Mergental, Hynek
2017-11-01
Normothermic machine perfusion of the liver (NMP-L) is a novel technique that preserves liver grafts under near-physiological conditions while maintaining their normal metabolic activity. This process requires an adequate oxygen supply, typically delivered by packed red blood cells (RBC). We present the first experience using an acellular hemoglobin-based oxygen carrier (HBOC) Hemopure in a human model of NMP-L. Five discarded high-risk human livers were perfused with HBOC-based perfusion fluid and matched to 5 RBC-perfused livers. Perfusion parameters, oxygen extraction, metabolic activity, and histological features were compared during 6 hours of NMP-L. The cytotoxicity of Hemopure was also tested on human hepatic primary cell line cultures using an in vitro model of ischemia reperfusion injury. The vascular flow parameters and the perfusate lactate clearance were similar in both groups. The HBOC-perfused livers extracted more oxygen than those perfused with RBCs (O2 extraction ratio 13.75 vs 9.43 % ×10 per gram of tissue, P = 0.001). In vitro exposure to Hemopure did not alter intracellular levels of reactive oxygen species, and there was no increase in apoptosis or necrosis observed in any of the tested cell lines. Histological findings were comparable between groups. There was no evidence of histological damage caused by Hemopure. Hemopure can be used as an alternative oxygen carrier to packed red cells in NMP-L perfusion fluid.
Directory of Open Access Journals (Sweden)
José Ignacio Rojas-Sola
2018-02-01
Full Text Available This article presents the steps followed to obtain a three-dimensional model of one of the most recognized historical inventions of Agustín de Betancourt y Molina from the scant documentation found about it. Specifically, this was a machine for cutting cane and other aquatic plants in navigable waterways, presented in London in 1795. The study is based on computer-aided design (CAD) techniques using Autodesk Inventor Professional, from the information provided by the only two sheets that exist for the machine, one with specifications in English and the other in French, both very similar. In order to obtain a functional result on which to carry out further studies, it has been necessary to make some geometrical hypotheses about the models, aimed at finding the correct dimension of each element. In addition, it has also been necessary to define the relationship of each element with those that make up its environment, defining movement restrictions, so that the final model behaves as realistically as possible.
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial
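The damage model's core loop, wear accrued per cycle from a constant wear factor with periodic surface-evolution updates, can be sketched as follows. This is an Archard-type law under invented parameters, not the paper's contact model or implant data.

```python
# Iterative wear sketch: per cycle the linear wear depth grows by k * p * s
# (wear factor k, contact pressure p, sliding distance s). Every `step` cycles
# the geometry is updated: the worn surface conforms, so the contact area
# grows and the pressure drops, feeding back into the next increment.

def simulate_wear(cycles, k=1.0e-9, load=2000.0, area0=100.0, slide=20.0,
                  growth=0.01, step=1_000_000):
    """Return accumulated wear depth (illustrative units) after `cycles`."""
    depth, area, done = 0.0, area0, 0
    while done < cycles:
        n = min(step, cycles - done)
        p = load / area                  # pressure on the current contact area
        depth += k * p * slide * n       # Archard increment over this batch
        area *= 1.0 + growth             # surface-evolution update
        done += n
    return depth

if __name__ == "__main__":
    print(simulate_wear(5_000_000))      # 5 M gait cycles, as in the study design
```

The fixed-vs-variable-step question the paper raises corresponds to the choice of `step` here: a coarse fixed step misses the rapid early geometry change that creep causes.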
Zhou, Chao; Yin, Kunlong; Cao, Ying; Ahmed, Bayes; Li, Yuanyao; Catani, Filippo; Pourghasemi, Hamid Reza
2018-03-01
Landslide is a common natural hazard and responsible for extensive damage and losses in mountainous areas. In this study, Longju in the Three Gorges Reservoir area in China was taken as a case study for landslide susceptibility assessment in order to develop effective risk prevention and mitigation strategies. To begin, 202 landslides were identified, including 95 colluvial landslides and 107 rockfalls. Twelve landslide causal factor maps were prepared initially, and the relationship between these factors and each landslide type was analyzed using the information value model. Later, the unimportant factors were selected and eliminated using the information gain ratio technique. The landslide locations were randomly divided into two groups: 70% for training and 30% for verifying. Two machine learning models: the support vector machine (SVM) and artificial neural network (ANN), and a multivariate statistical model: the logistic regression (LR), were applied for landslide susceptibility modeling (LSM) for each type. The LSM index maps, obtained from combining the assessment results of the two landslide types, were classified into five levels. The performance of the LSMs was evaluated using the receiver operating characteristics curve and Friedman test. Results show that the elimination of noise-generating factors and the separated modeling of each landslide type have significantly increased the prediction accuracy. The machine learning models outperformed the multivariate statistical model and SVM model was found ideal for the case study area.
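Comparing the SVM, ANN and LR susceptibility models via the receiver operating characteristic reduces to computing an AUC over each model's susceptibility scores. A minimal sketch of that evaluation step (the labels and scores below are invented):

```python
# ROC-AUC via the Mann-Whitney U statistic: the fraction of (positive,
# negative) pairs where the positive instance receives the higher score,
# counting ties as 0.5. Equivalent to the area under the ROC curve.

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    # Hypothetical susceptibility scores at 4 validation locations.
    print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # perfect ranking: 1.0
    print(roc_auc([1, 0], [0.2, 0.2]))                  # uninformative tie: 0.5
```

Running each candidate model's scores on the held-out 30% of landslide locations through this function gives the comparison statistic the abstract reports.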
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while no JNI bridging code is required.
Zhang, Xia; Amin, Elizabeth Ambrose
2016-01-01
Anthrax is a highly lethal, acute infectious disease caused by the rod-shaped, Gram-positive bacterium Bacillus anthracis. The anthrax toxin lethal factor (LF), a zinc metalloprotease secreted by the bacilli, plays a key role in anthrax pathogenesis and is chiefly responsible for anthrax-related toxemia and host death, partly via inactivation of mitogen-activated protein kinase kinase (MAPKK) enzymes and consequent disruption of key cellular signaling pathways. Antibiotics such as fluoroquinolones are capable of clearing the bacilli but have no effect on LF-mediated toxemia; LF itself therefore remains the preferred target for toxin inactivation. However, currently no LF inhibitor is available on the market as a therapeutic, partly due to the insufficiency of existing LF inhibitor scaffolds in terms of efficacy, selectivity, and toxicity. In the current work, we present novel support vector machine (SVM) models with high prediction accuracy that are designed to rapidly identify potential novel, structurally diverse LF inhibitor chemical matter from compound libraries. These SVM models were trained and validated using 508 compounds with published LF biological activity data and 847 inactive compounds deposited in the PubChem BioAssay database. One model, M1, demonstrated particularly favorable selectivity toward highly active compounds by correctly predicting 39 (95.12%) out of 41 nanomolar-level LF inhibitors, 46 (93.88%) out of 49 inactives, and 844 (99.65%) out of 847 PubChem inactives in external, unbiased test sets. These models are expected to facilitate the prediction of LF inhibitory activity for existing molecules, as well as identification of novel potential LF inhibitors from large datasets. Copyright © 2015 Elsevier Inc. All rights reserved.
Khumrin, Piyapong; Ryan, Anna; Judd, Terry; Verspoor, Karin
2017-01-01
Computer-aided learning systems (e-learning systems) can help medical students gain more experience with diagnostic reasoning and decision making. Within this context, providing feedback that matches students' needs (i.e. personalised feedback) is both critical and challenging. In this paper, we describe the development of a machine learning model to support medical students' diagnostic decisions. Machine learning models were trained on 208 clinical cases presenting with abdominal pain, to predict five diagnoses. We assessed which of these models are likely to be most effective for use in an e-learning tool that allows students to interact with a virtual patient. The broader goal is to utilise these models to generate personalised feedback based on the specific patient information requested by students and their active diagnostic hypotheses.
DEFF Research Database (Denmark)
Iov, F.; Blaabjerg, Frede; Hansen, A.D.
2002-01-01
In recent years Matlab/Simulink® has become the most widely used software for the modelling and simulation of dynamic systems. Wind energy conversion systems are one example, because they contain parts with widely differing time constants: wind, turbine, generator and power electronics. This paper discusses different implementations of the induction machine model, the influence of the Simulink solvers, and how the simulation speed can be increased for a wind turbine model.
Modeling of Soil Aggregate Stability using Support Vector Machines and Multiple Linear Regression
Directory of Open Access Journals (Sweden)
Ali Asghar Besalatpour
2016-02-01
stability. Conclusion: The pixel-scale soil aggregate stability predicted using the developed SVM and MLR models demonstrates the usefulness of incorporating topographic and vegetation information along with soil properties as predictors. The SVM model, however, achieved higher accuracy in predicting soil aggregate stability than the MLR model. It therefore appears that support vector machines can be used for the prediction of some soil physical properties, such as the geometric mean diameter of soil aggregates, in the study area. Furthermore, beyond the high predictive accuracy of the SVM method relative to the MLR technique confirmed by the results of the current study, the advantages of the SVM method, such as its intrinsic effectiveness with respect to traditional prediction methods, the lesser effort required in setting up its control parameters, and the possibility of solving the learning problem through constrained quadratic programming methods, should motivate soil scientists to work with it further in the future.
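A compact sketch of the kind of comparison described above, with ordinary least squares standing in for MLR and RBF kernel ridge regression standing in for the kernelised SVM regressor (a deliberate simplification; the data and hyper-parameters are synthetic, not the study's):

```python
import numpy as np

def mlr_fit_predict(Xtr, ytr, Xte):
    # Multiple linear regression (ordinary least squares with an intercept).
    A = np.c_[np.ones(len(Xtr)), Xtr]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[np.ones(len(Xte)), Xte] @ coef

def krr_fit_predict(Xtr, ytr, Xte, gamma=1.0, lam=1e-3):
    # RBF kernel ridge regression, standing in here for a kernelised SVM regressor.
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    alpha = np.linalg.solve(K(Xtr, Xtr) + lam * np.eye(len(Xtr)), ytr)
    return K(Xte, Xtr) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(120, 2))
y = np.sin(X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(120)  # nonlinear target
Xtr, Xte, ytr, yte = X[:80], X[80:], y[:80], y[80:]

rmse_mlr = np.sqrt(np.mean((mlr_fit_predict(Xtr, ytr, Xte) - yte) ** 2))
rmse_krr = np.sqrt(np.mean((krr_fit_predict(Xtr, ytr, Xte) - yte) ** 2))
```

On a nonlinear target such as this, the kernel method's held-out error is much lower than that of the linear model, mirroring the SVM-versus-MLR result reported above.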
Development of temperature statistical model when machining of aerospace alloy materials
Directory of Open Access Journals (Sweden)
Kadirgama Kumaran
2014-01-01
This paper develops first-order models for predicting the cutting temperature in the end-milling of Hastelloy C-22HS using four different coated carbide cutting tools and two different cutting environments. The first-order equations for the cutting temperature are developed using the response surface methodology (RSM). The cutting variables are cutting speed, feed rate, and axial depth. The analyses are carried out with the aid of a statistical software package. The model proves suitable for predicting the longitudinal component of the cutting temperature, matching the experimentally recorded readings at a 95% confidence level. The results obtained from the predictive models are also compared with results obtained from finite-element analysis (FEA). The developed first-order equations for the cutting temperature reveal that the feed rate is the most influential factor, followed by axial depth and cutting speed. The PVD-coated cutting tools perform better than the CVD-coated cutting tools in terms of cutting temperature. The cutting tools coated with TiAlN perform best during the machining of Hastelloy C-22HS, followed by TiN/TiCN/TiN and the CVD coatings TiN/TiCN/Al2O3 and TiN/TiCN/TiN. The distribution of the cutting temperature can be discussed from the finite-element analysis: high temperature appears in the lower sliding friction zone and at the cutting tip of the tool, and the maximum temperature develops at the rake face some distance from the tool nose, before the chip lifts away.
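A first-order RSM fit amounts to linear least squares on the coded factors. A sketch on synthetic data, with the "true" coefficients chosen so that feed rate dominates, mimicking the reported ranking (all numbers are hypothetical, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
v, f, d = (rng.uniform(-1, 1, n) for _ in range(3))   # coded factors in [-1, 1]

# Synthetic "measured" cutting temperature: feed rate dominates by construction.
T = 300 + 15 * v + 60 * f + 25 * d + rng.normal(0, 2, n)

# First-order RSM model T ~ b0 + b1*v + b2*f + b3*d, fitted by least squares.
A = np.c_[np.ones(n), v, f, d]
b, *_ = np.linalg.lstsq(A, T, rcond=None)
effects = dict(zip(["speed", "feed", "depth"], np.abs(b[1:])))
```

Comparing the absolute fitted coefficients of the coded factors recovers the factor ranking, which is how the "feed rate is most influential" conclusion above is typically read off a first-order model.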
Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model
Directory of Open Access Journals (Sweden)
Ji-Long Liu
2015-03-01
Protein-protein interaction (PPI) is essential for almost all cellular processes, and identification of PPIs is a crucial task for biomedical researchers. So far, most computational studies of PPI have been aimed at pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem, and given enough data for a particular protein, the results can be more accurate than those of general PPI predictors. In the present study, we assessed the potential of using a support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of the progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, a sensitivity of 92.8% and a specificity of 91.2%. Our method is generally applicable to other proteins and may therefore be of help in guiding biomedical experiments.
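The figures reported above are standard confusion-matrix metrics. As a reminder of how they are computed (the labels below are invented, not the study's data):

```python
def classification_metrics(y_true, y_pred):
    # Accuracy, sensitivity (true-positive rate) and specificity (true-negative rate).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Hypothetical labels: 1 = interacts with PR, 0 = does not.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 8 + [1] * 2
m = classification_metrics(y_true, y_pred)
```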
Aoun, Bachir
2016-05-05
A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast and flexible software package, thoroughly documented, capable of handling complex molecules, written in a modern programming language (Python, with Cython, C and C++ where performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from that of existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system reaches the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves, endorsed with reinforcement machine learning, to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort, and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, of repeating a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.
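The traditional single-atom RMC loop that fullrmc generalizes can be sketched in a few lines: perturb one atom at a time and keep moves that do not worsen the misfit to a target data set. Here a pair-distance histogram plays the role of the experimental data; the box size, move width, and step count are arbitrary illustrative choices, not fullrmc's defaults.

```python
import numpy as np

def pair_histogram(pos, bins):
    # Histogram of all pairwise distances; stands in for experimental data like g(r).
    d = np.sqrt(((pos[:, None] - pos[None, :]) ** 2).sum(-1))
    return np.histogram(d[np.triu_indices(len(pos), 1)], bins=bins)[0].astype(float)

def rmc_refine(pos, target, bins, steps=4000, sigma=0.1, seed=0):
    # Classic single-atom RMC: random Gaussian moves, kept when chi^2 does not worsen.
    rng = np.random.default_rng(seed)
    pos = pos.copy()
    chi2 = ((pair_histogram(pos, bins) - target) ** 2).sum()
    for _ in range(steps):
        i = rng.integers(len(pos))
        old = pos[i].copy()
        pos[i] += rng.normal(0.0, sigma, 3)
        trial = ((pair_histogram(pos, bins) - target) ** 2).sum()
        if trial <= chi2:
            chi2 = trial        # accept the move
        else:
            pos[i] = old        # reject: restore the old position
    return pos, chi2

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 5.0, (20, 3))        # the "true" structure
bins = np.linspace(0.0, 9.0, 20)
target = pair_histogram(reference, bins)          # pretend this came from experiment
start = rng.uniform(0.0, 5.0, (20, 3))            # random starting configuration
chi2_start = ((pair_histogram(start, bins) - target) ** 2).sum()
refined, chi2_end = rmc_refine(start, target, bins)
```

fullrmc's departure from this scheme is precisely in replacing the blind single-atom move with learned, group-level moves.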
Application of three dimensional finite element modeling for the simulation of machining processes
International Nuclear Information System (INIS)
Fischer, C.E.; Wu, W.T.; Chigurupati, P.; Jinn, J.T.
2004-01-01
For many years, metal cutting simulations have been performed using two dimensional approximations of the actual process. Factors such as chip morphology, cutting force, temperature, and tool wear can all be predicted on the computer. However, two dimensional simulation is limited to processes which are orthogonal, or which can be closely approximated as orthogonal. Advances in finite element technology, coupled with continuing improvement in the availability of low cost, high performance computer hardware, have made the three dimensional simulation of a large variety of metal cutting processes practical. Specific improvements include efficient FEM solvers and robust adaptive remeshing. As researchers continue to gain an improved understanding of wear, material representation, tool coatings, fracture, and other such phenomena, the machining simulation system must also adapt to incorporate these evolving models. To demonstrate the capabilities of the 3D simulation system, a variety of drilling, milling, and turning processes have been simulated and are presented in this paper. Issues related to computation time and simulation accuracy are also addressed.
Seismic analysis during development stage of CANDU Model 2 fueling machine design
International Nuclear Information System (INIS)
Lee, L.S.S.; Mansfield, R.A.
1989-01-01
The CANDU Model 3 is a new small reactor presently being designed. This reactor is rated at 450 MWe and, as with currently operating CANDUs, is based on a heavy-water-moderated and -cooled system using on-power fuelling for a once-through natural uranium fuel cycle. The CANDU 3 standard plant is designed to be adaptable to a range of world-wide site conditions, i.e. for a peak ground acceleration of 0.3 g and a wide range of soft, medium and hard foundation medium properties. Consequently, conservatism in the design of structures and equipment is accounted for by using enveloped floor response spectra generated by soil-structure interaction analysis. Seismic qualification of the fuelling machine (F/M) and its support structure is an essential design requirement for maintaining the integrity of the reactor coolant heat transport system (HTS) pressure boundary and of the service ports penetrating the containment structure during on-power fuelling. This paper deals with the initial conceptual phase of design, in which the details of the design are in fundamental outline form only and the basic mass distribution and layout geometry are defined.
Dynamic process model of a plutonium oxalate precipitator. Final report
Energy Technology Data Exchange (ETDEWEB)
Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.
1977-11-01
In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts.
Dynamic process model of a plutonium oxalate precipitator. Final report
International Nuclear Information System (INIS)
Miller, C.L.; Hammelman, J.E.; Borgonovi, G.M.
1977-11-01
In support of the LLL material safeguards program, a dynamic process model was developed which simulates the performance of a plutonium (IV) oxalate precipitator. The plutonium oxalate precipitator is a component in the plutonium oxalate process for making plutonium oxide powder from plutonium nitrate. The model is based on state-of-the-art crystallization descriptive equations, the parameters of which are quantified through the use of batch experimental data. The dynamic model predicts performance very similar to general Hanford oxalate process experience. The utilization of such a process model in an actual plant operation could promote both process control and material safeguards control by serving as a baseline predictor which could give early warning of process upsets or material diversion. The model has been incorporated into a FORTRAN computer program and is also compatible with the DYNSYS 2 computer code which is being used at LLL for process modeling efforts.
Training Restricted Boltzmann Machines
DEFF Research Database (Denmark)
Fischer, Asja
Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can also be interpreted as stochastic neural networks. Training RBMs is known to be challenging. Computing the likelihood of the model parameters or its gradient is in general computationally intensive. Thus, training relies on sampling-based approximations of the log-likelihood gradient. I will present an empirical and theoretical analysis of the bias of these approximations and show that the approximation error can lead to a distortion of the learning process. The bias decreases with increasing mixing rate of the applied sampling procedure, and I will introduce a transition operator that leads to faster mixing. Finally, a different parametrisation of RBMs will be discussed that leads to better learning results and more robustness against changes in the data representation.
Maximising profits for an EPQ model with unreliable machine and rework of random defective items
Pal, Brojeswar; Sankar Sana, Shib; Chaudhuri, Kripasindhu
2013-03-01
This article deals with an economic production quantity (EPQ) model for an imperfect production system. The production system may shift from an 'in-control' state to an 'out-of-control' state after a certain time that follows a probability density function. The density function varies with the reliability of the machinery system, which may be improved by investing in new technologies at additional cost. The defective items produced in the 'out-of-control' state are reworked at a cost just after the regular production time. The occurrence of the 'out-of-control' state during or after the regular production-run time is analysed and each case is also illustrated graphically. Finally, an expected profit function involving the inventory cost, unit production cost and selling price is maximised analytically. A sensitivity analysis of the model with respect to the key parameters of the system is carried out. Two numerical examples are considered to test the model, and one of them is illustrated graphically.
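The flavour of such a profit maximisation can be shown with a textbook-style EPQ profit function and a grid search over the lot size. The cost structure and all parameter values below are illustrative stand-ins, not the paper's model (which handles the random out-of-control shift explicitly).

```python
import numpy as np

def epq_profit(Q, demand=1000.0, setup=50.0, hold=2.0,
               unit_cost=5.0, price=9.0, defect_rate=0.04, rework_cost=1.5):
    # Profit per unit time: revenue minus production, setup, holding and rework costs.
    revenue = price * demand
    production = unit_cost * demand
    setup_cost = setup * demand / Q          # setups per unit time
    holding = hold * Q / 2.0                 # average inventory carrying cost
    rework = rework_cost * defect_rate * demand
    return revenue - production - setup_cost - holding - rework

Q = np.linspace(50.0, 1000.0, 2000)
profit = epq_profit(Q)
Q_star = Q[np.argmax(profit)]
```

For this simple cost structure the optimum agrees with the classical square-root lot-size formula, here sqrt(2 x 50 x 1000 / 2), about 224 units.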
Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow
2017-01-01
Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with highest probability of having undiagnosed diabetes, through easily-obtained clinical data.
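The repeated tenfold cross-validation of step (iii) can be sketched with any base learner; here a deliberately simple nearest-centroid classifier on synthetic two-class data stands in for the algorithms compared in the study:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Per-class mean vectors; a simple stand-in for the compared learners.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    classes = list(centroids)
    D = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[D.argmin(axis=0)]

def repeated_kfold_accuracy(X, y, k=10, repeats=3, seed=0):
    # k-fold cross-validation, repeated with reshuffling between repeats.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            model = nearest_centroid_fit(X[train], y[train])
            scores.append((nearest_centroid_predict(model, X[fold]) == y[fold]).mean())
    return float(np.mean(scores))

# Synthetic screening data: two noisy clusters stand in for the two outcomes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.r_[np.zeros(100), np.ones(100)]
acc = repeated_kfold_accuracy(X, y)
```

Averaging over reshuffled folds, as above, is what gives the error estimates their stability before the final test on an independent dataset.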
Bahl, Manisha; Barzilay, Regina; Yedidia, Adam B; Locascio, Nicholas J; Yu, Lili; Lehman, Constance D
2018-03-01
Purpose To develop a machine learning model that allows high-risk breast lesions (HRLs) diagnosed with image-guided needle biopsy that require surgical excision to be distinguished from HRLs that are at low risk for upgrade to cancer at surgery and thus could be surveilled. Materials and Methods Consecutive patients with biopsy-proven HRLs who underwent surgery or at least 2 years of imaging follow-up from June 2006 to April 2015 were identified. A random forest machine learning model was developed to identify HRLs at low risk for upgrade to cancer. Traditional features such as age and HRL histologic results were used in the model, as were text features from the biopsy pathologic report. Results One thousand six HRLs were identified, with a cancer upgrade rate of 11.4% (115 of 1006). A machine learning random forest model was developed with 671 HRLs and tested with an independent set of 335 HRLs. Among the most important traditional features were age and HRL histologic results (eg, atypical ductal hyperplasia). An important text feature from the pathologic reports was "severely atypical." Instead of surgical excision of all HRLs, if those categorized with the model to be at low risk for upgrade were surveilled and the remainder were excised, then 97.4% (37 of 38) of malignancies would have been diagnosed at surgery, and 30.6% (91 of 297) of surgeries of benign lesions could have been avoided. Conclusion This study provides proof of concept that a machine learning model can be applied to predict the risk of upgrade of HRLs to cancer. Use of this model could decrease unnecessary surgery by nearly one-third and could help guide clinical decision making with regard to surveillance versus surgical excision of HRLs. © RSNA, 2017.
Model structure learning: A support vector machine approach for LPV linear-regression models
Toth, R.; Laurain, V.; Zheng, W-X.; Poolla, K.
2011-01-01
Accurate parametric identification of Linear Parameter-Varying (LPV) systems requires an optimal prior selection of a set of functional dependencies for the parametrization of the model coefficients. Inaccurate selection leads to structural bias while over-parametrization results in a variance
Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting
Directory of Open Access Journals (Sweden)
Xuejun Li
2011-01-01
The large-scale rotary machine with multiple supports, such as the rotary kiln and the rope-laying machine, is key equipment in the architectural, chemical, and agricultural industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a vital parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjustment of the machine. Using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall equation of the response motion. Taking a rotary kiln as an instance, natural frequencies, modal shapes, and the vibration response to a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis line measurement methods. The displacement response can be used for further analysis and compensation of measurement dynamical errors. The overall response equation can be applied to predict the body motion under abnormal mechanical conditions, and provides theoretical guidance for machine failure diagnosis.
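The transfer-matrix idea, propagating a state vector element by element along the chain and locating natural frequencies where the boundary condition is satisfied, can be sketched for a plain (non-Riccati) fixed-free spring-mass chain. Masses and stiffnesses below are arbitrary, and the result is cross-checked against a direct eigenvalue solution of the same discrete model.

```python
import numpy as np

def tip_force(omega, masses, springs):
    # Propagate the state (displacement u, force F) from the fixed base through
    # alternating spring/mass elements; a free tip requires the returned force = 0.
    u, F = 0.0, 1.0                      # fixed base: u = 0, arbitrary unit force
    for k, m in zip(springs, masses):
        u = u + F / k                    # across a massless spring of stiffness k
        F = F - omega**2 * m * u         # across a lumped mass m (harmonic motion)
    return F

masses = [2.0, 1.0, 1.0]                 # kg
springs = [1000.0, 800.0, 600.0]         # N/m, base to tip

# Scan for sign changes of the tip force, then refine each root by bisection.
omegas = np.linspace(1e-3, 60.0, 6000)
vals = [tip_force(w, masses, springs) for w in omegas]
roots = []
for a, b, fa, fb in zip(omegas, omegas[1:], vals, vals[1:]):
    if fa * fb < 0:
        for _ in range(60):
            mid = 0.5 * (a + b)
            fm = tip_force(mid, masses, springs)
            if fm * fa < 0:
                b = mid
            else:
                a, fa = mid, fm
        roots.append(0.5 * (a + b))      # natural frequency in rad/s

# Cross-check against the eigenvalues of the assembled stiffness/mass matrices.
K = np.array([[1800.0, -800.0, 0.0],
              [-800.0, 1400.0, -600.0],
              [0.0, -600.0, 600.0]])
M = np.diag(masses)
ref = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))
```

The Riccati variant used in the paper reformulates the same propagation to remain numerically stable for long chains with many supports.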
Natural Language-based Machine Learning Models for the Annotation of Clinical Radiology Reports.
Zech, John; Pain, Margaret; Titano, Joseph; Badgeley, Marcus; Schefflein, Javin; Su, Andres; Costa, Anthony; Bederson, Joshua; Lehar, Joseph; Oermann, Eric Karl
2018-05-01
Purpose To compare different methods for generating features from radiology reports and to develop a method to automatically identify findings in these reports. Materials and Methods In this study, 96 303 head computed tomography (CT) reports were obtained. The linguistic complexity of these reports was compared with that of alternative corpora. Head CT reports were preprocessed, and machine-analyzable features were constructed by using bag-of-words (BOW), word embedding, and Latent Dirichlet allocation-based approaches. Ultimately, 1004 head CT reports were manually labeled for findings of interest by physicians, and a subset of these were deemed critical findings. Lasso logistic regression was used to train models for physician-assigned labels on 602 of 1004 head CT reports (60%) using the constructed features, and the performance of these models was validated on a held-out 402 of 1004 reports (40%). Models were scored by area under the receiver operating characteristic curve (AUC), and aggregate AUC statistics were reported for (a) all labels, (b) critical labels, and (c) the presence of any critical finding in a report. Sensitivity, specificity, accuracy, and F1 score were reported for the best performing model's (a) predictions of all labels and (b) identification of reports containing critical findings. Results The best-performing model (BOW with unigrams, bigrams, and trigrams plus average word embeddings vector) had a held-out AUC of 0.966 for identifying the presence of any critical head CT finding and an average 0.957 AUC across all head CT findings. Sensitivity and specificity for identifying the presence of any critical finding were 92.59% (175 of 189) and 89.67% (191 of 213), respectively. Average sensitivity and specificity across all findings were 90.25% (1898 of 2103) and 91.72% (18 351 of 20 007), respectively. Simpler BOW methods achieved results competitive with those of more sophisticated approaches, with an average AUC for presence of any
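The simplest of the compared feature sets, unigram bag-of-words fed to a logistic regression, can be sketched as follows. The toy "reports" and labels are invented, and the lasso penalty used in the study is omitted here for brevity.

```python
import re
import numpy as np

# Toy labelled "reports" (1 = contains a critical finding); entirely invented.
reports = [("no acute intracranial hemorrhage", 0),
           ("acute subdural hemorrhage with midline shift", 1),
           ("no evidence of hemorrhage or mass effect", 0),
           ("large acute infarct with midline shift", 1),
           ("unremarkable head ct", 0),
           ("intraparenchymal hemorrhage and edema", 1)]

def bow(texts):
    # Unigram bag-of-words counts over a vocabulary built from the corpus.
    tokens = [re.findall(r"[a-z]+", t.lower()) for t in texts]
    vocab = sorted({w for doc in tokens for w in doc})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for r, doc in enumerate(tokens):
        for w in doc:
            X[r, index[w]] += 1
    return X, vocab

texts, labels = zip(*reports)
X, vocab = bow(texts)
y = np.array(labels, float)

# Plain logistic regression fitted by batch gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

In the study this kind of model was extended with bigrams, trigrams, and averaged word embeddings, and scored by AUC on held-out reports.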
Gallo, A.; Arana, A.; Oyanguren, A.; García, G.; Barbero, A.; Larrañaga, J.; Ulacia, I.
2013-07-01
In this work the properties of thermoelectric modules (TEMs) and their behavior have been numerically modeled. Moreover, their applications very often require modeling not only of the TEM but also of the working environment and the product in which they will be working. A clear example is the fact that TEMs are very often installed with heat-dissipating elements such as fans, heat sinks, and heat exchangers; thus, the module will only work according to the heat dissipation conditions that these external sources can provide in a certain environment. In this context, analytic approaches, even though they have proved useful, do not provide sufficiently accurate information in this regard. Therefore, numerical modeling has been identified as a powerful tool to improve detailed designs of thermoelectric solutions. This paper presents numerical simulations of a TEM in different working conditions, as well as with different commercial dissipation devices. The objective is to obtain the characteristic curve of a TEM using a valid numerical model that can be introduced into larger models of different applications. Also, the numerical model of the module and different cooling devices is provided. Both of them are compared against real tested modules, so that the deviation between them can be measured and discussed. Finally, the TEM is introduced into a manufacturing application and results are discussed to validate the model for further use.
Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki
2013-01-01
The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented. PMID:23558425
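The conventional stochastic machinery that the chaotic Boltzmann machine derandomises is Gibbs sampling. A sketch for a tiny Ising ring, checked against exact enumeration (couplings arbitrary, inverse temperature fixed at 1):

```python
import itertools
import numpy as np

def gibbs_ising(J, steps, seed=0):
    # Gibbs sampling of a small Ising ring: resample one spin at a time from its
    # exact conditional distribution given its two neighbours.
    n = len(J)
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], n)
    counts = {}
    for _ in range(steps):
        i = int(rng.integers(n))
        field = J[i] * s[(i + 1) % n] + J[i - 1] * s[i - 1]  # ring neighbours
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))            # P(s_i = +1 | rest)
        s[i] = 1 if rng.random() < p_up else -1
        key = tuple(int(x) for x in s)
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

J = np.array([0.4, 0.3, 0.5])                 # couplings around a 3-spin ring
empirical = gibbs_ising(J, steps=150_000)

# Exact Boltzmann distribution by enumerating all 2^3 states (beta = 1).
states = list(itertools.product([-1, 1], repeat=3))
weights = {s: np.exp(sum(J[i] * s[i] * s[(i + 1) % 3] for i in range(3)))
           for s in states}
Z = sum(weights.values())
exact = {s: weights[s] / Z for s in states}
max_err = max(abs(empirical.get(s, 0.0) - exact[s]) for s in states)
```

The paper's contribution is to reproduce this sampling behaviour with a deterministic chaotic pseudo-billiard dynamics, removing the need for the random numbers drawn above.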
Le Doeuff, René
2013-01-01
In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives). General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios.
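A small example of the matrix-based change of variables typical of such models is the Park transformation, which maps balanced three-phase quantities onto a rotating d-q frame where they become constant (amplitude-invariant form shown; values are illustrative):

```python
import numpy as np

def park(theta, abc):
    # Amplitude-invariant Park transform: (a, b, c) -> rotating (d, q) frame.
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta), np.cos(theta - 2*np.pi/3), np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
    ])
    return T @ abc

I, omega = 10.0, 2 * np.pi * 50                # 10 A amplitude, 50 Hz
phases = np.array([0.0, 2*np.pi/3, 4*np.pi/3])
dq_frames = []
for t in (0.0, 0.003, 0.007):
    theta = omega * t                          # angle of the rotating frame
    abc = I * np.cos(theta - phases)           # balanced three-phase currents
    dq_frames.append(park(theta, abc))         # constant (I, 0) in the d-q frame
```

Turning sinusoidal phase variables into constant d-q quantities is what makes steady-state and variable-speed analyses tractable with the same machine equations.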
Regional forecasting with global atmospheric models; Final report
Energy Technology Data Exchange (ETDEWEB)
Crowley, T.J.; Smith, N.R. [Applied Research Corp., College Station, TX (United States)
1994-05-01
The purpose of the project was to conduct model simulations of past and future climate change with respect to the proposed Yucca Mtn. repository. The authors report on three main topics, the first of which is boundary conditions for paleo-hindcast studies. These conditions are necessary for conducting three to four model simulations, and have been prepared for future runs. The second topic is (a) comparison of the atmospheric general circulation model (GCM) with observations and other GCMs; and (b) development of a better precipitation data base for the Yucca Mtn. region for comparisons with models. These tasks have been completed. The third topic is a preliminary assessment of future climate change. Energy balance model (EBM) simulations suggest that the greenhouse effect will likely dominate climate change at Yucca Mtn. for the next 10,000 years. The EBM study should improve the rational choice of GCM CO₂ scenarios for future climate change.
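A zero-dimensional energy balance model of the general kind referenced above can be written in a few lines; the emissivity values below are illustrative (a lower effective emissivity mimics a stronger greenhouse effect), and the report's EBM was certainly more elaborate.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def ebm_equilibrium(solar=1361.0, albedo=0.30, emissivity=0.612):
    # Zero-dimensional energy balance: absorbed solar = emitted longwave radiation,
    # solved for the equilibrium surface temperature in kelvin.
    absorbed = solar * (1.0 - albedo) / 4.0        # W m^-2, averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

T_now = ebm_equilibrium()                          # close to the observed ~288 K
T_greenhouse = ebm_equilibrium(emissivity=0.59)    # stronger greenhouse -> warmer
```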