WorldWideScience

Sample records for model near-term machines

  1. Long-Term Functional Outcomes and Correlation with Regional Brain Connectivity by MRI Diffusion Tractography Metrics in a Near-Term Rabbit Model of Intrauterine Growth Restriction

    Science.gov (United States)

    Illa, Miriam; Eixarch, Elisenda; Batalle, Dafnis; Arbat-Plana, Ariadna; Muñoz-Moreno, Emma; Figueras, Francesc; Gratacos, Eduard

    2013-01-01

    Background Intrauterine growth restriction (IUGR) affects 5–10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. Methodology At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test which evaluates anxiety and attention and the Object Recognition Task that evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analysis measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. Principal Findings The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. Conclusions The rabbit model used reproduced long-term functional impairments and their neurostructural

  2. Long-term functional outcomes and correlation with regional brain connectivity by MRI diffusion tractography metrics in a near-term rabbit model of intrauterine growth restriction.

    Directory of Open Access Journals (Sweden)

    Miriam Illa

    Full Text Available BACKGROUND: Intrauterine growth restriction (IUGR) affects 5-10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. METHODOLOGY: At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test which evaluates anxiety and attention and the Object Recognition Task that evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analysis measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. PRINCIPAL FINDINGS: The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. CONCLUSIONS: The rabbit model used reproduced long-term functional impairments and their

  3. Initialized near-term regional climate change prediction.

    Science.gov (United States)

    Doblas-Reyes, F J; Andreu-Burillo, I; Chikamoto, Y; García-Serrano, J; Guemas, V; Kimoto, M; Mochizuki, T; Rodrigues, L R L; van Oldenborgh, G J

    2013-01-01

    Climate models are seen by many to be unverifiable. However, near-term climate predictions up to 10 years into the future carried out recently with these models can be rigorously verified against observations. Near-term climate prediction is a new information tool for the climate adaptation and service communities, which often make decisions on near-term time scales, and for which the most basic information is unfortunately very scarce. The Fifth Coupled Model Intercomparison Project set of co-ordinated climate-model experiments includes a set of near-term predictions in which several modelling groups participated and whose forecast quality we illustrate here. We show that climate forecast systems have skill in predicting the Earth's temperature at regional scales over the past 50 years and illustrate the trustworthiness of their predictions. Most of the skill can be attributed to changes in atmospheric composition, but also partly to the initialization of the predictions.

  4. Machine learning in sedimentation modelling.

    Science.gov (United States)

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    The paper presents machine learning (ML) models that predict sedimentation in the harbour basin of the Port of Rotterdam. The important factors affecting the sedimentation process such as waves, wind, tides, surge, river discharge, etc. are studied, the corresponding time series data is analysed, missing values are estimated and the most important variables behind the process are chosen as the inputs. Two ML methods are used: MLP ANN and M5 model tree. The latter is a collection of piece-wise linear regression models, each being an expert for a particular region of the input space. The models are trained on the data collected during 1992-1998 and tested by the data of 1999-2000. The predictive accuracy of the models is found to be adequate for the potential use in the operational decision making.
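
    The M5 model tree described above is a collection of piecewise linear regression models, each acting as an expert for one region of the input space. The following minimal Python sketch illustrates that idea with a single split and one linear model per region; the synthetic input, the variable name and the split point are illustrative assumptions, not the Rotterdam sedimentation data.

        # Minimal sketch of the "piecewise linear experts" idea behind an M5 model tree:
        # one split on an input variable, with an independent linear regression per region.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(200, 1))          # e.g. wave height (illustrative input)
        y = np.where(X[:, 0] < 5, 2.0 * X[:, 0], 10 + 0.5 * X[:, 0]) + rng.normal(0, 0.3, 200)

        split = 5.0                                     # a real M5 tree finds this by error reduction
        left, right = X[:, 0] < split, X[:, 0] >= split
        experts = {
            "left": LinearRegression().fit(X[left], y[left]),
            "right": LinearRegression().fit(X[right], y[right]),
        }

        def predict(x):
            """Route each sample to the linear expert for its region of input space."""
            x = np.atleast_2d(x)
            out = np.empty(len(x))
            m = x[:, 0] < split
            out[m] = experts["left"].predict(x[m])
            out[~m] = experts["right"].predict(x[~m])
            return out

        print(predict([[2.0], [8.0]]))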

  5. A Comparison of Different Machine Transliteration Models

    CERN Document Server

    Choi, K; Oh, J; 10.1613/jair.1999

    2011-01-01

    Machine transliteration is a method for automatically converting words in one language into phonetically equivalent ones in another language. Machine transliteration plays an important role in natural language applications such as information retrieval and machine translation, especially for handling proper nouns and technical terms. Four machine transliteration models -- grapheme-based transliteration model, phoneme-based transliteration model, hybrid transliteration model, and correspondence-based transliteration model -- have been proposed by several researchers. To date, however, there has been little research on a framework in which multiple transliteration models can operate simultaneously. Furthermore, there has been no comparison of the four models within the same framework and using the same data. We addressed these problems by 1) modeling the four models within the same framework, 2) comparing them under the same conditions, and 3) developing a way to improve machine transliteration through this com...

  6. Rough set models of Physarum machines

    Science.gov (United States)

    Pancerz, Krzysztof; Schumann, Andrew

    2015-04-01

    In this paper, we consider transition system models of behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that influences exact anticipation of states of machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool to deal with rough (ambiguous, imprecise) concepts in the universe of discourse.
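
    The rough set machinery referred to above reduces to computing lower and upper approximations of a target set of states with respect to an indiscernibility relation. A minimal sketch follows; the states, indiscernibility classes and target concept are invented for illustration and are not taken from an actual Physarum experiment.

        # Minimal sketch of rough-set lower/upper approximation over a set of states.
        states = {"s1", "s2", "s3", "s4", "s5", "s6"}

        # Indiscernibility classes: states the observer cannot tell apart.
        classes = [{"s1", "s2"}, {"s3"}, {"s4", "s5"}, {"s6"}]

        # Target concept, e.g. "plasmodium reaches attractant A" (illustrative).
        target = {"s1", "s2", "s3", "s4"}

        lower = set().union(*(c for c in classes if c <= target))      # certainly in the concept
        upper = set().union(*(c for c in classes if c & target))       # possibly in the concept
        boundary = upper - lower                                       # the ambiguous states

        print(sorted(lower), sorted(upper), sorted(boundary))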

  7. A chaotic agricultural machines production growth model

    OpenAIRE

    Jablanović, Vesna D.

    2011-01-01

    Chaos theory, as a set of ideas, explains the structure in aperiodic, unpredictable dynamic systems. The basic aim of this paper is to provide a relatively simple agricultural machines production growth model that is capable of generating stable equilibrium, cycles, or chaos. A key hypothesis of this work is based on the idea that the coefficient π = 1 + α plays a crucial role in explaining local stability of the agricultural machines production, where α is an autonomous growth rate of the ag...
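
    The paper's exact growth equation is not reproduced in the abstract, so the sketch below uses the standard logistic-type map as an assumed stand-in to show how a single growth coefficient can switch the dynamics between a stable equilibrium, a cycle and chaos.

        # Hedged sketch: a logistic-type growth map, assumed here as a stand-in for the
        # paper's production growth equation, showing how one coefficient changes the regime.
        def iterate(pi_coeff, x0=0.3, steps=60):
            x = x0
            series = []
            for _ in range(steps):
                x = pi_coeff * x * (1.0 - x)   # normalised production in the next period
                series.append(x)
            return series

        for pi_coeff in (2.8, 3.2, 3.9):       # equilibrium, 2-cycle, chaotic regime
            tail = iterate(pi_coeff)[-4:]
            print(f"pi = {pi_coeff}: last iterates {['%.3f' % v for v in tail]}")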

  8. Short-acting sulfonamides near term and neonatal jaundice

    DEFF Research Database (Denmark)

    Klarskov, Pia; Andersen, Jon Trærup; Jimenez-Solem, Espen;

    2013-01-01

    To investigate the association between maternal use of sulfamethizole near term and the risk of neonatal jaundice.

  9. Modelling and simulation of multitechnological machine systems

    Energy Technology Data Exchange (ETDEWEB)

    Holopainen, T. (ed.) [VTT Manufacturing Technology, Espoo (Finland)]

    2001-07-01

    The Smart Machines and Systems 2010 (SMART) technology programme 1997-2000 aimed at supporting the machine and electromechanical industries in incorporating the modern technology into their products and processes. The public research projects in this programme were planned to accumulate the latest research results and transfer them for the benefit of industrial product development. The major research topic in the SMART programme was called Modelling and Simulation of Multitechnological Mechatronic Systems. The behaviour of modern machine systems and subsystems addresses many different types of physical phenomena and their mutual interactions: mechanical behaviour of structures, electromagnetic effects, hydraulics, vibrations and acoustics etc. together with associated control systems and software. The actual research was carried out in three separate projects called Modelling and Simulation of Mechatronic Machine Systems for Product Development and Condition Monitoring Purposes (MASI), Virtual Testing of Hydraulically Driven Machines (HYSI), and Control of Low Frequency Vibration of a Mobile Machine (AKSUS). This publication contains the papers presented at the final seminar of these three research projects, held on November 30th at Otaniemi, Espoo. (orig.)

  10. Irreducible uncertainty in near-term climate projections

    Science.gov (United States)

    Hawkins, Ed; Smith, Robin S.; Gregory, Jonathan M.; Stainforth, David A.

    2016-06-01

    Model simulations of the next few decades are widely used in assessments of climate change impacts and as guidance for adaptation. Their non-linear nature reveals a level of irreducible uncertainty which it is important to understand and quantify, especially for projections of near-term regional climate. Here we use large idealised initial condition ensembles of the FAMOUS global climate model with a 1 %/year compound increase in CO2 levels to quantify the range of future temperatures in model-based projections. These simulations explore the role of both atmospheric and oceanic initial conditions and are the largest such ensembles to date. Short-term simulated trends in global temperature are diverse, and cooling periods are more likely to be followed by larger warming rates. The spatial pattern of near-term temperature change varies considerably, but the proportion of the surface showing a warming is more consistent. In addition, ensemble spread in inter-annual temperature declines as the climate warms, especially in the North Atlantic. Over Europe, atmospheric initial condition uncertainty can, for certain ocean initial conditions, lead to 20 year trends in winter and summer in which every location can exhibit either strong cooling or rapid warming. However, the details of the distribution are highly sensitive to the ocean initial condition chosen and particularly the state of the Atlantic meridional overturning circulation. On longer timescales, the warming signal becomes more clear and consistent amongst different initial condition ensembles. An ensemble using a range of different oceanic initial conditions produces a larger spread in temperature trends than ensembles using a single ocean initial condition for all lead times. This highlights the potential benefits from initialising climate predictions from ocean states informed by observations. These results suggest that climate projections need to be performed with many more ensemble members than at

  11. Modeling of synchronous machines with magnetic saturation

    Energy Technology Data Exchange (ETDEWEB)

    Rehaoulia, H. [Universite de Tunis-Ecole Superieure des Sciences et Techniques de Tunis (Unite de Recherche CSSS), 5 Avenue Taha Hussein Tunis 10008 (Tunisia)]; Henao, H.; Capolino, G.A. [Universite de Picardie Jules Vernes-Centre de Robotique, d' Electrotechnique et d' Automatique (UPRES-EA3299), 33 Rue Saint Leu, 80039 Amiens Cedex 1 (France)

    2007-04-15

    This paper deals with a method to derive multiple models of saturated round rotor synchronous machines, based on different selections of state-space variables. By considering the machine currents and fluxes as space vectors, possible d-q models are discussed and adequately numbered. As a result several novel models are found and presented. It is shown that the total number of d-q models for a synchronous machine, with basic dampers, is 64 and therefore much higher than known. The models found are classified into three families: current, flux and mixed models. These latter, the mixed ones, constitute the major part (52) and hence offer a large choice. Regarding magnetic saturation, the paper also presents a method to account for it whatever the choice of state-space variables. The approach consists of just elaborating the saturation model with winding currents as main variables and deriving all the other models from it, by ordinary mathematical manipulations. The paper emphasizes the ability of the proposed approach to develop any existing model without exception. An application to prove the validity of the method and the equivalence between all developed models is reported. (author)

  12. Prototype-based models in machine learning

    NARCIS (Netherlands)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of poten
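
    A minimal sketch of the prototype-based idea follows: class-labelled prototypes are adapted with an LVQ1-style update and new points are classified by their nearest prototype. The data, the initial prototypes and the learning rate are illustrative assumptions.

        # Minimal sketch of prototype-based classification (nearest prototype) with an
        # LVQ1-style update rule. Data, labels and the learning rate are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)

        prototypes = np.array([[0.5, 0.5], [3.5, 3.5]], dtype=float)   # one prototype per class
        proto_labels = np.array([0, 1])
        lr = 0.05

        for xi, yi in zip(X, y):
            k = np.argmin(np.linalg.norm(prototypes - xi, axis=1))     # closest prototype
            sign = 1.0 if proto_labels[k] == yi else -1.0              # attract if correct, repel if not
            prototypes[k] += sign * lr * (xi - prototypes[k])

        pred = proto_labels[np.argmin(np.linalg.norm(prototypes[:, None] - X[None], axis=2), axis=0)]
        print("training accuracy:", (pred == y).mean(), "prototypes:", prototypes.round(2))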

  14. Support vector machine applied in QSAR modelling

    Institute of Scientific and Technical Information of China (English)

    MEI Hu; ZHOU Yuan; LIANG Guizhao; LI Zhiliang

    2005-01-01

    Support vector machine (SVM), partial least squares (PLS), and Back-Propagation artificial neural network (ANN) were employed to establish QSAR models of 2 dipeptide datasets. In order to validate predictive capabilities on external dataset of the resulting models, both internal and external validations were performed. The division of dataset into both training and test sets was carried out by D-optimal design. The results showed that support vector machine (SVM) behaved well in both calibration and prediction. For the dataset of 48 bitter tasting dipeptides (BTD), the results obtained by support vector regression (SVR) were superior to that by PLS in both calibration and prediction. When compared with BP artificial neural network, SVR showed less calibration power but more predictive capability. For the dataset of angiotensin-converting enzyme (ACE) inhibitors, the results obtained by support vector machine (SVM) regression were equivalent to those by PLS and BP artificial neural network. In both datasets, SVR using linear kernel function behaved well as that using radial basis kernel function. The results showed that there is wide prospect for the application of support vector machine (SVM) into QSAR modeling.
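
    A comparison of this kind is typically set up as cross-validated regression with different kernels. The sketch below mirrors that protocol in scikit-learn with synthetic descriptors standing in for the dipeptide QSAR data, which are not reproduced here.

        # Minimal sketch of the SVR kernel comparison against PLS described above,
        # using synthetic descriptors instead of the dipeptide datasets.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        X = rng.normal(size=(48, 6))                      # 48 'peptides' x 6 descriptors (illustrative)
        y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2, 0.1]) + 0.1 * rng.normal(size=48)

        models = {
            "SVR (linear)": SVR(kernel="linear", C=1.0),
            "SVR (RBF)": SVR(kernel="rbf", C=1.0, gamma="scale"),
            "PLS": PLSRegression(n_components=3),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean cross-validated R^2 = {r2:.3f}")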

  15. Magnetic field modelling of machine and multiple machine systems using dynamic reluctance mesh modelling

    OpenAIRE

    Yao, Li

    2006-01-01

    This thesis concerns the modified and improved, time-stepping, dynamic reluctance mesh (DRM) modelling technique for machines and its application to multiple machine systems with their control algorithms. Improvements are suggested which enable the stable solution of the resulting complex non-linear equations. The concept of finite element (FE) derived, overlap-curves has been introduced to facilitate the evaluation of the air-gap reluctances linking the teeth on the rotor to those on the sta...

  16. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
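
    A table-driven state machine is the simplest form of the executable specifications the book advocates. The toy states and events in the following sketch are invented for illustration.

        # Minimal sketch of a table-driven finite state machine used as an executable
        # specification. States and events are a toy example, not taken from the book.
        TRANSITIONS = {
            ("idle", "start"): "running",
            ("running", "pause"): "paused",
            ("paused", "start"): "running",
            ("running", "stop"): "idle",
            ("paused", "stop"): "idle",
        }

        def run(events, state="idle"):
            for ev in events:
                state = TRANSITIONS.get((state, ev), state)   # unknown events leave the state unchanged
                print(f"event {ev!r:8} -> state {state!r}")
            return state

        run(["start", "pause", "start", "stop"])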

  17. Establishment of an intrauterine hypoxic-ischemic brain damage model in near-term fetal rabbits

    Institute of Scientific and Technical Information of China (English)

    王能里; 南燕; 柳艳丽; 林素; 叶伟; 唐震海; 林锦; 林振浪

    2012-01-01

    AIM: To establish a model of sustained intrauterine hypoxic-ischemic brain damage (HIBD) in near-term (29 days gestational age) fetal rabbits, providing a suitable model for further study of the pathogenesis and treatment of neonatal HIBD. METHODS: Twenty-four healthy pregnant New Zealand white rabbits at 29 days of gestation were anesthetized with combined general and spinal anesthesia, and a 4F Fogarty arterial embolectomy catheter was introduced via the left femoral artery. In the experimental groups, 0.3 mL of saline was injected into the catheter balloon to occlude the uterine blood supply for 20 min, 25 min, 28 min, 30 min or 40 min (4 rabbits per group); in the control group (4 rabbits), the catheter was placed but no saline was injected. Caesarean section was performed 24 h later, the general condition of the newborn rabbits was recorded, and neurobehavioral and brain histopathological changes were assessed. RESULTS: The vital signs of the pregnant rabbits remained stable during anesthesia, no hypoxemia occurred, and anesthesia was well tolerated. After 0.3 mL of saline was injected into the balloon, the right femoral pulse of the pregnant rabbit disappeared and blood pressure became unmeasurable, whereas blood pressure in the control group showed no significant fluctuation (P>0.05). Sustained occlusion of the uterine blood supply caused fetal and neonatal death, neurobehavioral abnormalities in surviving newborn rabbits, and apoptosis of brain cells. With 20 min of occlusion, no stillbirths were found and neurobehavioral and histopathological changes in the newborn rabbits were not evident; with 25 min and 28 min of occlusion, stillbirth rates were 12.9% and 40.6%, respectively, surviving newborn rabbits showed varying degrees of neurobehavioral abnormality, and brain sections showed neuronal swelling, microglial activation and apoptosis; with occlusion longer than 30 min, the stillbirth rate reached 80.0%. CONCLUSION: Sustained occlusion of the uterine blood supply in pregnant rabbits caused fetal death, neonatal neurobehavioral abnormalities and brain histopathological changes, with different occlusion times producing different degrees of brain injury; occluding the uterine blood supply for 25-28 min provides a suitable fetal rabbit model of global hypoxic-ischemic brain damage for related research.

  18. Machine Learning Approaches for Modeling Spammer Behavior

    CERN Document Server

    Islam, Md Saiful; Islam, Md Rafiqul

    2010-01-01

    Spam is commonly known as unsolicited or unwanted email messages in the Internet causing potential threat to Internet Security. Users spend a valuable amount of time deleting spam emails. More importantly, ever increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful to model spammer behavior as the spammer constantly changes their tricks to circumvent these filters. The evasive tactics that the spammer uses are patterns and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns by well-known classification algorithms such as Na\\"ive Bayesian classifier (Na\\"ive Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, which is considerably an enhancement of performance compared to similar spammer behavior modeling research.
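
    The classifier comparison described above can be reproduced in outline with bag-of-words features and off-the-shelf learners. The tiny corpus in the sketch below is invented; it only illustrates the workflow, not the reported detection rate of around 92%.

        # Minimal sketch of comparing Naive Bayes and an SVM on bag-of-words features,
        # as in the spammer-behaviour study above. The tiny corpus is invented.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.svm import LinearSVC

        emails = ["win money now", "cheap meds offer", "meeting at noon",
                  "project report attached", "free lottery win", "lunch tomorrow?"]
        labels = [1, 1, 0, 0, 1, 0]                      # 1 = spam, 0 = ham

        X = CountVectorizer().fit_transform(emails)
        for clf in (MultinomialNB(), LinearSVC()):
            clf.fit(X, labels)
            acc = (clf.predict(X) == labels).mean()
            print(type(clf).__name__, "training accuracy:", acc)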

  19. A Knowledge base model for complex forging die machining

    CERN Document Server

    Mawussi, Kwamiwi; 10.1016/j.cie.2011.02.016

    2011-01-01

    Recent evolutions on forging process induce more complex shape on forging die. These evolutions, combined with High Speed Machining (HSM) process of forging die lead to important increase in time for machining preparation. In this context, an original approach for generating machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of complex forging die in geometric features. Technological data and topological relations are aggregated to a geometric feature in order to create machining features. Technological data, such as material, surface roughness and form tolerance are defined during forging process and dies design. These data are used to choose cutting tools and machining strategies. Topological relations define relative positions between the surfaces of the die CAD model. After machining features identification cutting tools and machining strategies currently used in HSM of forging die, are associated to them in order to generate mac...

  20. A language for easy and efficient modeling of Turing machines

    Institute of Scientific and Technical Information of China (English)

    Pinaki Chakraborty

    2007-01-01

    A Turing Machine Description Language (TMDL) is developed for easy and efficient modeling of Turing machines.TMDL supports formal symbolic representation of Turing machines. The grammar for the language is also provided. Then a fast singlepass compiler is developed for TMDL. The scope of code optimization in the compiler is examined. An interpreter is used to simulate the exact behavior of the compiled Turing machines. A dynamically allocated and resizable array is used to simulate the infinite tape of a Turing machine. The procedure for simulating composite Turing machines is also explained. In this paper, two sample Turing machines have been designed in TMDL and their simulations are discussed. The TMDL can be extended to model the different variations of the standard Turing machine.
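
    The core of such a simulator is a transition table plus a tape that grows on demand. The sketch below shows that mechanism in Python; the example machine (a unary incrementer) is illustrative and is not written in TMDL.

        # Minimal sketch of a Turing machine simulator with a tape that grows on demand.
        def run_tm(rules, tape, state="q0", halt="halt", blank="_", max_steps=1000):
            tape = list(tape)
            head = 0
            for _ in range(max_steps):
                if state == halt:
                    break
                write, move, state = rules[(state, tape[head])]
                tape[head] = write
                head += 1 if move == "R" else -1
                if head == len(tape):                 # grow the tape to the right
                    tape.append(blank)
                elif head < 0:                        # grow the tape to the left
                    tape.insert(0, blank)
                    head = 0
            return "".join(tape).strip("_"), state

        # Move right over 1s, write one extra 1 on the first blank, then halt.
        rules = {("q0", "1"): ("1", "R", "q0"),
                 ("q0", "_"): ("1", "R", "halt")}
        print(run_tm(rules, "111_"))                  # -> ('1111', 'halt')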

  1. A Knowledge base model for complex forging die machining

    OpenAIRE

    Mawussi, Kwamiwi; Tapie, Laurent

    2011-01-01

    International audience; Recent evolutions on forging process induce more complex shape on forging die. These evolutions, combined with High Speed Machining (HSM) process of forging die lead to important increase in time for machining preparation. In this context, an original approach for generating machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of complex forging die in geometric features. Technological data and ...

  2. Screening for Prediabetes Using Machine Learning Models

    Directory of Open Access Journals (Sweden)

    Soo Beom Choi

    2014-01-01

    Full Text Available The global prevalence of diabetes is rapidly increasing. Studies support the necessity of screening and interventions for prediabetes, which could result in serious complications and diabetes. This study aimed at developing an intelligence-based screening model for prediabetes. Data from the Korean National Health and Nutrition Examination Survey (KNHANES) were used, excluding subjects with diabetes. The KNHANES 2010 data (n=4685) were used for training and internal validation, while data from KNHANES 2011 (n=4566) were used for external validation. We developed two models to screen for prediabetes using an artificial neural network (ANN) and support vector machine (SVM) and performed a systematic evaluation of the models using internal and external validation. We compared the performance of our models with that of a screening score model based on logistic regression analysis for prediabetes that had been developed previously. The SVM model showed an area under the curve of 0.731 in the external datasets, which is higher than those of the ANN model (0.729) and the screening score model (0.712), respectively. The prescreening methods developed in this study performed better than the screening score model that had been developed previously and may be a more effective method for prediabetes screening.
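
    The evaluation protocol amounts to training on one survey wave and scoring the area under the ROC curve on a later wave. The sketch below mirrors that protocol with synthetic data standing in for the KNHANES waves; it will not reproduce the reported AUC values.

        # Minimal sketch of the external-validation AUC comparison described above.
        # Synthetic data stands in for the KNHANES 2010 (training) and 2011 (validation) waves.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        def make_wave(n):
            X = rng.normal(size=(n, 5))                              # age, BMI, ... (illustrative features)
            p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 1])))
            return X, rng.binomial(1, p)

        X_train, y_train = make_wave(1000)     # stand-in for 'KNHANES 2010'
        X_valid, y_valid = make_wave(1000)     # stand-in for 'KNHANES 2011' (external validation)

        for model in (SVC(probability=True), MLPClassifier(max_iter=500)):
            model.fit(X_train, y_train)
            auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
            print(type(model).__name__, "external AUC:", round(auc, 3))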

  3. TRANSLATOR OF FINITE STATE MACHINE MODEL PARAMETERS FROM MATLAB ENVIRONMENT INTO HUMAN-MACHINE INTERFACE APPLICATION

    OpenAIRE

    2012-01-01

    Technology and means for automatic translation of FSM model parameters from a Matlab application to a human-machine interface application are proposed. An example of applying the technology to an electric apparatus model is described.

  4. Thermal models of electric machines with dynamic workloads

    Directory of Open Access Journals (Sweden)

    Christian Pohlandt

    2015-07-01

    Full Text Available Electric powertrains are increasingly used in off-highway machines because of easy controllability and excellent overall efficiency. The main goals are increasing the energy efficiency of the machine and the optimization of the work process. The thermal behaviour of electric machines with dynamic workloads applied to them is a key design factor for electric powertrains in off-highway machines. This article introduces a methodology to model the thermal behaviour of electric machines. Using a noncausal modelling approach, an electric powertrain is analysed for dynamic workloads. Cause-effect relationships and reasons for increasing temperature are considered as well as various cooling techniques. The validation of the overall simulation model of the powertrain with measured field data workloads provides convincing results to evaluate numerous applications of electric machines in off-highway machines.

  5. A Machine-Learning-Driven Sky Model.

    Science.gov (United States)

    Satylmys, Pynar; Bashford-Rogers, Thomas; Chalmers, Alan; Debattista, Kurt

    2017-01-01

    Sky illumination is responsible for much of the lighting in a virtual environment. A machine-learning-based approach can compactly represent sky illumination from both existing analytic sky models and from captured environment maps. The proposed approach can approximate the captured lighting at a significantly reduced memory cost and enable smooth transitions of sky lighting to be created from a small set of environment maps captured at discrete times of day. The author's results demonstrate accuracy close to the ground truth for both analytical and capture-based methods. The approach has a low runtime overhead, so it can be used as a generic approach for both offline and real-time applications.

  6. Modeling of Exterior Rotor Permanent Magnet Machines with Concentrated Windings

    NARCIS (Netherlands)

    Vu Xuan, H.

    2012-01-01

    In this thesis modeling, analysis, design and measurement of exterior rotor permanent magnet (PM) machines with concentrated windings are dealt with. Special attention is paid to slotting effect. The PM machine is integrated in flywheel and used for small-scale ship application. Analytical model and

  7. Near-Term Fetuses Process Temporal Features of Speech

    Science.gov (United States)

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  8. Magnetic equivalent circuit model for unipolar hybrid excitation synchronous machine

    Directory of Open Access Journals (Sweden)

    Kupiec Emil

    2015-03-01

    Full Text Available Lately, there has been increased interest in hybrid excitation electrical machines. Hybrid excitation is a construction that combines permanent magnet excitation with wound field excitation. Within the general classification, these machines can be classified as modified synchronous machines or inductor machines. These machines may be applied as motors and generators. The complexity of electromagnetic phenomena which occur as a result of coupling of magnetic fluxes of separate excitation systems with perpendicular magnetic axis is a motivation to formulate various mathematical models of these machines. The presented paper discusses the construction of a unipolar hybrid excitation synchronous machine. The magnetic equivalent circuit model including nonlinear magnetization curves is presented. Based on this model, it is possible to determine the multi-parameter relationships between the induced voltage and magnetomotive force in the excitation winding. Particular attention has been paid to the analysis of the impact of additional stator and rotor yokes on above relationship. Induced voltage determines the remaining operating parameters of the machine, both in the motor and generator mode of operation. The analysis of chosen correlations results in an identification of the effective control range of electromotive force of the machine.

  9. MODEL STUDY OF THE DOUBLE FED MACHINE WITH CURRENT CONTROL

    Directory of Open Access Journals (Sweden)

    A. S. Lyapin

    2016-07-01

    Full Text Available The paper deals with modeling results of the double fed induction machine with current control in the rotor circuit. We show the most promising applications of electric drives on the basis of the double fed induction machine and their advantages. We present and consider functional scheme of the electric drive on the basis of the double fed induction machine with current control. Equations are obtained for creation of such machine mathematical model. Expressions for vector projections of rotor current are given. According to the obtained results, the change of the vector projections of rotor current ensures operation of the double fed induction machine with the specified values of active and reactive stator power throughout the variation range of sliding motion. We consider static characteristics of double fed machine with current control. Energy processes proceeding in the machine are analyzed. We confirm the operation possibility of double fed induction machine with current control in the rotor circuit with given values of active and reactive stator power. The presented results can be used for creation of mathematical models and static characteristics of double fed machines with current control of various capacities.

  10. Machine Cognition Models: EPAM and GPS

    CERN Document Server

    Elouafiq, Ali

    2012-01-01

    Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was a main driver behind the rise of civilizations. It started with deploying animals to automate tasks in the fields of agriculture (bulls), transportation (e.g. horses and donkeys), and even communication (pigeons). Millennia later came the golden age of "Al-jazari" and other Muslim inventors, the pioneers of automation, which gave birth to the industrial revolution in Europe centuries after. At the end of the nineteenth century a new era was to begin, the computational era, the most advanced technological and scientific development driving mankind and the reason behind the evolution of sciences such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model a machine that behaves as they do, which pushed us to think about designing and implementing "Things that-Thinks", then artificial intelligence was. In this...

  11. Modelling Effectiveness of Machine Gun Fire

    OpenAIRE

    Dutta, D.; S. Sabhanval

    2002-01-01

    Machine gun is an effective infantry weapon which can cause heavy damage to enemy targets, if sited in a tactically favourable position. It can be engaged effectively against both static and moving targets. The paper deals with the determination of target vulnerability under effective machine gun fire considering relevant tactical parameters, eg, target aiming point, trajectory of fire, sweep angle, target frontage, posture, direction of attack, etc.

  12. Modelling Effectiveness of Machine Gun Fire

    Directory of Open Access Journals (Sweden)

    D. Dutta

    2002-04-01

    Full Text Available Machine gun is an effective infantry weapon which can cause heavy damage to enemy targets, if sited in a tactically favourable position. It can be engaged effectively against both static and moving targets. The paper deals with the determination of target vulnerability under effective machine gun fire considering relevant tactical parameters, eg, target aiming point, trajectory of fire, sweep angle, target frontage, posture, direction of attack, etc.

  13. ABOUT COMPLEX APPROACH TO MODELLING OF TECHNOLOGICAL MACHINES FUNCTIONING

    Directory of Open Access Journals (Sweden)

    A. A. Honcharov

    2015-01-01

    Full Text Available Problems arise in the process of designing, production and investigation of a complicated technological machine. These problems concern not only properties of some types of equipment but they have respect to regularities of control object functioning as a whole. A technological machine is thought of as such technological complex where it is possible to lay emphasis on a control system (or controlling device) and a controlled object. The paper analyzes a number of existing approaches to construction of models for controlling devices and their functioning. A complex model for a technological machine operation has been proposed in the paper; in other words it means functioning of a controlling device and a controlled object of the technological machine. In this case models of the controlling device and the controlled object of the technological machine can be represented as aggregate combination (elements of these models). The paper describes a conception on realization of a complex model for a technological machine as a model for interaction of units (elements) in the controlling device and the controlled object. When a control activation is given to the controlling device of the technological machine its modelling is executed at an algorithmic or logic level and the obtained output signals are interpreted as events and information about them is transferred to executive mechanisms. The proposed scheme of aggregate integration considers element models as object classes and the integration scheme is presented as a combination of object property values (combination of a great many input and output contacts) and combination of object interactions (in the form of an integration operator). Spawn of parent object descendants of the technological machine model and creation of their copies in various project parts is one of the most important means of the distributed technological machine modelling that makes it possible to develop complicated models of

  14. Model of Pulsed Electrical Discharge Machining (EDM) using RL Circuit

    Directory of Open Access Journals (Sweden)

    Ade Erawan Bin Minhat

    2014-10-01

    Full Text Available This article presents a model of pulsed Electrical Discharge Machining (EDM) using an RL circuit. Several mathematical models have been successfully developed based on the initial, ignition and discharge phases of the gap current and voltage. According to these models, the circuit schematic of a transistor pulse power generator has been designed using an electrical model in Matlab Simulink software to identify the profile of voltage and current during the machining process. Then, the simulation results are compared with the experimental results.

  15. Generative Modeling for Machine Learning on the D-Wave

    Energy Technology Data Exchange (ETDEWEB)

    Thulasidasan, Sunil [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Information Sciences Group

    2016-11-15

    These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
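
    The classical training step that the slides map onto D-Wave sampling is contrastive divergence for a restricted Boltzmann machine. Below is a minimal CD-1 update; layer sizes, the mini-batch and the learning rate are toy assumptions.

        # Minimal sketch of one CD-1 update for a binary restricted Boltzmann machine,
        # i.e. the classical training step the slides map onto D-Wave sampling.
        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, lr = 6, 3, 0.1
        W = 0.01 * rng.normal(size=(n_visible, n_hidden))
        b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        v0 = rng.integers(0, 2, size=(10, n_visible)).astype(float)   # a toy mini-batch

        # Positive phase: hidden probabilities and samples given the data.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)

        # Negative phase (one Gibbs step): reconstruct visibles, then hidden probabilities.
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)

        # CD-1 parameter update: difference of data-driven and model-driven correlations.
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        b_v += lr * (v0 - v1).mean(axis=0)
        b_h += lr * (ph0 - ph1).mean(axis=0)
        print("updated W shape:", W.shape)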

  16. Advanced wind turbine near-term product development. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-01-01

    In 1990 the US Department of Energy initiated the Advanced Wind Turbine (AWT) Program to assist the growth of a viable wind energy industry in the US. This program, which has been managed through the National Renewable Energy Laboratory (NREL) in Golden, Colorado, has been divided into three phases: (1) conceptual design studies, (2) near-term product development, and (3) next-generation product development. The goals of the second phase were to bring into production wind turbines which would meet the cost goal of $0.05 kWh at a site with a mean (Rayleigh) windspeed of 5.8 m/s (13 mph) and a vertical wind shear exponent of 0.14. These machines were to allow a US-based industry to compete domestically with other sources of energy and to provide internationally competitive products. Information is given in the report on design values of peak loads and of fatigue spectra and the results of the design process are summarized in a table. Measured response is compared with the results from mathematical modeling using the ADAMS code and is discussed. Detailed information is presented on the estimated costs of maintenance and on spare parts requirements. A failure modes and effects analysis was carried out and resulted in approximately 50 design changes including the identification of ten previously unidentified failure modes. The performance results of both prototypes are examined and adjusted for air density and for correlation between the anemometer site and the turbine location. The anticipated energy production at the reference site specified by NREL is used to calculate the final cost of energy using the formulas indicated in the Statement of Work. The value obtained is $0.0514/kWh in January 1994 dollars. 71 figs., 30 tabs.

  17. Cranial sonography in term and near-term infants

    Energy Technology Data Exchange (ETDEWEB)

    Yikilmaz, Ali [Gevher Nesibe Hospital and Erciyes Medical School, Department of Radiology, Talas, Kayseri (Turkey); Taylor, George A. [Children' s Hospital Boston and Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2008-06-15

    Sonographic patterns of brain injury in the term and near-term infant are quite different from those in the premature infant. Although periventricular leukomalacia and germinal matrix hemorrhage are rarely seen in term infants, selective neuronal injury, parasagittal infarction, focal stroke, diffuse hypoxic-ischemic injury, and deep parenchymal hemorrhages are more common lesions. In addition, congenital brain tumors, hamartomatous lesions, such as hemimegalencephaly, and tuberous sclerosis can mimic ischemic and hemorrhagic injury. Sonography remains an important tool in the initial evaluation of intracranial abnormalities in critically ill term and near-term infants. An understanding of the differences in etiology, sonographic patterns, and limitations of sonography in the term infant is essential for accurate and effective diagnoses in this age group. (orig.)

  18. Near-term hybrid vehicle program, phase 1

    Science.gov (United States)

    1979-01-01

    The preliminary design of a hybrid vehicle which fully meets or exceeds the requirements set forth in the Near Term Hybrid Vehicle Program is documented. Topics addressed include the general layout and styling, the power train specifications with discussion of each major component, vehicle weight and weight breakdown, vehicle performance, measures of energy consumption, and initial cost and ownership cost. Alternative design options considered and their relationship to the design adopted, computer simulation used, and maintenance and reliability considerations are also discussed.

  19. Generating Turing Machines by Use of Other Computation Models

    Directory of Open Access Journals (Sweden)

    Leszek Dubiel

    2003-01-01

    Full Text Available For each problem that can be solved there exists an algorithm, which can be described with a program of a Turing machine. Because this is a very simple model, programs tend to be very complicated and hard for a human to analyse. The best practice to solve a given type of problems is to define a new model of computation that allows for quick and easy programming, and then to emulate its operation with a Turing machine. This article shows how to define the most suitable model for computation on natural numbers and defines a Turing machine that emulates its operation.

  20. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze... electrode force, and the time of stabilizing does not depend on the level of the force. An additional spring mounted in the welding head improves the machine touching behavior due to a soft electrode application, but this results in a longer time of oscillation of the electrode force, especially when...

  1. Probabilistic models and machine learning in structural bioinformatics

    DEFF Research Database (Denmark)

    Hamelryck, Thomas

    2009-01-01

    Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis

  2. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machines error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of such concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machines error modeling.
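
    Dual numbers have the form a + b*eps with eps squared equal to zero, so products keep exactly the first-order terms. The short sketch below shows that property, which is what makes matrices over dual numbers natural for modelling small geometric errors of machine axes.

        # Minimal sketch of dual-number arithmetic (a + b*eps with eps**2 = 0). The dual
        # part propagates first-order terms exactly, which is why matrices over dual
        # numbers capture small geometric errors.
        class Dual:
            def __init__(self, real, dual=0.0):
                self.real, self.dual = real, dual

            def __add__(self, other):
                return Dual(self.real + other.real, self.dual + other.dual)

            def __mul__(self, other):
                # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
                return Dual(self.real * other.real,
                            self.real * other.dual + self.dual * other.real)

            def __repr__(self):
                return f"{self.real} + {self.dual}*eps"

        # A nominal length of 10 with a unit error component becomes 10 + 1*eps; squaring
        # it keeps only the first-order error term 2*10, exactly as a linearised model would.
        x = Dual(10.0, 1.0)
        print(x * x)          # 100.0 + 20.0*eps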

  3. EQUIVALENT NORMAL CURVATURE APPROACH MILLING MODEL OF MACHINING FREEFORM SURFACES

    Institute of Scientific and Technical Information of China (English)

    YI Xianzhong; MA Weiguo; QI Haiying; YAN Zesheng; GAO Deli

    2008-01-01

    A new milling methodology with the equivalent normal curvature milling model machining freeform surfaces is proposed based on the normal curvature theorems on differential geometry. Moreover, a specialized whirlwind milling tool and a 5-axis CNC horizontal milling machine are introduced. This new milling model can efficiently enlarge the material removal volume at the tip of the whirlwind milling tool and improve the producing capacity. The machining strategy of this model is to regulate the orientation of the whirlwind milling tool relatively to the principal directions of the workpiece surface at the point of contact, so as to create a full match with collision avoidance between the workpiece surface and the symmetric rotational surface of the milling tool. The practical results show that this new milling model is an effective method in machining complex three- dimensional surfaces. This model has a good improvement on finishing machining time and scallop height in machining the freeform surfaces over other milling processes. Some actual examples for manufacturing the freeform surfaces with this new model are given.

  4. Trajectories for a Near Term Mission to the Interstellar Medium

    Science.gov (United States)

    Arora, Nitin; Strange, Nathan; Alkalai, Leon

    2015-01-01

    Trajectories for rapid access to the interstellar medium (ISM) with a Kuiper Belt Object (KBO) flyby, launching between 2022 and 2030, are described. An impulsive-patched-conic broad search algorithm combined with a local optimizer is used for the trajectory computations. Two classes of trajectories, (1) with a powered Jupiter flyby and (2) with a perihelion maneuver, are studied and compared. Planetary flybys combined with leveraging maneuvers reduce launch C3 requirements (by factor of 2 or more) and help satisfy mission-phasing constraints. Low launch C3 combined with leveraging and a perihelion maneuver is found to be enabling for a near-term potential mission to the ISM.

  5. Near-term electric vehicle program. Phase II

    Energy Technology Data Exchange (ETDEWEB)

    1978-12-01

    The Integrated Vehicle Tests will be performed to determine the degree to which the (DOE) performance goals for the near-term electric vehicle program have been met, to provide a subjective evaluation of the regeneration brake system, to provide a general customer acceptability review. The specific tests covered in this plan are enumerated. Group 1 tests will be performed on the first available vehicle and will, in general, concentrate on performance tests to satisfy the DOE goals. Group 2 tests, to be performed on Vehicle No. 2, will provide additional test data (braking, suspension system, shake, noise level, ride and handling evaluations, and general customer acceptability review).

  6. Near-term lunar nuclear thermal rocket engine options

    Science.gov (United States)

    Pelaccio, Dennis G.; Scheil, Christine M.; Collins, John T.

    1991-01-01

    The Nuclear Thermal Rocket (NTR) is an attractive candidate propulsion system option for manned planetary missions. Its high performance capability for such missions translates into a substantial reduction in low-earth-orbit (LEO) required mass and trip times with increased operational flexibility. This study examined NTR engine options that could support near-term lunar mission operations. Expander and gas generator cycle, solid-core NERVA derivative reactor-based NTR engines were investigated. Weight, size, operational characteristics, and design features for representative NTR engine concepts are presented. The impact of using these NTR engines for a typical lunar mission scenario is also examined.

  7. Monitoring Vibration of A Model of Rotating Machine

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2012-03-01

    Full Text Available Mechanical movement or motion of a rotating machine normally causes additional vibration. A vibration sensing device must be added to constantly monitor vibration level of the system having a rotating machine, since the vibration frequency and amplitude cannot be measured quantitatively by only sight or touch. If the vibration signals from the machine have a lot of noise, there are possibilities that the rotating machine has defects that can lead to failure. In this experimental research project, a vibration structure is constructed in a scaled model to simulate vibration and to monitor system performance in terms of vibration level in cases of rotation under balanced and unbalanced conditions. In this scaled model, the output signal of the vibration sensor is processed in a microcontroller and then transferred to a computer via a serial communication medium, and plotted on the screen with data plotter software developed using C language. The signal waveform of the vibration is displayed to allow further analysis of the vibration. A vibration level monitor can be set in the microcontroller to allow shutdown of the rotating machine in case of excessive vibration to protect the rotating machine from further damage. Experiment results show agreement with theory that an unbalanced condition on a rotating machine can lead to larger vibration amplitude compared to a balanced condition. Adding and reducing the mass for balancing can be performed to obtain a lower vibration level.

  8. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model such as the I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.

  9. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model such as the I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.

  10. Atlantic near-term climate variability and the role of a resolved Gulf Stream

    Science.gov (United States)

    Siqueira, Leo; Kirtman, Ben P.

    2016-04-01

    There is a continually increasing demand for near-term (i.e., lead times up to a couple of decades) climate information. This demand is partly driven by the need to have robust forecasts and is partly driven by the need to assess how much of the ongoing climate change is due to natural variability and how much is due to anthropogenic increases in greenhouse gases or other external factors. Here we discuss results from a set of state-of-the-art climate model experiments in comparison with observational estimates that show that an assessment of predictability requires models that capture the variability of major oceanic fronts, which are, at best, poorly resolved and may even be absent in the near-term prediction of Intergovernmental Panel on Climate Change class models. This is the first time that air-sea interactions associated with resolved Gulf Stream sea surface temperature have been identified in the context of a state-of-the-art global coupled climate model with inferred near-term predictability.

  11. X: A Comprehensive Analytic Model for Parallel Machines

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ang; Song, Shuaiwen; Brugel, Eric; Kumar, Akash; Chavarría-Miranda, Daniel; Corporaal, Henk

    2016-05-23

    To continuously comply with Moore’s Law, modern parallel machines become increasingly complex. Effectively tuning application performance for these machines therefore becomes a daunting task. Moreover, identifying performance bottlenecks at application and architecture level, as well as evaluating various optimization strategies, are becoming extremely difficult when the entanglement of numerous correlated factors is being presented. To tackle these challenges, we present a visual analytical model named “X”. It is intuitive and sufficiently flexible to track all the typical features of a parallel machine.

  12. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels;

    2003-01-01

    The dynamic mechanical response of a resistance welding machine is very important to the weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical... for both upper and lower electrode systems. This has laid a foundation for modeling the welding process and selecting the welding parameters considering the machine factors. The method is straightforward and easy to apply in industry since the whole procedure is based on tests with no requirements...

  13. An Access Control Model of Virtual Machine Security

    Directory of Open Access Journals (Sweden)

    QIN Zhong-yuan

    2013-07-01

    Full Text Available Virtualization technology becomes a hot IT technology with the popularity of Cloud Computing. However, new security issues arise with it. Specifically, the resources sharing and data communication in virtual machines are most concerned. In this paper an access control model is proposed which combines the Chinese Wall and BLP model. The BLP multi-level security model is introduced with corresponding improvement based on the PCW (Prioritized Chinese Wall) security model. This model can be used to safely control the resources and event behaviors in virtual machines. Experimental results show its effectiveness and safety.
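
    In outline, such a combined model grants access only if both a Chinese Wall conflict-of-interest check and the BLP "no read up, no write down" rules pass. The sketch below illustrates that combination with an invented toy policy of VMs, conflict classes and security levels; it is not the paper's PCW formulation.

        # Minimal sketch combining a Chinese Wall conflict-of-interest check with the
        # BLP "no read up, no write down" rules. The policy data are an invented toy example.
        LEVELS = {"public": 0, "confidential": 1, "secret": 2}
        CONFLICT_CLASS = {"vm_bankA": "banks", "vm_bankB": "banks", "vm_oil": "energy"}

        def chinese_wall_ok(history, target):
            """A subject may not touch two VMs from the same conflict-of-interest class."""
            return all(CONFLICT_CLASS[v] != CONFLICT_CLASS[target] or v == target for v in history)

        def blp_ok(subject_level, object_level, mode):
            if mode == "read":                                          # no read up
                return LEVELS[subject_level] >= LEVELS[object_level]
            return LEVELS[subject_level] <= LEVELS[object_level]        # write: no write down

        history = ["vm_bankA"]
        print(chinese_wall_ok(history, "vm_bankB"))                     # False: same conflict class
        print(blp_ok("confidential", "secret", "read"))                 # False: read up denied
        print(chinese_wall_ok(history, "vm_oil") and blp_ok("confidential", "public", "read"))  # True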

  14. Modeling powder encapsulation in dosator-based machines: I. Theory.

    Science.gov (United States)

    Khawam, Ammar

    2011-12-15

    Automatic encapsulation machines have two dosing principles: dosing disc and dosator. Dosator-based machines compress the powder to plugs that are transferred into capsules. The encapsulation process in dosator-based capsule machines was modeled in this work. A model was proposed to predict the weight and length of produced plugs. According to the model, the plug weight is a function of piston dimensions, powder-bed height, bulk powder density and precompression densification inside dosator while plug length is a function of piston height, set piston displacement, spring stiffness and powder compressibility. Powder densification within the dosator can be achieved by precompression, compression or both. Precompression densification depends on the powder to piston height ratio while compression densification depends on piston displacement against powder. This article provides the theoretical basis of the encapsulation model, including applications and limitations. The model will be applied to experimental data separately.

  15. Virtual Sensor for Calibration of Thermal Models of Machine Tools

    Directory of Open Access Journals (Sweden)

    Alexander Dementjev

    2014-01-01

    strictly depends on the accuracy of these machines, but they are prone to deformation caused by their own heat. The deformation needs to be compensated in order to assure accurate production. So an adequate model of the high-dimensional thermal deformation process must be created and the parameters of this model must be evaluated. Unfortunately, such parameters are often unknown and cannot be calculated a priori. Parameter identification during real experiments is not an option for these models because of its high engineering and machine time effort. The installation of additional sensors to measure these parameters directly is uneconomical. Instead, an effective calibration of thermal models can be reached by combining real and virtual measurements on a machine tool during its real operation, without installing additional sensors. In this paper, a new approach for thermal model calibration is presented. The expected results are very promising and can be recommended as an effective solution for this class of problems.

  16. Committee of machine learning predictors of hydrological models uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri

    2014-05-01

    In prediction of uncertainty based on machine learning methods, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of a deterministic output from a hydrological model [2]. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models). There is no clear evidence which model is the best since there is no basis for comparison. A solution could be to form a committee of all models and to use a dynamic averaging scheme to generate the final output [3]. This approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model (HBV) in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
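
    A minimal sketch of the committee idea mentioned above, assuming a simple dynamic weighting scheme (weights inversely proportional to each member's recent absolute error); the member predictions and errors below are invented placeholders, not results from the HBV case study.

    import numpy as np

    def committee_predict(member_preds, member_recent_errors, eps=1e-6):
        # dynamic weighted average: members with smaller recent errors get larger weights
        weights = 1.0 / (np.asarray(member_recent_errors) + eps)
        weights /= weights.sum()
        return float(np.dot(weights, member_preds))

    preds = np.array([12.1, 11.4, 13.0])   # e.g. ANN, model tree, LWR quantile estimates
    errors = np.array([0.8, 0.5, 1.2])     # moving-average absolute errors of each member
    print(committee_predict(preds, errors))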

  17. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  18. Near-term electric vehicle program: Phase I, final report

    Energy Technology Data Exchange (ETDEWEB)

    Rowlett, B. H.; Murry, R.

    1977-08-01

    A final report is given for an Energy Research and Development Administration effort aimed at a preliminary design of an energy-efficient electric commuter car. An electric-powered passenger vehicle using a regenerative power system was designed to meet the near-term ERDA electric automobile goals. The program objectives were to (1) study the parameters that affect vehicle performance, range, and cost; (2) design an entirely new electric vehicle that meets performance and economic requirements; and (3) define a program to develop this vehicle design for production in the early 1980's. The design and performance features of the preliminary (baseline) electric-powered passenger vehicle design are described, including the baseline power system, system performance, economic analysis, reliability and safety, alternate designs and options, development plan, and conclusions and recommendations. All aspects of the baseline design were defined in sufficient detail to verify performance expectations and system feasibility.

  19. Near term hybrid passenger vehicle development program, phase 1

    Science.gov (United States)

    1980-01-01

    Missions for hybrid vehicles that promise to yield high petroleum impact were identified and a preliminary design, was developed that satisfies the mission requirements and performance specifications. Technologies that are critical to successful vehicle design, development and fabrication were determined. Trade-off studies to maximize fuel savings were used to develop initial design specifications of the near term hybrid vehicle. Various designs were "driven" through detailed computer simulations which calculate the petroleum consumption in standard driving cycles, the petroleum and electricity consumptions over the specified missions, and the vehicle's life cycle costs over a 10 year vehicle lifetime. Particular attention was given to the selection of the electric motor, heat engine, drivetrain, battery pack and control system. The preliminary design reflects a modified current compact car powered by a currently available turbocharged diesel engine and a 24 kW (peak) compound dc electric motor.

  20. Near-Term Laser Launch Capability: The Heat Exchanger Thruster

    Science.gov (United States)

    Kare, Jordin T.

    2003-05-01

    The heat exchanger (HX) thruster concept uses a lightweight (up to 1 MW/kg) flat-plate heat exchanger to couple laser energy into flowing hydrogen. Hot gas is exhausted via a conventional nozzle to generate thrust. The HX thruster has several advantages over ablative thrusters, including high efficiency, design flexibility, and operation with any type of laser. Operating the heat exchanger at a modest exhaust temperature, nominally 1000 C, allows it to be fabricated cheaply, while providing sufficient specific impulse (~600 seconds) for a single-stage vehicle to reach orbit with a useful payload; a nominal vehicle design is described. The HX thruster is also comparatively easy to develop and test, and offers an extremely promising route to near-term demonstration of laser launch.

  1. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.
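
    The "most limiting axis" idea can be illustrated with a small sketch: for a given trajectory segment, the reachable feedrate is capped by the axis whose joint displacement would require a velocity above its limit. The axis limits, units and kinematics below are placeholder assumptions, not the paper's model (which also accounts for acceleration, jerk and NC-unit functions).

    import numpy as np

    def achievable_feedrate(programmed_feedrate, path_length,
                            joint_displacements, joint_velocity_limits):
        # time the segment would take at the programmed feedrate
        t_programmed = path_length / programmed_feedrate
        # time each axis needs when running at its own velocity limit
        t_axes = np.abs(joint_displacements) / np.asarray(joint_velocity_limits)
        # the slowest (most limiting) requirement dictates the real segment time
        t_limiting = max(t_programmed, float(t_axes.max()))
        return path_length / t_limiting

    # 5-axis example with consistent (arbitrary) units for lengths and velocities
    print(achievable_feedrate(10.0, 2.0,
                              joint_displacements=[1.5, 0.2, 0.1, 0.8, 0.3],
                              joint_velocity_limits=[30.0, 30.0, 20.0, 2.0, 2.5]))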

  2. From Points to Forecasts: Predicting Invasive Species Habitat Suitability in the Near Term

    Directory of Open Access Journals (Sweden)

    Tracy R. Holcombe

    2010-05-01

    Full Text Available We used near-term climate scenarios for the continental United States to model 12 invasive plant species. We created three potential habitat suitability models for each species using maximum entropy modeling: (1) current; (2) 2020; and (3) 2035. Area under the curve values for the models ranged from 0.92 to 0.70, with 10 of the 12 being above 0.83, suggesting strong and predictable species-environment matching. Change in area between the current potential habitat and 2035 ranged from a potential habitat loss of about 217,000 km2 to a potential habitat gain of about 133,000 km2.

  3. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.......The promise, which comes along with Building Information Models, is that they are information rich, machine readable and represent the insights of multiple building disciplines within single or linked models. However, this knowledge has to be stated explicitly in order to be understood. Trained...

  4. Parameter optimization model in electrical discharge machining process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Electrical discharge machining (EDM), at present, is still an experience-based process, wherein selected parameters are often far from the optimum, and at the same time selecting optimization parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish the parameter optimization model. An ANN model which adopts the Levenberg-Marquardt algorithm has been set up to represent the relationship between material removal rate (MRR) and input parameters, and the GA is used to optimize parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using optimized machining parameters.
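
    A hedged sketch of the ANN-plus-GA coupling described above: a stand-in surrogate function plays the role of the trained ANN mapping machining parameters to MRR, and a simplified evolutionary search (truncation selection with Gaussian mutation; crossover omitted) looks for the parameter set that maximizes it. The parameter ranges and surrogate shape are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    LOW = np.array([1.0, 10.0, 1.0])      # [current (A), pulse-on (us), pulse-off (us)]
    HIGH = np.array([20.0, 200.0, 50.0])

    def surrogate_mrr(params):
        # placeholder for the trained ANN; purely illustrative response surface
        current, t_on, t_off = params
        return current * t_on / (1.0 + t_off) - 0.05 * current ** 2

    def ga_maximize(fitness, pop_size=40, generations=60, mutation=0.05):
        pop = rng.uniform(LOW, HIGH, size=(pop_size, 3))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]            # keep best half
            children = parents[rng.integers(len(parents), size=pop_size)]
            children = children + rng.normal(0.0, mutation, children.shape) * (HIGH - LOW)
            pop = np.clip(children, LOW, HIGH)
        best = max(pop, key=fitness)
        return best, fitness(best)

    best_params, best_mrr = ga_maximize(surrogate_mrr)
    print("best parameters:", np.round(best_params, 2), "surrogate MRR:", round(best_mrr, 2))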

  5. Multi products single machine EPQ model with immediate rework process

    Directory of Open Access Journals (Sweden)

    Jahangir Biabani

    2012-01-01

    Full Text Available This paper develops an economic production quantity (EPQ) inventory model with a rework process for a single-stage production system with one machine. The existence of a unique machine results in limited production capacity. The aim of this research is to determine both the optimal cycle length and the optimal production quantity for each product to minimize the expected total cost (holding, production, setup, and rework costs). The convexity of the inventory model is derived, and the objective function is proved to be convex. The proposed inventory model is validated with illustrative numerical examples, and the optimal period length and the total system cost are analyzed.
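
    As a rough illustration of the kind of cost function being minimized (not the authors' exact expression, which also includes rework terms), the sketch below uses a common multi-product, single-machine EPQ form, ETC(T) = sum_i A_i/T + (T/2) * sum_i h_i d_i (1 - d_i/p_i), and grid-searches the common cycle length T. All numbers are invented.

    import numpy as np

    def expected_total_cost(T, setup, demand, prod_rate, holding):
        setup_cost = np.sum(setup) / T                    # setups incurred per unit time
        holding_cost = 0.5 * T * np.sum(holding * demand * (1.0 - demand / prod_rate))
        return setup_cost + holding_cost

    setup = np.array([120.0, 80.0, 150.0])         # cost per setup
    demand = np.array([400.0, 250.0, 600.0])       # units per period
    prod_rate = np.array([1500.0, 1200.0, 2000.0]) # units per period
    holding = np.array([0.4, 0.6, 0.3])            # cost per unit per period

    T_grid = np.linspace(0.05, 2.0, 400)
    costs = [expected_total_cost(T, setup, demand, prod_rate, holding) for T in T_grid]
    T_star = T_grid[int(np.argmin(costs))]
    print(f"approximate optimal cycle length T* = {T_star:.3f}, cost = {min(costs):.1f}")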

  6. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.......The promise, which comes along with Building Information Models, is that they are information rich, machine readable and represent the insights of multiple building disciplines within single or linked models. However, this knowledge has to be stated explicitly in order to be understood. Trained...

  7. Simulink Implementation of Indirect Vector Control of Induction Machine Model

    Directory of Open Access Journals (Sweden)

    V. Dhanunjayanaidu

    2014-04-01

    Full Text Available In this paper, a modular Simulink implementation of an induction machine model is described in a step-by-step approach. With the modular system, each block solves one of the model equations; therefore, unlike in black box models, all of the machine parameters are accessible for control and verification purposes. After the implementation, examples are given with the model used in different drive applications, such as open-loop constant V/Hz control and indirect vector control. To implement the induction machine model, the dynamic equivalent circuit of the induction motor is taken and the model equations in flux linkage form are derived. Then the model is implemented in Simulink by transforming three-phase voltages to the d-q frame and the d-q currents back to three phase; it also includes unit vector calculation and the induction machine d-q model. The inputs here are three-phase voltages, load torque and speed of stator, and the outputs are flux linkages and currents, electrical torque and speed of rotor.
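
    One of the building blocks named above, the transformation of three-phase voltages into the d-q frame, can be sketched with the standard amplitude-invariant Park transform; the full flux-linkage machine model is not reproduced here, and the 50 Hz example values are illustrative.

    import numpy as np

    def abc_to_dq(v_a, v_b, v_c, theta):
        # amplitude-invariant Park transform; theta is the rotating-frame angle (rad)
        k = 2.0 / 3.0
        v_d = k * (v_a * np.cos(theta)
                   + v_b * np.cos(theta - 2.0 * np.pi / 3.0)
                   + v_c * np.cos(theta + 2.0 * np.pi / 3.0))
        v_q = -k * (v_a * np.sin(theta)
                    + v_b * np.sin(theta - 2.0 * np.pi / 3.0)
                    + v_c * np.sin(theta + 2.0 * np.pi / 3.0))
        return v_d, v_q

    # balanced 50 Hz set sampled at t = 5 ms, frame aligned with phase a
    t, omega, amp = 0.005, 2.0 * np.pi * 50.0, 230.0 * np.sqrt(2.0)
    va = amp * np.cos(omega * t)
    vb = amp * np.cos(omega * t - 2.0 * np.pi / 3.0)
    vc = amp * np.cos(omega * t + 2.0 * np.pi / 3.0)
    print(abc_to_dq(va, vb, vc, omega * t))   # expected: (~amp, ~0)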

  8. Runtime Optimizations for Tree-Based Machine Learning Models

    NARCIS (Netherlands)

    N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression

  9. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  10. WEB-BASED VIRTUAL CNC MACHINE MODELING AND OPERATION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A CNC simulation system based on the internet for operation training of manufacturing facilities and manufacturing process simulation is proposed. Firstly, the system framework and a rapid modeling method for the CNC machine tool are studied in the virtual environment based on PolyTrans and CAD software. Then, a new method is proposed to enhance and expand the interactive ability of the virtual reality modeling language (VRML) by establishing communication among VRML, Java Applet, JavaScript and HTML, so as to realize virtual operation of the CNC machine tool. Moreover, an algorithm for material removal simulation based on the VRML Z-map is presented. The advantages of this algorithm include a lower memory requirement and much higher computational efficiency. Lastly, the CNC milling machine is taken as an illustrative example for the prototype development in order to validate the feasibility of the proposed approach.

  11. Near-Term Acceleration In The Rate of Temperature Change

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Steven J.; Edmonds, James A.; Hartin, Corinne A.; Mundra, Anupriya; Calvin, Katherine V.

    2015-03-09

    Anthropogenically-driven climate changes, which are expected to impact human and natural systems, are often expressed in terms of global-mean temperature. The rate of climate change over multi-decadal scales is also important, with faster rates of change resulting in less time for human and natural systems to adapt. We find that current trends in greenhouse gas and aerosol emissions are now moving the Earth system into a regime in terms of multi-decadal rates of change that are unprecedented for at least the last 1000 years. The rate of global-mean temperature increase in the CMIP5 archive over 40-year periods increases to 0.25±0.05 (1σ) °C per decade by 2020, an average greater than peak rates of change during the previous 1-2 millennia. Regional rates of change in Europe, North America and the Arctic are higher than the global average. Research on the impacts of such near-term rates of change is urgently needed.
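
    The quantity discussed above, the multi-decadal rate of change, can be sketched as a linear trend over a sliding 40-year window expressed in °C per decade; the synthetic anomaly series below is purely illustrative, not CMIP5 output.

    import numpy as np

    def forty_year_rates(years, temps, window=40):
        # return (centre year, trend in degC per decade) for each 40-year window
        rates = []
        for i in range(len(years) - window + 1):
            y, t = years[i:i + window], temps[i:i + window]
            slope_per_year = np.polyfit(y, t, 1)[0]
            rates.append((int(y[window // 2]), slope_per_year * 10.0))
        return rates

    years = np.arange(1950, 2021)
    temps = 0.015 * (years - 1950) + 0.1 * np.sin((years - 1950) / 7.0)  # toy anomalies
    for yr, rate in forty_year_rates(years, temps)[-3:]:
        print(yr, round(rate, 3), "degC per decade")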

  12. Rover/NERVA-derived near-term nuclear propulsion

    Science.gov (United States)

    FY-92 accomplishments centered on conceptual design and analyses for 25, 50, and 75 K engines with emphasis on the 50 K engine. During the first period of performance, flow and energy balances were prepared for each of these configurations and thrust-to-weight values were estimated. A review of fuel technology and key data from the Rover/NERVA program established a baseline for proven reactor performance and areas of enhancement to meet near-term goals. Studies were performed of the criticality and temperature profiles for probable fuel and moderator loadings for the three engine sizes, with a more detailed analysis of the 50 K size. During the second period of performance, analyses of the 50 K engine continued. A chamber/nozzle contour was selected and heat transfer and fatigue analyses were performed for likely construction materials. Reactor analyses were performed to determine component radiation heating rates, reactor radiation fields, water immersion poisoning requirements, temperature limits for restartability, and a tie-tube thermal analysis. Finally, a brief assessment of key enabling technologies was made, with a view toward identifying development issues and identification of the critical path toward achieving engine qualification within 10 years.

  13. Developing hydrogen infrastructure through near-term intermediate technology

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, D.M.; Checkel, M.D.; Koch, C.R. [Alberta Univ., Edmonton, AB (Canada). Dept. of Mechanical Engineering

    2003-07-01

    The first step toward widespread application of hydrogen-powered vehicles is the development of a vehicular hydrogen fuelling infrastructure. This paper proposes the use of Dynamic Hydrogen Multifuel (DHM) as an intermediate technology to support and stimulate the development of the hydrogen infrastructure. The DHM technology is designed to optimize emissions and overall fuel economy in a spark ignition engine via an engine control and fuel system which utilizes flexible blending of hydrogen and another fuel. Cold starting or idling on pure hydrogen are techniques that can be used to enhance emissions and fuel economy. The lean operation and exhaust gas recirculation limits can be extended by blending hydrogen, while normal engine power and vehicle range are maintained using conventional fuel. If the hydrogen infrastructure is to be developed further, one must understand the factors that ensure the successful implementation of current hydrogen filling stations. Important lessons on the development of alternative fuel infrastructure derived from natural gas were discussed in this paper. The authors explained why Argentina was successful in converting vehicles to natural gas while similar attempts failed in both Canada and New Zealand. The authors suggest that one solution may be to introduce a catalytic, near-term technology to provide fuel station demand and operating experience. 18 refs.

  14. Support vector machine-based multi-model predictive control

    Institute of Scientific and Technical Information of China (English)

    Zhejing BA; Youxian SUN

    2008-01-01

    In this paper, a support vector machine-based multi-model predictive control is proposed, in which SVM classification combines well with SVM regression. At first, each working environment is modeled by SVM regression and the support vector machine network-based model predictive control (SVMN-MPC) algorithm corresponding to each environment is developed; then a multi-class SVM model is established to recognize multiple operating conditions. As for control, the current environment is identified by the multi-class SVM model and the corresponding SVMN-MPC controller is then activated at each sampling instant. The proposed modeling, switching and controller design is demonstrated in simulation results.
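
    The switching logic described above can be sketched as follows: an SVM classifier recognizes the current operating condition and the SVR model trained for that regime supplies the prediction. The toy two-regime data are invented, and the predictive-control optimization itself is omitted.

    import numpy as np
    from sklearn.svm import SVC, SVR

    rng = np.random.default_rng(1)
    # two operating regimes with different input-output behaviour (synthetic)
    X0, X1 = rng.uniform(0.0, 1.0, (100, 2)), rng.uniform(2.0, 3.0, (100, 2))
    y0, y1 = X0 @ np.array([1.0, -0.5]), X1 @ np.array([0.2, 0.8]) + 1.0

    regime_clf = SVC().fit(np.vstack([X0, X1]), [0] * 100 + [1] * 100)
    regressors = {0: SVR().fit(X0, y0), 1: SVR().fit(X1, y1)}

    x_now = np.array([[2.4, 2.1]])                  # current measurement
    regime = int(regime_clf.predict(x_now)[0])      # identify the operating condition
    y_hat = regressors[regime].predict(x_now)[0]    # use the matching SVR model
    print("regime:", regime, "predicted output:", round(float(y_hat), 3))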

  15. Model checking abstract state machines with answer set programming

    OpenAIRE

    2006-01-01

    Answer Set Programming (ASP) is a logic programming paradigm that has been shown to be a useful tool in various application areas due to its expressive modelling language. These application areas include Bounded Model Checking (BMC). BMC is a verification technique that is recognized for its strong ability to find errors in computer systems. To apply BMC, a system needs to be modelled in a formal specification language, such as the widely used formalism of Abstract State Machines (ASMs). In ...

  16. Support Vector Machine-Based Nonlinear System Modeling and Control

    Institute of Scientific and Technical Information of China (English)

    张浩然; 韩正之; 冯瑞; 于志强

    2003-01-01

    This paper provides an introduction to the support vector machine, a new kernel-based technique introduced in statistical learning theory and structural risk minimization, and then presents a modeling-control framework based on SVM. Finally, a numerical experiment is presented to demonstrate the correctness and effectiveness of the proposed approach.

  17. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive w...

  18. MULTI SUPPORT VECTOR MACHINES DECISION MODEL AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    阎威武; 陈治纲; 邵惠鹤

    2002-01-01

    Support Vector Machines (SVM) is a powerful machine learning method developed from statistical learning theory and is currently an active field in artificial intelligence technology. SVM is sensitive to noise vectors near the hyperplane since it is determined only by few support vectors. In this paper, the Multi SVM Decision Model (MSDM) is proposed. MSDM consists of multiple SVMs and makes decisions based on synthetic information from the multiple SVMs. MSDM is applied to heart disease diagnosis based on a UCI benchmark data set. MSDM somewhat improves the robustness of the decision system.

  19. Machine learning models in breast cancer survival prediction.

    Science.gov (United States)

    Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin

    2016-01-01

    Breast cancer is one of the most common cancers with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is the combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), which is a rule-based classification model, was the best model with the highest level of
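
    The evaluation protocol described above (several classifiers compared via 10-fold cross-validation) can be sketched with scikit-learn; synthetic data stands in for the 900-patient records, only three of the listed models are shown, and accuracy is the single reported score.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    # synthetic stand-in: 900 cases, 8 attributes, ~11% positive (deceased) class
    X, y = make_classification(n_samples=900, n_features=8, weights=[0.89, 0.11],
                               random_state=0)
    models = {
        "TRF": RandomForestClassifier(random_state=0),
        "NB": GaussianNB(),
        "1NN": KNeighborsClassifier(n_neighbors=1),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
        print(f"{name}: mean 10-fold accuracy = {acc:.3f}")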

  20. A general thermal model of machine tool spindle

    Directory of Open Access Journals (Sweden)

    Yanfang Dong

    2017-01-01

    Full Text Available As the core component of a machine tool, the spindle and its thermal characteristics have a significant influence on machine tool running status. Lack of an accurate model of the spindle system, particularly a model of the load–deformation coefficient between the bearing rolling elements and rings, severely limits the thermal error analytic precision of the spindle. In this article, bearing internal loads, especially the functional relationships between the principal curvature difference F(ρ) and the auxiliary parameter nδ, semi-major axis a, and semi-minor axis b, have been determined; furthermore, the heat generation is calculated with high precision by combining the heat sinks in the spindle system; finally, an accurate thermal model of the spindle was established. Moreover, a conventional spindle with embedded fiber Bragg grating temperature sensors has been developed. Comparing the experimental results with the simulation indicates that the model has good accuracy, which verifies the reliability of the modeling process.

  1. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    In order to further study the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which can be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities of the engine. An analytical model which encompasses the critical flaws of the regenerator and furthermore the effectiveness of the heat exchangers has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  2. Analytical model for Stirling cycle machine design

    CERN Document Server

    Formosa, Fabien; 10.1016/j.enconman.2010.02.010

    2013-01-01

    In order to further study the promising free piston Stirling engine architecture, there is a need for an analytical thermodynamic model which can be used in a dynamical analysis for preliminary design. To aim at more realistic values, the models have to take into account the heat losses and irreversibilities of the engine. An analytical model which encompasses the critical flaws of the regenerator and furthermore the effectiveness of the heat exchangers has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined.

  3. Thermal-mechanical modeling of laser ablation hybrid machining

    Science.gov (United States)

    Matin, Mohammad Kaiser

    2001-08-01

    Hard, brittle and wear-resistant materials like ceramics pose a problem when being machined using conventional machining processes. Machining ceramics even with a diamond cutting tool is very difficult and costly. Near net-shape processes, like laser evaporation, produce micro-cracks that require extra finishing. Thus it is anticipated that ceramic machining will have to continue to be explored with new techniques before ceramic materials become commonplace. This numerical investigation presents results from simulations of the thermal and mechanical modeling of simultaneous material removal from hard-to-machine materials using both laser ablation and conventional tool cutting, based on the finite element method. The model is formulated using a two dimensional, planar, computational domain. The simulated process, acronymed LAHM (Laser Ablation Hybrid Machining), uses laser energy for two purposes. The first purpose is to remove the material by ablation. The second purpose is to heat the unremoved material that lies below the ablated material in order to "soften" it. The softened material is then simultaneously removed by conventional machining processes. The complete solution determines the temperature distribution and stress contours within the material and tracks the moving boundary that occurs due to material ablation. The temperature distribution is used to determine the distance below the phase change surface where sufficient "softening" has occurred, so that a cutting tool may be used to remove additional material. The model incorporated for tracking the ablative surface does not assume an isothermal melt phase (e.g. Stefan problem) for laser ablation. Both surface absorption and volume absorption of laser energy as a function of depth have been considered in the models. LAHM, from the thermal and mechanical point of view, is a complex machining process involving large deformations at high strain rates, thermal effects of the laser, removal of

  4. NSTX: Facility/Research Highlights and Near Term Facility Plans

    Energy Technology Data Exchange (ETDEWEB)

    M. Ono

    2008-11-19

    The National Spherical Torus Experiment (NSTX) is a collaborative mega-ampere-class spherical torus research facility with high power heating and current drive systems and the state-of-the-art comprehensive diagnostics. For the 2008 experimental campaign, the high harmonic fast wave (HHFW) heating efficiency in deuterium improved significantly with lithium evaporation and produced a record central Te of 5 keV. The HHFW heating of NBI-heated discharges was also demonstrated for the first time with lithium application. The EBW emission in H-mode was also improved dramatically with lithium which was shown to be attributable to reduced edge collisional absorption. Newly installed FIDA energetic particle diagnostic measured significant transport of energetic ions associated with TAE avalanche as well as n=1 kink activities. A full 75 channel poloidal CHERS system is now operational yielding tantalizing initial results. In the near term, major upgrade activities include a liquid-lithium divertor target to achieve lower collisionality regime, the HHFW antenna upgrades to double its power handling capability in H-mode, and a beam-emission spectroscopy diagnostic to extend the localized turbulence measurements toward the ion gyro-radius scale from the present concentration on the electron gyro-radius scale. For the longer term, a new center stack to significantly expand the plasma operating parameters is planned along with a second NBI system to double the NBI heating and CD power and provide current profile control. These upgrades will enable NSTX to explore fully non-inductive operations over a much expanded plasma parameter space in terms of higher plasma temperature and lower collisionality, thereby significantly reducing the physics parameter gap between the present NSTX and the projected next-step ST experiments.

  5. Prostaglandins for prelabour rupture of membranes at or near term.

    Science.gov (United States)

    Tan, B P; Hannah, M E

    2000-01-01

    Induction of labour after prelabour rupture of membranes may reduce the risk of neonatal infection. However an expectant approach may be less likely to result in caesarean section. The objective of this review was to assess the effects of induction of labour with prostaglandins versus expectant management for prelabour rupture of membranes at or near term. We searched the Cochrane Pregnancy and Childbirth Group trials register. Randomised and quasi-randomised trials comparing early use of prostaglandins (with or without oxytocin) with no early use of prostaglandins in women with spontaneous rupture of membranes before labour, and 34 weeks or more of gestation. Trials were assessed for quality and data were abstracted. Fifteen trials were included. Most were of moderate to good quality. Different forms of prostaglandin preparations were used in these trials and it may be inappropriate to combine their results. Induction of labour by prostaglandins was associated with a decreased risk of chorioamnionitis (odds ratio 0.77, 95% confidence interval 0.61 to 0.97) based on eight trials and admission to neonatal intensive care (odds ratio 0.79, 95% confidence interval 0.66 to 0.94) based on seven trials. No difference was detected for rate of caesarean section, although induction by prostaglandins was associated with more frequent maternal diarrhoea and use of anaesthesia and/or analgesia. Based on one trial, women were more likely to view their care positively if labour was induced with prostaglandins. Induction of labour with prostaglandins appears to decrease the risk of maternal infection (chorioamnionitis) and admission to neonatal intensive care. Induction of labour with prostaglandins does not appear to increase the rate of caesarean section, although it is associated with more frequent maternal diarrhoea and pain relief.

  6. Applying Machine Trust Models to Forensic Investigations

    Science.gov (United States)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  7. "Near-term" Natural Catastrophe Risk Management and Risk Hedging in a Changing Climate

    Science.gov (United States)

    Michel, Gero; Tiampo, Kristy

    2014-05-01

    Competing with analytics - Can the insurance market take advantage of seasonal or "near-term" forecasting and temporal changes in risk? Natural perils (re)insurance has been based on models following climatology, i.e. the long-term "historical" average. This is opposed to considering the "near-term" and forecasting hazard and risk for the seasons or years to come. Variability and short-term changes in risk are deemed abundant for almost all perils. In addition to hydrometeorological perils, whose changes are widely discussed, earthquake activity might also change over various time-scales affected by earlier local (or even global) events, regional changes in the distribution of stresses and strains, and more. Only recently has insurance risk modeling of (stochastic) hurricane-years or extratropical-storm-years started considering our ability to forecast climate variability, herewith taking advantage of apparent correlations between climate indicators and the activity of storm events. Once some of these "near-term measures" were in the market, rating agencies and regulators swiftly adopted these concepts, demanding that companies deploy a selection of more conservative "time-dependent" models. This was despite the fact that the ultimate effect of some of these measures on insurance risk was not well understood. Apparent short-term success over the last years in near-term seasonal hurricane forecasting was brought to a halt in 2013, when these models failed to forecast the exceptional shortage of hurricanes, herewith contradicting an active-year forecast. The focus of earthquake forecasting has in addition been mostly on high rather than low temporal and regional activity, despite the fact that avoiding losses does not by itself create a product. This presentation sheds light on new risk management concepts for over-regional and global (re)insurance portfolios that take advantage of forecasting changes in risk. The presentation focuses on the "upside" and on new opportunities

  8. Generalized Quadratic Linearization of Machine Models

    OpenAIRE

    Parvathy Ayalur Krishnamoorthy; Kamaraj Vijayarajan; Devanathan Rajagopalan

    2011-01-01

    In the exact linearization of involutive nonlinear system models, the issue of singularity needs to be addressed in practical applications. The approximate linearization technique due to Krener, based on Taylor series expansion, apart from being applicable to noninvolutive systems, allows the singularity issue to be circumvented. But approximate linearization, while removing terms up to certain order, also introduces terms of higher order than those removed into the system. To overcome th...

  9. Simulation Modeling and Analysis of Operator-Machine Ratio

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on a simulation model of a semiconductor manufacturer, operator-machine ratio (OMR) analysis is made using work study and time study. Through sensitivity analysis, it is found that labor utilization decreases with the increase of lot size. Meanwhile, it is identified that the OMR for this company should be improved from 1:3 to 1:5. An application result shows that the proposed model can effectively improve the OMR by 33%.

  10. Machine Learning and Cosmological Simulations I: Semi-Analytical Models

    OpenAIRE

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2015-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analyzing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various so...

  11. Knowledge in formation: The machine-modeled frame of mind

    Energy Technology Data Exchange (ETDEWEB)

    Shore, B.

    1996-12-31

    Artificial Intelligence researchers have used the digital computer as a model for the human mind in two different ways. Most obviously, the computer has been used as a tool on which simulations of thinking-as-programs are developed and tested. Less obvious, but of great significance, is the use of the computer as a conceptual model for the human mind. This essay traces the sources of this machine-modeled conception of cognition in a great variety of social institutions and everyday experience, treating them as "cultural models" which have contributed to the naturalness of the mind-as-machine paradigm for many Americans. The roots of these models antedate the actual development of modern computers, and take the form of a "modularity schema" that has shaped the cultural and cognitive landscape of modernity. The essay concludes with a consideration of some of the cognitive consequences of this extension of machine logic into modern life, and proposes an important distinction between information processing models of thought and meaning-making in how human cognition is conceptualized.

  12. The rise of machine consciousness: studying consciousness with computational models.

    Science.gov (United States)

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises

  13. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

    Directory of Open Access Journals (Sweden)

    Jess Bier

    2016-12-01

    Full Text Available Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent of this reimagining: Agent-Based models (ABMs. ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.

  14. Machine Learning Based Statistical Prediction Model for Improving Performance of Live Virtual Machine Migration

    Directory of Open Access Journals (Sweden)

    Minal Patel

    2016-01-01

    Full Text Available Services can be delivered anywhere and anytime in cloud computing using virtualization. The main issue in handling virtualized resources is balancing ongoing workloads. The migration of virtual machines has two major techniques: (i) reducing dirty pages using CPU scheduling and (ii) compressing memory pages. The available techniques for live migration are not able to predict dirty pages in advance. In the proposed framework, time series based prediction techniques are developed using historical analysis of past data. The time series is generated from the iterative transfer of memory pages. Here, two different regression-based models of the time series are proposed. The first model is developed using a statistical probability based regression model and is based on ARIMA (autoregressive integrated moving average). The second is developed using a statistical learning based regression model and uses SVR (support vector regression). These models are tested on a real data set of Xen to compute downtime, total number of pages transferred, and total migration time. The ARIMA model is able to predict dirty pages with 91.74% accuracy and the SVR model is able to predict dirty pages with 94.61% accuracy, which is higher than ARIMA.
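
    The two forecasting routes named above can be sketched on a toy "dirty pages per iteration" trace: a one-step ARIMA forecast and an SVR fit on lagged values. The ARIMA order, SVR kernel and the trace itself are placeholder assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.svm import SVR
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    # toy live-migration trace: dirty pages shrink over successive copy iterations
    dirty = 5000.0 * np.exp(-0.15 * np.arange(40)) + rng.normal(0.0, 50.0, 40)

    # (1) statistical probability based model: ARIMA one-step-ahead forecast
    arima_next = ARIMA(dirty, order=(1, 1, 1)).fit().forecast(steps=1)[0]

    # (2) statistical learning based model: SVR on the previous three iterations
    lags = 3
    X = np.array([dirty[i:i + lags] for i in range(len(dirty) - lags)])
    y = dirty[lags:]
    svr_next = SVR(kernel="rbf", C=100.0).fit(X, y).predict(dirty[-lags:].reshape(1, -1))[0]

    print(f"ARIMA forecast: {arima_next:.0f} pages, SVR forecast: {svr_next:.0f} pages")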

  15. Near-term Forecasting of Solar Total and Direct Irradiance for Solar Energy Applications

    Science.gov (United States)

    Long, C. N.; Riihimaki, L. D.; Berg, L. K.

    2012-12-01

    Integration of solar renewable energy into the power grid, like wind energy, is hindered by the variable nature of the solar resource. One challenge of the integration problem for shorter time periods is the phenomenon of "ramping events", where the electrical output of the solar power system increases or decreases significantly and rapidly over periods of minutes or less. Advance warning, of even just a few minutes, allows power system operators to compensate for the ramping. However, short-term prediction on such local "point" scales is beyond the abilities of typical model-based weather forecasting. Use of surface-based solar radiation measurements has been recognized as a likely solution for providing input for near-term (5 to 30 minute) forecasts of solar energy availability and variability. However, it must be noted that while fixed-orientation photovoltaic panel systems use the total (global) downwelling solar radiation, tracking photovoltaic and solar concentrator systems use only the direct normal component of the solar radiation. Thus even accurate near-term forecasts of total solar radiation will under many circumstances include inherent inaccuracies with respect to tracking systems due to lack of information on the direct component of the solar radiation. We will present examples and statistical analyses of solar radiation partitioning showing the differences in the behavior of the total/direct radiation with respect to the near-term forecast issue. We will present an overview of the possibility of using a network of unique new commercially available total/diffuse radiometers in conjunction with a near-real-time adaptation of the Shortwave Radiative Flux Analysis methodology (Long and Ackerman, 2000; Long et al., 2006). The results are used, in conjunction with persistence and tendency forecast techniques, to provide more accurate near-term forecasts of cloudiness, and both total and direct normal solar irradiance availability and

  16. Prediction of near-term breast cancer risk using a Bayesian belief network

    Science.gov (United States)

    Zheng, Bin; Ramalingam, Pandiyarajan; Hariharan, Harishwaran; Leader, Joseph K.; Gur, David

    2013-03-01

    Accurately predicting near-term breast cancer risk is an important prerequisite for establishing an optimal personalized breast cancer screening paradigm. In previous studies, we investigated and tested the feasibility of developing a unique near-term breast cancer risk prediction model based on a new risk factor associated with bilateral mammographic density asymmetry between the left and right breasts of a woman using a single feature. In this study we developed a multi-feature based Bayesian belief network (BBN) that combines bilateral mammographic density asymmetry with three other popular risk factors, namely (1) age, (2) family history, and (3) average breast density, to further increase the discriminatory power of our cancer risk model. A dataset involving "prior" negative mammography examinations of 348 women was used in the study. Among these women, 174 had breast cancer detected and verified in the next sequential screening examinations, and 174 remained negative (cancer-free). A BBN was applied to predict the risk of each woman having cancer detected six to 18 months later, following the negative screening mammography. The prediction results were compared with those using single features. The prediction accuracy was significantly increased when using the BBN. The area under the ROC curve increased from an AUC of 0.70 to 0.84, a statistically significant improvement, indicating that the multi-feature BBN better predicts near-term breast cancer risk than a single feature.
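
    A deliberately simplified sketch of combining the four risk factors into a posterior risk: a full BBN encodes dependencies among factors, but here they are treated as conditionally independent (naive Bayes), and every likelihood value is invented for illustration; the 0.5 prior mirrors the balanced 174/174 case-control design.

    def posterior_risk(prior, likelihoods_cancer, likelihoods_no_cancer):
        # multiply per-factor likelihoods under each class, then normalize (Bayes' rule)
        p_c, p_n = prior, 1.0 - prior
        for lc, ln in zip(likelihoods_cancer, likelihoods_no_cancer):
            p_c, p_n = p_c * lc, p_n * ln
        return p_c / (p_c + p_n)

    # factors: bilateral density asymmetry (high), age (>60), family history (yes),
    # average breast density (high) -- all likelihood values are hypothetical
    print(posterior_risk(prior=0.5,
                         likelihoods_cancer=[0.70, 0.60, 0.30, 0.55],
                         likelihoods_no_cancer=[0.30, 0.50, 0.15, 0.45]))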

  17. Performance model for Micro Tunnelling Boring Machines (MTBM)

    Directory of Open Access Journals (Sweden)

    J. Gallo

    2017-06-01

    Full Text Available Since the last decades of the 20th century, various formulae have been proposed to estimate the performance in tunnelling of disc cutters, mainly employed in Tunnelling Boring Machines (TBM). Nevertheless, their suitability has not been verified in Micro Tunnelling Boring Machines (MTBM), with smaller excavation diameters, between 1,000 and 2,500 mm, and smaller cutter tools, where parameters like joint spacing may have a different influence. This paper analyzes the models proposed for TBM. After having observed very low correlation with data obtained in 15 microtunnels, a new performance model is developed, adapted to the geomechanical data available in this type of work. Moreover, a method is proposed to calculate the total number of hours necessary to carry out microtunnels, including all the tasks of the excavation cycle as well as installation and uninstallation.

  18. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics

    Science.gov (United States)

    Hristopulos, Dionissios T.

    2015-12-01

    Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
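
    In the spirit of the description above (but not the paper's exact energy functional), the sketch below builds a sparse, Laplacian-like precision matrix from a compactly truncated kernel and predicts unsampled values with the Gaussian Markov random field conditional mean mu_u = -J_uu^{-1} J_uo x_o; the field, bandwidth and ridge term are illustrative assumptions.

    import numpy as np

    def local_precision(coords, bandwidth, ridge=1e-3):
        # pairwise kernel weights, truncated beyond 3 bandwidths so the matrix stays sparse
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w = np.exp(-(d / bandwidth) ** 2) * (d < 3.0 * bandwidth)
        np.fill_diagonal(w, 0.0)
        # graph-Laplacian form plus a small ridge keeps the precision matrix positive definite
        return np.diag(w.sum(axis=1)) - w + ridge * np.eye(len(coords))

    rng = np.random.default_rng(3)
    coords = rng.uniform(0.0, 10.0, (60, 2))
    values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=60)   # toy (roughly zero-mean) field

    obs, unobs = np.arange(50), np.arange(50, 60)
    J = local_precision(coords, bandwidth=1.5)
    J_uu, J_uo = J[np.ix_(unobs, unobs)], J[np.ix_(unobs, obs)]
    predicted = -np.linalg.solve(J_uu, J_uo @ values[obs])      # conditional mean at unsampled points
    print(np.round(predicted, 2))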

  19. Optimal Liner Material for Near Term Magnetized Liner Fusion Experiments

    Science.gov (United States)

    Slutz, Stephen

    2012-10-01

    Substantial fusion yields are predicted with existing pulsed power machines driving cylindrical liner implosions with preheated and magnetized deuterium-tritium [S.A. Slutz et al Phys. Plasmas 17, 056303 (2010)]. Experiments are planned using the Z accelerator to drive these implosions. However, the peak current, the laser heating energy, and the applied magnetic field will be less than optimal. We present simulations which show, that under these conditions, the yield can be improved significantly by decreasing the density of the liner material, e.g. Lithium substituted for Beryllium. Furthermore, the simulations show that decreasing the liner density allows the use of very low aspect ratio (R/δR) liners, while still obtaining interesting yields. Low aspect ratio liners should be more robust to the Rayleigh-Taylor instability.

  20. Numerical modeling and optimization of machining duplex stainless steels

    Directory of Open Access Journals (Sweden)

    Rastee D. Koyee

    2015-01-01

    Full Text Available The shortcomings of analytical and empirical machining models have to be overcome if industry demands are to be fulfilled. Three-dimensional finite element modeling (FEM) introduces an attractive alternative to bridge the gap between purely empirical and fundamental scientific approaches and to fulfill industry needs. However, the challenging aspects which hinder the successful adoption of FEM in the machining sector of manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model elastic–viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft–Latham critical damage value, percentage reduction in flow stress, and Taylor–Quinney coefficient, Taguchi-VIKOR coupled with Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between the experimentally measured performances such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology in defining the optimal set of controllable parameters for turning simulations is introduced, but also

  1. Quantum Machine and SR Approach: a Unified Model

    CERN Document Server

    Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro

    2005-01-01

    The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasonings and as a proof of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.

  2. Modeling of autoresonant control of a parametrically excited screen machine

    Science.gov (United States)

    Abolfazl Zahedi, S.; Babitsky, Vladimir

    2016-10-01

    Modelling of the nonlinear dynamic response of a screen machine, described by coupled nonlinear differential equations and excited by an autoresonant control system, is presented. The displacement signal of the screen is fed back directly to the screen excitation by means of positive feedback. Negative feedback is used to keep the screen amplitude response within the expected range. The screen is expected to vibrate in a parametric resonance, and the excitation, stabilization and control response of the system are studied in the stable mode. Autoresonant control is thoroughly investigated and output tracking is reported. The control developed provides self-tuning and self-adaptation mechanisms that allow the screen machine to maintain a parametric resonant mode of oscillation under a wide range of uncertainty in mass and viscosity.

  3. Modeling Image Structure with Factorized Phase-Coupled Boltzmann Machines

    CERN Document Server

    Cadieu, Charles F

    2010-01-01

    We describe a model for capturing the statistical structure of local amplitude and local spatial phase in natural images. The model is based on a recently developed, factorized third-order Boltzmann machine that was shown to be effective at capturing higher-order structure in images by modeling dependencies among squared filter outputs (Ranzato and Hinton, 2010). Here, we extend this model to $L_p$-spherically symmetric subspaces. In order to model local amplitude and phase structure in images, we focus on the case of two dimensional subspaces, and the $L_2$-norm. When trained on natural images the model learns subspaces resembling quadrature-pair Gabor filters. We then introduce an additional set of hidden units that model the dependencies among subspace phases. These hidden units form a combinatorial mixture of phase coupling distributions, concentrated in the sum and difference of phase pairs. When adapted to natural images, these distributions capture local spatial phase structure in natural images.

  4. Kinematic modelling of a 3-axis NC machine tool in linear and circular interpolation

    CERN Document Server

    Pessoles, Xavier; Rubio, Walter; 10.1007/s00170-009-2236-z

    2010-01-01

    Machining time is a major performance criterion when it comes to high-speed machining. CAM software can help in estimating that time for a given strategy. But in practice, CAM-programmed feed rates are rarely achieved, especially where complex surface finishing is concerned. This means that machining time forecasts are often more than one step removed from reality. The reason behind this is that CAM routines do not take either the dynamic performances of the machines or their specific machining tolerances into account. The present article seeks to improve simulation of high-speed NC machine dynamic behaviour and machining time prediction, offering two models. The first contributes through enhanced simulation of three-axis paths in linear and circular interpolation, taking high-speed machine accelerations and jerks into account. The second model allows transition passages between blocks to be integrated in the simulation by adding in a polynomial transition path that caters for the true machining environment t...

  5. Event-driven process execution model for process virtual machine

    Institute of Scientific and Technical Information of China (English)

    WU Dong-yao; WEI Jun; GAO Chu-shu; DOU Wen-shen

    2012-01-01

    Current orchestration and choreography process engines only serve dedicated process languages. To solve this problem, an Event-driven Process Execution Model (EPEM) was developed. Formalization and mapping principles of the model were presented to guarantee the correctness and efficiency of process transformation. As a case study, the EPEM descriptions of the Web Services Business Process Execution Language (WS-BPEL) were represented, and a Process Virtual Machine (PVM), OncePVM, was implemented in compliance with the EPEM.

  6. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration procedure are presented in this paper. The error model is built on multi-body system theory and characteristic matrices, which establish the theoretical relationship between the cutting tool and the workpiece. Accuracy calibration is difficult to achieve, but with a laser-based approach (laser interferometer and laser tracker) the errors can be measured accurately, which benefits later compensation.

  7. Artificial emotional model based on finite state machine

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-mei; WU Wei-guo

    2008-01-01

    According to basic emotional theory, an artificial emotional model based on the finite state machine (FSM) is presented. In the finite state machine model of emotion, the emotional space includes the basic emotional space and the multiple emotional spaces. The emotion-switching diagram is defined and the transition function is developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform. The model includes three subsystems: the input one, the emotion one and the behavior one. In the emotional subsystem, the responses of different personalities to external stimuli are described by defining a personal space. This model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to other basic emotions. The simulation results are shown to correspond to the emotion-switching behaviour of human beings.
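
    The following hedged sketch illustrates the general flavour of an FSM emotion model with Markov-chain switching; the states, stimuli and transition probabilities are invented for illustration and are not taken from the paper (which uses Stateflow/Simulink and adds linear interpolation and personality spaces).

        # Hedged sketch: emotion switching as a finite state machine whose transitions are
        # drawn from a Markov chain conditioned on an external stimulus. All states,
        # stimuli and probabilities below are illustrative placeholders.
        import random

        STATES = ["neutral", "happy", "sad", "angry"]

        # TRANSITIONS[stimulus][current_state] -> probabilities over STATES
        TRANSITIONS = {
            "praise": {"neutral": [0.2, 0.7, 0.05, 0.05], "happy": [0.1, 0.85, 0.03, 0.02],
                       "sad":     [0.3, 0.5, 0.15, 0.05], "angry": [0.3, 0.4, 0.1, 0.2]},
            "insult": {"neutral": [0.2, 0.05, 0.25, 0.5], "happy": [0.3, 0.3, 0.15, 0.25],
                       "sad":     [0.1, 0.02, 0.48, 0.4], "angry": [0.05, 0.02, 0.13, 0.8]},
        }

        class EmotionFSM:
            def __init__(self, state="neutral"):
                self.state = state

            def step(self, stimulus):
                probs = TRANSITIONS[stimulus][self.state]
                self.state = random.choices(STATES, weights=probs)[0]
                return self.state

        fsm = EmotionFSM()
        for stimulus in ["praise", "insult", "insult"]:
            print(stimulus, "->", fsm.step(stimulus))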

  8. TYRE DYNAMICS MODELLING OF VEHICLE BASED ON SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shuibo; TANG Houjun; HAN Zhengzhi; ZHANG Yong

    2006-01-01

    Various methods of tyre modelling are implemented, ranging from purely theoretical to empirical or semi-empirical models based on experimental results. A new way of representing tyre data obtained from measurements is presented via support vector machines (SVMs). The feasibility of applying SVMs to steady-state tyre modelling is investigated by comparison with a three-layer backpropagation (BP) neural network at pure slip and combined slip. The results indicate that SVMs outperform the BP neural network in modelling the tyre characteristics, with better generalization performance. The SVM tyre model is implemented in an 8-DOF vehicle model for vehicle dynamics simulation, with the PAC 2002 Magic Formula as reference. The SVM tyre model can be a competitive and accurate method to model a tyre for vehicle dynamics simulation.
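
    A hedged sketch of SVM-based tyre modelling in the spirit of this record: synthetic slip-angle/lateral-force samples generated from a Magic-Formula-like curve stand in for measured tyre data, and the SVR hyperparameters are illustrative assumptions.

        # Hedged sketch: SVM regression of tyre lateral force versus slip angle.
        # The "measurements" are synthetic; the paper uses real tyre test data.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        slip = np.linspace(-12.0, 12.0, 200)                      # slip angle [deg]
        # Magic-Formula-like curve plus noise as stand-in measurement data
        force = 4000.0 * np.sin(1.3 * np.arctan(0.18 * slip)) + rng.normal(0, 50, slip.size)

        model = SVR(kernel="rbf", C=1e4, epsilon=10.0, gamma=0.1)
        model.fit(slip.reshape(-1, 1), force)

        test = np.array([[-8.0], [0.0], [5.0]])
        print("predicted lateral force [N]:", model.predict(test))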

  9. Near-term electric-vehicle program. Phase II. Mid-term review summary report

    Energy Technology Data Exchange (ETDEWEB)

    1978-07-27

    The general objective of the Near-Term Electric Vehicle Program is to confirm that, in fact, the complete spectrum of requirements placed on the automobile (e.g., safety, producibility, utility, etc.) can still be satisfied if electric power train concepts are incorporated in lieu of contemporary power train concepts, and that the resultant set of vehicle characteristics are mutually compatible, technologically achievable, and economically achievable. The focus of the approach to meeting this general objective involves the design, development, and fabrication of complete electric vehicles incorporating, where necessary, extensive technological advancements. A mid-term summary is presented of Phase II which is a continuation of the preliminary design study conducted in Phase I of the program. Information is included on vehicle performance and performance simulation models; battery subsystems; control equipment; power systems; vehicle design and components for suspension, steering, and braking; scale model testing; structural analysis; and vehicle dynamics analysis. (LCL)

  10. Machine learning and docking models for Mycobacterium tuberculosis topoisomerase I.

    Science.gov (United States)

    Ekins, Sean; Godbole, Adwait Anand; Kéri, György; Orfi, Lászlo; Pato, János; Bhat, Rajeshwari Subray; Verma, Rinkee; Bradley, Erin K; Nagaraja, Valakunja

    2017-03-01

    There is a shortage of compounds that are directed towards new targets apart from those targeted by the FDA approved drugs used against Mycobacterium tuberculosis. Topoisomerase I (Mttopo I) is an essential mycobacterial enzyme and a promising target in this regard. However, it suffers from a shortage of known inhibitors. We have previously used computational approaches such as homology modeling and docking to propose 38 FDA approved drugs for testing and identified several active molecules. To follow on from this, we now describe the in vitro testing of a library of 639 compounds. These data were used to create machine learning models for Mttopo I which were further validated. The combined Mttopo I Bayesian model had a 5-fold cross-validation receiver operating characteristic of 0.74 and sensitivity, specificity and concordance values above 0.76 and was used to select commercially available compounds for testing in vitro. The recently described crystal structure of Mttopo I was also compared with the previously described homology model and then used to dock the Mttopo I actives norclomipramine and imipramine. In summary, we describe our efforts to identify small molecule inhibitors of Mttopo I using a combination of machine learning modeling and docking studies in conjunction with screening of the selected molecules for enzyme inhibition. We demonstrate the experimental inhibition of Mttopo I by small molecule inhibitors and show that the enzyme can be readily targeted for lead molecule development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often extends to on the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
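
    The sketch below illustrates, under stated assumptions, the surrogate ("agent") idea: a fast learned model trained on parameter-to-energy-use samples replaces repeated simulator runs during calibration. The three input parameters, the analytic stand-in for EnergyPlus, and the measured value are all invented for illustration.

        # Hedged sketch: training a fast surrogate on parameter -> energy-use pairs
        # so that calibration no longer needs a full EnergyPlus run per candidate.
        # The cheap analytic stand-in below replaces the actual simulations.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        n = 2000
        # three illustrative inputs: insulation R-value, infiltration rate, setpoint [C]
        X = rng.uniform([1.0, 0.1, 18.0], [6.0, 1.0, 26.0], size=(n, 3))
        annual_kwh = 20000 / X[:, 0] + 5000 * X[:, 1] + 300 * (22 - X[:, 2]) ** 2 \
                     + rng.normal(0, 200, n)                       # stand-in for EnergyPlus output

        surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, annual_kwh)

        # Calibration step: pick the sampled parameter set whose surrogate prediction
        # best matches a measured (here, made-up) utility-bill total.
        measured_kwh = 9500.0
        candidates = rng.uniform([1.0, 0.1, 18.0], [6.0, 1.0, 26.0], size=(50000, 3))
        best = candidates[np.argmin(np.abs(surrogate.predict(candidates) - measured_kwh))]
        print("calibrated parameters (approx.):", best)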

  12. Using machine learning to model dose-response relationships.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R; Nallamothu, Brahmajee K

    2016-12-01

    Establishing the relationship between various doses of an exposure and a response variable is integral to many studies in health care. Linear parametric models, widely used for estimating dose-response relationships, have several limitations. This paper employs the optimal discriminant analysis (ODA) machine-learning algorithm to determine the degree to which exposure dose can be distinguished based on the distribution of the response variable. By framing the dose-response relationship as a classification problem, machine learning can provide the same functionality as conventional models, but can additionally make individual-level predictions, which may be helpful in practical applications like establishing responsiveness to prescribed drug regimens. Using data from a study measuring the responses of blood flow in the forearm to the intra-arterial administration of isoproterenol (separately for 9 black and 13 white men, and pooled), we compare the results estimated from a generalized estimating equations (GEE) model with those estimated using ODA. Generalized estimating equations and ODA both identified many statistically significant dose-response relationships, separately by race and for pooled data. Post hoc comparisons between doses indicated ODA (based on exact P values) was consistently more conservative than GEE (based on estimated P values). Compared with ODA, GEE produced twice as many instances of paradoxical confounding (findings from analysis of pooled data that are inconsistent with findings from analyses stratified by race). Given its unique advantages and greater analytic flexibility, maximum-accuracy machine-learning methods like ODA should be considered as the primary analytic approach in dose-response applications. © 2016 John Wiley & Sons, Ltd.

  13. A mechanical model of an axial piston machine

    OpenAIRE

    Löfstrand Grip, Rasmus

    2009-01-01

    A mechanical model of an axial piston-type machine with a so-called wobble plate and Z-shaft mechanism is presented. The overall aim is to design and construct an oil-free piston expander demonstrator as a first step to realizing an advanced and compact small-scale steam engine system. The benefits of a small steam engine are negligible NOx emissions (due to continuous, low-temperature combustion), no gearbox needed, fuel flexibility (e.g., can run on biofuel and solar), high part-load effici...

  14. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement a more effective fault diagnosis and condi

  15. MODEL RESEARCH OF THE ACTIVE VIBROISOLATION OF MACHINE CABS

    Directory of Open Access Journals (Sweden)

    Jerzy MARGIELEWICZ

    2014-03-01

    Full Text Available The study carried out computer simulations of a mechatronic model of a bridge crane, intended for theoretical evaluation of the possibility of eliminating the mechanical vibrations affecting the cab of the machine operator. The model studies used fixed-value control, with the vertical displacement of the cab selected as the controlled variable. The research model also included a rheological model of the operator's body. We examined four overhead cranes with a lifting capacity of 50 t, classified, in accordance with the European Union directive concerning the design of cranes, into the four HC stiffness classes. The active vibration isolation system, in which two negative feedback loops are distinguished, eliminates very well the mechanical vibration transmitted to the operator.

  16. Support Vector Machine active learning for 3D model retrieval

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we present a novel Support Vector Machine active learning algorithm for effective 3D model retrieval using the concept of relevance feedback. The proposed method learns from the most informative objects which are marked by the user, and then creates a boundary separating the relevant models from irrelevant ones. What it needs is only a small number of 3D models labelled by the user. It can grasp the user's semantic knowledge rapidly and accurately. Experimental results showed that the proposed algorithm significantly improves the retrieval effectiveness. Compared with four state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval performance after no more than two rounds of relevance feedback.

  17. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and of its melting. The purposeful character of the learning process for gradient boosting models, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
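
    A minimal, hedged sketch of gradient boosting over decision trees for snow-depth modelling; the synthetic meteorological series below stands in for the Col de Porte, Sodankyla, and Snoqualmie Pass observations, and the feature set and hyperparameters are illustrative.

        # Hedged sketch: gradient boosting over decision trees for daily snow-depth modelling.
        # Synthetic meteorological inputs replace the station data used in the study.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n_days = 3000
        temp = rng.normal(-2.0, 8.0, n_days)            # daily mean air temperature [C]
        precip = rng.gamma(1.5, 2.0, n_days)            # daily precipitation [mm]
        doy = rng.integers(1, 366, n_days)              # day of year
        snow_depth = np.maximum(0.0, 40 - 2.5 * temp + 1.2 * precip * (temp < 0)
                                - 0.05 * np.abs(doy - 200) + rng.normal(0, 5, n_days))

        X = np.column_stack([temp, precip, doy])
        X_tr, X_te, y_tr, y_te = train_test_split(X, snow_depth, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
        model.fit(X_tr, y_tr)
        print("R^2 on held-out days:", round(model.score(X_te, y_te), 3))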

  18. Machine Learning and Cosmological Simulations I: Semi-Analytical Models

    CERN Document Server

    Kamdar, Harshil M; Brunner, Robert J

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analyzing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated machine learning algorithms (k-Nearest Neighbors, decision trees, random forests and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of $M>10^{12} M_{\\odot}$ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore...

  19. Applications and modelling of bulk HTSs in brushless ac machines

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, G.J. [Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ (United Kingdom). E-mail: gary.barnes at eng.ox.ac.uk; McCulloch, M.D.; Dew-Hughes, D. [Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ (United Kingdom)

    2000-06-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation. (author)

  20. Applications and modelling of bulk HTSs in brushless ac machines

    Science.gov (United States)

    Barnes, G. J.; McCulloch, M. D.; Dew-Hughes, D.

    2000-06-01

    The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation.

  1. Implications of weak near-term climate policies on long-term climate mitigation pathways

    Energy Technology Data Exchange (ETDEWEB)

    Luderer, Gunnar; Bertram, Christoph; Calvin, Katherine V.; De Cian, Enrica; Kriegler, Elmar

    2016-05-09

    While the international community has set a target to limit global warming to no more than 2°C above pre-industrial levels, only a few concrete climate policies and measures to reduce greenhouse gas (GHG) emissions have been implemented. We use a set of three global integrated assessment models to analyze the implications of current climate policies on long-term mitigation targets. We define a weak-policy baseline scenario, which extrapolates the current policy environment by assuming that the global climate regime remains fragmented and that emission reduction efforts remain unambitious in most of the world’s regions. In this scenario, GHG concentrations stabilize at approximately 650 ppm CO2e, which clearly falls short of the international community’s long-term climate target. We investigate the cost and achievability of the stabilization of atmospheric GHG concentrations at 450 ppm CO2e by 2100, if countries follow the weak policy pathway until 2020 or 2030, before global cooperative action is taken to pursue the long-term mitigation target. Despite weak near-term action, a 450 ppm CO2e target is achievable in all the models. However, we find that a deferral of ambitious action exacerbates the challenges of low stabilization. Specifically, weak near-term action leads to (a) higher temporary overshooting of radiative forcing, (b) faster and more aggressive transformations of energy systems after target adoption, (c) more stranded investments in fossil-based capacities, and (d) higher long-term mitigation costs and carbon prices.

  2. Long-term perspective underscores need for stronger near-term policies on climate change

    Science.gov (United States)

    Marcott, S. A.; Shakun, J. D.; Clark, P. U.; Mix, A. C.; Pierrehumbert, R.; Goldner, A. P.

    2014-12-01

    Despite scientific consensus that substantial anthropogenic climate change will occur during the 21st century and beyond, the social, economic and political will to address this global challenge remains mired in uncertainty and indecisiveness. One contributor to this situation may be that scientific findings are often couched in technical detail focusing on near-term changes and uncertainties and often lack a relatable long-term context. We argue that viewing near-term changes from a long-term perspective provides a clear demonstration that policy decisions made in the next few decades will affect the Earth's climate, and with it our socio-economic well-being, for the next ten millennia or more. To provide a broader perspective, we present a graphical representation of Earth's long-term climate history that clearly identifies the connection between near-term policy options and the geological scale of future climate change. This long view is based on a combination of recently developed global proxy temperature reconstructions of the last 20,000 years and model projections of surface temperature for the next 10,000 years. Our synthesis places the 20th and 21st centuries, when most emissions are likely to occur, into the context of the last twenty millennia over which time the last Ice Age ended and human civilization developed, and the next ten millennia, over which time the projected impacts will occur. This long-term perspective raises important questions about the most effective adaptation and mitigation policies. For example, although some consider it economically viable to raise seawalls and dikes in response to 21st century sea level change, such a strategy does not account for the need for continuously building much higher defenses in the 22nd century and beyond. Likewise, avoiding tipping points in the climate system in the short term does not necessarily imply that such thresholds will not still be crossed in the more distant future as slower components

  3. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\gtrsim 97\%$ ($\lesssim 3\%$ error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $\eta_0 \approx 0.48^{+0.41}_{-0.23}\,\mathrm{Gpc^{-3}\,yr^{-1}}$ with power-law indices of $\eta_1 \approx 1.7^{+0.6}_{-0.5}$ and $\eta_2 \approx -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \approx 6.8^{+2.8}_{-3.2}$. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.

  4. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\geq 97\%$ ($\leq 3\%$ error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \approx 0.48^{+0.41}_{-0.23}\,\mathrm{Gpc^{-3}\,yr^{-1}}$ with power-law indices of $n_1 \approx 1.7^{+0.6}_{-0.5}$ and $n_2 \approx -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \approx 6.8^{+2.8}_{-3.2}$. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
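
    The hedged sketch below shows the general approach of emulating an expensive trigger algorithm with a classifier and reading off detection efficiency versus redshift; the synthetic GRB sample, features, and detection rule are invented stand-ins for the Lien et al. simulations.

        # Hedged sketch: emulating an expensive trigger algorithm with a classifier and then
        # estimating detection efficiency versus redshift. The simulated GRB sample here is
        # synthetic; the study uses detailed trigger simulations and several model families.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        n = 20000
        z = rng.uniform(0.1, 10.0, n)                         # redshift
        log_lum = rng.normal(51.5, 1.0, n)                    # log10 luminosity (illustrative)
        log_flux = log_lum - 2.0 * np.log10(z + 1.0) - 51.0   # crude stand-in for observed flux
        detected = (log_flux + rng.normal(0, 0.3, n)) > -0.2  # stand-in for the trigger outcome

        X = np.column_stack([z, log_lum, log_flux])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, detected)

        # detection efficiency as a function of redshift, from the trained emulator
        bins = np.linspace(0, 10, 11)
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (z >= lo) & (z < hi)
            eff = clf.predict(X[sel]).mean()
            print(f"z in [{lo:.0f},{hi:.0f}): efficiency ~ {eff:.2f}")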

  5. ANALYTICAL CALCULATION MODEL FOR THE INFLUENCE OF TRANSLATION GUIDE WEAR ON MACHINE TOOL MACHINING ACCURACY

    Directory of Open Access Journals (Sweden)

    Ivona PETRE

    2010-10-01

    Full Text Available The wear of machine tool guides promotes the occurrence of vibrations. As a result of guide wear, the initial trajectory of the cutting tool motion is modified, generating dimensional accuracy discrepancies and deviations of the geometrical shape of the workpieces. As is already known, the wear of mobile and rigid guides is determined by many parameters (pressure, velocity, friction length, lubrication, material). The choice of one or another analytical and/or experimental wear model depends on the working conditions, assuming that the coupling material is known. The present work's goal is to establish an analytical calculation model showing the influence of translation guide wear on the machining accuracy of machine tools.

  6. RM-structure alignment based statistical machine translation model

    Institute of Scientific and Technical Information of China (English)

    Sun Jiadong; Zhao Tiejun

    2008-01-01

    A novel model based on structure alignments is proposed for statistical machine translation in this paper. Meta-structures and sequences of meta-structures for a parse tree are defined. During the translation process, a parse tree is decomposed to deal with structural divergence, and the alignments can be constructed at different levels of recombination of meta-structure (RM). This method can perform structure mapping across sub-tree structures between languages. As a result, we obtain not only the translation for the target language, but also the sequence of meta-structures of its parse tree at the same time. Experiments show that the model, in the framework of a log-linear model, has better generative ability and significantly outperforms Pharaoh, a phrase-based system.

  7. Systematic improvement of molecular representations for machine learning models

    CERN Document Server

    Huang, Bing

    2016-01-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. We introduce a hierarchy of representations based on uniqueness and target similarity criteria. To systematically control target similarity, we rely on interatomic many body expansions including Bonding, Angular, and higher order terms (BA). Addition of higher order contributions systematically increases similarity to the potential energy function as well as predictive accuracy of the resulting ML models. Numerical evidence is presented for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional theory levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After tr...

  8. Modelling of the dynamic behaviour of hard-to-machine alloys

    OpenAIRE

    Bäker M.; Shrot A.; Leemet T.; Hokka M.; Kuokkala V.-T.

    2012-01-01

    Machining of titanium alloys and nickel based superalloys can be difficult due to their excellent mechanical properties combining high strength, ductility, and excellent overall high temperature performance. Machining of these alloys can, however, be improved by simulating the processes and by optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in the machining proc...

  9. Modeling and Simulation of Process-Machine Interaction in Grinding of Cemented Carbide Indexable Inserts

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2015-01-01

    Full Text Available Interaction of process and machine in grinding of hard and brittle materials such as cemented carbide may cause dynamic instability of the machining process resulting in machining errors and a decrease in productivity. Commonly, the process and machine tools were dealt with separately, which does not take into consideration the mutual interaction between the two subsystems and thus cannot represent the real cutting operations. This paper proposes a method of modeling and simulation to understand well the process-machine interaction in grinding process of cemented carbide indexable inserts. First, a virtual grinding wheel model is built by considering the random nature of abrasive grains and a kinematic-geometrical simulation is adopted to describe the grinding process. Then, a wheel-spindle model is simulated by means of the finite element method to represent the machine structure. The characteristic equation of the closed-loop dynamic grinding system is derived to provide a mathematic description of the process-machine interaction. Furthermore, a coupling simulation of grinding wheel-spindle deformations and grinding process force by combining both the process and machine model is developed to investigate the interaction between process and machine. This paper provides an integrated grinding model combining the machine and process models, which can be used to predict process-machine interactions in grinding process.

  10. The Abstract Machine Model for Transaction-based System Control

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.

    2003-01-31

    Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system amounts to the analog of a particle in thermodynamic theory. These permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types, which, when deployed in very large numbers and operated as a system of many interacting markets, yields the stable and optimal control of the thermodynamic system.

  11. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature, and present five factors that influence per capita EF. These factors are: National gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of service to total GDP). A new ecological footprint model based on a support vector machine (SVM), which is a machine-learning method based on the structural risk minimization principle from statistical learning theory was conducted to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error. They were 0.004883 and 0.351078% respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
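
    A hedged sketch of SVM regression of per-capita EF from the five indicators named in this record; the national data are synthetic placeholders (the study uses data for 123 nations), and the kernel and error metrics merely mirror the quantities reported above.

        # Hedged sketch: support vector regression of per-capita ecological footprint from
        # five national indicators. The country table is synthetic and for illustration only.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)
        n = 123
        gdp = rng.uniform(1, 60, n)            # GDP per capita, thousand USD (illustrative)
        urban = rng.uniform(0.2, 0.95, n)      # urbanization fraction
        gini = rng.uniform(0.25, 0.6, n)       # income distribution
        exports = rng.uniform(0.1, 0.8, n)     # exports / GDP
        services = rng.uniform(0.3, 0.8, n)    # services / GDP
        ef = 1.0 + 0.08 * gdp + 2.0 * urban - 1.5 * gini + 0.5 * services + rng.normal(0, 0.2, n)

        X = np.column_stack([gdp, urban, gini, exports, services])
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
        model.fit(X, ef)

        pred = model.predict(X)
        print("average absolute error:", np.mean(np.abs(pred - ef)))
        print("average relative error (%):", 100 * np.mean(np.abs((pred - ef) / ef)))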

  12. Modelling and Simulation of a Synchronous Machine with Power Electronic Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede

    2005-01-01

    This paper reports the modeling and simulation of a synchronous machine with a power electronic interface in a direct phase model. The implementation of a direct phase model of synchronous machines in MATLAB/SIMULINK is presented. The power electronic system associated with the synchronous machine...... is modelled in SIMULINK as well. The resulting model can more accurately represent non-ideal situations such as non-symmetrical parameters of the electrical machines and unbalance conditions. The model may be used for both steady state and large-signal dynamic analysis. This is particularly useful...

  14. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

    In some industrial fields, the workpiece surface needs to meet not only the demand for surface roughness, but also strict requirements on multi-scale frequency domain errors. The ultra-precision machine tool is the most important carrier for ultra-precision machining of parts, and its errors are the key factor influencing the multi-scale frequency domain errors of the machined surface. Volumetric error modeling is the important bridge linking machine errors to machined surface errors. However, the error modeling methods available from previous research are hard to use for analyzing the relationship between the dynamic errors of the machine motion components and the multi-scale frequency domain errors of the machined surface, which plays an important reference role in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier transform based dynamic error modeling method is presented, built on the theoretical basis of rigid body kinematics and homogeneous transformation matrices. A case study is carried out, which shows that the proposed method can successfully realize an identical and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for the prediction of the frequency domain errors on the machined surface, extraction of multi-scale frequency domain error information, and analysis of the relationship between the machine motion components and the frequency domain errors of the machined surface.
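
    A small, hedged sketch of the frequency-domain viewpoint: a synthesized axis error trace is decomposed with the Fourier transform into its dominant frequency components; sampling rate, error composition, and amplitudes are illustrative assumptions, not values from the paper.

        # Hedged sketch: separating a (here, synthesized) axis error trace into
        # frequency-domain error components with the Fourier transform.
        import numpy as np

        fs = 1000.0                              # sampling rate of the error signal [Hz]
        t = np.arange(0, 2.0, 1.0 / fs)
        # synthetic axis error: slow form error + spindle-frequency ripple + noise
        error = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.sin(2 * np.pi * 120.0 * t) \
                + np.random.default_rng(5).normal(0, 0.01, t.size)   # micrometres

        spectrum = np.fft.rfft(error)
        freqs = np.fft.rfftfreq(error.size, d=1.0 / fs)
        amps = 2.0 * np.abs(spectrum) / error.size

        # report the dominant frequency-domain error components
        for idx in np.argsort(amps)[-3:][::-1]:
            print(f"{freqs[idx]:7.1f} Hz  amplitude ~ {amps[idx]:.3f} um")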

  15. Model-driven Migration of Supervisory Machine Control Architectures

    NARCIS (Netherlands)

    Graaf, B.; Weber, S.; Van Deursen, A.

    2006-01-01

    Supervisory machine control is the high-level control in advanced manufacturing machines that is responsible for the coordination of manufacturing activities. Traditionally, the design of such control systems is based on finite state machines. An alternative, more flexible approach is based on

  16. HIDDEN MARKOV MODELS WITH COVARIATES FOR ANALYSIS OF DEFECTIVE INDUSTRIAL MACHINE PARTS

    OpenAIRE

    2014-01-01

    Monthly counts of industrial machine part errors are modeled using a two-state Hidden Markov Model (HMM) in order to describe the effect of machine part error correction and the amount of time spent on the error correction on the likelihood of the machine part to be in a “defective” or “non-defective” state. The number of machine parts errors were collected from a thermo plastic injection molding machine in a car bumper auto parts manufacturer in Liberec city, Czech Re...

  17. Crystal Structure Representations for Machine Learning Models of Formation Energies

    CERN Document Server

    Faber, Felix; von Lilienfeld, O Anatole; Armiento, Rickard

    2015-01-01

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an Ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix by using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a data set of 3938 crystal structures obtained from the Materials Project. For training sets consi...

  18. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    Science.gov (United States)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  19. HIDDEN MARKOV MODELS WITH COVARIATES FOR ANALYSIS OF DEFECTIVE INDUSTRIAL MACHINE PARTS

    Directory of Open Access Journals (Sweden)

    Pornpit Sirima

    2014-01-01

    Full Text Available Monthly counts of industrial machine part errors are modeled using a two-state Hidden Markov Model (HMM in order to describe the effect of machine part error correction and the amount of time spent on the error correction on the likelihood of the machine part to be in a “defective” or “non-defective” state. The number of machine parts errors were collected from a thermo plastic injection molding machine in a car bumper auto parts manufacturer in Liberec city, Czech Republic from January 2012 to November 2012. A Bayesian method is used for parameter estimation. The results of this study indicate that the machine part error correction and the amount of time spent on the error correction do not improve the machine part status of the individual part, but there is a very strong month-to-month dependence of the machine part states. Using the Mean Absolute Error (MAE criterion, the performance of the proposed model (MAE = 1.62 and the HMM including machine part error correction only (MAE = 1.68, from our previous study, is not significantly different. However, the proposed model has more advantage in the fact that the machine part state can be explained by both the machine part error correction and the amount of time spent on the error correction.

  20. Rotary ATPases: models, machine elements and technical specifications.

    Science.gov (United States)

    Stewart, Alastair G; Sobti, Meghna; Harvey, Richard P; Stock, Daniela

    2013-01-01

    Rotary ATPases are molecular rotary motors involved in biological energy conversion. They either synthesize or hydrolyze the universal biological energy carrier adenosine triphosphate. Recent work has elucidated the general architecture and subunit compositions of all three sub-types of rotary ATPases. Composite models of the intact F-, V- and A-type ATPases have been constructed by fitting high-resolution X-ray structures of individual subunits or sub-complexes into low-resolution electron densities of the intact enzymes derived from electron cryo-microscopy. Electron cryo-tomography has provided new insights into the supra-molecular arrangement of eukaryotic ATP synthases within mitochondria and mass-spectrometry has started to identify specifically bound lipids presumed to be essential for function. Taken together, these molecular snapshots show that nano-scale rotary engines have much in common with the basic design principles of man-made machines, from the function of individual "machine elements" to the requirement of the right "fuel" and "oil" for different types of motors.

  1. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Document Server

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  2. A Reference Model for Virtual Machine Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.

  3. Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling

    Directory of Open Access Journals (Sweden)

    M. Sulitka

    2014-02-01

    Full Text Available Quick calculation of machine tool dynamic response represents one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by employing model order reduction (MOR) techniques on the full finite element (FE) models. The paper provides a case study aimed at comparing the Krylov subspace and mode truncation techniques. The application of both reduction techniques to creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model, with very low time demands at the same time.
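
    The hedged sketch below applies Krylov-subspace (moment-matching) reduction to a toy spring-mass chain standing in for a machine-tool FE model and compares full and reduced frequency responses; matrix sizes, parameter values, and the reduction order are illustrative assumptions.

        # Hedged sketch: Krylov-subspace model order reduction of a small spring-mass chain.
        # Moment matching about zero frequency: the basis spans {K^-1 b, (K^-1 M) K^-1 b, ...}.
        import numpy as np

        n = 200                                   # full-order degrees of freedom
        k, m = 1e6, 1.0
        K = np.diag(2 * k * np.ones(n)) - np.diag(k * np.ones(n - 1), 1) - np.diag(k * np.ones(n - 1), -1)
        M = m * np.eye(n)
        b = np.zeros(n); b[-1] = 1.0              # force input and response pick-off at the tool end

        def krylov_basis(K, M, b, r):
            """Arnoldi-style orthonormal basis of the Krylov space of (K^-1 M, K^-1 b)."""
            V = np.zeros((K.shape[0], r))
            v = np.linalg.solve(K, b)
            V[:, 0] = v / np.linalg.norm(v)
            for j in range(1, r):
                w = np.linalg.solve(K, M @ V[:, j - 1])
                w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalization
                V[:, j] = w / np.linalg.norm(w)
            return V

        V = krylov_basis(K, M, b, r=10)
        Kr, Mr, br = V.T @ K @ V, V.T @ M @ V, V.T @ b

        def receptance(K, M, b, omega):
            # steady-state response amplitude at the input point for harmonic forcing
            return b @ np.linalg.solve(K - omega**2 * M, b)

        for f in [50.0, 150.0, 300.0]:            # compare full and reduced frequency response
            w = 2 * np.pi * f
            print(f, receptance(K, M, b, w), receptance(Kr, Mr, br, w))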

  4. Modeling of tool path for the CNC sheet cutting machines

    Science.gov (United States)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (a generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For solving the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
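
    As a baseline illustration of the tool-path ordering problem (not the dynamic-programming megalopolis model of the paper), the hedged sketch below orders a handful of made-up pierce points with a nearest-neighbour heuristic and reports the resulting idle-travel length.

        # Hedged sketch: ordering the pierce points of a set of contours with a
        # nearest-neighbour heuristic, as a simple baseline for GTSP-type tool-path problems.
        import math

        pierce_points = [(0, 0), (40, 5), (10, 30), (55, 28), (25, 12), (60, 2)]   # made-up coordinates

        def nearest_neighbour_path(points, start=0):
            remaining = set(range(len(points)))
            order = [start]
            remaining.remove(start)
            while remaining:
                last = points[order[-1]]
                nxt = min(remaining, key=lambda i: math.dist(last, points[i]))
                order.append(nxt)
                remaining.remove(nxt)
            return order

        order = nearest_neighbour_path(pierce_points)
        length = sum(math.dist(pierce_points[a], pierce_points[b]) for a, b in zip(order, order[1:]))
        print("visit order:", order, "idle-travel length:", round(length, 1))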

  5. Calibration of parallel kinematics machine using generalized distance error model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper focuses on the accuracy enhancement of parallel kinematics machines through kinematic calibration. In the calibration process, constructing a well-structured identification Jacobian matrix and measuring the end-effector position and orientation are the two main difficulties. In this paper, the identification Jacobian matrix is constructed easily by numerical calculation using the unit virtual velocity method. A generalized distance error model is presented to avoid measuring the position and orientation directly, which is difficult. Finally, a measurement tool is given for acquiring the data points in the calibration process. Experimental studies confirmed the effectiveness of the method. It is also shown in the paper that the proposed approach can be applied to other types of parallel manipulators.

  6. Reliability modeling of hydraulic system of drum shearer machine

    Institute of Scientific and Technical Information of China (English)

    SEYED HADI Hoseinie; MOHAMMAD Ataie; REZA Khalookakaei; UDAY Kumar

    2011-01-01

    The hydraulic system plays an important role in supplying power and transmitting it to the other working parts of a coal shearer machine. In this paper, the reliability of the hydraulic system of a drum shearer was analyzed. A case study was done in the Tabas Coal Mine in Iran for failure data collection. The results of the statistical analysis show that the time between failures (TBF) data of this system followed the 3-parameter Weibull distribution. There is about a 54% chance that the hydraulic system of the drum shearer will not fail for the first 50 h of operation. The developed model shows that the reliability of the hydraulic system reduces to a zero value after approximately 1 650 hours of operation. The failure rate of this system decreases as time increases. Therefore, corrective maintenance (run-to-failure) was selected as the best maintenance strategy for it.
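
    A hedged sketch of the reliability analysis described here: a three-parameter Weibull distribution is fitted to synthetic time-between-failure data (standing in for the Tabas Coal Mine records) and the reliability function is evaluated at a few operating times.

        # Hedged sketch: fitting a three-parameter Weibull distribution to time-between-failure
        # data and evaluating the reliability function R(t). The TBF sample below is synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        tbf = 10.0 + rng.weibull(1.3, 60) * 250.0       # synthetic time-between-failures [h]

        shape, loc, scale = stats.weibull_min.fit(tbf)  # 3-parameter fit (shape, location, scale)

        def reliability(t):
            # survival function of the fitted distribution = probability of no failure before t
            return stats.weibull_min.sf(t, shape, loc=loc, scale=scale)

        for t in [50, 500, 1650]:
            print(f"R({t} h) = {reliability(t):.3f}")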

  7. Risk factors for near-term myocardial infarction in apparently healthy men and women

    DEFF Research Database (Denmark)

    Nordestgaard, Børge; Adourian, Aram S; Freiberg, Jacob Johannes von S;

    2010-01-01

    Limited information is available regarding risk factors for the near-term (4 years) onset of myocardial infarction (MI). We evaluated established cardiovascular risk factors and putative circulating biomarkers as predictors for MI within 4 years of measurement....

  8. Design and Modeling of a 3 DOF Machine

    OpenAIRE

    Marjan Sikandar

    2014-01-01

    This paper proposes an IT solution for implementing a 3-DOF (degree-of-freedom) machine by using several design patterns. A 3-DOF machine is a motion platform synchronized with a visually displayed simulation game. Through this research we have tried to find an optimal way to implement such a robotic machine that is intended to be used in different virtual prototypes by industrialists. Since the development cost of an actual product is often too high to test the initial design, many industrialists now-...

  9. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    Science.gov (United States)

    2015-09-12

    [Report documentation page residue. Recoverable details: report AFRL-AFOSR-VA-TR-2015-0278, Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models, Katya Scheinberg, grant FA9550-11-1-0239; abstract fragment: "...developed, which has been the focus of our research."; subject terms: optimization, Derivative-Free Optimization, Statistical Machine Learning.]

  10. Developing a PLC-friendly state machine model: lessons learned

    Science.gov (United States)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  11. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab; Wu, Hao [Fermilab; Ren, Shangping [IIT, Chicago; Timm, Steven [Fermilab; Bernabeu, Gerard [Fermilab; Noh, Seo-Young [KISTI, Daejeon

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
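
    The record does not give the functional form of the reference model; as a minimal sketch of the idea, launch overhead can be regressed on host resource utilization at launch time and the fitted model queried before placing a VM. The column names and numbers below are illustrative placeholders, not FermiCloud data.

    # Minimal sketch (not FermiCloud's actual model): fit VM launch overhead as a
    # function of host resource utilization at launch time with ordinary least squares.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # toy operational log: [cpu_util, mem_util, io_util] in [0, 1]
    X = np.array([[0.10, 0.20, 0.05],
                  [0.55, 0.40, 0.30],
                  [0.85, 0.75, 0.60],
                  [0.30, 0.65, 0.20]])
    launch_seconds = np.array([35.0, 48.0, 92.0, 51.0])  # observed launch overhead

    model = LinearRegression().fit(X, launch_seconds)

    # predicted overhead for a candidate host; a bursting policy could compare this
    # estimate across hosts before deciding where to launch the VM
    candidate = np.array([[0.45, 0.50, 0.25]])
    print(model.predict(candidate))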

  12. Machine learning and cosmological simulations - I. Semi-analytical models

    Science.gov (United States)

    Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.

    2016-01-01

    We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and a computationally efficient tool to study small-scale structure formation.
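
    An illustrative sketch of this kind of setup (not the authors' code or data) is a multi-output random forest mapping halo properties to baryonic properties; the feature and target names below are hypothetical stand-ins for the Millennium/SAM quantities.

    # Train a random forest to map dark-matter halo features to galaxy properties,
    # in the spirit of the record above; all data here are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # halo features: e.g. log halo mass, spin, concentration, number of progenitors
    X = rng.normal(size=(n, 4))
    # targets: e.g. log stellar mass, log cold gas mass (toy relation plus noise)
    y = np.column_stack([0.8 * X[:, 0] + 0.1 * X[:, 2] + 0.05 * rng.normal(size=n),
                         0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.normal(size=n)])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    print("held-out R^2:", rf.score(X_test, y_test))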

  13. Simple models for estimating dementia severity using machine learning.

    Science.gov (United States)

    Shankle, W R; Mania, S; Dick, M B; Pazzani, M J

    1998-01-01

    Estimating dementia severity using the Clinical Dementia Rating (CDR) Scale is a two-stage process that currently is costly and impractical in community settings, and at best has an interrater reliability of 80%. Because staging of dementia severity is economically and clinically important, we used Machine Learning (ML) algorithms with an Electronic Medical Record (EMR) to identify simpler models for estimating total CDR scores. Compared to a gold standard, which required 34 attributes to derive total CDR scores, ML algorithms identified models with as few as seven attributes. The classification accuracy varied with the algorithm used, with naïve Bayes giving the highest (76%). The mildly demented severity class was the only one with significantly reduced accuracy (59%). If one groups the severity classes into normal, very mild-to-mildly demented, and moderate-to-severely demented, then classification accuracies are clinically acceptable (85%). These simple models can be used in community settings where it is currently not possible to estimate dementia severity due to time and cost constraints.
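
    A hedged sketch of the reduced-attribute approach (not the study's EMR pipeline): a naive Bayes classifier over a handful of attributes predicting a coarse severity class. The attributes and data below are synthetic placeholders.

    # Naive Bayes over seven attributes predicting a grouped severity class,
    # illustrating the kind of simplified model the record describes.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 300
    X = rng.normal(size=(n, 7))          # seven EMR-derived attributes (placeholder)
    y = rng.integers(0, 3, size=n)       # 0=normal, 1=very mild/mild, 2=moderate/severe

    clf = GaussianNB()
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())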

  14. Frame Design and Reality of Numerical Model for Sculptured Part Machining

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The importance of the numerical model for sculptured part machining based on a virtual environment is introduced. Meanwhile, the general frame of the numerical model is proposed, and the techniques for developing the numerical model are discussed in detail.

  15. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    Science.gov (United States)

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  16. Design and Modeling of High Performance Permanent Magnet Synchronous Machines

    NARCIS (Netherlands)

    Van der Geest, M.

    2015-01-01

    The electrification of transportation, and especially aerospace transportation, increases the demand for high performance electrical machines. Those machines often need to be fault-tolerant, cheap, highly efficient, light and small, and interface well with the inverter. In addition, the development

  17. Nitric oxide for respiratory failure in infants born at or near term.

    Science.gov (United States)

    Barrington, Keith J; Finer, Neil; Pennaforte, Thomas; Altit, Gabriel

    2017-01-05

    Nitric oxide (NO) is a major endogenous regulator of vascular tone. Inhaled nitric oxide (iNO) gas has been investigated as treatment for persistent pulmonary hypertension of the newborn. To determine whether treatment of hypoxaemic term and near-term newborn infants with iNO improves oxygenation and reduces rate of death and use of extracorporeal membrane oxygenation (ECMO), or affects long-term neurodevelopmental outcomes. We used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL; 2016, Issue 1), MEDLINE via PubMed (1966 to January 2016), Embase (1980 to January 2016) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to January 2016). We searched clinical trials databases, conference proceedings and reference lists of retrieved articles for randomised controlled trials and quasi-randomised trials. We contacted the principal investigators of studies published as abstracts to ascertain the necessary information. Randomised studies of iNO in term and near-term infants with hypoxic respiratory failure, with clinically relevant outcomes, including death, use of ECMO and oxygenation. We analysed trial reports to assess methodological quality using the criteria of the Cochrane Neonatal Review Group. We tabulated mortality, oxygenation, short-term clinical outcomes (particularly use of ECMO) and long-term developmental outcomes. For categorical outcomes, we calculated typical estimates for risk ratios and risk differences. For continuous variables, we calculated typical estimates for weighted mean differences. We used 95% confidence intervals and assumed a fixed-effect model for meta-analysis. We found 17 eligible randomised controlled studies that included term and near-term infants with hypoxia.Ten trials compared iNO versus control (placebo or standard care without iNO) in infants with moderate or severe severity of illness scores (Ninos 1996; Roberts

  18. Supervisory autonomous local-remote control system design: Near-term and far-term applications

    Science.gov (United States)

    Zimmerman, Wayne; Backes, Paul

    1993-01-01

    The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.

  19. Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.

    Science.gov (United States)

    Chowdhury, Debashish

    2013-06-04

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here.
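
    The nonequilibrium stochastic-kinetics tools the review surveys are often implemented as Gillespie-type simulations; the sketch below is a generic two-state mechanochemical cycle with arbitrary rates, offered only as an illustration and not taken from the review.

    # Minimal Gillespie-type simulation of a hypothetical two-state molecular motor
    # that advances one step per completed chemical cycle; rates are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    k_fwd, k_back = 50.0, 5.0      # s^-1, transition rates between states 0 and 1
    step_size = 8e-9               # m, advance per completed 1 -> 0 transition

    t, state, position = 0.0, 0, 0.0
    while t < 1.0:                              # simulate one second
        rate = k_fwd if state == 0 else k_back
        t += rng.exponential(1.0 / rate)        # stochastic waiting time to next event
        if state == 1:
            position += step_size               # completing the cycle advances the motor
        state = 1 - state

    print("mean velocity ~", position / t, "m/s")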

  20. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple models (MM) approach was proposed to model complex industrial processes by using Fuzzy Support Vector Machines (F_SVMs). By applying the proposed approach to a pH neutralization titration experiment, F_SVMs MM not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple modeling method and the single modeling method based on standard SVMs.

  1. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    Science.gov (United States)

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  2. Development of Mathematical Model for Lifecycle Management Process of New Type of Multirip Saw Machine

    Directory of Open Access Journals (Sweden)

    B. V. Phung

    2017-01-01

    Full Text Available The subject of research is a new type of multirip saw machine with circular reciprocating saw blades. This machine has a number of advantages in comparison with other machines of similar purpose. The paper presents an overview of different types of saw equipment and describes the basic characteristics of the machine under investigation. Applying the concept of lifecycle management to the considered machine in a unified information space is necessary to improve quality and competitiveness in the current production environment. Throughout this lifecycle all the members, namely designers, technologists, customers, etc., tend to optimize the overall machine design as much as possible. However, this is not always achievable: at the boundaries between the phases there are several mismatching, if not outright conflicting, requirements. For example, improvement of mass characteristics can lead to poor stability and rigidity of the saw blade. Increasing machine output by raising the rotation frequency of the machine motor, on the other hand, reduces the stability of the saw blades, and so on. In order to provide a coherent framework for the collaborative environment between the members of the life cycle, the article presents a technique for constructing a mathematical model that combines all the different members' requirements in a unified information model. The article also gives an analysis of the kinematic, dynamic and technological characteristics of the machine and describes in detail all the controlled parameters, functional constraints, and quality criteria of the machine under consideration. Depending on the controlled parameters, the analytical relationships formulate the functional constraints and quality criteria of the machine. The proposed algorithm allows fast and exact calculation of all the functional constraints and quality criteria of the machine for a given vector of the control

  3. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models

    Science.gov (United States)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee

    2017-08-01

    A comparative study of logistic regression, support vector machine (SVM) and least square support vector machine (LSSVM) models has been done to predict the slope failure (landslide) along East-West Highway (Gerik-Jeli). The effects of two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two related factors of occurrence of slope failure are included in this study: rainfall and underground water. For each method, two predictive models are constructed, namely SOUTHWEST and NORTHEAST models. Based on the results obtained from logistic regression models, two factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for two monsoon seasons are verified by using Relative Operating Characteristics curves. The validation results showed that all models produced prediction of high accuracy. For the results of SVM and LSSVM, the models using RBF kernel showed better prediction compared to the models using linear kernel. The comparative results showed that, for SOUTHWEST models, three statistical models have relatively similar performance. For NORTHEAST models, logistic regression has the best predictive efficiency whereas the SVM model has the second best predictive efficiency.
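
    As a sketch of the model comparison described above (synthetic data, not the East-West Highway dataset), logistic regression and an RBF-kernel SVM can be trained on the two predictors (rainfall, groundwater level) and compared by ROC AUC:

    # Logistic regression vs. RBF-kernel SVM on two predictors, compared by ROC AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 500
    X = rng.normal(size=(n, 2))                       # [rainfall, groundwater level]
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, clf in [("logistic", LogisticRegression()),
                      ("SVM (RBF)", SVC(kernel="rbf", probability=True))]:
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(name, "AUC:", round(auc, 3))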

  4. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    Directory of Open Access Journals (Sweden)

    Stålring Jonna C

    2011-07-01

    Full Text Available Abstract Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In support of the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the

  5. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows it to recognise its own geometry by probing the vertical offset from tool point to the machine table, at positions in the horizontal plane. After automatic calibration the positioning error of the machine tool was reduced from an initial error after its assembly of ±170 µm to a calibrated error of ±3 µm...
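
    The record does not give the deviation model itself; as a generic illustration of the probing-and-fitting idea, a small set of deviation parameters (here a tilted-plane error, standing in for the machine's elemental deviations) can be identified by least squares from probed vertical offsets and then subtracted in the kinematic model. All values below are placeholders.

    # Probe vertical offsets z(x, y) over the table, then fit a few deviation
    # parameters by least squares and use them as corrections (generic sketch).
    import numpy as np
    from scipy.optimize import least_squares

    # probed grid positions (m) and measured vertical offsets (m); toy data standing
    # in for a plane tilt of ~100 µm/m plus a constant height error
    xy = np.array([[x, y] for x in (-0.1, 0.0, 0.1) for y in (-0.1, 0.0, 0.1)])
    z_meas = 1e-4 * xy[:, 0] - 5e-5 * xy[:, 1] + 2e-5

    def residuals(p):
        a, b, c = p                       # elemental deviations: two tilts + offset
        return a * xy[:, 0] + b * xy[:, 1] + c - z_meas

    fit = least_squares(residuals, x0=np.zeros(3))
    print("identified deviations (tilt_x, tilt_y, offset):", fit.x)
    # the inverse-kinematics model would subtract the predicted offset at each XY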

  6. CPS Modeling of CNC Machine Tool Work Processes Using an Instruction-Domain Based Approach

    Directory of Open Access Journals (Sweden)

    Jihong Chen

    2015-06-01

    Full Text Available Building cyber-physical system (CPS) models of machine tools is a key technology for intelligent manufacturing. The massive electronic data from a computer numerical control (CNC) system during the work processes of a CNC machine tool is the main source of the big data on which a CPS model is established. In this work-process model, a method based on instruction domain is applied to analyze the electronic big data, and a quantitative description of the numerical control (NC) processes is built according to the G code of the processes. Utilizing the instruction domain, a work-process CPS model is established on the basis of the accurate, real-time mapping of the manufacturing tasks, resources, and status of the CNC machine tool. Using such models, case studies are conducted on intelligent-machining applications, such as the optimization of NC processing parameters and the health assurance of CNC machine tools.

  7. Assessment of two mammographic density related features in predicting near-term breast cancer risk

    Science.gov (United States)

    Zheng, Bin; Sumkin, Jules H.; Zuley, Margarita L.; Wang, Xingwei; Klym, Amy H.; Gur, David

    2012-02-01

    In order to establish a personalized breast cancer screening program, it is important to develop risk models that have high discriminatory power in predicting the likelihood of a woman developing an imaging detectable breast cancer in the near term. Using woman's age, BIRADS rating, and computed mammographic density related features, we compared classification performance in estimating the likelihood of detecting cancer during the subsequent examination using areas under the ROC curves (AUC). The AUCs were 0.63+/-0.03, 0.54+/-0.04, 0.57+/-0.03, 0.68+/-0.03 when using woman's age, BIRADS rating, computed mean density and difference in computed bilateral mammographic density, respectively. Performance increased to 0.62+/-0.03 and 0.72+/-0.03 when we fused mean and difference in density with woman's age. The results suggest that, in this study, bilateral mammographic tissue density is a significantly stronger (p<0.01) risk indicator than both woman's age and mean breast density.

  8. Landmine policy in the near-term: a framework for technology analysis and action

    Energy Technology Data Exchange (ETDEWEB)

    Eimerl, D., LLNL

    1997-08-01

    Any effective solution to the problem of leftover landmines and other post-conflict unexploded ordnance (UXO) must take into account the real capabilities of demining technologies and the availability of sufficient resources to carry out demining operations. Economic and operational factors must be included in analyses of humanitarian demining. These factors will provide a framework for using currently available resources and technologies to complete this task in a time frame that is both practical and useful. Since it is likely that reliable advanced technologies for demining are still several years away, this construct applies to the intervening period. It may also provide a framework for utilizing advanced technologies as they become available. This study presents an economic system model for demining operations carried out by the developed nations that clarifies the role and impact of technology on the economic performance and viability of these operations. It also provides a quantitative guide to assess the performance penalties arising from gaps in current technology, as well as the potential advantages and desirable features of new technologies that will significantly affect the international community's ability to address this problem. Implications for current and near-term landmine and landmine technology policies are drawn.

  9. Optimisation of near-term PPCS power plant designs from the material management stance

    Energy Technology Data Exchange (ETDEWEB)

    Pampin, R.; O' Brian, M.H. [Euratom/UKAEA Fusion Association, Culham Science Centre, Abingdon (United Kingdom)

    2007-07-01

    The effective management of active material arising from fusion power generation is of crucial importance to maximise the environmental benefits of fusion. In recent years, several EU and international activities have focused on minimising fusion waste and its radiotoxicity. Reviews have been made of industry practices and international standards to support a comprehensive management strategy based on maximum clearance, recycling and refurbishment of materials. Following this effort, the next step is to optimise the power plant designs according to this strategy and following the 'low-activation-design' philosophy of earlier studies. In this paper, the designs of two near-term PPCS plant models based on ITER-relevant technology, the helium-cooled pebble bed and lithium-lead blanket concepts, are revisited to optimise the management of active materials and minimise wastes. Combined use of novel shielding materials, customised radial builds and impurity control achieves maximum clearance and recycling potential of the irradiated material, and minimises the radiotoxicity of any residual secondary wastes. Up to 17% of the material can achieve clearance before 100 years, representing the majority of the decommissioning stream. Of the remaining material, most can be recycled in conventional nuclear foundries. C-14 generation can be reduced by at least 95% with adequate control of nitrogen impurities. Results confirm the trends obtained in previous work pointing to over-conservatism of the original PPCS analyses based on out-of-date criteria and experience. (orig.)

  10. A Near-Term, High-Confidence Heavy Lift Launch Vehicle

    Science.gov (United States)

    Rothschild, William J.; Talay, Theodore A.

    2009-01-01

    The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.

  11. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  12. Quasilinear Extreme Learning Machine Model Based Internal Model Control for Nonlinear Process

    Directory of Open Access Journals (Sweden)

    Dazi Li

    2015-01-01

    Full Text Available A new strategy for internal model control (IMC) is proposed using a regression algorithm of a quasilinear model with extreme learning machine (QL-ELM). Aimed at chemical processes with nonlinearity, the learning procedures for the internal model and the inverse model are derived. The proposed QL-ELM is constructed as a linear ARX model with a complicated nonlinear coefficient. It shows good approximation ability and fast convergence. The complicated coefficients are separated into two parts. The linear part is determined by recursive least squares (RLS), while the nonlinear part is identified through the extreme learning machine. The parameters of the linear part and the output weights of the ELM are estimated iteratively. The proposed internal model control is applied to a CSTR process. The effectiveness and accuracy of the proposed method are extensively verified through numerical results.
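
    For readers unfamiliar with extreme learning machines, the sketch below is a bare-bones generic ELM regressor (random, fixed hidden weights; output weights solved by least squares). It is not the QL-ELM of the record, and all data are synthetic.

    # Bare-bones ELM regression: random hidden layer, least-squares output weights.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # toy nonlinear target

    n_hidden = 50
    W = rng.normal(size=(1, n_hidden))        # random input weights (never trained)
    b = rng.normal(size=n_hidden)             # random hidden biases
    H = np.tanh(X @ W + b)                    # hidden-layer activations

    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
    y_hat = H @ beta
    print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))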

  13. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVM algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then the DM method using this algorithm is proposed. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the performance of the proposed approach is superior to that of the global modeling method based on standard SVMs.

  14. Comparative Study of Moore and Mealy Machine Models Adaptation ...

    African Journals Online (AJOL)

    user

    Information and Communications Technology has influenced the need for automated machines that can carry out important production ...

  15. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  16. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View.

    Science.gov (United States)

    Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael

    2016-12-16

    As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.

  17. Measurements and modelling of low-frequency disturbances in induction machines

    Energy Technology Data Exchange (ETDEWEB)

    Thiringer, T. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Electric Power Engineering

    1996-12-01

    The thesis deals with the dynamic response of the induction machine to low frequency perturbations in the shaft torque, supply voltage and supply frequency. Also the response of a two-machine group connected to a weak grid is investigated. The results predicted by various induction models are compared with measurements performed on a laboratory set-up. Furthermore, the influence of machine and grid parameters, machine temperature, phase compensating capacitors, skin effect, saturation level and operating points is studied. The results predicted by the fifth-order non-linear Park model agree well with the measured induction machine responses to shaft torque, supply frequency and voltage magnitude perturbations. To determine the electric power response to very low-frequency perturbations in the magnitude of the supply voltage, the Park model must be modified to take varying iron losses into account. The temperature and supply frequency affect the low frequency dynamics of the induction machine significantly. The static shaft torque is, however, of importance for determining the responses to voltage magnitude perturbations. The performance of reduced-order induction machine models depends on the type of induction machine investigated. Best suited to be represented by reduced-order models are high-slip machines as well as machines that have a low ratio between the stator resistance and leakage reactances. A first-order model can predict the rotor speed, electrodynamic torque and electric power responses to shaft torque and supply frequency perturbations up to a perturbation frequency of at least 1 Hz. A second-order model can determine the same responses also for higher perturbation frequencies, at least up to 3 Hz. Using a third-order model all the responses can be determined up to at least 10 Hz. 48 refs, 45 figs, 14 tabs

  18. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch and bound algorithm. Fixed delivery times are used as the main constraint, with different processing times for each job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
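
    A toy formulation in the same spirit (job-level tardiness with big-M no-overlap constraints) is sketched below; it is not the paper's exact model, the data are invented, and the open-source CBC solver stands in for the branch-and-bound method.

    # Small illustrative total-tardiness MILP: each job has a machine assignment,
    # processing time and delivery (due) date; jobs sharing a machine must not overlap.
    import pulp

    jobs = {          # job: (machine, processing_time, due_date) -- toy data
        "J1": ("M1", 4, 6),
        "J2": ("M1", 3, 5),
        "J3": ("M2", 5, 7),
    }
    M = 1000
    prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)

    start = {j: pulp.LpVariable(f"s_{j}", lowBound=0) for j in jobs}
    tard = {j: pulp.LpVariable(f"T_{j}", lowBound=0) for j in jobs}

    # tardiness definition: T_j >= completion - due_date
    for j, (m, p, d) in jobs.items():
        prob += tard[j] >= start[j] + p - d

    # disjunctive (no-overlap) constraints for jobs sharing a machine
    pairs = [(a, b) for a in jobs for b in jobs if a < b and jobs[a][0] == jobs[b][0]]
    for a, b in pairs:
        y = pulp.LpVariable(f"y_{a}_{b}", cat="Binary")   # 1 if a precedes b
        prob += start[a] + jobs[a][1] <= start[b] + M * (1 - y)
        prob += start[b] + jobs[b][1] <= start[a] + M * y

    prob += pulp.lpSum(tard.values())                     # objective: total tardiness
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for j in jobs:
        print(j, "start:", start[j].value(), "tardiness:", tard[j].value())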

  19. Simulated Near-term Climate Change Impacts on Major Crops across Latin America and the Caribbean

    Science.gov (United States)

    Gourdji, S.; Mesa-Diez, J.; Obando-Bonilla, D.; Navarro-Racines, C.; Moreno, P.; Fisher, M.; Prager, S.; Ramirez-Villegas, J.

    2016-12-01

    Robust estimates of climate change impacts on agricultural production can help to direct investments in adaptation in the coming decades. In this study commissioned by the Inter-American Development Bank, near-term climate change impacts (2020-2049) are simulated relative to a historical baseline period (1971-2000) for five major crops (maize, rice, wheat, soybean and dry bean) across Latin America and the Caribbean (LAC) using the DSSAT crop model. No adaptation or technological change is assumed, thereby providing an analysis of existing climatic stresses on yields in the region and a worst-case scenario in the coming decades. DSSAT is run across irrigated and rain-fed growing areas in the region at a 0.5° spatial resolution for each crop. Crop model inputs for soils, planting dates, crop varieties and fertilizer applications are taken from previously-published datasets, and also optimized for this study. Results show that maize and dry bean are the crops most affected by climate change, followed by wheat, with only minimal changes for rice and soybean. Generally, rain-fed production sees more severe yield declines than irrigated production, although large increases in irrigation water are needed to maintain yields, reducing the yield-irrigation productivity in most areas and potentially exacerbating existing supply limitations in watersheds. This is especially true for rice and soybean, the two crops showing the most neutral yield changes. Rain-fed yields for maize and bean are projected to decline most severely in the sub-tropical Caribbean, Central America and northern South America, where climate models show a consistent drying trend. Crop failures are also projected to increase in these areas, necessitating switches to other crops or investment in adaptation measures. Generally, investment in agricultural adaptation to climate change (such as improved seed and irrigation infrastructure) will be needed throughout the LAC region in the 21st century.

  20. DFT modeling of chemistry on the Z machine

    Science.gov (United States)

    Mattsson, Thomas

    2013-06-01

    Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide-range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand Kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reaction are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and demonstration of the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock- and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP) including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions and the combination of experiment and DFT provide reliable data for evaluating existing and constructing future wide-range equations of state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines

    CERN Document Server

    Belyaev, Alexander; Krommer, Michael

    2017-01-01

    The papers in this volume present and discuss the frontiers in the mechanics of controlled machines and structures. They are based on papers presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines held in Vienna in September 2015. The workshop continues a series of international workshops held in Linz (2008) and St. Petersburg (2010).

  2. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    Science.gov (United States)

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  4. Static Stiffness Modeling of a Novel PKM-Machine Tool Structure

    Directory of Open Access Journals (Sweden)

    O. K. Akmaev

    2014-07-01

    Full Text Available This article presents a new configuration of a 3-dof machine tool with parallel kinematics. Elastic deformations of the machine tool have been modeled with finite elements, and stiffness coefficients at characteristic points of the working area have been calculated for different cutting forces.

  5. Using financial risk measures for analyzing generalization performance of machine learning models.

    Science.gov (United States)

    Takeda, Akiko; Kanamori, Takafumi

    2014-09-01

    We propose a unified machine learning model (UMLM) for two-class classification, regression and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties of UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with the minimized generalization bounds. Those theoretical properties are applicable to related existing learning models.
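
    For reference, the risk measures mentioned above can be estimated empirically from a loss sample; the snippet below is a generic numerical illustration of VaR and CVaR (expected shortfall), not code from the paper.

    # Empirical value-at-risk (VaR) and conditional VaR (CVaR) of a loss sample.
    import numpy as np

    rng = np.random.default_rng(4)
    losses = rng.normal(loc=0.0, scale=1.0, size=10000)   # e.g. margin-based losses
    alpha = 0.95

    var = np.quantile(losses, alpha)                # threshold exceeded with prob. 1 - alpha
    cvar = losses[losses >= var].mean()             # mean loss beyond the VaR level
    print(f"VaR_{alpha}: {var:.3f}  CVaR_{alpha}: {cvar:.3f}")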

  6. Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.

    2010-04-01

    Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

  7. The Effect of Unreliable Machine for Two Echelons Deteriorating Inventory Model

    Directory of Open Access Journals (Sweden)

    I Nyoman Sutapa

    2014-01-01

    Full Text Available Many researchers have developed two-echelon supply chain models; however, only a few of them consider deteriorating items and unreliable machines in their models. In this paper, we develop a deteriorating inventory model for a two-echelon supply chain with an unreliable machine. The machine's unreliable time is assumed to be uniformly distributed. The model is solved using a simple heuristic since a closed-form solution cannot be derived. A numerical example is used to show how the model works. A sensitivity analysis is conducted to show the effect of different lost sales costs in the model. The result shows that increasing the lost sales cost will increase both the manufacturer's and the buyer's costs; however, the buyer's total cost increases more than the manufacturer's total cost as the manufacturer's machine becomes more unreliable.

  8. Electromechanical coupling model and analysis of transient behavior for inertial vibrating machines

    Institute of Scientific and Technical Information of China (English)

    HU Ji-yun; YU Cui-ping; YIN Xue-gang

    2004-01-01

    A mathematical model of the electromechanical coupling system of a planar inertial vibrating machine is built by setting up the dynamical equations of the discrete system with a proposed matrix methodology. The substance of the transient behavior of the machine is revealed by analyzing computer simulations of the model, and new methods are presented for diminishing the transient amplitude of the vibrating machine and improving the transient behavior. The model provides a reliable basis for intelligent control of the transient behavior of the equipment.

  9. Feature Based Machining Process Planning Modeling and Integration for Life Cycle Engineering

    Institute of Scientific and Technical Information of China (English)

    LIU Changyi

    2006-01-01

    Machining process data is the core of computer aided process planning (CAPP) application systems. It also provides essential content for product life cycle engineering (LCE). The character of CAPP that supports product LCE and virtual manufacturing is analyzed. The structure and content of machining process data concerning green manufacturing are also examined. A logical model of machining process data has been built based on an object-oriented approach using UML technology, together with a physical model of machining process data that utilizes XML technology. To realize the integration of design and process, an approach based on graph-based volume decomposition is proposed. To solve the problem of process generation in machining, case-based reasoning and rule-based reasoning have been applied in combination. Finally, the integration framework and interfaces that deal with the CAPP integration with CAD, CAM, PDM, and ERP are discussed.

  10. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    Parallel kinematics have been adopted by more than 25 manufacturers of high-end desktop 3D printers [Wohlers Report (2015), p.118] as well as by research projects such as the WASP project [WASP (2015)], a 12 meter tall linear delta robot for Additive Manufacture of large-scale components. A virtual machine of the kinematics has been developed in order to decompose the different types of geometrical errors into 6 elementary cases. Deliberate introduction of errors to the virtual machine has subsequently allowed for the generation of deviation plots that can be used as a strong tool for the identification and correction of geometrical errors on a physical machine tool.

  11. Profound hypotension and associated electrocardiographic changes during prolonged cord occlusion in the near term fetal sheep

    NARCIS (Netherlands)

    Wibbens, B; Westgate, JA; Bennet, L; Roelfsema; De Haan, HH; Hunter, CJ; Gunn, AJ

    2005-01-01

    Objective: To determine whether the onset of fetal hypotension during profound asphyxia is reflected by alterations in the ratio between the T height, measured from the level of the PQ interval, and the QRS amplitude (T/QRS ratio) and ST waveform. Study design: Chronically instrumented near-term fet

  12. Acute maternal rehydration increases the urine production rate in the near-term human fetus

    NARCIS (Netherlands)

    Haak, MC; Aarnoudse, JG; Oosterhof, H.

    OBJECTIVE: We sought to investigate the effect of a decrease of maternal plasma osmolality produced by hypotonic rehydration on the fetal urine production rate in normal near-term human fetuses. STUDY DESIGN: Twenty-one healthy pregnant women attending the clinic for antenatal care were studied

  13. Cerebral cortical tissue damage after hemorrhagic hypotension in near-term born lambs.

    NARCIS (Netherlands)

    Os, S.H.G. van; Tweel, E. van den; Egberts, H.; Hopman, J.; Ruitenbeek, W.; Bel, F. van; Groenendaal, F.; Bor, M. van de

    2006-01-01

    Hypotension reduces cerebral O2 supply, which may result in brain cell damage and loss of brain cell function in the near-term neonate. The aim is to elucidate 1) to what extent the functional disturbance of the cerebral cortex, as measured with electrocortical brain activity (ECBA), is related to

  14. Elective caesarean section and respiratory morbidity in the term and near-term neonate

    DEFF Research Database (Denmark)

    Hansen, Anne Kirkeby; Wisborg, Kirsten; Uldbjerg, Niels

    2007-01-01

    AIM: The aim of this review was to assess the relationship between delivery by elective caesarean section and respiratory morbidity in the term and near-term neonate. METHODS: Searches were made in the MEDLINE database, EMBASE, Cochrane database and Web of Science to identify peer-reviewed studie...

  15. Acute maternal alcohol consumption disrupts behavioral state organization in the near-term fetus

    NARCIS (Netherlands)

    Mulder, EJH; Morssink, LP; Van der Schee, T; Visser, GHA

    1998-01-01

    Disturbed sleep regulation is often observed in neonates of women who drank heavily during pregnancy. It is unknown if (and how) an occasional drink affects fetal sleeping behavior. In 28 near-term pregnant women we examined the effects on fetal behavioral state organization of two glasses of wine (

  16. Distributed model for electromechanical interaction in rotordynamics of cage rotor electrical machines

    Science.gov (United States)

    Laiho, Antti; Holopainen, Timo P.; Klinge, Paul; Arkkio, Antero

    2007-05-01

    In this work the effects of the electromechanical interaction on rotordynamics and vibration characteristics of cage rotor electrical machines were considered. An eccentric rotor motion distorts the electromagnetic field in the air-gap between the stator and rotor inducing a total force, the unbalanced magnetic pull, exerted on the rotor. In this paper a low-order parametric model for the unbalanced magnetic pull is coupled with a three-dimensional finite element structural model of the electrical machine. The main contribution of the work is to present a computationally efficient electromechanical model for vibration analysis of cage rotor machines. In this model, the interaction between the mechanical and electromagnetic systems is distributed over the air gap of the machine. This enables the inclusion of rotor and stator deflections into the analysis and, thus, yields more realistic prediction for the effects of electromechanical interaction. The model was tested by implementing it for two electrical machines with nominal speeds close to one of the rotor bending critical speeds. Rated machine data was used in order to predict the effects of the electromechanical interaction on vibration characteristics of the example machines.

  17. Near Term Hybrid Passenger Vehicle Development Program. Phase I, Final report. Appendix B: trade-off studies. Volume II. Appendices. [SPEC-78

    Energy Technology Data Exchange (ETDEWEB)

    Traversi, M.; Piccolo, R.

    1979-06-15

    These appendices to the Near Term Hybrid Vehicle Trade-off Studies reports present data on the SPEC-78 computer model for simulating vehicle performance, fuel economy, and exhaust emissions; propulsion system alternatives; lead-acid and sodium-sulfur batteries; and production cost estimates. (LCL)

  18. Impact of Model Detail of Synchronous Machines on Real-time Transient Stability Assessment

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2013-01-01

    In this paper, it is investigated how detailed the model of a synchronous machine needs to be in order to assess transient stability using a Single Machine Equivalent (SIME). The results will show how the stability mechanism and the stability assessment are affected by the model detail. In order to identify the transient stability mechanism, a simulation with a high-order model was used as reference. The Western System Coordinating Council System (WSCC) and the New England & New York system are considered and simulations of an unstable and a stable scenario are carried out, where the detail of the machine models is varied. Analyses of the results suggest that a 4th-order model may be sufficient to represent synchronous machines in transient stability studies.
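
    To make the notion of "model order" concrete, the lowest-order (classical, 2nd-order) representation is the swing equation for a single machine against an infinite bus; the sketch below integrates it with a simple Euler scheme. It is only a generic illustration (parameter values and the damping convention are assumptions), not the SIME method or the study's 4th-order model.

    # Classical 2nd-order single-machine-infinite-bus swing equation, Euler integration.
    import numpy as np

    H, D, f0 = 3.5, 1.0, 50.0        # inertia constant (s), damping, system frequency
    Pm, Emax = 0.9, 1.8              # mechanical power, max electrical power (p.u.)
    omega_s = 2 * np.pi * f0

    delta, domega = np.arcsin(Pm / Emax), 0.0   # start at the stable equilibrium
    dt, t_fault = 1e-3, 0.1                     # time step; fault cleared after 0.1 s

    for k in range(int(2.0 / dt)):              # simulate 2 s
        t = k * dt
        Pe = 0.0 if t < t_fault else Emax * np.sin(delta)   # fault: no electrical power
        ddomega = (omega_s / (2 * H)) * (Pm - Pe - D * domega / omega_s)
        domega += dt * ddomega
        delta += dt * domega

    print("rotor angle after 2 s (rad):", round(delta, 3))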

  19. STATE SPACE MODELING OF DIMENSIONAL MACHINING ERRORS OF SERIAL-PARALLEL HYBRID MULTI-STAGE MACHINING SYSTEM

    Institute of Scientific and Technical Information of China (English)

    XI Lifeng; DU Shichang

    2007-01-01

    The final product quality is determined by the accumulation, coupling and propagation of product quality variations from all stations in multi-stage manufacturing systems (MMSs). Modeling and control of variation propagation is essential to improve product quality. However, the current stream of variations (SOV) theory can only handle the case in which a single variation stream affects the product quality. Due to the existence of multiple variation streams, limited research has been done on quality control in serial-parallel hybrid multi-stage manufacturing systems (SPH-MMSs). A state space model and its modeling strategies are developed to describe the stack-up of multiple variation streams in an SPH-MMS. The SOV theory is extended to the SPH-MMS. The dimension of the system model is reduced to a level consistent with production reality, and the effectiveness and feasibility of the model are validated by a machining case.
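
    Stream-of-variation models are commonly written in the linear state-space form x_k = A_k x_{k-1} + B_k u_k + w_k; the sketch below propagates such a model numerically. It uses placeholder matrices and data, not the paper's SPH-MMS formulation.

    # Generic SOV-style propagation of a dimensional deviation vector through stages.
    import numpy as np

    rng = np.random.default_rng(5)
    n_features, n_stages = 3, 4
    A = [np.eye(n_features) + 0.05 * rng.normal(size=(n_features, n_features))
         for _ in range(n_stages)]                # stage-to-stage transformation (toy)
    B = [np.eye(n_features) for _ in range(n_stages)]

    x = np.zeros(n_features)                      # deviations of key product features
    for k in range(n_stages):
        u_k = 0.01 * rng.normal(size=n_features)  # fixturing/tool deviations at stage k
        w_k = 0.002 * rng.normal(size=n_features) # unmodelled noise
        x = A[k] @ x + B[k] @ u_k + w_k           # x_k = A_k x_{k-1} + B_k u_k + w_k

    print("predicted final deviation vector:", np.round(x, 4))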

  20. Global and regional temperature-change potentials for near-term climate forcers

    Directory of Open Access Journals (Sweden)

    W. J. Collins

    2013-03-01

    Full Text Available We examine the climate effects of the emissions of near-term climate forcers (NTCFs) from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon) and 4 ozone precursors (methane, reactive nitrogen oxides (NOx), volatile organic compounds and carbon monoxide). We calculate the global climate metrics: global warming potentials (GWPs) and global temperature change potentials (GTPs). For the aerosols these metrics are simply time-dependent scalings of the equilibrium radiative forcings. The GTPs decrease more rapidly with time than the GWPs. The aerosol forcings and hence climate metrics have only a modest dependence on emission region. The metrics for ozone precursors include the effects on the methane lifetime. The impacts via methane are particularly important for the 20 yr GTPs. Emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than from the other Northern Hemisphere regions. The analysis is further extended by examining the temperature-change impacts in 4 latitude bands, and calculating absolute regional temperature-change potentials (ARTPs). The latitudinal pattern of the temperature response does not directly follow the pattern of the diagnosed radiative forcing. We find that temperatures in the Arctic latitudes appear to be particularly sensitive to BC emissions from South Asia. The northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emission, and about 20–30% larger than the global average for methane, VOC and CO emissions.
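
    For reference, the standard definitions of these metrics (as generally used in the metrics literature, not reproduced from this record) are

    \mathrm{AGWP}_x(H) = \int_0^H \mathrm{RF}_x(t)\,\mathrm{d}t, \qquad \mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)},

    \mathrm{AGTP}_x(H) = \int_0^H \mathrm{RF}_x(t)\, R_T(H - t)\,\mathrm{d}t, \qquad \mathrm{GTP}_x(H) = \frac{\mathrm{AGTP}_x(H)}{\mathrm{AGTP}_{\mathrm{CO_2}}(H)},

    where RF_x(t) is the radiative forcing following a unit pulse emission of species x and R_T is the temperature impulse-response function of the climate system.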

  1. Global and Regional Temperature-change Potentials for Near-term Climate Forcers

    Science.gov (United States)

    Collins, W.J.; Fry, M.M.; Yu, H.; Fuglestvedt, J. S.; Shindell, D. T.; West, J. J.

    2013-01-01

    We examine the climate effects of the emissions of near-term climate forcers (NTCFs) from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon) and 4 ozone precursors (methane, reactive nitrogen oxides (NOx), volatile organic compounds and carbon monoxide). We calculate the global climate metrics: global warming potentials (GWPs) and global temperature change potentials (GTPs). For the aerosols these metrics are simply time-dependent scalings of the equilibrium radiative forcings. The GTPs decrease more rapidly with time than the GWPs. The aerosol forcings and hence climate metrics have only a modest dependence on emission region. The metrics for ozone precursors include the effects on the methane lifetime. The impacts via methane are particularly important for the 20 yr GTPs. Emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than from the other Northern Hemisphere regions. The analysis is further extended by examining the temperature-change impacts in 4 latitude bands, and calculating absolute regional temperature-change potentials (ARTPs). The latitudinal pattern of the temperature response does not directly follow the pattern of the diagnosed radiative forcing. We find that temperatures in the Arctic latitudes appear to be particularly sensitive to BC emissions from South Asia. The northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emission, and about 20-30% larger than the global average for methane, VOC and CO emissions.

  2. Global and Regional Temperature-change Potentials for Near-term Climate Forcers

    Science.gov (United States)

    Collins, W.J.; Fry, M. M.; Yu, H.; Fuglestvedt, J. S.; Shindell, D. T.; West, J. J.

    2013-01-01

    The emissions of reactive gases and aerosols can affect climate through the burdens of ozone, methane and aerosols, having both cooling and warming effects. These species are generally referred to near-term climate forcers (NTCFs) or short-lived climate pollutants (SLCPs), because of their short atmospheric residence time. The mitigation of these would be attractive for both air quality and climate on a 30-year timescale, provided it is not at the expense of CO2 mitigation. In this study we examine the climate effects of the emissions of NTCFs from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon - BC) and 4 ozone precursors (methane, reactive nitrogen oxides - NOx, volatile organic compounds VOC, and carbon monoxide - CO). For the aerosols the global warming potentials (GWPs) and global temperature change potentials (GTPs) are simply time-dependent scaling of the equilibrium radiative forcing, with the GTPs decreasing more rapidly with time than the GWPs. While the aerosol climate metrics have only a modest dependence on emission region, emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than from the other northern hemisphere regions. On regional basis, the northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emission, and about 20-30% larger than the global average for methane, VOC and CO emissions. We also found that temperatures in the Arctic latitudes appear to be particularly sensitive to black carbon emissions from South Asia.
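
    A rough numerical illustration of why the GTP of a short-lived forcer falls off faster than its GWP: for a pulse emission, the absolute GWP is the time-integrated forcing, while the absolute GTP convolves that forcing with a temperature impulse-response function. The sketch below assumes a single-exponential forcing decay and a two-term response function with commonly used illustrative coefficients; none of the numbers are taken from the study.

        import numpy as np

        def agwp_agtp(A, tau, H, c=(0.631, 0.429), d=(8.4, 409.5), n=4000):
            """Absolute GWP (time-integrated forcing) and absolute GTP (temperature
            change at the horizon) for a 1 kg pulse of a species with radiative
            efficiency A [W m-2 kg-1] and e-folding lifetime tau [yr], horizon H [yr].
            c [K (W m-2)-1] and d [yr] parameterise a two-term temperature impulse
            response; the defaults are common illustrative choices."""
            t = np.linspace(0.0, H, n)
            dt = t[1] - t[0]
            F = A * np.exp(-t / tau)                              # forcing left by the pulse
            agwp = float(np.sum(F) * dt)                          # integral of the forcing
            R = sum(ci / di * np.exp(-(H - t) / di) for ci, di in zip(c, d))
            agtp = float(np.sum(F * R) * dt)                      # convolution evaluated at t = H
            return agwp, agtp

        # Example: a short-lived aerosol-like forcer (illustrative numbers) at 20 and 100 yr.
        for H in (20.0, 100.0):
            print(H, agwp_agtp(A=1.0e-9, tau=0.5, H=H))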

  3. Ecological and biomedical effects of effluents from near-term electric vehicle storage battery cycles

    Energy Technology Data Exchange (ETDEWEB)

    1980-05-01

    An assessment of the ecological and biomedical effects due to commercialization of storage batteries for electric and hybrid vehicles is given. It deals only with the near-term batteries, namely Pb/acid, Ni/Zn, and Ni/Fe, but the complete battery cycle is considered, i.e., mining and milling of raw materials, manufacture of the batteries, cases and covers; use of the batteries in electric vehicles, including the charge-discharge cycles; recycling of spent batteries; and disposal of nonrecyclable components. The gaseous, liquid, and solid emissions from various phases of the battery cycle are identified. The effluent dispersal in the environment is modeled and ecological effects are assessed in terms of biogeochemical cycles. The metabolic and toxic responses by humans and laboratory animals to constituents of the effluents are discussed. Pertinent environmental and health regulations related to the battery industry are summarized and regulatory implications for large-scale storage battery commercialization are discussed. Each of the seven sections was abstracted and indexed individually for EDB/ERA. Additional information is presented in the seven appendixes entitled: growth rate scenario for lead/acid battery development; changes in battery composition during discharge; dispersion of stack and fugitive emissions from battery-related operations; methodology for estimating population exposure to total suspended particulates and SO2 resulting from central power station emissions for the daily battery charging demand of 10,000 electric vehicles; determination of As air emissions from Zn smelting; health effects: research related to EV battery technologies. (JGB)

  4. High-speed AMB machining spindle model updating and model validation

    Science.gov (United States)

    Wroblewski, Adam C.; Sawicki, Jerzy T.; Pesch, Alexander H.

    2011-04-01

    High-Speed Machining (HSM) spindles equipped with Active Magnetic Bearings (AMBs) have been envisioned to be capable of automated self-identification and self-optimization in efforts to accurately calculate parameters for stable high-speed machining operation. With this in mind, this work presents rotor model development accompanied by automated model-updating methodology followed by updated model validation. The model updating methodology is developed to address the dynamic inaccuracies of the nominal open-loop plant model when compared with experimental open-loop transfer function data obtained by the built in AMB sensors. The nominal open-loop model is altered by utilizing an unconstrained optimization algorithm to adjust only parameters that are a result of engineering assumptions and simplifications, in this case Young's modulus of selected finite elements. Minimizing the error of both resonance and anti-resonance frequencies simultaneously (between model and experimental data) takes into account rotor natural frequencies and mode shape information. To verify the predictive ability of the updated rotor model, its performance is assessed at the tool location which is independent of the experimental transfer function data used in model updating procedures. Verification of the updated model is carried out with complementary temporal and spatial response comparisons substantiating that the updating methodology is effective for derivation of open-loop models for predictive use.

  5. Modelling of the dynamic behaviour of hard-to-machine alloys

    Science.gov (United States)

    Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.

    2012-08-01

    Machining of titanium alloys and nickel based superalloys can be difficult due to their excellent mechanical properties combining high strength, ductility, and excellent overall high temperature performance. Machining of these alloys can, however, be improved by simulating the processes and by optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in the machining processes. In this work, the behaviour of titanium 15-3-3-3 alloy and nickel based superalloy 625 were characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in the simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation, frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.

  6. Modelling of the dynamic behaviour of hard-to-machine alloys

    Directory of Open Access Journals (Sweden)

    Bäker M.

    2012-08-01

    Full Text Available Machining of titanium alloys and nickel based superalloys can be difficult due to their excellent mechanical properties combining high strength, ductility, and excellent overall high temperature performance. Machining of these alloys can, however, be improved by simulating the processes and by optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in the machining processes. In this work, the behaviour of titanium 15-3-3-3 alloy and nickel based superalloy 625 were characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in the simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation, frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
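
    For reference, the baseline Johnson-Cook flow stress used in such simulations has the multiplicative form sigma = (A + B*eps^n) * (1 + C*ln(epsdot/epsdot0)) * (1 - T*^m). The sketch below evaluates that expression and adds, purely as an illustration, a saturating-hardening variant of the kind described for Alloy 625; all parameter values shown are placeholders, not the fitted constants of the paper.

        import math

        def johnson_cook(eps, epsdot, T, A, B, n, C, m, epsdot0, Troom, Tmelt):
            """Standard Johnson-Cook flow stress (same units as A and B).
            Assumes Troom <= T <= Tmelt and epsdot > 0."""
            Tstar = (T - Troom) / (Tmelt - Troom)
            return (A + B * eps**n) * (1.0 + C * math.log(epsdot / epsdot0)) * (1.0 - Tstar**m)

        def johnson_cook_saturating(eps, epsdot, T, A, B, n, C, m, epsdot0,
                                    Troom, Tmelt, eps_sat=0.5):
            """Illustrative modification: strain hardening tapers off beyond eps_sat,
            mimicking the reduced large-strain hardening used for Alloy 625."""
            eps_eff = eps_sat * (1.0 - math.exp(-eps / eps_sat))   # saturating strain
            return johnson_cook(eps_eff, epsdot, T, A, B, n, C, m, epsdot0, Troom, Tmelt)

        # Illustrative (not fitted) parameters for a titanium-like alloy:
        print(johnson_cook(eps=0.5, epsdot=1e4, T=600.0, A=900.0, B=600.0, n=0.3,
                           C=0.03, m=1.0, epsdot0=1.0, Troom=293.0, Tmelt=1900.0))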

  7. Behavioural effects of near-term acute fetal hypoxia in a small precocial animal, the spiny mouse (Acomys cahirinus).

    Science.gov (United States)

    Ireland, Zoe; Dickinson, Hayley; Fleiss, Bobbi; Hutton, Lisa C; Walker, David W

    2010-01-01

    We have previously developed a model of near-term intra-uterine hypoxia producing significant neonatal mortality (37%) in a small laboratory animal - the spiny mouse - which has precocial offspring at birth. The aim of the present study was to determine if this insult resulted in the appearance of behavioural abnormalities in those offspring which survived the hypoxic delivery. Behavioural tests assessed gait (using footprint patterns), motor coordination and balance on an accelerating rotarod, and spontaneous locomotion and exploration in an open field. We found that the near-term acute hypoxic episode produced a mild neurological deficit in the early postnatal period. In comparison to vaginally delivered controls, hypoxia pups were able to remain on the accelerating rotarod for significantly shorter durations on postnatal days 1-2, and in the open field they travelled significantly shorter distances, jumped less, and spent a greater percentage of time stationary on postnatal days 5 and 15. No changes were observed in gait. Unlike some rodent models of cerebral hypoxia-ischaemia, macroscopic examination of the brain on postnatal day 5 showed no gross cystic lesions, oedema or infarct. Future studies should be directed at identifying hypoxia-induced alterations in the function of specific brain regions, and assessing if maternal administration of neuroprotective agents can prevent against hypoxia-induced neurological deficits and brain damage that occur at birth.

  8. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is very difficult and attractive. The modeling of the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation - identifying the geometric error parameters - is developed. The simulation of cutting a workpiece to verify the modeling based on multi-body systems is also considered.
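
    In the multi-body formulation referred to above, each axis of the kinematic chain contributes an ideal motion transform composed with a small geometric-error transform, and the volumetric error at the tool follows from the product of these 4x4 homogeneous matrices. The sketch below is a generic illustration of that composition, not the specific error model or parameter set of the paper.

        import numpy as np

        def small_error_htm(dx, dy, dz, ea, eb, ec):
            """First-order homogeneous transform for small translational (dx, dy, dz)
            and angular (ea, eb, ec) geometric errors of one axis."""
            return np.array([[1.0, -ec,  eb, dx],
                             [ ec, 1.0, -ea, dy],
                             [-eb,  ea, 1.0, dz],
                             [0.0, 0.0, 0.0, 1.0]])

        def tool_position_error(ideal_htms, error_htms, tool_tip):
            """Compare the nominal and the error-perturbed kinematic chains."""
            nominal = perturbed = np.eye(4)
            for T_ideal, T_err in zip(ideal_htms, error_htms):
                nominal = nominal @ T_ideal
                perturbed = perturbed @ T_ideal @ T_err
            p = np.append(tool_tip, 1.0)
            return (perturbed @ p - nominal @ p)[:3]   # volumetric error at the tool tip

        # Example: one linear axis commanded to move 100 mm in x, with small errors.
        T_ideal = np.array([[1, 0, 0, 100.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
        T_err = small_error_htm(0.005, -0.002, 0.001, 1e-5, 2e-5, -1e-5)
        print(tool_position_error([T_ideal], [T_err], tool_tip=np.array([0.0, 0.0, 150.0])))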

  9. Heat transfer model of semi-transparent ceramics undergoing laser-assisted machining

    Energy Technology Data Exchange (ETDEWEB)

    Pfefferkorn, F.E. [University of Wisconsin-Madison (United States). Department of Mechanical Engineering; Incropera, F.P. [University of Notre Dame, IN (United States). College of Engineering; Yung C. Shin [Purdue University, West Lafayette, IN (United States). School of Mechanical Engineering

    2005-05-01

    A three-dimensional, unsteady heat transfer model has been developed for predicting the temperature field in partially-stabilized zirconia (PSZ) undergoing laser-assisted machining. The semi-transparent PSZ is treated as optically thick within a spectral band from approximately 0.5 to 8 μm. After comparing the diffusion approximation and the discrete ordinates method for predicting internal radiative transfer, suitability of the diffusion approximation is established from a comparison of model predictions with surface temperature measurements. The temperature predictions are in good agreement with measured values during machining. Parametric calculations reveal that laser power and feedrate have the greatest effect on machining temperatures. (author)

  10. Model of mechanism of providing of strategic firmness of machine-building enterprise

    Directory of Open Access Journals (Sweden)

    I.V. Movchan

    2011-03-01

    Full Text Available The article considers theoretical aspects of strategic firmness and presents a developed algorithmic model of the mechanism for providing the strategic firmness of a machine-building enterprise.

  11. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
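
    To make the rule-creation step concrete, the sketch below does a brute-force frequent-itemset search in the spirit of Apriori over binarized failure records and prints the IF-THEN rules that clear minimum support and confidence thresholds; the item names, thresholds and data are made up for illustration and are not taken from the paper.

        from itertools import combinations

        def apriori_rules(transactions, min_support=0.3, min_confidence=0.7):
            """transactions: list of sets of binarized items, e.g. {'temp_high', 'failure_bearing'}."""
            n = len(transactions)
            support = lambda items: sum(items <= t for t in transactions) / n

            # Frequent itemsets up to size 3 (enough for short IF-THEN rules).
            items = {i for t in transactions for i in t}
            frequent = [fs for k in (1, 2, 3)
                        for fs in map(frozenset, combinations(sorted(items), k))
                        if support(fs) >= min_support]

            rules = []
            for itemset in (fs for fs in frequent if len(fs) > 1):
                for k in range(1, len(itemset)):
                    for lhs in map(frozenset, combinations(itemset, k)):
                        rhs = itemset - lhs
                        conf = support(itemset) / support(lhs)
                        if conf >= min_confidence:
                            rules.append((set(lhs), set(rhs), support(itemset), conf))
            return rules

        # Hypothetical binarized sensor/failure records:
        data = [{'temp_high', 'vib_high', 'failure_bearing'},
                {'temp_high', 'failure_bearing'},
                {'vib_high'},
                {'temp_high', 'vib_high', 'failure_bearing'}]
        for lhs, rhs, s, c in apriori_rules(data):
            print(f"IF {lhs} THEN {rhs} (support={s:.2f}, confidence={c:.2f})")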

  12. WELL-POSEDNESS OF THE MODEL DESCRIBING A REPAIRABLE, STANDBY, HUMAN & MACHINE SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Geni Gupur

    2003-01-01

    By using the theory of strongly continuous semigroups of linear operators, we prove the existence of a unique positive time-dependent solution of the model describing a repairable, standby, human & machine system.

  13. Study of the machining process of nano-electrical discharge machining based on combined atomistic-continuum modeling method

    Science.gov (United States)

    Zhang, Guojun; Guo, Jianwen; Ming, Wuyi; Huang, Yu; Shao, Xinyu; Zhang, Zhen

    2014-01-01

    Nano-electrical discharge machining (nano-EDM) is an attractive measure to manufacture parts with nanoscale precision, however, due to the incompleteness of its theories, the development of more advanced nano-EDM technology is impeded. In this paper, a computational simulation model combining the molecular dynamics simulation model and the two-temperature model for single discharge process in nano-EDM is constructed to study the machining mechanism of nano-EDM from the thermal point of view. The melting process is analyzed. Before the heated material gets melted, thermal compressive stress higher than 3 GPa is induced. After the material gets melted, the compressive stress gets relieved. The cooling and solidifying processes are also analyzed. It is found that during the cooling process of the melted material, tensile stress higher than 3 GPa arises, which leads to the disintegration of material. The formation of the white layer is attributed to the homogeneous solidification, and additionally, the resultant residual stress is analyzed.
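
    The two-temperature part of such a combined model tracks electron and lattice temperatures that are heated by the discharge and relax toward each other through electron-phonon coupling. A zero-dimensional version of those equations can be integrated explicitly, as sketched below; the material constants and source term are placeholders, not the values used in the study.

        def two_temperature_0d(t_end=20e-12, dt=1e-15, Ce=2.0e4, Cl=2.5e6, G=3.0e17,
                               S0=1.0e21, t_pulse=5e-12, T0=300.0):
            """Explicit Euler integration of the lumped two-temperature model
                Ce dTe/dt = -G (Te - Tl) + S(t)
                Cl dTl/dt =  G (Te - Tl)
            Units (SI) and all parameter values are illustrative placeholders."""
            steps = int(t_end / dt)
            Te = Tl = T0
            history = []
            for i in range(steps):
                t = i * dt
                S = S0 if t < t_pulse else 0.0          # discharge heating of the electrons
                dTe = (-G * (Te - Tl) + S) / Ce
                dTl = ( G * (Te - Tl)) / Cl
                Te, Tl = Te + dt * dTe, Tl + dt * dTl
                history.append((t, Te, Tl))
            return history

        # Lattice temperature at the end of the run indicates whether melting is approached.
        print(two_temperature_0d()[-1])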

  14. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    Science.gov (United States)

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological
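
    The idea of manufacturing mixture training data from single-serotype arrays can be sketched as follows: raw intensity vectors of two single-serotype samples are combined (here simply averaged), the label becomes a multi-hot vector, and a forest-based classifier is trained on the augmented set. Everything below - the array size, the averaging rule and the classifier choice - is an illustrative assumption rather than the published pipeline.

        import numpy as np
        from itertools import combinations
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_serotypes, n_probes = 5, 40

        # Hypothetical single-serotype arrays: one signal profile per serotype, plus noise.
        profiles = rng.uniform(0.0, 1.0, size=(n_serotypes, n_probes))
        X_single = np.vstack([p + rng.normal(0, 0.05, n_probes)
                              for p in profiles for _ in range(20)])
        y_single = np.zeros((X_single.shape[0], n_serotypes), int)
        for s in range(n_serotypes):
            y_single[s * 20:(s + 1) * 20, s] = 1

        # Artificial two-serotype mixtures built by combining single-serotype raw data.
        X_mix, y_mix = [], []
        for a, b in combinations(range(n_serotypes), 2):
            for _ in range(20):
                X_mix.append((profiles[a] + profiles[b]) / 2 + rng.normal(0, 0.05, n_probes))
                label = np.zeros(n_serotypes, int)
                label[[a, b]] = 1
                y_mix.append(label)

        X = np.vstack([X_single, np.array(X_mix)])
        Y = np.vstack([y_single, np.array(y_mix)])

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, Y)  # multi-label fit
        print(clf.predict(X[:1]))   # multi-hot prediction for the first training sample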

  15. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    Science.gov (United States)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    The article addresses the problem of machining accuracy of the basing holes of automatically replaceable cubical units (carriers) for reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units on the basis of dimensional chain analysis are presented. The influence of machining parameters on the accuracy of the centre-to-centre spacing between basing holes is shown. A mathematical model of the machining accuracy of the carriers' basing holes is offered.

  16. Parallel Machine Scheduling Models with Fuzzy Parameters and Precedence Constraints: A Credibility Approach

    Institute of Scientific and Technical Information of China (English)

    HOU Fu-jun; WU Qi-zong

    2007-01-01

    A method for modeling the parallel machine scheduling problems with fuzzy parameters and precedence constraints based on credibility measure is provided. For the given n jobs to be processed on m machines, it is assumed that the processing times and the due dates are nonnegative fuzzy numbers and all the weights are positive, crisp numbers. Based on credibility measure, three parallel machine scheduling problems and a goal-programming model are formulated. Feasible schedules are evaluated not only by their objective values but also by the credibility degree of satisfaction with their precedence constraints. The genetic algorithm is utilized to find the best solutions in a short period of time. An illustrative numerical example is also given. Simulation results show that the proposed models are effective, which can deal with the parallel machine scheduling problems with fuzzy parameters and precedence constraints based on credibility measure.
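
    For readers unfamiliar with the credibility measure, for a triangular fuzzy quantity xi = (a, b, c) the credibility that xi does not exceed a value x is the average of its possibility and necessity. The helper below evaluates that expression, so that candidate schedules could, for example, be scored by the credibility of meeting fuzzy due dates; it is a generic illustration, not the paper's formulation of the scheduling constraints.

        def credibility_le(a, b, c, x):
            """Cr{xi <= x} for a triangular fuzzy number xi = (a, b, c), with a <= b <= c."""
            if x <= a:
                return 0.0
            if x <= b:
                return (x - a) / (2.0 * (b - a))            # possibility rising, necessity still 0
            if x <= c:
                return (x + c - 2.0 * b) / (2.0 * (c - b))  # necessity rising, possibility already 1
            return 1.0

        # Credibility that a job with fuzzy completion time (4, 6, 9) meets a due date of 7.
        print(credibility_le(4, 6, 9, 7))   # 0.666...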

  17. ELECTROMECHANICAL COUPLING MODEL AND ANALYSIS OF TRANSIENT BEHAVIOR FOR INERTIAL RECIPROCATION MACHINES

    Institute of Scientific and Technical Information of China (English)

    HU Ji-yun; YIN Xue-gang; YU Cui-ping

    2005-01-01

    The dynamical equations for an inertial reciprocating machine excited by two rotating eccentric weights were built by the matrix methodology for establishing dynamical equations of discrete systems. A mathematical model of the electromechanical coupling system for the machine was formed by combining the dynamical equations with the state equations of the two motors. Computer simulation of the model was performed for several values of the damping coefficient or the motor power, respectively. The substance of the transient behavior of the machine is unveiled by analyzing the results of the computer simulation, and new methods are presented for diminishing the transient amplitude of the vibrating machine and improving the transient behavior. A reliable mathematical model is thus provided for intelligent control of the transient behavior and engineering design of the equipment.

  18. A stochastic model for the cell formation problem considering machine reliability

    Science.gov (United States)

    Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman

    2015-03-01

    This paper presents a new mathematical model to solve the cell formation problem in cellular manufacturing systems, where inter-arrival time, processing time, and machine breakdown time are probabilistic. The objective function maximizes the number of operations of each part with a higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, a modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed, which always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from Lingo software's branch and bound (B&B). Also, a statistical method is used for comparison of the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdown has a significant effect on the block structures of machine-part matrices.

  19. Multi products single machine economic production quantity model with multiple batch size

    Directory of Open Access Journals (Sweden)

    Ata Allah Taleizadeh

    2011-04-01

    Full Text Available In this paper, a multi-product single-machine economic production quantity model with discrete delivery is developed. A unique cycle length is considered for all produced items, with the assumption that all products are manufactured on a single machine with limited capacity. The proposed model considers different cost items such as production, setup, holding, and transportation costs. The resulting model is formulated as a mixed integer nonlinear programming model. A harmony search algorithm, an extended cutting plane method and particle swarm optimization are used to solve the proposed model. Two numerical examples are used to analyze and evaluate the performance of the proposed model.
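
    Under the common-cycle assumption described above, a convenient closed-form starting point (before the discrete-delivery and batch-size refinements of the paper) is the classical economic lot scheduling result for the shared cycle length, T = sqrt(2 * sum_i A_i / sum_i h_i * d_i * (1 - d_i / p_i)). The helper below evaluates that expression and checks machine capacity; the inputs are hypothetical.

        import math

        def common_cycle_length(setup_costs, holding_costs, demands, prod_rates):
            """Common cycle length T for several products on one capacitated machine
            (classical economic lot scheduling approximation; the paper's discrete
            deliveries and batch sizing are not modelled here)."""
            utilisation = sum(d / p for d, p in zip(demands, prod_rates))
            if utilisation >= 1.0:
                raise ValueError("machine capacity exceeded: sum(d_i/p_i) >= 1")
            numerator = 2.0 * sum(setup_costs)
            denominator = sum(h * d * (1.0 - d / p)
                              for h, d, p in zip(holding_costs, demands, prod_rates))
            return math.sqrt(numerator / denominator)

        # Hypothetical three-product example (setup costs, holding costs, demands, production rates).
        print(common_cycle_length([50, 80, 60], [2.0, 1.5, 2.5],
                                  [400, 300, 200], [2000, 1500, 1200]))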

  20. M2 priority screening system for near-term activities: Project documentation. Final report December 11, 1992--May 31, 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-08-12

    From May through August, 1993, the M-2 Group within M Division at LANL conducted a project, with the support of the LANL Integration and Coordination Office (ICO) and Applied Decision Analysis, Inc. (ADA), whose purpose was to develop a system for setting priorities among activities. This phase of the project concentrated on prioritizing near-term activities (i.e., activities that must be conducted in the next six months) necessary for setting up this new group. Potential future project phases will concentrate on developing a tool for setting priorities and developing annual budgets for the group's operations. The priority screening system designed to address the near-term problem was developed, applied in a series of meetings with the group managers, and used as an aid in the assignment of tasks to group members. The model was intended and used as a practical tool for documenting and explaining decisions about near-term priorities, and not as a substitute for M-2 management judgment and decision-making processes.

  1. Interactions among Amazon land use, forests and climate: prospects for a near-term forest tipping point.

    Science.gov (United States)

    Nepstad, Daniel C; Stickler, Claudia M; Filho, Britaldo Soares-; Merry, Frank

    2008-05-27

    Some model experiments predict a large-scale substitution of Amazon forest by savannah-like vegetation by the end of the twenty-first century. Expanding global demands for biofuels and grains, positive feedbacks in the Amazon forest fire regime and drought may drive a faster process of forest degradation that could lead to a near-term forest dieback. Rising worldwide demands for biofuel and meat are creating powerful new incentives for agro-industrial expansion into Amazon forest regions. Forest fires, drought and logging increase susceptibility to further burning while deforestation and smoke can inhibit rainfall, exacerbating fire risk. If sea surface temperature anomalies (such as El Niño episodes) and associated Amazon droughts of the last decade continue into the future, approximately 55% of the forests of the Amazon will be cleared, logged, damaged by drought or burned over the next 20 years, emitting 15-26 Pg of carbon to the atmosphere. Several important trends could prevent a near-term dieback. As fire-sensitive investments accumulate in the landscape, property holders use less fire and invest more in fire control. Commodity markets are demanding higher environmental performance from farmers and cattle ranchers. Protected areas have been established in the pathway of expanding agricultural frontiers. Finally, emerging carbon market incentives for reductions in deforestation could support these trends.

  2. Man-Machine Interface Design for Modeling and Simulation Software

    Directory of Open Access Journals (Sweden)

    Arnstein J. Borstad

    1986-07-01

    Full Text Available Computer aided design (CAD) systems, or more generally interactive software, are today being developed for various application areas like VLSI design, mechanical structure design, avionics design, cartographic design, architectural design, office automation, publishing, etc. Such tools are becoming more and more important in order to be productive and to be able to design quality products. One important part of CAD-software development is the man-machine interface (MMI) design.

  3. Human factors model concerning the man-machine interface of mining crewstations

    Science.gov (United States)

    Rider, James P.; Unger, Richard L.

    1989-01-01

    The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reach ability.

  4. Model evolvement and reuse technology of injection molding machine based on performance knowledge

    Institute of Scientific and Technical Information of China (English)

    Wei Zhe; Feng Yixiong; Tan Jianrong; Wang Jinlong

    2008-01-01

    To illustrate the necessity of model evolvement and reuse, the dynamics of injection molding machine product models are analyzed, and performance knowledge is used to support model evolvement and reuse. The driving factors of the mechanical product model are identified, and the dynamic characteristics of the product model are described. Performance knowledge is used to improve the specific evolvement process, and upper-layer passing rules are adopted in the mechanical product configuration design. The rules of product model evolvement are investigated; the model evolvement of the injection molding machine has three levels. A practical and effective realization algorithm is given to realize the reuse of performance knowledge. Finally, HT1800X1N series injection molding machines are taken as examples to show that the algorithm is correct and practical.

  5. Near-Term Electric Vehicle Program. Phase II: Mid-Term Summary Report.

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-08-01

    The Near Term Electric Vehicle (NTEV) Program is a constituent element of the overall national Electric and Hybrid Vehicle Program that is being implemented by the Department of Energy in accordance with the requirements of the Electric and Hybrid Vehicle Research, Development, and Demonstration Act of 1976. Phase II of the NTEV Program is focused on the detailed design and development of complete electric integrated test vehicles that incorporate current and near-term technology and meet specified DOE objectives. The activities described in this Mid-Term Summary Report are being carried out by two contractor teams. The prime contractors for these contractor teams are the General Electric Company and the Garrett Corporation. This report is divided into two discrete parts. Part 1 describes the progress of the General Electric team and Part 2 describes the progress of the Garrett team.

  6. Near-Term Opportunities for Carbon Dioxide Capture and Storage 2007

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    This document contains the summary report of the workshop on global assessments for near-term opportunities for carbon dioxide capture and storage (CCS), which took place on 21-22 June 2007 in Oslo, Norway. It provided an opportunity for direct dialogue between concerned stakeholders in the global effort to accelerate the development and commercialisation of CCS technology. This is part of a series of three workshops on near-term opportunities for this important mitigation option that will feed into the G8 Plan of Action on Climate Change, Clean Energy and Sustainable Development. The ultimate goal of this effort is to present a report and policy recommendations to the G8 leaders at their 2008 summit meeting in Japan.

  7. Utilisation of Modeling, Stress Analysis, Kinematics Optimisation, and Hypothetical Estimation of Lifetime in the Design Process of Mobile Working Machines

    Science.gov (United States)

    Izrael, Gregor; Bukoveczky, Juraj; Gulan, Ladislav

    2011-12-01

    The contribution deals with several methods used in the design process, such as model creation, verification of the technical parameters of the machine, and life estimation of selected modules. The life cycle of mobile working machines, and of their load-carrying modules respectively, is determined by investigation and subsequent processing of results gained from service measurements. Machine life claimed by a producer is only indicative, because the life of these machines depends not only on how the particular machine is used, but also on the state of the material handled by the machine and, to a great extent, on the operator, their observance of safety regulations, and the prescribed working conditions.

  8. Photovoltaic System Pricing Trends. Historical, Recent, and Near-Term Projections, 2015 Edition

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bolinger, Mark [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Fu, Ran [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Davidson, Carolyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Darghouth, Naïm [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wiser, Ryan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-25

    This presentation, based on research at Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory, provides a high-level overview of historical, recent, and projected near-term PV pricing trends in the United States focusing on the installed price of PV systems. It also attempts to provide clarity surrounding the wide variety of potentially conflicting data available about PV system prices. This PowerPoint is the fourth edition from this series.

  9. Photovoltaic System Pricing Trends: Historical, Recent, and Near-Term Projections. 2014 Edition (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, D.; Barbose, G.; Margolis, R.; James, T.; Weaver, S.; Darghouth, N.; Fu, R.; Davidson, C.; Booth, S.; Wiser, R.

    2014-09-01

    This presentation, based on research at Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory, provides a high-level overview of historical, recent, and projected near-term PV pricing trends in the United States focusing on the installed price of PV systems. It also attempts to provide clarity surrounding the wide variety of potentially conflicting data available about PV system prices. This PowerPoint is the third edition from this series.

  10. Adrenal glands are essential for activation of glucogenesis during undernutrition in fetal sheep near term

    OpenAIRE

    Fowden, A. L.; Forhead, A. J.

    2010-01-01

    In adults, the adrenal glands are essential for the metabolic response to stress, but little is known about their role in fetal metabolism. This study examined the effects of adrenalectomizing fetal sheep on glucose and oxygen metabolism in utero in fed conditions and after maternal fasting for 48 h near term. Fetal adrenalectomy (AX) had little effect on the rates of glucose and oxygen metabolism by the fetus or uteroplacental tissues in fed conditions. Endogenous glucose production was negl...

  11. Near-Term Actions to Address Long-Term Climate Risk

    Science.gov (United States)

    Lempert, R. J.

    2014-12-01

    Addressing climate change requires effective long-term policy making, which occurs when reflecting on potential events decades or more in the future causes policy makers to choose near-term actions different than those they would otherwise pursue. Contrary to some expectations, policy makers do sometimes make such long-term decisions, but not as commonly and successfully as climate change may require. In recent years however, the new capabilities of analytic decision support tools, combined with improved understanding of cognitive and organizational behaviors, has significantly improved the methods available for organizations to manage longer-term climate risks. In particular, these tools allow decision makers to understand what near-term actions consistently contribute to achieving both short- and long-term societal goals, even in the face of deep uncertainty regarding the long-term future. This talk will describe applications of these approaches for infrastructure, water, and flood risk management planning, as well as studies of how near-term choices about policy architectures can affect long-term greenhouse gas emission reduction pathways.

  12. A Near-Term Quantum Computing Approach for Hard Computational Problems in Space Exploration

    CERN Document Server

    Smelyanskiy, Vadim N; Knysh, Sergey I; Williams, Colin P; Johnson, Mark W; Thom, Murray C; Macready, William G; Pudenz, Kristen L

    2012-01-01

    In this article, we show how to map a sampling of the hardest artificial intelligence problems in space exploration onto equivalent Ising models that then can be attacked using quantum annealing implemented in D-Wave machine. We overview the existing results as well as propose new Ising model implementations for quantum annealing. We review supervised and unsupervised learning algorithms for classification and clustering with applications to feature identification and anomaly detection. We introduce algorithms for data fusion and image matching for remote sensing applications. We overview planning problems for space exploration mission applications and algorithms for diagnostics and recovery with applications to deep space missions. We describe combinatorial optimization algorithms for task assignment in the context of autonomous unmanned exploration. Finally, we discuss the ways to circumvent the limitation of the Ising mapping using a "blackbox" approach based on ideas from probabilistic computing. In this ...

  13. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper the knowledge discovery technique is used to build an effective and transparent mathematic thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates the schemes of K-means theory (KM), rough-set theory (RS), and linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature ascents at selected characteristic points and the thermal deformations at spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method, further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neural fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out and results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
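
    A rough stand-in for the KRL pipeline can be assembled from standard tools: cluster the temperature channels with K-means, keep one representative sensor per cluster as a crude substitute for the rough-set reduction step, and fit a linear regression from those sensors to the measured spindle deformation. The synthetic data, cluster count and representative-selection rule below are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n_samples, n_sensors = 200, 12

        # Hypothetical temperature rises (columns = sensors) and spindle thermal deformation.
        temps = np.cumsum(rng.normal(0.05, 0.02, size=(n_samples, n_sensors)), axis=0)
        deform = 0.8 * temps[:, 0] + 0.5 * temps[:, 5] + rng.normal(0, 0.05, n_samples)

        # 1) K-means groups sensors with similar temperature histories (cluster the columns).
        km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(temps.T)

        # 2) Keep the sensor closest to each cluster centre (simplified reduction step).
        keep = [int(np.argmin(((temps.T - c) ** 2).sum(axis=1) +
                              np.where(km.labels_ == k, 0, np.inf)))
                for k, c in enumerate(km.cluster_centers_)]

        # 3) Linear thermal-error model on the retained sensors.
        model = LinearRegression().fit(temps[:, keep], deform)
        print("kept sensors:", keep, "R^2:", round(model.score(temps[:, keep], deform), 3))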

  14. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  15. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to the change of running state of machine, and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.

  16. Synthesizing Distributed Protocol Specifications from a UML State Machine Modeled Service Specification

    Institute of Scientific and Technical Information of China (English)

    Jehad Al Dallal; Kassem A.Saleh

    2012-01-01

    The object-oriented paradigm is widely applied in designing and implementing communication systems. Unified Modeling Language (UML) is a standard language used to model the design of object-oriented systems. A protocol state machine is a UML adopted diagram that is widely used in designing communication protocols. It has two key attractive advantages over traditional finite state machines: modeling concurrency and modeling nested hierarchical states. In a distributed communication system, each entity of the system has its own protocol that defines when and how the entity exchanges messages with other communicating entities in the system. The order of the exchanged messages must conform to the overall service specifications of the system. In object-oriented systems, both the service and the protocol specifications are modeled in UML protocol state machines. Protocol specification synthesis methods have to be applied to automatically derive the protocol specification from the service specification. Otherwise, a time-consuming process of design, analysis, and error detection and correction has to be applied iteratively until the design of the protocol becomes error-free and consistent with the service specification. Several synthesis methods are proposed in the literature for models other than UML protocol state machines, and therefore, because of the unique features of the protocol state machines, these methods are inapplicable to services modeled in UML protocol state machines. In this paper, we propose a synthesis method that automatically synthesizes the protocol specification of distributed protocol entities from the service specification, given that both types of specifications are modeled in UML protocol state machines. Our method is based on the latest UML version (UML 2.3), and it is proven to synthesize protocol specifications that are syntactically and semantically correct. As an example application, the synthesis method is used to derive the protocol specification of the H.323

  17. Script Controlled Modeling of Low Noise Permanent Magnet Synchronous Machines by using JMAG Designer

    OpenAIRE

    RUSU Tiberiu; BÎRTE Ovidiu; SZABÓ Loránd; MARŢIŞ Claudia Steluţa

    2013-01-01

    This paper deals with the parameterized modeling of permanent magnet synchronous machines (PMSM) by means of JMAG Designer, an advanced simulation software for electromechanical design. This method enables the designer to simulate diverse topologies of the machines by only changing some basic parameters of the script controlling the preprocessing phase of the simulations. For this purpose a graphical user interface for modeling the machine was built up in Visual Basic. Through it the users can enter the ...

  18. Quantum turing machine and brain model represented by Fock space

    Science.gov (United States)

    Iriyama, Satoshi; Ohya, Masanori

    2016-05-01

    The adaptive dynamics is known as a new mathematics for treating complex phenomena, for example, chaos, quantum algorithms and psychological phenomena. In this paper, we briefly review the notion of the adaptive dynamics, and explain the definition of the generalized Turing machine (GTM) and the recognition process represented by the Fock space. Moreover, we show that there exists a quantum channel, described by the GKSL master equation, that achieves the Chaos Amplifier used in [M. Ohya and I. V. Volovich, J. Opt. B 5(6) (2003) 639; M. Ohya and I. V. Volovich, Rep. Math. Phys. 52(1) (2003) 25].

  19. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Institute of Scientific and Technical Information of China (English)

    Qianjian GUO; Shuo FAN; Rufeng XU; Xiang CHENG; Guoyong ZHAO; Jianguo YANG

    2017-01-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiment of heat sources and thermal errors are carried out, and GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and ABC (artificial bee colony) algorithm is introduced to train the link weights of ANN, a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of ABC-NN model, an experiment system is developed, the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm, the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.

  20. Modelling of internal architecture of kinesin nanomotor as a machine language.

    Science.gov (United States)

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
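
    As an illustration of checking event sequences against such a regular machine language, the snippet below runs a small deterministic finite automaton over a hypothetical sense-decide-act cycle; the states and transitions are invented for the example and are not the automaton derived in the paper.

        def dfa_accepts(transitions, start, accepting, events):
            """Generic DFA runner: transitions maps (state, symbol) -> state."""
            state = start
            for e in events:
                if (state, e) not in transitions:
                    return False            # undefined transition: sequence rejected
                state = transitions[(state, e)]
            return state in accepting

        # Hypothetical sense -> decide -> act cycle of an autonomous bio-nanoagent.
        T = {("idle", "sense"): "sensed",
             ("sensed", "decide"): "decided",
             ("decided", "act"): "idle"}
        print(dfa_accepts(T, "idle", {"idle"},
                          ["sense", "decide", "act", "sense", "decide", "act"]))  # True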

  1. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Science.gov (United States)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-03-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two turntable five-axis machine tool are researched. Measurement experiment of heat sources and thermal errors are carried out, and GRA(grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and ABC(artificial bee colony) algorithm is introduced to train the link weights of ANN, a new ABC-NN(Artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of ABC-NN model, an experiment system is developed, the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of ABC-NN model is higher than LSR and ANN, and the residual error is smaller than 3 μm, the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve machining accuracy of NC machine tools.
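
    A much-simplified reproduction of the LSR-versus-network comparison can be run with off-the-shelf tools, as sketched below: a least-squares regression and a small neural network are fitted to temperature and thermal-error samples and their residuals compared. The bee-colony training of the network weights described in the paper is not reproduced (the library's default optimiser is used instead), and the synthetic data are purely illustrative.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)
        temps = rng.uniform(20, 45, size=(300, 4))            # four temperature sensors [degC]
        error = (0.6 * temps[:, 0] + 0.02 * temps[:, 1] ** 2  # spindle thermal error [um]
                 + rng.normal(0, 0.3, 300))

        train, test = slice(0, 200), slice(200, 300)
        lsr = LinearRegression().fit(temps[train], error[train])
        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=7).fit(temps[train], error[train])

        for name, m in [("LSR", lsr), ("ANN", ann)]:
            resid = np.abs(m.predict(temps[test]) - error[test])
            print(name, "max |residual| =", round(float(resid.max()), 3), "um")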

  2. Evaluation of machining methods for trabecular metal implants in a rabbit intramedullary osseointegration model.

    Science.gov (United States)

    Deglurkar, Mukund; Davy, Dwight T; Stewart, Matthew; Goldberg, Victor M; Welter, Jean F

    2007-02-01

    Implant success is dependent in part on the interaction of the implant with the surrounding tissues. Porous tantalum implants (Trabecular Metal, TM) have been shown to have excellent osseointegration. Machining this material to complex shapes with close tolerances is difficult because of its open structure and the ductile nature of metallic tantalum. Conventional machining results in occlusion of most of the surface porosity by the smearing of soft metal. This study compared TM samples finished by three processing techniques: conventional machining, electrical discharge machining, and nonmachined, "as-prepared." The TM samples were studied in a rabbit distal femoral intramedullary osseointegration model and in cell culture. We assessed the effects of these machining methods at 4, 8, and 12 weeks after implant placement. The finishing technique had a profound effect on the physical presentation of the implant interface: conventional machining reduced surface porosity to 30% compared to bulk porosities in the 70% range. Bone ongrowth was similar in all groups, while bone ingrowth was significantly greater in the nonmachined samples. The resulting mechanical properties of the bone implant-interface were similar in all three groups, with only interface stiffness and interface shear modulus being significantly higher in the machined samples.

  3. Development of Abrasive Selection Model/Chart for Palm Frond Broom Peeling Machine Design

    Directory of Open Access Journals (Sweden)

    Nwankwojike

    2014-12-01

    Full Text Available A model for predicting the friction required by a palm frond broom peeling machine for effective peeling of palm leaf into broom bristle, and a chart for selecting the best abrasive material for this machine’s peeling operation, were developed in this study using a mechanistic modeling method. The model quantifies the relationship between the coefficient of friction and other operational parameters of the machine, while the abrasive selection chart is a plot of this friction parameter against the abrasive materials used in palm frond broom peeling machine fabrication. The values of the coefficient of friction of palm leaf on different abrasive materials used in this plot were determined from an experimental study of the effect of the moisture content of naturally withered palm leaves (uninfluenced by external forces) on their coefficient of friction with the abrasives. Results revealed that the average moisture content of palm leaf that this machine can peel effectively is 6.96%, and also that the roughest of the abrasives that approximates the required coefficient of friction for a specific design of this peeling machine gives maximum peeling efficiency. Thus, the roughest of the abrasive materials that approximates the coefficient of friction for a specific design of this machine should be selected and used for its fabrication and operation.

  4. Technical note: Evaluation of three machine learning models for surface ocean CO2 mapping

    Science.gov (United States)

    Zeng, Jiye; Matsunaga, Tsuneo; Saigusa, Nobuko; Shirai, Tomoko; Nakaoka, Shin-ichiro; Tan, Zheng-Hong

    2017-04-01

    Reconstructing surface ocean CO2 from scarce measurements plays an important role in estimating oceanic CO2 uptake. There are varying degrees of differences among the 14 models included in the Surface Ocean CO2 Mapping (SOCOM) inter-comparison initiative, in which five models used neural networks. This investigation evaluates two neural networks used in SOCOM, self-organizing maps and feedforward neural networks, and introduces a machine learning model, the support vector machine, for ocean CO2 mapping. This technical note provides a practical guide to selecting among the models.
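
    A minimal support vector regression setup of the kind the note evaluates might look like the following: predictors such as SST, latitude and longitude are standardised and an RBF-kernel SVR is fitted to sparse pCO2 observations, then used to map a full grid. The predictor choice, kernel settings and synthetic data are assumptions for illustration, not the configuration used in the note.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)

        # Hypothetical sparse observations: [SST (degC), latitude, longitude] -> pCO2 (uatm).
        X_obs = np.column_stack([rng.uniform(-2, 30, 500),
                                 rng.uniform(-60, 60, 500),
                                 rng.uniform(-180, 180, 500)])
        y_obs = 380 - 1.5 * X_obs[:, 0] + 0.1 * np.abs(X_obs[:, 1]) + rng.normal(0, 5, 500)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
        model.fit(X_obs, y_obs)

        # Map onto a coarse regular grid (SST held at a placeholder climatological value).
        lats = np.arange(-60, 61, 10)
        lons = np.arange(-180, 181, 20)
        glon, glat = np.meshgrid(lons, lats)
        grid = np.column_stack([np.full(glat.size, 15.0), glat.ravel(), glon.ravel()])
        pco2_map = model.predict(grid).reshape(glat.shape)
        print(pco2_map.shape, float(pco2_map.mean()))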

  5. Nonlinear Modeling of a High Precision Servo Injection Molding Machine Including Novel Molding Approach

    Institute of Scientific and Technical Information of China (English)

    何雪松; 王旭永; 冯正进; 章志新; 杨钦廉

    2003-01-01

    A nonlinear mathematical model of the injection molding process for an electrohydraulic servo injection molding machine (IMM) is developed. It was found necessary to consider the characteristics of the asymmetric cylinder for the electrohydraulic servo IMM. The model is based on the dynamics of the machine, including the servo valve, asymmetric cylinder and screw, and the non-Newtonian flow behavior of the polymer melt in injection molding is also considered. The performance of the model was evaluated based on a novel molding approach - injection and compression molding - and the results of simulation and experimental data demonstrate the effectiveness of the model.

  6. Measuring and Modelling Delays in Robot Manipulators for Temporally Precise Control using Machine Learning

    DEFF Research Database (Denmark)

    Andersen, Thomas Timm; Amor, Heni Ben; Andersen, Nils Axel

    2015-01-01

    and separate. In this paper, we present a data-driven methodology for separating and modelling inherent delays during robot control. We show how both actuation and response delays can be modelled using modern machine learning methods. The resulting models can be used to predict the delays as well...

  7. An Introduction to Topic Modeling as an Unsupervised Machine Learning Way to Organize Text Information

    Science.gov (United States)

    Snyder, Robin M.

    2015-01-01

    The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…

  8. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  9. Research on cubic polynomial acceleration and deceleration control model for high speed NC machining

    Institute of Scientific and Technical Information of China (English)

    Hong-bin LENG; Yi-jie WU; Xiao-hong PAN

    2008-01-01

    To satisfy the needs of high speed NC (numerical control) machining, an acceleration and deceleration (acc/dec) control model is proposed, and the speed curve is constructed from a cubic polynomial. The proposed control model provides continuity of acceleration, which avoids the intense vibration in high speed NC machining. Based on the discrete character of data-sampling interpolation, the discrete mathematical model of acc/dec control is also set up and the discrete expression of the theoretical deceleration length is obtained. To address the difficulty of predetermining the deceleration point in acc/dec control before interpolation, an adaptive acc/dec control algorithm is deduced from the expressions of the theoretical deceleration length. The experimental results show that the acc/dec control model is easy to implement and yields stable movement and low impact. The model has been applied successfully in multi-axis high speed micro-fabrication machining.
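
    As a rough illustration of the idea described above, the Python sketch below builds a discrete cubic-polynomial speed curve whose acceleration vanishes at both segment ends, and uses the theoretical deceleration length to decide when deceleration should start. The specific cubic form, function names, sampling period and numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cubic_speed_profile(v_start, v_end, T, Ts):
    """Discrete cubic-polynomial speed curve sampled every Ts over a segment of length T.

    v(t) = v_start + (v_end - v_start) * (3*s**2 - 2*s**3), s = t/T, has zero
    acceleration at both ends, so acceleration stays continuous when
    constant-speed and acc/dec segments are joined.
    """
    t = np.arange(0.0, T + Ts, Ts)
    s = np.clip(t / T, 0.0, 1.0)
    return v_start + (v_end - v_start) * (3 * s**2 - 2 * s**3)

def deceleration_length(v_start, v_end, T):
    """Theoretical path length covered while changing speed from v_start to v_end."""
    # Integrating the cubic profile over [0, T] reduces to the mean speed times T.
    return 0.5 * (v_start + v_end) * T

def should_start_deceleration(remaining_distance, v_now, v_end, T_dec):
    """Adaptive deceleration-point test used during data-sampling interpolation."""
    return remaining_distance <= deceleration_length(v_now, v_end, T_dec)

if __name__ == "__main__":
    v = cubic_speed_profile(v_start=0.0, v_end=120.0, T=0.3, Ts=0.001)
    print(len(v), v[0], v[-1])
    print(deceleration_length(120.0, 0.0, 0.3))   # 18.0 (speed units * time units)
```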

  10. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires high-quality 3D CAD models, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to check for errors manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  11. Universal geometric error modeling of the CNC machine tools based on the screw theory

    Science.gov (United States)

    Tian, Wenjie; He, Baiyan; Huang, Tian

    2011-05-01

    The methods to improve the precision of CNC (Computerized Numerical Control) machine tools can be classified into two categories: error prevention and error compensation. Error prevention improves precision through high accuracy in manufacturing and assembly. Error compensation analyzes the source errors that affect the machining error, establishes the error model and reaches the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but the 6-dimensional configuration error of the machine tools can hardly be described by existing approaches. In this paper, a universal geometric error model of CNC machine tools is obtained using screw theory. The 6-dimensional error vector is expressed as a twist, and the error vector is transformed between different frames with the adjoint transformation matrix. This model can entirely describe the overall position and orientation errors of the tool relative to the workpiece. It provides the mathematical model for compensation, and also provides a guideline for the manufacture, assembly and precision synthesis of machine tools.
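
    A minimal numerical sketch of the core operation, expressing a 6-dimensional geometric error as a twist and mapping it between frames with the adjoint of a homogeneous transformation, is given below in Python/NumPy. The (v, w) twist ordering, the frame pose and the error magnitudes are assumptions for illustration; the paper's own conventions may differ.

```python
import numpy as np

def skew(p):
    """3x3 skew-symmetric matrix such that skew(p) @ x == np.cross(p, x)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R, p):
    """Adjoint of the homogeneous transformation (R, p) acting on twists ordered (v, w)."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

# A 6-dimensional geometric error expressed as a small twist in the axis frame:
# three translational errors (v, mm) and three angular errors (w, rad).
error_twist_axis = np.array([5e-3, -2e-3, 1e-3,
                             2e-5, -1e-5, 3e-5])

# Pose of the axis frame in the reference frame (assumed example values).
R = np.eye(3)
p = np.array([100.0, 0.0, 250.0])   # mm

# The same error seen from the reference frame.
error_twist_ref = adjoint(R, p) @ error_twist_axis
print(error_twist_ref)
```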

  12. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Science.gov (United States)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational cost and shorter computing time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  13. Solar Flare Prediction Model with Three Machine-Learning Algorithms Using Ultraviolet Brightening and Vector Magnetogram

    CERN Document Server

    Nishizuka, N; Kubo, Y; Den, M; Watari, S; Ishii, M

    2016-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 h. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetogram, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions from the full-disk magnetogram, from which 60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine learning algorithms: the support vector machine (SVM), k-nearest neighbors (k-NN), and ...
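
    The algorithm comparison described above can be sketched with scikit-learn as below; the random feature matrix merely stands in for the 60 extracted active-region features and flare labels, and the hyper-parameters and split are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# X: one row per active-region sample with extracted features (e.g. neutral-line
# length, current helicity, UV brightening, flare history); y: flare/no-flare
# label for the following 24 h.  Random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))
y = rng.integers(0, 2, size=1000)

# Standardize the features, then fully shuffle and split into training/testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, shuffle=True, random_state=0)

models = {
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale")),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te)))
```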

  14. Dynamical modelling of the excavating chain of a ballast cleaning machine

    Directory of Open Access Journals (Sweden)

    Boris Petkov

    2015-12-01

    Full Text Available The ballast bed, as part of the railway track, fulfils important functions as the binding element between sleepers and substructure. Fouling increases over the years for various reasons. When the necessary good functioning is no longer assured, ballast bed cleaning must be performed. The machines that perform this task are equipped with complex mechanical-hydraulic systems that ensure high productivity, efficiency and quality of the works. This article presents one way of studying the operation of the machine that excavates ballast from the ballast bed and feeds it to the sieving machine. We suggest a dynamic model for simulating the work of the scraper chain of a ballast cleaning machine with different working parameters.

  15. Multiphysics Modeling of a Permanent Magnet Synchronous Machine

    Directory of Open Access Journals (Sweden)

    MARTIS Claudia

    2012-10-01

    Full Text Available This paper analyzes the noise and vibration in PMSMs. There are three types of vibrations in electrical machines: electromagnetic, mechanical and aerodynamic. Electromagnetic forces are the main cause of noise and vibration in PMSMs. It is very important to calculate precisely the natural frequencies of the stator system. If one radial force (the main cause of electromagnetic vibration) has a frequency close to the natural frequency of the stator system for the same order of vibrational mode, then this force can produce dangerous vibration in the stator system. The natural frequencies for the stator system of a PMSM have been calculated. Finally, a structural analysis has been made, pointing out the radial displacement and stress for the chosen PMSM.

  16. A rule-based approach to model checking of UML state machines

    Science.gov (United States)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

  17. Modeling and Analysis of Single Machine Scheduling Based on Noncooperative Game Theory

    Institute of Scientific and Technical Information of China (English)

    WANG Chang-Jun; XI Yu-Geng

    2005-01-01

    Considering the independent optimization requirements of each demander in modern manufacturing, we explore the application of noncooperative games in production scheduling research, and model the scheduling problem as competition for machine resources among a group of selfish jobs. Each job has its own performance objective. For the single-machine, multi-job, non-preemptive scheduling problem, a noncooperative game model is established. Based on the model, many issues concerning the Nash equilibrium solution, such as its existence, number, properties of the solution space, performance of the solution and algorithms, are discussed. The results are tested by a numerical example.

  18. Interpreting linear support vector machine models with heat map molecule coloring

    Directory of Open Access Journals (Sweden)

    Rosenbaum Lars

    2011-03-01

    Full Text Available Abstract Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to have convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor.
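
    The weight-based interpretation can be illustrated with a small scikit-learn sketch: per-feature contributions w_i * x_i of a fitted linear SVM are ranked for one compound. The random fingerprint data are placeholders, and mapping set bits back to atoms and bonds is fingerprint specific, so this only shows the first step of the coloring pipeline under stated assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

# X: binary substructure fingerprints (one bit per substructure), y: active/inactive.
# Random data stands in for a real screening data set.
rng = np.random.default_rng(5)
X = (rng.random((500, 256)) > 0.9).astype(float)
y = rng.integers(0, 2, size=500)

clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X, y)
w = clf.coef_.ravel()

# Per-bit contribution of one compound to the decision value: w_i * x_i.
compound = X[0]
contributions = w * compound

# Summing the contributions of the bits generated by each atom (via the
# fingerprint's bit-info map) would give the per-atom colour intensity.
top_bits = np.argsort(np.abs(contributions))[::-1][:5]
for bit in top_bits:
    print(f"bit {bit}: weight {w[bit]:+.3f}, contribution {contributions[bit]:+.3f}")
```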

  19. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    Science.gov (United States)

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new

  20. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care

  1. A new circuit-oriented model for the analysis of six-phase induction machine performances

    Energy Technology Data Exchange (ETDEWEB)

    Aroquiadassou, Gerard; Henao, Humberto; Capolino, Gerard-Andre [University of Picardie Jules Verne, Department of Electrical Engineering, 33 rue Saint Leu, 80039 Amiens Cedex 1 (France); Cavagnino, Andrea; Boglietti, Aldo [Politecnico di Torino, Department of Electrical Engineering, C.so Duca degli Abruzzi 24, 10129 Torino (Italy)

    2008-10-15

    This paper deals with a six-phase induction machine designed for 42 V embedded applications such as electric power steering. This machine has symmetrical windings displaced by 60 electrical degrees, which allows fault-tolerant modes. In fact, when one or more phases are opened, the machine is able to rotate with a torque reduction. A simple circuit-oriented model has been proposed in order to simulate the six-phase squirrel-cage induction machine and to predict its performance. The proposed method consists of elaborating an electric equivalent circuit from minimal dimensional knowledge of the stator and rotor parts. It takes into account only the magnetic circuit dimensions and the airgap length. A six-phase squirrel-cage induction machine of 0.09 kW, 17 V, 50 Hz, two poles has been used for the experimental set-up. A design program including the non-linear electromagnetic model has also been used, with a complete description of the stator and rotor cores using the iron non-linear characteristic, for the final verification. The simulation results given by the two models are compared with the experimental tests in order to verify their accuracy. The harmonic analyses of the stator currents are also compared to further validate the models. (author)

  2. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of the broken rotor bars fault using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing the anomalies of local machine variables such as torque, magnetic flux, stator current and neutral voltage signature analysis. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors with consideration of the neutral voltage, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing the simulation and experimental results. The obtained results show the effectiveness of the model and allow detection and diagnosis of these defects.

  3. Evaluation of selected near-term energy-conservation options for the Midwest

    Energy Technology Data Exchange (ETDEWEB)

    Evans, A.R.; Colsher, C.S.; Hamilton, R.W.; Buehring, W.A.

    1978-11-01

    This report evaluates the potential for implementation of near-term energy-conservation practices for the residential, commercial, agricultural, industrial, transportation, and utility sectors of the economy in twelve states: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin. The information used to evaluate the magnitude of achievable energy savings includes regional energy use, the regulatory/legislative climate relating to energy conservation, technical characteristics of the measures, and their feasibility of implementation. This work is intended to provide baseline information for an ongoing regional assessment of energy and environmental impacts in the Midwest. 80 references.

  4. Photovoltaic water pumping applications: Assessment of the near-term market

    Science.gov (United States)

    Rosenblum, L.; Bifano, W. J.; Scudder, L. R.; Poley, W. A.; Cusick, J. P.

    1978-01-01

    Water pumping applications represent a potential market for photovoltaics. The price of energy for photovoltaic systems was compared to that of utility line extensions and diesel generators. The potential domestic demand was defined in the government, commercial/institutional and public sectors. The foreign demand and sources of funding for water pumping systems in the developing countries were also discussed briefly. It was concluded that a near term domestic market of at least 240 megawatts and a foreign market of about 6 gigawatts exist.

  5. Trade-off results and preliminary designs of Near-Term Hybrid Vehicles

    Science.gov (United States)

    Sandberg, J. J.

    1980-01-01

    Phase I of the Near-Term Hybrid Vehicle Program involved the development of preliminary designs of electric/heat engine hybrid passenger vehicles. The preliminary designs were developed on the basis of mission analysis, performance specification, and design trade-off studies conducted independently by four contractors. The resulting designs involve parallel hybrid (heat engine/electric) propulsion systems with significant variation in component selection, power train layout, and control strategy. Each of the four designs is projected by its developer as having the potential to substitute electrical energy for 40% to 70% of the petroleum fuel consumed annually by its conventional counterpart.

  6. Phase I of the Near Term Hybrid Passenger Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    The results of Phase I of the Near-Term Hybrid Vehicle Program are summarized. This phase of the program was a study leading to the preliminary design of a 5-passenger hybrid vehicle utilizing two energy sources (electricity and gasoline/diesel fuel) to minimize petroleum usage on a fleet basis. This report presents the following: overall summary of the Phase I activity; summary of the individual tasks; summary of the hybrid vehicle design; summary of the alternative design options; summary of the computer simulations; summary of the economic analysis; summary of the maintenance and reliability considerations; summary of the design for crash safety; and bibliography.

  7. Advanced induction machine model in phase coordinates for wind turbine applications

    DEFF Research Database (Denmark)

    Fajardo, L.A.; Iov, F.; Hansen, Anca Daniela

    2007-01-01

    In this paper an advanced phase-coordinates squirrel cage induction machine model with time-varying electrical parameters affected by magnetic saturation and rotor deep bar effects is presented. The model uses standard data sheets for characterization of the electrical parameters, it is developed...

  8. Combining Psychological Models with Machine Learning to Better Predict People’s Decisions

    Science.gov (United States)

    2012-03-09

    ...in some applications (Kaelbling, Littman, & Cassandra, 1998; Neumann & Morgenstern, 1944; Russell & Norvig, 2003). However, research into people's... scientists often model people's decisions through machine learning techniques (Russell & Norvig, 2003). These models are based on statistical methods such as...

  9. Euler-Lagrange models with complex currents of three-phase electrical machines

    CERN Document Server

    Basic, Duro; Rouchon, Pierre

    2008-01-01

    A Lagrangian formulation with complex currents is developed and yields a direct and simple method for modeling three-phase permanent-magnet and induction machines. The Lagrangian is the sum of the mechanical kinetic energy and the magnetic energy. This magnetic energy is expressed in terms of the rotor angle and the complex stator and rotor currents. Such a Lagrangian setting is a valuable guide for modeling space-harmonic and saturation effects. A complexification procedure is applied here in order to derive the Euler-Lagrange equations with complex stator and rotor currents. Such a complexification process avoids the usual separation into real and imaginary parts and notably simplifies the calculations. Via simple modifications of the magnetic energies we derive non-trivial dynamical models describing permanent-magnet machines with both saturation and saliency, and induction machines with both saturation and space harmonics.

  10. Simulating Turing machines on Maurer machines

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2008-01-01

    In a previous paper, we used Maurer machines to model and analyse micro-architectures. In the current paper, we investigate the connections between Turing machines and Maurer machines with the purpose of gaining insight into computability issues relating to Maurer machines. We introduce ways to

  11. A proposed model for assessing service quality in small machining and industrial maintenance companies

    Directory of Open Access Journals (Sweden)

    Morvam dos Santos Netto

    2014-11-01

    Full Text Available Machining and industrial maintenance services include repair (corrective maintenance) of equipment, activities involving the assembly-disassembly of equipment, fault diagnosis, machining operations, forming operations, welding processes, and assembly and testing of equipment. This article proposes a model for assessing the quality of services provided by small machining and industrial maintenance companies, since there is a gap in the literature regarding this issue and because of the importance of small service companies in the socio-economic development of the country. The model is an adaptation of the SERVQUAL instrument, and the criteria determining the quality of services are designed according to the service cycle of a typical small machining and industrial maintenance company. In this sense, the Moments of Truth have been considered in the preparation of two separate questionnaires. The first questionnaire contains 24 statements that reflect the expectations of customers, and the second one contains 24 statements that measure perceptions of service performance. An additional item was included in each questionnaire to assess, respectively, the overall expectation about the services and the overall company performance. Therefore, it is a model that considers the interfaces of the client/supplier relationship, the peculiarities of the machining and industrial maintenance service sector and the company size.

  12. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool centre point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the current tool path, analysing the geometrical errors of the RTTTR five-axis CNC machine tool, translating current positions into compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms and, finally, editing the old G-codes using a G-code generator algorithm.
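
    A toy sketch of the homogeneous-transformation-matrix (HTM) error modeling idea is shown below for a single linear axis. The full model in the paper chains all five axes and 43 error components, whereas here the error values, tool offset and compensation step are assumed purely for illustration.

```python
import numpy as np

def htm(tx=0.0, ty=0.0, tz=0.0, rx=0.0, ry=0.0, rz=0.0):
    """Homogeneous transformation with rotations rx, ry, rz (rad) and translation (mm)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Ideal motion of a single linear axis (X) and its six error components
# (positioning, two straightness, roll, pitch, yaw) -- assumed example values.
x_cmd = 300.0                                      # commanded X position, mm
T_ideal = htm(tx=x_cmd)
T_error = htm(tx=4e-3, ty=-2e-3, tz=1e-3,          # mm
              rx=3e-6, ry=-8e-6, rz=5e-6)          # rad
T_actual = T_ideal @ T_error

tcp = np.array([0.0, 0.0, -150.0, 1.0])            # tool centre point in the axis frame
volumetric_error = (T_actual @ tcp - T_ideal @ tcp)[:3]
print("TCP deviation [mm]:", volumetric_error)

# Simple compensation step: shift the commanded position by the negative deviation.
compensated_cmd = np.array([x_cmd, 0.0, 0.0]) - volumetric_error
print("Compensated command [mm]:", compensated_cmd)
```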

  13. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  14. Colour Model for Outdoor Machine Vision for Tropical Regions and its Comparison with the CIE Model

    Energy Technology Data Exchange (ETDEWEB)

    Sahragard, Nasrolah; Ramli, Abdul Rahman B [Institute of Advanced Technology, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Marhaban, Mohammad Hamiruce [Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia); Mansor, Shattri B, E-mail: sahragard@yahoo.com [Department of Civil Engineering, Faculty of Engineering, Universiti Putra Malaysia 43400 Serdang, Selangor (Malaysia)

    2011-02-15

    Accurate modeling of daylight and surface reflectance is very useful for most outdoor machine vision applications, specifically those based on color recognition. The existing CIE daylight model has drawbacks that limit its ability to predict the color of incident light. These limitations include not considering ambient light, the effects of light reflected off the ground, and context-specific information. A previously developed color model was tested only for a few geographical places in North America, and its applicability to other places in the world is questionable. Besides, existing surface reflectance models are not easily applied to outdoor images. A reflectance model combining diffuse and specular reflection in normalized HSV color space could be used to predict color. In this paper, a new daylight color model giving the color of daylight for a broad range of sky conditions is developed which suits the weather conditions of tropical places such as Malaysia. A comparison of this daylight color model and the CIE daylight model is discussed. The colors of matte and specular surfaces have been estimated using the developed color model and surface reflection function. The results are shown to be highly reliable.

  15. Short-Term Speed Prediction Using Remote Microwave Sensor Data: Machine Learning versus Statistical Model

    Directory of Open Access Journals (Sweden)

    Han Jiang

    2016-01-01

    Full Text Available Recently, a number of short-term speed prediction approaches have been developed, in which most algorithms are based on machine learning and statistical theory. This paper examined the multistep-ahead prediction performance of eight different models using the 2-minute travel speed data collected from three Remote Traffic Microwave Sensors located on a southbound segment of the 4th Ring Road in Beijing. Specifically, we consider five machine learning methods as candidates: Back Propagation Neural Network (BPNN), nonlinear autoregressive model with exogenous inputs neural network (NARXNN), support vector machine with radial basis function as kernel function (SVM-RBF), Support Vector Machine with Linear Function (SVM-LIN), and Multilinear Regression (MLR). Three statistical models are also selected: Autoregressive Integrated Moving Average (ARIMA), Vector Autoregression (VAR), and Space-Time (ST) model. From the prediction results, we find the following meaningful results: (1) the prediction accuracy of speed deteriorates as the prediction time steps increase for all models; (2) the BPNN, NARXNN, and SVM-RBF can clearly outperform two traditional statistical models: ARIMA and VAR; (3) the prediction performance of ANN is superior to that of SVM and MLR; (4) as the time step increases, the ST model can consistently provide the lowest MAE compared with ARIMA and VAR.
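
    A compact sketch of this kind of comparison, with machine learning models trained on lag features for direct multistep prediction and an ARIMA baseline, is shown below. The synthetic speed series, lag/horizon settings and model orders are assumptions, not the paper's data or tuning.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 2-minute speed series stands in for the microwave-sensor data.
rng = np.random.default_rng(1)
t = np.arange(2000)
speed = 60 + 10 * np.sin(2 * np.pi * t / 720) + rng.normal(0, 2, size=t.size)

LAGS, HORIZON = 12, 3   # use the last 12 observations to predict 3 steps ahead

def make_supervised(series, lags, horizon):
    """Direct multistep formulation: features are the last `lags` values."""
    X, y = [], []
    for i in range(lags, len(series) - horizon + 1):
        X.append(series[i - lags:i])
        y.append(series[i + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_supervised(speed, LAGS, HORIZON)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("MLP", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
                    ("SVM-RBF", SVR(kernel="rbf", C=10.0))]:
    pipe = make_pipeline(StandardScaler(), model)
    pipe.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, pipe.predict(X_te)))

# Statistical baseline: ARIMA fitted on the training portion, forecasting the test span.
train_end = split + LAGS + HORIZON - 1
arima = ARIMA(speed[:train_end], order=(2, 0, 1)).fit()
forecast = arima.forecast(steps=len(speed) - train_end)
print("ARIMA MAE:", mean_absolute_error(speed[train_end:], forecast))
```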

  16. Modeling and optimization of Electrical Discharge Machining (EDM using statistical design

    Directory of Open Access Journals (Sweden)

    Hegab Husein A.

    2015-01-01

    Full Text Available Modeling and optimization of nontraditional machining is still an ongoing area of research. The objective of this work is to optimize the Electrical Discharge Machining process parameters of an aluminum-multiwall carbon nanotube composite (AL-CNT) model. Material Removal Rate (MRR), Electrode Wear Ratio (EWR) and Average Surface Roughness (Ra) are the primary objectives. The machining parameters are machining on-time (sec), discharge current (A), voltage (V), total depth of cut (mm), and %wt. CNT added. Mathematical models for all responses as functions of the significant process parameters are developed using Response Surface Methodology (RSM). Experimental results show that the optimum levels for material removal rate are %wt. CNT (0%), high level of discharge current (6 A) and low level of voltage (50 V); the optimum levels for electrode wear ratio are %wt. CNT (5%) and high level of discharge current (6 A); and the optimum levels for average surface roughness are %wt. CNT (0%), low level of discharge current (2 A) and high level of depth of cut (1 mm). Single-objective optimization is formulated and solved via a Genetic Algorithm. A multi-objective optimization model is then formulated for the three responses of interest. This methodology gathers experimental results, builds mathematical models in the domain of interest and optimizes the process models. As such, process analysis, modeling, design and optimization are achieved.
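
    A hedged sketch of the RSM-plus-optimizer workflow is given below: a quadratic response surface is fitted to (synthetic) experimental data and a global optimizer searches it for the best settings. The parameter ranges, surrogate data and the use of SciPy's differential evolution in place of the paper's genetic algorithm are all assumptions for illustration.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

# Design matrix: [on-time (s), current (A), voltage (V), depth of cut (mm), %wt CNT]
# and the measured material removal rate; random values stand in for the experiments.
rng = np.random.default_rng(2)
X = rng.uniform([50e-6, 2, 50, 0.2, 0], [400e-6, 6, 100, 1.0, 5], size=(30, 5))
mrr = 0.5 * X[:, 1] + 0.002 * X[:, 2] - 0.05 * X[:, 4] + rng.normal(0, 0.05, 30)

# Second-order (quadratic) response surface, the usual RSM form.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, mrr)

# Single-objective optimisation: maximise predicted MRR inside the experimental region.
bounds = [(50e-6, 400e-6), (2, 6), (50, 100), (0.2, 1.0), (0, 5)]
result = differential_evolution(lambda x: -rsm.predict(x.reshape(1, -1))[0], bounds, seed=0)
print("Predicted-optimal settings:", result.x, "predicted MRR:", -result.fun)
```

    A multi-objective variant would fit one surface per response (MRR, EWR, Ra) and optimize a weighted combination or a Pareto front instead of a single objective.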

  17. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective lambda control over the long term. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is superior to that of DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.

  18. Prediction of near-term breast cancer risk using local region-based bilateral asymmetry features in mammography

    Science.gov (United States)

    Li, Yane; Fan, Ming; Li, Lihua; Zheng, Bin

    2017-03-01

    This study proposed a near-term breast cancer risk assessment model based on local-region bilateral asymmetry features in mammography. The database includes 566 cases who underwent at least two sequential FFDM examinations. The 'prior' examinations in the two series were all interpreted as negative (not recalled). In the 'current' examination, 283 women were diagnosed with cancer and 283 remained negative. The ages of the cancer and negative cases were completely matched. These cases were divided into three subgroups according to age: 152 cases in the 37-49 age bracket, 220 cases in the 50-60 age bracket, and 194 cases in the 61-86 age bracket. For each image, two types of local regions, strip-based regions and difference-of-Gaussian basic element regions, were segmented. After that, structural variation features among pixel values and structural similarity features were computed for the strip regions, while positional features were extracted for the basic element regions. The absolute subtraction value was computed between each feature of the left and right local regions. Next, a multi-layer perceptron classifier was implemented to assess the performance of the features for prediction. Features were then selected according to stepwise regression analysis. The AUC achieved 0.72, 0.75 and 0.71 for these three age-based subgroups, respectively. The maximum adjustable odds ratios were 12.4, 20.56 and 4.91 for these three groups, respectively. This study demonstrates that the local region-based bilateral asymmetry features extracted from CC-view mammography could provide useful information to predict near-term breast cancer risk.
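
    A minimal sketch of the classification stage, assuming the local-region features have already been extracted, is shown below: absolute left-right feature differences feed a multi-layer perceptron and performance is reported as AUC. The random feature values, feature count and network size are placeholders, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# left/right: per-case feature vectors computed from the local regions of the two
# CC-view mammograms (structural variation, similarity and positional features).
# Random values stand in for real measurements; y = 1 for near-term cancer cases.
rng = np.random.default_rng(3)
left, right = rng.normal(size=(566, 40)), rng.normal(size=(566, 40))
y = rng.integers(0, 2, size=566)

# Bilateral asymmetry features: absolute left-right differences.
X = np.abs(left - right)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```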

  19. A Case Study of Employing A Single Server Nonpreemptive Priority Queuing Model at ATM Machine

    OpenAIRE

    Abdullah Furquan; Abdullah Imran

    2015-01-01

    This paper discusses a case study of employing a single server nonpreemptive priority queuing model [1] at an ATM machine which originally operates on an M/M/1 model. In this study we have taken two priority classes of people in the following order: priority class 1 - women; priority class 2 - men. Sometimes a long queue is formed at the ATM machine (single server), but the bank management does not have enough money to invest in installing a new ATM machine. In this situation we want to apply a single ser...
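
    The nonpreemptive priority waiting times can be computed from the standard M/G/1 priority (Cobham) formulas specialised to exponential service; the sketch below assumes illustrative arrival and service rates rather than data from the case study.

```python
# Mean waiting times in a single-server nonpreemptive priority queue with
# Poisson arrivals and exponential service (class 0 = women, highest priority).

def priority_waiting_times(arrival_rates, mu):
    """Return the mean queueing delay W_q for each priority class (highest first)."""
    # Mean residual service work brought by all classes (E[S^2] = 2/mu^2 for exponential).
    w0 = sum(lam * (2.0 / mu**2) for lam in arrival_rates) / 2.0
    waits, cum_rho = [], 0.0
    for lam in arrival_rates:
        prev_rho = cum_rho
        cum_rho += lam / mu
        if cum_rho >= 1.0:
            raise ValueError("System is unstable: total utilisation >= 1")
        waits.append(w0 / ((1.0 - prev_rho) * (1.0 - cum_rho)))
    return waits

lam_women, lam_men, mu = 0.4, 0.5, 1.2      # arrivals/min and services/min (assumed)
w_women, w_men = priority_waiting_times([lam_women, lam_men], mu)
print(f"Mean wait, women: {w_women:.2f} min; men: {w_men:.2f} min")
```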

  20. An Identification Model of Health States of Machine Wear Based on Oil Analysis

    Institute of Scientific and Technical Information of China (English)

    FU Jun-qing; LI Han-xiong; XUAO Xin-hua

    2005-01-01

    This paper presents a modeling procedure for deriving a single-value measure based on a regression model, and a method for determining a statistical threshold value as an identification criterion for normal or abnormal states of machine wear. A real numerical example is examined using the presented method and identification criterion. The results indicate that the judgments by the presented methods are basically consistent with the real facts, and therefore the method and identification criterion are valuable for judging the normal or abnormal state of machine wear based on oil analysis.

  1. FLOW STRESS MODEL FOR HARD MACHINING OF AISI H13 WORK TOOL STEEL

    Institute of Scientific and Technical Information of China (English)

    H. Yan; J. Hua; R. Shivpuri

    2005-01-01

    An approach is presented to characterize the stress response of the workpiece in hard machining, accounting for the effects of the initial workpiece hardness, temperature, strain and strain rate on flow stress. AISI H13 work tool steel was chosen to verify this methodology. The proposed flow stress model demonstrates good agreement with data collected from published experiments. Therefore, the proposed model can be used to predict the corresponding flow stress-strain response of AISI H13 work tool steel with variation of the initial workpiece hardness in hard machining.

  2. Analytical modelling of modular and unequal tooth width surface-mounted permanent magnet machines

    OpenAIRE

    Li, G. J.; Zhu, Z-Q.

    2015-01-01

    This paper presents simple analytical modelling for two types of 3-phase surface-mounted permanent magnet (SPM) machines, namely modular and unequal tooth width (UNET) machines, with different slot/pole number combinations. It is based on the slotless open-circuit air-gap flux density and the slotted air-gap relative permeance calculations. This model allows calculating the open-circuit air-gap flux density, phase flux linkage and back electromotive force (EMF), average torque of both the modula...

  3. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with purely electric vehicles. This report documents a hybrid-vehicle design approach which is aimed at the development of the technology required to achieve this potential - in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid-Vehicle Program. The major tasks in this program were: (1) Mission Analysis and Performance Specification Studies; (2) Design Tradeoff Studies; and (3) Preliminary Design. Detailed reports covering each of these tasks are included as appendices to this report and issued under separate cover; a fourth task, Sensitivity Studies, is also included in the report on the Design Tradeoff Studies. Because of the detail with which these appendices cover methodology and both interim and final results, the body of this report was prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  4. Baroreflex control of renal sympathetic nerve activity and heart rate in near-term fetal sheep.

    Science.gov (United States)

    Booth, Lindsea C; Gunn, Alistair J; Malpas, Simon C; Barrett, Carolyn J; Davidson, Joanne O; Guild, Sarah-Jane; Bennet, Laura

    2011-08-01

    Late preterm infants, born between 34 and 36 weeks gestation, have significantly higher morbidity than neonates born at full term, which may be partly related to reduced sensitivity of the arterial baroreflex. The present study assessed baroreflex control of heart rate (HR) and renal sympathetic nerve activity (RSNA) in near-term fetal sheep at 123 ± 1 days gestation. At this age, although fetuses are not fully mature in some respects (term is 147 days), sleep-state cycling is established [between high-voltage, low-frequency (HV) and low-voltage, high-frequency (LV) sleep], and neural myelination is similar to that of the term human infant. Fetal sheep were instrumented to record blood pressure (BP), HR (n = 15) and RSNA (n = 5). Blood pressure was manipulated using the vasoactive drugs phenylephrine and sodium nitroprusside. In both HV and LV sleep, phenylephrine was associated with increased arterial BP and decreased HR. In HV sleep, phenylephrine was associated with a fall in RSNA, from 124 ± 14 to 58 ± 11%. The fall in BP after sodium nitroprusside was associated with a significant increase in HR during LV but not HV sleep, and there was no significant effect of hypotension on RSNA. These data demonstrate that in near-term fetal sheep baroreflex activity is only partly active and is highly modulated by sleep state. Critically, there was no RSNA response to marked hypotension; this finding has implications for the ability of the late preterm fetus to adapt to low BP.

  5. Contribution of maternal thyroxine to fetal thyroxine pools in normal rats near term

    Energy Technology Data Exchange (ETDEWEB)

    Morreale de Escobar, G.; Calvo, R.; Obregon, M.J.; Escobar Del Rey, F. (Instituto de Investigaciones Biomedicas, Madrid (Spain))

    1990-05-01

    Normal dams were equilibrated isotopically with [125I]T4 infused from 11 to 21 days of gestation, at which time maternal and fetal extrathyroidal tissues were obtained to determine their [125I]T4 and T4 contents. The specific activity of the [125I]T4 in the fetal tissues was lower than in maternal T4 pools. The extent of this change allows evaluation of the net contribution of maternal T4 to the fetal extrathyroidal T4 pools. At 21 days of gestation, near term, this represents 17.5 +/- 0.9% of the T4 in fetal tissues, a value considerably higher than previously calculated. The methodological approach was validated in dams given a goitrogen to block fetal thyroid function. The specific activities of the [125I]T4 in maternal and fetal T4 pools were then similar, confirming that in cases of fetal thyroid impairment the T4 in fetal tissues is determined by the maternal contribution. Thus, previous statements that in normal conditions fetal thyroid economy near term is totally independent of maternal thyroid status ought to be reconsidered.

  6. Phase I of the Near-Term Hybrid Vehicle Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-10

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with pure electric vehicles. This report documents a hybrid vehicle design approach which is aimed at the development of the technology required to achieve this potential, in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid Vehicle Program. The major tasks in this program were: mission analysis and performance specification studies; design tradeoff studies; and preliminary design. Detailed reports covering each of these tasks are included as appendices to this report. A fourth task, sensitivity studies, is also included in the report on the design tradeoff studies. Because of the detail with which these appendices cover methodology and results, the body of this report has been prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  7. Static Object Detection Based on a Dual Background Model and a Finite-State Machine

    Directory of Open Access Journals (Sweden)

    Heras Evangelio Rubén

    2011-01-01

    Full Text Available Detecting static objects in video sequences has high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the meaning for the interpretation of the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and it can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated with several public datasets.
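
    A rough sketch of the dual-background idea using OpenCV is shown below: two MOG2 subtractors run with different learning rates and their masks are combined through a small state table. The learning rates, input file name and the per-pixel promotion logic (left as a comment) are assumptions, not the paper's exact configuration.

```python
import cv2

# Two MOG2 background models with identical settings except the learning rate:
# a fast ("short-term") and a slow ("long-term") model.
short_term = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
long_term = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def classify_pixels(frame):
    """Look-up-table interpretation of the two foreground masks per pixel."""
    fg_short = short_term.apply(frame, learningRate=0.01) > 0    # adapts quickly
    fg_long = long_term.apply(frame, learningRate=0.0005) > 0    # adapts slowly
    moving = fg_short & fg_long                  # foreground in both models
    static_candidate = ~fg_short & fg_long       # absorbed only by the fast model
    uncovered = fg_short & ~fg_long              # background revealed after removal
    return moving, static_candidate, uncovered

cap = cv2.VideoCapture("scene.avi")              # assumed input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    moving, static_candidate, uncovered = classify_pixels(frame)
    # A full system would keep a per-pixel counter and promote long-lived
    # static candidates to "abandoned object" through the finite-state machine.
```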

  8. A Novel Machine Learning Strategy Based on Two-Dimensional Numerical Models in Financial Engineering

    Directory of Open Access Journals (Sweden)

    Qingzhen Xu

    2013-01-01

    Full Text Available Machine learning is the most commonly used technique to address larger and more complex tasks by analyzing the most relevant information already present in databases. In order to better predict the future trend of the index, this paper proposes a two-dimensional numerical model for machine learning to simulate a major U.S. stock market index and uses a nonlinear implicit finite-difference method to find numerical solutions of the two-dimensional simulation model. The proposed machine learning method uses partial differential equations to predict the stock market and can be extensively used to accelerate large-scale data processing on historical databases. The experimental results show that the proposed algorithm reduces the prediction error and improves forecasting precision.

  9. Remotely sensed data assimilation technique to develop machine learning models for use in water management

    Science.gov (United States)

    Zaman, Bushra

    Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used learning machine techniques and information acquired through remote sensing, to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision making process related to water management in irrigated systems. A data assimilation technique was used to build a learning machine model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops but the model is data dependent and can be used with other datasets and hence can be applied to other landcover classification problems. The dissertation compared the performance of relevance vector and the support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases, and to scientific and medical research. The results for the soil moisture models would give a rough idea to farmers

  10. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in broad communities spreading from social to business applications, and machine learning is an important method for this problem. When we use machine learning to make predictions, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we get new data as time goes by and use these data to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and there will be much garbage data in the training set. We should remove the garbage data, as it reduces the accuracy of the prediction. The main contribution of this article is using the new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model to describe how the data flow among the test set, training set, validation set and the garbage set, and improve the accuracy of prediction. As the data set changes, the best machine learning model will change. We design a hybrid voting algorithm to fit the data set better; it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the learning models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and achieve a better result than the traditional method that adds all the data to the data set, and our hybrid voting algorithm achieves a better prediction result than the average accuracy of the other prediction models.
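
    The validation-weighted voting idea can be sketched with scikit-learn as follows; the seven stand-in learners, the synthetic data and the accuracy-based weights are assumptions used only to illustrate the mechanism, not the paper's model pool.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Seven learners predicting the same problem (stand-ins for the paper's model pool).
learners = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
    ("svc", SVC(probability=True, random_state=0)),
]

# Weight each learner by its validation-set accuracy so better models get larger
# votes; the weights would be re-estimated as the data flow updates the sets over time.
weights = []
for _, model in learners:
    model.fit(X_train, y_train)
    weights.append(accuracy_score(y_val, model.predict(X_val)))

ensemble = VotingClassifier(estimators=learners, voting="soft", weights=weights)
ensemble.fit(X_train, y_train)
print("Weighted-vote test accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```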

  11. Predicting the Plant Root-Associated Ecological Niche of 21 Pseudomonas Species Using Machine Learning and Metabolic Modeling

    OpenAIRE

    Chien, Jennifer; Larsen, Peter

    2017-01-01

    Plants rarely occur in isolated systems. Bacteria can inhabit either the endosphere, the region inside the plant root, or the rhizosphere, the soil region just outside the plant root. Our goal is to understand whether using genomic data and media-dependent metabolic model information is better for training machine learning to predict bacterial ecological niche than media-independent models or pure genome-based species trees. We considered three machine learning techniques: support vector machin...

  12. A mechanistic ultrasonic vibration amplitude model during rotary ultrasonic machining of CFRP composites.

    Science.gov (United States)

    Ning, Fuda; Wang, Hui; Cong, Weilong; Fernando, P K S C

    2017-04-01

    Rotary ultrasonic machining (RUM) has been investigated in machining of brittle, ductile, as well as composite materials. Ultrasonic vibration amplitude, as one of the most important input variables, affects almost all the output variables in RUM. Numerous investigations on measuring ultrasonic vibration amplitude without RUM machining have been reported. In recent years, ultrasonic vibration amplitude measurement with RUM of ductile materials has been investigated. It was found that the ultrasonic vibration amplitude with RUM differed from that without RUM under the same input variables. RUM is primarily used in machining of brittle materials through brittle fracture removal. For this reason, the method for measuring ultrasonic vibration amplitude in RUM of ductile materials is not feasible for measuring it in RUM of brittle materials. However, there are no reported methods for measuring ultrasonic vibration amplitude in RUM of brittle materials. In this study, ultrasonic vibration amplitude in RUM of brittle materials is investigated by establishing a mechanistic amplitude model through cutting force. Pilot experiments were conducted to validate the calculation model. The results show that there are no significant differences between the amplitude values calculated by the model and those obtained from experimental investigations. The model can provide a relationship between ultrasonic vibration amplitude and input variables, which is a foundation for building models to predict other output variables in RUM.

  13. State Machine Modeling of the Space Launch System Solid Rocket Boosters

    Science.gov (United States)

    Harris, Joshua A.; Patterson-Hine, Ann

    2013-01-01

    The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.
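
    As a minimal illustration of the finite-state-machine idea only (not the NASA Stateflow model itself), an ignition sequence with state- and event-based transitions could be sketched as follows; the states and events are hypothetical.

```python
# Hypothetical finite state machine for a booster ignition sequence (illustration only).
TRANSITIONS = {
    ("SAFE",    "arm_command"):      "ARMED",
    ("ARMED",   "ignition_command"): "IGNITED",
    ("ARMED",   "abort_command"):    "SAFE",
    ("IGNITED", "burnout_detected"): "BURNOUT",
}

def step(state: str, event: str) -> str:
    """Return the next state, or stay in the current state if the event is not valid here."""
    return TRANSITIONS.get((state, event), state)

state = "SAFE"
for event in ["arm_command", "ignition_command", "burnout_detected"]:
    state = step(state, event)
    print(event, "->", state)
```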

  14. Modelling habitat requirements of white-clawed crayfish (Austropotamobius pallipes) using support vector machines

    Directory of Open Access Journals (Sweden)

    Favaro L.

    2011-07-01

    Full Text Available The white-clawed crayfish’s habitat has been profoundly modified in Piedmont (NW Italy) due to environmental changes caused by human impact. Consequently, native populations have decreased markedly. In this research project, support vector machines were tested as possible tools for evaluating the ecological factors that determine the presence of white-clawed crayfish. A system of 175 sites was investigated, 98 of which recorded the presence of Austropotamobius pallipes. At each site 27 physical-chemical, environmental and climatic variables were measured according to their importance to A. pallipes. Various feature selection methods were employed. These yielded three subsets of variables that helped build three different types of models: (1) models with no variable selection; (2) models built by applying Goldberg’s genetic algorithm after variable selection; (3) models built by using a combination of four supervised-filter evaluators after variable selection. These different model types helped us realise how important it was to select the right features if we wanted to build support vector machines that perform as well as possible. In addition, support vector machines have a high potential for predicting indigenous crayfish occurrence, according to our findings. Therefore, they are valuable tools for freshwater management, tools that may prove to be much more promising than traditional and other machine-learning techniques.
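
    A minimal sketch of this kind of presence/absence model, using an SVM with a simple univariate feature-selection step in place of the genetic-algorithm and filter-based selections used in the study (the data and variable counts below are placeholders), might be:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for 175 sites x 27 environmental variables with presence/absence labels.
X, y = make_classification(n_samples=175, n_features=27, n_informative=8, random_state=1)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # stand-in for the paper's feature-selection stage
    SVC(kernel="rbf", C=1.0),
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```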

  15. Mathematical Model of Asynchronous Machine in MATLAB Simulink

    Directory of Open Access Journals (Sweden)

    A A Ansari

    2010-05-01

    Full Text Available Different mathematical models have been used over the years to examine different problems associated with induction motors. These range from simple equivalent circuit models to more complex d,q models and abc models which allow the inclusion of various forms of impedance and/or voltage unbalance. Recently, hybrid models have been developed which allow the inclusion of supply-side unbalance but with the computational economy of the d,q models. This paper presents these models with typical results and provides guidelines for their use. The dynamic simulation of a small-power induction motor based on mathematical modelling is proposed in this paper. Dynamic simulation is one of the key steps in the validation of the design process of motor drive systems and is needed to eliminate inadvertent design mistakes and the resulting errors in prototype construction and testing. This paper demonstrates the simulation of the steady-state performance of an induction motor with a MATLAB program; the three-phase induction motor is modeled and simulated with a SIMULINK model.

  16. Modeling powder encapsulation in dosator-based machines: II. Experimental evaluation.

    Science.gov (United States)

    Khawam, Ammar; Schultz, Leon

    2011-12-15

    A theoretical model was previously derived to predict powder encapsulation in dosator-based machines. The theoretical basis of the model was discussed earlier. In this part, the model was evaluated experimentally using two powder formulations with substantially different flow behavior. Encapsulation experiments were performed using a Zanasi encapsulation machine under two sets of experimental conditions. Model-predicted outcomes such as encapsulation fill weight and plug height were compared to those obtained experimentally. Results showed a high correlation between predicted and actual outcomes, demonstrating the model's success in predicting the encapsulation of both formulations. The model is a potentially useful in silico analysis tool that can be used for capsule dosage form development in accordance with quality by design (QbD) principles.

  17. Comparison of Models Needed for Conceptual Design of Man-Machine Systems in Different Application Domains

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1986-01-01

    For systematic and computer-aided design of man-machine systems, a consistent framework is needed, i. e. , a set of models which allows the selection of system characteristics which serve the individual user not only to satisfy his goal, but also to select mental processes that match his resource...... of other domains, such as emergency management, CAD/CAM/CIM, and office systems, and describes the characteristic differences in model requirements and requirements for model development....

  18. Temperature fields in machining processes and heat transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Palazzo, G.; Pasquino, R. [University of Salerno Via Ponte Donmelillo, Fisciano (Italy). Department of Mechanical Engineering; Bellomo, N. [Politecnico Torino Corso Duca degli Abruzzi, Torino (Italy). Department of Mathematics

    2002-07-01

    This paper deals with the modelling of the heat transfer process with special attention to the characterization of the thermal field during turning processes. Specifically, the measurement of the thermal field and the selection of the proper heat transfer models are dealt with. The analysis is developed in view of the solution of direct and inverse problems. (author)

  19. Modelling and Control of Inverse Dynamics for a 5-DOF Parallel Kinematic Polishing Machine

    Directory of Open Access Journals (Sweden)

    Weiyang Lin

    2013-08-01

     /  control method is presented and investigated 2∞ in order to track the error control of the inverse dynamic model; the simulation results from different conditions show that the mixed  /  control method could 2∞ achieve an optimal and robust control performance. This work shows that the presented PKPM has a higher dynamic performance than conventional machine tools.

  20. Modelling and optimization of a permanent-magnet machine in a flywheel

    NARCIS (Netherlands)

    Holm, S.R.

    2003-01-01

    This thesis describes the derivation of an analytical model for the design and optimization of a permanent-magnet machine for use in an energy storage flywheel. A prototype of this flywheel is to be used as the peak-power unit in a hybrid electric city bus. The thesis starts by showing the feasibili

  1. Advanced Dynamics and Model-Based Control of Structures and Machines

    CERN Document Server

    Krommer, Michael; Belyaev, Alexander

    2012-01-01

    The book contains 26 scientific contributions by leading experts from Russia, Austria, Italy, Japan and Taiwan. It presents an overview on recent developments in Advanced Dynamics and Model Based Control of Structures and Machines. Main topics are nonlinear control of structures and systems, sensing and actuation, active and passive damping, nano- and micromechanics, vibrations and waves.

  2. An Universal Modeling Method for Enhancing the Volumetric Accuracy of CNC Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Volumetric error modeling is an important technique for enhancing the accuracy of CNC machine tools by error compensation. In this research field, the main question is how to find a universal kinematics modeling method for different kinds of NC machine tools. Multi-body system theory is often used to solve the dynamics problem of complex physical systems, but till now the error factors that always exist in practical systems have not been considered. In this paper, the accuracy kinematics of MB...

  3. Algorithm for Modeling Wire Cut Electrical Discharge Machine Parameters using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    G.Sankara Narayanan

    2014-03-01

    Full Text Available Unconventional machining processes find many applications in the aerospace and precision industries. They are preferred over conventional methods because of the advent of composites, high strength-to-weight-ratio materials and complex parts, and because of their high accuracy and precision. Usually in unconventional machine tools, a trial-and-error method is used to fix the values of process parameters, which increases production time and material wastage. A mathematical model functionally relating the process parameters and operating parameters of a wire cut electric discharge machine (WEDM) is developed incorporating an artificial neural network (ANN); the workpiece material is SKD11 tool steel. This is accomplished by training a feed-forward neural network with the back-propagation Levenberg-Marquardt learning algorithm. The data used for training and testing the ANN are obtained by conducting trial runs on a wire cut electric discharge machine in a small-scale industry in South India. The programs for training and testing the neural network are developed using the MATLAB 7.0.1 package. In this work, parameters such as thickness, time and wear are considered as inputs, and an algorithm relating them to the process parameters is derived. Hence, the proposed algorithm reduces the time taken by trial runs to set the input process parameters of the WEDM, thereby reducing production time and material wastage. Thus the cost of machining is reduced and overall productivity is increased.
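
    The paper trains its network in MATLAB with the Levenberg-Marquardt algorithm; as a rough, hedged analogue, a small feed-forward regression network mapping the three inputs (thickness, time, wear) to one process parameter could be sketched in Python as follows. The data and target relationship are synthetic placeholders, and scikit-learn's MLPRegressor uses gradient-based solvers rather than Levenberg-Marquardt.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder inputs: thickness (mm), time (min), wear (mm) -- ranges are illustrative.
X = rng.uniform([1.0, 10.0, 0.0], [20.0, 120.0, 0.5], size=(200, 3))
# Synthetic target standing in for a WEDM process parameter.
y = 2.0 * X[:, 0] + 0.05 * X[:, 1] - 30.0 * X[:, 2] + rng.normal(0, 0.5, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X[:150], y[:150])
print("test R^2:", model.score(X[150:], y[150:]))
```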

  4. An Ant Optimization Model for Unrelated Parallel Machine Scheduling with Energy Consumption and Total Tardiness

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2015-01-01

    Full Text Available This research considers an unrelated parallel machine scheduling problem with energy consumption and total tardiness. The problem is compounded by two challenges: differences in the energy consumption of the unrelated parallel machines, and the interaction between job assignments and machine state operations. To begin with, we establish a mathematical model for this problem. Then an ant colony optimization algorithm based on the ATC heuristic rule (ATC-ACO) is presented. Furthermore, optimal parameters of the proposed algorithm are defined via Taguchi methods for generating test data. Finally, comparative experiments indicate that the proposed ATC-ACO algorithm has better performance in minimizing energy consumption as well as total tardiness, and that the modified ATC heuristic rule is more effective at reducing energy consumption.
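
    The ATC (Apparent Tardiness Cost) rule referred to here is commonly written as a priority index I_j(t) = (w_j / p_j) · exp(-max(d_j - p_j - t, 0) / (k · p̄)). The sketch below shows only this dispatching rule with hypothetical job data and look-ahead parameter k; it omits the paper's ant-colony search and energy-aware modification.

```python
import math

def atc_priority(job, t, k, p_bar):
    """Apparent Tardiness Cost index of one job at time t."""
    w, p, d = job["weight"], job["proc_time"], job["due"]
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

jobs = [  # hypothetical jobs: weight, processing time, due date
    {"weight": 1.0, "proc_time": 4.0, "due": 10.0},
    {"weight": 2.0, "proc_time": 3.0, "due": 6.0},
    {"weight": 1.5, "proc_time": 5.0, "due": 20.0},
]
p_bar = sum(j["proc_time"] for j in jobs) / len(jobs)
t = 0.0
best = max(jobs, key=lambda j: atc_priority(j, t, k=2.0, p_bar=p_bar))
print("dispatch first:", best)
```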

  5. Modeling of thermal spalling during electrical discharge machining of titanium diboride

    Energy Technology Data Exchange (ETDEWEB)

    Gadalla, A.M.; Bozkurt, B.; Faulk, N.M. (Texas A and M Univ., Dept. of Chemical Engineering, College Station, TX (US))

    1991-04-01

    Erosion in electrical discharge machining has been described as occurring by melting and flushing the liquid formed. Recently, however, thermal spalling was reported as the mechanism for machining refractory materials with low thermal conductivity and high thermal expansion. The process is described in this paper by a model based on a ceramic surface exposed to a constant circular heating source which supplied a constant flux over the pulse duration. The calculations were based on TiB{sub 2} mechanical properties along a and c directions. Theoretical predictions were verified by machining hexagonal TiB{sub 2}. Large flakes of TiB{sub 2} with sizes close to grain size and maximum thickness close to the predicted values were collected, together with spherical particles of Cu and Zn eroded from cutting wire. The cutting surfaces consist of cleavage planes sometimes contaminated with Cu, Zn, and impurities from the dielectric fluid.

  6. Small-time scale network traffic prediction based on a local support vector machine regression model

    Institute of Scientific and Technical Information of China (English)

    Meng Qing-Fang; Chen Yue-Hui; Peng Yu-Hua

    2009-01-01

    In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.
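
    A hedged sketch of the overall idea (delay embedding of the series, then a local support vector regression over the nearest neighbours of the current state) could look like this in Python; the embedding dimension, delay and neighbour count are illustrative choices rather than the paper's prediction- and BIC-selected values.

```python
import numpy as np
from sklearn.svm import SVR

def delay_embed(x, dim, tau=1):
    """Reconstruct the phase space of a scalar series x with embedding dimension dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_svr_predict(x, dim=4, k=20):
    """Predict the next value of x from the k nearest neighbours of the latest state."""
    X = delay_embed(x, dim)
    states, targets = X[:-1], x[dim:]          # state at time t -> value at t+1
    query = X[-1]
    idx = np.argsort(np.linalg.norm(states - query, axis=1))[:k]
    model = SVR(kernel="rbf", C=10.0).fit(states[idx], targets[idx])
    return model.predict(query[None, :])[0]

t = np.arange(500)
series = np.sin(0.1 * t) + 0.05 * np.random.default_rng(0).normal(size=500)  # toy "traffic" series
print("next-value forecast:", local_svr_predict(series))
```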

  7. Programming and machining of complex parts based on CATIA solid modeling

    Science.gov (United States)

    Zhu, Xiurong

    2017-09-01

    Complex parts are designed and programmed using CATIA solid modeling and simulated machining, illustrating the importance of programming and process planning in CNC machining. During part design, the underlying principle is first analysed in depth, and then the dimensions are designed, with the individual dimension chains connected to each other; back-calculation and several other methods are then used to determine the final dimensions of the parts. The part material was selected after careful study and repeated testing, with 6061 aluminum alloy chosen in the end. According to the actual conditions at the machining site, the various factors of the machining process must be considered comprehensively. The simulation should be based on the actual machining rather than on shape alone, and it can be used as a reference for machining.

  8. Heliostat Manufacturing for Near-Term Markets: Phase II Final Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1998-12-21

    This report describes a project by Science Applications International Corporation and its subcontractors Boeing/Rocketdyne and Bechtel Corp. to develop manufacturing technology for production of SAIC stretched membrane heliostats. The project consists of three phases, of which two are complete. This first phase had as its goals to identify and complete a detailed evaluation of manufacturing technology, process changes, and design enhancements to be pursued for near-term heliostat markets. In the second phase, the design of the SAIC stretched membrane heliostat was refined, manufacturing tooling for mirror facet and structural component fabrication was implemented, and four proof-of-concept/test heliostats were produced and installed in three locations. The proposed plan for Phase III calls for improvements in production tooling to enhance product quality and prepare increased production capacity. This project is part of the U.S. Department of Energy's Solar Manufacturing Technology Program (SolMaT).

  9. Chemicals from Biomass: A Market Assessment of Bioproducts with Near-Term Potential

    Energy Technology Data Exchange (ETDEWEB)

    Biddy, Mary J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Scarlata, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kinchin, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-03-23

    Production of chemicals from biomass offers a promising opportunity to reduce U.S. dependence on imported oil, as well as to improve the overall economics and sustainability of an integrated biorefinery. Given the increasing momentum toward the deployment and scale-up of bioproducts, this report strives to: (1) summarize near-term potential opportunities for growth in biomass-derived products; (2) identify the production leaders who are actively scaling up these chemical production routes; (3) review the consumers and market champions who are supporting these efforts; (4) understand the key drivers and challenges to move biomass-derived chemicals to market; and (5) evaluate the impact that scale-up of chemical strategies will have on accelerating the production of biofuels.

  10. Analysis of near-term production and market opportunities for hydrogen and related activities

    Energy Technology Data Exchange (ETDEWEB)

    Mauro, R.; Leach, S. [National Hydrogen Association, Washington, DC (United States)

    1995-09-01

    This paper summarizes current and planned activities in the areas of hydrogen production and use, near-term venture opportunities, and codes and standards. The rationale for these efforts is to assess industry interest and engage in activities that move hydrogen technologies down the path to commercialization. Some of the work presented in this document is a condensed, preliminary version of reports being prepared under the DOE/NREL contract. In addition, the NHA work funded by Westinghouse Savannah River Corporation (WSRC) to explore the opportunities and industry interest in a Hydrogen Research Center is briefly described. Finally, the planned support of and industry input to the Hydrogen Technical Advisory Panel (HTAP) on hydrogen demonstration projects is discussed.

  11. Impacts of Near-term Climate Change on Surface Water - Groundwater Availability in the Nueces River basin, TX

    Science.gov (United States)

    Sinha, T.; Kumar, M.

    2014-12-01

    In arid and semi-arid regions, sustainability of surface water and groundwater resources is highly uncertain in the face of climate change as well as under competing demands due to urbanization, population growth and water needs to support ecosystem services. Most studies on climate change impact assessment focus on either surface water or groundwater resources alone. In this study, we utilize a fully coupled surface water and groundwater model, Penn-State Integrated Hydrologic Model (PIHM), and recent climate change projections from Climate Models Inter-comparison Project-5 (CMIP5) to evaluate impacts of near-term climate change on water availability in the Nueces River basin, TX. After performing calibration and validation of PIHM over multiple sites, hindcast simulations will be performed over the 1981-2010 period using data from multiple General Circulation Models (GCMs) obtained from the CMIP5 Project. The results will be compared to the observed data to understand added utility of hindcasts in improving the estimation of surface water and groundwater resources. Finally, we will assess the impacts of climate change on both surface water and groundwater resources over the next 20-30 years, which is a relevant time period for water management decisions.

  12. Colorado Plateau Rapid Ecoregion Assessment Change Agents - Development - Current, Near-Term, and Long-Term Potential High Landscape Development

    Data.gov (United States)

    Bureau of Land Management, Department of the Interior — This map shows areas of high current, near-term, and long-term potential landscape development, based on factors such as urban areas, agriculture, roads, and energy...

  13. Modelling of Moving Coil Actuators in Fast Switching Valves Suitable for Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    The efficiency of digital hydraulic machines is strongly dependent on the valve switching time. Recently, fast switching have been achieved by using a direct electromagnetic moving coil actuator as the force producing element in fast switching hydraulic valves suitable for digital hydraulic...... machines. Mathematical models of the valve switching, targeted for design optimisation of the moving coil actuator, are developed. A detailed analytical model is derived and presented and its accuracy is evaluated against transient electromagnetic finite element simulations. The model includes...... an estimation of the eddy currents generated in the actuator yoke upon current rise, as they may have significant influence on the coil current response. The analytical model facilitates fast simulation of the transient actuator response opposed to the transient electro-magnetic finite element model which...

  14. Identification and non-integer order modelling of synchronous machines operating as generator

    Directory of Open Access Journals (Sweden)

    Szymon Racewicz

    2012-09-01

    Full Text Available This paper presents an original mathematical model of a synchronous generator using derivatives of fractional order. In contrast to classical models composed of a large number of R-L ladders, it comprises half-order impedances, which enable the accurate description of the electromagnetic induction phenomena in a wide frequency range, while minimizing the order and number of model parameters. The proposed model takes into account the skin effect in damper cage bars, the effects of eddy currents in rotor solid parts, and the saturation of the machine magnetic circuit. The half-order transfer functions used for modelling these phenomena were verified by simulation of ferromagnetic sheet impedance using the finite elements method. The analysed machine’s parameters were identified on the basis of SSFR (StandStill Frequency Response) characteristics measured on a gradually magnetised synchronous machine.
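
    As a small illustration of what a half-order impedance looks like in the frequency domain (a generic Z(s) = R + K·s^{1/2} element with placeholder values, not the identified parameters of the machine in the paper), its magnitude and phase can be evaluated directly:

```python
import numpy as np

# Generic half-order impedance Z(s) = R + K * sqrt(s), evaluated on s = j*omega.
R, K = 0.05, 0.2            # placeholder parameter values
f = np.logspace(-1, 4, 6)   # Hz
s = 1j * 2 * np.pi * f
Z = R + K * np.sqrt(s)

for fi, zi in zip(f, Z):
    print(f"f = {fi:8.1f} Hz  |Z| = {abs(zi):7.3f}  phase = {np.degrees(np.angle(zi)):6.1f} deg")
# The phase of the K*sqrt(s) term tends to +45 degrees, the signature of a half-order element.
```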

  15. A state machine approach in modelling the heating process of a building

    Energy Technology Data Exchange (ETDEWEB)

    Pakanen, Jouko [Helsinki University of Technology, P.O. Box 3300, FI-02015 TKK Espoo (Finland); Karjalainen, Sami [VTT, P.O. Box 1000, FI-02044 VTT Espoo (Finland)

    2009-05-15

    Process models and their applications have gradually become an integral part of the design, maintenance and automation of modern buildings. The following state machine model outlines a new approach in this area. The heating power described by the model is based on the recent inputs as well as on the past inputs and outputs of the process, which thus also represent the states of the system. Identifying the model means collecting, sorting and storing observations, but also effectively utilizing their inherent relationships and nearest neighbours. The latter aspect makes it possible to create a uniform set of data, which forms the characteristic, dynamic behaviour of the HVAC process. The state machine model is non-parametric and needs no sophisticated identification algorithm. It is therefore suitable for small microprocessor devices equipped with a larger memory capacity. The first test runs, performed in a simulated environment, were encouraging and showed good prediction capability. (author)

  16. The Synthesis of Precise Rotating Machine Mathematical Model, Operating Natural Signals and Virtual Data

    Science.gov (United States)

    Zhilenkov, A. A.; Kapitonov, A. A.

    2017-07-01

    It is known that synchronous machine catalogue data are presented for the case of a two-phase machine in a rotating coordinate system, e.g. for description with the Park-Gorev equation system. Nevertheless, many problems require control of phase currents and voltages, for instance when modeling systems in which synchronous generators supply powerful rectifiers. Modeling of complex systems with synchronous generators, semiconductor converters, etc. (with the phase current control necessary for power switch commutation algorithms) becomes achievable with the equation system described in this article. The given model can be used in digital control systems with an internal model. It does not require large computing resources and provides sufficient modeling accuracy.

  17. Effect of umbilical cord milking in term and near term infants: randomized control trial.

    Science.gov (United States)

    Upadhyay, Amit; Gothwal, Sunil; Parihar, Rajeshwari; Garg, Amit; Gupta, Abhilasha; Chawla, Deepak; Gulati, Ish K

    2013-02-01

    The objective of the study was to investigate the effect of umbilical cord milking as compared with early cord clamping on hematological parameters at 6 weeks of age among term and near term neonates. This was a randomized control trial. Eligible neonates (>35 weeks' gestation) were randomized in intervention and control groups (100 each). Neonates of both groups got early cord clamping (within 30 seconds). The cord of the experimental group was milked after cutting and clamping at 25 cm from the umbilicus, whereas in control group cord was clamped near (2-3 cm) the umbilicus and not milked. Both groups got similar routine care. Unpaired Student t and Fisher exact tests were used for statistical analysis. Baseline characteristics were comparable in the 2 groups. Mean hemoglobin (Hgb) (11.9 [1.5] g/dL and mean serum ferritin 355.9 [182.6] μg/L) were significantly higher in the intervention group as compared with the control group (10.8 [0.9] g/dL and 177.5 [135.8] μg/L), respectively, at 6 weeks of age. The mean Hgb and hematocrit at 12 hours and 48 hours was significantly higher in intervention group (P = .0001). The mean blood pressure at 30 minutes, 12 hours, and 48 hours after birth was significantly higher but within normal range. No significant difference was observed in the heart rate, respiratory rate, polycythemia, serum bilirubin, and need of phototherapy in the 2 groups. Umbilical cord milking is a safe procedure and it improved Hgb and iron status at 6 weeks of life among term and near term neonates. Copyright © 2013 Mosby, Inc. All rights reserved.

  18. Developmental control of iodothyronine deiodinases by cortisol in the ovine fetus and placenta near term.

    Science.gov (United States)

    Forhead, Alison J; Curtis, Katrina; Kaptein, Ellen; Visser, Theo J; Fowden, Abigail L

    2006-12-01

    Preterm infants have low serum T4 and T3 levels, which may partly explain the immaturity of their tissues. Deiodinase enzymes are important in determining the bioavailability of thyroid hormones: deiodinases D1 and D2 convert T4 to T3, whereas deiodinase D3 inactivates T3 and produces rT3 from T4. In human and ovine fetuses, plasma T3 rises near term in association with the prepartum cortisol surge. This study investigated the developmental effects of cortisol and T3 on tissue deiodinases and plasma thyroid hormones in fetal sheep during late gestation. Plasma cortisol and T3 concentrations in utero were manipulated by exogenous hormone infusion and fetal adrenalectomy. Between 130 and 144 d of gestation (term 145+/-2 d), maturational increments in plasma cortisol and T3, and D1 (hepatic, renal, perirenal adipose tissue) and D3 (cerebral), and decrements in renal and placental D3 activities were abolished by fetal adrenalectomy. Between 125 and 130 d, iv cortisol infusion raised hepatic, renal, and perirenal adipose tissue D1 and reduced renal and placental D3 activities. Infusion with T3 alone increased hepatic D1 and decreased renal D3 activities. Therefore, in the sheep fetus, the prepartum cortisol surge induces tissue-specific changes in deiodinase activity that, by promoting production and suppressing clearance of T3, may be responsible for the rise in plasma T3 concentration near term. Some of the maturational effects of cortisol on deiodinase activity may be mediated by T3.

  19. MATHEMATICAL MODEL FOR THE STUDY AND DESIGN OF A ROTARY-VANE GAS REFRIGERATION MACHINE

    Directory of Open Access Journals (Sweden)

    V. V. Trandafilov

    2016-08-01

    Full Text Available This paper presents a mathematical model for calculating the main parameters of the operating cycle of a rotary-vane gas refrigerating machine (RVGRM) that affect installation, machine control and the working processes occurring in it under the specified criteria. A calculation procedure and a graphical method for the RVGRM are proposed. A parametric study of the influence of the main geometric and temperature variables on the thermal behavior of the system is carried out. The model considers the polytropic index for compression and expansion in the chamber. Graphs of the chamber pressure and temperature versus the rotation angle of the output shaft are obtained. The possibility of including a regenerative heat exchanger in the cycle is assessed, and the change in the coefficient of performance of the machine after adding the regenerative heat exchanger to the cycle is analyzed. It is shown that installing a regenerator in the RVGRM cycle increases the COP by more than 30%. The simulation results show that the proposed model can be used to design and optimize Stirling-type gas refrigerators.

  20. Dynamic modelling and analysis of multi-machine power systems including wind farms

    Science.gov (United States)

    Tabesh, Ahmadreza

    2005-11-01

    This thesis introduces a small-signal dynamic model, based on a frequency response approach, for the analysis of a multi-machine power system with special focus on an induction machine based wind farm. The proposed approach is an alternative method to the conventional eigenvalue analysis method which is widely employed for small-signal dynamic analyses of power systems. The proposed modelling approach is successfully applied and evaluated for a power system that (i) includes multiple synchronous generators, and (ii) a wind farm based on either fixed-speed, variable-speed, or doubly-fed induction machine based wind energy conversion units. The salient features of the proposed method, as compared with the conventional eigenvalue analysis method, are: (i) computational efficiency since the proposed method utilizes the open-loop transfer-function matrix of the system, (ii) performance indices that are obtainable based on frequency response data and quantitatively describe the dynamic behavior of the system, and (iii) capability to formulate various wind energy conversion unit, within a wind farm, in a modular form. The developed small-signal dynamic model is applied to a set of multi-machine study systems and the results are validated based on comparison (i) with digital time-domain simulation results obtained from PSCAD/EMTDC software tool, and (ii) where applicable with eigenvalue analysis results.

  1. Modeling of Tool Wear in Vibration Assisted Nano Impact-Machining by Loose Abrasives

    Directory of Open Access Journals (Sweden)

    Sagil James

    2014-01-01

    Full Text Available Vibration assisted nano impact-machining by loose abrasives (VANILA) is a novel nanomachining process that combines the principles of vibration assisted abrasive machining and tip-based nanomachining to perform target-specific nanoabrasive machining of hard and brittle materials. An atomic force microscope (AFM) is used as a platform in this process, wherein nanoabrasives, injected in slurry between the workpiece and the vibrating AFM probe which is the tool, impact the workpiece and cause nanoscale material removal. The VANILA process is conducted such that the tool tip does not directly contact the workpiece. The level of precision and quality of the machined features in a nanomachining process is contingent on tool wear, which is inevitable. Initial experimental studies have demonstrated reduced tool wear in the VANILA process as compared to the indentation process, in which the tool directly contacts the workpiece surface. In this study, the tool wear rate during the VANILA process is analytically modeled considering impacts of abrasive grains on the tool tip surface. Experiments are conducted using several tools in order to validate the predictions of the theoretical model. It is seen that the model is capable of accurately predicting the tool wear rate within 10% deviation.

  2. Named Entity Recognition Based on A Machine Learning Model

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2012-09-01

    Full Text Available For recruitment information in Web pages, a novel unified model for named entity recognition is proposed in this study. The model provides a simple statistical framework that incorporates a wide variety of linguistic knowledge and statistical models in a unified way. In our approach, firstly, multi-rules are built for a better representation of the named entity, in order to emphasize the specific semantics and term space of the named entity. Then an optimized algorithm for the hierarchically structured DSTCRFs is performed, in order to pick out the structural attributes of the named entity from the recruitment knowledge and improve the efficiency of training. The experimental results show that the accuracy rate is significantly improved and the complexity of sample training is decreased.

  3. Implications of capacity expansion under uncertainty and value of information: The near-term energy planning of Japan

    Energy Technology Data Exchange (ETDEWEB)

    Krukanont, Pongsak [Energy Economics Laboratory, Department of Socio-Environmental Energy Science, Graduate School of Energy Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Tezuka, Tetsuo [Energy Economics Laboratory, Department of Socio-Environmental Energy Science, Graduate School of Energy Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)]. E-mail: tezuka@energy.kyoto-u.ac.jp

    2007-10-15

    In this paper, we present the near-term analysis of capacity expansion under various uncertainties from the viewpoints of the decision-making process on the optimal allocation of investment and the value of information. An optimization model based on two-stage stochastic programming was developed using real data to describe the Japanese energy system as a case study. Different uncertainty parameters were taken into consideration by a disaggregate analysis of a bottom-up energy modeling approach, including end-use energy demands, plant operating availability and carbon tax rate. Four policy regimes represented as energy planning or policy options were also studied, covering business as usual, renewable energy target, carbon taxation and nuclear phase-out regimes. In addition, we investigated the role of various energy technologies and the behavior of the value of information with respect to the probability function of the worst-case scenario. This value of information provides decision makers with a quantitative analysis for the cost to obtain perfect information about the future. The developed model could be regarded as an applicable tool for decision support to provide a better understanding in energy planning and policy analyses.

  4. Hopfield models as nondeterministic finite-state machines

    NARCIS (Netherlands)

    Drossaers, Marc F.J.

    1992-01-01

    The use of neural networks for integrated linguistic analysis may be profitable. This paper presents the first results of our research on that subject: a Hopfield model for syntactical analysis. We construct a neural network as an implementation of a bounded push-down automaton, which can accept con

  5. Mathematical Model of Lifetime Duration at Insulation of Electrical Machines

    Directory of Open Access Journals (Sweden)

    Mihaela Răduca

    2009-10-01

    Full Text Available Abstract. This paper presents a mathematical model of the lifetime of hydro generator stator winding insulation when damage regimes can appear in the hydro generator. The estimation is made by taking into account the programmed (scheduled) and non-programmed revisions, through the introduction and correlation of newly defined notions.

  6. Modelling rollover behaviour of exacavator-based forest machines

    Science.gov (United States)

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  7. The Sausage Machine: A New Two-Stage Parsing Model.

    Science.gov (United States)

    Frazier, Lyn; Fodor, Janet Dean

    1978-01-01

    The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)

  8. Improved Quality Prediction Model for Multistage Machining Process Based on Geometric Constraint Equation

    Institute of Scientific and Technical Information of China (English)

    ZHU Limin; HE Gaiyun; SONG Zhanjie

    2016-01-01

    Product variation reduction is critical to improve process efficiency and product quality, especially for multistage machining processes (MMP). However, due to variation accumulation and propagation, it becomes quite difficult to predict and reduce product variation for MMP. While statistical process control can be used to control product quality, it is used mainly to monitor process change rather than to analyze the cause of product variation. In this paper, based on a differential description of the contact kinematics of locators and part surfaces, and the geometric constraint equation defined by the locating scheme, an improved analytical variation propagation model for MMP is presented, in which the influence of both locator position error and machining error on part quality is considered, whereas traditional models usually focus on datum error and fixture error. Coordinate transformation theory is used to reflect the generation and transmission laws of error in the establishment of the model. The concept of the deviation matrix is heavily applied to establish an explicit mapping between the geometric deviation of the part and the process error sources. In each machining stage, the part deviation is formulated as three separate components corresponding to three different kinds of error sources, which can be further applied to fault identification and design optimization for complicated machining processes. An example part for MMP is given to validate the effectiveness of the methodology. The experimental results show that the model prediction and the actual measurement match well. This paper provides a method to predict part deviation under the influence of fixture error, datum error and machining error, and it enriches the methods of quality prediction for MMP.

  9. The Use of Machine Aids in Dynamic Multi-Task Environments: A Comparison of an Optimal Model to Human Behavior.

    Science.gov (United States)

    1982-06-01

    advantage. Unproductive machines, however, were used far more frequently than indicated by the optimal model. Increasing the cost of using machines was...found to have a greater inhibiting effect on their use than did decreasing machine productivity. ...and the method used to deal with it. The cognitive interface is like the storm front between a warm air mass and a cold air mass. Both are described as

  10. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, the urban-planning authorities need a range of tools that enable such a task to be performed. An essential step during the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step towards evaluating it, providing a basis for designing or adapting it to match people's expectations as well. In this sense, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria; the classification model is intended to be used as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified).

  11. Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings

    Directory of Open Access Journals (Sweden)

    Young Min Kim

    2016-06-01

    Full Text Available The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first-principles-based simulation model for existing buildings, mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert's effort, a data-driven (so-called "inverse") approach has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller's energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. This paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the results of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP, and that GP is the most sensitive to the training period among the three models.
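
    A hedged sketch of the kind of three-way comparison described here (ANN, SVM and GP regressors predicting an energy-consumption target) could be written with scikit-learn as follows; the data are synthetic stand-ins for BEMS measurements, and the hyperparameters are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for BEMS inputs and chiller energy consumption.
X, y = make_regression(n_samples=400, n_features=5, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
    "SVM": SVR(kernel="rbf", C=100.0),
    "GP":  GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
}
for name, reg in models.items():
    pipe = make_pipeline(StandardScaler(), reg)
    pipe.fit(X_tr, y_tr)
    print(name, "R^2 =", round(pipe.score(X_te, y_te), 3))
```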

  12. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    Science.gov (United States)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  13. Limitations Of The Current State Space Modelling Approach In Multistage Machining Processes Due To Operation Variations

    Science.gov (United States)

    Abellán-Nebot, J. V.; Liu, J.; Romero, F.

    2009-11-01

    The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
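
    The state space formulation referred to in these records is typically a stream-of-variation recursion of the form x_k = A_k x_{k-1} + B_k u_k + w_k, y_k = C_k x_k + v_k, where x_k is the part deviation after stage k and u_k collects the fixture and operation error sources. A minimal numerical sketch (the matrices and error magnitudes below are illustrative, not taken from the paper) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stages, n_features = 3, 4

# Illustrative system matrices for each machining stage.
A = [np.eye(n_features) for _ in range(n_stages)]        # deviation carried over between stages
B = [0.5 * np.eye(n_features) for _ in range(n_stages)]  # how fixture/operation errors enter
C = np.eye(n_features)                                    # measurement of key product characteristics

x = np.zeros(n_features)                                  # incoming part deviation
for k in range(n_stages):
    u_k = rng.normal(0, 0.01, n_features)                 # fixture / datum / machining error sources
    w_k = rng.normal(0, 0.002, n_features)                # unmodelled noise
    x = A[k] @ x + B[k] @ u_k + w_k

y = C @ x + rng.normal(0, 0.001, n_features)              # measured deviation after the last stage
print("predicted final part deviation:", np.round(y, 4))
```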

  14. SOFT SENSING MODEL BASED ON SUPPORT VECTOR MACHINE AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu; Shao Huihe; Wang Xiaofan

    2004-01-01

    Soft sensors are widely used in industrial process control. They play an important role in improving product quality and assuring production safety. The core of a soft sensor is the construction of the soft sensing model. A new soft sensing modeling method based on support vector machines (SVM) is proposed. SVM is a new machine learning method based on statistical learning theory and is powerful for problems characterized by small samples, nonlinearity, high dimensionality and local minima. The proposed method is applied to the estimation of the freezing point of light diesel oil in a distillation column. The estimated outputs of the SVM-based soft sensing model match the real values of the freezing point and follow its varying trend very well. Experimental results show that SVM provides a new effective method for soft sensing modeling and has promising applications in industrial processes.

  15. A Novel Soft Sensor Modeling Approach Based on Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    Feng Rui(冯瑞); Song Chunlin; Zhang Yanzhu; Shao Huihe

    2004-01-01

    Artificial Neural Networks (ANNs) such as radial basis function neural networks (RBFNNs) have been successfully used in soft sensor modeling. However, the generalization ability of conventional ANNs is not very good. For this reason, we present a novel soft sensor modeling approach based on Support Vector Machines (SVMs). Since standard SVMs are limited in speed and size when training on large data sets, we propose Least Squares Support Vector Machines (LS-SVMs) and apply them to soft sensor modeling. Systematic analysis is performed and the result indicates that the proposed method provides satisfactory performance with excellent approximation and generalization properties. Monte Carlo simulations show that our soft sensor modeling approach achieves performance superior to the conventional method based on RBFNNs.
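
    Least squares SVMs replace the quadratic program of standard SVMs with a single linear system of the form [[0, 1ᵀ], [1, K + I/γ]] [b; α] = [0; y], with predictions f(x) = Σ αᵢ K(x, xᵢ) + b. A minimal sketch of LS-SVM regression with an RBF kernel follows (toy data and illustrative hyperparameters, not the paper's soft-sensor data):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gam=10.0, gamma=1.0):
    """Train LS-SVM regression by solving one linear system instead of a QP."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gam
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

X = np.linspace(0, 6, 60)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=60)
b, alpha = lssvm_fit(X, y)
print("fit at x=1.0:", lssvm_predict(X, b, alpha, np.array([[1.0]]))[0])
```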

  16. DEVELOPMENT OF FUZZY MODEL FOR POWDER MIXED ELECTRO DISCHARGE MACHINING USING COPPER AND GRAPHITE TOOL MATERIAL

    Directory of Open Access Journals (Sweden)

    SONI S.S.

    2012-09-01

    Full Text Available This paper describes the development of a fuzzy logic model for the powder mixed electro discharge machining (PMEDM) process. The developed fuzzy model uses triangular and trapezoidal membership functions for fuzzification and the centre-of-area method for defuzzification. The process parameters selected as control variables for the experimental work were tool material, type of powder, concentration of powder in the dielectric medium and peak current. The machining operation was conducted using copper and graphite as electrode materials on a mild steel workpiece. The powder additives used in the experiments were aluminum and silicon because of their significantly different electrical and thermal properties. The dielectric fluid used was kerosene. The response parameters selected are material removal rate and electrode wear rate. Response surfaces are developed from the fuzzy system model, and exemplar plots are developed to compare the responses of the fuzzy model with the experiment.
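
    As a small illustration of the fuzzification/defuzzification machinery mentioned here (triangular membership functions and centre-of-area defuzzification), not the authors' actual rule base, consider a hypothetical output universe for material removal rate:

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical output universe: material removal rate (mm^3/min).
mrr = np.linspace(0.0, 10.0, 1001)
low, med, high = tri_mf(mrr, 0, 2, 4), tri_mf(mrr, 3, 5, 7), tri_mf(mrr, 6, 8, 10)

# Suppose the rule base fired with strengths 0.2 (low), 0.7 (medium), 0.1 (high).
aggregated = np.maximum.reduce([np.minimum(low, 0.2),
                                np.minimum(med, 0.7),
                                np.minimum(high, 0.1)])

# Centre-of-area (centroid) defuzzification on the discrete universe.
crisp = (aggregated * mrr).sum() / aggregated.sum()
print("defuzzified MRR:", round(crisp, 2))
```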

  17. Secure State UML: Modeling and Testing Security Concerns of Software Systems Using UML State Machines

    Directory of Open Access Journals (Sweden)

    S. Batool

    2014-05-01

    Full Text Available In this research we present a technique by which extended UML models can be converted to standard UML models so that existing MBT techniques can be applied directly to them. Existing Model Based Testing (MBT) techniques cannot be directly applied to extended UML models because of differences in modeling notation and new model elements. Verification of these models is also very important. Realizing and testing non-functional requirements such as efficiency, portability and security at the model level strengthens the ability of the model to reduce risk, cost and the probability of system failure in a cost-effective way. Access control is the most widely used technique for implementing security in software systems. Existing approaches to security modeling focus on the representation of access control policies such as authentication and role-based access control by introducing security-oriented model elements through extensions of the Unified Modelling Language (UML). But doing so hinders the potential and application of MBT techniques to verify these models and test access control policies. In this research we introduce a technique, Secure State UML, to formally design security models with secure UML and then transform them to UML state machine diagrams so that they can be tested and verified by existing MBT techniques. By applying the proposed technique to case studies, we found that MBT techniques can be applied to the resulting state machine diagrams and that the generated test paths have the potential to identify risks associated with security constraint violations.

  18. A New Color Constancy Model for Machine Vision

    Institute of Scientific and Technical Information of China (English)

    TAO Linmi; XU Guangyou

    2001-01-01

    Both physiological and psychological evidence suggests that the human visual system analyzes images in neural subsystems tuned to different attributes of the stimulus. The color module and the lightness module are such subsystems. Building on this general result, a new physical model of the trichromatic system has been developed to deal with color constancy in computer vision. A normal color image is split into two images: a gray-scale image and an equal-lightness color image for the two modules. Correspondingly, a two-dimensional descriptor is applied to describe the surface reflectance property in the equal-lightness color image. This description of surface spectral reflectance has the property of color constancy. Image segmentation experiments based on the color properties of objects show that the presented model is effective.

  19. Abstract Machine as a Model of Content Management Information System

    OpenAIRE

    Zykov, Sergey V.

    2006-01-01

    Enterprise content management is an urgent issue of current scientific and practical activities in software design and implementation. However, published papers to date give insufficient coverage of the theoretical background of the software in question. The paper attempts to build a state-based model of content management. In accordance with the theoretical principles outlined, a content management information system (CMIS) has been implemented in a large international oil-and-gas group of...

  20. Research on Modeling and Control of Regenerative Braking for Brushless DC Machines Driven Electric Vehicles

    OpenAIRE

    Jian-ping Wen; Chuan-wei Zhang

    2015-01-01

    In order to improve energy utilization rate of battery-powered electric vehicle (EV) using brushless DC machine (BLDCM), the model of braking current generated by regenerative braking and control method are discussed. On the basis of the equivalent circuit of BLDCM during the generative braking period, the mathematic model of braking current is established. By using an extended state observer (ESO) to observe actual braking current and the unknown disturbances of regenerative braking system, ...

  1. Comparison between 2D and 3D Modelling of Induction Machine Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Zelmira Ferkova

    2015-01-01

    Full Text Available The paper compares two different ways (2D and 3D) of modelling a two-phase squirrel-cage induction machine using the finite element method (FEM). It focuses mainly on differences between the starting characteristics obtained from both types of model. It also discusses the influence of skewed rotor slots on the harmonic content of the air gap flux density and summarizes some issues of both approaches.

  2. Interpretation of machine-learning-based disruption models for plasma control

    Science.gov (United States)

    Parsons, Matthew S.

    2017-08-01

    While machine learning techniques have been applied within the context of fusion for predicting plasma disruptions in tokamaks, they are typically interpreted with a simple ‘yes/no’ prediction or perhaps a probability forecast. These techniques take input signals, which could be real-time signals from machine diagnostics, to make a prediction of whether a transient event will occur. A major criticism of these methods is that, due to the nature of machine learning, there is no clear correlation between the input signals and the output prediction result. Here is proposed a simple method that could be applied to any existing prediction model to determine how sensitive the state of a plasma is at any given time with respect to the input signals. This is accomplished by computing the gradient of the decision function, which effectively identifies the quickest path away from a disruption as a function of the input signals and therefore could be used in a plasma control setting to avoid them. A numerical example is provided for illustration based on a support vector machine model, and the application to real data is left as an open opportunity.
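
    A hedged sketch of the proposed idea, computing the gradient of a trained SVM's decision function with respect to the input signals by finite differences (synthetic data; the diagnostic signals and disruption labels are placeholders), might look like this; the sign of the "away from disruption" direction depends on which class is labelled disruptive.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Stand-in for diagnostic signals (features) and disruption labels.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)

def decision_gradient(clf, x, eps=1e-4):
    """Finite-difference gradient of the decision function at one plasma state x."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (clf.decision_function([xp])[0] - clf.decision_function([xm])[0]) / (2 * eps)
    return grad

x0 = X[0]
g = decision_gradient(clf, x0)
# Moving along -g (or +g, depending on the label convention) is the quickest path
# toward or away from the decision boundary as a function of the input signals.
print("decision-function gradient:", np.round(g / np.linalg.norm(g), 3))
```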

  3. Systematic Geometric Error Modeling for Workspace Volumetric Calibration of a 5-axis Turbine Blade Grinding Machine

    Institute of Scientific and Technical Information of China (English)

    Abdul Wahid Khan; Chen Wuyi

    2010-01-01

    A systematic geometric model has been presented for calibration of a newly designed 5-axis turbine blade grinding machine. This machine is designed to serve a specific purpose: to attain high-accuracy and high-efficiency grinding of turbine blades by eliminating the hand grinding process. Although its topology is RPPPR (P: prismatic; R: rotary), its design is quite distinct from competitive machine tools. As error quantification is the only way to investigate, maintain and improve its accuracy, calibration is recommended for its performance assessment and acceptance testing. A systematic geometric error modeling technique is implemented and 52 position-dependent and position-independent errors are identified while considering the machine as five rigid bodies and eliminating the set-up errors of workpiece and cutting tool. 39 of them are found to be influential errors and are accommodated for finding the resultant effect between the cutting tool and the workpiece in the workspace volume. Rigid body kinematics techniques and homogeneous transformation matrices are used for error synthesis.
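
    The error synthesis mentioned here chains 4x4 homogeneous transformation matrices (HTMs) along the kinematic chain, with small error motions inserted between the nominal transforms. The toy sketch below uses only two translational axes and invented error values, not the paper's RPPPR chain or identified errors:

```python
import numpy as np

def translation(x=0.0, y=0.0, z=0.0):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def small_error(dx, dy, dz, ex, ey, ez):
    """HTM of small translational (dx,dy,dz) and angular (ex,ey,ez) errors, first order."""
    E = np.eye(4)
    E[:3, :3] += np.array([[0.0, -ez,  ey],
                           [ez,  0.0, -ex],
                           [-ey, ex,  0.0]])
    E[:3, 3] = [dx, dy, dz]
    return E

# Nominal chain: X carriage move then Z ram move (illustrative two-axis machine).
T_x = translation(x=100.0) @ small_error(0.005, 0.0, 0.002, 0.0, 20e-6, 0.0)
T_z = translation(z=50.0)  @ small_error(0.0, 0.001, 0.003, 10e-6, 0.0, 0.0)

tool_nominal = translation(x=100.0) @ translation(z=50.0) @ np.array([0, 0, 0, 1.0])
tool_actual  = (T_x @ T_z) @ np.array([0, 0, 0, 1.0])
print("volumetric error at this pose (mm):", np.round(tool_actual[:3] - tool_nominal[:3], 4))
```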

  4. Statistical regression modeling and machinability study of hardened AISI 52100 steel using cemented carbide insert

    Directory of Open Access Journals (Sweden)

    Amlana Panda

    2017-01-01

    Full Text Available The present study investigates the performance and feasibility of applying a low-cost cemented carbide insert in dry machining of AISI 52100 steel hardened to (55 ± 1) HRC, which is rarely researched as far as machining of bearing steel is concerned. Machinability aspects, i.e. flank wear, surface roughness and chip morphology, have been investigated and a statistical regression model has been developed. The tests were conducted based on a Taguchi L16 orthogonal array (OA) using machining parameters such as cutting speed, feed and depth of cut. It is observed that the uncoated cemented carbide insert performs well at some selected runs (Runs 1, 5 and 9), which shows its feasibility for hard turning applications. The serrated saw-tooth chips of burnt blue colour adversely affect the surface quality. The adequacy of the developed statistical regression model has been checked using ANOVA (based on the F value, P value and R2 value) and normal probability plots at the 95% confidence level. The optimal parametric combinations may be adopted while turning hardened AISI 52100 steel in a dry environment with an uncoated cemented carbide insert.

  5. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque, verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.

  6. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  7. Solving the Bose-Hubbard Model with Machine Learning

    Science.gov (United States)

    Saito, Hiroki

    2017-09-01

    Motivated by the recent successful application of artificial neural networks to quantum many-body problems [G. Carleo and M. Troyer, Science 355, 602 (2017)], a method to calculate the ground state of the Bose-Hubbard model using a feedforward neural network is proposed. The results are in good agreement with those obtained by exact diagonalization and the Gutzwiller approximation. The method of neural-network quantum states is promising for solving quantum many-body problems of ultracold atoms in optical lattices.

  8. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

    conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction......, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures...

  9. NUMERICAL MODELING OF MULTICYLINDER ELECTRO-HYDRAULIC SYSTEM AND CONTROLLER DESIGN FOR SHOCK TEST MACHINE

    Institute of Scientific and Technical Information of China (English)

    CHU Deying; ZHANG Zhiyi; WANG Gongxian; HUA Hongxing

    2007-01-01

    A high-fidelity dynamic model of a high-energy, hydraulically actuated shock test machine for heavy-weight devices is presented, in order to satisfy the newly established shock-resistance standard, to simulate actual underwater explosion environments in the laboratory, and to increase the testing capability of the shock test machine. In order to produce the required negative shock pulse in the given time duration, four hydraulic actuators are utilized. The model is then used to formulate an advanced feedforward controller for the system to produce the required negative waveform and to address the motion synchronization of the four cylinders. The model provides a safe and easily controllable way to perform a "virtual test" before starting potentially destructive tests on specimens and to predict the performance of the system. Simulation results demonstrate the effectiveness of the controller.

  10. A Simple Computational Model of a jellyfish-like flying machine

    Science.gov (United States)

    Fang, Fang; Ristroph, Leif; Shelley, Michael

    2013-11-01

    We explore theoretically the aerodynamics of a jellyfish-like flying machine recently fabricated at NYU. This experimental device achieves flight and hovering by opening and closing a set of flapping wings. It displays orientational flight stability without additional control surfaces or feedback control. Our model machine consists of two symmetric massless flapping wings connected to a body with mass and moment of inertia. A vortex sheet shedding and wake model is used for the flow simulation. Use of the Fast Multipole Method (FMM), and adaptive addition/deletion of vortices, allows us to simulate for long times and resolve complex wakes. We use our model to explore the physical parameters that maintain body hovering, its ascent and descent, and investigate the stability of these states.

  11. A model for a multi-class classification machine

    Science.gov (United States)

    Rau, Albrecht; Nadal, Jean-Pierre

    1992-06-01

    We consider the properties of multi-class neural networks, where each neuron can be in several different states. The motivations for considering such systems are manifold. In image processing for example, the different states correspond to the different grey tone levels. Another multi-class classification task implemented on a feed-forward network is the analysis of DNA sequences or the prediction of the secondary structure of proteins from the sequence of amino acids. To investigate the behaviour of such systems, one specific dynamical rule - the “winner-take-all” rule - is studied. Gauge invariances of the model are analysed. For a multi-class perceptron with N Q-state input neurons and a Q′-state output neuron, the maximal number of patterns that can be stored in the large N limit is found to be proportional to N(Q - 1)f(Q′), where f(Q′) is a slowly increasing and bounded function of order 1.

  12. Contribution to the modelling of induction machines by fractional order; Contribution a la modelisation dynamique d'ordre non entier de la machine asynchrone a cage

    Energy Technology Data Exchange (ETDEWEB)

    Canat, S.

    2005-07-15

    The induction machine is the most widespread machine in industry. Its traditional modeling does not take into account the eddy currents in the rotor bars, which nevertheless induce strong variations in both the resistance and the inductance of the rotor. This diffusive phenomenon, called 'skin effect', can be modeled by a compact transfer function using fractional derivatives (non-integer order). This report theoretically analyzes the electromagnetic phenomenon on a single rotor bar before addressing the rotor as a whole. The analysis is confirmed by finite element calculations of the magnetic field, which are exploited to identify a fractional-order model of the induction machine (Levenberg-Marquardt identification method). The model is then confronted with an identification based on experimental results. Finally, an automatic method is developed to approximate the dynamic model by an integer-order transfer function over a frequency band. (author)

  13. Hybrid Swarm Algorithms for Parameter Identification of an Actuator Model in an Electrical Machine

    Directory of Open Access Journals (Sweden)

    Ying Wu

    2011-01-01

    Full Text Available Efficient identification and control algorithms are needed when active vibration suppression techniques are developed for industrial machines. In this paper a new actuator for reducing rotor vibrations in electrical machines is investigated. Model-based control is needed for designing the voltage input algorithm, and therefore proper models of the actuator must be available. In addition to the traditional prediction error method, a new knowledge-based Artificial Fish-Swarm optimization Algorithm (AFA) with crossover, CAFAC, is proposed to identify the parameters of the new model. Then, in order to obtain fast convergence of the algorithm in the case of a 30 kW two-pole squirrel-cage induction motor, the CAFAC and Particle Swarm Optimization (PSO) are combined to identify the parameters of the machine and construct a linear time-invariant (LTI) state-space model. Besides that, the prediction error method (PEM) is also employed to identify the induction motor, producing a black-box model corresponding to the input-output measurements.

  14. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  15. EXPERIMENTAL EVALUATION OF NUMERICAL MODELS TO REPRESENT THE STIFFNESS OF LAMINATED ROTOR CORES IN ELECTRICAL MACHINES

    Directory of Open Access Journals (Sweden)

    HIDERALDO L. V. SANTOS

    2013-08-01

    Full Text Available Usually, electrical machines have a metallic cylinder made up of a compacted stack of thin metal plates (referred to as the laminated core) assembled with an interference fit on the shaft. The laminated structure is required to improve the electrical performance of the machine and, besides adding inertia, also enhances the stiffness of the system. Inadequate characterization of this element may lead to errors when assessing the dynamic behavior of the rotor. The aim of this work was therefore to evaluate three beam models used to represent the laminated core of rotating electrical machines. The following finite element beam models are analyzed: (i) an “equivalent diameter model”, (ii) an “unbranched model” and (iii) a “branched model”. To validate the numerical models, experiments are performed with nine different electrical rotors so that the first non-rotating natural frequencies and corresponding vibration modes in a free-free support condition are obtained experimentally. The models are evaluated by comparing the natural frequencies and corresponding vibration mode shapes obtained experimentally with those obtained numerically. Finally, a critical discussion of the behavior of the beam models studied is presented. The results show that for the majority of the rotors tested, the “branched model” is the most suitable.

  16. Using Machine Learning to Create Turbine Performance Models (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, A.

    2013-04-01

    Wind turbine power output is known to be a strong function of wind speed, but is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub height wind speed, followed by hub height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No new data is required in comparison to the data that are usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce bias in power predictions that arise because of different turbulence and shear at the new site, compared to the test site.
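
    A minimal sketch of the idea, assuming synthetic data rather than the aerostructural simulations used in the work: fit a regression tree that maps hub-height wind speed, turbulence intensity, and shear exponent to power, then query it for an arbitrary atmospheric condition.

    ```python
    # Sketch only: regression-tree turbine performance model on made-up data.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    ws = rng.uniform(3, 25, 2000)                 # hub-height wind speed [m/s]
    ti = rng.uniform(0.05, 0.25, 2000)            # turbulence intensity [-]
    shear = rng.uniform(0.0, 0.4, 2000)           # shear exponent [-]

    # Toy stand-in for the simulated response: rated 1500 kW, degraded by TI and shear.
    power = np.clip(1500 * (ws / 12) ** 3, 0, 1500) * (1 - 0.5 * ti) * (1 - 0.2 * shear)

    X = np.column_stack([ws, ti, shear])
    tree = DecisionTreeRegressor(min_samples_leaf=20).fit(X, power)

    print(tree.predict([[9.5, 0.12, 0.2]]))       # predicted kW for one condition
    ```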

  17. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.
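
    As a hedged illustration of one model family named above, the sketch below implements the forward algorithm for a tiny two-state Hidden Markov Model (for instance, a "benign"/"compromised" host emitting discrete alert levels); all probabilities are made-up placeholders, not values from the chapter.

    ```python
    # Sketch only: HMM forward algorithm with illustrative probabilities.
    import numpy as np

    pi = np.array([0.9, 0.1])                     # initial state distribution
    A = np.array([[0.95, 0.05],                   # state transition matrix
                  [0.10, 0.90]])
    B = np.array([[0.7, 0.2, 0.1],                # P(alert level | state)
                  [0.1, 0.3, 0.6]])

    def forward(obs):
        """Return P(observation sequence) and the filtered state posteriors."""
        alpha = pi * B[:, obs[0]]
        posts = [alpha / alpha.sum()]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            posts.append(alpha / alpha.sum())
        return alpha.sum(), np.array(posts)

    likelihood, posteriors = forward([0, 1, 2, 2])
    print(likelihood, posteriors[-1])             # P(state | alerts observed so far)
    ```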

  18. Response surface modelling of tool electrode wear rate and material removal rate in micro electrical discharge machining of Inconel 718

    DEFF Research Database (Denmark)

    Puthumana, Govindan

    2017-01-01

    conductivity and high strength, making it extremely difficult to machine. Micro-Electrical Discharge Machining (Micro-EDM) is a non-conventional method that has the potential to overcome these restrictions for machining of Inconel 718. Response Surface Method (RSM) was used for modelling the tool Electrode Wear...

  19. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of the machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that the MATLAB/SIMULINK is an effective tool to study the complex...

  20. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  1. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of the machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that the MATLAB/SIMULINK is an effective tool to study the complex...

  2. Mask synthesis and verification based on geometric model for surface micro-machined MEMS

    Institute of Scientific and Technical Information of China (English)

    LI Jian-hua; LIU Yu-sheng; GAO Shu-ming

    2005-01-01

    Traditional MEMS (microelectromechanical system) design methodology is not a structured method and has become an obstacle to creative MEMS design. In this paper, a novel method of mask synthesis and verification for surface micro-machined MEMS is proposed, which is based on the geometric model of the MEMS device. The emphasis is on synthesizing the masks on the basis of the layer model generated from the geometric model of the MEMS device. The method comprises several steps: correction of the layer model, generation of initial and final masks (including multi-layer etch masks), and mask simulation. Finally, some test results are given.

  3. Near term hybrid passenger vehicle development program. Phase I. Appendices C and D. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    The derivation and preliminary design of the Near Term Hybrid Vehicle (NTHV) are presented. The NTHV uses a modified GM Citation body, a VW Rabbit turbocharged diesel engine, a 24 kW compound dc electric motor, a modified GM automatic transmission, and an on-board computer for transmission control. The following NTHV information is presented: a summary of the trade-off study results; the overall vehicle design; the selection of the design concept and the base vehicle (the Chevrolet Citation), the battery pack configuration, structural modifications, occupant protection, vehicle dynamics, and aerodynamics; the powertrain design, including the transmission, coupling devices, engine, motor, accessory drive, and powertrain integration; the motor controller; the battery type, duty cycle, charger, and thermal requirements; the control system (electronics); the identification of requirements, software algorithm requirements, processor selection and system design, sensor and actuator characteristics, displays, diagnostics, and other topics; the environmental system, including heating, air conditioning, and compressor drive; the specifications, weight breakdown, and energy consumption measures; advanced technology components; and the data sources and assumptions used. (LCL)

  4. Food System Trade Study for a Near-Term Mars Mission

    Science.gov (United States)

    Levri, Julie; Luna, Bernadette (Technical Monitor)

    2000-01-01

    This paper evaluates several food system options for a near-term Mars mission, based on plans for the 120-day BIO-Plex test. Food systems considered in the study are based on the International Space Station (ISS) Assembly Phase and Assembly Complete food systems. The four systems considered are: 1) ISS assembly phase food system (US portion) with individual packaging without salad production; 2) ISS assembly phase food system (US portion) with individual packaging, with salad production; 3) ISS assembly phase food system (US portion) with bulk packaging, with salad production; 4) ISS assembly complete food system (US portion) with bulk packaging with salad and refrigeration/freezing. The food system options are assessed using equivalent system mass (ESM), which evaluates each option based upon the mass, volume, power, cooling and crewtime requirements that are associated with each food system option. However, since ESM is unable to elucidate the differences in psychological benefits between the food systems, a qualitative evaluation of each option is also presented.
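
    A minimal sketch of an equivalent system mass comparison is given below; the equivalency factors and the per-option mass, volume, power, cooling, and crew-time numbers are placeholders, not the values used in the study.

    ```python
    # Sketch only: ESM = mass plus volume/power/cooling/crew-time converted to
    # kilogram-equivalents with assumed equivalency factors.
    def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_h,
            v_eq=9.16, p_eq=87.0, c_eq=60.0, ct_eq=0.5):
        """All equivalency factors here are illustrative placeholders."""
        return (mass_kg + volume_m3 * v_eq + power_kw * p_eq
                + cooling_kw * c_eq + crewtime_h * ct_eq)

    options = {
        "ISS assembly, individual pkg, no salad": esm(2800, 9.0, 0.8, 0.8, 300),
        "ISS complete, bulk pkg, salad + freezer": esm(2400, 11.0, 3.5, 3.5, 450),
    }
    for name, value in options.items():
        print(f"{name}: {value:,.0f} kg-equivalent")
    ```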

  5. California Power-to-Gas and Power-to-Hydrogen Near-Term Business Case Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Eichman, Josh [National Renewable Energy Lab. (NREL), Golden, CO (United States); Flores-Espino, Francisco [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    Flexible operation of electrolysis systems represents an opportunity to reduce the cost of hydrogen for a variety of end-uses while also supporting grid operations and thereby enabling greater renewable penetration. California is an ideal location to realize that value on account of growing renewable capacity and markets for hydrogen as a fuel cell electric vehicle (FCEV) fuel, for refineries, and for other end-uses. Shifting the production of hydrogen to avoid high-cost electricity, participating in utility and system operator markets, and installing renewable generation to avoid utility charges and increase revenue from the Low Carbon Fuel Standard (LCFS) program can result in around a $2.5/kg (21%) reduction in the production and delivery cost of hydrogen from electrolysis. This reduction can be achieved without impacting the consumers of hydrogen. Additionally, future strategies for reducing hydrogen cost were explored, including a lower cost of capital, participation in the Renewable Fuel Standard program, capital cost reduction, and increased LCFS value. Each must be achieved independently, and each could contribute to further reductions. Under the assumptions in this study, a 29% cost reduction is found if all future strategies are realized. Flexible hydrogen production can simultaneously improve the performance of, and decarbonize, multiple energy sectors. The lessons learned from this study should be used to understand near-term cost drivers and to support longer-term research activities to further improve the cost effectiveness of grid-integrated electrolysis systems.

  6. Cerebral oxygenation during postasphyxial seizures in near-term fetal sheep.

    Science.gov (United States)

    Gonzalez, Hernan; Hunter, Christian J; Bennet, Laura; Power, Gordon G; Gunn, Alistair J

    2005-07-01

    After exposure to asphyxia, infants may develop both prolonged, clinically evident seizures and shorter, clinically silent seizures; however, their effect on cerebral tissue oxygenation is unclear. We therefore examined the hypothesis that the increase in oxygen delivery during postasphyxial seizures might be insufficient to meet the needs of increased metabolism, thus causing a fall in tissue oxygenation, in unanesthetized near-term fetal sheep in utero (gestational age 125+/-1 days). Fetuses were administered an infusion of the specific adenosine A1 receptor antagonist 8-cyclopentyl-1,3-dipropylxanthine, followed by 10 mins of asphyxia induced by complete umbilical cord occlusion. The fetuses then recovered for 3 days. Sixty-one episodes of electrophysiologically defined seizures were identified in five fetuses. Tissue PO(2) (tPO(2)) did not change significantly during short seizures, but fell significantly during seizures lasting more than 3.5 mins. During these prolonged seizures, cortical blood flow did not begin to increase until tPO(2) had begun to fall, and then rose more slowly than the increase in metabolism, with a widening of the brain-to-blood temperature gradient. In conclusion, in the immature brain, during prolonged, but not short seizures, there is a transient mismatch between cerebral blood flow and metabolism leading to significant cerebral deoxygenation.

  7. The peaks of eternal light: A near-term property issue on the moon

    Science.gov (United States)

    Elvis, M.; Milligan, T.; . Krolikowski, A.

    2016-12-01

    The Outer Space Treaty makes it clear that the Moon is the 'province of all mankind', with the latter ordinarily understood to exclude state or private appropriation of any portion of its surface. However, there are indeterminacies in the Treaty and in space law generally over the issue of appropriation. These indeterminacies might permit a close approximation to a property claim or some manner of 'quasiproperty'. The recently revealed highly inhomogeneous distribution of lunar resources changes the context of these issues. We illustrate this altered situation by considering the Peaks of Eternal Light. They occupy about one square kilometer of the lunar surface. We consider a thought experiment in which a Solar telescope is placed on one of the Peaks of Eternal Light at the lunar South pole for scientific research. Its operation would require non-disturbance, and hence that the Peak remain unvisited by others, effectively establishing a claim of protective exclusion and de facto appropriation. Such a telescope would be relatively easy to emplace with today's technology and so poses a near-term property issue on the Moon. While effective appropriation of a Peak might proceed without raising some of the familiar problems associated with commercial development (especially lunar mining), the possibility of such appropriation nonetheless raises some significant issues concerning justice and the safeguarding of scientific practice on the lunar surface. We consider this issue from scientific, technical, ethical and policy viewpoints.

  8. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Under contract to the Jet Propulsion Laboratory of the California Institute of Technology, Minicars conducted Phase I of the Near-Term Hybrid Passenger Vehicle (NTHV) Development Program. This program led to the preliminary design of a hybrid (electric and internal combustion engine powered) vehicle and fulfilled the objectives set by JPL. JPL requested that the report address certain specific topics. A brief summary of all Phase I activities is given initially; the hybrid vehicle preliminary design is described in Sections 4, 5, and 6. Table 2 of the Summary lists performance projections for the overall vehicle and some of its subsystems. Section 4.5 gives references to the more-detailed design information found in the Preliminary Design Data Package (Appendix C). Alternative hybrid-vehicle design options are discussed in Sections 3 through 6. A listing of the tradeoff study alternatives is included in Section 3. Computer simulations are discussed in Section 9. Section 8 describes the supporting economic analyses. Reliability and safety considerations are discussed specifically in Section 7 and are mentioned in Sections 4, 5, and 6. Section 10 lists conclusions and recommendations arrived at during the performance of Phase I. A complete bibliography follows the list of references.

  9. Adrenal glands are essential for activation of glucogenesis during undernutrition in fetal sheep near term.

    Science.gov (United States)

    Fowden, A L; Forhead, A J

    2011-01-01

    In adults, the adrenal glands are essential for the metabolic response to stress, but little is known about their role in fetal metabolism. This study examined the effects of adrenalectomizing fetal sheep on glucose and oxygen metabolism in utero in fed conditions and after maternal fasting for 48 h near term. Fetal adrenalectomy (AX) had little effect on the rates of glucose and oxygen metabolism by the fetus or uteroplacental tissues in fed conditions. Endogenous glucose production was negligible in both AX and intact, sham-operated fetuses in fed conditions. Maternal fasting reduced fetal glucose levels and umbilical glucose uptake in both groups of fetuses to a similar extent but activated glucose production only in the intact fetuses. The lack of fasting-induced glucogenesis in AX fetuses was accompanied by falls in fetal glucose utilization and oxygen consumption not seen in intact controls. The circulating concentrations of cortisol and total catecholamines, and the hepatic glycogen content and activities of key gluconeogenic enzymes, were also less in AX than intact fetuses in fasted animals. Insulin concentrations were also lower in AX than intact fetuses in both nutritional states. Maternal glucose utilization and its distribution between the fetal, uteroplacental, and nonuterine maternal tissues were unaffected by fetal AX in both nutritional states. Ovine fetal adrenal glands, therefore, have little effect on basal rates of fetal glucose and oxygen metabolism but are essential for activating fetal glucogenesis in response to maternal fasting. They may also be involved in regulating insulin sensitivity in utero.

  10. Formal Model of Machining Capacity for a Machining System

    Institute of Scientific and Technical Information of China (English)

    朱琦琦; 江平宇

    2011-01-01

    Modeling of machining capacity is of great importance in the servitization and virtualization of machining systems. In this work, machining capacity and its related concepts are defined in terms of the function, performance and performance quality of the machining system. Using relational algebra and set theory, the machining system, machining features and machining operations are described; machining capacity is then formally expressed through relation selection and natural-join operations, and the machining capacity model is analyzed in views using the relation projection operation, revealing the mechanism by which machining capacity arises. Finally, a case study verifies the proposed modeling method, and the results show that it achieves formal modeling of machining capacity and can support the analysis and evaluation of the machining capacity of a machining system.

  11. A Collaboration Model for Community-Based Software Development with Social Machines

    Directory of Open Access Journals (Sweden)

    Dave Murray-Rust

    2016-02-01

    Full Text Available Crowdsourcing is generally used for tasks with minimal coordination, providing limited support for dynamic reconfiguration. Modern systems, exemplified by social machines, are subject to continual flux in both the client and development communities and their needs. To support crowdsourcing of open-ended development, systems must dynamically integrate human creativity with machine support. While workflows can be used to handle structured, predictable processes, they are less suitable for social machine development and its attendant uncertainty. We present models and techniques for coordination of human workers in crowdsourced software development environments. We combine the Social Compute Unit, a model of ad-hoc human worker teams, with versatile coordination protocols expressed in the Lightweight Social Calculus. This allows us to combine coordination and quality constraints with dynamic assessments of end-user desires, dynamically discovering and applying development protocols.

  12. Genetic optimization of training sets for improved machine learning models of molecular properties

    CERN Document Server

    Browning, Nicholas J; von Lilienfeld, O Anatole; Röthlisberger, Ursula

    2016-01-01

    The training of molecular models of quantum mechanical properties based on statistical machine learning requires large datasets which exemplify the map from chemical structure to molecular property. Intelligent a priori selection of training examples is often difficult or impossible to achieve, as prior knowledge may be sparse or unavailable. Ordinarily, representative selection of training molecules from such datasets is achieved through random sampling. We use genetic algorithms to optimize the composition of training sets drawn from datasets of tens of thousands of small organic molecules. The resulting machine learning models are considerably more accurate than those trained on small randomly selected training sets: mean absolute errors for out-of-sample predictions are reduced to ~25% for enthalpies, free energies, and zero-point vibrational energy, to ~50% for heat capacity, electron spread, and polarizability, and by more than ~20% for electronic properties such as frontier orbital eigenvalues or dipole moments. We...
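
    A hedged sketch of the general approach, on synthetic data and with a toy surrogate model (kernel ridge regression) standing in for the property model used in the paper: a genetic algorithm evolves fixed-size training subsets so that the validation error of the fitted model is minimized.

    ```python
    # Sketch only: GA over training-set composition; data and GA settings are made up.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 10))                  # stand-in molecular descriptors
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)
    X_val, y_val = X[1500:], y[1500:]                # held-out validation molecules
    pool = np.arange(1500)                           # candidate training molecules

    def fitness(idx):
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1).fit(X[idx], y[idx])
        return -np.abs(model.predict(X_val) - y_val).mean()   # negative MAE

    def evolve(n_train=100, pop_size=20, generations=30):
        pop = [rng.choice(pool, n_train, replace=False) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)                # best subsets first
            parents = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = rng.choice(len(parents), 2, replace=False)
                child = np.union1d(parents[a], parents[b])     # crossover
                child = rng.choice(child, n_train, replace=False)
                child[rng.integers(n_train)] = rng.choice(pool)  # mutation
                children.append(np.unique(child)[:n_train])
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print("validation MAE of best training set:", -fitness(best))
    ```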

  13. A fuzzy modeling for single machine scheduling problem with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Mohammad Mahavi Mazdeh

    2010-06-01

    Full Text Available This paper addresses a bi-criteria scheduling problem with deteriorating jobs on a single machine. We develop a model for a single machine bi-criteria scheduling problem (SMBSP) with the aim of minimizing total tardiness and work-in-process (WIP) costs. WIP cost increases as a job passes through a series of stages in the production process. Due to the uncertainty involved in real-world scheduling problems, it is sometimes unrealistic or even impossible to acquire exact input data. Hence, we consider the SMBSP under the hypothesis of fuzzy L-R processing times and fuzzy L-R due dates. The effectiveness of the proposed model and methodology is demonstrated through a test problem.

  14. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  15. Physics-Informed Machine Learning for Predictive Turbulence Modeling: Using Data to Improve RANS Modeled Reynolds Stresses

    CERN Document Server

    Wang, Jian-Xun; Xiao, Heng

    2016-01-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Recently, data-driven methods have been proposed as a promising alternative to the traditional approaches of turbulence model development. In this work we propose a data-driven, physics-informed machine learning approach for predicting discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are first trained with benchmark flow data and then used to predict Reynolds stress discrepancies in new flows. The method is used to predict the Reynolds stresses in the flow over periodic hills by using two training flow scenarios of increasing difficulty: (1) ...
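
    A minimal sketch of the data-driven step, assuming synthetic features and discrepancies instead of the benchmark periodic-hill data: a random forest is trained to map mean-flow features to the Reynolds stress discrepancy of a "training flow" and is then queried on a "new flow".

    ```python
    # Sketch only: random-forest regression of modeled-stress discrepancies.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)

    def make_flow(n):
        # four mean-flow features, e.g. strain rate, pressure gradient,
        # wall-distance Reynolds number, turbulence intensity ratio (all toy)
        q = rng.normal(size=(n, 4))
        delta = 0.3 * np.tanh(q[:, 0]) - 0.1 * q[:, 2]   # toy "true" discrepancy
        return q, delta + 0.02 * rng.normal(size=n)

    X_train, d_train = make_flow(5000)     # "training flow" (benchmark data)
    X_new, d_new = make_flow(1000)         # "new flow" to be corrected

    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1)
    rf.fit(X_train, d_train)
    pred = rf.predict(X_new)
    print("mean abs. error of predicted discrepancy:", np.abs(pred - d_new).mean())
    ```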

  16. A mathematical model of the controlled axial flow divider for mobile machines

    Science.gov (United States)

    Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.

    2016-06-01

    The authors present a mathematical model of the adjustable axial flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines; a worked example is also given that allows the mutual influence of the pressure and flow values on all input and output circuits of the system to be clearly evaluated.

  17. Fault Tolerance Automotive Air-Ratio Control Using Extreme Learning Machine Model Predictive Controller

    OpenAIRE

    Pak Kin Wong; Hang Cheong Wong; Chi Man Vong; Tong Meng Iong; Ka In Wong; Xianghui Gao

    2015-01-01

    Effective air-ratio control is desirable to maintain the best engine performance. However, traditional air-ratio control assumes the lambda sensor located at the tail pipe works properly and relies strongly on the air-ratio feedback signal measured by the lambda sensor. When the sensor is warming up during cold start or under failure, the traditional air-ratio control no longer works. To address this issue, this paper utilizes an advanced modelling technique, kernel extreme learning machine (...

  18. A Support Vector Machine-based Evaluation Model of Customer Satisfaction Degree in Logistics

    Institute of Scientific and Technical Information of China (English)

    SUN Hua-li; XIE Jian-ying

    2007-01-01

    This paper presents a novel evaluation model of the customer satisfaction degree (CSD) in logistics based on the support vector machine (SVM). Firstly, the relation between suppliers and customers is analyzed. Secondly, the evaluation index system and fuzzy quantitative methods are provided. Thirdly, the CSD evaluation system, including eight indexes and three ranks and based on the one-against-one mode of SVM, is built. Finally, a simulation experiment is presented to illustrate the theoretical results.
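
    A hedged sketch of the classifier structure described above, with synthetic index values and ranks in place of the paper's fuzzy-quantified survey data: an SVM is trained on eight satisfaction indexes and three CSD ranks using the one-against-one multi-class mode.

    ```python
    # Sketch only: 8 inputs, 3 ranks, one-against-one SVM on made-up data.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(300, 8))             # eight fuzzy-quantified indexes
    score = X.mean(axis=1)
    y = np.digitize(score, [0.45, 0.55])             # ranks 0/1/2 (low/medium/high)

    clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
    print(clf.predict(rng.uniform(0, 1, size=(1, 8))))   # rank of a new customer
    ```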

  19. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (Watt-sun for short), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics that were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
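
    As a hedged illustration of machine-learnt blending (not the Watt-sun implementation), the sketch below learns weights that combine several synthetic irradiance forecasts against past observations and applies them to a held-out period.

    ```python
    # Sketch only: ridge-regression blending of toy forecast sources.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(5)
    truth = 600 + 300 * np.sin(np.linspace(0, 8 * np.pi, 1000))      # GHI [W/m^2]
    forecasts = np.column_stack([
        truth + rng.normal(0, 80, 1000),       # model A
        truth + rng.normal(40, 60, 1000),      # model B (biased)
        np.roll(truth, 4),                     # persistence-like model C
    ])

    blender = Ridge(alpha=1.0).fit(forecasts[:800], truth[:800])
    blend = blender.predict(forecasts[800:])
    print("blend MAE:", np.abs(blend - truth[800:]).mean(),
          "best single MAE:", np.abs(forecasts[800:] - truth[800:, None]).mean(axis=0).min())
    ```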

  20. Understanding storm surges in the North Sea: Ishiguro’s electronic modelling machine

    Directory of Open Access Journals (Sweden)

    Claire Kennard

    2016-11-01

    Full Text Available In December the Science Museum will open Mathematics: The Winton Gallery. The new gallery tells mathematical stories in relation to a broad spectrum of fundamental human concerns. One of the key exhibits is a newly acquired machine for modelling storm surges in the North Sea. Designed by Japanese engineer Shizuo Ishiguro, the object offers a way to explore the far-reaching impact and relevance of mathematical work.

  1. Modelling In-Store Consumer Behaviour Using Machine Learning and Digital Signage Audience Measurement Data

    OpenAIRE

    Ravnik, Robert; Solina, Franc; Žabkar, Vesna

    2014-01-01

    Audience adaptive digital signage is a new emerging technology, where public broadcasting displays adapt their content to the audience demographic and temporal features. The collected audience measurement data can be used as a unique basis for statistical analysis of viewing patterns, interactive display applications and also for further research and observer modelling. Here, we use machine learning methods on real-world digital signage viewership data to predict consumer behaviour in a r...

  2. Modeling and Simulation of Process-Machine Interaction in Grinding of Cemented Carbide Indexable Inserts

    National Research Council Canada - National Science Library

    Feng, Wei; Yao, Bin; Chen, BinQiang; Zhang, DongSheng; Zhang, XiangLei; Shen, ZhiHuang

    2015-01-01

      Interaction of process and machine in grinding of hard and brittle materials such as cemented carbide may cause dynamic instability of the machining process resulting in machining errors and a decrease in productivity...

  3. Advancing Control for Shield Tunneling Machine by Backstepping Design with LuGre Friction Model

    Directory of Open Access Journals (Sweden)

    Haibo Xie

    2014-01-01

    Full Text Available The shield tunneling machine is widely applied in underground tunnel construction. It is a complex machine with large momentum and an ultralow advancing speed, and the working conditions underground are complicated and unpredictable, which makes controlling the advancing speed difficult. This paper focuses on advancing motion control along the desired tunnel axis. A three-state dynamic model was established that considers the unknown front-face earth pressure force and the unknown friction force. The LuGre friction model was introduced to describe the friction force. A backstepping design was then proposed to make the tracking error converge to zero. For comparison, a controller without the LuGre model was also designed. Tracking simulations of speed regulation, and simulations in which the front-face earth pressure changes, were carried out to show the transient performance of the proposed controller. The results indicate that the controller has good tracking performance even under changing geological conditions. Speed regulation experiments were carried out to validate the controllers.
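
    A minimal sketch of the LuGre friction model referenced above, integrated along a velocity trajectory with explicit Euler; the parameter values are illustrative, not identified values for a shield machine.

    ```python
    # Sketch only: LuGre bristle-state friction model with placeholder parameters.
    import numpy as np

    sigma0, sigma1, sigma2 = 1e5, 3e2, 40.0      # stiffness, damping, viscous coef.
    Fc, Fs, vs = 2e3, 3e3, 0.01                  # Coulomb force, static force, Stribeck vel.

    def g(v):
        # Stribeck curve, scaled by the bristle stiffness
        return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) / sigma0

    def lugre_friction(v_traj, dt=1e-3):
        """Integrate the bristle state along a velocity trajectory (explicit Euler)."""
        z, F = 0.0, []
        for v in v_traj:
            zdot = v - abs(v) * z / g(v)
            z += dt * zdot
            F.append(sigma0 * z + sigma1 * zdot + sigma2 * v)
        return np.array(F)

    v = 0.02 * np.sin(np.linspace(0, 2 * np.pi, 2000))   # slow advance-speed cycle
    print(lugre_friction(v)[:5])
    ```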

  4. The modified nodal analysis method applied to the modeling of the thermal circuit of an asynchronous machine

    Science.gov (United States)

    Nedelcu, O.; Salisteanu, C. I.; Popa, F.; Salisteanu, B.; Oprescu, C. V.; Dogaru, V.

    2017-01-01

    The complexity of the electrical circuits, or of the equivalent thermal circuits, to be analyzed and solved requires careful consideration of the solution method, since the chosen method determines the amount of computation required. The heating and ventilation systems of electrical machines that have to be modeled result in complex equivalent electrical circuits of large dimensions, which requires the use of the most efficient solution methods. The purpose of the thermal calculation of electrical machines is to establish the heating, i.e. the temperature rises (over-temperatures) of some parts of the machine relative to the ambient temperature, in a given operating mode of the machine. The paper presents the application of the modified nodal analysis method to the modeling of the thermal circuit of an asynchronous machine.
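
    As a hedged illustration, the sketch below applies plain nodal analysis (the linear-system form that the modified nodal method extends with extra source unknowns) to a small lumped thermal network; the three-node network and its conductances are illustrative, not the machine's thermal circuit.

    ```python
    # Sketch only: assemble and solve a nodal conductance matrix for a toy
    # thermal network; node 0 is the ambient reference.
    import numpy as np

    G = {(0, 1): 5.0, (1, 2): 2.0, (2, 3): 1.5, (1, 3): 0.8}   # conductances [W/K]
    P = np.array([120.0, 300.0, 50.0])                         # heat at nodes 1..3 [W]

    n = 3
    A = np.zeros((n, n))
    for (i, j), g in G.items():
        for k in (i, j):
            if k > 0:
                A[k - 1, k - 1] += g          # diagonal: sum of attached conductances
        if i > 0 and j > 0:
            A[i - 1, j - 1] -= g              # off-diagonal coupling
            A[j - 1, i - 1] -= g

    theta = np.linalg.solve(A, P)             # over-temperatures above ambient [K]
    print(theta)
    ```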

  5. Hierarchical analytical and simulation modelling of human-machine systems with interference

    Science.gov (United States)

    Braginsky, M. Ya; Tarakanov, D. V.; Tsapko, S. G.; Tsapko, I. V.; Baglaeva, E. A.

    2017-01-01

    The article considers the principles of building an analytical and simulation model of the human operator and of the industrial control system hardware and software. E-networks, an extension of Petri nets, are used as the mathematical apparatus. This approach allows complex parallel distributed processes in human-machine systems to be simulated. A structural and hierarchical approach is used to build the mathematical model of the human operator. The upper level of the model is a logical dynamic model of decision making based on E-networks, while the lower level reflects the psychophysiological characteristics of the human operator.

  6. A novel excitation controller using support vector machines and approximate models

    Institute of Scientific and Technical Information of China (English)

    Xiaofang YUAN; Yaonan WANG; Shutao LI

    2008-01-01

    This paper proposes a novel excitation controller using support vector machines (SVM) and approximate models. The nonlinear control law is derived directly based on an input-output approximation method via Taylor expansion, which not only avoids complex control development and intensive computation, but also avoids online learning or adjustment. Only a general SVM modelling technique is involved in both model identification and controller implementation. The robustness of the stability is rigorously established using the Lyapunov method. Several simulations demonstrate the effectiveness of the proposed excitation controller.

  7. On Combining Language Models to Improve a Text-based Human-machine Interface

    Directory of Open Access Journals (Sweden)

    Daniel Cruz Cavalieri

    2015-12-01

    Full Text Available This paper concentrates on improving a text-based human-machine interface integrated into a robotic wheelchair. Since word prediction is one of the most common methods used in such systems, the goal of this work is to improve the results obtained with this specific module. For this, an exponential interpolation language model (LM) is considered. First, a model based on partial differential equations is proposed; with the appropriate initial conditions, we are able to design an interpolation language model that merges a word-based n-gram language model and a part-of-speech-based language model. Improvements in keystroke saving (KSS) and perplexity (PP) over the word-based n-gram language model and two other traditional interpolation models are obtained, considering two different task domains and three different languages. The proposed interpolation model also provides additional improvements in the hit rate (HR) parameter.
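
    A hedged sketch of the merging idea, using simple linear interpolation rather than the exponential interpolation developed in the paper: a word bigram probability is combined with a part-of-speech-based probability, on a tiny made-up tagged corpus.

    ```python
    # Sketch only: linear interpolation of a word bigram LM and a POS-based LM.
    from collections import Counter, defaultdict

    corpus = [("turn", "VB"), ("the", "DT"), ("chair", "NN"),
              ("turn", "VB"), ("the", "DT"), ("wheelchair", "NN"),
              ("move", "VB"), ("the", "DT"), ("chair", "NN")]

    bigram = defaultdict(Counter)          # counts for P(w_i | w_{i-1})
    pos_emit = defaultdict(Counter)        # counts for P(w_i | tag_i)
    pos_bigram = defaultdict(Counter)      # counts for P(tag_i | tag_{i-1})
    for (w1, t1), (w2, t2) in zip(corpus, corpus[1:]):
        bigram[w1][w2] += 1
        pos_bigram[t1][t2] += 1
    for w, t in corpus:
        pos_emit[t][w] += 1

    def p(counter, key):
        total = sum(counter.values())
        return counter[key] / total if total else 0.0

    def interpolated(prev_word, prev_tag, word, tag, lam=0.6):
        p_word = p(bigram[prev_word], word)
        p_pos = p(pos_bigram[prev_tag], tag) * p(pos_emit[tag], word)
        return lam * p_word + (1 - lam) * p_pos

    print(interpolated("the", "DT", "chair", "NN"))
    ```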

  8. Predictive modeling of human operator cognitive state via sparse and robust support vector machines.

    Science.gov (United States)

    Zhang, Jian-Hua; Qin, Pan-Pan; Raisch, Jörg; Wang, Ru-Bin

    2013-10-01

    The accurate prediction of the temporal variations in human operator cognitive state (HCS) is of great practical importance in many real-world safety-critical situations. However, since the relationship between the HCS and the electrophysiological responses of the operator is basically unknown, complicated and uncertain, only data-based modeling methods can be employed. This paper is aimed at constructing a data-driven computationally intelligent model, based on multiple psychophysiological and performance measures, to accurately estimate the HCS in the context of a safety-critical human-machine system. Advanced least squares support vector machines (LS-SVM), whose parameters are optimized by grid search and cross-validation techniques, are adopted for the purpose of predictive modeling of the HCS. The sparse LS-SVM and the weighted LS-SVM (WLS-SVM) were proposed by Suykens et al. to overcome the standard LS-SVM's lack of sparseness and robustness. This paper adopts those two improved LS-SVM algorithms to model the HCS based solely on a set of physiological and operator performance data. The results showed that the sparse LS-SVM can obtain sparse HCS models with almost no loss of modeling accuracy, while the WLS-SVM leads to models that are robust in the case of noisy training data. Both intelligent system modeling approaches are shown to be capable of capturing the temporal fluctuation trends of the HCS because of their superior generalization performance.
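
    A minimal sketch of the core LS-SVM computation (the standard, non-sparse, non-weighted variant): training reduces to solving one linear system in the bias and dual weights, here for regression with an RBF kernel on synthetic data standing in for the psychophysiological measures.

    ```python
    # Sketch only: standard LS-SVM regression in the dual, toy data and parameters.
    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, size=(120, 2))            # stand-in physiological features
    y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=120)  # toy "HCS"

    def rbf(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def lssvm_fit(X, y, gam=10.0, gamma=0.5):
        n = len(y)
        K = rbf(X, X, gamma)
        M = np.zeros((n + 1, n + 1))
        M[0, 1:] = M[1:, 0] = 1.0                    # bordered system for the bias
        M[1:, 1:] = K + np.eye(n) / gam              # kernel matrix + regularization
        rhs = np.concatenate([[0.0], y])
        sol = np.linalg.solve(M, rhs)
        return sol[0], sol[1:]                       # bias b, dual weights alpha

    b, alpha = lssvm_fit(X, y)
    x_test = np.array([[0.5, -1.0]])
    print(rbf(x_test, X) @ alpha + b)                # predicted cognitive state
    ```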

  9. WITHDRAWN: Prostaglandins for prelabour rupture of membranes at or near term.

    Science.gov (United States)

    Tan, B P; Hannah, M E

    2007-07-18

    Induction of labour after prelabour rupture of membranes may reduce the risk of neonatal infection. However, an expectant approach may be less likely to result in caesarean section. The objective of this review was to assess the effects of induction of labour with prostaglandins versus expectant management for prelabour rupture of membranes at or near term. We searched the Cochrane Pregnancy and Childbirth Group trials register. Randomised and quasi-randomised trials comparing early use of prostaglandins (with or without oxytocin) with no early use of prostaglandins in women with spontaneous rupture of membranes before labour at 34 weeks or more of gestation were eligible. Trials were assessed for quality and data were abstracted. Fifteen trials were included. Most were of moderate to good quality. Different forms of prostaglandin preparations were used in these trials and it may be inappropriate to combine their results. Induction of labour by prostaglandins was associated with a decreased risk of chorioamnionitis (odds ratio 0.77, 95% confidence interval 0.61 to 0.97) based on eight trials and of admission to neonatal intensive care (odds ratio 0.79, 95% confidence interval 0.66 to 0.94) based on seven trials. No difference was detected for the rate of caesarean section, although induction by prostaglandins was associated with more frequent maternal diarrhoea and use of anaesthesia and/or analgesia. Based on one trial, women were more likely to view their care positively if labour was induced with prostaglandins. Induction of labour with prostaglandins appears to decrease the risk of maternal infection (chorioamnionitis) and admission to neonatal intensive care. Induction of labour with prostaglandins does not appear to increase the rate of caesarean section, although it is associated with more frequent maternal diarrhoea and pain relief.

  10. The solenoidal transport option: IFE drivers, near term research facilities, and beam dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, E.P. [Ernest Orlando Lawrence Berkeley National Lab., CA (United States); Briggs, R.J. [Science Applications International Corp., Pleasanton, CA (United States)

    1997-09-01

    Solenoidal magnets have been used as the beam transport system in all the high current electron induction accelerators that have been built in the past several decades. They have also been considered for the front end transport system for heavy ion accelerators for Inertial Fusion Energy (IFE) drivers, but this option has received very little attention in recent years. The analysis reported here was stimulated mainly by the recent effort to define an affordable "Integrated Research Experiment" (IRE) that can meet the near term needs of the IFE program. The 1996 FESAC IFE review panel agreed that an integrated experiment is needed to fully resolve IFE heavy ion driver science and technology issues; specifically, "the basic beam dynamics issues in the accelerator, the final focusing and transport issues in a reactor-relevant beam parameter regime, and the target heating phenomenology". The development of concepts that can meet these technical objectives and still stay within the severe cost constraints all new fusion proposals will encounter is a formidable challenge. Solenoidal transport has a very favorable scaling as the particle mass is decreased (the main reason why it is preferred for electrons in the region below 50 MeV). This was recognized in a recent conceptual study of high intensity induction linac-based proton accelerators for Accelerator Driven Transmutation Technologies, where solenoidal transport was chosen for the front end. Reducing the ion mass is an obvious scaling to exploit in an IRE design, since the output beam voltage will necessarily be much lower than that of a full scale driver, so solenoids should certainly be considered as one option for this experiment as well.

  11. Non-mosaic trisomy 16 in a near-term child

    Energy Technology Data Exchange (ETDEWEB)

    Donlon, T.A.; Kuslich, C.D. [Kapiolani Medical Center, Honolulu, HI (United States)]; Murray, J.E. [Tripler Army Medical Center, HI (United States)] [and others]

    1994-09-01

    Trisomy 16 is the most common trisomy in first trimester spontaneous abortions, suggesting a high rate of non-disjunction. While cases of confined placental mosaicism and fetal mosaicism or partial trisomy of chromosome 16 have been reported in term fetuses, there have been no previous reports of a near-term fetus with full trisomy 16, indicating a high rate of selection against such cases. Our patient is a 25 year old Filipino female who underwent obstetrical sonographic evaluation at 32 weeks gestation due to suspicion of intrauterine growth retardation. Evaluation was remarkable for severe growth restriction and multiple dysmorphic features. The fetal karyotype was 47,XX,+16 (20 cells in blood, 30 cells from amniocytes); however, the remainder of the laboratory analysis was unremarkable. The patient went into spontaneous labor at 35 weeks gestation and had noted fetal movement prior to admission, but subsequently delivered a stillborn female fetus with a birthweight of 983 grams. Chromosomes from skin and brain fibroblasts and chorionic villus were examined and all (30 cells each) demonstrated trisomy 16. Fetal autopsy confirmed the presence of multiple major structural defects including facial dysmorphism, webbing of the neck and axilla, pulmonary hypoplasia, cardiosplenic syndrome, congenital diaphragmatic hernia, and agenesis of the corpus callosum. While full trisomy 16 has previously been thought to be incompatible with fetal survival past the early second trimester, this case demonstrates this premise to be invalid. Previous studies by other laboratories have shown the extra chromosome 16 in aborted cases to be of maternal origin, consistent with a higher rate of maternal vs. paternal non-disjunction. The parental origin results of the present case will be presented.

  12. Near-term viability of solar heat applications for the federal sector

    Science.gov (United States)

    Williams, T. A.

    1991-12-01

    Solar thermal technologies are capable of providing heat across a wide range of temperatures, making them potentially attractive for meeting energy requirements for industrial process heat applications and institutional heating. The energy savings that could be realized by solar thermal heat are quite large, potentially several quads annually. Although technologies for delivering heat at temperatures above 100 C currently exist within industry, only a fairly small number of commercial systems have been installed to date. The objective of this paper is to investigate and discuss the prospects for near term solar heat sales to federal facilities as a mechanism for providing an early market niche to aid the widespread development and implementation of the technology. The specific technical focus is on mid-temperature (100 to 350 C) heat demands that could be met with parabolic trough systems. Federal facilities have several features relative to private industry that may make them attractive for solar heat applications relative to other sectors. Key features are specific policy mandates for conserving energy, a long term planning horizon with well defined decision criteria, and prescribed economic return criteria for conservation and solar investments that are generally less stringent than the investment criteria used by private industry. Federal facilities also have specific difficulties in the sale of solar heat technologies that are different from those of other sectors, and strategies to mitigate these difficulties will be important. For the baseline scenario developed in this paper, the solar heat application was economically competitive with heat provided by natural gas. The system levelized energy cost was $5.9/MBtu for the solar heat case, compared to $6.8/MBtu for the life cycle fuel cost of a natural gas case. Third-party ownership would also be attractive to federal users, since it would guarantee energy savings and would not need initial federal funds.

  13. An Examination of Selected Datacom Options for the Near-Term Implementation of Trajectory Based Operations

    Science.gov (United States)

    Johnson, Walter W.; Lachter, Joel B.; Battiste, Vernol; Lim, Veranika; Brandt, Summer L.; Koteskey, Robert W.; Dao, Arik-Quang V.; Ligda, Sarah V.; Wu, Shu-Chieh

    2011-01-01

    A primary feature of the Next Generation Air Transportation System (NextGen) is trajectory based operations (TBO). Under TBO, aircraft flight plans are known to computer systems on the ground that aid in scheduling and separation. The Future Air Navigation System (FANS) was developed to support TBO, but relatively few aircraft in the US are FANS-equipped. Thus, any near-term implementation must provide TBO procedures for non-FANS aircraft. Previous research has explored controller clearances, but any implementation must also provide procedures for aircraft requests. The work presented here aims to surface issues surrounding TBO communication procedures for non-FANS aircraft and for aircraft requesting deviations around weather. Three types of communication were explored: Voice, FANS, and ACARS (Aircraft Communications Addressing and Reporting System). ACARS and FANS are datacom systems that differ in that FANS allows uplinked flight plans to be loaded into the Flight Management System (FMS), while ACARS delivers flight plans as text that must be entered manually via the Control Display Unit (CDU). Sixteen pilots (eight two-person flight decks) and four controllers participated in 32 20-minute scenarios that required the flight decks to navigate through convective weather as they approached their top of descents (TODs). Findings: The rate of non-conformance was higher than anticipated, with aircraft off path more than 20% of the time. Controllers did not differentiate between the ACARS and FANS datacom, and were mixed in their preference for Voice vs. datacom (ACARS and FANS). Pilots uniformly preferred Voice to datacom, particularly ACARS. Much of their dislike appears to result from the slow response times in the datacom conditions. As a result, participants frequently resorted to voice communication. These results imply that, before implementing TBO in environments where pilots make weather deviation requests, further research is needed to develop communication

  14. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    Science.gov (United States)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not adversely impacting, and potentially benefiting, ATC operations. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper discusses the approach to minimizing TASAR's implementation cost and accelerating its readiness for near-term deployment.

  15. "Machine" consciousness and "artificial" thought: an operational architectonics model guided approach.

    Science.gov (United States)

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H

    2012-01-05

    Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Bayesian Reliability Modeling and Assessment Solution for NC Machine Tools under Small-sample Data

    Institute of Scientific and Technical Information of China (English)

    YANG Zhaojun; KAN Yingnan; CHEN Fei; XU Binbin; CHEN Chuanhai; YANG Chuangui

    2015-01-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayesian theory. An expert-judgment process for fusing multi-source prior information is developed to obtain the prior distributions of the Weibull parameters and to reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and to overcome the difficulty of high-dimensional integration. The method is then applied to real data from a type of NC machine tool to perform a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm, indicating that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
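
    To make the grid-approximation step concrete, the following sketch (not the authors' implementation) places a uniform grid over plausible Weibull shape and scale values, combines a flat prior with the likelihood of some hypothetical time-between-failure data, and reads off a posterior-mean MTBF; the data, grid bounds, and flat prior are all assumptions.

```python
import numpy as np
from scipy.special import gamma

# Hypothetical time-between-failure data (hours) for an NC machine tool
t = np.array([120.0, 340.0, 510.0, 95.0, 760.0, 430.0, 280.0])

# Grid over Weibull shape (beta) and scale (eta); the bounds are assumptions
betas = np.linspace(0.5, 3.0, 200)
etas = np.linspace(50.0, 2000.0, 400)
B, E = np.meshgrid(betas, etas, indexing="ij")

def weibull_loglik(beta, eta, data):
    """Weibull log-likelihood summed over the observed failures."""
    return np.sum(
        np.log(beta / eta)
        + (beta - 1.0) * np.log(data[:, None, None] / eta)
        - (data[:, None, None] / eta) ** beta,
        axis=0,
    )

loglik = weibull_loglik(B, E, t)

# A flat prior on the grid stands in for the fused expert-judgment prior
log_post = loglik - loglik.max()
post = np.exp(log_post)
post /= post.sum()

# Posterior-mean MTBF: expectation of eta * Gamma(1 + 1/beta) under the grid posterior
mtbf_grid = E * gamma(1.0 + 1.0 / B)
mtbf = np.sum(post * mtbf_grid)
print(f"Posterior-mean MTBF ~ {mtbf:.1f} h")
```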

  17. Ex vivo normothermic machine perfusion is safe, simple, and reliable: results from a large animal model.

    Science.gov (United States)

    Nassar, Ahmed; Liu, Qiang; Farias, Kevin; D'Amico, Giuseppe; Tom, Cynthia; Grady, Patrick; Bennett, Ana; Diago Uso, Teresa; Eghtesad, Bijan; Kelly, Dympna; Fung, John; Abu-Elmagd, Kareem; Miller, Charles; Quintini, Cristiano

    2015-02-01

    Normothermic machine perfusion (NMP) is an emerging preservation modality that holds the potential to prevent the injury associated with low temperature and to promote the organ repair that follows ischemic cell damage. While several animal studies have shown its superiority over cold storage (CS), few studies in the literature have focused on the safety, feasibility, and reliability of this technology, which are key factors in its implementation into clinical practice. The aim of the present study is to report safety and performance data on NMP of DCD porcine livers. After 60 minutes of warm ischemia time, 20 pig livers were preserved using either NMP (n = 15; physiologic perfusion temperature) or CS (n = 5) for a preservation time of 10 hours. Livers were then tested on a transplant simulation model for 24 hours. Machine safety was assessed by measuring system failure events, the ability to monitor perfusion parameters, sterility, and vessel integrity. The ability of the machine to preserve injured organs was assessed by liver function tests, hemodynamic parameters, and histology. No system failures were recorded. Target hemodynamic parameters were easily achieved and vascular complications were not encountered. Liver function parameters as well as histology showed significant differences between the 2 groups, with NMP livers showing preserved liver function and histological architecture, while CS livers presented postreperfusion parameters consistent with unrecoverable cell injury. Our study shows that NMP is safe and reliable and provides superior graft preservation compared to CS in our DCD porcine model. © The Author(s) 2014.

  18. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    Science.gov (United States)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-01-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the “derivation cohort” to develop the dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67–0.76)] and validation cohorts [0.73 (0.63–0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future. PMID:28176850
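
    As a rough illustration of the derivation/validation workflow described above (not the authors' code or data), the sketch below splits a synthetic cohort 80/20 and compares a linear model, a regression tree, and a random forest with scikit-learn; the covariates and the dose-generating rule are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the cohort: clinical/genetic covariates and a stable dose
n = 1045
X = rng.normal(size=(n, 6))          # e.g. age, weight, hematocrit, genotype, ... (placeholders)
dose = 3.0 + 1.5 * X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n)

# 80% derivation cohort, 20% validation cohort
X_dev, X_val, y_dev, y_val = train_test_split(X, dose, test_size=0.2, random_state=42)

models = {
    "MLR": LinearRegression(),
    "RT": DecisionTreeRegressor(max_depth=4, random_state=0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_dev, y_dev)
    print(name,
          "derivation R2 = %.2f" % r2_score(y_dev, model.predict(X_dev)),
          "validation R2 = %.2f" % r2_score(y_val, model.predict(X_val)))
```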

  19. Kinematic Modelling and Control Simulation for 1PS+3TPS Type Hybrid Machine Tool

    Institute of Scientific and Technical Information of China (English)

    FAN Shouwen; WANG Xiaobing; HUANG Hongzhong

    2006-01-01

    A structure scheme for a novel hybrid machine tool (HMT) is proposed in this paper. In the scheme, a 4-DOF 1PS+3TPS spatial hybrid mechanism is utilized as the main feed mechanism; with the assistance of a worktable movable in two directions, multi-coordinate NC machining can be realized. In the main feed mechanism, the fixed platform is connected with the moving platform by three TPS driving links and one PS driving link, providing one translational DOF and three rotational DOFs. This type of HMT enjoys some advantages over its conventional counterparts: large workspace, good dexterity, etc. A closed-form inverse displacement analysis model and an inverse kinematic model for the main feed mechanism are established. A fuzzy PID control scheme for machining control of HMTs with high tracking precision is proposed, aimed at the highly nonlinear, tightly coupled and uncertain characteristics of HMTs. Simulation studies of fuzzy PID control of HMTs are carried out. Simulation results demonstrate the effectiveness and robustness of the fuzzy PID controller.

  20. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    Science.gov (United States)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the “derivation cohort” to develop the dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67–0.76)] and validation cohorts [0.73 (0.63–0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.

  1. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release. We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  2. A Parallel Decision Model Based on Support Vector Machines and Its Application to Fault Diagnosis

    Institute of Scientific and Technical Information of China (English)

    Yan Weiwu(阎威武); Shao Huihe

    2004-01-01

    Many industrial process systems are becoming more and more complex and are characterized by distributed features. To ensure that such a system operates in working order, distributed parameter values are often inspected at subsystems or at different points in order to judge the working condition of the system and make global decisions. In this paper, a parallel decision model based on Support Vector Machines (PDMSVM) is introduced and applied to distributed fault diagnosis in industrial processes. PDMSVM is convenient for information fusion in distributed systems and performs well in fault diagnosis with distributed features. PDMSVM makes decisions based on the synthesized information of the subsystems and takes advantage of the Support Vector Machine. Therefore decisions made by PDMSVM are highly reliable and accurate.
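
    A minimal sketch of this decision-fusion pattern (assuming scikit-learn and synthetic subsystem data; this is not the PDMSVM implementation itself): one SVM is trained per subsystem and a global decision is made by combining the subsystems' decision values.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two subsystems, each monitored by its own sensors (synthetic data)
n = 400
fault = rng.integers(0, 2, size=n)                 # 0 = normal, 1 = faulty
X_sub1 = rng.normal(size=(n, 3)) + fault[:, None] * 0.8
X_sub2 = rng.normal(size=(n, 4)) + fault[:, None] * 0.5

# One SVM per subsystem (local decision models)
svm1 = SVC(kernel="rbf").fit(X_sub1[:300], fault[:300])
svm2 = SVC(kernel="rbf").fit(X_sub2[:300], fault[:300])

# Global decision: average the signed decision values of the local SVMs
d = 0.5 * (svm1.decision_function(X_sub1[300:]) + svm2.decision_function(X_sub2[300:]))
y_pred = (d > 0).astype(int)
print("fused accuracy:", np.mean(y_pred == fault[300:]))
```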

  3. FAULT DIAGNOSIS APPROACH BASED ON HIDDEN MARKOV MODEL AND SUPPORT VECTOR MACHINE

    Institute of Scientific and Technical Information of China (English)

    LIU Guanjun; LIU Xinmin; QIU Jing; HU Niaoqing

    2007-01-01

    To address the machine-learning problems encountered in fault diagnosis, a diagnosis approach is proposed based on the hidden Markov model (HMM) and the support vector machine (SVM). HMM describes intra-class variation well and is good at dealing with continuous dynamic signals, while SVM captures inter-class differences effectively and has strong classification ability. The proposed approach is built on the merits of both HMM and SVM. The approach is then tested on the transmission system of a helicopter: with features extracted from vibration signals in the gearbox, the HMM-SVM based diagnostic approach is trained and used to monitor and diagnose the gearbox's faults. The results show that this method achieves higher diagnostic accuracy than HMM-based and SVM-based methods when training samples are small.

  4. Modelling of chaotic systems based on modified weighted recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhang Tai-Yi; Liu Feng

    2004-01-01

    Positive Lyapunov exponents cause the errors in modelling of a chaotic time series to grow exponentially. In this paper, we propose a modified version of support vector machines (SVM) to deal with this problem. Based on recurrent least squares support vector machines (RLS-SVM), we introduce a weighted term into the cost function to compensate for the prediction errors resulting from the positive global Lyapunov exponents. To demonstrate the effectiveness of our algorithm, we use the power spectrum and dynamic invariants involving the Lyapunov exponents and the correlation dimension as criteria, and then apply our method to the Santa Fe competition time series. The simulation results show that the proposed method can capture the dynamics of the chaotic time series effectively.
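
    The weighting idea can be sketched as follows (a simplified, non-recurrent least-squares SVM on a logistic-map series, not the authors' RLS-SVM or the Santa Fe data): per-sample weights shrink the influence of points with large residuals, so errors amplified by the positive Lyapunov exponent contribute less to the fit.

```python
import numpy as np

# Chaotic series from the logistic map (standing in for the Santa Fe data)
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

# Embed: predict x[t] from the previous 3 samples
d = 3
X = np.column_stack([x[i:len(x) - d + i] for i in range(d)])
y = x[d:]

def rbf(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, weights=None):
    """Solve the weighted LS-SVM dual system for (bias, alpha)."""
    n = len(y)
    w = np.ones(n) if weights is None else weights
    K = rbf(X, X)
    # [[0, 1^T], [1, K + diag(1/(gamma*w))]] [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * w))
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]

# First pass: plain LS-SVM; second pass: down-weight points with large residuals
b, alpha = lssvm_fit(X, y)
resid = y - (rbf(X, X) @ alpha + b)
w = 1.0 / (1.0 + (resid / resid.std()) ** 2)     # simple robust weighting (assumption)
b_w, alpha_w = lssvm_fit(X, y, weights=w)
print("weighted-fit RMS error:", np.sqrt(np.mean((y - (rbf(X, X) @ alpha_w + b_w)) ** 2)))
```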

  5. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of effort in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models, and identifying these discrepancies is therefore important for improving RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS-modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained on existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated on two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In the separated flows, two training scenarios of increasing difficulty are considered: (1) the flow in the same periodic-hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performance was observed in both scenarios, demonstrating the merits of the proposed method.
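
    The train-on-one-flow, predict-on-another workflow can be illustrated schematically (scikit-learn random forest on synthetic stand-ins for mean-flow features and Reynolds stress discrepancies; none of this is the authors' data or feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Training flow: mean-flow features (e.g. strain-rate invariants, wall distance)
# and the corresponding DNS-minus-RANS Reynolds stress discrepancy (all synthetic)
X_train = rng.normal(size=(5000, 5))
dxi = 0.3 * X_train[:, 0] - 0.2 * X_train[:, 2] ** 2 + 0.05 * rng.normal(size=5000)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, dxi)

# "New" flow without DNS data, described in the same feature space
X_new = rng.normal(size=(1000, 5))
dxi_pred = rf.predict(X_new)     # predicted discrepancy, to be added to the RANS stresses
print("predicted discrepancy range:", dxi_pred.min(), dxi_pred.max())
```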

  6. A Physics-Informed Machine Learning Framework for RANS-based Predictive Turbulence Modeling

    Science.gov (United States)

    Xiao, Heng; Wu, Jinlong; Wang, Jianxun; Ling, Julia

    2016-11-01

    Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties in the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. We will present a data-driven, physics-informed machine-learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS-modeled Reynolds stresses based on machine learning algorithms, (2) propagation of the improved Reynolds stresses to quantities of interest with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Significant improvements over the baseline RANS predictions are observed. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence modeling in the era of big data. (SAND2016-7435 A).

  7. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, considerable attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in western Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and on the uncertainty of model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and
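
    The MLUE idea of training an emulator U of the model output uncertainty can be sketched as follows (synthetic data and a random forest stand in for the hydrological model, the sampler, and the representative input variables; this is not the MLUE code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Pretend Monte Carlo runs of a hydrological model: for each time step we have
# input variables (recent rainfall, antecedent flow) and an ensemble of simulated flows
n_steps, n_mc = 300, 200
inputs = rng.gamma(2.0, 2.0, size=(n_steps, 2))                # [rainfall, antecedent flow]
ensemble = inputs[:, :1] * 3.0 + rng.normal(0, 1 + inputs[:, 1:], size=(n_steps, n_mc))

# Targets for the uncertainty emulator U: the 5% and 95% quantiles of the ensemble
q05 = np.quantile(ensemble, 0.05, axis=1)
q95 = np.quantile(ensemble, 0.95, axis=1)

U_low = RandomForestRegressor(n_estimators=100, random_state=0).fit(inputs, q05)
U_high = RandomForestRegressor(n_estimators=100, random_state=0).fit(inputs, q95)

# For a new event, the trained emulators give the predictive uncertainty band directly
new_event = np.array([[6.0, 1.5]])
print("predicted 90% band:", U_low.predict(new_event)[0], "to", U_high.predict(new_event)[0])
```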

  8. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  9. Mechatronics in the mining industry. Modelling of underground machines; Mechatronik im Bergbau. Modellbildung von Untertage-Maschinen

    Energy Technology Data Exchange (ETDEWEB)

    Bruckmann, Tobias; Brandt, Thorsten [mercatronics GmbH, Duisburg (Germany)

    2009-12-17

    The development of new functions for machines operating underground often requires a prolonged and cost-intensive test phase. In particular, the development of complex functions such as those found in operator assistance systems is highly iterative. If a corresponding prototype is required for each iteration step of the development, the development costs will, of course, increase rapidly. Virtual prototypes and simulators based on mathematical models of the machine offer an alternative in this case. The article describes the general principles for modelling the kinematics of underground machines. (orig.)

  10. A 2D Model of Induction Machine Dedicated to faults Detection : Extension of the Modified Winding Function

    Directory of Open Access Journals (Sweden)

    A. Ghoggal

    2005-12-01

    Full Text Available This paper deals mainly with the modeling of induction machine inductances, taking into account all the space harmonics, the effects of rotor bar skewing and the linear rise of MMF across the slot. The model is established first for the symmetric machine, which corresponds to the case of a constant air-gap, and then for the case where the machine presents a static or dynamic, axial or radial eccentricity. This is achieved by exploiting a 2-D extension of the modified winding function approach (MWFA).

  11. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    Science.gov (United States)

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined, and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms, namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014, and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors, in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while the machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while the machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  12. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most widely used machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models remain essential, but current methods must be improved.

  13. Executive summary for assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.

    2010-04-01

    Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

  14. Fast and Accurate Modeling of Molecular Atomization Energies with Machine Learning

    CERN Document Server

    Rupp, Matthias; Müller, Klaus-Robert; von Lilienfeld, O Anatole

    2011-01-01

    We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a non-linear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross-validation over more than seven thousand small organic molecules yields a mean absolute error of ~10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves.
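
    A common way to realize this kind of charge-and-position regression is a Coulomb-matrix descriptor with kernel ridge regression; the sketch below illustrates that pipeline on toy molecules with made-up energies and should not be read as the authors' descriptor, kernel, or dataset.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)

def coulomb_matrix(Z, R, size=5):
    """Sorted-eigenvalue Coulomb-matrix descriptor, zero-padded to a fixed size."""
    n = len(Z)
    M = np.zeros((size, size))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return np.sort(np.linalg.eigvalsh(M))[::-1]

# Toy "molecules": random nuclear charges and positions with placeholder energies
descriptors, energies = [], []
for _ in range(200):
    n_atoms = rng.integers(2, 6)
    Z = rng.integers(1, 9, size=n_atoms).astype(float)
    R = rng.normal(size=(n_atoms, 3))
    descriptors.append(coulomb_matrix(Z, R))
    energies.append(-Z.sum() * 10.0 + rng.normal())     # invented target values

X, y = np.array(descriptors), np.array(energies)
model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05).fit(X[:150], y[:150])
print("test MAE:", np.mean(np.abs(model.predict(X[150:]) - y[150:])))
```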

  15. A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping

    Directory of Open Access Journals (Sweden)

    Wang Yan

    2014-01-01

    Full Text Available The main challenge that marine heterogeneous data integration faces is accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyses the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judgement. Finally, a data-mapping comparison experiment on East China Sea observation data confirms the effectiveness of the model and shows that the multianalyzer clearly improves the mapping error rate.

  16. MODELS OF FATIGUE LIFE CURVES IN FATIGUE LIFE CALCULATIONS OF MACHINE ELEMENTS – EXAMPLES OF RESEARCH

    Directory of Open Access Journals (Sweden)

    Grzegorz SZALA

    2014-03-01

    Full Text Available In this paper, an attempt is made to analyse models of fatigue life curves applicable to fatigue life calculations of machine elements. The analysis is limited to fatigue life curves in the stress approach, covering cyclic stresses from the ranges of low cycle fatigue (LCF), high cycle fatigue (HCF), the fatigue limit (FL) and giga-cycle fatigue (GCF) appearing in the loading spectrum at the same time. Selected models of the analysed fatigue life curves are illustrated with test results for steel and aluminium alloys.

  17. Sensitivity Analysis of a Spatio-Temporal Avalanche Forecasting Model Based on Support Vector Machines

    Science.gov (United States)

    Matasci, G.; Pozdnoukhov, A.; Kanevski, M.

    2009-04-01

    The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist in avalanche forecasting. While it is not straightforward to obtain stability factors directly with the available technologies, snow-pack profiles and especially meteorological parameters are becoming more and more available at finer spatial and temporal scales. Besides being very useful for improving physical modelling, these data are also of particular interest for contemporary data-driven machine learning techniques. Thus, the use of a support vector machine classifier opens the way to discriminating "safe" and "dangerous" conditions in the feature space of factors related to avalanche activity, based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations, pre-processed with heuristic and physical models into a high-dimensional, spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region, Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming "black-box" modelling, a disadvantage for which machine learning methods are often criticized. It explores what the data-driven support vector machine method has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting which important features led to a particular prediction (both in time and space), and presents an analysis of the sensitivity of the prediction with respect to the varying input parameters. The purpose of the
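
    To make the classifier concrete, here is a minimal sketch (scikit-learn SVC on synthetic weather/snowpack features; the feature list, the labelling rule, and the probability output are assumptions, not the Lochaber system):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Synthetic daily observations: [new snow (cm), wind speed (m/s), air temp (C), slope angle (deg)]
n = 600
X = np.column_stack([
    rng.gamma(2, 5, n), rng.gamma(2, 4, n), rng.normal(-3, 4, n), rng.uniform(25, 45, n)
])
# "Dangerous" days loosely tied to heavy snowfall plus strong wind (synthetic rule)
y = ((X[:, 0] > 15) & (X[:, 1] > 8)).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X[:500], y[:500])

# The probability of the "dangerous" class can then be mapped over a grid of terrain cells
today = np.array([[22.0, 11.0, -5.0, 38.0]])
print("P(dangerous) =", clf.predict_proba(today)[0, 1])
```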

  18. Research on Modeling and Control of Regenerative Braking for Brushless DC Machines Driven Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jian-ping Wen

    2015-01-01

    Full Text Available In order to improve the energy utilization rate of a battery-powered electric vehicle (EV) driven by a brushless DC machine (BLDCM), the model of the braking current generated during regenerative braking and its control method are discussed. On the basis of the equivalent circuit of the BLDCM during the regenerative braking period, the mathematical model of the braking current is established. By using an extended state observer (ESO) to observe the actual braking current and the unknown disturbances of the regenerative braking system, an active disturbance rejection controller (ADRC) for the braking current is developed. Experimental results show that the proposed method gives better recovery efficiency and is robust to disturbances.
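
    A minimal discrete-time sketch of a linear extended state observer of the kind mentioned here (a first-order current model with a lumped disturbance state; the plant parameters, observer bandwidth, and disturbance profile are invented for illustration):

```python
import numpy as np

# Assumed first-order braking-current model: L di/dt = u - R*i + f, with f a lumped disturbance
R, L, dt = 0.5, 1e-3, 1e-4
w0 = 1000.0                         # observer bandwidth (tuning assumption)
l1, l2 = 2 * w0 - R / L, w0 ** 2    # linear ESO gains from pole placement at -w0

i_true = 0.0
z1, z2 = 0.0, 0.0                   # ESO states: current estimate and estimate of f/L

for k in range(5000):
    t = k * dt
    u = 5.0                                   # constant control voltage (placeholder)
    f = 2.0 if t >= 0.05 else 0.0             # unknown step disturbance acting on the plant
    # Plant (explicit Euler)
    i_true += dt * (u - R * i_true + f) / L
    # Linear ESO driven by the measured-current error
    e = i_true - z1
    z1 += dt * ((u - R * z1) / L + z2 + l1 * e)
    z2 += dt * (l2 * e)

# The disturbance estimate (L * z2) can be fed forward by the braking-current controller
print("estimated vs true disturbance:", round(L * z2, 3), f)
```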

  19. A derivation of the generalized model of strains during bending of metal tubes at bending machines

    OpenAIRE

    Śloderbach Z.

    2014-01-01

    According to the postulate concerning a local change of the “actual active radius” with the bending angle in the bend zone, a generalized model of strain during metal tube bending was derived. The tubes are bent on tube bending machines by the wrapping method on a rotating template, with the use of a lubricated steel mandrel. The model is represented by three components of strain in analytic form, including displacement of the neutral axis. Generalization of th...

  20. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the complexity metric can be approximated as the degree of difficulty in predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their construction is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary for machine learning algorithms to reproduce the actual 3D model at a given precision.

  1. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Science.gov (United States)

    Jia, Lei; Yarlagadda, Ramya; Reed, Charles C

    2015-01-01

    The thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains experimentally measured thermostability data for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information on the point mutations, and amino acid physical properties were used for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.
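
    A compact sketch of this kind of multi-method comparison (scikit-learn classifiers cross-validated on synthetic mutation features such as a Rosetta-style ddG score; not the authors' data sets, features, or tuning):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)

# Synthetic mutants: [calculated ddG, burial, hydrophobicity change, volume change]
n = 800
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.3 * X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)  # 1 = destabilizing

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.2f}")
```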

  2. Predicting Freeway Work Zone Delays and Costs with a Hybrid Machine-Learning Model

    Directory of Open Access Journals (Sweden)

    Bo Du

    2017-01-01

    Full Text Available A hybrid machine-learning model, integrating an artificial neural network (ANN) and a support vector machine (SVM) model, is developed to predict spatiotemporal delays, subject to road geometry, number of lane closures, and work zone duration, in different periods of a day and on different days of the week. The model is user friendly, requiring minimal input from users, and with it the delays caused by a work zone at any location on a New Jersey freeway can be predicted. To this end, large amounts of data from different sources were collected to establish the relationship between the model inputs and outputs. A comparative analysis was conducted, and the results indicate that the proposed model outperforms others in terms of the lowest root mean square error (RMSE). The proposed hybrid model can be used to calculate contractor penalties in terms of cost overruns, as well as incentive reward schedules in the case of early work completion. Additionally, it can assist work zone planners in determining the best start and end times of a work zone for developing and evaluating traffic mitigation and management plans.
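
    One way to realize an ANN+SVM hybrid is to average the two regressors' predictions; the sketch below (scikit-learn on synthetic work-zone features) shows that pattern only and is not the authors' calibrated model or weighting scheme.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# Synthetic work zones: [lanes closed, duration (h), hourly demand (1000 veh), weekend flag]
n = 1000
X = np.column_stack([
    rng.integers(1, 4, n), rng.uniform(2, 12, n), rng.uniform(1, 6, n), rng.integers(0, 2, n)
]).astype(float)
delay = 5.0 * X[:, 0] * X[:, 2] + 2.0 * X[:, 1] + rng.normal(0, 3, n)   # toy delay (veh-h)

X_tr, X_te, y_tr, y_te = train_test_split(X, delay, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0).fit(X_tr, y_tr)
svm = SVR(C=100.0).fit(X_tr, y_tr)

# Hybrid prediction: simple average of the two models (the weighting scheme is an assumption)
y_hybrid = 0.5 * (ann.predict(X_te) + svm.predict(X_te))
print("hybrid RMSE:", mean_squared_error(y_te, y_hybrid) ** 0.5)
```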

  3. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains experimentally measured thermostability data for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information on the point mutations, and amino acid physical properties were used for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  4. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    Science.gov (United States)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  5. Computational Models of Financial Price Prediction: A Survey of Neural Networks, Kernel Machines and Evolutionary Computation Approaches

    Directory of Open Access Journals (Sweden)

    Javier Sandoval

    2011-12-01

    Full Text Available A review of the representative models of machine learning research applied to the foreign exchange rate and stock price prediction problem is conducted.  The article is organized as follows: The first section provides a context on the definitions and importance of foreign exchange rate and stock markets.  The second section reviews machine learning models for financial prediction focusing on neural networks, SVM and evolutionary methods. Lastly, the third section draws some conclusions.

  6. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech‐based human‐machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement achieved by the Language Models has also been explored. Finally, hierarchical Language Models have also been successfully employed in a language understanding task, as shown in an additional series of experiments.

  7. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement achieved by the Language Models has also been explored. Finally, hierarchical Language Models have also been successfully employed in a language understanding task, as shown in an additional series of experiments.

  8. EFFECT OF TOOL POLARITY ON THE MACHINING CHARACTERISTICS IN ELECTRIC DISCHARGE MACHINING OF SILVER STEEL AND STATISTICAL MODELLING OF THE PROCESS

    Directory of Open Access Journals (Sweden)

    DILSHAD AHMAD KHAN,

    2011-06-01

    Full Text Available Electric discharge machining (EDM) is a thermoelectric process in which electrical energy is converted into thermal energy, and this thermal energy is used for machining. It is common practice in EDM to make the tool negative and the work piece positive (direct polarity), but research shows that the reverse, in which the tool is positive and the work piece is negative (reverse polarity), is also possible; however, not much work has been carried out on reverse polarity to date. This paper discusses the effect of tool polarity on the machining characteristics in electric discharge machining of silver steel. High metal removal rate, low relative electrode wear and good surface finish are conflicting goals which cannot be achieved simultaneously with a particular combination of control settings. To achieve the best machining results, each goal has to be addressed separately in different phases of work, with different emphasis. A 3² factorial design has been used for planning the experimental conditions. Copper is used as the tool material and silver steel of 28 grade is selected as the work piece material, with positive and negative polarities. The effectiveness of the EDM process with silver steel is evaluated in terms of the metal removal rate (MRR), the percent relative electrode wear (%REW) and the surface roughness (SR) of the work piece produced at different current and pulse duration levels. In this experimental work spark erosion oil (trade name IPOL) is used as the dielectric and experiments have been conducted at a 50% duty factor. The study reveals that direct polarity is suitable for higher metal removal rate and lower relative electrode wear, but reverse polarity gives better surface finish as compared to direct polarity. Direct polarity gives 4-11 times more MRR and 5 times less relative electrode wear as compared to reverse polarity, and reverse polarity gives 1.3-2.7 times better surface finish as compared to direct polarity. Second order regression model is also
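
    The second-order (response-surface) regression over a 3² factorial design in current and pulse duration can be illustrated as follows (numpy least squares on synthetic MRR values; the levels and coefficients are placeholders, not the paper's measurements):

```python
import numpy as np

# 3^2 factorial design: peak current (A) and pulse duration (us) at three levels each
current = np.array([3.0, 6.0, 9.0])
pulse = np.array([50.0, 100.0, 150.0])
I, P = np.meshgrid(current, pulse)
I, P = I.ravel(), P.ravel()

# Synthetic metal removal rate observations for the nine runs (mm^3/min, placeholder)
rng = np.random.default_rng(8)
mrr = 2.0 + 1.5 * I + 0.02 * P + 0.05 * I * P - 0.04 * I**2 + rng.normal(0, 0.5, I.size)

# Second-order (quadratic) response-surface model fitted by least squares
A = np.column_stack([np.ones_like(I), I, P, I * P, I**2, P**2])
coef, *_ = np.linalg.lstsq(A, mrr, rcond=None)
names = ["1", "I", "P", "I*P", "I^2", "P^2"]
print(dict(zip(names, np.round(coef, 4))))
```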

  9. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    2011-07-27

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  10. Dynamic Modeling and Damping Function of GUPFC in Multi-Machine Power System

    Directory of Open Access Journals (Sweden)

    Sasongko Pramono Hadi

    2011-11-01

    Full Text Available This paper presents a new dynamic model of a multi-machine power system equipped with a GUPFC for power system studies; using a PSS and a GUPFC POD controller, effective control schemes are proposed to improve power system stability. Based on the UPFC configuration, an additional series boosting transformer is considered to define the GUPFC configuration and its mathematical model; the Phillips-Heffron scheme is used to formulate the machine model, and the network is modified to account for the GUPFC parameters, yielding a MIMO, comprehensive model of the power system with GUPFC. A genetic algorithm was used for the lead-lag compensator design, providing the controller parameters. The controllers produce supplementary signals: the PSS for the machine and the POD for the GUPFC. By applying a small disturbance, the dynamic stability of the power system was investigated. Simulation results show that the proposed power system model with GUPFC is valid and suitable for stability analysis. The installation of the GUPFC without POD decreased the oscillation damping. However, the results show that a GUPFC in the power system network, provided with PSS and POD controllers, has great potential to improve system stability. A 66% overshoot reduction could be reached and the settling time shortened to 12 s, although the rise time becomes 700 ms longer. Simulation results revealed that the role of the POD controller is more dominant than that of the PSS; however, the PSS and the GUPFC POD controller acting simultaneously show a positive interaction. The phase angle of converter C, δC, is the most significant POD control signal for oscillation damping.

  11. Adaptive brain shut-down counteracts neuroinflammation in the near-term ovine fetus

    Directory of Open Access Journals (Sweden)

    Alex eXU

    2014-06-01

    Full Text Available Objective: Repetitive umbilical cord occlusions (UCOs) in the ovine fetus leading to severe acidemia result in adaptive shut-down of electrocortical activity (ECOG) as well as systemic and brain inflammation. We hypothesized that fetuses with earlier ECOG shut-down, as a neuroprotective mechanism in response to repetitive UCOs, will show less brain inflammation and, moreover, that chronic hypoxia will impact this relationship. Methods: Near-term fetal sheep were chronically instrumented with ECOG leads, vascular catheters and a cord occluder and then underwent repetitive UCOs for up to 4 hours or until fetal arterial pH was < 7.00. Eight animals, hypoxic prior to the UCOs (SaO2 < 55%), were allowed to recover 24 hours post insult, while 14 animals, five of which were also chronically hypoxic, were allowed to recover 48 hours post insult, after which brains were perfusion-fixed. The time of ECOG shut-down and the corresponding pH were noted, as well as the time to then reach pH < 7.00 (ΔT). Microglia (MG) were counted as a measure of inflammation in grey matter layers 4-6 (GM4-6), where most ECOG activity is generated. Results are reported as mean±SEM, with significance at p < 0.05. Results: Repetitive UCOs resulted in worsening acidosis over 3 to 4 hours, with arterial pH decreasing to 6.97±0.02 in all UCO group animals and recovering to baseline by 24 hours. ECOG shut-down occurred 52±7 min before reaching pH < 7.00, at pH 7.23±0.02, across the animal groups. MG counts were inversely correlated with ΔT in the 24 hour recovery animals (R=-0.84), as expected. This was not the case in the normoxic 48 hour recovery animals, and, surprisingly, in the hypoxic 48 hour recovery animals this relationship was reversed (R=0.90). Conclusion: Adaptive brain shut-down during labour-like worsening acidemia counteracts neuroinflammation in a hypoxia- and time-dependent manner.

  12. Time scales of autonomic information flow in near-term fetal sheep

    Directory of Open Access Journals (Sweden)

    Martin eFrasch

    2012-09-01

    Full Text Available Autonomic information flow (AIF) characterizes fetal heart rate (FHR) variability (fHRV) in the time-scale-dependent complexity domain and discriminates sleep states (high voltage/low frequency (HV/LF) and low voltage/high frequency (LV/HF) electrocortical activity). However, the physiologic relationship of AIF time scales to the underlying sympathetic and vagal rhythms is not known. Understanding this relationship will enhance the benefits derived from using fHRV to monitor fetal health non-invasively. We analyzed AIF, measured as Kullback-Leibler entropy, in fetal sheep in late gestation as a function of vagal and sympathetic modulation of fHRV, using atropine and propranolol respectively (n=6), and also analyzed changes in fHRV during sleep states (n=12). Atropine blockade resulted in a complexity decrease at 2.5 Hz compared to the baseline HV/LF and LV/HF states and at 1.6 Hz compared to LV/HF. Propranolol blockade resulted in a complexity increase in the 0.8-1 Hz range compared to LV/HF and in no changes when compared to HV/LF. During LV/HF state activity, fHRV complexity was lower at 2.5 Hz and higher at 0.15-0.19 Hz than during HV/LF. Our findings show that in mature fetuses near term, vagal activity contributes to fHRV complexity on a wider range of time scales than sympathetic activity. Related to sleep, during LV/HF we found lower complexity at the short-term time scale where complexity is also decreased due to vagal blockade. We conclude that vagal and sympathetic modulations of fHRV show sleep state-dependent and time scale-dependent complexity patterns captured by AIF analysis of fHRV. Specifically, we observed a vagally mediated and sleep state-dependent change in these patterns at a time scale around 2.5 Hz (0.2 seconds). A paradigm of state-dependent nonlinear sympathovagal modulation of fHRV is discussed.

  13. Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation

    Science.gov (United States)

    Lenard, Roger X.; Andrews, Dana G.

    2007-06-01

    Interstellar transportation to nearby star systems over periods shorter than the human lifetime requires speeds in the range of 0.1-0.15 c and relatively high accelerations. These speeds are not attainable using rockets, even with advanced fusion engines, because at these velocities the energy density of the spacecraft approaches the energy density of the fuel. Anti-matter engines are theoretically possible, but current physical limitations would have to be suspended to get the mass densities required. Interstellar ramjets have not proven practicable, so this leaves beamed momentum propulsion or a continuously fueled Mag-Orion system as the remaining candidates. However, deceleration is also a major issue, but part of the Mini-Mag Orion approach assists in solving this problem. This paper reviews the state of the art from Phase I and II SBIR work between Sandia National Laboratories and Andrews Space, applying our results to near-term interstellar travel. A 1000 T crewed spacecraft and propulsion system dry mass at 0.1 c contains ~9×10^21 J. The author has generated technology requirements elsewhere for use of fission power reactors and conventional Brayton cycle machinery to propel a spacecraft using electric propulsion. Here we replace the electric power conversion, radiators, power generators and electric thrusters with a Mini-Mag Orion fission-fusion hybrid. Only a small fraction of the fission fuel is actually carried with the spacecraft; the remainder of the propellant (macro-particles of fissionable material with a D-T core) is beamed to the spacecraft, and the total beam energy requirement for an interstellar probe mission is roughly 10^20 J, which would require the complete fissioning of 1000 tons of uranium assuming 35% power plant efficiency. This is roughly equivalent to a recurring cost per flight of 3.0 billion dollars in reactor-grade enriched uranium at today's prices. Therefore, interstellar flight is an expensive proposition, but not unaffordable, if the

  14. Response of the Kuroshio Extension path state to near-term global warming in CMIP5 experiments with MIROC4h

    Science.gov (United States)

    Li, Rui; Jing, Zhao; Chen, Zhaohui; Wu, Lixin

    2017-04-01

    In this study, responses of the Kuroshio Extension (KE) path state to near-term (2006-2035) global warming are investigated using a Kuroshio-resolving atmosphere-ocean coupled model. Under the representative concentration pathway 4.5 (RCP4.5) forcing, the KE system is intensified and its path state tends to move northward and becomes more stable. It is suggested that the local anticyclonic wind stress anomalies in the KE region favor the spin-up of the southern recirculation gyre, and the remote effect induced by the anticyclonic wind stress anomalies over the central and eastern midlatitude North Pacific also contributes to the stabilization of the KE system substantially. The dominant role of wind stress forcing on KE variability under near-term global warming is further confirmed by adopting a linear 1.5 layer reduced-gravity model forced by wind stress curl field from the present climate model. It is also found that the main contributing longitudinal band for KE index (KEI) moves westward in response to the warmed climate. This results from the northwestward expansion of the large-scale sea level pressure (SLP) field.

  15. Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.

    Science.gov (United States)

    Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron

    2016-02-01

    Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof-of-principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1-Fragment crystallizable glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power in this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold in terms of information quality and volume to grant enough discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments.
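
    A minimal sketch of the kind of workflow described above: a linear SVM classifying samples from analytical features, plus mutual-information feature ranking. The data, sample counts and feature dimension below are invented placeholders, not the IgG1-Fc measurements from the study.

```python
# Sketch of an SVM-based biosimilarity workflow: classify samples from analytical
# features and rank features by mutual information. Synthetic placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_per_class, n_classes, n_features = 9, 8, 200          # assumed sizes
X = rng.normal(size=(n_per_class * n_classes, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X[np.arange(len(y)), y] += 3.0                           # inject class-specific signal

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=3).mean()            # classification accuracy test
mi = mutual_info_classif(X, y, random_state=0)           # feature ranking by mutual information
top = np.argsort(mi)[::-1][:10]
print(f"CV accuracy: {acc:.2f}; top features by MI: {top}")
```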

  16. Experimental force modeling for deformation machining stretching mode for aluminum alloys

    Indian Academy of Sciences (India)

    ARSHPREET SINGH; ANUPAM AGRAWAL

    2017-02-01

    Deformation machining is a hybrid process that combines two manufacturing processes—thin structure machining and single-point incremental forming. This process enables the creation of complex structures and geometries, which would be rather difficult or sometimes impossible to manufacture. A comprehensive experimental study of forces induced in deformation machining stretching mode has been performedin the present work. A table-type force dynamometer has been used to record the deforming forces in three Cartesian directions. The influence of five process parameters—floor thickness, tool diameter, wall angle,incremental step size, and floor size on the deforming forces—is investigated. Individual as well as combined empirical models of the parameters with regard to the forces have been formed. The results of this study indicatethat the average resultant force primarily depends on the floor thickness to be deformed and the incremental depth in the tool path. This could be due to the variation in local stiffness of the sheet with change in floor thickness. The effect of tool diameter, deforming wall angle, and floor size is not significant.

  17. Near Term Hybrid Passenger Vehicle Development Program. Phase I, Final report. Appendix B: trade-off studies. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Traversi, M.; Piccolo, R.

    1979-06-11

    Trade-off studies of Near Term Hybrid Vehicle (NTHV) design elements were performed to identify the most promising design concept in terms of achievable petroleum savings. The activities in these studies are described. The results are presented as preliminary NTHV body design, expected fuel consumption as a function of vehicle speed, engine requirements, battery requirements, and vehicle reliability and cost. (LCL)

  18. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the operating comfort prediction model based on GEP is fast and efficient, has good prediction accuracy, and can improve design efficiency.
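
    The sketch below mirrors the pipeline shape described above: factor analysis compresses the joint angles to four comfort impact factors, and a regression model maps the factors to a comfort score. A GEP engine would supply the symbolic model; a gradient-boosting regressor is used here purely as a stand-in, and all data are synthetic.

```python
# Sketch: factor analysis (16 joint angles -> 4 factors) followed by a regressor
# that predicts a comfort score. The regressor is a stand-in for GEP; mock data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
angles = rng.uniform(0, 120, size=(22, 16))        # 22 evaluated postures, 16 joint angles
comfort = 10 - 0.05 * angles[:, :4].sum(axis=1) + rng.normal(0, 0.3, 22)  # mock scores

factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(angles)
model = GradientBoostingRegressor(random_state=0).fit(factors, comfort)
print("training R^2:", round(model.score(factors, comfort), 3))
```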

  19. Machine Learning Model of the Swift/BAT Trigger Algorithm for Long GRB Population Studies

    CERN Document Server

    Graff, Philip B; Baker, John G; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien 2014 is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\\gtrsim97\\%$ ($\\lesssim 3\\%$ error), which is a significant improvement on a cut in GRB flux which has an accuracy of $89.6\\%$ ($10.4\\%$ error). These models are then used to measure the detection efficiency of Swift as a function of redshift $z$, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \\sim 0.48^{+0.41}_{-0.23} \\ {\\rm Gpc}^{-3} {\\...
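
    A hedged sketch of the kind of model comparison described above, scoring a random forest, AdaBoost, an SVM and a small neural network on a synthetic stand-in for the simulated GRB sample; none of the data or settings come from the study.

```python
# Sketch of the classifier comparison (random forest, AdaBoost, SVM, neural net)
# on placeholder "triggered / not triggered" data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # placeholder data
models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "neural net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:14s} accuracy: {acc:.3f}")
```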

  20. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.

  1. Analog models of computations & Effective Church Turing Thesis: Efficient simulation of Turing machines by the General Purpose Analog Computer

    CERN Document Server

    Pouly, Amaury; Graça, Daniel S

    2012-01-01

    Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model for Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that the realistic models of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely we show that, modulo...

  2. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
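
    The evaluation metric named above, the true skill statistic, is TSS = TP/(TP+FN) - FP/(FP+TN). The sketch below computes it for a k-NN flare/no-flare classifier on placeholder data; the feature count and class balance are assumptions.

```python
# Sketch: k-NN flare/no-flare classifier scored with the true skill statistic,
# TSS = TP/(TP+FN) - FP/(FP+TN). Placeholder data, not solar observations.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=3000, n_features=60, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, shuffle=True, random_state=0)

y_pred = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
tss = tp / (tp + fn) - fp / (fp + tn)
print(f"true skill statistic: {tss:.2f}")
```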

  3. One- and two-dimensional Stirling machine simulation using experimentally generated reversing flow turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, L.F. [Univ. of Minnesota, Minneapolis, MN (United States)

    1990-08-01

    The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR and, second, to adapt a two-dimensional fluid mechanics code, which had been developed for simulating high Rayleigh number buoyant cavity flows, to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.

  4. A Case Study of Employing A Single Server Nonpreemptive Priority Queuing Model at ATM Machine

    Directory of Open Access Journals (Sweden)

    Abdullah Furquan

    2015-08-01

    Full Text Available This paper discusses a case study of employing a single-server nonpreemptive priority queuing model [1] at an ATM machine which originally operates as an M/M/1 system. In this study we have taken two priority classes of people, in the following order: priority class 1 - women; priority class 2 - men. Sometimes a long queue forms at the ATM machine (a single server), but the bank management does not have enough money to invest in installing a new ATM machine. In this situation we want to apply the single-server nonpreemptive priority queuing model. The security guard at the ATM will divide the customers into the two categories and arrange them in the above priority order. Thus priority class 1 people will receive the ATM service ahead of priority class 2 people. This will reduce the waiting time of priority class 1 people. Of course, by doing this the waiting time of priority class 2 will increase. This is acceptable as long as the increase in waiting time of priority class 2 people is reasonable and within the tolerable limit of priority class 2 people, which will be true when the percentage of priority class 1 people is relatively small compared to priority class 2 people. To know the attitude and tolerable limit of priority class 2 people towards the single-server nonpreemptive priority model, a sample survey has been done on the incoming priority class 2 population at the ATM machine. Against this background, the queuing process is employed, with emphasis on the Poisson distribution, to assess the waiting time. The data for this study were collected from a primary source and are limited to the ATM service point of the State Bank of India located at Ramesh Chowk, Aurangabad, Bihar, India. The assistance of three colleagues was sought in collecting the data. The interarrival time and service time data were collected during busy working hours (i.e., 10:30 am to 4:00 pm) during the first 60 days. A sample survey was done to know the attitude and tolerable limit of priority class 2 people towards the single server
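
    For a single-server nonpreemptive priority queue with Poisson arrivals and exponential service, the mean queueing delay of class k is commonly written as Wq_k = R / ((1 - s_{k-1})(1 - s_k)), where R = Σ λ_i E[S_i²]/2 and s_k is the cumulative utilization of classes 1..k. The sketch below evaluates these textbook formulas; the arrival and service rates are illustrative, not the values observed in the survey.

```python
# Sketch of nonpreemptive priority M/M/1 waiting-time formulas:
# Wq_k = R / ((1 - s_{k-1}) * (1 - s_k)), R = sum_i lambda_i * E[S_i^2] / 2,
# s_k = cumulative utilization of classes 1..k. Illustrative rates only.
def nonpreemptive_priority_wq(arrival_rates, mu):
    """Mean waiting times per class (highest priority first), service rate mu."""
    rho = [lam / mu for lam in arrival_rates]
    assert sum(rho) < 1, "queue must be stable"
    R = sum(lam * 2 / mu**2 for lam in arrival_rates) / 2   # E[S^2] = 2/mu^2 for exponential service
    wq, cum = [], 0.0
    for r in rho:
        wq.append(R / ((1 - cum) * (1 - cum - r)))
        cum += r
    return wq

# class 1 (women) arrives less often than class 2 (men); rates are per minute
wq_women, wq_men = nonpreemptive_priority_wq(arrival_rates=[0.1, 0.4], mu=0.8)
print(f"mean wait, class 1: {wq_women:.2f} min; class 2: {wq_men:.2f} min")
```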

  5. Hemodynamic modelling of BOLD fMRI - A machine learning approach

    DEFF Research Database (Denmark)

    Jacobsen, Danjal Jakup

    2007-01-01

    This Ph.D. thesis concerns the application of machine learning methods to hemodynamic models for BOLD fMRI data. Several such models have been proposed by different researchers, and they have in common a basis in physiological knowledge of the hemodynamic processes involved in the generation of the BOLD signal. The BOLD signal is modelled as a non-linear function of underlying, hidden (non-measurable) hemodynamic state variables. The focus of this thesis work has been to develop methods for learning the parameters of such models, both in their traditional formulation and in a state space formulation. In the latter, noise enters at the level of the hidden states, as well as in the BOLD measurements themselves. A framework has been developed to allow approximate posterior distributions of model parameters to be learned from real fMRI data. This is accomplished with Markov chain Monte Carlo...

  6. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, the features are extracted by time and time-frequency domain methods. However, the extracted original features are still high dimensional and include superfluous information, so the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, and the Cao method is used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.

  7. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    Science.gov (United States)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and, more recently, machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data to conduct physically-based methods. Currently, there is no best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied for regional scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested for three areas in the province of Lower Austria, Austria. The areas are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were, for the majority, not significantly different from each other; depending on the area of interest, differences in the overall median estimated area under the receiver operating characteristic curve (AUROC) ranged from 2.9 to 8.9 percentage points, and differences in the overall median estimated true positive rate (TPR), measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan
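
    A small sketch of the kind of comparison described above: logistic regression (the GLM), an SVM and a random forest scored by cross-validated AUROC on placeholder landslide/non-landslide data. Ordinary k-fold is used here in place of the spatial cross-validation applied in the study.

```python
# Sketch: compare GLM (logistic regression), SVM and random forest by
# cross-validated AUROC on placeholder susceptibility data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_features=12, random_state=0)  # terrain-attribute stand-in
models = {
    "GLM (logistic)": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in models.items():
    auroc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:15s} AUROC: {auroc:.3f}")
```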

  8. Prediction of effluent concentration in a wastewater treatment plant using machine learning models.

    Science.gov (United States)

    Guo, Hong; Jeong, Kwanho; Lim, Jiyeon; Jo, Jeongwon; Kim, Young Mo; Park, Jong-pyo; Kim, Joon Ha; Cho, Kyung Hwa

    2015-06-01

    With the growing amount of food waste, integrated food waste and wastewater treatment has been regarded as an efficient approach. However, the load of food waste on the conventional waste treatment process might lead to a high concentration of total nitrogen (T-N) that impacts the effluent water quality. The objective of this study is to establish two machine learning models, artificial neural networks (ANNs) and support vector machines (SVMs), in order to predict the 1-day-interval T-N concentration of effluent from a wastewater treatment plant in Ulsan, Korea. Daily water quality data and meteorological data were used, and the performance of both models was evaluated in terms of the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), and relative efficiency criteria (drel). Additionally, Latin-Hypercube one-factor-at-a-time (LH-OAT) and a pattern search algorithm were applied to sensitivity analysis and model parameter optimization, respectively. Results showed that both models could be effectively applied to the 1-day-interval prediction of the T-N concentration of effluent. The SVM model showed higher prediction accuracy in the training stage and similar results in the validation stage. However, the sensitivity analysis demonstrated that the ANN model was the superior model for 1-day-interval T-N concentration prediction in terms of the cause-and-effect relationship between the T-N concentration and the model input values of the integrated food waste and wastewater treatment. This study suggests an efficient and robust nonlinear time-series modeling method for early prediction of the water quality of an integrated food waste and wastewater treatment process. Copyright © 2015. Published by Elsevier B.V.
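
    A minimal sketch of the evaluation described above: a support vector regressor for next-day T-N concentration scored with the Nash-Sutcliffe efficiency, NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))². The inputs and concentrations are synthetic placeholders, not the Ulsan plant data.

```python
# Sketch: SVR predicting next-day T-N concentration, scored with the
# Nash-Sutcliffe efficiency on hold-out data. Synthetic placeholder data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                # water-quality and weather inputs (mock)
y = 5 + X @ rng.normal(size=6) + rng.normal(0, 0.5, 500)     # mock T-N concentration
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

sim = SVR(C=10.0).fit(X_tr, y_tr).predict(X_te)
nse = 1 - np.sum((y_te - sim) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"NSE on hold-out data: {nse:.3f}")
```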

  9. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is worth looking into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. their uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees, etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
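
    A hedged sketch of approach (a) above: regressing past model errors on the inputs thought to drive them, at two quantiles, to obtain a residual-uncertainty band. Gradient boosting with a quantile loss stands in for the linear quantile regression of Koenker and Bassett; the error series is synthetic.

```python
# Sketch: quantile regression on past model errors to give a residual-uncertainty
# band. Gradient boosting with quantile loss is used as a stand-in; mock data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(800, 3))                  # model inputs thought to drive the error
residuals = rng.normal(0, 0.2 + 0.1 * X[:, 0], 800)    # past model errors (heteroscedastic, mock)

q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X, residuals)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X, residuals)

x_new = np.array([[8.0, 2.0, 1.0]])
print("90% residual band:", q_lo.predict(x_new)[0], "to", q_hi.predict(x_new)[0])
```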

  10. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    Science.gov (United States)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  11. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    Science.gov (United States)

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, are adversely affecting human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When faced with the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships to identify possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, some machine learning techniques were introduced to build more effective predictive models. In this review we will focus our attention on some recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of the validation and performance assessment of the built models as well as their applicability domains will be discussed.

  12. Three-Phase Unbalanced Transient Dynamics and Powerflow for Modeling Distribution Systems With Synchronous Machines

    Energy Technology Data Exchange (ETDEWEB)

    Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.

    2016-01-01

    Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but it is difficult to model large systems because of the required detail. This paper proposes an electromechanical transient model of synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.

  13. A Model-based Analysis of Impulsivity Using a Slot-Machine Gambling Paradigm

    Directory of Open Access Journals (Sweden)

    Saee ePaliwal

    2014-07-01

    Full Text Available Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling. Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases, machine switches, casino switches and double-ups. Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e. the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to impulsivity. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future assessments of pathological gambling.

  14. A model-based analysis of impulsivity using a slot-machine gambling paradigm.

    Science.gov (United States)

    Paliwal, Saee; Petzschner, Frederike H; Schmitz, Anna Katharina; Tittgemeyer, Marc; Stephan, Klaas E

    2014-01-01

    Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling (PG). Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases (BIs), machines switches (MS), casino switches (CS), and double-ups (DUs). Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e., the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning (RL) models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to the impulsive traits of an individual. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future

  15. The Model of Information Support for Management of Investment Attractiveness of Machine-Building Enterprises

    Directory of Open Access Journals (Sweden)

    Chernetska Olga V.

    2016-11-01

    Full Text Available The article discloses the content of the definition of “information support”, identifies basic approaches to the interpretation of this economic category. The main purpose of information support for management of enterprise investment attractiveness is determined. The key components of information support for management of enterprise investment attractiveness are studied. The main types of automated information systems for management of the investment attractiveness of enterprises are identified and characterized. The basic computer programs for assessing the level of investment attractiveness of enterprises are considered. A model of information support for management of investment attractiveness of machine-building enterprises is developed.

  16. Discriminative feature-rich models for syntax-based machine translation.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD "Discriminative Feature-Rich Models for Syntax-Based Machine Translation," which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  17. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    Science.gov (United States)

    Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-01

    According to the strong nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  18. A machine learning approach to the potential-field method for implicit modeling of geological structures

    Science.gov (United States)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates to which side of a geological boundary a given point belongs, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
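
    A sketch of the recasting described above: implicit modeling as multi-class classification of (x, y, z) points into geological units, with predicted class probabilities serving as a simple uncertainty proxy. A random forest stands in for the authors' potential-field formulation, and the point data are invented.

```python
# Sketch: classify (x, y, z) points into geological units and use the predicted
# class probabilities as a rough uncertainty proxy. Invented placeholder points.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(300, 3))                   # sampled contact/borehole points
units = (pts[:, 2] > 40).astype(int) + (pts[:, 2] > 70)    # 3 stacked units defined by depth (mock)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(pts, units)
grid = np.array([[50, 50, z] for z in (10, 45, 75)])        # query points on a modeling grid
proba = clf.predict_proba(grid)
for p, row in zip(grid, proba):
    print(f"point z={p[2]:>4.0f}: unit={row.argmax()}, uncertainty={1 - row.max():.2f}")
```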

  19. Research on Dynamic Modeling and Application of Kinetic Contact Interface in Machine Tool

    Directory of Open Access Journals (Sweden)

    Dan Xu

    2016-01-01

    Full Text Available A method combining theoretical analysis and experiment is presented to obtain the equivalent dynamic parameters of a linear guideway through four detailed steps. Through statics analysis, vibration model analysis, dynamic experiment, and parameter identification, the dynamic modeling of the linear guideway is studied synthetically. Based on contact mechanics and elastic mechanics, the mathematical vibration model and the expressions for the basic mode frequency are deduced. Then, the equivalent stiffness and damping of the guideway are obtained by means of a single-degree-of-freedom mode fitting method. Moreover, the investigation above is applied to a gantry-type machining center; through comparison of the simulation model with experimental results, both applicability and correctness are validated.

  20. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    Science.gov (United States)

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow for estimation of the soil water content at the specified soil water potentials: -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. The Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the model parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when the root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
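
    A small sketch of the ν-SVM versus C-SVM comparison for a point pedotransfer function: predicting water content at one potential from texture, porosity and bulk density. The soil data are synthetic placeholders, and the genetic-algorithm hyperparameter search is omitted.

```python
# Sketch: nu-SVR vs. C-SVR as point pedotransfer functions, compared by
# cross-validated R^2. Synthetic placeholder soil data.
import numpy as np
from sklearn.svm import NuSVR, SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))                       # sand, silt, clay fractions, porosity, bulk density
y = 0.45 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.02, 200)   # mock water content at one potential

for name, reg in {"nu-SVR": NuSVR(nu=0.5, C=10.0), "C-SVR": SVR(C=10.0, epsilon=0.01)}.items():
    r2 = cross_val_score(reg, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {r2:.2f}")
```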

  1. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Full Text Available Machining feature recognition provides the necessary platform for computer aided process planning (CAPP) and plays a key role in the integration of computer aided design (CAD) and computer aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometrical data of a CAD model present in the form of Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the manufacturing activities needed to manufacture the part. Software in the Microsoft Visual C++ environment is developed to recognize the features, where the geometric information of the part is extracted from the CAD model. By using these data, an output file, i.e., a text file, is generated, which gives all the machinable features present in the part. This process has been tested on various parts and successfully extracted all the features

  2. Predictive Models for Different Roughness Parameters During Machining Process of Peek Composites Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Mata-Cabrera Francisco

    2013-10-01

    Full Text Available Polyetheretherketone (PEEK) composite belongs to a group of high-performance thermoplastic polymers and is widely used in structural components. To improve the mechanical and tribological properties, short fibers are added as reinforcement to the material. Due to its functional properties and potential applications, it is important to investigate the machinability of non-reinforced PEEK (PEEK), PEEK reinforced with 30% carbon fibers (PEEK CF30), and PEEK reinforced with 30% glass fibers (PEEK GF30) to determine the optimal conditions for the manufacture of parts. The present study establishes the relationship between the cutting conditions (cutting speed and feed rate) and the roughness (Ra, Rt, Rq, Rp) by developing second-order mathematical models. The experiments were planned as per a full factorial design of experiments, and an analysis of variance has been performed to check the adequacy of the models. These confirm the adequacy of the derived models for obtaining predictions of the roughness parameters within the ranges of parameters investigated during the experiments. The experimental results show that the most influential cutting parameter is the feed rate; furthermore, they show that glass fiber reinforcements produce worse machinability.
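
    A sketch of a second-order response-surface model of the kind described above: roughness Ra as a quadratic function of cutting speed and feed rate, fitted by least squares. The factorial grid and Ra values are invented placeholders.

```python
# Sketch: second-order (quadratic) response-surface model of Ra vs. cutting speed
# and feed rate, fitted by least squares on a mock full-factorial grid.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

speed, feed = np.meshgrid([200, 300, 400], [0.05, 0.10, 0.15, 0.20])   # m/min, mm/rev (mock grid)
X = np.column_stack([speed.ravel(), feed.ravel()])
ra = (0.4 + 25 * X[:, 1] + 0.0005 * X[:, 0] + 40 * X[:, 1] ** 2
      + np.random.default_rng(0).normal(0, 0.05, len(X)))               # mock measured Ra

rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression()).fit(X, ra)
print("predicted Ra at v=350 m/min, f=0.12 mm/rev:", round(rsm.predict([[350, 0.12]])[0], 3))
```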

  3. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    OpenAIRE

    Ladeesh V. G.; Manu R

    2017-01-01

    Grinding aided electrochemical discharge machining is a hybrid technique, which combines the grinding action of an abrasive tool and thermal effects of electrochemical discharges to remove material from the workpiece for producing complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and attempts to develop a mathematical model for surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of mac...

  4. Mirrored Language Structure and Innate Logic of the Human Brain as a Computable Model of the Oracle Turing Machine

    CERN Document Server

    Wen, Han Xiao

    2010-01-01

    We wish to present a mirrored language structure (MLS) and four logic rules determined by this structure for the model of a computable Oracle Turing machine. MLS has novel features that are of considerable biological and computational significance. It suggests an algorithm of relation learning and recognition (RLR) that enables the deterministic computers to simulate the mechanism of the Oracle Turing machine, or P = NP in a mathematical term.

  5. Modelling Soil Water Retention Using Support Vector Machines with Genetic Algorithm Optimisation

    Directory of Open Access Journals (Sweden)

    Krzysztof Lamorski

    2014-01-01

    Full Text Available This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow for estimation of the soil water content at the specified soil water potentials: –0.98, –3.10, –9.81, –31.02, –491.66, and –1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. The Support Vector Machines (SVM) methodology was used for model development. A new methodology for elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the model parameter search, genetic algorithms were used as an optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed for the development of models with better prediction capabilities. This new aim function avoids the overestimation of models that is typically encountered when the root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data. The achieved coefficients of determination were in the range 0.67–0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.

  6. Machine Learning Techniques for Single Nucleotide Polymorphism—Disease Classification Models in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Cristian R. Munteanu

    2010-07-01

    Full Text Available Single nucleotide polymorphisms (SNPs) can be used as inputs in disease computational studies such as pattern searching and classification models. Schizophrenia is an example of a complex disease with an important social impact. The multiple causes of this disease create the need for new genetic or proteomic patterns that can diagnose patients using biological information. This work presents a computational study of disease machine learning classification models using only single nucleotide polymorphisms at the HTR2A and DRD3 genes from Galician (Northwest Spain) schizophrenic patients. These classification models establish for the first time, to the best knowledge of the authors, a relationship between the sequence of the nucleic acid molecule and schizophrenia (Quantitative Genotype – Disease Relationships) that can automatically recognize schizophrenia DNA sequences and correctly classify between 78.3–93.8% of schizophrenia subjects when using datasets which include simulated negative subjects and a linear artificial neural network.

  7. Physics-Informed Machine Learning for Predictive Turbulence Modeling: A Priori Assessment of Prediction Confidence

    CERN Document Server

    Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-01-01

    Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...
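
    A sketch of the two a priori confidence checks named above: the Mahalanobis distance of a test flow's feature vector from the training-flow feature distribution, and a kernel density estimate of the training density at the test point. The feature vectors are random placeholders rather than actual mean-flow variables.

```python
# Sketch: Mahalanobis distance and kernel density estimate as a priori checks of
# how close a prediction flow's features are to the training flows. Mock features.
import numpy as np
from scipy.spatial.distance import mahalanobis
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
train_features = rng.normal(size=(5000, 3))      # mean-flow feature vectors from training flows (mock)
test_point = np.array([2.5, 0.0, -1.0])          # feature vector from the prediction flow (mock)

mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))
d = mahalanobis(test_point, mean, cov_inv)

kde = gaussian_kde(train_features.T)             # KDE over the training feature space
density = kde(test_point.reshape(3, 1))[0]
print(f"Mahalanobis distance: {d:.2f}, training density at test point: {density:.4f}")
```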

  8. Modeling and Control of Hybrid Machine Systems——a Five-bar Mechanism Case

    Institute of Scientific and Technical Information of China (English)

    Hongnian Yu

    2006-01-01

    A hybrid machine (HM), as a typical mechatronic device, is a useful tool to generate smooth motion; it combines the motion of a large constant-speed motor with that of a small servo motor by means of a mechanical linkage mechanism, in order to provide a powerful programmable drive system. To achieve the design objectives, a control system is required. To design a better control system and analyze the performance of an HM, a dynamic model is necessary. This paper first develops a dynamic model of an HM with a five-bar mechanism using a Lagrangian formulation. Then, several important properties, which are very useful in system analysis and control system design, are presented. Based on the developed dynamic model, two control approaches, computed torque, and combined computed torque and sliding mode control, are adopted to control the HM system. Simulation results demonstrate the control performance and limitations of each control approach.

  9. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method has been investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries like rolling mills, textiles, and railway traction. These systems are multi-domain and multi-time-scale and present very strong internal and external couplings, with non-linearity characterized by a high model order. A classical study with an analytic model is difficult to manipulate and is limited to some performance aspects. In this study, a "systemic approach" is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out by Bond Graph and results are discussed to show the performance of this methodology

  10. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Felix F. Gonzalez-Navarro

    2016-10-01

    Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
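
    A sketch of the optimisation step described above: maximising a fitted response model over temperature, benzoquinone, pH and glucose with simulated annealing. The surrogate function and bounds below are mock assumptions, not the trained biosensor model.

```python
# Sketch: simulated annealing (scipy's dual_annealing) maximising a mock
# surrogate of the biosensor's amperometric response over operating conditions.
import numpy as np
from scipy.optimize import dual_annealing

def predicted_current(x):
    temp, benzo, ph, glucose = x                      # operating conditions
    return (np.exp(-((temp - 35) / 10) ** 2) * np.exp(-((ph - 7) / 1.5) ** 2)
            * glucose / (glucose + 5) * (0.5 + benzo))

bounds = [(20, 50), (0.1, 1.0), (5, 9), (1, 30)]      # temperature, benzoquinone, pH, glucose
result = dual_annealing(lambda x: -predicted_current(x), bounds, seed=0)
print("optimal conditions:", np.round(result.x, 2), "max predicted signal:", round(-result.fun, 3))
```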

  11. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods †

    Science.gov (United States)

    Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.

    2016-01-01

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165

  12. Modelling and simulation for table tennis referee regulation based on finite state machine.

    Science.gov (United States)

    Cui, Jianjiang; Liu, Zixuan; Xu, Long

    2017-10-01

    As referees' decisions are made manually in traditional table tennis matches, many factors in a match, such as fatigue and subjective tendency, may lead to unjust decisions. Based on a finite state machine (FSM), this paper presents a model for table tennis referee regulation to substitute for manual decisions. In this model, the trajectory of the ball is recorded through a binocular visual system, while the complete rules, extracted from the International Table Tennis Federation (ITTF) rules, are described based on the FSM. The final decision for the competition is made based on expert system theory. Simulation results show that the proposed model has high accuracy and can be generalised to other similar games such as badminton, volleyball, etc.
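
    A toy finite state machine illustrating how rally rules can be encoded as states and transitions, in the spirit of the model above; the states and events are a simplified invention, not the full ITTF rule set.

```python
# Toy FSM for a table tennis rally: (state, event) -> next state.
# Simplified, invented rule fragment; not the actual ITTF rules.
RALLY_FSM = {
    ("serve", "ball_hits_server_court"): "over_net",
    ("serve", "ball_misses_table"): "point_receiver",
    ("over_net", "ball_hits_receiver_court"): "receiver_to_return",
    ("over_net", "ball_misses_table"): "point_receiver",
    ("receiver_to_return", "return_over_net"): "over_net_return",
    ("receiver_to_return", "no_return"): "point_server",
    ("over_net_return", "ball_hits_server_court"): "server_to_return",
    ("over_net_return", "ball_misses_table"): "point_server",
}

def referee(events, state="serve"):
    """Run trajectory-derived events through the FSM and return the decision."""
    for ev in events:
        state = RALLY_FSM.get((state, ev), state)
        if state.startswith("point_"):
            return state
    return "rally_continues"

print(referee(["ball_hits_server_court", "ball_hits_receiver_court", "no_return"]))
# -> point_server
```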

  13. Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.

    Science.gov (United States)

    Komasi, Mehdi; Sharghi, Soroush

    2016-01-01

    Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has rapidly grown in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency sub-series by wavelet theory; then, these sub-series were used as input data to the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
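
    A compact sketch of the decompose-then-regress idea, using PyWavelets' stationary wavelet transform and scikit-learn's SVR as stand-ins for the paper's wavelet-SVM implementation; the synthetic rainfall and runoff series, wavelet choice and hyper-parameters are illustrative assumptions:

      import numpy as np
      import pywt
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      n = 512                                         # length chosen as a multiple of 2**level for swt
      rain = rng.gamma(2.0, 3.0, n)                   # synthetic daily rainfall
      runoff = np.convolve(rain, [0.3, 0.25, 0.2, 0.15], mode="same") + rng.normal(0, 0.5, n)

      level = 2
      # stationary wavelet transform keeps every sub-series at full length
      rain_sub = [c for pair in pywt.swt(rain, "db4", level=level) for c in pair]
      run_sub = [c for pair in pywt.swt(runoff, "db4", level=level) for c in pair]

      # features at day t: wavelet sub-series of rainfall and runoff; target: runoff at t+1
      X = np.column_stack(rain_sub + run_sub)[:-1]
      y = runoff[1:]

      split = int(0.8 * n) - 1
      model = SVR(C=10.0, epsilon=0.1).fit(X[:split], y[:split])
      print("test R^2:", model.score(X[split:], y[split:]))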

  14. Response surface and artificial neural network prediction model and optimization for surface roughness in machining

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Sahoo

    2015-04-01

    Full Text Available The present paper deals with the development of prediction models using response surface methodology and an artificial neural network, and optimizes the process parameters using 3D surface plots. The experiments were conducted using a coated carbide insert in machining AISI 1040 steel under a dry environment. The coefficient of determination for the RSM model is found to be high (R2 = 0.99), close to unity. It indicates the goodness of fit of the model and its high significance. The percentage error for the RSM model lies only between -2.63 and 2.47, while the maximum error between the ANN model and the experiment lies between -1.27 and 0.02 %, which is significantly less than for the RSM model. Hence, both the proposed RSM and ANN prediction models predict the surface roughness sufficiently accurately, although the ANN prediction model seems to be better than the RSM model. From the 3D surface plots, the optimal parametric combination for the lowest surface roughness is d1-f1-v3, i.e. a depth of cut of 0.1 mm, a feed of 0.04 mm/rev and a cutting speed of 260 m/min.
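
    A hedged sketch of fitting a second-order response surface and a small neural network to the same turning data; the synthetic 27-run design and its coefficients are placeholders, not the paper's measurements:

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      # synthetic turning data: depth of cut (mm), feed (mm/rev), cutting speed (m/min)
      X = rng.uniform([0.1, 0.04, 140.0], [0.5, 0.20, 260.0], size=(27, 3))
      Ra = 8.0 * X[:, 1] - 0.004 * X[:, 2] - 0.5 * X[:, 0] + 1.2 + rng.normal(0, 0.03, 27)

      # RSM = second-order polynomial regression; ANN = small multilayer perceptron
      rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, Ra)
      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)).fit(X, Ra)

      print("RSM R^2:", round(rsm.score(X, Ra), 3))
      print("ANN R^2:", round(ann.score(X, Ra), 3))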

  15. Extended Park's transformation for 2×3-phase synchronous machine and converter phasor model with representation of AC harmonics

    DEFF Research Database (Denmark)

    Knudsen, Hans

    1995-01-01

    in the stator. A consistent method is developed to determine model parameters from standard machine data. A phasor model of the line commutated converter is presented. The converter model includes not only the fundamental frequency, but also any chosen number of harmonics without a representation of the single...

  16. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of the local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
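
    A minimal illustration of random-key decoding for unrelated parallel machines; job splitting, the local search and the MIP verification are omitted, and a plain random search stands in for the GAspLA evolution loop:

      import numpy as np

      def decode(keys, num_machines, proc_time):
          """Random-key decoding: integer part of each key = machine, fractional part = order on that machine."""
          machines = keys.astype(int)
          schedule = {m: [] for m in range(num_machines)}
          for job in np.argsort(keys - machines):          # visit jobs in order of their fractional parts
              schedule[machines[job]].append(job)
          makespan = max(sum(proc_time[j, m] for j in jobs) for m, jobs in schedule.items())
          return schedule, makespan

      rng = np.random.default_rng(3)
      proc_time = rng.uniform(2.0, 9.0, size=(8, 3))        # unrelated machines: job-by-machine times
      best = None
      for _ in range(200):                                  # stand-in for the GA's population/evolution loop
          keys = rng.uniform(0, 3, size=8)                  # one random key per job in [0, num_machines)
          sched, ms = decode(keys, 3, proc_time)
          if best is None or ms < best[1]:
              best = (sched, ms)
      print("best makespan found:", round(best[1], 2))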

  17. Surface Roughness Prediction Model in Machining of Carbon Steel by PVD Coated Cutting Tools

    Directory of Open Access Journals (Sweden)

    Yusuf Sahin

    2004-01-01

    Full Text Available The surface roughness model in the turning of AISI 1040 carbon steel was developed in terms of cutting speed, feed rate and depth of cut using response surface methodology. Machining tests were carried out using PVD-coated tools under different cutting conditions. The surface roughness equations of the cutting tools when machining the carbon steel were obtained from the experimental data. The results are presented in terms of mean values and confidence levels. The established equation shows that the feed rate is the main factor influencing the surface roughness: roughness increased with increasing feed rate but decreased with increasing cutting speed and depth of cut. The variance analysis for the second-order model shows that the interaction terms and the square terms were statistically insignificant. However, the first-order effect of feed rate was significant while those of cutting speed and depth of cut were insignificant. The predicted surface roughness of the samples was found to lie close to the experimentally observed values within 95% confidence intervals.

  18. Study on Application of Grey Prediction Model in Superalloy MAR-247 Machining

    Directory of Open Access Journals (Sweden)

    Chen Shao-Hsien

    2015-01-01

    Full Text Available Superalloy MAR-247 is mainly applied in the space industry and the die industry. Owing to its mechanical properties, fatigue resistance and high-temperature corrosion resistance, it is mainly used in machine parts requiring high temperature and corrosion resistance, such as turbine blades, aeroengine rotors and turbine assemblies in nuclear power plants. However, its high strength, low thermal conductivity, resistance to softening and work hardening may reduce cutting-tool life and weaken surface accuracy, so this study provided a minimal set of milling experiments for the superalloy material. As a statistical approach used to analyse experimental data, the GM(1,1) grey prediction model was used to conduct simulations and then predict and analyse the process characteristics based on the experimental data, focusing on tool life and surface accuracy. Moreover, by applying the improved grey prediction model to the currently used superalloy machining parameters, it can decrease the errors, extend the tool life, and improve the prediction precision of surface accuracy.
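
    A short sketch of the classic GM(1,1) construction referred to above (accumulate the series, fit the grey coefficients by least squares, extrapolate); the input series is an invented monotone example, not data from the MAR-247 experiments:

      import numpy as np

      def gm11(x0, horizon=3):
          """Classic GM(1,1): fit a, b on the accumulated series and extrapolate."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey development and control coefficients
          k = np.arange(1, len(x0) + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # predicted accumulated series
          x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO -> fitted + forecast values
          return x0_hat

      tool_wear = [0.12, 0.15, 0.19, 0.24, 0.30]               # illustrative monotone series (e.g., flank wear, mm)
      print(gm11(tool_wear).round(3))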

  19. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that given an encoding of any Turing machine as input the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  20. Design concept of K-DEMO for near-term implementation

    Science.gov (United States)

    Kim, K.; Im, K.; Kim, H. C.; Oh, S.; Park, J. S.; Kwon, S.; Lee, Y. S.; Yeom, J. H.; Lee, C.; Lee, G.-S.; Neilson, G.; Kessel, C.; Brown, T.; Titus, P.; Mikkelsen, D.; Zhai, Y.

    2015-05-01

    A Korean fusion energy development promotion law (FEDPL) was enacted in 2007. As a following step, a conceptual design study for a steady-state Korean fusion demonstration reactor (K-DEMO) was initiated in 2012. After the thorough 0D system analysis, the parameters of the main machine characterized by the major and minor radii of 6.8 and 2.1 m, respectively, were chosen for further study. The analyses of heating and current drives were performed for the development of the plasma operation scenarios. Preliminary results on lower hybrid and neutral beam current drive are included herein. A high performance Nb3Sn-based superconducting conductor is adopted, providing a peak magnetic field approaching 16 T with the magnetic field at the plasma centre above 7 T. Pressurized water is the prominent choice for the main coolant of K-DEMO when the balance of plant development details is considered. The blanket system adopts a ceramic pebble type breeder. Considering plasma performance, a double-null divertor is the reference configuration choice of K-DEMO. For a high availability operation, K-DEMO incorporates a design with vertical maintenance. A design concept for K-DEMO is presented together with the preliminary design parameters.

  1. Induction of labour at or near term for suspected fetal macrosomia.

    Science.gov (United States)

    Boulvain, Michel; Irion, Olivier; Dowswell, Therese; Thornton, Jim G

    2016-05-22

    popular with many women. In settings where obstetricians can be reasonably confident about their scan assessment of fetal weight, the advantages and disadvantages of induction at or near term for fetuses suspected of being macrosomic should be discussed with parents. Although some parents and doctors may feel the evidence already justifies induction, others may justifiably disagree. Further trials of induction shortly before term for suspected fetal macrosomia are needed. Such trials should concentrate on refining the optimum gestation of induction, and improving the accuracy of the diagnosis of macrosomia.

  2. Pelvimetry for fetal cephalic presentations at or near term for deciding on mode of delivery.

    Science.gov (United States)

    Pattinson, Robert C; Cuthbert, Anna; Vannevel, Valerie

    2017-03-30

    Pelvimetry assesses the size of a woman's pelvis aiming to predict whether she will be able to give birth vaginally or not. This can be done by clinical examination, or by conventional X-rays, computerised tomography (CT) scanning, or magnetic resonance imaging (MRI). To assess the effects of pelvimetry (performed antenatally or intrapartum) on the method of birth, on perinatal mortality and morbidity, and on maternal morbidity. This review concentrates exclusively on women whose fetuses have a cephalic presentation. We searched Cochrane Pregnancy and Childbirth Group's Trials Register (31 January 2017) and reference lists of retrieved studies. Randomised controlled trials (including quasi-randomised) assessing the use of pelvimetry versus no pelvimetry or assessing different types of pelvimetry in women with a cephalic presentation at or near term were included. Cluster trials were eligible for inclusion, but none were identified. Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We assessed the quality of the evidence using the GRADE approach. Five trials with a total of 1159 women were included. All used X-ray pelvimetry to assess the pelvis. X-ray pelvimetry versus no pelvimetry or clinical pelvimetry is the only comparison included in this review due to the lack of trials identified that examined other types of radiological pelvimetry or that compared clinical pelvimetry versus no pelvimetry. The included trials were generally at high risk of bias. There is an overall high risk of performance bias due to lack of blinding of women and staff. Two studies were also at high risk of selection bias. We used GRADEpro software to grade evidence for our selected outcomes; for caesarean section we rated the evidence low quality and all the other outcomes (perinatal mortality, wound sepsis, blood transfusion, scar dehiscence and admission to special care baby unit) as very low quality

  3. Limits, modeling and design of high-speed permanent magnet machines

    NARCIS (Netherlands)

    Borisavljevic, A.

    2011-01-01

    There is a growing number of applications that require fast-rotating machines; motivation for this thesis comes from a project in which downsized spindles for micro-machining have been researched (TU Delft Microfactory project). The thesis focuses on analysis and design of high-speed PM machines and

  4. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
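
    A compact sketch of a weighted two-kernel composite used inside a kernel ELM solve; the iris dataset replaces the e-nose data, and the kernel weights are fixed by hand rather than optimized by QPSO:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split

      def rbf(A, B, gamma=0.5):
          d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d)

      def poly(A, B, degree=2):
          return (A @ B.T + 1.0) ** degree

      X, y = load_iris(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      T = np.eye(3)[ytr]                                    # one-hot targets

      w = np.array([0.7, 0.3])                              # kernel weights (optimized by QPSO in the paper)
      C = 100.0
      K = w[0] * rbf(Xtr, Xtr) + w[1] * poly(Xtr, Xtr)      # composite kernel matrix
      beta = np.linalg.solve(np.eye(len(Xtr)) / C + K, T)   # kernel ELM output weights

      K_te = w[0] * rbf(Xte, Xtr) + w[1] * poly(Xte, Xtr)
      pred = (K_te @ beta).argmax(axis=1)
      print("accuracy:", (pred == yte).mean())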

  5. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    Directory of Open Access Journals (Sweden)

    Ladeesh V. G.

    2017-01-01

    Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique that combines the grinding action of an abrasive tool and the thermal effects of electrochemical discharges to remove material from the workpiece for producing complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and attempts to develop a mathematical model for the surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency and tool feed rate are identified as the significant factors for controlling the surface roughness of the channels produced by G-ECDM. A mathematical model was developed for surface roughness by considering the grinding action and thermal effects of electrochemical discharges in material removal. Experiments are conducted to validate the model and the results obtained are in good agreement with those predicted by the model.

  6. Feed drive modelling for the simulation of tool path tracking in multi-axis High Speed Machining

    CERN Document Server

    Prévost, David; Lartigue, Claire; Dumur, Didier

    2011-01-01

    Within the context of High Speed Machining, it is essential to manage the trajectory generation to achieve both high surface quality and high productivity. As feed drives are one part of the set Machine tool - Numerical Controller, it is necessary to improve their performance to optimize feed drive dynamics during trajectory follow-up. Hence, this paper deals with the modelling of the feed drive in the case of multi-axis machining. This model can be used for the simulation of axis dynamics and tool-path tracking to tune parameters and optimize new frameworks of command strategies. A procedure of identification based on modern NC capabilities is presented and applied to industrial HSM centres. The efficiency of this modelling is assessed by experimental verifications on various representative trajectories. After implementing a Generalized Predictive Control, reliable simulations are performed thanks to the model. These simulations can then be used to tune parameters of this new framework according to the tool-pat...

  7. Production Inventory Models for Deteriorating Items with Stochastic Machine Unavailability Time, Lost Sales and Price-Dependent Demand

    Directory of Open Access Journals (Sweden)

    Gede Agus Widyadana

    2010-01-01

    Full Text Available The economic production quantity (EPQ) model is widely employed in practice and is also being intensively developed in the research area. This research tries to develop more realistic EPQ models for deteriorating items by considering stochastic machine unavailability time (uniformly and exponentially distributed) and price-dependent demand. Lost sales will occur when the machine unavailability time is longer than the non-production time. Since a closed-form solution cannot be derived, we use a Genetic Algorithm (GA) to solve the models. A numerical example and sensitivity analysis are shown to illustrate the models. The sensitivity analyses show that management can use a pricing policy to minimize the profit loss due to machine unavailability time under a price-dependent demand situation.

  8. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    Science.gov (United States)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-03-01

    Aiming at the deficient robustness of thermal error compensation models of CNC machine tools, the mechanism for improving the models' robustness is studied, taking the Leaderway-V450 machining center as the object. Through the analysis of actual spindle air-cutting experimental data on the Leaderway-V450 machine, it is found that the temperature-sensitive points used for modeling are volatile, and this volatility directly leads to large changes in the degree of collinearity among the modeling independent variables. Thus, the forecasting accuracy of the multivariate regression model is severely affected, and its forecasting robustness becomes poor as well. To overcome this effect, a modeling method that establishes the thermal error model with a single temperature variable under the interference of temperature-sensitive points' volatility is put forward. According to actual thermal error data measured in different seasons, it is shown that the single-temperature-variable model can reduce the loss of forecasting accuracy resulting from the volatility of the temperature-sensitive points; especially for the prediction of cross-quarter data, the improvement in forecasting accuracy is about 5 μm or more. The goal of improving the robustness of the thermal error models is thus achieved, which can provide a reference for selecting the modeling independent variable in the application of thermal error compensation of CNC machine tools.
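
    A hedged numerical illustration of the collinearity issue described above: two nearly collinear synthetic temperature sensors are used to fit a multivariable regression and a single-variable regression, and both are tested on data from a "later season" in which the coupling between the sensors has shifted; all numbers are invented for illustration:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)

      def season(n, coupling):
          t1 = 20 + 10 * rng.random(n)                                      # temperature-sensitive point 1
          t2 = coupling * t1 + (1 - coupling) * (20 + 10 * rng.random(n))   # nearly collinear point 2
          error = 0.8 * t1 + rng.normal(0, 0.3, n)                          # "true" thermal error driven by t1 (um)
          return np.column_stack([t1, t2]), error

      X_tr, y_tr = season(50, coupling=0.995)        # training season: t2 tracks t1 very closely
      X_te, y_te = season(50, coupling=0.60)         # later season: the collinearity pattern shifts

      multi = LinearRegression().fit(X_tr, y_tr)     # regression on both (collinear) sensors
      single = LinearRegression().fit(X_tr[:, :1], y_tr)   # single-temperature-variable regression

      print("multi-variable RMSE :", np.sqrt(np.mean((multi.predict(X_te) - y_te) ** 2)).round(2))
      print("single-variable RMSE:", np.sqrt(np.mean((single.predict(X_te[:, :1]) - y_te) ** 2)).round(2))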

  9. Fishery landing forecasting using EMD-based least square support vector machine models

    Science.gov (United States)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically for modeling fishery landings, whose highly nonlinear, non-stationary and seasonal time series can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of the fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.
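
    A minimal sketch of the EMD-based hybrid: decompose the series, fit one regressor per sub-series, and sum the one-step forecasts. The PyEMD package (EMD-signal) is assumed to be available, scikit-learn's SVR stands in for the LSSVM, and the monthly series is synthetic:

      import numpy as np
      from PyEMD import EMD                      # pip install EMD-signal (assumed available)
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.arange(240)                         # 20 years of monthly landings (synthetic)
      landings = 100 + 0.2 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)

      imfs = EMD().emd(landings)                 # decompose into IMFs (residual trend as the last row)

      lags, split = 12, 228
      forecast = 0.0
      for comp in imfs:                          # one SVR per sub-series; forecasts are summed
          X = np.column_stack([comp[i:i + split - lags] for i in range(lags)])
          y = comp[lags:split]
          model = SVR(C=10.0).fit(X, y)
          forecast += model.predict(comp[split - lags:split][None, :])[0]

      print("one-step-ahead forecast:", round(forecast, 1), "actual:", round(landings[split], 1))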

  10. Identification of the Hammerstein model of a PEMFC stack based on least squares support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)

    2008-01-03

    This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). PEMFC is a complex nonlinear, multi-input and multi-output (MIMO) system that is hard to model by traditional methodologies. Due to the generalization performance of LS-SVM being independent of the dimensionality of the input data and the particularly simple structure of the Hammerstein model, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack in this paper. The linear model parameters and the static nonlinearity can be obtained simultaneously by solving a set of linear equations followed by the singular value decomposition (SVD). The simulation tests demonstrate that the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, control strategy studies such as predictive control and robust control can be developed. (author)
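
    A toy SISO illustration of Hammerstein-ARX identification by over-parameterized least squares (a polynomial static nonlinearity followed by first-order ARX dynamics); it omits the MIMO structure, the LS-SVM machinery and the SVD factorization step used in the paper:

      import numpy as np

      rng = np.random.default_rng(5)

      # "true" SISO Hammerstein system: static polynomial nonlinearity followed by ARX dynamics
      def hammerstein(u):
          v = 1.0 * u + 0.5 * u**2 - 0.2 * u**3             # static nonlinearity f(u)
          y = np.zeros_like(u)
          for t in range(1, len(u)):
              y[t] = 0.7 * y[t - 1] + 0.4 * v[t - 1] + rng.normal(0, 0.01)
          return y

      u = rng.uniform(-1, 1, 500)                            # excitation signal (stand-in for stack inputs)
      y = hammerstein(u)

      # over-parameterized linear regression: y(t) = a1*y(t-1) + sum_j theta_j * u(t-1)^j
      Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
      coef, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
      print("estimated a1:", coef[0].round(3), " b1*poly coefficients:", coef[1:].round(3))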

  11. Research on Dynamic Models and Performances of Shield Tunnel Boring Machine Cutterhead Driving System

    Directory of Open Access Journals (Sweden)

    Xianhong Li

    2013-01-01

    Full Text Available A general nonlinear time-varying (NLTV) dynamic model and a linear time-varying (LTV) dynamic model are presented for the shield tunnel boring machine (TBM) cutterhead driving system. Different gear backlashes, mesh damping and transmission errors are considered in the NLTV dynamic model. The corresponding multiple-input and multiple-output (MIMO) state space models are also presented. Through analyzing the linear dynamic model, the optimal reducer ratio (ORR) and optimal transmission ratio (OTR) are obtained for the shield TBM cutterhead driving system. The NLTV and LTV dynamic models are numerically simulated, and the effects of the physical parameters of the NLTV dynamic model are analyzed under various conditions. Physical parameters such as the load torque, gear backlash and transmission error, gear mesh stiffness and damping, pinion inertia and damping, large gear inertia and damping, and motor rotor inertia and damping are investigated in detail to analyze their effects on the dynamic response and performance of the shield TBM cutterhead driving system. Some preliminary approaches are proposed to improve the dynamic performance of the cutterhead driving system, and the dynamic models will provide a foundation for the shield TBM cutterhead driving system's fault diagnosis, motion control, and torque synchronous control.

  12. Application of least squares vector machines in modelling water vapor and carbon dioxide fluxes over a cropland

    Institute of Scientific and Technical Information of China (English)

    QIN Zhong; YU Qiang; LI Jun; WU Zhi-yi; HU Bing-min

    2005-01-01

    Least squares support vector machines (LS-SVMs), a nonlinear kernel-based machine, were introduced to investigate the prospects of applying this approach to modelling water vapor and carbon dioxide fluxes above a summer maize field, using a dataset obtained in the North China Plain with the eddy covariance technique. The performance of the LS-SVMs was compared with that of the corresponding models obtained with radial basis function (RBF) neural networks. The results indicated that the trained LS-SVMs with a radial basis function kernel had satisfactory performance in modelling surface fluxes; their excellent approximation and generalization properties shed new light on the study of complex processes in ecosystems.
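
    A small numpy sketch of the LS-SVM regression solve, which reduces training to a single linear system over the kernel matrix; the two-input synthetic data stand in for the eddy covariance measurements, and the RBF bandwidth and regularization value are arbitrary:

      import numpy as np

      def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
          """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
          K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:], A[1:, 0] = 1.0, 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
          return sol[0], sol[1:]                             # bias b, support values alpha

      def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
          K = np.exp(-((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
          return K @ alpha + b

      rng = np.random.default_rng(2)
      X = rng.uniform(-2, 2, size=(80, 2))                   # stand-ins for micrometeorological drivers
      y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 80)
      b, alpha = lssvm_fit(X, y)
      print("training RMSE:", np.sqrt(np.mean((lssvm_predict(X, alpha, b, X) - y) ** 2)).round(3))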

  13. When Machines Design Machines!

    DEFF Research Database (Denmark)

    2011-01-01

    Until recently we were the sole designers, alone in the driving seat making all the decisions. But we have created a world of complexity way beyond human ability to understand, control, and govern. Machines now do more trades than humans on stock markets, they control our power, water, gas and food supplies, manage our elevators, microclimates, automobiles and transport systems, and manufacture almost everything. It should come as no surprise that machines are now designing machines. The chips that power our computers and mobile phones, the robots and commercial processing plants on which we depend, all are now largely designed by machines. So what of us - will we be totally usurped, or are we looking at a new symbiosis with human and artificial intelligences combined to realise the best outcomes possible? In most respects we have no choice! Human abilities alone cannot solve any of the major...

  14. Quick Estimation Model for the Concentration of Indoor Airborne Culturable Bacteria: An Application of Machine Learning

    Directory of Open Access Journals (Sweden)

    Zhijian Liu

    2017-07-01

    Full Text Available Indoor airborne culturable bacteria are sometimes harmful to human health. Therefore, a quick estimation of their concentration is particularly necessary. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to provide quick estimation of the concentration of indoor airborne culturable bacteria only with the inputs of several measurable indoor environmental indicators, including: indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO2 concentration. Our results show that a general regression neural network (GRNN) model can sufficiently provide a quick and decent estimation based on the model training and testing using an experimental database with 249 data groups.
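
    A minimal sketch of a GRNN, which is essentially a Gaussian-kernel-weighted average of the training targets; the five synthetic inputs and the concentration values below are placeholders for the measured indicators and the 249-group database:

      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          """General regression neural network = kernel-weighted average of training targets."""
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2 * sigma**2))
          return (w @ y_train) / w.sum(axis=1)

      rng = np.random.default_rng(11)
      # synthetic stand-ins for the 249 measured groups: PM2.5, PM10, temperature, RH, CO2 (standardized)
      X = rng.normal(size=(249, 5))
      cfu = 200 + 80 * X[:, 0] + 40 * X[:, 3] + rng.normal(0, 10, 249)   # bacteria concentration (CFU/m^3)

      test = rng.normal(size=(5, 5))
      print(grnn_predict(X, cfu, test).round(1))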

  15. Finite State Machine Based Evaluation Model for Web Service Reliability Analysis

    CERN Document Server

    M, Thirumaran; Abarna, S; P, Lakshmi

    2011-01-01

    Nowadays there is strong interest in making changes within shorter times, since reaction-time requirements are decreasing every moment. The Business Logic Evaluation Model (BLEM) is the proposed solution targeting business logic automation and facilitating business experts to write sophisticated business rules and complex calculations without costly custom programming. BLEM is powerful enough to handle service manageability issues by analyzing and evaluating the computability, traceability and other criteria of modified business logic at run time. Web services and their QoS grow extensively based on the reliability of the service. Hence today's service providers regard reliability as the major factor, and any problem in the reliability of the service should be overcome then and there in order to achieve the expected level of reliability. In our paper we propose a business logic evaluation model for web service reliability analysis using a Finite State Machine (FSM), where the FSM will be extended to analy...

  16. A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Min Kim, Dong; Madueño, Germán Corrales;

    2015-01-01

    A canonical scenario in Machine-Type Communications (MTC) is the one featuring a large number of devices, each of them with sporadic traffic. Hence, the number of served devices in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access reservation protocol. Specifically, the limited number of contention preambles and the limited amount of uplink grants per random access response are crucial to consider when dimensioning LTE networks for MTC. We propose a low-complexity model that encompasses these two limitations and allows us to evaluate ... on the preamble collisions. A comparison with the simulated LTE access reservation procedure that follows the 3GPP specifications confirms that our model provides an accurate estimation of the system outage event and the number of supported MTC devices....
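
    A hedged toy simulation of the contention step: devices pick preambles uniformly at random, a preamble succeeds only if chosen by exactly one device, and the number of successes per response is capped by the available uplink grants. The preamble and grant counts are illustrative placeholders and the single-shot simulation is a simplification, not the paper's analytical model:

      import numpy as np

      def contention_success(n_active, n_preambles=54, max_grants=3, trials=10000):
          """Fraction of active devices served in one random access opportunity (Monte Carlo)."""
          rng = np.random.default_rng(0)
          served = []
          for _ in range(trials):
              picks = rng.integers(0, n_preambles, n_active)
              counts = np.bincount(picks, minlength=n_preambles)
              served.append(min((counts == 1).sum(), max_grants))   # singleton preambles, capped by grants
          return np.mean(served) / n_active

      for n in (5, 20, 60):
          print(n, "active devices ->", round(contention_success(n), 3), "served fraction")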

  17. Simulation modeling and tracing optimal trajectory of robotic mining machine effector

    Science.gov (United States)

    Fryanov, VN; Pavlova, LD

    2017-02-01

    Within the framework of robotic coal mine design for deep-level coal beds with high gas content in the seismically active areas of southern Kuzbass, the motion path parameters for the effector of a robotic mining machine are evaluated. The simulation model is meant for selecting the minimum-energy optimal trajectory of the robot effector, calculating stresses and strains in a coal bed in a variable-perimeter shortwall in the course of coal extraction, determining the coordinates of the coal bed edge area with the maximum disintegration of coal, and choosing the direction in which the robot effector should contact the mentioned area so as to break coal at the minimum energy input. It is suggested to use the model in the engineering of the robot intelligence.

  18. Quick Estimation Model for the Concentration of Indoor Airborne Culturable Bacteria: An Application of Machine Learning.

    Science.gov (United States)

    Liu, Zhijian; Li, Hao; Cao, Guoqing

    2017-07-30

    Indoor airborne culturable bacteria are sometimes harmful to human health. Therefore, a quick estimation of their concentration is particularly necessary. However, measuring the indoor microorganism concentration (e.g., bacteria) usually requires a large amount of time, economic cost, and manpower. In this paper, we aim to provide a quick solution: using knowledge-based machine learning to provide quick estimation of the concentration of indoor airborne culturable bacteria only with the inputs of several measurable indoor environmental indicators, including: indoor particulate matter (PM2.5 and PM10), temperature, relative humidity, and CO₂ concentration. Our results show that a general regression neural network (GRNN) model can sufficiently provide a quick and decent estimation based on the model training and testing using an experimental database with 249 data groups.

  19. Fast Fourier Transform-based Support Vector Machine for Subcellular Localization Prediction Using Different Substitution Models

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    There are approximately 10^9 proteins in a cell. A hotspot in bioinformatics is how to identify a protein's subcellular localization, if its sequence is known. In this paper, a method using a fast Fourier transform-based support vector machine is developed to predict the subcellular localization of proteins from their physicochemical properties and structural parameters. The prediction accuracies reached 83% in prokaryotic organisms and 84% in eukaryotic organisms with the substitution model of the c-p-v matrix (c, composition; p, polarity; and v, molecular volume). The overall prediction accuracy was also evaluated using the "leave-one-out" jackknife procedure. The influence of the substitution model on prediction accuracy has also been discussed in the work. The source code of the new program is available on request from the authors.
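
    A rough sketch of the sequence-to-spectrum-to-SVM pipeline: map each residue to a numeric property value, take the magnitudes of the low-frequency Fourier coefficients as a fixed-length feature vector, and train an SVM; the property scale, random sequences and labels below are all invented placeholders rather than the c-p-v substitution model:

      import numpy as np
      from sklearn.svm import SVC

      # toy hydrophobicity-like scale (placeholder values, not a published index)
      SCALE = {aa: v for aa, v in zip("ACDEFGHIKLMNPQRSTVWY", np.linspace(-1, 1, 20))}

      def fft_features(seq, n_coeff=16):
          """Map a protein sequence to a fixed-length vector: magnitudes of its low-frequency Fourier terms."""
          signal = np.array([SCALE[a] for a in seq])
          spectrum = np.abs(np.fft.rfft(signal, n=256))      # zero-pad so every sequence gives the same length
          return spectrum[:n_coeff]

      rng = np.random.default_rng(4)
      seqs = ["".join(rng.choice(list(SCALE), size=rng.integers(60, 200))) for _ in range(40)]
      labels = rng.integers(0, 2, 40)                        # stand-in subcellular locations
      X = np.vstack([fft_features(s) for s in seqs])
      clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))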

  20. Dynamic model of heat and mass transfer in rectangular adsorber of a solar adsorption machine

    Science.gov (United States)

    Chekirou, W.; Boukheit, N.; Karaali, A.

    2016-10-01

    This paper presents a study of the rectangular adsorber of a solar adsorption cooling machine. The modeling and analysis of the adsorber are the key point of such studies because of the complex coupled heat and mass transfer phenomena that occur during the working cycle. The adsorber is heated by solar energy and contains a porous medium constituted of activated carbon AC-35 reacting by adsorption with methanol. To study the effect of the solar collector type on the system's performance, the model takes into account the variation of ambient temperature and solar intensity over a simulated day, corresponding to a total daily insolation of 26.12 MJ/m2 with an average ambient temperature of 27.7 °C, which is useful for characterizing the daily thermal behavior of the rectangular adsorber.