WorldWideScience

Sample records for machine learning-based methods

  1. Improved method for SNR prediction in machine-learning-based test

    NARCIS (Netherlands)

    Sheng, Xiaoqin; Kerkhoff, Hans G.

    2010-01-01

    This paper applies an improved method for testing the signal-to-noise ratio (SNR) of Analogue-to-Digital Converters (ADC). In previous work, a noisy and nonlinear pulse signal is exploited as the input stimulus to obtain the signature results of ADC. By applying a machine-learning-based approach, th

  3. METAPHOR: A machine learning based method for the probability density estimation of photometric redshifts

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Vellucci, Civita; Tortora, Crescenzo; Longo, Giuseppe

    2016-01-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z's). A plethora of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques do not depend explicitly on physical priors and are able to produce accurate photo-z estimates within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z Probability Density Function (PDF), because the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use...

  4. Drug name recognition in biomedical texts: a machine-learning-based method.

    Science.gov (United States)

    He, Linna; Yang, Zhihao; Lin, Hongfei; Li, Yanpeng

    2014-05-01

    Currently, there is an urgent need to develop technology for extracting drug information automatically from biomedical texts, and drug name recognition is an essential prerequisite for extracting drug information. This article presents a machine-learning-based approach to recognizing drug names in biomedical texts. In this approach, a drug name dictionary is first constructed from the external resources DrugBank and PubMed. Then a semi-supervised learning method, feature coupling generalization, is used to filter this dictionary. Finally, dictionary look-up and the conditional random field method are combined to recognize drug names. Experimental results show that our approach achieves an F-score of 92.54% on the test set of DDIExtraction2011.
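As a minimal sketch of the dictionary look-up stage described above, the snippet below tags tokens by longest-match look-up against a tiny, hypothetical drug dictionary. The DrugBank/PubMed dictionary construction, the feature-coupling-generalization filtering, and the conditional random field stage are all omitted; the entries and sentence are invented.

```python
# Dictionary look-up stage of drug name recognition (sketch only).
# drug_dict entries are hypothetical; the paper builds its dictionary
# from DrugBank/PubMed and combines look-up with a CRF, omitted here.

def dictionary_lookup(tokens, drug_dict):
    """Tag tokens with B/I (begin/inside a drug name) or O, using
    longest-match-first look-up against a set of drug-name tuples."""
    tags = ["O"] * len(tokens)
    i = 0
    max_len = max((len(name) for name in drug_dict), default=1)
    while i < len(tokens):
        # Try the longest span first so multi-word names win.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = tuple(t.lower() for t in tokens[i:i + span])
            if candidate in drug_dict:
                tags[i] = "B"
                for j in range(i + 1, i + span):
                    tags[j] = "I"
                i += span
                break
        else:
            i += 1
    return tags

drug_dict = {("aspirin",), ("valproic", "acid")}  # hypothetical entries
tokens = "Patients received valproic acid but not aspirin .".split()
print(dictionary_lookup(tokens, drug_dict))
```

The longest-match-first loop is what lets the multi-word entry "valproic acid" be tagged as one B/I span rather than two separate hits.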

  5. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    Science.gov (United States)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A plethora of methods have been developed, based either on template-model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques do not depend explicitly on physical priors and are able to produce accurate photo-z estimates within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), because the analytical relation mapping the photometric parameters on to the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  6. Recognizing Disjoint Clinical Concepts in Clinical Text Using Machine Learning-based Methods

    Science.gov (United States)

    Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Wu, Yonghui; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Xu, Hua

    2015-01-01

    Clinical concept recognition (CCR) is a fundamental task in the clinical natural language processing (NLP) field. Almost all current machine learning-based CCR systems can only recognize clinical concepts made up of consecutive words (called consecutive clinical concepts, CCCs), but cannot handle clinical concepts made up of disjoint words (called disjoint clinical concepts, DCCs), which widely exist in clinical text. In this paper, we proposed two novel types of representations for disjoint clinical concepts and applied two state-of-the-art machine learning methods to recognizing consecutive and disjoint concepts. Experiments conducted on the 2013 ShARe/CLEF challenge corpus showed that our best system achieved a “strict” F-measure of 0.803 for CCCs, a “strict” F-measure of 0.477 for DCCs, and a “strict” F-measure of 0.783 for all clinical concepts, significantly higher than the baseline systems by 4.2% and 4.1%, respectively. PMID:26958258

  7. Recognizing Disjoint Clinical Concepts in Clinical Text Using Machine Learning-based Methods

    OpenAIRE

    Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Wu, Yonghui; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Xu, Hua

    2015-01-01

    Clinical concept recognition (CCR) is a fundamental task in the clinical natural language processing (NLP) field. Almost all current machine learning-based CCR systems can only recognize clinical concepts made up of consecutive words (called consecutive clinical concepts, CCCs), but cannot handle clinical concepts made up of disjoint words (called disjoint clinical concepts, DCCs), which widely exist in clinical text. In this paper, we proposed two novel types of representations for disjoint clinical conc...

  8. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine (SVM). So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods seek the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the trade-off between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge database with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, an SVM with the kernel recommended by our method achieved the highest classification performance.
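To make the choice concrete, here is a stdlib-only sketch of three of the kernel families the study compares (linear, polynomial, RBF). The recommendation pipeline itself (meta-features plus a multi-label classifier) is not reproduced, and the parameter values below are arbitrary.

```python
import math

# Three classic SVM kernel functions evaluated on toy vectors.
# Parameter choices (degree, coef0, gamma) are arbitrary defaults.

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial_kernel(x, y, degree=2, coef0=1.0):
    return (linear_kernel(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 0.0], [0.0, 1.0]
print(linear_kernel(x, y))      # 0.0 (orthogonal vectors)
print(polynomial_kernel(x, y))  # (0 + 1)^2 = 1.0
print(rbf_kernel(x, y))         # exp(-0.5 * 2) = exp(-1)
```

Each kernel induces a different similarity geometry on the same pair of points, which is precisely why no single kernel dominates across the 132 benchmark data sets.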

  9. A Novel Machine Learning Based Method of Combined Dynamic Environment Prediction

    Directory of Open Access Journals (Sweden)

    Wentao Mao

    2013-01-01

    In practical engineering, structures are often excited by different kinds of loads at the same time. Effectively analyzing and simulating this kind of structural dynamic environment, termed the combined dynamic environment, is a key issue. In this paper, a novel prediction method for the combined dynamic environment is proposed from the perspective of data analysis. First, the existence of dynamic similarity between vibration responses of the same structure under different boundary conditions is theoretically proven. It is further proven that this similarity can be captured by a multiple-input multiple-output regression model. Second, two machine learning algorithms, multiple-dimensional support vector machine and extreme learning machine, are introduced to establish this model. To test the effectiveness of the method, shock and stochastic white-noise excitations are applied to a cylindrical shell with two clamps to simulate different dynamic environments. The prediction errors at the various measuring points are all less than ±3 dB, which shows that the proposed method can predict, with good precision and numerical stability, the structural vibration response under one boundary condition from the response under another.

  10. A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.

    Science.gov (United States)

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-05-18

    Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which consists of assigning a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method achieves significantly improved performance compared to four baseline systems. The proposed method achieves a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the obtained results show that using handcrafted lexico-syntactic patterns as the feature provider for a support vector machine (SVM) leads to the highest accuracy, 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary. Furthermore, the results demonstrated that our method produced the best classification performance compared to four baseline systems.
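A toy illustration of lexico-syntactic patterns for the four question categories: the patterns below are invented for illustration (they are not the paper's features), and the trained SVM is replaced by simple first-match rules.

```python
import re

# Hypothetical hand-crafted lexico-syntactic patterns, checked in
# order; the first match wins, and "summary" is the fall-back class.
PATTERNS = [
    (r"^(is|are|does|do|can|could|has|have)\b", "yes/no"),
    (r"^list\b|\bname (all|the)\b", "list"),
    (r"^(what|which|who|where|when)\b", "factoid"),
]

def classify_question(question):
    q = question.lower().strip()
    for pattern, label in PATTERNS:
        if re.search(pattern, q):
            return label
    return "summary"  # fall-back category

print(classify_question("Is aspirin an NSAID?"))
print(classify_question("Which gene causes cystic fibrosis?"))
print(classify_question("List drugs that inhibit COX-2"))
print(classify_question("Describe the role of p53"))
```

In the paper such pattern matches become binary features for an SVM rather than hard rules, which is what lifts accuracy to the reported 89.40%.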

  11. Machine Learning Based Diagnosis of Lithium Batteries

    Science.gov (United States)

    Ibe-Ekeocha, Chinemerem Christopher

    The depletion of the world's current petroleum reserve, coupled with the negative effects of carbon monoxide and other harmful petrochemical by-products on the environment, is the driving force behind the movement towards renewable and sustainable energy sources. Furthermore, the growing transportation sector consumes a significant portion of the total energy used in the United States. A complete electrification of this sector would require significant development in electric vehicles (EVs) and hybrid electric vehicles (HEVs), thus translating to a reduction in the carbon footprint. As the market for EVs and HEVs grows, their battery management systems (BMS) need to be improved accordingly. The BMS is responsible not only for optimally charging and discharging the battery, but also for monitoring the battery's state of charge (SOC) and state of health (SOH). SOC, similar to an energy gauge, is a representation of a battery's remaining charge level as a percentage of its total possible charge at full capacity. Similarly, SOH is a measure of the deterioration of a battery; thus it is a representation of the battery's age. Neither SOC nor SOH is directly measurable, so it is important that these quantities be estimated accurately. An inaccurate estimation could be not only inconvenient for EV consumers, but also potentially detrimental to the battery's performance and life. Such estimations could be implemented either online, while the battery is in use, or offline, while the battery is at rest. This thesis presents intelligent online SOC and SOH estimation methods using machine learning tools such as artificial neural networks (ANNs). ANNs are a powerful generalization tool if programmed and trained effectively. Unlike other estimation strategies, the techniques used require no battery modeling or knowledge of battery internal parameters; rather, they use the battery's voltage, charge/discharge current, and ambient temperature measurements to accurately estimate the battery's SOC and SOH. The developed...

  12. A Machine Learning Based Framework for Adaptive Mobile Learning

    Science.gov (United States)

    Al-Hmouz, Ahmed; Shen, Jun; Yan, Jun

    Advances in wireless technology and handheld devices have created significant interest in mobile learning (m-learning) in recent years. Students nowadays are able to learn anywhere and at any time. Mobile learning environments must also cater for different user preferences and various devices with limited capability, where not all of the information is relevant and critical to each learning environment. To address this issue, this paper presents a framework that depicts the process of adapting learning content to satisfy individual learner characteristics by taking into consideration his/her learning style. We use a machine learning based algorithm for acquiring, representing, storing, reasoning about and updating each learner's acquired profile.

  13. A machine learning-based automatic currency trading system

    OpenAIRE

    Brvar, Anže

    2012-01-01

    The main goal of this thesis was to develop an automated trading system for Forex trading, which would use machine learning methods and their prediction models to decide on trading actions. A training data set was obtained from exchange rates and values of technical indicators, which describe conditions on the currency market. We evaluated selected machine learning algorithms and their parameters using validation with sampling. We have prepared a set of automated trading systems with various...

  14. Initial experimental results of a machine learning-based temperature control system for an RF gun

    CERN Document Server

    Edelen, A L; Milton, S V; Chase, B E; Crawford, D J; Eddy, N; Edstrom, D; Harms, E R; Ruan, J; Santucci, J K; Stabile, P

    2015-01-01

    Colorado State University (CSU) and Fermi National Accelerator Laboratory (Fermilab) have been developing a control system to regulate the resonant frequency of an RF electron gun. As part of this effort, we present initial test results for a benchmark temperature controller that combines a machine learning-based model and a predictive control algorithm. This is part of an on-going effort to develop adaptive, machine learning-based tools specifically to address control challenges found in particle accelerator systems.
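The combination of a learned model with predictive control can be sketched as a one-step lookahead: predict the response of each candidate actuator setting with the learned model, and pick the one closest to the setpoint. The linear plant model and all numbers below are invented; they stand in for the RF-gun temperature model, not reproduce it.

```python
# One-step predictive control with a (hypothetical) learned linear
# plant model T[t+1] = a*T[t] + b*P[t]. Coefficients, setpoint and
# candidate heater powers are made up for illustration.

def predict_temp(temp, power, a=0.9, b=0.05):
    """Hypothetical learned one-step model of the water temperature."""
    return a * temp + b * power

def choose_power(temp, setpoint, candidates):
    """Pick the heater setting whose predicted next temperature is
    closest to the setpoint (a one-step model-predictive controller)."""
    return min(candidates, key=lambda p: abs(predict_temp(temp, p) - setpoint))

temp, setpoint = 40.0, 45.0
power = choose_power(temp, setpoint, candidates=range(0, 301, 10))
print(power, predict_temp(temp, power))  # chosen setting and its prediction
```

A real controller would replace `predict_temp` with the trained machine-learning model and look several steps ahead, but the select-by-predicted-error structure is the same.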

  15. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    Science.gov (United States)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
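One common way to obtain such a PDF empirically (the general idea behind perturbation-style approaches of this kind) is to add noise to the input photometry many times, re-run the predictor, and histogram the outputs. In this sketch the `predict` function is a hypothetical linear stand-in for MLPQNA, and the noise level, bin width, and magnitudes are arbitrary choices.

```python
import random

# Perturbation sketch: noisy copies of the photometry are pushed
# through a stand-in photo-z predictor and binned into a normalized
# histogram, which serves as the photo-z PDF.

def predict(magnitudes):
    # hypothetical trained regressor; any photo-z model could plug in here
    return 0.05 * sum(magnitudes) / len(magnitudes) - 0.8

def photo_z_pdf(magnitudes, n_perturb=2000, sigma=0.05, bin_width=0.01, seed=0):
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_perturb):
        noisy = [m + rng.gauss(0.0, sigma) for m in magnitudes]
        b = round(predict(noisy) / bin_width) * bin_width
        counts[b] = counts.get(b, 0) + 1
    total = sum(counts.values())
    return {b: c / total for b, c in sorted(counts.items())}

pdf = photo_z_pdf([20.1, 19.8, 19.5, 19.2, 18.9])
peak = max(pdf, key=pdf.get)
print(round(peak, 2), round(sum(pdf.values()), 3))
```

The histogram integrates to one by construction; the spread of the bins reflects how sensitive the predictor is to photometric noise.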

  16. Machine learning based Intelligent cognitive network using fog computing

    Science.gov (United States)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send the periodic signal summary which is much smaller than the original signal to the cloud so that the overall system spectrum source allocation strategies are dynamically updated. Applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, it further strengthens the system security by reducing the communication burden of the communications network.

  17. A Machine Learning Based Analytical Framework for Semantic Annotation Requirements

    CERN Document Server

    Hassanzadeh, Hamed (DOI: 10.5121/ijwest.2011.2203)

    2011-01-01

    The Semantic Web is an extension of the current web in which information is given well-defined meaning. The perspective of the Semantic Web is to promote the quality and intelligence of the current web by changing its contents into machine-understandable form. Therefore, semantic-level information is one of the cornerstones of the Semantic Web. The process of adding semantic metadata to web resources is called Semantic Annotation. There are many obstacles to Semantic Annotation, such as multilinguality, scalability, and issues related to diversity and inconsistency in the content of different web pages. Due to the wide range of domains and the dynamic environments that Semantic Annotation systems must operate in, automating the annotation process is one of the significant challenges in this domain. To overcome this problem, different machine learning approaches such as supervised learning, unsupervised learning and more recent ones like semi-supervised learning and active learn...

  18. Machine Learning Based Statistical Prediction Model for Improving Performance of Live Virtual Machine Migration

    Directory of Open Access Journals (Sweden)

    Minal Patel

    2016-01-01

    Service can be delivered anywhere and anytime in cloud computing using virtualization. The main issue in handling virtualized resources is balancing ongoing workloads. The migration of virtual machines relies on two major techniques: (i) reducing dirty pages using CPU scheduling and (ii) compressing memory pages. The available techniques for live migration are not able to predict dirty pages in advance. In the proposed framework, time-series-based prediction techniques are developed using historical analysis of past data. The time series is generated as memory pages are transferred iteratively. Here, two different regression-based time-series models are proposed. The first is a statistical probability based regression model built on ARIMA (autoregressive integrated moving average). The second is a statistical learning based regression model using SVR (support vector regression). These models are tested on a real Xen data set to compute downtime, total number of pages transferred, and total migration time. The ARIMA model is able to predict dirty pages with 91.74% accuracy, and the SVR model predicts dirty pages with 94.61% accuracy, which is higher than that of ARIMA.
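A minimal stand-in for the prediction step: fit a first-order autoregressive model to a short, made-up series of per-iteration dirty-page counts and extrapolate one step. The paper's actual models are ARIMA and SVR; a hand-rolled AR(1) least-squares fit is used here only to illustrate the idea.

```python
# AR(1) least-squares fit as a toy substitute for ARIMA/SVR dirty-page
# prediction. The per-iteration page counts below are invented.

def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

dirty_pages = [4000, 2100, 1150, 680, 420]  # hypothetical per-iteration counts
a, b = fit_ar1(dirty_pages)
next_estimate = a * dirty_pages[-1] + b
print(round(next_estimate))  # predicted dirty pages for the next iteration
```

Predicting the next iteration's dirty-page count ahead of time is what lets a migration scheduler decide when to stop-and-copy, which is the downtime trade-off the abstract measures.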

  19. Machine learning based global particle identification algorithms at the LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  20. Machine learning-based differential network analysis: a study of stress-responsive transcriptomes in Arabidopsis.

    Science.gov (United States)

    Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A; Wang, Xiangfeng

    2014-02-01

    Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning-based differential network analysis (mlDNA), and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. mlDNA first used an ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive "noninformative" genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained "informative" genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing-based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress-related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes.
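The network-comparison step can be illustrated with a toy differential coexpression analysis: build one network per condition by thresholding pairwise Pearson correlations, then flag genes whose connectivity changes between conditions. The expression values and the 0.9 threshold below are made up, and degree is used as the only network statistic, whereas mlDNA uses 33 topological characteristics.

```python
# Toy differential coexpression analysis: threshold pairwise Pearson
# correlations per condition and flag genes whose degree changes.
# Data and threshold are invented for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def degrees(expr, threshold=0.9):
    """Degree of each gene in the thresholded coexpression network."""
    genes = sorted(expr)
    deg = {g: 0 for g in genes}
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expr[g], expr[h])) >= threshold:
                deg[g] += 1
                deg[h] += 1
    return deg

control = {"g1": [1, 2, 3, 4], "g2": [2, 4, 6, 8], "g3": [5, 1, 4, 2]}
stress  = {"g1": [1, 5, 2, 4], "g2": [2, 4, 6, 8], "g3": [1, 2, 3, 4]}
dc, ds = degrees(control), degrees(stress)
diff = sorted(g for g in control if dc[g] != ds[g])
print(diff)  # genes whose coexpression degree changed under stress
```

Genes can change their network neighborhood without changing mean expression, which is why this network-centric view can catch candidates that differential expression testing misses.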

  1. Towards a Standard-based Domain-specific Platform to Solve Machine Learning-based Problems

    Directory of Open Access Journals (Sweden)

    Vicente García-Díaz

    2015-12-01

    Machine learning is one of the most important subfields of computer science and can be used to solve a variety of interesting artificial intelligence problems. There are different languages, frameworks and tools for defining the data needed to solve machine learning-based problems. However, the great number of very diverse alternatives hinders the intercommunication, portability and re-usability of the definitions, designs or algorithms that any developer may create. In this paper, we take the first step towards a language and a development environment that are independent of the underlying technologies, allowing developers to design solutions for machine learning-based problems in a simple and fast way and automatically generating code for other technologies. This can be considered a transparent bridge among current technologies. We rely on a Model-Driven Engineering approach, focusing on the creation of models to abstract the definition of artifacts from the underlying technologies.

  2. Machine learning based interatomic potential for amorphous carbon

    Science.gov (United States)

    Deringer, Volker L.; Csányi, Gábor

    2017-03-01

    We introduce a Gaussian approximation potential (GAP) for atomistic simulations of liquid and amorphous elemental carbon. Based on a machine learning representation of the density-functional theory (DFT) potential-energy surface, such interatomic potentials enable materials simulations with close-to DFT accuracy but at much lower computational cost. We first determine the maximum accuracy that any finite-range potential can achieve in carbon structures; then, using a hierarchical set of two-, three-, and many-body structural descriptors, we construct a GAP model that can indeed reach the target accuracy. The potential yields accurate energetic and structural properties over a wide range of densities; it also correctly captures the structure of the liquid phases, at variance with a state-of-the-art empirical potential. Exemplary applications of the GAP model to surfaces of "diamondlike" tetrahedral amorphous carbon (ta -C) are presented, including an estimate of the amorphous material's surface energy and simulations of high-temperature surface reconstructions ("graphitization"). The presented interatomic potential appears to be promising for realistic and accurate simulations of nanoscale amorphous carbon structures.

  3. Differential spatial activity patterns of acupuncture by a machine learning based analysis

    Science.gov (United States)

    You, Youbo; Bai, Lijun; Xue, Ting; Zhong, Chongguang; Liu, Zhenyu; Tian, Jie

    2011-03-01

    Acupoint specificity, lying at the core of Traditional Chinese Medicine, underlies the theoretical basis of acupuncture application. However, recent studies have reported that acupuncture stimulation at a nonacupoint and at an acupoint can evoke similar signal intensity decreases in multiple, spatially overlapping regions. We used a machine learning based Support Vector Machine (SVM) approach to elucidate the specific neural response pattern induced by acupuncture stimulation. Group analysis demonstrated that stimulation at two different acupoints (belonging to the same nerve segment but different meridians) could elicit distinct neural response patterns. Our findings may provide evidence for acupoint specificity.

  4. Detecting Abnormal Word Utterances in Children With Autism Spectrum Disorders: Machine-Learning-Based Voice Analysis Versus Speech Therapists.

    Science.gov (United States)

    Nakai, Yasushi; Takiguchi, Tetsuya; Matsui, Gakuyo; Yamaoka, Noriko; Takada, Satoshi

    2017-10-01

    Abnormal prosody is often evident in the voice intonations of individuals with autism spectrum disorders. We compared a machine-learning-based voice analysis with human hearing judgments made by 10 speech therapists for classifying children with autism spectrum disorders (n = 30) and typical development (n = 51). Using stimuli limited to single-word utterances, machine-learning-based voice analysis was superior to speech therapist judgments. There was a significantly higher true-positive than false-negative rate for machine-learning-based voice analysis but not for speech therapists. Results are discussed in terms of some artificiality of clinician judgments based on single-word utterances, and the objectivity that machine-learning-based voice analysis adds to judging abnormal prosody.

  5. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Science.gov (United States)

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and in the amount of brain atrophy, which reduced spatial overlap only slightly. Automatic segmentation combined with corrective learning thus provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
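For reference, the Dice coefficient reported above measures spatial overlap between two voxel sets. A minimal sketch, with made-up 2-D masks standing in for segmentation volumes:

```python
# Dice coefficient between two segmentation masks, represented here
# as sets of voxel coordinates. The toy masks are invented.

def dice(mask_a, mask_b):
    """Dice = 2 * |A intersect B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

manual    = {(0, 0), (0, 1), (1, 0), (1, 1)}   # gold-standard voxels
automatic = {(0, 1), (1, 0), (1, 1), (2, 1)}   # auto-segmented voxels
print(dice(manual, automatic))  # 2*3 / (4+4) = 0.75
```

A Dice value of 1.0 means identical masks, so the jump from 0.821 to 0.954 for the brainstem corresponds to a large reduction in mismatched voxels.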

  6. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speech Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior, which aids ASR modeling and processing. The current learning based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. 
The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  7. Creating the New from the Old: Combinatorial Libraries Generation with Machine-Learning-Based Compound Structure Optimization.

    Science.gov (United States)

    Podlewska, Sabina; Czarnecki, Wojciech M; Kafel, Rafał; Bojarski, Andrzej J

    2017-02-15

    The growing computational abilities of various tools that are applied in the broadly understood field of computer-aided drug design have led to the extreme popularity of virtual screening in the search for new biologically active compounds. Most often, the source of such molecules consists of commercially available compound databases, but they can also be searched for within the libraries of structures generated in silico from existing ligands. Various computational combinatorial approaches are based solely on the chemical structure of compounds, using different types of substitutions for the formation of new molecules. In this study, the starting point for combinatorial library generation was the fingerprint referring to the optimal substructural composition in terms of the activity toward a considered target, which was obtained using a machine learning-based optimization procedure. The systematic enumeration of all possible connections between preferred substructures resulted in the formation of target-focused libraries of new potential ligands. The compounds were initially assessed by machine learning methods using a hashed fingerprint to represent molecules; the distribution of their physicochemical properties was also investigated, as well as their synthetic accessibility. The examination of various fingerprints and machine learning algorithms indicated that the Klekota-Roth fingerprint and support vector machine were an optimal combination for such experiments. This study was performed for 8 protein targets, and the obtained compound sets and their characterization are publicly available at http://skandal.if-pan.krakow.pl/comb_lib/.

  8. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Directory of Open Access Journals (Sweden)

    Jun Yi Wang

    Full Text Available Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  9. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    Science.gov (United States)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated at 20% to 25%). There are two main causes of BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) even for offline calibration data. In previous work by the same authors, the concept of machine learning based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adaptive learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

  10. Beware of machine learning-based scoring functions-on the danger of developing black boxes.

    Science.gov (United States)

    Gabel, Joffrey; Desaphy, Jérémy; Rognan, Didier

    2014-10-27

    Training machine learning algorithms with protein-ligand descriptors has recently gained considerable attention to predict binding constants from atomic coordinates. Starting from a series of recent reports stating the advantages of this approach over empirical scoring functions, we could indeed reproduce the claimed superiority of Random Forest and Support Vector Machine-based scoring functions to predict experimental binding constants from protein-ligand X-ray structures of the PDBBind dataset. Strikingly, these scoring functions, trained on simple protein-ligand element-element distance counts, were almost unable to enrich virtual screening hit lists in true actives upon docking experiments of 10 reference DUD-E datasets; this is a feature that, however, has been verified for an a priori less-accurate empirical scoring function (Surflex-Dock). By systematically varying ligand poses from true X-ray coordinates, we show that the Surflex-Dock scoring function is logically sensitive to the quality of docking poses. Conversely, our machine-learning based scoring functions are totally insensitive to docking poses (up to 10 Å root-mean square deviations) and just describe atomic element counts. This report does not disqualify using machine learning algorithms to design scoring functions. Protein-ligand element-element distance counts should, however, be used with extreme caution and only applied in a meaningful way. To avoid developing novel but meaningless scoring functions, we propose that two additional benchmarking tests be systematically done when developing novel scoring functions: (i) sensitivity to docking pose accuracy, and (ii) ability to enrich hit lists in true actives upon structure-based (docking, receptor-ligand pharmacophore) virtual screening of reference datasets.
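The pose-insensitivity argument is easy to reproduce in miniature: element-element distance counts computed under a generous cutoff change little when the ligand is displaced. A hedged sketch with entirely synthetic coordinates and element codes (none of this is the paper's data or descriptor set):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical complex: 50 "protein" atoms, 10 "ligand" atoms, 4 element types
prot_xyz = rng.uniform(0.0, 20.0, (50, 3))
prot_el = rng.integers(0, 4, 50)
lig_xyz = rng.uniform(8.0, 12.0, (10, 3))
lig_el = rng.integers(0, 4, 10)

def element_pair_counts(prot_xyz, prot_el, lig_xyz, lig_el,
                        cutoff=12.0, n_el=4):
    """Count protein-ligand atom pairs within `cutoff`, per element pair."""
    d = np.linalg.norm(prot_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1)
    feat = np.zeros((n_el, n_el))
    for i, j in zip(*np.nonzero(d < cutoff)):
        feat[prot_el[i], lig_el[j]] += 1
    return feat.ravel()

f0 = element_pair_counts(prot_xyz, prot_el, lig_xyz, lig_el)
# displace the ligand by ~1 Å and measure how much the descriptor moves
f1 = element_pair_counts(prot_xyz, prot_el,
                         lig_xyz + np.array([1.2, 0.0, 0.0]), lig_el)
drift = np.abs(f1 - f0).sum() / f0.sum()
```

Because the descriptor only counts pairs under a wide cutoff, the relative drift stays small for displacements far larger than a chemically meaningful pose error, which is exactly the failure mode the abstract warns about.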

  11. Support Vector Machine Learning-based fMRI Data Group Analysis*

    OpenAIRE

    Wang, Ze; Childress, Anna R.; Wang, Jiongjiong; Detre, John A.

    2007-01-01

    To explore the multivariate nature of fMRI data and to consider the inter-subject brain response discrepancies, a multivariate and brain response model-free method is fundamentally required. Two such methods are presented in this paper by integrating a machine learning algorithm, the support vector machine (SVM), and the random effect model. Without any brain response modeling, SVM was used to extract a whole brain spatial discriminance map (SDM), representing the brain response difference be...

  12. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  13. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  14. Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting

    Directory of Open Access Journals (Sweden)

    Xiwei Huang

    2016-11-01

    Full Text Available A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications.

  15. Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds

    Directory of Open Access Journals (Sweden)

    Mengxuan Gao

    2017-02-01

    Full Text Available Various biological factors have been implicated in convulsive seizures, involving side effects of drugs. For the preclinical safety assessment of drug development, it is difficult to predict seizure-inducing side effects. Here, we introduced a machine learning-based in vitro system designed to detect seizure-inducing side effects. We recorded local field potentials from the CA1 alveus in acute mouse neocortico-hippocampal slices, while 14 drugs were bath-perfused at 5 different concentrations each. For each experimental condition, we collected seizure-like neuronal activity and merged their waveforms as one graphic image, which was further converted into a feature vector using Caffe, an open framework for deep learning. In the space of the first two principal components, the support vector machine completely separated the vectors (i.e., doses of individual drugs) that induced seizure-like events and identified diphenhydramine, enoxacin, strychnine and theophylline as “seizure-inducing” drugs, which indeed were reported to induce seizures in clinical situations. Thus, this artificial intelligence-based classification may provide a new platform to detect the seizure-inducing side effects of preclinical drugs.
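The pipeline described above, projecting feature vectors onto the first two principal components and then drawing a linear decision boundary, can be sketched with synthetic data. To stay dependency-free, PCA is done via SVD and a simple linear centroid rule stands in for the SVM; everything below is illustrative, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic "feature vectors": two classes separated along a latent direction
X_pos = rng.normal(0.0, 1.0, (40, 50)) + 3.0   # "seizure-inducing" conditions
X_neg = rng.normal(0.0, 1.0, (40, 50)) - 3.0   # inert conditions
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 40 + [0] * 40)

# PCA via SVD: project onto the first two principal components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                 # scores in the first two PCs

# linear decision rule in PC space (a stand-in for a linear SVM)
w = Z[y == 1].mean(axis=0) - Z[y == 0].mean(axis=0)
b = -w @ (Z[y == 1].mean(axis=0) + Z[y == 0].mean(axis=0)) / 2
pred = (Z @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
```

With well-separated classes the first component absorbs the between-class direction, so the two clusters are linearly separable in the 2-D PC plane, mirroring the "complete separation" the abstract reports.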

  16. Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting.

    Science.gov (United States)

    Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao

    2016-11-02

    A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications.

  17. Machine learning based analytics of micro-MRI trabecular bone microarchitecture and texture in type 1 Gaucher disease.

    Science.gov (United States)

    Sharma, Gulshan B; Robertson, Douglas D; Laney, Dawn A; Gambello, Michael J; Terk, Michael

    2016-06-14

    Type 1 Gaucher disease (GD) is an autosomal recessive lysosomal storage disease, affecting bone metabolism, structure and strength. Current bone assessment methods are not ideal. Semi-quantitative MRI scoring is unreliable, not standardized, and only evaluates bone marrow. DXA BMD is also used but is a limited predictor of bone fragility/fracture risk. Our purpose was to measure trabecular bone microarchitecture, as a biomarker of bone disease severity, in type 1 GD individuals with different GD genotypes and to apply machine learning based analytics to discriminate between GD patients and healthy individuals. Micro-MR imaging of the distal radius was performed on 20 type 1 GD patients and 10 healthy controls (HC). Fifteen stereological and textural measures (STM) were calculated from the MR images. General linear models demonstrated significant differences between GD and HC, and GD genotypes. Stereological measures, main contributors to the first two principal components (PCs), explained ~50% of data variation and were significantly different between males and females. Subsequent PCs textural measures were significantly different between GD patients and HC individuals. Textural measures also significantly differed between GD genotypes, and distinguished between GD patients with normal and pathologic DXA scores. PCA and SVM predictive analyses discriminated between GD and HC with maximum accuracy of 73% and area under ROC curve of 0.79. Trabecular STM differences can be quantified between GD patients and HC, and GD sub-types using micro-MRI and machine learning based analytics. Work is underway to expand this approach to evaluate GD disease burden and treatment efficacy.

  18. A review of literature on the use of machine learning methods for opinion mining

    Directory of Open Access Journals (Sweden)

    Aytuğ ONAN

    2016-05-01

    Full Text Available Opinion mining is an emerging field which uses methods of natural language processing, text mining and computational linguistics to extract subjective information from opinion holders. Opinion mining can be viewed as a classification problem; hence, machine learning based methods are widely employed for sentiment classification. Machine learning based methods in opinion mining can be broadly classified as supervised, semi-supervised and unsupervised. In this study, the main existing literature on the use of machine learning methods for opinion mining is presented, and the strengths and weaknesses of these methods are discussed.
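As a concrete instance of the supervised sentiment classification this review surveys, here is a tiny multinomial Naive Bayes classifier using only the standard library; the corpus and labels are invented for illustration:

```python
import math
from collections import Counter, defaultdict

# toy labeled corpus (hypothetical reviews)
train = [
    ("great film loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible film hated it", "neg"),
    ("boring plot awful acting", "neg"),
]

counts = defaultdict(Counter)   # per-class word counts
totals = Counter()              # per-class token totals
vocab = set()
for text, label in train:
    for w in text.split():
        counts[label][w] += 1
        totals[label] += 1
        vocab.add(w)

def predict(text):
    """Multinomial Naive Bayes with Laplace smoothing, uniform priors."""
    scores = {}
    for label in counts:
        logp = 0.0
        for w in text.split():
            logp += math.log((counts[label][w] + 1) /
                             (totals[label] + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

print(predict("great acting"))       # → pos
print(predict("awful boring film"))  # → neg
```

Real opinion-mining systems replace the toy corpus with large labeled datasets and often swap Naive Bayes for SVMs or neural models, but the bag-of-words-plus-classifier shape is the same.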

  19. Machine Learning Based Classification of Microsatellite Variation: An Effective Approach for Phylogeographic Characterization of Olive Populations.

    Science.gov (United States)

    Torkzaban, Bahareh; Kayvanjoo, Amir Hossein; Ardalan, Arman; Mousavi, Soraya; Mariotti, Roberto; Baldoni, Luciana; Ebrahimie, Esmaeil; Ebrahimi, Mansour; Hosseini-Mazinani, Mehdi

    2015-01-01

    Finding efficient analytical techniques is overwhelmingly turning into a bottleneck for the effectiveness of large biological data. Machine learning offers a novel and powerful tool to advance classification and modeling solutions in molecular biology. However, these methods have been less frequently used with empirical population genetics data. In this study, we developed a new combined approach of data analysis using microsatellite marker data from our previous studies of olive populations using machine learning algorithms. Herein, 267 olive accessions of various origins including 21 reference cultivars, 132 local ecotypes, and 37 wild olive specimens from the Iranian plateau, together with 77 of the most represented Mediterranean varieties were investigated using a finely selected panel of 11 microsatellite markers. We organized data in two '4-targeted' and '16-targeted' experiments. A strategy of assaying different machine based analyses (i.e. data cleaning, feature selection, and machine learning classification) was devised to identify the most informative loci and the most diagnostic alleles to represent the population and the geography of each olive accession. These analyses revealed microsatellite markers with the highest differentiating capacity and proved efficiency for our method of clustering olive accessions to reflect upon their regions of origin. A distinguished highlight of this study was the discovery of the best combination of markers for better differentiating of populations via machine learning models, which can be exploited to distinguish among other biological populations.

  20. Twin Support Vector Machine for Multiple Instance Learning Based on Bag Dissimilarities

    Directory of Open Access Journals (Sweden)

    Divya Tomar

    2016-01-01

    Full Text Available In the multiple instance learning (MIL) framework, an object is represented by a set of instances referred to as a bag. A positive class label is assigned to a bag if it contains at least one positive instance; otherwise, the bag is labeled negative. The task of MIL is therefore to learn a classifier at the bag level rather than at the instance level, and traditional supervised learning approaches cannot be applied directly in this situation. In this study, we represent each bag by a vector of its dissimilarities to the other bags in the training dataset and propose a multiple instance learning based Twin Support Vector Machine (MIL-TWSVM) classifier. We use different ways to represent the dissimilarity between two bags and perform a comparative analysis of them. The experimental results on ten benchmark MIL datasets demonstrate that the proposed MIL-TWSVM classifier is computationally inexpensive and competitive with state-of-the-art approaches. The significance of the experimental results has been tested using the Friedman statistic and Nemenyi post hoc tests.
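The bag-dissimilarity representation at the core of this approach can be sketched in a few lines: each bag becomes a vector of its dissimilarities to every training bag, after which any single-instance classifier (such as a TWSVM) can be trained on those vectors. The minimum pairwise instance distance used below is one of several possible bag dissimilarities, and the data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
# three hypothetical bags, each a set of five 2-D instances
bags = [rng.normal(c, 0.3, (5, 2)) for c in (0.0, 1.0, 5.0)]

def bag_dissimilarity(a, b):
    """Minimum distance between any instance of bag `a` and any of bag `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()

# each bag -> vector of dissimilarities to all training bags;
# rows of D are the fixed-length features a standard classifier consumes
D = np.array([[bag_dissimilarity(a, b) for b in bags] for a in bags])
```

The key point is that D has one fixed-length row per bag regardless of how many instances each bag contains, which is what lets bag-level learning reuse ordinary supervised machinery.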

  1. Interpretation of machine-learning-based disruption models for plasma control

    Science.gov (United States)

    Parsons, Matthew S.

    2017-08-01

    While machine learning techniques have been applied within the context of fusion for predicting plasma disruptions in tokamaks, they are typically interpreted with a simple ‘yes/no’ prediction or perhaps a probability forecast. These techniques take input signals, which could be real-time signals from machine diagnostics, to make a prediction of whether a transient event will occur. A major criticism of these methods is that, due to the nature of machine learning, there is no clear correlation between the input signals and the output prediction result. Here, a simple method is proposed that could be applied to any existing prediction model to determine how sensitive the state of a plasma is, at any given time, with respect to the input signals. This is accomplished by computing the gradient of the decision function, which effectively identifies the quickest path away from a disruption as a function of the input signals and therefore could be used in a plasma control setting to avoid disruptions. A numerical example based on a support vector machine model is provided for illustration, and the application to real data is left as an open opportunity.
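The proposed sensitivity analysis reduces to taking the gradient of the predictor's decision function with respect to the input signals. A minimal numerical sketch on a toy RBF-style decision function (a hypothetical stand-in for a trained SVM, with invented centers and weights):

```python
import numpy as np

# toy decision function: positive = "safe", negative = "disruption-prone"
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
weights = np.array([1.0, -1.5])   # second kernel pulls toward disruption

def decision(x):
    """RBF-kernel-style decision value, as a trained SVM would provide."""
    return float(weights @ np.exp(-np.sum((centers - x) ** 2, axis=1)))

def gradient(x, eps=1e-5):
    """Central finite-difference gradient of the decision function."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x); step[i] = eps
        g[i] = (decision(x + step) - decision(x - step)) / (2 * eps)
    return g

x = np.array([1.0, 1.0])          # current plasma state (hypothetical signals)
g = gradient(x)
# moving along +g is the quickest path toward a safer (higher) decision value
x_safer = x + 0.1 * g / np.linalg.norm(g)
```

Finite differences are used here so the sketch works for any black-box predictor; for a specific kernel model the gradient can of course be written analytically, which is what a real-time controller would prefer.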

  2. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic-algorithm-optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (the K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithm shows overall the best agreement with the experimental data out of the three MLAs, owing to its global optimization and extrapolation ability.
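Of the three MLAs named above, the radial basis function network is the simplest to sketch: a Gaussian hidden layer whose output weights are solved by linear least squares. The fatigue data below are synthetic and the hyperparameters invented, so this is only a shape-of-the-method illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
# synthetic fatigue data: log(da/dN) mildly nonlinear in ΔK
dk = np.linspace(10.0, 60.0, 40)                  # ΔK values
log_rate = -8 + 3 * np.log10(dk) + 0.2 * np.sin(dk / 10)
log_rate += rng.normal(0.0, 0.02, dk.size)        # measurement noise

# RBF network: Gaussian hidden layer + linear output layer
centers = np.linspace(10.0, 60.0, 8)
width = 10.0
Phi = np.exp(-((dk[:, None] - centers[None, :]) / width) ** 2)
w, *_ = np.linalg.lstsq(Phi, log_rate, rcond=None)  # output weights

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - log_rate) ** 2))
```

Because the output layer is linear, training reduces to one least-squares solve, which is also why ELM-style networks (random hidden layer, solved output weights) train so quickly on this kind of curve-fitting problem.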

  3. Use of a Machine Learning-Based High Content Analysis Approach to Identify Photoreceptor Neurite Promoting Molecules.

    Science.gov (United States)

    Fuller, John A; Berlinicke, Cynthia A; Inglese, James; Zack, Donald J

    2016-01-01

    High content analysis (HCA) has become a leading methodology in phenotypic drug discovery efforts. Typical HCA workflows include imaging cells using an automated microscope and analyzing the data using algorithms designed to quantify one or more specific phenotypes of interest. Due to the richness of high content data, unappreciated phenotypic changes may be discovered in existing image sets using interactive machine-learning based software systems. Primary postnatal day four retinal cells from the photoreceptor (PR) labeled QRX-EGFP reporter mice were isolated, seeded, treated with a set of 234 profiled kinase inhibitors and then cultured for 1 week. The cells were imaged with an Acumen plate-based laser cytometer to determine the number and intensity of GFP-expressing, i.e. PR, cells. Wells displaying intensities and counts above threshold values of interest were re-imaged at a higher resolution with an INCell2000 automated microscope. The images were analyzed with an open source HCA analysis tool, PhenoRipper (Rajaram et al., Nat Methods 9:635-637, 2012), to identify the high GFP-inducing treatments that additionally resulted in diverse phenotypes compared to the vehicle control samples. The pyrimidinopyrimidone kinase inhibitor CHEMBL-1766490, a pan kinase inhibitor whose major known targets are p38α and the Src family member lck, was identified as an inducer of photoreceptor neuritogenesis by using the open-source HCA program PhenoRipper. This finding was corroborated using a cell-based method of image analysis that measures quantitative differences in the mean neurite length in GFP expressing cells. Interacting with data using machine learning algorithms may complement traditional HCA approaches by leading to the discovery of small molecule-induced cellular phenotypes in addition to those upon which the investigator is initially focusing.

  4. Machine learning based data mining for Milky Way filamentary structures reconstruction

    CERN Document Server

    Riccio, Giuseppe; Schisano, Eugenio; Brescia, Massimo; Mercurio, Amata; Elia, Davide; Benedettini, Milena; Pezzuto, Stefano; Molinari, Sergio; Di Giorgio, Anna Maria

    2015-01-01

    We present an innovative method called FilExSeC (Filaments Extraction, Selection and Classification), a data mining tool developed to investigate the possibility to refine and optimize the shape reconstruction of filamentary structures detected with a consolidated method based on the flux derivative analysis, through the column-density maps computed from Herschel infrared Galactic Plane Survey (Hi-GAL) observations of the Galactic plane. The present methodology is based on a feature extraction module followed by a machine learning model (Random Forest) dedicated to select features and to classify the pixels of the input images. From tests on both simulations and real observations the method appears reliable and robust with respect to the variability of shape and distribution of filaments. In the cases of highly defined filament structures, the presented method is able to bridge the gaps among the detected fragments, thus improving their shape reconstruction. From a preliminary "a posteriori" analysis of deriv...

  5. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, much attention in the hydrological literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT, with the Nzoia catchment in West Kenya as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results for model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty specific to the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). 
A solution could be to form a committee of all models U and
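    The MLUE workflow above (sample parameters, train a regression model U on the sampled results, predict output uncertainty for new inputs, then combine the scheme-specific models in a committee) can be sketched roughly as follows. The synthetic data, the linear stand-in for U, and the committee of seven members are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MLUE training data: each "sampling scheme" yields its
# own set of (input features, uncertainty-quantile) training pairs.  Features
# here are hypothetical lagged precipitation/flow values; the target could be,
# e.g., the width of a prediction interval of the hydrological model output.
def make_dataset(n=200):
    X = rng.normal(size=(n, 3))                      # lagged precip/flow (toy)
    y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=n)
    return X, y

def fit_linear(X, y):
    """Least-squares uncertainty predictor U (a linear stand-in for any regressor)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# One model U per sampling scheme, then a committee that averages them.
models = [fit_linear(*make_dataset()) for _ in range(7)]   # 7 schemes, as above
X_new, _ = make_dataset(50)
committee_pred = np.mean([predict(c, X_new) for c in models], axis=0)
print(committee_pred.shape)  # (50,)
```

A committee average sidesteps the question of which sampling scheme yields the "best" model U by pooling all of them.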

  6. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    Science.gov (United States)

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, in contrast, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
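    The conversion of relevance scores into permutation p-values can be illustrated with a minimal sketch. The relevance measure here (absolute feature-label correlation) is a deliberately simple stand-in for a machine-learning importance score such as a random-forest importance, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 5 features, only feature 0 is informative for the class label.
n = 300
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

def relevance(X, y):
    """Stand-in relevance score: |correlation| of each feature with the label.
    In practice this would be, e.g., a random-forest importance."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

obs = relevance(X, y)
B = 200
null = np.array([relevance(X, rng.permutation(y)) for _ in range(B)])

# Permutation p-value per feature: fraction of null scores >= observed score
# (with the usual +1 correction so p-values are never exactly zero).
pvals = (1 + (null >= obs).sum(axis=0)) / (B + 1)
print(pvals)
```

The relevance scores themselves are not comparable across methods, but the resulting p-values are, which is what enables a significance threshold.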

  7. Machine learning methods in chemoinformatics

    Science.gov (United States)

    Mitchell, John B O

    2014-01-01

    Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure–activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers. How to cite this article: WIREs Comput Mol Sci 2014, 4:468–481. doi:10.1002/wcms.1183 PMID:25285160
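    As a concrete example of one of the supervised methods listed, here is a minimal k-Nearest Neighbors classifier in plain NumPy. The "molecular descriptor" data are synthetic Gaussian blobs, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-Nearest Neighbors classifier (Euclidean distance, majority
    vote).  Rows of X stand in for molecular descriptor vectors; y holds
    binary class labels (e.g. active = 1, inactive = 0)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    return (votes.mean(axis=1) >= 0.5).astype(int)

# Two Gaussian blobs as a stand-in for actives vs inactives.
X0 = rng.normal(loc=-1.0, size=(50, 4))
X1 = rng.normal(loc=+1.0, size=(50, 4))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(-1.0, size=(10, 4)), rng.normal(1.0, size=(10, 4))])
pred = knn_predict(X_train, y_train, X_test)
acc = (pred == np.array([0] * 10 + [1] * 10)).mean()
print(acc)
```

The same train/predict interface applies to the other methods named above; only the model behind `predict` changes.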

  8. Machine Learning Based Data Mining for Milky Way Filamentary Structures Reconstruction

    Science.gov (United States)

    Riccio, Giuseppe; Cavuoti, Stefano; Schisano, Eugenio; Brescia, Massimo; Mercurio, Amata; Elia, Davide; Benedettini, Milena; Pezzuto, Stefano; Molinari, Sergio; Di Giorgio, Anna Maria

    2016-06-01

    We present an innovative method called FilExSeC (Filaments Extraction, Selection and Classification), a data mining tool developed to investigate the possibility of refining and optimizing the shape reconstruction of filamentary structures detected with a consolidated method based on flux-derivative analysis, through the column-density maps computed from Herschel infrared Galactic Plane Survey (Hi-GAL) observations of the Galactic plane. The methodology is based on a feature-extraction module followed by a machine learning model (Random Forest) dedicated to selecting features and classifying the pixels of the input images. From tests on both simulations and real observations, the method appears reliable and robust with respect to the variability of filament shapes and distributions. In the case of well-defined filament structures, the method is able to bridge the gaps among the detected fragments, thus improving their shape reconstruction. A preliminary a posteriori analysis of the derived filament physical parameters suggests that the method is potentially able to contribute appreciably to completing and refining the filament reconstruction.

  9. Intelligent Video Object Classification Scheme using Offline Feature Extraction and Machine Learning based Approach

    Directory of Open Access Journals (Sweden)

    Chandra Mani Sharma

    2012-01-01

    Full Text Available Classification of objects in video streams is important because of its applications in many emerging areas such as visual surveillance, content-based video retrieval and indexing. The task is challenging because video data are voluminous and highly variable in nature, and must be processed in real time. This paper presents a multiclass object classification technique using a machine learning approach. Haar-like features are used for training the classifier. Feature calculation is performed using the Integral Image representation, and we train the classifier offline using Stage-wise Additive Modeling using a Multiclass Exponential loss function (SAMME). The validity of the method has been verified through the implementation of a real-time human-car detector. Experimental results show that the proposed method can accurately classify objects in video into their respective classes. The proposed object classifier works well in outdoor environments under moderate lighting conditions and variable scene backgrounds. The proposed technique is compared with other object classification techniques based on various performance parameters.
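    The Integral Image trick mentioned above is what makes Haar-like feature evaluation constant-time per rectangle. A minimal sketch follows; the specific two-rectangle feature is illustrative, not the paper's exact feature set:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x],
    so any rectangle sum needs only 4 lookups -- the trick that makes
    Haar-like feature evaluation cheap enough for real-time classification."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] via four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle (left/right) Haar-like feature: left half minus right half."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))   # sum of img[1:3, 1:3] -> 30
```

Thousands of such features per detection window can then be evaluated quickly, which is what the boosted SAMME classifier consumes.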

  10. Machine-learning-based calving prediction from activity, lying, and ruminating behaviors in dairy cattle.

    Science.gov (United States)

    Borchers, M R; Chang, Y M; Proudfoot, K L; Wadsworth, B A; Stone, A E; Bewley, J M

    2017-07-01

    The objective of this study was to use automated activity, lying, and rumination monitors to characterize prepartum behavior and predict calving in dairy cattle. Data were collected from 20 primiparous and 33 multiparous Holstein dairy cattle from September 2011 to May 2013 at the University of Kentucky Coldstream Dairy. The HR Tag (SCR Engineers Ltd., Netanya, Israel) automatically collected neck activity and rumination data in 2-h increments. The IceQube (IceRobotics Ltd., South Queensferry, United Kingdom) automatically collected number of steps, lying time, standing time, number of transitions from standing to lying (lying bouts), and total motion, summed in 15-min increments. IceQube data were summed in 2-h increments to match HR Tag data. All behavioral data were collected for 14 d before the predicted calving date. Retrospective data analysis was performed using mixed linear models to examine behavioral changes by day in the 14 d before calving. Bihourly behavioral differences from baseline values over the 14 d before calving were also evaluated using mixed linear models. Changes in daily rumination time, total motion, lying time, and lying bouts occurred in the 14 d before calving. In the bihourly analysis, extreme values for all behaviors occurred in the final 24 h, indicating that the monitored behaviors may be useful in calving prediction. To determine whether technologies were useful at predicting calving, random forest, linear discriminant analysis, and neural network machine-learning models were constructed and implemented using R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). These methods were used on variables from each technology and all combined variables from both technologies. A neural network analysis that combined variables from both technologies at the daily level yielded 100.0% sensitivity and 86.8% specificity. 
A neural network analysis that combined variables from both technologies in bihourly increments was
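    The sensitivity and specificity figures reported above can be computed from a confusion matrix in a few lines; the toy counts below are invented for illustration:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP) -- the two
    figures reported for the calving classifiers above."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)

# Toy check: 10 calving windows, 8 detected; 20 non-calving windows, 17 rejected.
y_true = [1] * 10 + [0] * 20
y_pred = [1] * 8 + [0] * 2 + [0] * 17 + [1] * 3
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.8 0.85
```

Reporting both values matters here: a classifier that always predicts "calving" would have perfect sensitivity but zero specificity.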

  11. A feasibility study of automatic lung nodule detection in chest digital tomosynthesis with machine learning based on support vector machine

    Science.gov (United States)

    Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Jo, Byungdu; Choi, Seungyeon; Shin, Jungwook; Kim, Hee-Joung

    2017-03-01

    Chest digital tomosynthesis (CDT) is a recently developed medical imaging modality with several advantages for diagnosing lung disease. For example, CDT provides depth information at a relatively low radiation dose compared to computed tomography (CT). However, a major problem with CDT is the image artifacts associated with data incompleteness resulting from limited-angle data acquisition in the CDT geometry; as a result, its sensitivity for lung disease has not been clearly established relative to CT. In this study, to improve the sensitivity of lung disease detection in CDT, we developed a computer-aided diagnosis (CAD) system based on machine learning. To design the CAD system, we used 100 cropped images of lung nodules and 100 cropped images of normal lesions acquired with lung-man phantoms and a prototype CDT system. We used machine learning techniques based on a support vector machine (SVM) and Gabor filters. The Gabor filter was used for extracting the characteristics of lung nodules, and we compared the feature-extraction performance of Gabor filters with various scale and orientation parameters, using 3, 4, and 5 scales and 4, 6, and 8 orientations. After extracting features, an SVM was used to classify the lesion features. The linear, polynomial and Gaussian kernels of the SVM were compared to decide the best SVM conditions for CDT reconstruction images. The results showed that the CAD system with machine learning is capable of automatic lung lesion detection, and detection performance was best when a Gabor filter with 5 scales and 8 orientations and an SVM with a Gaussian kernel were used. In conclusion, our CAD system improved the sensitivity of lung lesion detection in CDT, and these experiments determined the Gabor filter and SVM conditions that achieve the highest detection performance.
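    A Gabor filter bank with the best-performing setting reported above (5 scales, 8 orientations) might be built as follows. The exact kernel parametrisation (envelope width, wavelength tied to scale) is an assumption, since the paper does not specify it here:

```python
import numpy as np

def gabor_kernel(scale, theta, size=15):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine wave
    at orientation theta.  The sigma/wavelength choices are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma, lam = 0.56 * scale * 2, scale * 2.0
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam)

# A 5-scale x 8-orientation bank, the best-performing setting reported above.
bank = [gabor_kernel(s, o * np.pi / 8)
        for s in range(1, 6) for o in range(8)]
print(len(bank), bank[0].shape)  # 40 (15, 15)
```

In a pipeline like the one described, each cropped ROI would be convolved with every kernel and the filter-response statistics fed to the SVM.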

  12. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar; Marianno, Fernando J.; Shao, Xiaoyan; Zhang, Jie; Hodge, Bri-Mathias; Hamann, Hendrik F.

    2015-07-15

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning-based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
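    The core blending idea, learning weights over several forecasts together with weather-state variables so that situation-dependent error can be corrected, can be sketched with a least-squares fit on synthetic data (all numbers and the single "cloudiness" state variable are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic irradiance "truth" and three imperfect model forecasts whose
# errors depend on a weather-state variable (cloudiness, column 0 of `state`).
n = 500
state = rng.uniform(size=(n, 1))
truth = 600 * (1 - 0.6 * state[:, 0]) + rng.normal(scale=10, size=n)
forecasts = np.column_stack([
    truth + 40 * state[:, 0] + rng.normal(scale=20, size=n),       # biased when cloudy
    truth - 30 * (1 - state[:, 0]) + rng.normal(scale=20, size=n), # biased when clear
    truth + rng.normal(scale=50, size=n),                          # noisy but unbiased
])

# Blend = least-squares weights over the forecasts AND the state variable,
# so the fit can correct situation-dependent error (the key idea above).
A = np.column_stack([np.ones(n), forecasts, state])
w, *_ = np.linalg.lstsq(A, truth, rcond=None)
blend = A @ w

mse = lambda p: np.mean((p - truth) ** 2)
print(mse(blend), [round(mse(forecasts[:, i]), 1) for i in range(3)])
```

Because any single forecast is a special case of the linear blend, the fitted blend can never do worse than the best individual model on the training data; the gain over a plain weighted average comes from the state column.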

  13. Machine learning methods for planning

    CERN Document Server

    Minton, Steven

    1993-01-01

    Machine Learning Methods for Planning provides information pertinent to learning methods for planning and scheduling. This book covers a wide variety of learning methods and learning architectures, including analogical, case-based, decision-tree, explanation-based, and reinforcement learning. Organized into 15 chapters, this book begins with an overview of planning and scheduling and describes some representative learning systems that have been developed for these tasks. This text then describes a learning apprentice for calendar management. Other chapters consider the problem of temporal credi

  14. Protein subcellular localization prediction using multiple kernel learning based support vector machine.

    Science.gov (United States)

    Hasan, Md Al Mehedi; Ahmad, Shamim; Molla, Md Khademul Islam

    2017-03-28

    Predicting the subcellular locations of proteins can provide useful hints that reveal their functions, increase our understanding of the mechanisms of some diseases, and finally aid in the development of novel drugs. As the number of newly discovered proteins grows exponentially, subcellular localization prediction by laboratory tests alone becomes prohibitively laborious and expensive. In this context, to tackle the challenges, computational methods are being developed as an alternative choice to aid biologists in selecting target proteins and designing related experiments. However, the success of protein subcellular localization prediction is still a complicated and challenging issue, particularly when query proteins have multi-label characteristics, i.e., if they exist simultaneously in more than one subcellular location or if they move between two or more different subcellular locations. To date, to address this problem, several types of subcellular localization prediction methods with different levels of accuracy have been proposed. The support vector machine (SVM) has been employed to provide potential solutions to the protein subcellular localization prediction problem. However, the practicability of an SVM is affected by the challenges of selecting an appropriate kernel and selecting the parameters of the selected kernel. To address this difficulty, in this study, we aimed to develop an efficient multi-label protein subcellular localization prediction system, named MKLoc, by introducing a multiple kernel learning (MKL) based SVM. We evaluated MKLoc using a combined dataset containing 5447 single-localized proteins (originally published as part of the Höglund dataset) and 3056 multi-localized proteins (originally published as part of the DBMLoc set). Note that this dataset was used by Briesemeister et al. in their extensive comparison of multi-localization prediction systems. Finally, our experimental results indicate that
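    The essence of MKL, replacing a single kernel with a learned nonnegative combination of base kernels, can be illustrated in a few lines. Here the combination weights are fixed by hand rather than learned, and the data are random stand-ins for protein feature vectors:

```python
import numpy as np

rng = np.random.default_rng(3)

def linear_kernel(X, Z):
    return X @ Z.T

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# MKL core idea: the effective kernel is a weighted sum of base kernels,
# K = sum_i w_i * K_i with w_i >= 0, and in a real MKL-SVM the weights are
# learned jointly with the classifier.  Here they are fixed for illustration.
X = rng.normal(size=(20, 6))            # stand-in protein feature vectors
weights = np.array([0.3, 0.7])
K = weights[0] * linear_kernel(X, X) + weights[1] * rbf_kernel(X, X)

# A valid combined kernel stays symmetric and positive semidefinite.
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > -1e-8)  # True True
```

This is why MKL sidesteps the kernel-selection problem: rather than picking one kernel and tuning it, the optimizer distributes weight over a family of candidate kernels.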

  15. A machine learning-based framework to identify type 2 diabetes through electronic health records.

    Science.gov (United States)

    Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You

    2017-01-01

    To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association studies (GWAS) and phenome-wide association studies (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize the filtering criteria and improve the recall rate while keeping a low false positive rate. We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a diabetes-related cohort of 23,281 patients retrieved from a regional distributed EHR repository covering the years 2012 to 2014. We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature

  16. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics.

    Science.gov (United States)

    Prins, Noeline W; Sanchez, Justin C; Prasad, Abhishek

    2017-06-01

    For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, how to use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Motivated by the perception-action-reward cycle (PARC) in the brain, which links reward to cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal, towards building an autonomous BMI we envision using a reward signal from the nucleus accumbens (NAcc), which plays a key role in linking reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed the 'ambiguous' region. When the feedback was within this region, the BMI decoder updated its weights at a lower rate than when fully confident, with the rate determined by the confidence. 
We used two biologically realistic models to generate synthetic data for MI (Izhikevich model) and NAcc (Humphries
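    The confidence-gated weight update described above might be sketched as follows. The low threshold, the linear scaling inside the ambiguous band, and the learning rate are illustrative assumptions; only the 0.2 high-confidence threshold comes from the text:

```python
import numpy as np

def gated_update(w, grad, confidence, lr=0.05, low=0.0, high=0.2):
    """Confidence-gated actor weight update, paraphrasing the rule above:
    at or below `low` the feedback is ignored, at or above `high` it is
    applied in full, and inside the ambiguous band it is scaled by the
    confidence.  Thresholds/scaling here are illustrative assumptions."""
    if confidence <= low:
        gate = 0.0
    elif confidence >= high:
        gate = 1.0
    else:
        gate = confidence / high            # partial trust in the feedback
    return w + lr * gate * grad

w = np.zeros(4)
grad = np.ones(4)
print(gated_update(w, grad, 0.5))   # full update: each weight moves by lr
print(gated_update(w, grad, 0.0))   # feedback ignored: weights unchanged
```

The gate makes the decoder robust to noisy biological reward signals: uncertain feedback nudges the policy gently instead of being trusted outright.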

  17. Feedback for reinforcement learning based brain-machine interfaces using confidence metrics

    Science.gov (United States)

    Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek

    2017-06-01

    Objective. For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how the feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize an evaluative feedback from the brain to update the decoding algorithm and use it intelligently in order to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and how that feedback information can be incorporated into a reinforcement learning (RL) controller architecture to maximize its performance. Approach. Motivated by the perception-action-reward cycle (PARC) in the brain which links reward for cognitive decision making and goal-directed behavior, we used a reward-based RL architecture named Actor-Critic RL as the model. Instead of using an error signal towards building an autonomous BMI, we envision to use a reward signal from the nucleus accumbens (NAcc) which plays a key role in the linking of reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric which was used to indicate the degree of feedback accuracy. This confidence was added to the Actor’s weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high confidence and low confidence was termed as the ‘ambiguous’ region. When the feedback was within this region, the BMI decoder updated its weight at a lower rate than when fully confident, which was decided by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich

  18. Photometric classification of emission line galaxies with Machine Learning methods

    CERN Document Server

    Cavuoti, Stefano; D'Abrusco, Raffaele; Longo, Giuseppe; Paolillo, Maurizio

    2013-01-01

    In this paper we discuss an application of machine learning based methods to the identification of candidate AGNs from optical survey data and to the automatic classification of AGNs into broad classes. We applied four different machine learning algorithms, namely the Multi Layer Perceptron (MLP), trained respectively with the Conjugate Gradient, Scaled Conjugate Gradient and Quasi Newton learning rules, and the Support Vector Machines (SVM), to tackle the problem of the classification of emission line galaxies in different classes, mainly AGNs vs non-AGNs, obtained using optical photometry in place of the diagnostics based on line intensity ratios which are classically used in the literature. Using the same photometric features we also discuss the behavior of the classifiers on finer AGN classification tasks, namely Seyfert I vs Seyfert II and Seyfert vs LINER. Furthermore we describe the algorithms employed, the samples of spectroscopically classified galaxies used to train the algorithms, the procedure follow...

  19. Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features.

    Science.gov (United States)

    Shim, Miseon; Hwang, Han-Jeong; Kim, Do-Won; Lee, Seung-Hwan; Im, Chang-Hwan

    2016-10-01

    Recently, an increasing number of researchers have endeavored to develop practical tools for diagnosing patients with schizophrenia using machine learning techniques applied to EEG biomarkers. Although a number of studies showed that source-level EEG features can potentially be applied to the differential diagnosis of schizophrenia, most studies have used only sensor-level EEG features such as ERP peak amplitude and power spectrum for machine learning-based diagnosis of schizophrenia. In this study, we used both sensor-level and source-level features extracted from EEG signals recorded during an auditory oddball task for the classification of patients with schizophrenia and healthy controls. EEG signals were recorded from 34 patients with schizophrenia and 34 healthy controls while each subject was asked to attend to oddball tones. Our results demonstrated higher classification accuracy when source-level features were used together with sensor-level features, compared to when only sensor-level features were used. In addition, the selected sensor-level features were mostly found in the frontal area, and the selected source-level features were mostly extracted from the temporal area, which coincide well with the well-known pathological region of cognitive processing in patients with schizophrenia. Our results suggest that our approach would be a promising tool for the computer-aided diagnosis of schizophrenia.

  20. Tracking by Machine Learning Methods

    CERN Document Server

    Jofrehei, Arash

    2015-01-01

    Current track reconstruction methods start with two points and then, for each layer, loop through all possible hits to find the proper hits to add to that track. Another idea would be to use the large number of already reconstructed events and/or simulated data and train a machine on these data to find tracks given the hit pixels. Training time could be long, but real-time tracking would be very fast. Simulation might not be as realistic as real data, but tracking on simulated data has been done with 100 percent efficiency, while by using real data we would probably be limited to the current efficiency.

  1. Automated discrimination of dicentric and monocentric chromosomes by machine learning-based image processing.

    Science.gov (United States)

    Li, Yanxin; Knoll, Joan H; Wilkins, Ruth C; Flegal, Farrah N; Rogan, Peter K

    2016-05-01

    Dose from radiation exposure can be estimated from dicentric chromosome (DC) frequencies in metaphase cells of peripheral blood lymphocytes. We automated DC detection by extracting features in Giemsa-stained metaphase chromosome images and classifying objects by machine learning (ML). DC detection involves (i) intensity thresholded segmentation of metaphase objects, (ii) chromosome separation by watershed transformation and elimination of inseparable chromosome clusters, fragments and staining debris using a morphological decision tree filter, (iii) determination of chromosome width and centreline, (iv) derivation of centromere candidates, and (v) distinction of DCs from monocentric chromosomes (MC) by ML. Centromere candidates are inferred from 14 image features input to a Support Vector Machine (SVM). Sixteen features derived from these candidates are then supplied to a Boosting classifier and a second SVM which determines whether a chromosome is either a DC or MC. The SVM was trained with 292 DCs and 3135 MCs, and then tested with cells exposed to either low (1 Gy) or high (2-4 Gy) radiation dose. Results were then compared with those of 3 experts. True positive rates (TPR) and positive predictive values (PPV) were determined for the tuning parameter, σ. At larger σ, PPV decreases and TPR increases. At high dose, for σ = 1.3, TPR = 0.52 and PPV = 0.83, while at σ = 1.6, the TPR = 0.65 and PPV = 0.72. At low dose and σ = 1.3, TPR = 0.67 and PPV = 0.26. The algorithm differentiates DCs from MCs, overlapped chromosomes and other objects with acceptable accuracy over a wide range of radiation exposures.

  2. A Machine Learning based Efficient Software Reusability Prediction Model for Java Based Object Oriented Software

    Directory of Open Access Journals (Sweden)

    Surbhi Maggo

    2014-01-01

    Full Text Available Software reuse refers to the development of new software systems with the likelihood of completely or partially using existing components or resources, with or without modification. Reusability is the measure of the ease with which previously acquired concepts and objects can be used in new contexts. It is a promising strategy for improvements in software quality, productivity and maintainability, as it provides for cost-effective, reliable (with the consideration that prior testing and use has eliminated bugs) and accelerated (reduced time to market) development of software products. In this paper we present an efficient automation model for the identification and evaluation of reusable software components to measure the reusability levels (high, medium or low) of Java based (object oriented) software systems. The presented model uses a metric framework for the functional analysis of the object oriented software components that targets essential attributes of reusability analysis, also taking into consideration the Maintainability Index to account for partial reuse. Further, the machine learning algorithm LMNN is explored to establish relationships between the functional attributes. The model works at the functional level rather than at the structural level. The system is implemented as a tool in Java, and the performance of the automation tool developed is recorded using criteria like precision, recall, accuracy and error rate. The results gathered indicate that the model can be effectively used as an efficient, accurate, fast and economic model for the identification of reusable components from an existing inventory of software resources.

  3. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Li, J; Ma, X; Singh, K; Schulz, M; de Supinski, B R; McKee, S A

    2008-10-09

    With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first generating performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
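    Once per-task cost estimates are available from learned performance models, online scheduling can use them greedily. Below is a minimal longest-processing-time-first sketch; the task costs are hypothetical predictions, and this is a generic list scheduler, not the paper's extended algorithm:

```python
import heapq

def lpt_schedule(task_costs, n_workers):
    """Longest-processing-time-first list scheduling using *predicted* task
    costs: sort tasks by predicted cost (descending), then always assign the
    next task to the currently least-loaded worker."""
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(loads)       # least-loaded worker
        assignment[task] = w
        heapq.heappush(loads, (load + cost, w))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Hypothetical per-task cost predictions (e.g. from a learned performance model).
costs = {"t1": 4.0, "t2": 3.0, "t3": 3.0, "t4": 2.0, "t5": 2.0}
assignment, makespan = lpt_schedule(costs, 2)
print(makespan)  # 8.0
```

The quality of the resulting balance hinges directly on the accuracy of the predicted costs, which is why the performance models matter.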

  4. Machine Learning-Based Parameter Tuned Genetic Algorithm for Energy Minimizing Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    P. L. N. U. Cooray

    2017-01-01

    Full Text Available During the last decade, tremendous focus has been given to sustainable logistics practices to overcome the environmental concerns of business practices. Since transportation is a prominent area of logistics, a new area of literature known as Green Transportation and Green Vehicle Routing has emerged. The Vehicle Routing Problem (VRP) has been a very active area of the literature, with contributions from many researchers over the last three decades. Given the computational difficulty of solving the VRP, which is NP-hard, metaheuristics have been applied successfully to solve VRPs in the recent past. This is a threefold study. First, it critically reviews the current literature on the Energy Minimizing Vehicle Routing Problem (EMVRP) and the use of metaheuristics as a solution approach. Second, the study implements a genetic algorithm (GA) to solve the EMVRP formulation using the benchmark instances listed in the CVRPLib repository. Finally, the GA developed in Phase 2 was enhanced through machine learning techniques to tune its parameters. The study reveals that, by identifying the underlying characteristics of the data, a particular GA can be tuned significantly to outperform any generic GA with competitive computational times. The scrutiny identifies several knowledge gaps where new methodologies can be developed to solve EMVRPs and develops propositions for future research.
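    The parameter-tuning idea, running the GA under several parameter settings and selecting the best-performing one, can be sketched on a toy problem. The OneMax fitness, the operator choices and the mutation-rate grid are all illustrative; a real EMVRP fitness would score vehicle routes by energy use:

```python
import random

def run_ga(mutation_rate, generations=40, pop_size=30, n_bits=20, seed=0):
    """Tiny GA on the OneMax toy problem (maximise the number of 1-bits),
    used here only to illustrate parameter tuning."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)
    for _ in range(generations):
        # tournament selection (size 3)
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        # one-point crossover + bit-flip mutation
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i][:], parents[i + 1][:]
            cut = rng.randrange(1, n_bits)
            a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):
                for j in range(n_bits):
                    if rng.random() < mutation_rate:
                        child[j] ^= 1
                pop.append(child)
    return max(map(fitness, pop))

# "Tuning": evaluate a small grid of mutation rates and keep the best one.
grid = [0.0, 0.01, 0.05, 0.3]
best_rate = max(grid, key=run_ga)
print(best_rate)
```

A machine-learning tuner generalises this grid search by predicting good parameter settings from instance characteristics instead of trying each setting exhaustively.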

  5. Computer-assisted framework for machine-learning-based delineation of GTV regions on datasets of planning CT and PET/CT images.

    Science.gov (United States)

    Ikushima, Koujiro; Arimura, Hidetaka; Jin, Ze; Yabu-Uchi, Hidetake; Kuwazuru, Jumpei; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki

    2017-01-01

    We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the 'degree of GTV' for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for the training and testing steps of the proposed framework. The final GTV regions were determined using the OCS method, which selects a global optimum object contour based on multiple active delineations with a level set method (LSM) around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for the 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO obtained by the proposed framework were 0.763 and 0.701, respectively. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  6. Improving the Performance of Machine Learning Based Multi Attribute Face Recognition Algorithm Using Wavelet Based Image Decomposition Technique

    Directory of Open Access Journals (Sweden)

    S. Sakthivel

    2011-01-01

    Full Text Available Problem statement: Recognizing a face based on its attributes is an easy task for a human to perform; it is nearly automatic and requires little mental effort. A computer, on the other hand, has no innate ability to recognize a face or a facial feature and must be programmed with an algorithm to do so. Generally, to recognize a face, different kinds of facial features are used separately or in combination. In previous work, we developed a machine learning based multi-attribute face recognition algorithm and evaluated it with different sets of weights for each input attribute; its performance was low compared to the proposed wavelet decomposition technique. Approach: In this study, wavelet decomposition was applied as a preprocessing technique to enhance the input face images in order to reduce the loss of classification performance due to changes in facial appearance. The experiment was specifically designed to investigate the gain in robustness against illumination and facial expression changes. Results: The proposed wavelet based image decomposition technique enhanced the performance of the previously designed system by 8.54 percent. Conclusion: The proposed model was tested on face images with differences in expression and illumination, using a dataset obtained from the Olivetti Research Laboratory face image database.

  7. Ionic channel current burst analysis by a machine learning based approach.

    Science.gov (United States)

    Rauch, Giuseppe; Bertolini, Simona; Sacile, Roberto; Giacomini, Mauro; Ruggiero, Carmelina

    2011-09-01

    A new method to analyze single ionic channel current conduction is presented. It is based on automatic classification by the K-means algorithm and on the concept of information entropy. The method is used to study the conductance of multistate ion current jumps induced by tetanus toxin in planar lipid bilayers. A comparison is presented with the widely used Gaussian best-fit approach, whose main drawback is that it relies on the manual choice of the baseline and of meaningful fragments of the current signal. In contrast, the proposed method is able to automatically process a large amount of information and to remove spurious transitions and multichannel events. The number of levels and their amplitudes do not have to be known a priori. In this way the presented method produces a reliable evaluation of the conductance levels and their characteristic parameters in a short time.
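    The level-detection step can be illustrated with a one-dimensional K-means pass over recorded current samples; the toy data and the quantile-based initialisation below are assumptions made for the sketch, not the authors' implementation:

```python
def kmeans_1d(samples, k, iters=50):
    """Plain Lloyd's algorithm in one dimension: cluster current samples
    into k conductance levels and return the sorted level centres."""
    s = sorted(samples)
    # spread the initial centres over the observed range (k quantiles)
    centres = [s[(len(s) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in samples:
            nearest = min(range(k), key=lambda j: abs(x - centres[j]))
            buckets[nearest].append(x)
        centres = [sum(b) / len(b) if b else centres[i]
                   for i, b in enumerate(buckets)]
    return sorted(centres)

# hypothetical noisy samples around three conductance levels (pA)
currents = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1, 9.9, 10.0, 10.1]
levels = kmeans_1d(currents, 3)
```

    In the paper's setting the number of levels itself is also inferred, which a fixed-k sketch like this does not capture.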

  8. A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization

    Science.gov (United States)

    Jin, Haiyan; Wang, Yanyan

    2014-05-01

    This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching learning based optimization (TLBO) for visible and infrared images of complicated scenes under different spectra. First, CP decomposition is applied to every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated in the teaching phase and learner phase of TLBO, so that the weighted coefficients can be automatically adjusted according to the fitness function, namely the evaluation standards of image quality. Finally, the fusion results are obtained by the inverse CP transformation. Experimental results show that, compared with existing methods, our method is effective and the fused images are more suitable for further human visual or machine perception.

  9. Long Noncoding RNA Identification: Comparing Machine Learning Based Tools for Long Noncoding Transcripts Discrimination.

    Science.gov (United States)

    Han, Siyu; Liang, Yanchun; Li, Ying; Du, Wei

    2016-01-01

    Long noncoding RNA (lncRNA) is a kind of noncoding RNA longer than 200 nucleotides that has attracted growing interest in recent years. Many studies have confirmed that the human genome contains many thousands of lncRNAs, which exert great influence over critical regulators of cellular processes. With the advent of high-throughput sequencing technologies, a great quantity of sequences awaits analysis. Thus, many programs have been developed to distinguish coding from long noncoding transcripts. Different programs are generally designed to be utilised under different circumstances, and it is sensible and practical to select an appropriate method for a given situation. In this review, several popular methods and their advantages, disadvantages, and application scopes are summarised to assist readers in employing a suitable method and obtaining a more reliable result.
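    Many of the tools surveyed rely, among other features, on open-reading-frame statistics, since long ORFs are characteristic of coding transcripts. A hedged sketch of such a feature follows (the 300 nt cut-off is a common rule of thumb, not a value taken from the review):

```python
def longest_orf(seq):
    """Length in nucleotides of the longest open reading frame
    (ATG ... stop codon) across the three forward reading frames."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i          # open a candidate ORF
            elif start is not None and codon in stops:
                best = max(best, i + 3 - start)  # close it at the stop
                start = None
    return best

# toy discrimination rule: coding if the longest ORF exceeds ~100 codons
def looks_coding(seq, threshold=300):
    return longest_orf(seq) >= threshold
```

    Real classifiers such as those reviewed combine ORF features with sequence composition, conservation and other signals rather than a single threshold.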

  10. A global machine learning based scoring function for protein structure prediction.

    Science.gov (United States)

    Faraggi, Eshel; Kloczkowski, Andrzej

    2014-05-01

    We present a knowledge-based function to score protein decoys based on their similarity to native structure. A set of features is constructed to describe the structure and sequence of the entire protein chain. Furthermore, a qualitative relationship is established between the calculated features and the underlying electromagnetic interaction that dominates this scale. The features we use are associated with residue-residue distances, residue-solvent distances, pairwise knowledge-based potentials and a four-body potential. In addition, we introduce a new target to be predicted, the fitness score, which measures the similarity of a model to the native structure. This new approach enables us to obtain information both from decoys and from native structures. It is also devoid of previous problems associated with knowledge-based potentials. These features were obtained for a large set of native and decoy structures and a back-propagating neural network was trained to predict the fitness score. Overall this new scoring potential proved to be superior to the knowledge-based scoring functions used as its inputs. In particular, in the latest CASP (CASP10) experiment our method was ranked third for all targets, and second for freely modeled hard targets among about 200 groups for top model prediction. Ours was the only method ranked in the top three for all targets and for hard targets. This shows that initial results from the novel approach are able to capture details that were missed by a broad spectrum of protein structure prediction approaches. Source codes and executable from this work are freely available at http://mathmed.org/#Software and http://mamiris.com/.

  11. Arctic Sea Ice Thickness Estimation from CryoSat-2 Satellite Data Using Machine Learning-Based Lead Detection

    Directory of Open Access Journals (Sweden)

    Sanggyun Lee

    2016-08-01

    Full Text Available Satellite altimeters have been used to monitor Arctic sea ice thickness since the early 2000s. In order to estimate sea ice thickness from satellite altimeter data, leads (i.e., cracks between ice floes) should first be identified for the calculation of sea ice freeboard. In this study, we proposed novel approaches for lead detection using two machine learning algorithms: decision trees and random forest. CryoSat-2 satellite data collected in March and April of 2011–2014 over the Arctic region were used to extract waveform parameters that show the characteristics of leads, ice floes and ocean, including stack standard deviation, stack skewness, stack kurtosis, pulse peakiness and backscatter sigma-0. The parameters were used to identify leads in the machine learning models. Results show that the proposed approaches, with overall accuracy >90%, produced much better performance than existing lead detection methods based on simple thresholding approaches. Sea ice thickness estimated based on the machine learning-detected leads was compared to the averaged Airborne Electromagnetic (AEM) bird data collected over two days during the CryoSat Validation experiment (CryoVex) field campaign in April 2011. This comparison showed that the proposed machine learning methods had better performance (up to r = 0.83 and Root Mean Square Error (RMSE) = 0.29 m) compared to thickness estimation based on existing lead detection methods (RMSE = 0.86–0.93 m). Sea ice thickness based on the machine learning approaches showed a consistent decline from 2011 to 2013 and rebounded in 2014.

  12. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    Science.gov (United States)

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of various machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was performed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influence of parameter selection on classification performance was investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracies of 0.945 for LGG versus HGG and 0.961 for grade II, III and IV gliomas were achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. The performances of the LibSVM, SMO and IBk classifiers were also influenced by key parameters such as kernel type, c, gamma and K. SVM is a promising tool for developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  13. Effects of Semantic Features on Machine Learning-Based Drug Name Recognition Systems: Word Embeddings vs. Manually Constructed Dictionaries

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2015-12-01

    Full Text Available Semantic features are very important for machine learning-based drug name recognition (DNR) systems. The semantic features used in most DNR systems are based on drug dictionaries manually constructed by experts. Building large-scale drug dictionaries is a time-consuming task, and adding new drugs to existing dictionaries immediately after they are developed is also a challenge. In recent years, word embeddings that contain rich latent semantic information of words have been widely used to improve the performance of various natural language processing tasks. However, they have not been used in DNR systems. Compared to the semantic features based on drug dictionaries, the advantage of word embeddings lies in the fact that learning them is unsupervised. In this paper, we investigate the effect of semantic features based on word embeddings on DNR and compare them with semantic features based on three drug dictionaries. We propose a conditional random fields (CRF) based system for DNR. The skip-gram model, an unsupervised algorithm, is used to induce word embeddings on about 17.3 GigaBytes (GB) of unlabeled biomedical texts collected from MEDLINE (National Library of Medicine, Bethesda, MD, USA). The system is evaluated on the drug-drug interaction extraction (DDIExtraction 2013) corpus. Experimental results show that word embeddings significantly improve the performance of the DNR system and are competitive with semantic features based on drug dictionaries. The F-score is improved by 2.92 percentage points when word embeddings are added to the baseline system, which is comparable with the improvements from semantic features based on drug dictionaries. Furthermore, word embeddings are complementary to the semantic features based on drug dictionaries. When both word embeddings and semantic features based on drug dictionaries are added, the system achieves the best performance with an F-score of 78.37%, which outperforms the best system of the DDIExtraction 2013 challenge.

  14. Inner and outer coronary vessel wall segmentation from CCTA using an active contour model with machine learning-based 3D voxel context-aware image force

    Science.gov (United States)

    Sivalingam, Udhayaraj; Wels, Michael; Rempfler, Markus; Grosskopf, Stefan; Suehling, Michael; Menze, Bjoern H.

    2016-03-01

    In this paper, we present a fully automated approach to coronary vessel segmentation, which involves calcification or soft plaque delineation in addition to accurate lumen delineation, from 3D Cardiac Computed Tomography Angiography data. Adequately virtualizing the coronary lumen plays a crucial role for simulating blood flow by means of fluid dynamics, while additionally identifying the outer vessel wall in the case of arteriosclerosis is a prerequisite for further plaque compartment analysis. Our method is a hybrid approach complementing Active Contour Model-based segmentation with an external image force that relies on a Random Forest regression model generated off-line. The regression model provides a strong estimate of the distance to the true vessel surface for every surface candidate point, taking into account 3D wavelet-encoded contextual image features, which are aligned with the current surface hypothesis. The associated external image force is integrated in the objective function of the active contour model, such that the overall segmentation approach benefits from the advantages associated with snakes and with machine learning-based regression alike. This yields an integrated approach achieving competitive results on a publicly available benchmark data collection (Rotterdam segmentation challenge).

  15. Application of multi-stage Monte Carlo method for solving machining optimization problems

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2014-08-01

    Full Text Available Enhancing the overall machining performance implies optimization of machining processes, i.e. determination of the optimal machining parameters combination. Optimization of machining processes is an active field of research where different optimization methods are being used to determine an optimal combination of different machining parameters. In this paper, a multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes: drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. Optimization solutions obtained using the multi-stage MC method were compared with the optimization solutions of past researchers obtained using meta-heuristic optimization methods, e.g. the genetic algorithm, simulated annealing algorithm, artificial bee colony algorithm and teaching learning based optimization algorithm. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.
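    The multi-stage MC idea, sampling a search box uniformly at random and then repeatedly shrinking the box around the incumbent best, can be sketched as follows (the shrink factor, stage count and sample counts are illustrative assumptions, not the paper's settings):

```python
import random

def multistage_mc(f, bounds, stages=4, samples=200, shrink=0.5, seed=1):
    """Multi-stage Monte Carlo minimization: uniformly sample the search
    box, then re-centre and shrink the box around the best point found."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(stages):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # shrink the box around the incumbent, clipped to the original bounds
        widths = [(h - l) * shrink for l, h in zip(lo, hi)]
        lo = [max(b[0], bx - w / 2) for b, bx, w in zip(bounds, best_x, widths)]
        hi = [min(b[1], bx + w / 2) for b, bx, w in zip(bounds, best_x, widths)]
    return best_x, best_f

# toy objective standing in for a machining cost model (assumed, 2 variables)
best_x, best_f = multistage_mc(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
    [(-5, 5), (-5, 5)])
```

    In the paper, `f` would be a machining performance model (e.g. surface roughness or material removal rate under constraints) over up to four parameters.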

  16. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks

    Science.gov (United States)

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-01-01

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. 
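    A toy rendering of the two ingredients, a CRM-style composite metric and a normal-equation least-squares fit as used in the NEC family of algorithms, is sketched below; the weighting inside `crm` is an assumed form for illustration, not the paper's exact formula:

```python
import statistics

def crm(rssi, lqi, w=0.5):
    """Toy channel rank measurement: a high mean LQI is good, a high RSSI
    standard deviation (instability) is bad.  Assumed weighting."""
    return w * statistics.mean(lqi) - (1 - w) * statistics.stdev(rssi)

def fit_line(xs, ys):
    """Normal-equation least squares for y = a*x + b (the normal-equation
    idea in one dimension), solved in closed form via Cramer's rule."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

# a stable channel should rank above an unstable, lossy one
good = crm([-60, -61, -60, -61], [100, 101, 99, 100])
bad = crm([-40, -80, -55, -90], [60, 70, 50, 65])
```

    The actual NEC algorithm fits a multivariate model over waveform-quality features; the single-feature closed form above only shows the normal-equation mechanics.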

  17. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Waqas Rehan

    2016-09-01

    Full Text Available Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation.

  18. Machine-Learning Based Channel Quality and Stability Estimation for Stream-Based Multichannel Wireless Sensor Networks.

    Science.gov (United States)

    Rehan, Waqas; Fischer, Stefan; Rehan, Maaz

    2016-09-12

    Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. 

  19. Method of Dynamic Knowledge Representation and Learning Based on Fuzzy Petri Nets

    Institute of Scientific and Technical Information of China (English)

    WEI Sheng-jun; HU Chang-zhen; SUN Ming-qian

    2008-01-01

    A method of knowledge representation and learning based on fuzzy Petri nets was designed. In this way, the weight, threshold value and certainty factor parameters of the knowledge model can be adjusted dynamically. The advantages of knowledge representation based on production rules and on neural networks are integrated in this method. Like production-rule knowledge representation, this method has a clear structure and parameters with specific meanings. In addition, it has the learning and parallel reasoning abilities of neural network knowledge representation. Simulation results show that the learning algorithm converges, and that the weight, threshold value and certainty factor parameters reach the desired values after training.

  20. Finite Element Method in Machining Processes

    CERN Document Server

    Markopoulos, Angelos P

    2013-01-01

    Finite Element Method in Machining Processes provides a concise study on the way the Finite Element Method (FEM) is used in the case of manufacturing processes, primarily in machining. The basics of this kind of modeling are detailed to create a reference that will provide guidelines for those who start to study this method now, but also for scientists already involved in FEM and want to expand their research. A discussion on FEM, formulations and techniques currently in use is followed up by machining case studies. Orthogonal cutting, oblique cutting, 3D simulations for turning and milling, grinding, and state-of-the-art topics such as high speed machining and micromachining are explained with relevant examples. This is all supported by a literature review and a reference list for further study. As FEM is a key method for researchers in the manufacturing and especially in the machining sector, Finite Element Method in Machining Processes is a key reference for students studying manufacturing processes but also for researchers in the field.

  1. A learning-based method to detect and segment text from scene images

    Institute of Scientific and Technical Information of China (English)

    JIANG Ren-jie; QI Fei-hu; XU Li; WU Guo-rong; ZHU Kai-hua

    2007-01-01

    This paper proposes a learning-based method for text detection and text segmentation in natural scene images. First, the input image is decomposed into multiple connected components (CCs) by the Niblack clustering algorithm. Then all the CCs, including text CCs and non-text CCs, are verified on their text features by a two-stage classification module, where most non-text CCs are discarded by an attentional cascade classifier and the remaining CCs are further verified by an SVM. All the accepted CCs are output to produce a text-only binary image. Experiments with many images of different scenes showed satisfactory performance of the proposed method.

  2. Machine Learning Based Dimensionality Reduction Facilitates Ligand Diffusion Paths Assessment: A Case of Cytochrome P450cam.

    Science.gov (United States)

    Rydzewski, J; Nowak, W

    2016-04-12

    In this work we propose an application of a nonlinear dimensionality reduction method to represent the high-dimensional configuration space of the ligand-protein dissociation process in a manner facilitating interpretation. Rugged ligand expulsion paths are mapped into 2-dimensional space. The mapping retains the main structural changes occurring during the dissociation. The topological similarity of the reduced paths may be easily studied using the Fréchet distances, and we show that this measure facilitates machine learning classification of the diffusion pathways. Further, low-dimensional configuration space allows for identification of residues active in transport during the ligand diffusion from a protein. The utility of this approach is illustrated by examination of the configuration space of cytochrome P450cam involved in expulsing camphor by means of enhanced all-atom molecular dynamics simulations. The expulsion trajectories are sampled and constructed on-the-fly during molecular dynamics simulations using the recently developed memetic algorithms [Rydzewski, J.; Nowak, W. J. Chem. Phys. 2015, 143(12), 124101]. We show that the memetic algorithms are effective for enforcing the ligand diffusion and cavity exploration in the P450cam-camphor complex. Furthermore, we demonstrate that machine learning techniques are helpful in inspecting ligand diffusion landscapes and provide useful tools to examine structural changes accompanying rare events.
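    The discrete Fréchet distance used to compare the reduced dissociation paths can be computed with the standard dynamic programme; the sketch below works on 2D point lists for brevity, whereas the paper's paths live in the reduced configuration space:

```python
def frechet(p, q):
    """Discrete Fréchet distance between two polygonal paths, each a list
    of (x, y) points, via the classic dynamic-programming recurrence."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            dij = d(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = dij
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], dij)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], dij)
            else:
                # best of advancing along p, along q, or along both
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1],
                                   ca[i][j - 1]), dij)
    return ca[-1][-1]
```

    A matrix of pairwise Fréchet distances over a set of reduced paths can then feed any standard distance-based classifier or clustering step.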

  3. Machine-learning based comparison of CT-perfusion maps and dual energy CT for pancreatic tumor detection

    Science.gov (United States)

    Goetz, Michael; Skornitzke, Stephan; Weber, Christian; Fritz, Franziska; Mayer, Philipp; Koell, Marco; Stiller, Wolfram; Maier-Hein, Klaus H.

    2016-03-01

    Perfusion CT is well-suited for the diagnosis of pancreatic tumors but tends to be associated with a high radiation exposure. Dual-energy CT (DECT) might be an alternative to perfusion CT, offering correlating contrasts while being acquired at lower radiation doses. While previous studies compared intensities of dual-energy iodine maps and CT perfusion maps, no study has assessed the combined discriminative power of all information that can be generated from an acquisition of both functional imaging methods. We therefore propose the use of a machine learning algorithm for assessing the amount of information that becomes available by the combination of multiple images. For this, we train a classifier on both imaging methods, using a new approach that allows us to train from only small regions of interest (ROIs). This makes our study comparable to other ROI-based analyses and still allows comparing the ability of both classifiers to discriminate between healthy and tumorous tissue. We were able to train classifiers that yield DICE scores over 80% with both imaging methods. This indicates that dual-energy iodine maps might be used for the diagnosis of pancreatic tumors instead of perfusion CT, although the detection rate is lower. We also present tumor risk maps that visualize possible tumorous areas in an intuitive way and can be used during diagnosis as an additional information source.

  4. Machine Learning-Based Content Analysis: Automating the analysis of frames and agendas in political communication research

    NARCIS (Netherlands)

    Burscher, B.

    2016-01-01

    We used machine learning to study policy issues and frames in political messages. With regard to frames, we investigated the automation of two content-analytical tasks: frame coding and frame identification. We found that both tasks can be successfully automated by means of machine learning techniques.

  6. Rotating electrical machines part 4: methods for determining synchronous machine quantities from tests

    CERN Document Server

    International Electrotechnical Commission. Geneva

    1985-01-01

    Applies to three-phase synchronous machines of 1 kVA rating and larger with rated frequency of not more than 400 Hz and not less than 15 Hz. An appendix gives unconfirmed test methods for determining synchronous machine quantities. Notes: 1 -Tests are not applicable to synchronous machines such as permanent magnet field machines, inductor type machines, etc. 2 -They also apply to brushless machines, but certain variations exist and special precautions should be taken.

  7. A Novel Global Path Planning Method for Mobile Robots Based on Teaching-Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Zongsheng Wu

    2016-07-01

    Full Text Available The Teaching-Learning-Based Optimization (TLBO) algorithm has been proposed in recent years. It is a new swarm intelligence optimization algorithm simulating the teaching-learning phenomenon of a classroom. In this paper, a novel global path planning method for mobile robots is presented, based on an improved TLBO algorithm called Nonlinear Inertia Weighted Teaching-Learning-Based Optimization (NIWTLBO), introduced in our previous work. First, the NIWTLBO algorithm is introduced. Then, a new map model of the path between the start point and goal point is built by coordinate system transformation. Lastly, the objective function of the path is optimized using the NIWTLBO algorithm, and a global optimal path is obtained. Simulation results show that the proposed method has a faster convergence rate and higher accuracy in searching for the path than the basic TLBO and some other algorithms, and that it can effectively solve the optimization problem of mobile robot global path planning.
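    As a rough illustration of how a TLBO-style teacher/learner loop optimizes an objective, here is a minimal NumPy sketch of the basic algorithm (not the NIWTLBO variant) applied to a toy sphere function standing in for a path cost; population size, bounds, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # toy objective standing in for the path-cost function
    return np.sum(x**2, axis=-1)

def tlbo(f, n_pop=20, dim=2, iters=100, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (n_pop, dim))
    for _ in range(iters):
        fit = f(X)
        teacher = X[np.argmin(fit)]
        # Teacher phase: move the class toward the teacher, away from the mean.
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xnew = X + rng.random((n_pop, dim)) * (teacher - TF * X.mean(axis=0))
        Xnew = np.clip(Xnew, lo, hi)
        better = f(Xnew) < fit
        X[better] = Xnew[better]
        # Learner phase: each learner moves relative to a random peer.
        fit = f(X)
        j = rng.permutation(n_pop)
        step = np.where((fit < fit[j])[:, None], X - X[j], X[j] - X)
        Xnew = np.clip(X + rng.random((n_pop, dim)) * step, lo, hi)
        better = f(Xnew) < fit
        X[better] = Xnew[better]
    return X[np.argmin(f(X))]

best = tlbo(sphere)
```

    Both phases keep a candidate only if it improves the objective, which is what gives TLBO its greedy convergence behavior.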

  8. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer.

    Science.gov (United States)

    Wang, Jing; Wu, Chen-Jiang; Bao, Mei-Ling; Zhang, Jing; Wang, Xiao-Ning; Zhang, Yu-Dong

    2017-04-03

    To investigate whether machine learning-based analysis of MR radiomics can help improve the performance of PI-RADS v2 in clinically relevant prostate cancer (PCa). This IRB-approved study included 54 patients with PCa undergoing multi-parametric (mp) MRI before prostatectomy. Imaging analysis was performed on 54 tumours, 47 normal peripheral zones (PZ) and 48 normal transitional zones (TZ) based on histological-radiological correlation. Mp-MRI was scored via PI-RADS and quantified by measuring radiomic features. A predictive model was developed using a novel support vector machine trained with: (i) radiomics, (ii) PI-RADS scores, (iii) radiomics and PI-RADS scores. Paired comparison was made via ROC analysis. For PCa versus normal TZ, the model trained with radiomics had a significantly higher area under the ROC curve (Az) (0.955 [95% CI 0.923-0.976]) than PI-RADS (Az: 0.878 [0.834-0.914], p Machine learning analysis of MR radiomics can help improve the performance of PI-RADS in clinically relevant PCa. • Machine-based analysis of MR radiomics outperformed PI-RADS in TZ cancer. • Adding MR radiomics significantly improved the performance of PI-RADS. • DKI-derived Dapp and Kapp were two strong markers for the diagnosis of PCa.

  9. Method for machining steel with diamond tools

    Science.gov (United States)

    Casstevens, John M.

    1986-01-01

    The present invention is directed to a method for machining optical quality inishes and contour accuracies of workpieces of carbon-containing metals such as steel with diamond tooling. The wear rate of the diamond tooling is significantly reduced by saturating the atmosphere at the interface of the workpiece and the diamond tool with a gaseous hydrocarbon during the machining operation. The presence of the gaseous hydrocarbon effectively eliminates the deterioration of the diamond tool by inhibiting or preventing the conversion of the diamond carbon to graphite carbon at the point of contact between the cutting tool and the workpiece.

  10. MULTIPHYSICAL ANALYSIS METHODS OF TRANSPORT MACHINES

    Directory of Open Access Journals (Sweden)

    L. Avtonomova

    2009-01-01

    Full Text Available A complex of theoretical, computational, and applied questions concerning transport machine elements is studied. Coupled-field analyses are useful for solving problems where the coupled interaction of phenomena from various disciplines of physical science is significant. There are basically three methods of coupling, distinguished by the finite element formulation techniques used to develop the matrix equations.

  11. Parallelization of the ROOT Machine Learning Methods

    CERN Document Server

    Vakilipourtakalou, Pourya

    2016-01-01

    Today, computation is an inseparable part of scientific research, especially in particle physics, where there are classification problems such as the discrimination of signals from backgrounds originating from particle collisions. Monte Carlo simulations can be used to generate known data sets of signals and backgrounds based on theoretical physics. The aim of machine learning is to train algorithms on a known data set and then apply the trained algorithms to unknown data sets. The most common framework for data analysis in particle physics is ROOT; in order to use machine learning methods, the Toolkit for Multivariate Data Analysis (TMVA) has been added to it. The main focus of this report is the parallelization of some TMVA methods, especially cross-validation and BDT.

  12. Machine-Learning-Based Analysis in Genome-Edited Cells Reveals the Efficiency of Clathrin-Mediated Endocytosis

    Directory of Open Access Journals (Sweden)

    Sun Hae Hong

    2015-09-01

    Full Text Available Cells internalize various molecules through clathrin-mediated endocytosis (CME). Previous live-cell imaging studies suggested that CME is inefficient, with about half of the events terminated. These CME efficiency estimates may have been confounded by overexpression of fluorescently tagged proteins and the inability to filter out false CME sites. Here, we employed genome editing and machine learning to identify and analyze authentic CME sites. We examined CME dynamics in cells that express fluorescent fusions of two defining CME proteins, AP2 and clathrin. Support vector machine classifiers were built to identify and analyze authentic CME sites. From inception until disappearance, authentic CME sites contain both AP2 and clathrin, have the same degree of limited mobility, continue to accumulate AP2 and clathrin over lifetimes >∼20 s, and almost always form vesicles as assessed by dynamin2 recruitment. Sites that contain only clathrin or AP2 show distinct dynamics, suggesting they are not part of the CME pathway.

  13. Study of on-machine error identification and compensation methods for micro machine tools

    Science.gov (United States)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-08-01

    Micro machining plays an important role in the manufacture of miniature products made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installing the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very small, the measurement method should be non-contact. By integrating image reconstruction, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
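    As an illustration of the edge-extraction step, the following NumPy sketch uses a simple Sobel gradient-magnitude detector as a stand-in for the Canny algorithm used in the study; the synthetic image and threshold are hypothetical:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Gradient-magnitude edge map: a simplified stand-in for Canny detection."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(3):            # correlate with the two Sobel kernels
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()   # keep the strongest gradients as edges

# Synthetic "machined contour": a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = sobel_edges(img)
```

    A full Canny pipeline would add Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this gradient step.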

  14. PACE: Probabilistic Assessment for Contributor Estimation- A machine learning-based assessment of the number of contributors in DNA mixtures.

    Science.gov (United States)

    Marciano, Michael A; Adelman, Jonathan D

    2017-03-01

    The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. 

  15. Local Search Method for a Parallel Machine Scheduling Problem of Minimizing the Number of Machines Operated

    Science.gov (United States)

    Yamana, Takashi; Iima, Hitoshi; Sannomiya, Nobuo

    Although there have been many studies on parallel machine scheduling problems, the number of machines operated is fixed in these studies. It is desirable to generate a schedule with fewer machines operated from the viewpoint of the operation cost of machines. In this paper, we cope with a problem of minimizing the number of parallel machines subject to the constraint that the total tardiness is not greater than the value given in advance. For this problem, we introduce a local search method in which the number of machines operated is changed efficiently and appropriately in a short time as well as reducing the total tardiness.

  16. Fast and scalable prediction of local energy at grain boundaries: machine-learning based modeling of first-principles calculations

    Science.gov (United States)

    Tamura, Tomoyuki; Karasuyama, Masayuki; Kobayashi, Ryo; Arakawa, Ryuichi; Shiihara, Yoshinori; Takeuchi, Ichiro

    2017-10-01

    We propose a new scheme based on machine learning for the efficient screening in grain-boundary (GB) engineering. A set of results obtained from first-principles calculations based on density functional theory (DFT) for a small number of GB systems is used as a training data set. In our scheme, by partitioning the total energy into atomic energies using a local-energy analysis scheme, we can increase the training data set significantly. We use atomic radial distribution functions and additional structural features as atom descriptors to predict atomic energies and GB energies simultaneously using the least absolute shrinkage and selection operator, which is a recent standard regression technique in statistical machine learning. In the test study with fcc-Al [110] symmetric tilt GBs, we could achieve enough predictive accuracy to understand energy changes at and near GBs at a glance, even if we collected training data from only 10 GB systems. The present scheme can emulate time-consuming DFT calculations for large GB systems with negligible computational costs, and thus enable the fast screening of possible alternative GB systems.
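    The least absolute shrinkage and selection operator (lasso) regression named above can be sketched with a small iterative soft-thresholding (ISTA) solver; the descriptors and targets below are synthetic stand-ins for the atomic features and energies:

```python
import numpy as np

def lasso_ista(X, y, alpha=0.1, lr=None, iters=2000):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by iterative soft-thresholding."""
    n, d = X.shape
    if lr is None:
        lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)  # soft-threshold
    return w

# Toy descriptors: only the first two of five features carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=200)
w = lasso_ista(X, y, alpha=0.05)
```

    The L1 penalty drives the uninformative coefficients to exactly zero, which is the feature-selection property that makes lasso attractive for screening large descriptor sets.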

  17. Multilevel Cognitive Machine-Learning-Based Concept for Artificial Awareness: Application to Humanoid Robot Awareness Using Visual Saliency

    Directory of Open Access Journals (Sweden)

    Kurosh Madani

    2012-01-01

    Full Text Available As part of “intelligence,” “awareness” is the state or ability to perceive, feel, or be mindful of events, objects, or sensory patterns: in other words, to be conscious of the surrounding environment and its interactions. Inspired by early-age human skill development, and especially by early-age awareness maturation, the present paper approaches robot intelligence from a different angle, directing attention to combining both “cognitive” and “perceptual” abilities. Within this approach, the machine (robot) intelligence is constructed on the basis of a multilevel cognitive concept attempting to handle complex artificial behaviors. The intended complex behavior is the autonomous discovery of objects by a robot exploring an unknown environment: in other words, providing the robot autonomy and awareness in and about an unknown backdrop.

  18. Numerical analysis method for linear induction machines.

    Science.gov (United States)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
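    The final step described, combining the induced-voltage coefficients with the mesh resistances and solving the resulting simultaneous equations for the currents, amounts to one linear solve; the coefficient values below are hypothetical:

```python
import numpy as np

# Hypothetical 3-point mesh: M[i, j] is the voltage induced at mesh point i
# by unit current at mesh point j; R holds the mesh resistances on its diagonal.
M = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
R = np.diag([1.0, 1.5, 2.0])
v_applied = np.array([1.0, 0.5, 0.0])   # driving voltages at each mesh point

# (R + M) @ I = v  =>  solve the simultaneous equations for the unknown currents.
A = R + M
I = np.linalg.solve(A, v_applied)
```

    In the actual method the coefficient matrix would be dense and computed from the machine geometry and conductor velocity, but the solve step is the same.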

  19. Machine learning-based receiver operating characteristic (ROC) curves for crisp and fuzzy classification of DNA microarrays in cancer research.

    Science.gov (United States)

    Peterson, Leif E; Coleman, Matthew A

    2008-01-01

    Receiver operating characteristic (ROC) curves were generated to obtain classification area under the curve (AUC) as a function of feature standardization, fuzzification, and sample size from nine large sets of cancer-related DNA microarrays. Classifiers used included k nearest neighbor (kNN), naïve Bayes classifier (NBC), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), learning vector quantization (LVQ1), logistic regression (LOG), polytomous logistic regression (PLOG), artificial neural networks (ANN), particle swarm optimization (PSO), constricted particle swarm optimization (CPSO), kernel regression (RBF), radial basis function networks (RBFN), gradient descent support vector machines (SVMGD), and least squares support vector machines (SVMLS). For each data set, AUC was determined for a number of combinations of sample size, total sum[-log(p)] of feature t-tests, with and without feature standardization and with (fuzzy) and without (crisp) fuzzification of features. Altogether, a total of 2,123,530 classification runs were made. At the greatest level of sample size, ANN resulted in a fitted AUC of 90%, while PSO resulted in the lowest fitted AUC of 72.1%. AUC values derived from 4NN were the most dependent on sample size, while PSO was the least. ANN depended the most on total statistical significance of features used based on sum[-log(p)], whereas PSO was the least dependent. Standardization of features increased AUC by 8.1% for PSO and -0.2% for QDA, while fuzzification increased AUC by 9.4% for PSO and reduced AUC by 3.8% for QDA. AUC determination in planned microarray experiments without standardization and fuzzification of features will benefit the most if CPSO is used for lower levels of feature significance (i.e., sum[-log(p)] ~ 50) and ANN is used for greater levels of significance (i.e., sum[-log(p)] ~ 500). When only standardization of features is performed, studies are likely to benefit most by using CPSO for low levels
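    The AUC values discussed above come from ROC analysis; the area under the ROC curve can be computed from classifier scores via the rank-sum (Mann-Whitney) identity, as in this small NumPy sketch with toy scores:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 1/2.
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
perfect = auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], labels)   # 1.0: perfect ranking
chance  = auc([0.5, 0.5, 0.5, 0.5, 0.5, 0.5], labels)   # 0.5: chance level
```

    An AUC of 0.9 therefore means a randomly chosen positive sample outranks a randomly chosen negative one 90% of the time.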

  20. Discrimination of Brazilian propolis according to the seasoning using chemometrics and machine learning based on UV-Vis scanning data.

    Science.gov (United States)

    Tomazzoli, Maíra Maciel; Pai Neto, Remi Dal; Moresco, Rodolfo; Westphal, Larissa; Zeggio, Amélia Regina Somensi; Specht, Leandro; Costa, Christopher; Rocha, Miguel; Maraschin, Marcelo

    2015-10-21

    Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with added salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified for the donor plants' resins, but a big challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of that raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unraveling the harvest-season effect on the propolis chemical profile is an issue of recognized importance. For that, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. To this end, UV-visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset, aiming to gain insights into the seasonality effect on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles associated with chemometric analysis allowed identifying a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides the biological activities of those
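    A PCA-based discrimination like the one described (performed in R in the study) can be sketched in a few lines of NumPy; the two "seasonal" groups below are simulated spectra with shifted absorbance peaks, not the propolis data:

```python
import numpy as np

def pca(X, k=2):
    """Principal components via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T               # sample coordinates on the first k PCs
    explained = S[:k] ** 2 / np.sum(S ** 2)
    return scores, explained

# Synthetic "UV-Vis spectra": two seasonal groups with shifted absorbance peaks.
rng = np.random.default_rng(2)
wl = np.linspace(280, 400, 60)                       # nm, phenolic fingerprint
summer = np.exp(-((wl - 320) / 15) ** 2) + 0.02 * rng.normal(size=(20, 60))
winter = np.exp(-((wl - 350) / 15) ** 2) + 0.02 * rng.normal(size=(20, 60))
scores, explained = pca(np.vstack([summer, winter]), k=2)
```

    When the groups really differ, they separate along the first component, which is what a PCA scores plot of seasonal samples would show.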

  1. Method and apparatus for monitoring machine performance

    Science.gov (United States)

    Smith, Stephen F.; Castleberry, Kimberly N.

    1996-01-01

    Machine operating conditions can be monitored by analyzing, in either the time or frequency domain, the spectral components of the motor current. Changes in the electric background noise, induced by mechanical variations in the machine, are correlated to changes in the operating parameters of the machine.
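    The idea of correlating spectral components of the motor current with operating conditions can be sketched with a simple FFT; the sampling rate, mains frequency, and fault sideband below are hypothetical:

```python
import numpy as np

fs = 1000.0                            # sampling rate, Hz (hypothetical sensor)
t = np.arange(0, 1.0, 1.0 / fs)
# Motor current: 60 Hz mains component plus a 180 Hz fault-induced component.
current = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)

spectrum = np.abs(np.fft.rfft(current)) / len(t)   # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]           # two strongest spectral lines
```

    Monitoring then reduces to tracking how such spectral lines appear, shift, or grow as the machine's mechanical condition changes.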

  2. Multitask Learning-Based Security Event Forecast Methods for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hui He

    2016-01-01

    Full Text Available Wireless sensor networks have strong dynamics and uncertainty, including network topology changes, node disappearance or addition, and exposure to various threats. First, to strengthen the adaptability of wireless sensor networks in detecting various security attacks, a region-similarity multitask-based security event forecast method for wireless sensor networks is proposed. This method performs topology partitioning on a large-scale sensor network and calculates the similarity degree among regional subnetworks. The trend of unknown network security events can be predicted through multitask learning of the occurrence and transmission characteristics of known network security events. Second, in the case of missing regional data, the quantitative trend of unknown regional network security events can be calculated. This study introduces a sensor network security event forecast method named Prediction Network Security Incomplete Unmarked Data (PNSIUD) to forecast missing attack data in the target region according to the known partial data in similar regions. Experimental results indicate that for unknown security event forecasts, the accuracy and effectiveness of the similarity forecast algorithm are better than those of the single-task learning method. At the same time, the forecast accuracy of the PNSIUD method is better than that of the traditional support vector machine method.

  3. Computerization of Hungarian reforestation manual with machine learning methods

    Science.gov (United States)

    Czimber, Kornél; Gálos, Borbála; Mátyás, Csaba; Bidló, András; Gribovszki, Zoltán

    2017-04-01

    Hungarian forests are highly sensitive to the changing climate, especially to the available precipitation amount. Over the past two decades, drought damage was repeatedly observed in tree species at the lower xeric limit of their distribution. From year to year, the affected forest stands become more difficult to reforest with the same native species, because these species are not able to adapt to the increasing probability of droughts. The climate-related parameter set of the Hungarian forest stand database needs updating: air humidity, which was formerly used to define the forest climate zones, is no longer measured, and its value based on climate model outputs is highly uncertain. The aim was to develop a novel computerized and objective method to describe the species-specific climate conditions essential for the survival, growth, and optimal production of forest ecosystems. The method is expected to project the species' spatial distribution until 2100 on the basis of regional climate model simulations. Until now, Hungarian forest managers have been using a carefully edited spreadsheet for reforestation purposes; applying binding regulations, this spreadsheet prescribes the stand-forming and admixed tree species and their expected growth rate for each forest site type. We present a new machine learning based method to replace the former spreadsheet. We considered various methods, such as maximum likelihood, Bayesian networks, and fuzzy logic. The method calculates distributions and sets up classifications, which can be validated and modified by experts if necessary. Projected climate change conditions make it necessary to include in this system an additional climate zone that does not currently exist in our region, as well as new options for potential tree species. In addition to, or instead of, the existing parameters, the influence of further limiting parameters (climatic extremes, soil water retention) is also investigated. Results will be

  4. A confidence metric for using neurobiological feedback in actor-critic reinforcement learning based brain-machine interfaces

    Directory of Open Access Journals (Sweden)

    Noeline Wilhelmina Prins

    2014-05-01

    Full Text Available Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration that increases the set-up time, and external inputs for decoder training that may be difficult to produce in paralyzed individuals. Both factors have presented challenges in transitioning the technology from research environments to activities of daily living (ADL). For BMIs to be seamlessly used in ADL, these issues should be handled with minimal external input, thus reducing the need for a technician or caregiver to calibrate the system. Reinforcement Learning (RL) based BMIs are a good tool when there is no external training signal and can provide an adaptive modality for training BMI decoders. However, RL based BMIs are sensitive to the feedback provided to adapt the BMI. In actor-critic BMIs, this feedback is provided by the critic, and the overall system performance is limited by the critic's accuracy. In this work, we developed an adaptive BMI that can handle inaccuracies in the critic feedback, in an effort to produce more accurate RL based BMIs. We developed a confidence measure that indicates how appropriate the feedback is for updating the decoding parameters of the actor. The results show that with the new update formulation, the critic accuracy is no longer a limiting factor for the overall performance. We tested and validated the system on three different data sets: synthetic data generated by an Izhikevich neural spiking model, synthetic data with a Gaussian noise distribution, and data collected from a non-human primate engaged in a reaching task. In all cases, the system with the critic confidence built in outperformed the system without it. The results of this study suggest the potential of the technique for developing an autonomous BMI that does not need an external signal for training or extensive calibration.

  5. Machine Learning-Based Classification of 38 Years of Spine-Related Literature Into 100 Research Topics.

    Science.gov (United States)

    Sing, David C; Metz, Lionel N; Dudli, Stefan

    2017-06-01

    Retrospective review. To identify the top 100 spine research topics. Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends. Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging in a topic. Topics were named using the five most frequently appearing terms within that topic. Significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression. From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. Top two most published topics included "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). Top two hot trends included "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P topics were ESJ-"operative, surgery, postoperative, underwent, preoperative"; SpineJ-"clinical, surgeons, guidelines, information, care"; Spine-"pain, back, low, treatment, chronic"; JNS- "tumor, lesions, rare, present, diagnosis"; JSDT-"cervical, anterior, plate, fusion, ACDF." Topics discovered through latent Dirichlet allocation modeling represent unbiased meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics. N A.
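    Latent Dirichlet allocation, as used above, infers per-document topic mixtures from word counts; a minimal collapsed Gibbs sampler sketch on a toy "corpus" of word-index lists (not the spine abstracts) might look like this:

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for latent Dirichlet allocation."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))       # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))         # topic-word counts
    nk = np.zeros(n_topics)                     # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):              # random initial assignments
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                     # remove token, resample its topic
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk / ndk.sum(axis=1, keepdims=True)  # per-document topic mixtures

# Toy corpus: words 0-2 co-occur in half the "abstracts", words 3-5 in the rest.
docs = [[0, 1, 2, 0, 1, 2]] * 5 + [[3, 4, 5, 3, 4, 5]] * 5
theta = lda_gibbs(docs, n_topics=2, n_vocab=6)
```

    Each abstract ends up with a probability over topics, which is exactly how the study assigns 25,805 abstracts to 100 discovered topics.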

  6. Machine learning based Uyghur language text categorization

    Institute of Scientific and Technical Information of China (English)

    阿力木江·艾沙; 吐尔根·依布拉音; 艾山·吾买尔; 马尔哈巴·艾力

    2012-01-01

    With the rapid increase of Uyghur language text information on the Internet, Uyghur language text categorization has become a key technique for processing and organizing these text data. Given the high dimensionality of Uyghur language texts under the vector space model (VSM) representation, stemming is combined with information gain (IG) to reduce the dimensionality. Categorization experiments are performed on a Uyghur language text corpus using machine learning based text categorization algorithms such as Naive Bayes and kNN, and the experimental results are analyzed.
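    A multinomial Naive Bayes classifier over term-count vectors, one of the algorithms used above, can be sketched as follows; the tiny count matrix is illustrative, not Uyghur corpus data:

```python
import numpy as np

class MultinomialNB:
    """Minimal multinomial Naive Bayes over term-count vectors."""
    def fit(self, X, y, smooth=1.0):
        self.classes = np.unique(y)
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        # Laplace-smoothed per-class word distributions.
        counts = np.array([X[y == c].sum(axis=0) + smooth for c in self.classes])
        self.loglike = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self
    def predict(self, X):
        scores = X @ self.loglike.T + self.logprior
        return self.classes[np.argmax(scores, axis=1)]

# Toy term-count matrix: two classes with distinct vocabularies.
X = np.array([[3, 1, 0, 0], [4, 0, 1, 0], [0, 1, 3, 2], [0, 0, 4, 1]])
y = np.array([0, 0, 1, 1])
clf = MultinomialNB().fit(X, y)
pred = clf.predict(np.array([[2, 1, 0, 0], [0, 0, 2, 2]]))
```

    In practice the columns would be the stemmed, IG-selected terms described in the abstract rather than a raw vocabulary.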

  7. Machine learning methods for metabolic pathway prediction

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2010-01-01

    Full Text Available Abstract Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations.

  8. Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines

    Directory of Open Access Journals (Sweden)

    Jing-Kui Zhang

    2016-03-01

Full Text Available The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply the extreme learning machine, one of the recently developed machine learning techniques, as the classification model for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures.
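As a sketch of the classification stage, the following NumPy snippet shows the core of an extreme learning machine: a random, fixed hidden layer whose output weights are solved in closed form by least squares. In the paper's setting each input row would hold wavelet-decomposition features of one IE signal; here the feature vectors and class structure are synthetic assumptions:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Extreme learning machine: the hidden layer is random and fixed;
    only the output weights are learned, in closed form by least squares.
    Y is a one-hot class matrix (one column per condition class)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because training reduces to a single pseudo-inverse, ELMs train orders of magnitude faster than iteratively trained networks, which is part of their appeal for this kind of signal classification.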

  9. Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines.

    Science.gov (United States)

    Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi

    2016-03-26

    The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures.

  10. Performance of machine learning methods for classification tasks

    OpenAIRE

    B. Krithika; Dr. V. Ramalingam; Rajan, K

    2013-01-01

In this paper, the performance of various machine learning methods on pattern classification and recognition tasks is presented. The proposed evaluation of performance is based on the feature representation, feature selection and the setting of model parameters. The nature of the data and the methods of feature extraction and feature representation are discussed. The results of the machine learning algorithms on the classification task are analysed. The performance of Machine Learning meth...

  11. Machine Learning Based Malware Detection

    Science.gov (United States)

    2015-05-18

algorithms analyze records designated for training to generate a mathematical model that maps the relationship of file features and labels. That... Microsoft Windows: Windows Vista Enterprise, Windows 7 Professional, Windows Server 2008 R2 Standard, Windows 8.1 Professional. Additionally, we...

  12. Methods and systems for micro machines

    Science.gov (United States)

    Stalford, Harold L.

    2017-04-11

A micro machine may be within or below the micrometer domain. The micro machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft is operable to be driven by the micro actuator. A tool is coupled to the micro shaft and is operable to perform work in response to at least motion of the micro shaft.

  13. Method of fabricating a micro machine

    Science.gov (United States)

    Stalford, Harold L

    2014-11-11

A micro machine may be within or below the micrometer domain. The micro machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft is operable to be driven by the micro actuator. A tool is coupled to the micro shaft and is operable to perform work in response to at least motion of the micro shaft.

  14. A Method for Design of Modular Reconfigurable Machine Tools

    Directory of Open Access Journals (Sweden)

    Zhengyi Xu

    2017-02-01

Full Text Available Presented in this paper is a method for the design of modular reconfigurable machine tools (MRMTs). An MRMT is capable of using a minimal number of modules through reconfiguration to perform the required machining tasks for a family of parts. The proposed method consists of three steps: module identification, module determination, and layout synthesis. In the first step, the module components are collected from a family of general-purpose machines to establish a module library. In the second step, for a given family of parts to be machined, a set of needed modules are selected from the module library to construct a desired reconfigurable machine tool. In the third step, a final machine layout is decided through evaluation by considering a number of performance indices. Based on this method, a software package has been developed that can design an MRMT for a given part family.

  15. Machine learning methods for nanolaser characterization

    CERN Document Server

    Zibar, Darko; Winther, Ole; Moerk, Jesper; Schaeffer, Christian

    2016-01-01

Nanocavity lasers, which are an integral part of an on-chip integrated photonic network, are setting stringent requirements on the sensitivity of the techniques used to characterize the laser performance. Current characterization tools cannot provide detailed knowledge about nanolaser noise and dynamics. In this progress article, we present tools and concepts from Bayesian machine learning and digital coherent detection that offer novel approaches for highly sensitive laser noise characterization and inference of laser dynamics. The goal of the paper is to trigger new research directions that combine the fields of machine learning and nanophotonics for characterizing nanolasers and eventually integrated photonic networks.

  16. Method of change management based on dynamic machining error propagation

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

In multistage machining processes (MMPs), the final quality of a part is influenced by a series of machining processes, which are complexly correlated. So it is necessary to research the rule of machining error propagation to ensure the machining quality. For this issue, a change management method of quality control nodes (i.e., QC-nodes) for machining error propagation is proposed. A new framework of QC-nodes is proposed including association analysis of quality attributes, quality closed-loop control, error tracing and error coordination optimization. And the weighted directed network is introduced to describe and analyze the correlativity among the machining processes. In order to establish the dynamic machining error propagation network (D-MEPN), QC-nodes are defined as the network nodes, and the correlation among the QC-nodes is mapped onto the network. Based on the network analysis, the dynamic characteristics of machining error propagation are explored. An adaptive control method based on the stability theory is introduced for error coordination optimization. At last, a simple example is used to verify the proposed method.

  17. Sensor fusion method for machine performance enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Mou, J.I. [Arizona State Univ., Tempe, AZ (United States); King, C.; Hillaire, R. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems Center; Jones, S.; Furness, R. [Ford Motor Co., Dearborn, MI (United States)

    1998-03-01

    A sensor fusion methodology was developed to uniquely integrate pre-process, process-intermittent, and post-process measurement and analysis technology to cost-effectively enhance the accuracy and capability of computer-controlled manufacturing equipment. Empirical models and computational algorithms were also developed to model, assess, and then enhance the machine performance.

  18. SUPPORT VECTOR MACHINE METHOD FOR PREDICTING INVESTMENT MEASURES

    Directory of Open Access Journals (Sweden)

    Olga V. Kitova

    2016-01-01

Full Text Available The possibilities of applying an intelligent machine learning technique based on support vectors to predicting investment measures are considered in the article. The advantages of the support vector method over traditional econometric techniques for improving forecast quality are described. Computer modeling results of tuning support vector machine models, developed in the Python programming language, for predicting some investment measures are shown.
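The abstract does not list the authors' models, so as a generic illustration of support-vector-style regression in Python, here is a minimal epsilon-insensitive linear SVR trained by subgradient descent. The training loop, data and hyperparameters are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def linear_svr(X, y, eps=0.1, lam=1e-4, lr=0.01, epochs=20000):
    """Linear epsilon-insensitive SVR trained by subgradient descent:
    residuals inside the eps-tube contribute no loss and no gradient."""
    Xb = np.c_[X, np.ones(len(X))]                 # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        r = Xb @ w - y                             # residuals
        outside = np.abs(r) > eps                  # points outside the tube
        grad = lam * w + Xb.T @ (np.sign(r) * outside) / len(y)
        w -= lr * grad
    return w

def svr_predict(X, w):
    return np.c_[X, np.ones(len(X))] @ w
```

The eps-tube is what distinguishes support vector regression from ordinary least squares: small forecast errors are ignored entirely, which tends to make the fit robust to noise in economic time series.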

  19. Method of change management based on dynamic machining error propagation

    Institute of Scientific and Technical Information of China (English)

    FENG Jia; JIANG PingYu

    2009-01-01

In multistage machining processes (MMPs), the final quality of a part is influenced by a series of machining processes, which are complexly correlated. So it is necessary to research the rule of machining error propagation to ensure the machining quality. For this issue, a change management method of quality control nodes (i.e., QC-nodes) for machining error propagation is proposed. A new framework of QC-nodes is proposed including association analysis of quality attributes, quality closed-loop control, error tracing and error coordination optimization. And the weighted directed network is introduced to describe and analyze the correlativity among the machining processes. In order to establish the dynamic machining error propagation network (D-MEPN), QC-nodes are defined as the network nodes, and the correlation among the QC-nodes is mapped onto the network. Based on the network analysis, the dynamic characteristics of machining error propagation are explored. An adaptive control method based on the stability theory is introduced for error coordination optimization. At last, a simple example is used to verify the proposed method.
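The idea of mapping QC-nodes onto a weighted directed network and accumulating error along it can be sketched in a few lines of Python. The additive propagation rule below (total error at a node = its locally induced error plus the weighted totals of its upstream nodes) is an illustrative assumption, not the paper's exact D-MEPN model:

```python
def propagate_error(nodes, edges, local_error):
    """Accumulate machining error along a weighted directed acyclic
    network of QC-nodes. `nodes` must be listed in process (topological)
    order; `edges` maps (upstream, downstream) pairs to propagation
    weights; `local_error` gives each process step's own contribution."""
    preds = {n: [] for n in nodes}
    for (src, dst), w in edges.items():
        preds[dst].append((src, w))
    total = {}
    for n in nodes:
        total[n] = local_error[n] + sum(w * total[s] for s, w in preds[n])
    return total
```

For a three-step chain A -> B -> C with weights 0.8 and 0.5, an error of 1.0 introduced at A contributes 0.8 to B and 0.4 to C, on top of each node's local error, which is the kind of downstream amplification or damping the network analysis is meant to expose.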

  20. Machine Self-Teaching Methods for Parameter Optimization.

    Science.gov (United States)

    1986-12-01

NOSC Technical Document 1039, December 1986: Machine Self-Teaching Methods for Parameter Optimization, Robin A. Dillard, Naval Ocean Systems Center, San Diego, CA.

  1. NetiNeti: discovery of scientific names from text using machine learning methods

    Directory of Open Access Journals (Sweden)

    Akella Lakshmi

    2012-08-01

Full Text Available Abstract Background A scientific name for an organism can be associated with almost all biological data. Name identification is an important step in many text mining tasks aiming to extract useful information from biological, biomedical and biodiversity text sources. A scientific name acts as an important metadata element to link biological information. Results We present NetiNeti (Name Extraction from Textual Information-Name Extraction for Taxonomic Indexing), a machine learning based approach for recognition of scientific names including the discovery of new species names from text that will also handle misspellings, OCR errors and other variations in names. The system generates candidate names using rules for scientific names and applies probabilistic machine learning methods to classify names based on structural features of candidate names and features derived from their contexts. NetiNeti can also disambiguate scientific names from other names using the contextual information. We evaluated NetiNeti on legacy biodiversity texts and biomedical literature (MEDLINE). NetiNeti performs better (precision = 98.9% and recall = 70.5%) compared to a popular dictionary based approach (precision = 97.5% and recall = 54.3%) on a 600-page biodiversity book that was manually marked by an annotator. On a small set of PubMed Central’s full text articles annotated with scientific names, the precision and recall values are 98.5% and 96.2%, respectively. NetiNeti found more than 190,000 unique binomial and trinomial names in more than 1,880,000 PubMed records when used on the full MEDLINE database. NetiNeti also successfully identifies almost all of the new species names mentioned within web pages. Conclusions We present NetiNeti, a machine learning based approach for identification and discovery of scientific names. The system implementing the approach can be accessed at http://namefinding.ubio.org.
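The candidate-generation step (rules for scientific names, applied before the probabilistic classifier) can be sketched with a simple regular expression in Python. The pattern below is a simplified assumption, far looser than NetiNeti's actual rules:

```python
import re

# Simplified candidate rule: a capitalised genus (or an abbreviation such
# as "E.") followed by a lowercase specific epithet of 3+ letters.
BINOMIAL = re.compile(r"\b([A-Z][a-z]+|[A-Z]\.)\s([a-z]{3,})\b")

def candidate_names(text):
    """Return every substring that looks structurally like a binomial name."""
    return [" ".join(m) for m in BINOMIAL.findall(text)]
```

On a sentence like "The bacterium Escherichia coli ..." this rule also emits the false candidate "The bacterium", which is precisely why a trained classifier over structural and contextual features is needed downstream to separate real names from look-alikes.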

  2. Studying depression using imaging and machine learning methods.

    Science.gov (United States)

    Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J

    2016-01-01

    Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  3. Performance of machine learning methods for classification tasks

    Directory of Open Access Journals (Sweden)

    B. Krithika

    2013-06-01

Full Text Available In this paper, the performance of various machine learning methods on pattern classification and recognition tasks is presented. The proposed evaluation of performance is based on the feature representation, feature selection and the setting of model parameters. The nature of the data and the methods of feature extraction and feature representation are discussed. The results of the machine learning algorithms on the classification task are analysed. The performance of machine learning methods on classifying Tamil word patterns, i.e., the classification of nouns and verbs, is analysed. The software WEKA (a data mining tool) is used for evaluating the performance. WEKA offers several families of machine learning algorithms, such as Bayes, tree, lazy, and rule-based classifiers.

  4. LHC Machine Protection System: Method for Balancing Machine Safety and Beam Availability

    CERN Document Server

    Wagner, Sigrid; Schmidt, R

    The Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, exceeds existing particle accelerators in terms of size and complexity. The most remarkable machine damage potential is held by the amount of stored energy. This thesis introduces a quantitative method for the reliability analysis of the LHC Machine Protection System (MPS) in terms of machine safety and beam availability. It is based on object-oriented modelling of the primary signal path, where the components’ behaviour is described by a simple Markov model with two failure states. The explicit inclusion of machine failure allows for the quantification of five scenarios. They include the safety-relevant scenario of a missed emergency shutdown and the scenario of a preventive shutdown, which is crucial with regard to beam availability. The presented MPS model covers two of the main MPS subsystems, namely the Beam Loss Monitor System and the Beam Interlock System. The model includes almost 5000 individually modelled components. It is implemented...

  5. New method to characterize a machining system: application in turning

    CERN Document Server

    Bisu, Claudiu-Florinel; Darnis, Philippe; Laheurte, Raynald; Gérard, Alain; 10.1007/s12289-009-0395-y

    2009-01-01

Many studies simulate the machining process by using a single-degree-of-freedom spring-mass system to model the tool stiffness, the workpiece stiffness, or the combined tool-workpiece stiffness in 2D models. Others impose the tool action, or use more or less complex models of the forces applied by the tool, taking into account the tool geometry. Thus, all these models remain two-dimensional or at best partially three-dimensional. This paper aims at developing an experimental method to determine accurately the real three-dimensional behaviour of a machining system (machine tool, cutting tool, tool-holder and the associated force-metrology system, a six-component dynamometer). In the work-space model of machining, a new experimental procedure is implemented to determine the elastic behaviour of the machining system. An experimental study of the machining system is presented. We propose a static characterization of the machining system. A decomposition of the system "Workpiece-Tool-Machine" into two distinct blocks is realiz...

  6. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics.   Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  7. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward extensions of classical linear algorithms are enabled as long as the data only appear as inner products in the model formulation. This dissertation presents research on improving the performance of standard kernel methods like kernel Principal Component Analysis and the Support Vector Machine. Moreover, the goal of the thesis has been two-fold. The first part focuses on the use of kernel Principal Component Analysis models in kernel learning, and means for restoring the generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. Viability is proved on a wide range of benchmark machine learning data sets.

  8. Research in non-equalization machining method for spatial cam

    Institute of Scientific and Technical Information of China (English)

    Jun-hua CHEN; Yi-jie WU

    2008-01-01

Many kinds of devices with cams have been widely used in various mechanical equipment. However, non-equalization machining for the spatial cam trough remains a difficult problem. This paper focuses on the analysis of the running conditions and machining processes of the spatial cam with oscillating follower. We point out the common errors in biased distance cutting. By analyzing the motion of the oscillating follower of the spatial cam, we present a new 3D curve expansion model of the spatial cam trough-outline. Based on this model, we have proposed a machining method for trochoidal milling with a non-equalization diameter cutter. This new method has led to a creative and effective way for non-equalization diameter machining of the spatial cam with oscillating follower.

  9. Machine cost analysis using the traditional machine-rate method and ChargeOut!

    Science.gov (United States)

    E. M. (Ted). Bilek

    2009-01-01

    Forestry operations require ever more use of expensive capital equipment. Mechanization is frequently necessary to perform cost-effective and safe operations. Increased capital should mean more sophisticated capital costing methodologies. However the machine rate method, which is the costing methodology most frequently used, dates back to 1942. CHARGEOUT!, a recently...

  10. Adjustable entropy function method for support vector machine

    Institute of Scientific and Technical Information of China (English)

    Wu Qing; Liu Sanyang; Zhang Leyou

    2008-01-01

Based on the KKT complementarity condition in optimization theory, an unconstrained non-differentiable optimization model for the support vector machine is proposed. An adjustable entropy function method is given to deal with the proposed optimization problem, and the Newton algorithm is used to compute the optimal solution. The proposed method can find an optimal solution with a relatively small parameter p, which avoids the numerical overflow of the traditional entropy function methods. It is a new approach to solving the support vector machine. The theoretical analysis and experimental results illustrate the feasibility and efficiency of the proposed algorithm.
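The entropy function at the core of such methods is a smooth approximation of the nonsmooth max operation appearing in the unconstrained reformulation. A standard form of this smoothing, shown here as a generic sketch rather than the authors' exact formulation, is:

```latex
% Smooth (entropy) upper bound on the maximum of x_1, ..., x_n
\max_{1\le i\le n} x_i \;\le\; F_p(x) \;=\; \frac{1}{p}\,\ln\sum_{i=1}^{n} e^{\,p x_i}
\;\le\; \max_{1\le i\le n} x_i \;+\; \frac{\ln n}{p}
% Specialised to the plus function t_+ = \max(0, t):
% F_p(t) = \frac{1}{p}\ln\!\bigl(1 + e^{\,p t}\bigr),
% \qquad 0 \;\le\; F_p(t) - t_+ \;\le\; \frac{\ln 2}{p}
```

The uniform error bound $\ln n / p$ explains the numerical issue the abstract highlights: classical entropy methods push $p$ large to tighten the approximation, and $e^{p x_i}$ then overflows in floating point, whereas an adjustable method keeps adequate accuracy with a smaller $p$.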

  11. Machine Learning Methods for Articulatory Data

    Science.gov (United States)

    Berry, Jeffrey James

    2012-01-01

    Humans make use of more than just the audio signal to perceive speech. Behavioral and neurological research has shown that a person's knowledge of how speech is produced influences what is perceived. With methods for collecting articulatory data becoming more ubiquitous, methods for extracting useful information are needed to make this data…

  12. METHODS OF DEFINITION OF COMPLEX MACHINE GRAINS’ AND SEEDS’ TRAUMATIZING

    Directory of Open Access Journals (Sweden)

    Pekhalskiy I. A.

    2016-06-01

    Full Text Available Damage of grain and seeds by machines makes essential negative impact on sowing qualities of seeds and processing properties of grain. While processing of grain a lot of various cars and actions differently injure weevils. To exclude traumatizing of grains in the course of mechanical preparation is not obviously possible, as working bodies of cars are a source of mechanical and thermomechanical damages. Besides, injured weevils on the physical-mechanical properties practically do not differ from whole, i.e. they do not possess signs for machine division. To reduce traumatizing of weevils is possible with the help of application of optimum technologies of machining, selection of the conforming technological modes, using as a part of actions of constructional stuffs with a low elastic modulus, perfection of their design data. For definition of injuring ability of various machines and actions through which takes place grain lots, have developed a procedure which allows with high degree of reliance to estimate complex traumatizing of weevils (namely, their outside integuments and intrinsic frames machines and the actions which are a part of aggregates and complexes for machine preparation of grain and seeds. The developed procedure bases on a basis of the standard documents regulating test methods of agricultural machinery and together with it allows to consider connatural heterogeneity of the grain lots arriving for processing

  13. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.

    Science.gov (United States)

    Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P

    2015-12-01

    Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence to prove the use of the precise timing of spikes for information coding. However, the exact learning mechanism in which the neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach and ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe.

  14. A defect-driven diagnostic method for machine tool spindles.

    Science.gov (United States)

    Vogl, Gregory W; Donmez, M Alkan

    2015-01-01

    Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition.

  15. Oceanic eddy detection and lifetime forecast using machine learning methods

    Science.gov (United States)

    Ashkezari, Mohammad D.; Hill, Christopher N.; Follett, Christopher N.; Forget, Gaël.; Follows, Michael J.

    2016-12-01

We report a novel altimetry-based machine learning approach for eddy identification and characterization. The machine learning models use daily maps of geostrophic velocity anomalies and are trained according to the phase angle between the zonal and meridional components at each grid point. The trained models are then used to identify the corresponding eddy phase patterns and to predict the lifetime of a detected eddy structure. The performance of the proposed method is examined at two dynamically different regions to demonstrate its robust behavior and region independence.
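The training signal described, the phase angle between the zonal and meridional velocity-anomaly components at each grid point, is straightforward to compute; a NumPy sketch (angle convention assumed):

```python
import numpy as np

def phase_angle(u, v):
    """Phase angle in degrees (0-360) between the zonal (u) and meridional
    (v) geostrophic velocity anomalies; works elementwise on grids."""
    return np.degrees(np.arctan2(v, u)) % 360.0
```

Around an eddy core the velocity vector rotates through a full turn, so this angle traces a characteristic spatial pattern (opposite in sense for cyclonic and anticyclonic eddies), which is the structure the models are trained to recognise.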

  16. Design methods for fault-tolerant finite state machines

    Science.gov (United States)

    Niranjan, Shailesh; Frenzel, James F.

    1993-01-01

    VLSI electronic circuits are increasingly being used in space-borne applications where high levels of radiation may induce faults, known as single event upsets. In this paper we review the classical methods of designing fault tolerant digital systems, with an emphasis on those methods which are particularly suitable for VLSI-implementation of finite state machines. Four methods are presented and will be compared in terms of design complexity, circuit size, and estimated circuit delay.

  17. Machine learning methods without tears: a primer for ecologists.

    Science.gov (United States)

    Olden, Julian D; Lawler, Joshua J; Poff, N LeRoy

    2008-06-01

Machine learning methods, a family of statistical techniques with origins in the field of artificial intelligence, are recognized as holding great promise for the advancement of understanding and prediction about ecological phenomena. These modeling techniques are flexible enough to handle complex problems with multiple interacting elements and typically outcompete traditional approaches (e.g., generalized linear models), making them ideal for modeling ecological systems. Despite their inherent advantages, a review of the literature reveals only a modest use of these approaches in ecology as compared to other disciplines. One potential explanation for this lack of interest is that machine learning techniques do not fall neatly into the class of statistical modeling approaches with which most ecologists are familiar. In this paper, we provide an introduction to three machine learning approaches that can be broadly used by ecologists: classification and regression trees, artificial neural networks, and evolutionary computation. For each approach, we provide a brief background to the methodology, give examples of its application in ecology, describe model development and implementation, discuss strengths and weaknesses, explore the availability of statistical software, and provide an illustrative example. Although the ecological application of machine learning approaches has increased, there remains considerable skepticism with respect to the role of these techniques in ecology. Our review encourages a greater understanding of machine learning approaches and promotes their future application and utilization, while also providing a basis from which ecologists can make informed decisions about whether to select or avoid these approaches in their future modeling endeavors.
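As a minimal, self-contained taste of the first approach (classification and regression trees), the pure-Python decision stump below chooses the threshold on a single predictor that minimises the Gini impurity of the split; a full tree simply applies this search recursively. It is a teaching sketch under a hypothetical predictor, not any package's implementation:

```python
def best_split(xs, ys):
    """Decision stump: pick the threshold on one predictor (e.g. stream
    temperature) that minimises the weighted Gini impurity of the two
    resulting groups of class labels."""
    def gini(labels):
        n = len(labels)
        if n == 0:
            return 0.0
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    best_t, best_score = None, None
    for t in sorted(set(xs)):                       # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best_score is None or score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

A perfectly separating threshold yields an impurity of zero; ecological data rarely split so cleanly, which is where deeper trees and the ensemble variants mentioned elsewhere in this listing earn their keep.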

  18. Plasma disruption prediction using machine learning methods: DIII-D

    Science.gov (United States)

    Lupin-Jimenez, L.; Kolemen, E.; Eldon, D.; Eidietis, N.

    2016-10-01

Plasma disruption prediction is becoming more important with the development of larger tokamaks, due to the larger amount of thermal and magnetic energy that can be stored. By accurately predicting an impending disruption, the disruption's impact can be mitigated or, better, prevented. Recent approaches to disruption prediction have been through implementation of machine learning methods, which characterize raw and processed diagnostic data to develop accurate prediction models. Using disruption trials from the DIII-D database, the effectiveness of different machine learning methods is characterized. The real-time disruption prediction approaches developed are focused on tearing and locking modes. Machine learning methods used include random forests, multilayer perceptrons, and traditional regression analysis. The algorithms are trained with data within short time frames, and whether or not a disruption occurs within the time window after the end of the frame. Initial results from the machine learning algorithms will be presented. Work supported by US DOE under the Science Undergraduate Laboratory Internship (SULI) program, DE-FC02-04ER54698, and DE-AC02-09CH11466.

  19. New Cogging Torque Reduction Methods for Permanent Magnet Machine

    Science.gov (United States)

    Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.

    2017-08-01

Permanent magnet motors (PMs), especially the permanent magnet synchronous motor (PMSM), are expanding into industrial application systems and are widely used in various applications. The key features of this machine include high power and torque density, extended speed range, high efficiency, better dynamic performance and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes a new method for cogging torque reduction. Based on the simulation, combining skewing with the radial pole pairing method and skewing with the axial pole pairing method reduces the cogging torque effect by up to 71.86% and 65.69%, respectively.

  20. A novel sensing method of fault in moving machine

    Science.gov (United States)

    Seo, Dae-Hoon; Jeon, Jong-Hoon; Kim, Yang-Hann

    2014-03-01

    Faults in the rotating parts of a machine, such as bearings and gears, often cause periodic impulses that are transmitted to adjacent parts while the machine moves at a constant speed. It has therefore been an issue to find the best means of detecting the existence of periodic impulses, and their period, as early as possible. Previous research mainly uses accelerometers, since they can easily measure the vibration due to an impulse. They normally require considerable measurement time and effort, especially when many different machines must be tested: this is a straightforward consequence of the sensor having to be detached from and reattached to the machine elements as many times as required. This paper proposes a novel method to sense the periodic impulses of moving machinery using a non-contact sensor such as a microphone. The method uses the periodic impulsive sound radiated by the fault instead of the impulsive vibration. It is not only more convenient than using accelerometers, but it can also promptly test many machines; they only have to pass by the microphone during the measurement. However, because the machine under test is moving, the measured impulsive signal is not periodic, due to the Doppler effect. This makes it difficult to estimate the period of the impulses, and even to detect the existence of a fault. To solve this, we first model and analyze the characteristics of moving periodic impulsive sound. Based on this analysis, a method to sense the existence of a fault is introduced that utilizes these characteristics. The performance is tested in theory and by simulation with respect to the signal-to-noise ratio.
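
    As a minimal illustration of the underlying detection task, the sketch below estimates the period of an impulse train buried in noise via autocorrelation. The paper's Doppler modelling for a moving source is omitted (the source is assumed stationary), and the sample rate, fault frequency, and search band are invented for the example.

```python
# Estimate the period of a noisy periodic impulse train via autocorrelation.
import numpy as np

fs = 2000                        # sample rate in Hz (assumed)
period_s = 0.05                  # true impulse period: a 20 Hz fault frequency
t = np.arange(0.0, 1.0, 1 / fs)

signal = np.zeros_like(t)
signal[::int(period_s * fs)] = 1.0                        # periodic impulses
noisy = signal + 0.1 * np.random.default_rng(1).normal(size=t.size)

# The autocorrelation peaks at lags that are multiples of the impulse period;
# search only lags corresponding to plausible fault frequencies (15-35 Hz).
ac = np.correlate(noisy, noisy, mode="full")[t.size - 1:]
lo, hi = fs // 35, fs // 15
est_period_s = (lo + int(np.argmax(ac[lo:hi]))) / fs
```

    Restricting the lag search band is what keeps the estimator from locking onto a harmonic of the true period.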

  1. DETECTION OF BACTERIA IN FOODSTUFF BY MACHINE LEARNING METHODS

    Directory of Open Access Journals (Sweden)

    A. P. Saenko

    2014-01-01

    Full Text Available The paper deals with the actual problem of ensuring foodstuff quality control by means of machine learning methods. Existing analysis methods require a special laboratory environment and significant time, and depend on the qualification and some physiological characteristics of an expert, while the suggested method makes it possible to decrease the costs significantly through automation. The mobile analysis platform implementing this method is based on fluorescence microscopy. The problem of classifying an object as either "bacterium" or "third-party artifact" was solved for the test data using classification algorithms such as the support vector machine, random forest, the C4.5 decision tree, k-nearest neighbors, and the Bayes method. The analysis showed that the most effective algorithms are the support vector machine and the random forest. This research was performed at the Mechatronics Department of Saint Petersburg National Research University of Information Technologies, Mechanics and Optics and the Quality Assurance and Industrial Image Processing Department of Ilmenau University of Technology, with the support of the "Mikhail Lomonosov" program of the Ministry of Education and Science of Russia and the German Academic Exchange Service.
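
    A classifier comparison of the kind described can be sketched with scikit-learn on synthetic two-class data; the two features below are hypothetical stand-ins for the fluorescence-image descriptors, not the paper's actual feature set.

```python
# Compare SVM and random forest (the paper's two best performers) on
# synthetic "bacterium" vs "third-party artifact" feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
# Two hypothetical morphological features per detected object.
bacteria = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(n, 2))
artifacts = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(n, 2))
X = np.vstack([bacteria, artifacts])
y = np.array([1] * n + [0] * n)          # 1 = bacterium, 0 = artifact

# 5-fold cross-validated accuracy for each classifier.
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
rf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
```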

  2. Analysis on Large Deformation Compensation Method for Grinding Machine

    Directory of Open Access Journals (Sweden)

    Wang Ya-jie

    2013-08-01

    Full Text Available The positioning accuracy of computer numerical control machine tools and manufacturing systems is affected by structural deformations, especially in large-sized systems. Structural deformations of the machine body are difficult to model and to predict. Research on the direct measurement of the amount of deformation and its compensation is fairly limited, both domestically and overseas, and does not cover calculating the amount of deformation compensation. A new method to compensate the large deformation caused by self-weight is presented in this paper. First, the compensation method is summarized. Then, static force analysis of the large grinding machine is performed through APDL (ANSYS Parametric Design Language), which can automatically extract the results and form data files, yielding the displacements of N points in the working stroke of the mechanical arm. Next, the mathematical model and the corresponding flat rectangular function are established. The conclusion that the new compensation method is feasible is obtained through analysis of the displacements of the N points. Finally, MATLAB is used as a tool to calculate the compensation amount, and the accuracy of the proposed method is proved. Practice shows that the error handled by the large-deformation compensation method can meet the requirements of grinding.
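
    The core numerical step, fitting a compensation model to the displacements of N points along the working stroke, can be sketched as follows. The deflection curve is a synthetic stand-in for the APDL/FEA results, and the quadratic model is an assumption made for the example (the paper uses its own "flat rectangular function").

```python
# Fit a compensation model to sampled self-weight deflections along the stroke.
import numpy as np

stroke = np.linspace(0.0, 2.0, 11)                  # N = 11 points, metres
# Hypothetical FEA displacements (mm): grow roughly quadratically with overhang.
deflection = 0.05 * stroke**2 + 0.01 * stroke

coeffs = np.polyfit(stroke, deflection, deg=2)      # fitted compensation model
compensate = np.polyval(coeffs, 1.3)                # compensation amount at x = 1.3 m
residual = np.max(np.abs(np.polyval(coeffs, stroke) - deflection))
```

    The controller would then add `compensate` (with opposite sign) to the commanded position at each point of the stroke.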

  3. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    Science.gov (United States)

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is

  4. A machine learning method for the prediction of receptor activation in the simulation of synapses.

    Directory of Open Access Journals (Sweden)

    Jesus Montes

    Full Text Available Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of

  5. Machine Learning and Data Mining Methods in Diabetes Research.

    Science.gov (United States)

    Kavakiotis, Ioannis; Tsave, Olga; Salifoglou, Athanasios; Maglaveras, Nicos; Vlahavas, Ioannis; Chouvarda, Ioanna

    2017-01-01

    The remarkable advances in biotechnology and the health sciences have led to a significant production of data, such as high-throughput genetic data and clinical information generated from large Electronic Health Records (EHRs). To this end, the application of machine learning and data mining methods in the biosciences is presently, more than ever before, vital and indispensable in the effort to intelligently transform all available information into valuable knowledge. Diabetes mellitus (DM) is defined as a group of metabolic disorders exerting significant pressure on human health worldwide. Extensive research into all aspects of diabetes (diagnosis, etiopathophysiology, therapy, etc.) has led to the generation of huge amounts of data. The aim of the present study is to conduct a systematic review of the applications of machine learning and data mining techniques and tools in the field of diabetes research with respect to a) Prediction and Diagnosis, b) Diabetic Complications, c) Genetic Background and Environment, and d) Health Care and Management, with the first category appearing to be the most popular. A wide range of machine learning algorithms was employed. In general, 85% of those used were characterized by supervised learning approaches and 15% by unsupervised ones, more specifically, association rules. Support vector machines (SVM) emerge as the most successful and widely used algorithm. Concerning the type of data, clinical datasets were mainly used. The applications in the selected articles demonstrate the usefulness of extracting valuable knowledge, leading to new hypotheses targeting deeper understanding and further investigation of DM.

  6. Unsupervised process monitoring and fault diagnosis with machine learning methods

    CERN Document Server

    Aldrich, Chris

    2013-01-01

    This unique text/reference describes in detail the latest advances in unsupervised process monitoring and fault diagnosis with machine learning methods. Abundant case studies throughout the text demonstrate the efficacy of each method in real-world settings. The broad coverage examines such cutting-edge topics as the use of information theory to enhance unsupervised learning in tree-based methods, the extension of kernel methods to multiple kernel learning for feature extraction from data, and the incremental training of multilayer perceptrons to construct deep architectures for enhanced data

  7. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

    Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few ×10^6 targets; photometric surveys, on the other hand, have detected >10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' [...]), the scatter from the machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.

  8. Study of CNC Grinding Machining Method About Isometric Polygon Profile

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The forming principle and CNC grinding machining method of the isometric polygonal profile are studied deeply and systematically. The equation of the section curve of the isometric polygon profile is set up by means of geometric principles. Using differential geometry theory, the curve is proved to have the geometric features of a convex curve. It is referred to as an Isometric Polygonal Curve (IPC) because it is a kind of convex curve on which the distance between any pair of parallel tangent lines is equal. Isometric Poly...

  9. Housing Value Forecasting Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Jingyi Mu

    2014-01-01

    Full Text Available In the era of big data, many urgent issues in all walks of life can be addressed via big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in the area of architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can decide whether or not to develop real estate in the corresponding regions. In this paper, the support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and these algorithms are compared according to the predicted results. Experiments show that although the data set exhibits serious nonlinearity, the SVM and LSSVM methods are superior to PLS in dealing with the nonlinearity. The global optimal solution can be found, and the best forecasting effect achieved, by SVM, because it solves a quadratic programming problem. In addition, the computational efficiencies of the algorithms are compared according to their computing times.

  10. Machine learning method for knowledge discovery experimented with otoneurological data.

    Science.gov (United States)

    Varpa, Kirsi; Iltanen, Kati; Juhola, Martti

    2008-08-01

    We have been interested in developing an otoneurological decision support system that supports the diagnostics of vertigo diseases. In this study, we concentrate on testing its inference mechanism and knowledge discovery method. Knowledge is presented as patterns of classes. Each pattern includes attributes with weight and fitness values concerning the class. With the knowledge discovery method it is possible to form fitness values from data. Knowledge formation is based on frequency distributions of attributes. Knowledge formed by the knowledge discovery method is tested with two vertigo data sets and compared to experts' knowledge. The experts' and machine-learnt knowledge are also combined in various ways in order to examine the effect of weights on classification accuracy. The classification accuracy of the knowledge discovery method is compared to the 1- and 5-nearest neighbour methods and the Naive Bayes classifier. The results showed that knowledge bases combining machine-learnt knowledge with the experts' knowledge yielded the best classification accuracies. Further, attribute weighting had an important effect on the classification capability of the system. When considering the different diseases in the data sets used, the performance of the knowledge discovery method and its inference method is comparable to that of the other methods employed in this study.

  11. Yarn Properties Prediction Based on Machine Learning Method

    Institute of Scientific and Technical Information of China (English)

    YANG Jian-guo; L(U) Zhi-jun; LI Bei-zhi

    2007-01-01

    Although much work has been done to construct prediction models of yarn processing quality, the relation between spinning variables and yarn properties has not been established conclusively so far. Support vector machines (SVMs), based on statistical learning theory, are gaining applications in the areas of machine learning and pattern recognition because of their high accuracy and good generalization capability. This study briefly introduces SVM regression algorithms and presents an SVM-based system architecture for predicting yarn properties. Model selection, which amounts to a search in hyper-parameter space, is performed to find suitable parameters with the grid-search method. Experimental results have been compared with those of artificial neural network (ANN) models. The investigation indicates that on small data sets and in real-life production, SVM models are capable of retaining stable predictive accuracy, and are more suitable for the noisy and dynamic spinning process.
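
    The grid-search model selection described above can be sketched with scikit-learn's GridSearchCV. The spinning variables and the target below are invented stand-ins for the real fibre and process data, and the parameter grid is illustrative.

```python
# Grid search over SVR hyper-parameters (C, gamma) for a yarn-property
# regression on synthetic spinning data.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))        # hypothetical spinning variables
# Hypothetical yarn property (e.g. strength): linear plus interaction term.
y = 20 * X[:, 0] - 5 * X[:, 1] * X[:, 2] + 0.5 * rng.normal(size=200)

param_grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5).fit(X, y)

best_params = search.best_params_
best_r2 = search.best_score_                # SVR's default scoring is R^2
```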

  12. A new method for thread calibration on coordinate measuring machines

    DEFF Research Database (Denmark)

    Carmignato, Simone; De Chiffre, Leonardo

    2003-01-01

    CIRP Annals – Paper proposal temporary reference: P15. This paper presents a new method for the calibration of thread gauges on coordinate measuring machines. The procedure involves scanning of thread profiles using a needle-like probe, achieving traceability by substitution of different thread......-3 gave measuring uncertainties comparable to the values from usual calibration methods on dedicated equipment, e.g. a measuring uncertainty of 1.5 µm was achieved for measurement of the pitch, and 2-2.5 µm for diameter measurements....

  13. Supervised Machine Learning Methods Applied to Predict Ligand- Binding Affinity.

    Science.gov (United States)

    Heck, Gabriela S; Pintro, Val O; Pereira, Richard R; de Ávila, Mauricio B; Levin, Nayara M B; de Azevedo, Walter F

    2017-01-01

    Calculation of ligand-binding affinity is an open problem in computational medicinal chemistry. The ability to computationally predict affinities has a beneficial impact in the early stages of drug development, since it allows a mathematical model to assess protein-ligand interactions. Due to the availability of structural and binding information, machine learning methods have been applied to generate scoring functions with good predictive power. Our goal here is to review recent developments in the application of machine learning methods to predict ligand-binding affinity. We focus our review on the application of computational methods to predict binding affinity for protein targets. In addition, we describe the major available databases for experimental binding constants and protein structures. Furthermore, we explain the most successful methods for evaluating the predictive power of scoring functions. Association of structural information with ligand-binding affinity makes it possible to generate scoring functions targeted to a specific biological system. Through regression analysis, these data can be used as a base to generate mathematical models to predict ligand-binding affinities, such as the inhibition constant, dissociation constant and binding energy. Experimental biophysical techniques have been able to determine the structures of over 120,000 macromolecules. Considering also the growth of binding affinity information, we have a promising scenario for the development of scoring functions that make use of machine learning techniques. Recent developments in this area indicate that building scoring functions targeted to the biological system of interest yields superior predictive performance compared with other approaches.

  14. Low-Resolution Tactile Image Recognition for Automated Robotic Assembly Using Kernel PCA-Based Feature Fusion and Multiple Kernel Learning-Based Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-01-01

    Full Text Available In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability; this is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. In this paper, we design a classifier by combining the multiple kernel learning (MKL) algorithm and the support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, obtained on real tactile images acquired with the designed tactile sensing array, show that the kernel PCA-based feature fusion can significantly improve the discriminating performance of the geometric features and Fourier descriptors. Also, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme achieves a recognition rate of over 85% for the classification of 12 metal parts commonly used in industrial applications.
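
    The middle layers of the pipeline (kernel PCA feature fusion followed by an SVM classifier) can be sketched with scikit-learn. The plain SVC below stands in for the paper's MKL-SVM, and the feature vectors are synthetic rather than real geometric/Fourier descriptors.

```python
# Kernel PCA feature fusion followed by SVM classification, on synthetic
# multi-class descriptor vectors standing in for tactile-image features.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 16             # e.g. descriptors per tactile image
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in (0.0, 1.0, 2.0)])    # three stand-in part classes
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Fuse features with kernel PCA, then classify the transformed features.
model = make_pipeline(KernelPCA(n_components=8, kernel="rbf"), SVC())
accuracy = model.fit(X_tr, y_tr).score(X_te, y_te)
```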

  15. Machining of composite materials. I - Traditional methods. II - Non-traditional methods

    Science.gov (United States)

    Abrate, S.; Walton, D. A.

    Traditional and nontraditional methods for machining organic-matrix and metal-matrix composites are reviewed. Traditional procedures such as drilling, cutting, sawing, routing, and grinding are discussed, together with the damage these operations introduce into composites. Particular attention is given to newer, nontraditional methods, including laser, water-jet, electrodischarge, electrochemical-spark, and ultrasonic machining, showing that these methods often speed up cutting and improve surface quality. Moreover, it is sometimes possible to use the new methods in cases where traditional methods are ineffective.

  16. A New Method for Incremental Testing of Finite State Machines

    Science.gov (United States)

    Pedrosa, Lehilton Lelis Chaves; Moura, Arnaldo Vieira

    2010-01-01

    The automatic generation of test cases is an important issue for conformance testing of several critical systems. We present a new method for the derivation of test suites when the specification is modeled as a combined Finite State Machine (FSM). A combined FSM is obtained conjoining previously tested submachines with newly added states. This new concept is used to describe a fault model suitable for incremental testing of new systems, or for retesting modified implementations. For this fault model, only the newly added or modified states need to be tested, thereby considerably reducing the size of the test suites. The new method is a generalization of the well-known W-method and the G-method, but is scalable, and so it can be used to test FSMs with an arbitrarily large number of states.

  17. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light and the positions of the tubes' endpoints were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or tools, and can realize on-line measurement.
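
    The stereo-matching step that yields global endpoint coordinates can be illustrated with a standard linear (DLT) triangulation. The camera projection matrices and the endpoint below are invented for the example; they are not the paper's calibration.

```python
# Recover a 3-D point from its pixel positions in two calibrated views
# via linear (DLT) triangulation.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear-least-squares triangulation from two 3x4 projection matrices."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # null vector of A = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity view and one translated 0.5 along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

endpoint = np.array([0.2, -0.1, 2.0])    # ground-truth tube endpoint
p1 = P1 @ np.append(endpoint, 1.0)
p2 = P2 @ np.append(endpoint, 1.0)
uv1, uv2 = p1[:2] / p1[2], p2[:2] / p2[2]   # projected pixel coordinates

recovered = triangulate(P1, P2, uv1, uv2)
```

    With noise-free projections the recovered point matches the ground truth; in practice the matched image points come from the stereo-matching stage.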

  18. Accurate Measurement Method for Tube's Endpoints Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    LIU Shaoli; JIN Peng; LIU Jianhua; WANG Xiao; SUN Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light and the positions of the tubes' endpoints were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or tools, and can realize on-line measurement.

  19. Accurate measurement method for tube's endpoints based on machine vision

    Science.gov (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2016-08-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly can directly affect assembly reliability and product quality. It is important to measure the processed tube's endpoints and then fix any geometric errors correspondingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light and the positions of the tubes' endpoints were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or tools, and can realize on-line measurement.

  20. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with the parametric likelihood estimators commonly employed in population genetics for assigning individuals to their population of origin ("assignment tests"). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), numbers of loci surveyed (5 and 10), and allelic diversity (an average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush), exhibiting levels of population differentiation comparable to those used in the simulations, were examined to further evaluate and compare the classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower than those of the k-nearest neighbor and decision tree classifiers for the simulated data sets over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to the likelihood estimators for the empirical data sets, suggesting an ability to "learn" and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods nevertheless provide a more accessible option for reliable assignment of individuals to their population of origin, given the intricacies involved in developing and evaluating artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and the development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
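
    The likelihood baseline used in such assignment tests can be sketched as follows: estimate per-population allele frequencies from reference genotypes, then assign each individual to the population maximizing its genotype log-likelihood. The data is simulated (biallelic loci under Hardy-Weinberg equilibrium are assumed, unlike the multi-allelic loci in the paper), and the constant binomial coefficient is dropped since it does not affect the argmax.

```python
# Likelihood-based population assignment from simulated biallelic genotypes.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_ind = 10, 100

# Two populations with well-separated allele frequencies at each locus.
freq = np.array([rng.uniform(0.1, 0.4, n_loci),      # population 0
                 rng.uniform(0.6, 0.9, n_loci)])     # population 1

def sample_genotypes(p, n):
    # Genotype = count of allele "A" (0, 1, or 2) under Hardy-Weinberg.
    return rng.binomial(2, p, size=(n, p.size))

ref = {k: sample_genotypes(freq[k], n_ind) for k in (0, 1)}
est = np.array([ref[k].mean(axis=0) / 2 for k in (0, 1)])   # estimated freqs

def assign(g):
    # Binomial log-likelihood of the genotype under each population's freqs.
    ll = (g * np.log(est) + (2 - g) * np.log(1 - est)).sum(axis=1)
    return int(np.argmax(ll))

test_ind = sample_genotypes(freq[1], 50)             # individuals truly from pop 1
accuracy = float(np.mean([assign(g) == 1 for g in test_ind]))
```

    With strong differentiation and 10 loci, assignment is nearly perfect; the paper's interesting regimes are the low-FST, few-locus cases where this baseline degrades.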

  1. Machine Learning Methods for Attack Detection in the Smart Grid.

    Science.gov (United States)

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.

  2. Research advances in coupling bionic optimization design method for CNC machine tools based on ergonomics

    Directory of Open Access Journals (Sweden)

    Shihao LIU

    2015-06-01

    Full Text Available Currently, the dynamic and static performance of most Chinese CNC machine tools lags considerably behind that of similar foreign products, and CNC machine tool users' demand for human-centered design is ignored, with the result that the overall competitiveness of domestic CNC machine tools is relatively low. To solve this problem, ergonomics and coupling bionics are adopted to study a collaborative optimization design method for CNC machine tools, building on domestic and foreign research achievements in machine tool design methods. The "man-machine-environment" interaction mechanism of CNC machine tools can be built by drawing on ergonomics, yielding ergonomic design criteria for CNC machine tools. Taking coupling bionics as the theoretical basis, the multiple coupling mechanism of biological structures ("morphology-structure-function-adaptive growth") can be studied and structures with beneficial mechanical performance extracted; a structural coupling bionic design technology for CNC machine tools is then obtained by applying the similarity principle. Combining the ergonomic design criteria with the coupling bionic design technology, and considering the interaction and coupling mechanisms of CNC machine tool performance, a new multi-objective optimization design method is obtained, which is verified through prototype experiments on CNC machine tools. The new optimization design method can not only help improve the whole machine's dynamic and static performance, but also has a bright prospect because of its "man-oriented" design concept.

  3. A New Method for Locating Calculation of Arbitrary Spatial Straight Line or Plane in NC Machining

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In the manufacturing process, we often encounter some location machining of space arbitrary straight lines and planes that are not only unparalleled but also not vertical with the machine tool spindle or the cutting tool. In the past, we could do the location machining through the methods of drawing line and making level in the ordinary machine tool. In the numerical control machining of the CNC machine tool and manufacturing center, however, the space location and angle of the arbitrary straight lines...

  4. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    Full Text Available This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application-independent), and the low overhead of the resulting C++ code (over hand-crafted C code), supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.

  5. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

    In some industrial fields, the workpiece surface needs to meet not only the demand of surface roughness but also strict requirements on multi-scale frequency domain errors. The ultra-precision machine tool is the most important carrier for the ultra-precision machining of parts, and its errors are the key factor influencing the multi-scale frequency domain errors of the machined surface. Volumetric error modeling is the important bridge linking machine errors to machined surface errors. However, the error modeling methods available from previous research are hard to use to analyze the relationship between the dynamic errors of the machine motion components and the multi-scale frequency domain errors of the machined surface, which plays an important reference role in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier transform based dynamic error modeling method is presented, built on the theoretical basis of rigid body kinematics and homogeneous transformation matrices. A case study shows that the proposed method can realize an identical and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for predicting the frequency domain errors on the machined surface, extracting multi-scale frequency domain error information, and analyzing the relationship between the machine motion components and the frequency domain errors of the machined surface.
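The homogeneous-transformation-matrix part of this modeling can be illustrated with a small sketch: each axis contributes an ideal motion plus a small error transform, and the volumetric error at the tool tip is read off the composed 4x4 matrix. All numeric values and the two-axis kinematic chain are our own illustrative assumptions, not the paper's machine.

```python
# Sketch of HTM-based volumetric error stacking (illustrative values).

def matmul4(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

# Ideal chain: X-axis move of 100 mm followed by a Y-axis move of 50 mm.
ideal = matmul4(translation(100, 0, 0), translation(0, 50, 0))
# Actual chain: each axis motion is followed by a small error transform.
actual = matmul4(
    matmul4(translation(100, 0, 0), translation(0.002, -0.001, 0.0005)),   # X-axis errors
    matmul4(translation(0, 50, 0), translation(-0.0008, 0.0015, 0.001)),   # Y-axis errors
)
# Volumetric error = difference of the position columns of the two matrices.
volumetric_error = [actual[i][3] - ideal[i][3] for i in range(3)]
print(volumetric_error)
```

A real model would use full error transforms (small rotations as well as translations), in which case the errors no longer simply add and the matrix composition does real work.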

  6. Brain emotional learning based Brain Computer Interface

    Directory of Open Access Journals (Sweden)

    Abdolreza Asadi Ghanbari

    2012-09-01

    Full Text Available A brain computer interface (BCI) enables direct communication between a brain and a computer, translating brain activity into computer commands using preprocessing, feature extraction and classification operations. Classification is crucial, as it has a substantial effect on the BCI speed and bit rate. Recent developments of brain-computer interfaces (BCIs) bring forward some challenging problems to the machine learning community, of which classification of time-varying electrophysiological signals is a crucial one. Constructing adaptive classifiers is a promising approach to deal with this problem. In this paper, we introduce adaptive classifiers for classifying electroencephalogram (EEG) signals. The adaptive classifier is a brain emotional learning based adaptive classifier (BELBAC), which is based on the emotional learning process. The main purpose of this research is to use a structural model based on the limbic system of the mammalian brain for decision making and control engineering applications. We have adopted a network model developed by Moren and Balkenius, as a computational model that mimics the amygdala, orbitofrontal cortex, thalamus, sensory input cortex and, generally, those parts of the brain thought responsible for processing emotions. The developed method was compared with other methods used for EEG signal classification: a support vector machine (SVM) and two different neural network types (MLP, PNN). The result analysis demonstrated the efficiency of the proposed approach.

  7. A Method for Identifying the Mechanical Parameters in Resistance Spot Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    Mechanical dynamic responses of a resistance welding machine have a significant influence on weld quality and electrode service life, and they must be considered when real welding production is carried out or the welding process is simulated. The mathematical models for characterizing the mechanical...... and differences of machine constructions. In this paper, a method of identifying the machine mechanical parameters based on measured data is presented, which is independent of the construction and the type of machine. The computations are implemented in MATLAB....

  8. Hybrid Neural Network and Support Vector Machine Method for Optimization

    Science.gov (United States)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  9. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  11. UTILITY OF THE T.H.M. (MACHINE-HOUR-RATE) METHOD IN PRODUCTION CENTRE PROCESS AUTOMATION

    Directory of Open Access Journals (Sweden)

    Cristina-Otilia, ȚENOVICI

    2014-11-01

    Full Text Available The T.H.M. (machine-hour-rate) method gives greater accuracy in factories or departments where production is largely by machinery. In the specialty literature, the machine-hour rate is defined as "a rate calculated by dividing the budgeted or estimated overhead or labour and overhead cost attributable to a machine or group of similar machines by the appropriate number of machine hours. The hours may be the number of hours for which the machine or group is expected to be operated, the number of hours which would relate to normal working for the factory, or full capacity". In a highly mechanised cost centre, the majority of the overhead expenses are incurred on account of using the machine, such as depreciation, power, repairs and maintenance, insurance, etc. This method currently offers the most equitable basis for absorption of overheads in machine-intensive cost centres.
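The definition quoted above is a straightforward division, and overhead absorption follows from it. A minimal sketch with illustrative figures (the overhead categories mirror the abstract; the amounts and hours are ours):

```python
# Machine-hour rate = machine-related overheads / budgeted machine hours.
# All figures are illustrative.

def machine_hour_rate(overheads, machine_hours):
    return sum(overheads.values()) / machine_hours

overheads = {                     # annual machine-related overheads
    "depreciation": 12000,
    "power": 3000,
    "repairs_maintenance": 2500,
    "insurance": 500,
}
rate = machine_hour_rate(overheads, machine_hours=2000)
print(rate)           # 9.0 per machine hour

# Absorb overhead into a job that uses 35 machine hours on this cost centre.
cost_of_job = rate * 35
print(cost_of_job)    # 315.0
```

The "appropriate number of machine hours" in the denominator is the policy choice the abstract describes: expected operating hours, normal working hours, or full capacity.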

  12. Machine-learning methods in the classification of water bodies

    Directory of Open Access Journals (Sweden)

    Sołtysiak Marek

    2016-06-01

    Full Text Available Amphibian species have been considered useful ecological indicators. They are used as indicators of environmental contamination, ecosystem health and habitat quality. Amphibian species are sensitive to changes in the aquatic environment and therefore may form the basis for the classification of water bodies. Water bodies in which there are a large number of amphibian species are especially valuable, even if they are located in urban areas. The automation of the classification process allows for a faster evaluation of the presence of amphibian species in the water bodies. Three machine-learning methods (artificial neural networks, decision trees and the k-nearest neighbours algorithm) have been used to classify water bodies in Chorzów – one of 19 cities in the Upper Silesia Agglomeration. In this case, classification is a supervised data mining method consisting of several stages, such as building the model, the testing phase and the prediction. Seven natural and anthropogenic features of water bodies (e.g. the type of water body, aquatic plants, the purpose (destination) of the water body, the position of the water body in relation to any possible buildings, the condition of the water body, the degree of littering, the shore type and fishing activities) have been taken into account in the classification. The data set used in this study involved information about 71 different water bodies and 9 amphibian species living in them. The results showed that the best average classification accuracy was obtained with the multilayer perceptron neural network.
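Of the three methods listed, k-nearest neighbours is the simplest to show end to end: encode each water body as a numeric feature vector and label a new one by majority vote among the k closest training examples. The feature encoding and class labels below are our own illustration, not the study's data.

```python
# Minimal k-NN sketch for water-body classification (illustrative data).

from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Majority vote among the k training points closest (squared Euclidean) to query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), lab)
        for x, lab in zip(train, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Toy encoding: [degree of littering, aquatic-plant cover, shore naturalness]
train = [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9], [0.9, 0.1, 0.2], [0.8, 0.2, 0.1]]
labels = ["valuable", "valuable", "degraded", "degraded"]
print(knn_classify(train, labels, [0.15, 0.85, 0.8]))  # -> valuable
```

In the study, the seven natural and anthropogenic features would need categorical-to-numeric encoding and scaling before a distance metric makes sense.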

  13. Recent Advances in Conotoxin Classification by Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Fu-Ying Dao

    2017-06-01

    Full Text Available Conotoxins are invaluable disulfide-rich small peptides that target ion channels and neuronal receptors. Conotoxins have been demonstrated as potent pharmaceuticals in the treatment of a series of diseases, such as Alzheimer's disease, Parkinson's disease, and epilepsy. In addition, conotoxins are also ideal molecular templates for the development of new drug lead compounds and play important roles in neurobiological research as well. Thus, the accurate identification of conotoxin types will provide key clues for biological research and clinical medicine. Generally, conotoxin types are confirmed when their sequence, structure, and function are experimentally validated. However, it is time-consuming and costly to acquire the structure and function information by using biochemical experiments. Therefore, it is important to develop computational tools for efficiently and effectively recognizing conotoxin types based on sequence information. In this work, we reviewed the current progress in computational identification of conotoxins in the following aspects: (i) construction of benchmark datasets; (ii) strategies for extracting sequence features; (iii) feature selection techniques; (iv) machine learning methods for classifying conotoxins; (v) the results obtained by these methods and the published tools; and (vi) future perspectives on conotoxin classification. The paper provides the basis for in-depth study of conotoxins and drug therapy research.

  14. System and method for cooling a superconducting rotary machine

    Science.gov (United States)

    Ackermann, Robert Adolf; Laskaris, Evangelos Trifon; Huang, Xianrui; Bray, James William

    2011-08-09

    A system for cooling a superconducting rotary machine includes a plurality of sealed siphon tubes disposed in balanced locations around a rotor adjacent to a superconducting coil. Each of the sealed siphon tubes includes a tubular body and a heat transfer medium disposed in the tubular body that undergoes a phase change during operation of the machine to extract heat from the superconducting coil. A siphon heat exchanger is thermally coupled to the siphon tubes for extracting heat from the siphon tubes during operation of the machine.

  15. Sparse Machine Learning Methods for Understanding Large Text Corpora

    Data.gov (United States)

    National Aeronautics and Space Administration — Sparse machine learning has recently emerged as a powerful tool to obtain models of high-dimensional data with a high degree of interpretability, at low computational...

  16. NESVM: a Fast Gradient Method for Support Vector Machines

    CERN Document Server

    Zhou, Tianyi; Wu, Xindong

    2010-01-01

    Support vector machines (SVMs) are invaluable tools for many practical applications in artificial intelligence, e.g., classification and event recognition. However, popular SVM solvers are not sufficiently efficient for applications with large numbers of samples and features. In this paper, we thus present NESVM, a fast gradient SVM solver that can optimize various SVM models, e.g., classical SVM, linear programming SVM and least square SVM. Compared against SVM-Perf \cite{SVM_Perf}\cite{PerfML} (its convergence rate in solving the dual SVM is upper bounded by $\mathcal O(1/\sqrt{k})$, wherein $k$ is the number of iterations.) and Pegasos \cite{Pegasos} (online SVM that converges at rate $\mathcal O(1/k)$ for the primal SVM), NESVM achieves the optimal convergence rate at $\mathcal O(1/k^{2})$ and a linear time complexity. In particular, NESVM smoothes the non-differentiable hinge loss and $\ell_1$-norm in the primal SVM. Then the optimal gradient method without any line search is ado...
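The smoothing step the abstract mentions can be sketched for the hinge loss: replace the non-differentiable max(0, 1 - margin) by a quadratically smoothed (Huber-style) version so that plain gradient steps apply. This is a generic illustration of the idea, not NESVM's exact smoothing; the parameter name `mu` is our assumption.

```python
# Huber-style smoothing of the hinge loss max(0, 1 - margin), margin = y * f(x).
# mu controls the width of the quadratic region (illustrative, not NESVM's exact form).

def smoothed_hinge(margin, mu=0.5):
    z = 1.0 - margin
    if z <= 0:
        return 0.0          # correctly classified with margin: zero loss
    if z >= mu:
        return z - mu / 2.0  # linear region, shifted to join smoothly
    return z * z / (2.0 * mu)  # quadratic region near the hinge point

def smoothed_hinge_grad(margin, mu=0.5):
    """Derivative with respect to the margin; continuous everywhere."""
    z = 1.0 - margin
    if z <= 0:
        return 0.0
    if z >= mu:
        return -1.0
    return -z / mu

print(smoothed_hinge(2.0), smoothed_hinge(-1.0), round(smoothed_hinge(0.9), 6))  # 0.0 1.75 0.01
```

With a smooth, Lipschitz-gradient objective, an accelerated (Nesterov-type) gradient method can then attain the $\mathcal O(1/k^{2})$ rate the abstract cites.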

  17. Fault Diagnosis of Batch Reactor Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Sujatha Subramanian

    2014-01-01

    Full Text Available Fault diagnosis of a batch reactor gives early detection of faults and minimizes the risk of thermal runaway. It provides superior performance, helps to improve safety and consistency, and has become more vital in this technical era. In this paper, a support vector machine (SVM) is used to estimate the heat release (Qr) of the batch reactor under both normal and faulty conditions. The signature of the residual, which is obtained from the difference between nominal and estimated faulty Qr values, characterizes the different natures of faults occurring in the batch reactor. Appropriate statistical and geometric features are extracted from the residual signature, and the total number of features is reduced using the SVM attribute selection filter and principal component analysis (PCA) techniques. Artificial neural network (ANN) classifiers like the multilayer perceptron (MLP), radial basis function (RBF), and Bayes net are used to classify the different types of faults from the reduced features. The comparative study shows that the proposed method for fault diagnosis, with a limited number of features extracted from only one estimated parameter (Qr), is more efficient and faster for diagnosing the typical faults.
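The feature-extraction step on the residual can be sketched directly: compute simple statistical features of the residual signal (nominal minus estimated Qr) that a downstream classifier would consume. The particular features and the toy Qr trajectories are our illustrative assumptions; the paper's exact feature set is not specified in this record.

```python
# Sketch: statistical features of a residual signal for fault classification.
# The Qr trajectories are illustrative.

import math

def residual_features(residual):
    n = len(residual)
    mean = sum(residual) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in residual) / n)
    rms = math.sqrt(sum(r * r for r in residual) / n)
    peak = max(abs(r) for r in residual)
    return {"mean": mean, "std": std, "rms": rms, "peak": peak}

nominal   = [10.0, 10.5, 11.0, 10.8, 10.2]   # nominal heat release Qr
estimated = [10.0, 10.4, 12.0, 11.8, 10.1]   # estimate drifts under a fault
residual = [e - n for n, e in zip(nominal, estimated)]
print(residual_features(residual))
```

Each fault type leaves a different signature in such features (e.g. a bias fault shifts the mean, an oscillatory fault raises the rms), which is what lets the MLP/RBF/Bayes net classifiers separate them.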

  18. Rotating electrical machines. Part 2: Methods for determining losses and efficiency of rotating electrical machinery from tests (excluding machines for traction vehicles)

    CERN Document Server

    International Electrotechnical Commission. Geneva

    1972-01-01

    Applies to d.c. machines and to a.c. synchronous and induction machines. The principles can be applied to other types of machines such as rotary converters, a.c. commutator motors and single-phase induction motors for which other methods of determining losses are used.

  19. Identification of machining defects by Small Displacement Torsor and form parameterization method

    CERN Document Server

    Sergent, Alain; Favreliere, Hugues; Duret, Daniel; Samper, Serge; Villeneuve, François

    2011-01-01

    In the context of product quality, the methods that can be used to estimate machining defects and predict causes of these defects are one of the important factors of a manufacturing process. The two approaches that are presented in this article are used to determine the machining defects. The first approach uses the Small Displacement Torsor (SDT) concept [BM] to determine displacement dispersions (translations and rotations) of machined surfaces. The second one, which takes into account form errors of machined surface (i.e. twist, comber, undulation), uses a geometrical model based on the modal shape's properties, namely the form parameterization method [FS1]. A case study is then carried out to analyze the machining defects of a batch of machined parts.

  20. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.

  1. Methods, systems and apparatus for synchronous current regulation of a five-phase machine

    Science.gov (United States)

    Gallegos-Lopez, Gabriel; Perisic, Milun

    2012-10-09

    Methods, systems and apparatus are provided for controlling operation of and regulating current provided to a five-phase machine when one or more phases has experienced a fault or has failed. In one implementation, the disclosed embodiments can be used to synchronously regulate current in a vector controlled motor drive system that includes a five-phase AC machine, a five-phase inverter module coupled to the five-phase AC machine, and a synchronous current regulator.

  2. A Method to Evaluate the Performance of a Multiprocessor Machine based on Data Flow Principles

    OpenAIRE

    1989-01-01

    In this paper we present a method to model a static data flow oriented multiprocessor system. This methodology of modelling can be used to examine the machine behaviour for executing a program according to three scheduling strategies, viz., static, dynamic and quasi-dynamic policies. The processing elements (PEs) of the machine go through different states in order to complete the tasks they are allotted. Hence, the time taken by the machine to execute a program is directly dependent on the tim...

  3. Redesigned Surface Based Machining Strategy and Method in Peripheral Milling of Thin-walled Parts

    Institute of Scientific and Technical Information of China (English)

    JIA Zhenyuan; GUO Qiang; SUN Yuwen; GUO Dongming

    2010-01-01

    Currently, simultaneously ensuring the machining accuracy and efficiency of thin-walled structures, especially high-performance parts, still remains a challenge. Existing compensating methods mainly focus on 3-axis machining, and sometimes take only one given point as the compensation point at each cutter location. This paper presents a redesigned surface based machining strategy for peripheral milling of thin-walled parts. Based on an improved cutting force/heat model and a finite element method (FEM) simulation environment, a deflection error prediction model is established which takes sequences of cutter contact lines as compensation targets, and an iterative algorithm is presented to determine feasible cutter axis positions. The final redesigned surface is subsequently generated by skinning all discrete cutter axis vectors after compensation with the proposed algorithm. The proposed machining strategy incorporates the thermo-mechanical coupled effect in deflection prediction, and is validated with a flank milling experiment on a five-axis machine tool, with the deformation error measured by a three-coordinate measuring machine. Error prediction values and experimental results show good consistency, and the proposed approach is able to significantly reduce the dimensional error under the same machining conditions compared with conventional methods. The proposed machining strategy has potential in high-efficiency precision machining of thin-walled parts.

  4. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    Science.gov (United States)

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research, the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were evaluated using real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tests show that imputation of parent-offspring trios can be accurate. The Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the ELM was always the fastest algorithm, whereas with increasing sample size the RBF requires a long imputation time. The tested methods can be an alternative for imputation of un-typed SNPs at a low missing rate of data; however, it is recommended that other machine learning methods also be evaluated for imputation.
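The imputation task itself is easy to make concrete with the simplest of the compared families, nearest neighbours: fill a missing SNP with the majority genotype among the reference samples most similar at the observed SNPs. The 0/1/2 allele-count coding is standard; the reference panel and the use of plain k-NN are our illustrative assumptions, not the paper's pipeline.

```python
# Sketch: nearest-neighbour imputation of a missing SNP (None) in a 0/1/2-coded
# genotype vector, by majority vote among the k most similar reference samples.

from collections import Counter

def impute(sample, reference, k=3):
    missing = [i for i, g in enumerate(sample) if g is None]
    observed = [i for i, g in enumerate(sample) if g is not None]
    # rank reference samples by number of mismatches at the observed positions
    ranked = sorted(reference, key=lambda ref: sum(sample[i] != ref[i] for i in observed))
    filled = list(sample)
    for i in missing:
        filled[i] = Counter(ref[i] for ref in ranked[:k]).most_common(1)[0][0]
    return filled

reference = [
    [0, 1, 2, 1], [0, 1, 2, 0], [0, 1, 1, 0],
    [2, 0, 0, 2], [2, 0, 0, 1],
]
print(impute([0, 1, None, 0], reference))  # -> [0, 1, 2, 0]
```

The dedicated trio-aware algorithms the abstract contrasts with would additionally exploit Mendelian constraints between parents and offspring rather than treating samples as exchangeable.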

  5. APPLICATION OF THE PERFORMANCE SELECTION INDEX METHOD FOR SOLVING MACHINING MCDM PROBLEMS

    Directory of Open Access Journals (Sweden)

    Dušan Petković

    2017-04-01

    Full Text Available The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years, a number of different optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers also face a number of discrete machining optimization problems. In order to help decision makers solve this type of optimization problem, a number of multi-criteria decision making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that it is not necessary to determine criteria weights as in other MCDM methods. The applicability and effectiveness of the PSI method have been demonstrated by solving two case studies dealing with the machinability of materials and the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.
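The PSI method's appeal, as the abstract says, is that criterion weights fall out of the data instead of being elicited. A sketch of the steps as commonly described in the MCDM literature (normalize, measure preference variation per criterion, derive weights, score); the decision matrix below is our own illustration, not one of the paper's case studies.

```python
# Sketch of the performance selection index (PSI) procedure (illustrative data).

def psi_rank(matrix, beneficial):
    m, n = len(matrix), len(matrix[0])
    # 1) normalize: larger-is-better vs smaller-is-better criteria
    norm = [[0.0] * n for _ in range(m)]
    for j in range(n):
        col = [row[j] for row in matrix]
        for i in range(m):
            norm[i][j] = col[i] / max(col) if beneficial[j] else min(col) / col[i]
    # 2) preference variation per criterion, then its deviation from 1
    omega = []
    for j in range(n):
        mean_j = sum(norm[i][j] for i in range(m)) / m
        phi = sum((norm[i][j] - mean_j) ** 2 for i in range(m))
        omega.append(1.0 - phi)
    # 3) overall preference values act as data-driven criterion weights
    total = sum(omega)
    psi = [o / total for o in omega]
    # 4) performance selection index per alternative; higher is better
    return [sum(norm[i][j] * psi[j] for j in range(n)) for i in range(m)]

# Alternatives x criteria: [machinability index (benefit), tool wear (cost)]
scores = psi_rank([[80, 0.3], [65, 0.2], [90, 0.5]], beneficial=[True, False])
best = scores.index(max(scores))
print(scores, best)
```

Note how a criterion on which the alternatives barely differ gets a small preference variation and hence contributes a near-uniform weight, which is exactly the no-elicitation property the paper exploits.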

  6. Manufacturing Methods for Cutting, Machining and Drilling Composites. Volume 1. Composites Machining Handbook

    Science.gov (United States)

    1978-08-01

    [OCR fragments from the scanned handbook: hole diameters of 0.003 to 0.014 inch; a note that a hand-held cutting head has recently been marketed; section 4.5, "Reciprocating Mechanical Cutter"; equipment specifications for a Producto Machine Co. Model 4F cutter (1/4 inch chuck, 0-350 strokes/minute, 16,000 rpm router motor, hand feed, equipment reliability noted); and Figure 9-3, "Cutting Tool Cost Summary", comparing manual and N/C cutting tool costs, with costs varying with lot size and market conditions.]

  7. Application of Artificial Intelligence Methods of Tool Path Optimization in CNC Machines: A Review

    Directory of Open Access Journals (Sweden)

    Khashayar Danesh Narooei

    2014-08-01

    Full Text Available Today, in most metal machining processes, Computer Numerical Control (CNC) machine tools have become very popular due to their efficiency and repeatability in achieving high-accuracy positioning. One of the factors that govern productivity is the tool path travelled while cutting a workpiece. It has been proved that determination of optimal cutting parameters can enhance the machining results to reach high efficiency and minimize the machining cost. In various publications and articles, scientists and researchers have adapted several Artificial Intelligence (AI) methods or hybrid methods for tool path optimization, such as Genetic Algorithms (GA), Artificial Neural Networks (ANN), Artificial Immune Systems (AIS), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). This study presents a review of research on tool path optimization with different types of AI methods, showing the capability of using different optimization methods in the CNC machining process.

  8. An Improved Optimization Method for the Relevance Voxel Machine

    DEFF Research Database (Denmark)

    Ganz, Melanie; Sabuncu, M. R.; Van Leemput, Koen

    2013-01-01

    In this paper, we will re-visit the Relevance Voxel Machine (RVoxM), a recently developed sparse Bayesian framework used for predicting biological markers, e.g., presence of disease, from high-dimensional image data, e.g., brain MRI volumes. The proposed improvement, called IRVoxM, mitigates the ...

  9. Micro rotary machine and methods for using same

    Science.gov (United States)

    Stalford, Harold L [Norman, OK

    2012-04-17

    A micro rotary machine may include a micro actuator and a micro shaft coupled to the micro actuator. The micro shaft comprises a horizontal shaft and is operable to be rotated by the micro actuator. A micro tool is coupled to the micro shaft and is operable to perform work in response to motion of the micro shaft.

  10. Manufacturing methods for machining spring ends parallel at loaded length

    Science.gov (United States)

    Hinke, Patrick Thomas (Inventor); Benson, Dwayne M. (Inventor); Atkins, Donald J. (Inventor)

    1995-01-01

    A first end surface of a coiled compression spring at its relaxed length is machined to a plane transverse to the spring axis. The spring is then placed in a press structure having first and second opposed planar support surfaces, with the machined spring end surface bearing against the first support surface, the unmachined spring end surface bearing against a planar first surface of a lateral force compensation member, and an opposite, generally spherically curved surface of the compensation member bearing against the second press structure support surface. The spring is then compressed generally to its loaded length, and a circumferentially spaced series of marks, lying in a plane parallel to the second press structure support surface, are formed on the spring coil on which the second spring end surface lies. The spring is then removed from the press structure, and the second spring end surface is machined to the mark plane. When the spring is subsequently compressed to its loaded length the precisely parallel relationship between the machined spring end surfaces substantially eliminates undesirable lateral deflection of the spring.

  11. Method for producing fabrication material for constructing micrometer-scaled machines, fabrication material for micrometer-scaled machines

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, F.J.

    1995-12-31

    A method for producing fabrication material for use in the construction of nanometer-scaled machines is provided whereby similar protein molecules are isolated and manipulated at predetermined residue positions so as to facilitate noncovalent interaction, but without compromising the folding configuration or native structure of the original protein biomodules. A fabrication material is also provided consisting of biomodules systematically constructed and arranged at specific solution parameters.

  12. Effects of Machine Tool Configuration on Its Dynamics Based on Orthogonal Experiment Method

    Institute of Scientific and Technical Information of China (English)

    GAO Xiangsheng; ZHANG Yidu; ZHANG Hongwei; WU Qiong

    2012-01-01

    In order to analyze the influence of configuration parameters on the dynamic characteristics of machine tools in the working space, the configuration parameters have been suggested based on the orthogonal experiment method. Dynamic analysis of a milling machine, which is newly designed for producing turbine blades, has been conducted by utilizing the modal synthesis method. The finite element model is verified and updated by experimental modal analysis (EMA) of the machine tool. The result gained by the modal synthesis method is compared with the whole-model finite element method (FEM) result as well. According to the orthogonal experiment method, four configuration parameters of the machine tool are considered as four factors for the dynamic characteristics. The influence of the configuration parameters on the first three natural frequencies is obtained by range analysis. It is pointed out that configuration parameter is the most important factor affecting the fundamental frequency of machine tools, and configuration parameter has less effect on the lower-order modes of the system than others. The combination of configuration parameters which makes the fundamental frequency reach the maximum value is provided. Through demonstration, the conclusion can be drawn that the influence of configuration parameters on the natural frequencies of machine tools can be analyzed explicitly by the orthogonal experiment method, which offers a new method for estimating the dynamic characteristics of machine tools.
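The range analysis used above is mechanical enough to sketch: average the response at each level of each factor, then take range = max - min of the level means; the factor with the largest range dominates. The L4(2^3) layout and the frequency responses below are our illustrative assumptions.

```python
# Sketch of range analysis for an orthogonal experiment (illustrative data).

def range_analysis(runs, responses, n_factors):
    """For each factor, return max - min of the per-level mean responses."""
    ranges = []
    for f in range(n_factors):
        level_sums, level_counts = {}, {}
        for run, y in zip(runs, responses):
            lvl = run[f]
            level_sums[lvl] = level_sums.get(lvl, 0.0) + y
            level_counts[lvl] = level_counts.get(lvl, 0) + 1
        means = [level_sums[l] / level_counts[l] for l in level_sums]
        ranges.append(max(means) - min(means))
    return ranges

runs = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]   # L4(2^3) orthogonal array
responses = [120.0, 118.0, 132.0, 130.0]              # e.g. fundamental frequency, Hz
R = range_analysis(runs, responses, 3)
dominant = R.index(max(R))                            # factor with the largest range
print(R, dominant)
```

Because the array is orthogonal, each level mean averages over a balanced mix of the other factors' levels, which is what lets the ranges be read as per-factor influences.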

  13. Method and apparatus for mining machine cutter head

    Energy Technology Data Exchange (ETDEWEB)

    Mann, J.P.; Levenstein, V.M.; Fox, C.H.

    1987-03-10

    This patent describes a continuous mining machine having a frame assembly pivotally attached to the machine and a rotary cutting head mounted on a shaft on the forward end of the frame assembly and having cutting bits thereon for cutting loose mining material from the face of a mine and extending beyond each side of the frame assembly. The improvement described here comprises: a. at least one bearing mounted on the cutting head shaft for enabling rotation of the shaft, b. at least one C-shaped bearing support housing on the forward end of the frame assembly for receiving the bearing on the shaft, c. a matching removable C-shaped bearing support end cap for attachment to the frame assembly C-shaped bearing support housing for encompassing and supporting the shaft bearing for rotation of the shaft. This is used so that the shaft may be removed from the mining machine without lateral movement of the shaft by removing the matching removable C-shaped bearing support housing, and d. drive means coupled to the cutting head shaft to provide rotary motion thereto for cutting mining material.

  14. An inline surface measurement method for membrane mirror fabrication using two-stage trained Zernike polynomials and elitist teaching-learning-based optimization

    Science.gov (United States)

    Liu, Yang; Chen, Zhenyu; Yang, Zhile; Li, Kang; Tan, Jiubin

    2016-12-01

    The accuracy of surface measurement determines the manufacturing quality of membrane mirrors. Thus, an efficient and accurate measuring method is critical in membrane mirror fabrication. This paper formulates this measurement issue as a surface reconstruction problem and employs two-stage trained Zernike polynomials as an inline measuring tool to solve the optical surface measurement problem in the membrane mirror manufacturing process. First, all terms of the Zernike polynomial are generated and projected to a non-circular region as the candidate model pool. The training data are calculated according to the measured values of distance sensors and the geometrical relationship between the ideal surface and the installed sensors. Terms are then selected successively, each time minimizing the cost function. To avoid the ill-conditioned matrix inversion of the least squares method, the coefficient of each model term is obtained by modified elitist teaching-learning-based optimization. Subsequently, the measurement precision is further improved by a second stage of model refinement. Finally, every point on the membrane surface can be measured according to this model, providing the finer feedback information needed for precise control of membrane mirror fabrication. Experimental results confirm that the proposed method is effective in a membrane mirror manufacturing system driven by negative pressure, and that the measurement accuracy reaches 15 µm.
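
    The elitist teaching-learning-based optimization (TLBO) step can be sketched as below. The quadratic cost is an illustrative stand-in for the paper's surface-fit residual, and the population size, bounds, and update rules follow the generic TLBO scheme rather than the authors' exact implementation.

```python
import random

def elitist_tlbo(cost, dim, bounds, pop_size=20, iters=100, seed=0):
    """Minimize `cost` over a box; elitism keeps the best solution found."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=cost)[:]
    for _ in range(iters):
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        teacher = min(pop, key=cost)
        new_pop = []
        for x in pop:
            tf = rng.choice([1, 2])  # teaching factor
            # Teacher phase: move toward the teacher, away from the mean
            cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            cand = [min(max(c, lo), hi) for c in cand]
            x = cand if cost(cand) < cost(x) else x
            # Learner phase: learn from a randomly chosen peer
            peer = pop[rng.randrange(pop_size)]
            if cost(peer) < cost(x):
                cand = [x[d] + rng.random() * (peer[d] - x[d]) for d in range(dim)]
            else:
                cand = [x[d] + rng.random() * (x[d] - peer[d]) for d in range(dim)]
            cand = [min(max(c, lo), hi) for c in cand]
            x = cand if cost(cand) < cost(x) else x
            new_pop.append(x)
        pop = new_pop
        champ = min(pop, key=cost)
        if cost(champ) < cost(best):
            best = champ[:]
        else:
            # Elitism: reinject the best-so-far in place of the current worst
            worst = max(range(pop_size), key=lambda i: cost(pop[i]))
            pop[worst] = best[:]
    return best

# Illustrative quadratic cost standing in for the surface-fit residual
target = [0.5, -1.2, 2.0]
cost = lambda c: sum((ci - ti) ** 2 for ci, ti in zip(c, target))
sol = elitist_tlbo(cost, dim=3, bounds=(-5.0, 5.0))
```

    Because both phases accept a candidate only when it lowers the cost, and elitism preserves the best-so-far, the population converges toward the target coefficients without ever discarding progress.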

  15. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    Science.gov (United States)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used so as to avoid installation problems. A mathematical model for cutting-error estimation is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on gear cutting theory. To verify the effectiveness of the proposed model, it was tested in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during machining.

  16. Application of neural network method to process planning in ship pipe machining

    Institute of Scientific and Technical Information of China (English)

    ZHONG Yu-guang; QIU Chang-hua; SHI Dong-yan

    2004-01-01

    Based on an artificial neural network for process planning decisions in ship pipe manufacturing, a novel method is established by analyzing the process characteristics of ship pipe machining. The process knowledge of pipe machining is shifted from the expression of external rules to the description of internal network weights, so that the network inference engine can decide the process route of pipe machining rapidly and correctly. Simulations show that the method can solve process-decision problems and overcomes the drawbacks of "matching difficulty" and "combination explosion" in traditional intelligent CAPP based on symbolic reasoning.

  17. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Exploiting this characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed: deep learning neural network and support vector machine algorithms were applied and compared with the commonly used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.

  18. Machine Selection in A Dairy Product Company with Entropy and SAW Method Integration

    Directory of Open Access Journals (Sweden)

    Aşkın Özdağoğlu

    2017-07-01

    Full Text Available Machine selection is an important and difficult process for firms, and its results may generate more problems than anticipated. In order to find the best alternative, managers should define the requirements of the factory and determine the necessary criteria. On the other hand, the decision-making criteria for choosing the right equipment may vary according to the type of manufacturing facility, market requirements, and consumer-assigned criteria. This study aims to find the best machine alternative among three machine offerings according to twelve evaluation criteria by integrating the entropy method with the SAW method.
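
    The entropy-SAW integration can be sketched as follows; the three machine alternatives, the four criteria, and their values are invented for illustration (the study itself uses twelve criteria).

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from the Shannon entropy of each column:
    low-entropy (high-dispersion) criteria receive larger weights."""
    m = len(matrix)
    k = 1.0 / math.log(m)
    weights = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        weights.append(1.0 - e)  # degree of diversification
    s = sum(weights)
    return [w / s for w in weights]

def saw_scores(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each column, then weighted sum."""
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            if benefit[j]:
                r = x / max(cols[j])      # benefit criterion: larger is better
            else:
                r = min(cols[j]) / x      # cost criterion: smaller is better
            s += weights[j] * r
        scores.append(s)
    return scores

# Hypothetical machines x criteria (capacity, precision, price, energy use)
matrix = [
    [120.0, 0.8,  95000.0, 14.0],
    [100.0, 0.9,  80000.0, 12.0],
    [140.0, 0.7, 110000.0, 16.0],
]
benefit = [True, True, False, False]  # price and energy use are cost criteria
w = entropy_weights(matrix)
scores = saw_scores(matrix, w, benefit)
best = max(range(len(scores)), key=lambda i: scores[i])
```

    The alternative with the highest SAW score is selected; the entropy stage replaces subjective weighting with weights derived from the dispersion of the data itself.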

  19. Assessment Method of Heavy NC Machine Reliability Based on Bayes Theory

    Institute of Scientific and Technical Information of China (English)

    张雷; 王太勇; 胡占齐

    2016-01-01

    It is difficult to collect prior information for small-sample machinery products when their reliability is assessed using the Bayes method. In this study, an improved Bayes method with gradient reliability (GR) results as prior information is proposed to solve this problem. A certain type of heavy NC boring and milling machine was considered as the research subject, and its reliability model was established on the basis of its functional and structural characteristics and working principle. According to the stress-intensity interference theory and reliability model theory, the GR results of the host machine and its key components were obtained. The GR results were then used as prior information to estimate the probabilistic reliability (PR) of the spindle box, the column and the host machine. Comparative studies demonstrated that the improved Bayes method is applicable to the reliability assessment of heavy NC machine tools.

  20. IDENTIFICATION AND MONITORING OF NOISE SOURCES OF CNC MACHINE TOOLS BY ACOUSTIC HOLOGRAPHY METHODS

    Directory of Open Access Journals (Sweden)

    Jerzy Józwik

    2016-06-01

    Full Text Available The paper presents an analysis of the sound field emitted by selected CNC machine tools. Noise sources and levels were identified by acoustic holography for the 3-axis DMC 635eco machine tool and the 5-axis vertical machining centre DMU 65 monoBlock. The acoustic holography method allows precise identification and measurement of noise sources in different frequency bands. Detecting the noise sources of the tested objects allows diagnosis of their technical condition, as well as the choice of effective means of noise reduction, which is highly significant for minimising noise at the CNC machine operator workstation. Test results are presented as acoustic maps in various frequency ranges. The noise sources of the machine tool itself were identified, as well as the range of noise influence and the most frequent places of reflections and their span. The measurement results are presented in figures and diagrams.

  1. A Novel Tetrahedral Mesh Generation Method for Rotating Machines Including End-Coil Region

    OpenAIRE

    Yamashita, Hideo; Yamaji, Akihisa; Cingoski, Vlatko; Kaneda, Kazufumi

    1996-01-01

    In this paper, a novel method for generating tetrahedral finite-element meshes suitable for 3-D finite element analysis of rotating machines is presented. The proposed method enables the easy development of 3-D meshes for various rotating machines, especially in the end-coil region and the surrounding air region. Tessellation of the 3-D region is made possible by simple extension of a previously generated 2-D triangular mesh, used as a model mesh, into the third dimension.

  2. A Review for Detecting Gene-Gene Interactions Using Machine Learning Methods in Genetic Epidemiology

    Directory of Open Access Journals (Sweden)

    Ching Lee Koo

    2013-01-01

    Full Text Available Recently, the greatest statistical computational challenge in genetic epidemiology has been to identify and characterize the genes that interact with other genes and with environmental factors to affect complex multifactorial diseases. These gene-gene interactions are also denoted as epistasis, a phenomenon that cannot be resolved by traditional statistical methods due to the high dimensionality of the data and the occurrence of multiple polymorphisms. Hence, several machine learning methods can address such problems by identifying susceptibility genes in common multifactorial diseases, including neural networks (NNs), support vector machines (SVMs), and random forests (RFs). This paper gives an overview of these machine learning methods, describing the methodology of each and its application in detecting gene-gene and gene-environment interactions. Lastly, the paper discusses the strengths and weaknesses of each machine learning method in detecting gene-gene interactions in complex human disease.

  3. Support vector machine method for forecasting future strong earthquakes in Chinese mainland

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Statistical learning theory is a theory of statistics for small samples, and the support vector machine is a new machine learning method based on it. The support vector machine not only addresses problems common to many learning methods, such as small samples, overfitting, high dimensionality and local minima, but also has a higher generalization (forecasting) ability than artificial neural networks. The strong earthquakes in the Chinese mainland are related, to a certain extent, to the intensive seismicity along the main plate boundaries of the world; however, the relation is nonlinear. In this paper, we study this unclear relation with the support vector machine method for the purpose of forecasting strong earthquakes in the Chinese mainland.

  4. New method for computer numerical control machine tool calibration: Relay method

    Institute of Scientific and Technical Information of China (English)

    LIU Huanlao; SHI Hanming; LI Bin; ZHOU Huichen

    2007-01-01

    The relay measurement method, which uses the kilogram-meter (KGM) measurement system to identify volumetric errors on the planes of computer numerical control (CNC) machine tools, is verified through experimental tests. During the process, all position errors on the entire plane table are measured by equipment whose measuring field is limited to a small area. The error of the basic position near the origin is measured first; on the basis of that positional error, the positional errors farther from the origin are measured. Continuing by this analogy, error information for the positional points on the entire plane can be obtained; this process is called the relay method. Test results indicate that the accuracy and repeatability are high, and the method can be used to calibrate geometric errors on the plane of CNC machine tools after backlash errors have been well compensated.
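
    The chaining idea behind the relay method can be sketched as follows; the segment errors below are invented for illustration, not measured values.

```python
from itertools import accumulate

def relay_positions(segment_errors, base_error=0.0):
    """Chain locally measured errors: the absolute positional error at each
    grid point is the base-point error plus the accumulated relative errors
    measured segment by segment within the small measuring field."""
    return list(accumulate(segment_errors, initial=base_error))

# Hypothetical relative errors (mm) between successive measuring fields
segments = [0.001, -0.0005, 0.002]
errors = relay_positions(segments)
```

    Each new field is referenced to the previous one, so measurement uncertainty accumulates along the chain; this is why the paper stresses high repeatability of the individual measurements.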

  5. Machine learning methods for detection of dust from Meteosat imagery

    Science.gov (United States)

    Kolios, Stavros; Hatzianastassiou, Nikos

    2017-04-01

    Dust and sand storms can create potentially hazardous air quality conditions and adversely affect human health and climate on regional and worldwide scales by modifying the shortwave and longwave radiation budgets. The indirect effects of dust are also significant because dust modifies cloud and precipitation properties and influences the general circulation of the atmosphere. In addition, consideration of dust has been shown to improve the weather forecasting ability of models. For these reasons, there is a strong and increasing interest in real-time dust detection and monitoring, as well as in dust load estimation, from satellite observations, which offer the best solution to the problem. Indeed, remote sensing has been shown to be a valuable tool for detecting, mapping and forecasting dust events, and satellite remote sensing is also useful in providing long-term, global observations of dust. Nevertheless, the majority of approaches to dust detection and monitoring are still based on simple thresholding of multispectral satellite imagery. This study investigates the efficiency of machine learning techniques in accurately classifying different cloud features in Meteosat imagery and detecting dust in different atmospheric layers over the greater Mediterranean basin. More specifically, different Support Vector Machine (SVM) and Artificial Neural Network (ANN) schemes are tested to determine the most appropriate parameterization of the examined classification schemes. The training samples are collected by spatiotemporally correlating AERONET station measurements with Meteosat images. The efficiency of the examined algorithms is also tested against AERONET station data in selected cases. This study is a first step toward the development of an integrated methodology for accurate detection, monitoring and estimation of dust using exclusively satellite imagery.

  7. A reliability assessment method based on support vector machines for CNC equipment

    Institute of Scientific and Technical Information of China (English)

    WU Jun; DENG Chao; SHAO XinYu; XIE S Q

    2009-01-01

    With the applications of high technology, a catastrophic failure of CNC equipment rarely occurs at normal operation conditions. So it is difficult for traditional reliability assessment methods based on time-to-failure distributions to deduce the reliability level. This paper presents a novel reliability assessment methodology to estimate the reliability level of equipment with machining performance degradation data when only a few samples are available. The least squares support vector machines(LS-SVM) are introduced to analyze the performance degradation process on the equipment. A two-stage parameter optimization and searching method is proposed to improve the LS-SVM regression performance and a reliability assessment model based on the LS-SVM is built. A machining performance degradation experiment has been carried out on an OTM650 machine tool to validate the effectiveness of the proposed reliability assessment methodology.

  8. Dynamic Evaluation Model and Application Methods for Engineering Machine Maintenance Quality

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Yan-feng; DAI Ling; WANG Xi

    2012-01-01

    Keeping engineering machines in good condition is an important part of equipment management. Based on component technology and the grey relational algorithm, the requirements and procedures of the engineering machine maintenance prediction process are analyzed, and a supporting evaluation system is provided. The qualitative and quantitative indexes of the evaluation process are fully taken into consideration to provide scientific methods and means for proper evaluation and decision-making.

  9. Method and apparatus for improving the quality and efficiency of ultrashort-pulse laser machining

    Science.gov (United States)

    Stuart, Brent C.; Nguyen, Hoang T.; Perry, Michael D.

    2001-01-01

    A method and apparatus for improving the quality and efficiency of machining of materials with laser pulse durations shorter than 100 picoseconds by orienting and maintaining the polarization of the laser light such that the electric field vector is perpendicular to the edges of the material being processed. It can be used in any machining operation requiring remote delivery and/or high precision with minimal collateral damage.

  10. Method of determining the process applied for feature machining : experimental validation of a slot

    OpenAIRE

    Martin, Patrick; D'ACUNTO, Alain

    2007-01-01

    International audience; In this paper, we evaluate the "manufacturability" levels of several machining processes for the "slot" feature. Using the STEP standard, we identify the slot feature characteristics. Then, using the ascendant generation of process method, we define the associated milling process. The expertise is based on a methodology relative to the experiment plans carried out during the formalization and systematic evaluation of the machining process associated wit...

  11. Evaluation of machining methods for trabecular metal implants in a rabbit intramedullary osseointegration model.

    Science.gov (United States)

    Deglurkar, Mukund; Davy, Dwight T; Stewart, Matthew; Goldberg, Victor M; Welter, Jean F

    2007-02-01

    Implant success is dependent in part on the interaction of the implant with the surrounding tissues. Porous tantalum implants (Trabecular Metal, TM) have been shown to have excellent osseointegration. Machining this material to complex shapes with close tolerances is difficult because of its open structure and the ductile nature of metallic tantalum. Conventional machining results in occlusion of most of the surface porosity by the smearing of soft metal. This study compared TM samples finished by three processing techniques: conventional machining, electrical discharge machining, and nonmachined, "as-prepared." The TM samples were studied in a rabbit distal femoral intramedullary osseointegration model and in cell culture. We assessed the effects of these machining methods at 4, 8, and 12 weeks after implant placement. The finishing technique had a profound effect on the physical presentation of the implant interface: conventional machining reduced surface porosity to 30% compared to bulk porosities in the 70% range. Bone ongrowth was similar in all groups, while bone ingrowth was significantly greater in the nonmachined samples. The resulting mechanical properties of the bone implant-interface were similar in all three groups, with only interface stiffness and interface shear modulus being significantly higher in the machined samples.

  12. Vibration Prediction Method of Electric Machines by using Experimental Transfer Function and Magnetostatic Finite Element Analysis

    Science.gov (United States)

    Saito, A.; Kuroishi, M.; Nakai, H.

    2016-09-01

    This paper concerns the noise and structural vibration caused by rotating electric machines. Special attention is given to the magnetic-force-induced vibration response of interior permanent magnet machines. In general, to accurately predict and control the vibration response caused by electric machines, it is essential to model not only the magnetic force induced by the fluctuation of magnetic fields, but also the structural dynamic characteristics of the electric machines and the surrounding structural components. However, due to the complicated boundary conditions and material properties of the components, such as laminated magnetic cores and varnished windings, it has been a challenge to compute an accurate vibration response even after physical models of the machines become available. In this paper, we propose a highly accurate vibration prediction method that couples experimentally obtained discrete structural transfer functions with numerically obtained distributed magnetic forces. The proposed vibration synthesis methodology has been applied to predict the vibration responses of an interior permanent magnet machine. The results show that the predicted vibration response of the electric machine agrees very well with the measured vibration response for several load conditions, over wide frequency ranges.

  13. On-line estimation of laser-drilled hole depth using a machine vision method.

    Science.gov (United States)

    Ho, Chao-Ching; He, Jun-Jia; Liao, Te-Ying

    2012-01-01

    The paper presents a novel method for monitoring and estimating the depth of a laser-drilled hole using machine vision. Through on-line image acquisition and analysis in laser machining processes, we could simultaneously obtain correlations between the machining processes and analyzed images. Based on the machine vision method, the depths of laser-machined holes could be estimated in real time. Therefore, a low cost on-line inspection system is developed to increase productivity. All of the processing work was performed in air under standard atmospheric conditions and gas assist was used. A correlation between the cumulative size of the laser-induced plasma region and the depth of the hole is presented. The result indicates that the estimated depths of the laser-drilled holes were a linear function of the cumulative plasma size, with a high degree of confidence. This research provides a novel machine vision-based method for estimating the depths of laser-drilled holes in real time.
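
    The linear relation reported above between cumulative plasma size and hole depth can be sketched with an ordinary least-squares fit; the calibration pairs below are invented for illustration, not the paper's measurements.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed-form solution)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Hypothetical calibration pairs: cumulative plasma size (px) vs depth (um)
plasma = [1200.0, 2500.0, 3900.0, 5100.0, 6400.0]
depth = [55.0, 118.0, 181.0, 240.0, 300.0]
a, b = fit_line(plasma, depth)

# On-line estimate for a new cumulative plasma size
estimate = lambda size: a * size + b
```

    Once calibrated, each newly acquired frame only needs its plasma region accumulated and plugged into `estimate` to read out the current hole depth in real time.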

  15. New machining method of high precision infrared window part

    Science.gov (United States)

    Yang, Haicheng; Su, Ying; Xu, Zengqi; Guo, Rui; Li, Wenting; Zhang, Feng; Liu, Xuanmin

    2016-10-01

    The spherical shell of a multifunctional photoelectric instrument is usually designed with multiple optical channels to suit the different bands of its sensors, mainly TV, laser and infrared channels. Without affecting the optical aperture, wind resistance and aerodynamic performance of the optical system, the overall layout of the spherical shell was optimized to save space and reduce weight. Most of the optical windows have special, non-standard shapes; each optical window participates directly in the high-resolution imaging of the corresponding sensor system, and the optical-axis parallelism of the sensors must meet an accuracy requirement of 0.05 mrad. The machining quality of the optical window parts therefore directly affects the pointing accuracy and interchangeability of the photoelectric system. Processing and testing of the TV and laser windows are very mature, whereas infrared window parts, because of their special material properties of transparency and high refractive index, present problems of imaging quality and of controlling the minimum focal length and second-level parallelism during processing. Based on years of practical experience, this paper focuses on how to control the surface form and parallelism precision of infrared window parts during processing. The single-pass yield was increased from 40% to more than 95% and the processing efficiency was significantly enhanced, effectively solving a bottleneck problem in research and production.

  16. Multi-method automated diagnostics of rotating machines

    Science.gov (United States)

    Kostyukov, A. V.; Boychenko, S. N.; Shchelkanov, A. V.; Burda, E. A.

    2017-08-01

    The automated machinery diagnostics and monitoring systems used within petrochemical plants are an integral part of the measures taken to ensure the safety and, as a consequence, the efficiency of these industrial facilities. Such systems are often limited in their functionality due to the specifics of the diagnostic techniques adopted. As the diagnostic techniques applied in each system are limited, and machinery defects can have a different physical nature, it becomes necessary to combine several diagnostics and monitoring systems to control the various machinery components. Such an approach is inconvenient, since it requires additional measures to bring the diagnostic results into a single view of the technical condition of production assets. Here, a production facility means a bonded complex of a process unit, a drive, a power source and lines; a failure of any of these components will cause an outage of the production asset, which is unacceptable. The purpose of the study is to test the combined use of vibration diagnostics and partial discharge techniques within enterprise diagnostic systems for automated control of the technical condition of rotating machinery during maintenance and at production facilities. The described solutions allow the condition of the mechanical and electrical components of rotating machines to be controlled. It is shown that the functionality of the diagnostics systems can be expanded with minimal changes to the technological chains of repair and operation of rotating machinery. Automation of such systems reduces the influence of the human factor on the quality of repair and diagnostics of the machinery.

  17. MEDLINE MeSH Indexing: Lessons Learned from Machine Learning and Future Directions

    DEFF Research Database (Denmark)

    Jimeno-Yepes, Antonio; Mork, James G.; Wilkowski, Bartlomiej

    2012-01-01

    Map and a k-NN approach called PubMed Related Citations (PRC). Our motivation is to improve the quality of MTI based on machine learning. Typical machine learning approaches fit this indexing task into text categorization. In this work, we have studied some Medical Subject Headings (MeSH) recommended by MTI and analyzed the issues when using standard machine learning algorithms. We show that in some cases machine learning can improve the annotations already recommended by MTI, that machine learning based on low variance methods achieves better performance, and that each MeSH heading presents a different behavior...

  18. Numerical-Analytical Method for Magnetic Field Computation in Rotational Electric Machines

    Institute of Scientific and Technical Information of China (English)

    章跃进; 江建中; 屠关镇

    2003-01-01

    In this paper, a numerical-analytical method is applied to the two-dimensional magnetic field computation in rotational electric machines. The analytical expressions for the air gap magnetic field are derived. The pole pairs in the expressions are taken into account so that the solution region can be reduced to one periodic range. The numerical and analytical magnetic field equations are linked through equal vector magnetic potential boundary conditions. The magnetic field of a brushless permanent magnet machine is computed by the proposed method, and the result is compared to that obtained by the finite element method so as to validate the correctness of the method.

  19. NEW FEATURE SELECTION METHOD IN MACHINE FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    Wang Xinfeng; Qiu Jing; Liu Guanjun

    2005-01-01

    To address the deficiencies of filter and wrapper feature selection methods, a new method combining the filter and wrapper approaches is proposed. The method first filters the original features to form a feature subset that meets the required classification accuracy, and then applies a wrapper method to select the optimal feature subset. The genetic algorithm (GA), a successful technique for solving optimization problems, is applied to the problem of optimal feature selection. In data simulations and in an experiment on bearing fault feature selection, the composite method saves several times the computing time of the wrapper method while maintaining its classification accuracy. The method therefore possesses excellent optimization properties, saves selection time, and offers high accuracy and high efficiency.
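
    The composite filter-then-wrapper selection with a GA can be sketched as follows; the relevance scores and the additive fitness below are toy stand-ins for a real filter criterion and a classifier's cross-validated accuracy.

```python
import random

def filter_stage(scores, keep):
    """Filter step: keep the `keep` highest-scoring feature indices."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:keep]

def ga_select(candidates, fitness, pop_size=12, gens=30, seed=1):
    """Wrapper step: GA over bit-masks of the filtered candidate features."""
    rng = random.Random(seed)
    n = len(candidates)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fit(mask):
        return fitness([c for c, b in zip(candidates, mask) if b])

    for _ in range(gens):
        pop.sort(key=fit, reverse=True)
        survivors = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:             # bit-flip mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fit)
    return [c for c, b in zip(candidates, best) if b]

# Toy setup: per-feature "relevance" scores; the wrapper fitness rewards
# relevant features and penalizes subset size (a stand-in for accuracy).
relevance = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.02, 0.6]
candidates = filter_stage(relevance, keep=6)
fitness = lambda subset: sum(relevance[i] for i in subset) - 0.15 * len(subset)
selected = ga_select(candidates, fitness)
```

    The filter stage shrinks the search space before the expensive wrapper stage runs, which is exactly where the reported severalfold saving in computing time comes from.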

  20. A Method For Producing Hollow Shafts By Rotary Compression Using A Specially Designed Forging Machine

    Directory of Open Access Journals (Sweden)

    Tomczak J.

    2015-09-01

    Full Text Available The paper presents a new method for manufacturing hollow shafts, in which tubes are used as billets. First, the design of a specially designed forging machine for rotary compression is described. The machine is then numerically tested with regard to its strength, and the effect of elastic strains of the roll system on the quality of produced parts is determined. The machine's strength is calculated by the finite element method using the NX Nastran program, and the technological capabilities of the machine are determined. Next, the results of finite element modeling of the rotary compression process for hollow stepped shafts are given; the process was modeled using the Simufact.Forming simulation program. The FEM results are then verified experimentally in the designed forging machine for rotary compression. The experimental results confirm that axisymmetric hollow shafts can be produced by the rotary compression method, and that numerical methods are suitable for investigating both machine design and metal forming processes.

  1. A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines

    Energy Technology Data Exchange (ETDEWEB)

    Meza, Juan C.; Woods, Mark

    2009-12-18

    Machine or statistical learning is a growing field that encompasses many scientific problems including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data' , with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three supernova type 1a data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.

  2. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods...

  3. Comparative aspects about the studying methods of cast irons machinability, based on the tool wear

    Science.gov (United States)

    Carausu, C.; Pruteanu, O.

    2016-08-01

    The paper presents some of the authors' considerations regarding methods for studying the machinability of cast irons, based on tool wear in drilling operations. The conditions under which the experimental research was conducted are described, intended to offer an overview of the drilling machinability of several categories of cast irons. A comparison between long-term and short-term methods is presented for determining the optimal cutting speed of a grey cast iron with lamellar graphite and average tensile strength. The research methodology, the obtained results, and the conclusions drawn from the analysis of the results are described.

  5. A Multilevel Design Method of Large-scale Machine System Oriented Network Environment

    Institute of Scientific and Technical Information of China (English)

    LI Shuiping; HE Jianjun

    2006-01-01

    The design of a large-scale machine system is a very complex problem. Such design problems usually have many design variables and constraints, which makes them difficult to solve rapidly and efficiently by conventional methods. In this paper, a new multilevel design method oriented to a network environment is proposed, which maps the design problem of a large-scale machine system into a hypergraph with a degree of linking strength (DLS) between vertices. By decomposing this hypergraph, the method divides the complex design problem into small, simple subproblems that can be solved concurrently in a network.

  6. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, and the sensitivity and specificity of each model were computed. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
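
    The evaluation protocol described above can be sketched as follows: four classifiers compared by 10-fold cross-validation, with a paired t-statistic computed on the per-fold accuracies. The data set here is synthetic (in the paper it is a student survey), and all parameters are illustrative.

```python
# Compare four classifiers by 10-fold CV and a paired t-statistic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=30, random_state=1)
models = {
    "ann": MLPClassifier(max_iter=2000, random_state=1),
    "cart": DecisionTreeClassifier(random_state=1),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}

# paired t-statistic between ANN and k-NN fold accuracies
d = scores["ann"] - scores["knn"]
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print({k: round(v.mean(), 3) for k, v in scores.items()}, "t =", round(t, 2))
```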

  7. Space cutter compensation method for five-axis nonuniform rational basis spline machining

    Directory of Open Access Journals (Sweden)

    Yanyu Ding

    2015-07-01

    Full Text Available In view of the good machining performance of traditional three-axis nonuniform rational basis spline interpolation and the space cutter compensation issue in multi-axis machining, this article presents a triple nonuniform rational basis spline five-axis interpolation method, which uses three nonuniform rational basis spline curves to describe the cutter center location, cutter axis vector, and cutter contact point trajectory, respectively. The relative position of the cutter and workpiece is calculated in the workpiece coordinate system, and the cutter machining trajectory can be described precisely and smoothly using this method. For discretization, the three nonuniform rational basis spline curves are transformed into a 12-dimensional Bézier curve. With the cutter contact point trajectory as the precision control condition, the discretization is fast. For different cutters and corners, a complete description method of the space cutter compensation vector is presented. Finally, the five-axis nonuniform rational basis spline machining method is verified on a two-turntable five-axis machine.

  8. STUDY ON NEW METHOD OF IDENTIFYING GEOMETRIC ERROR PARAMETERS FOR NC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The methods of identifying geometric error parameters for NC machine tools are introduced. After analyzing and comparing the different methods, a new method, the 9-line displacement method, is developed based on the theory of movement errors of multibody systems (MBS). Experiments are also conducted to obtain the 21 geometric error parameters using error identification software based on the new method.

  9. Recent progresses in the exploration of machine learning methods as in-silico ADME prediction tools.

    Science.gov (United States)

    Tao, L; Zhang, P; Qin, C; Chen, S Y; Zhang, C; Chen, Z; Zhu, F; Yang, S Y; Wei, Y Q; Chen, Y Z

    2015-06-23

    In-silico methods have been explored as potential tools for assessing ADME and ADME regulatory properties particularly in early drug discovery stages. Machine learning methods, with their ability in classifying diverse structures and complex mechanisms, are well suited for predicting ADME and ADME regulatory properties. Recent efforts have been directed at the broadening of application scopes and the improvement of predictive performance with particular focuses on the coverage of ADME properties, and exploration of more diversified training data, appropriate molecular features, and consensus modeling. Moreover, several online machine learning ADME prediction servers have emerged. Here we review these progresses and discuss the performances, application prospects and challenges of exploring machine learning methods as useful tools in predicting ADME and ADME regulatory properties.

  10. A Novel Cogging Torque Simulation Method for Permanent-Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2011-12-01

    Full Text Available Cogging torque exists between rotor-mounted permanent magnets and stator teeth due to magnetic attraction, and this undesired phenomenon produces output ripple, vibration and noise in machines. The purpose of this paper is to study the existence and effects of cogging torque, and to present a novel, rapid, half-magnet-pole-pair technique for forecasting and evaluating cogging torque. The technique uses the finite element method as well as Matlab research and development software tools to reduce the computing load and simulation time. An example of a rotor-skewed structure used to reduce the cogging torque of permanent magnet synchronous machines is evaluated and compared with a conventional analysis method for the same motor to verify the effectiveness of the proposed approach. The novel method is shown to be valuable and suitable for large-capacity machine design.

  11. The Electrochemical Machining Analysis of Aeroengine Blade Based on Isogeometric Method

    Directory of Open Access Journals (Sweden)

    Neng Wan

    2015-02-01

    Full Text Available Electrochemical machining is an important method for manufacturing aeroengine blades. Analysis of the electric field in the machining gap is the basis of cathode design. To solve the low-precision problem at sensitive boundaries caused by traditional numerical analysis methods, this paper proposes to improve the analysis precision by using isogeometric analysis. NURBS basis functions are used to replace the Lagrange basis functions in establishing the solving equations for the electrochemical machining gap. The noninterpolating nature of NURBS basis functions, which could introduce errors when imposing Dirichlet boundary conditions, is addressed. Finally, the superiority of isogeometric analysis, in both precision and rate of convergence, is demonstrated by a comparison test.

  12. The Qualitative Research Method of Dynamics Vibration of a Washing Machine Based on Riccati Equation

    Directory of Open Access Journals (Sweden)

    Sergey P. Petrosov

    2012-09-01

    Full Text Available An accurate method was developed for finding the general solutions of a weakly interconnected system of nonhomogeneous differential equations with variable coefficients, based on the Riccati equation. The method describes the vibration dynamics of the suspended drum of a washing machine in spin mode.

  13. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques, with an application in rainfall-runoff modelling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the
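
    The core idea of the abstract can be sketched in a few lines: estimate the model-error distribution separately for different "situations" (here, simple bins of the input variable) and use its empirical quantiles as a prediction interval. The toy model, noise structure, and binning are illustrative assumptions, not the authors' implementation.

```python
# Per-situation empirical prediction intervals from model errors.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 2000)
y_true = np.sin(x) + rng.normal(0, 0.1 + 0.05 * x)   # noise grows with x
y_model = np.sin(x)                                   # deterministic model
err = y_true - y_model

# step 1: group errors by situation (three bins of x)
edges = np.quantile(x, [0, 1 / 3, 2 / 3, 1])
bin_id = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, 2)

# step 2: per-situation empirical 90% interval of the model error
intervals = {b: (np.quantile(err[bin_id == b], 0.05),
                 np.quantile(err[bin_id == b], 0.95)) for b in range(3)}

# the interval widens with x, reflecting the heteroscedastic error
widths = [intervals[b][1] - intervals[b][0] for b in range(3)]
print(widths)
```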

  14. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    Full Text Available In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. Using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program makes the CNC part program generation process easy. The program was developed in the BASIC 6.0 programming language, while the material and cutting tool databases were built and supported with ACCESS 7.0.

  15. MODAL ANALYSIS OF CARRIER SYSTEM FOR HEAVY HORIZONTAL MULTIFUNCTION MACHINING CENTER BY FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    Yu. V. Vasilevich

    2014-01-01

    Full Text Available The aim of the paper is to reveal and analyze the resonance modes of a large-scale milling-drilling-boring machine. The machine has a movable column with a vertical slot occupied by a symmetrical carriage with a horizontal ram. The static rigidity of the machine is relatively low due to its large dimensions, so it is necessary to assess possible vibration activity. Virtual and operational trials of the machine have been carried out simultaneously. Modeling has been executed with the help of the finite element method (FEM). The FEM model takes into account not only the rigidity of the machine structures but also the flexibility of bearings, feed drive systems and guides. Modal FEM analysis has revealed eight resonance modes that embrace the whole machine tool. They form a frequency interval from 12 to 75 Hz, which is undesirable for machining. Three closely located resonances (31-37 Hz) are considered the most dangerous ones. They represent various combinations of three simple motions: vertical oscillations of the carriage, horizontal vibrations of the ram, and column torsion. The reliability of the FEM estimations has been proved by in-situ vibration measurements. A stabilization effect of the resonance modes has been detected while varying design parameters of the machine tool. For example, a virtual replacement of cast iron by steel in machine structures has practically no effect on resonance frequencies. A rigidity increase in some parts (e.g. the ram) also has a small effect on the resonance pattern. On the other hand, resonance stability makes it possible to avoid resonances when selecting the spindle rotation frequency. It is recommended to set double feed drives for all axes. A pair of vertical screws prevents a "pecking" resonance of the carriage at a frequency of 54 Hz. It is necessary to foresee operation of the main drive of such a heavy machine tool in the above-resonance interval, with a spindle frequency of more than 75 Hz. For this purpose it is necessary

  16. One method for life time estimation of a bucket wheel machine for coal moving

    Science.gov (United States)

    Vîlceanu, Fl; Iancu, C.

    2016-08-01

    The rehabilitation of outdated equipment whose lifetime has expired, or is in its final period, together with the high investment cost of replacement, makes efforts to extend its life rational. Rehabilitation involves checking operational safety, based on relevant expertise of the load-bearing metal structures, and assessing the residual lifetime. Bucket wheel machines constitute basic equipment at the coal stockyards of power plants. The remaining life can be estimated by checking the loading on the most stressed subassembly by finite element analysis of a welding detail. The paper presents, step by step, the calculation method applied to establish the residual lifetime of a bucket wheel machine for coal moving using non-destructive methods of study (fatigue cracking analysis + FEA). In order to establish the actual state of the machine and the areas subject to study, FEA of this mining equipment was performed on the geometric model of the analyzed mechanical structures with powerful CAD/FEA programs. By applying the method, the residual lifetime can be calculated by extending the results from the most stressed area of the equipment to the entire machine, thus saving time and money on expensive replacements.

  17. Selection Of Cutting Inserts For Aluminum Alloys Machining By Using MCDM Method

    Science.gov (United States)

    Madić, Miloš; Radovanović, Miroslav; Petković, Dušan; Nedić, Bogdan

    2015-07-01

    Machining of aluminum and its alloys requires the use of cutting tools with special geometry and material. Since there exist a number of cutting tools for aluminum machining, each with unique characteristics, selection of the most appropriate cutting tool for a given application is a very complex task which can be viewed as a multi-criteria decision making (MCDM) problem. This paper is focused on multi-criteria analysis of VCGT cutting inserts for aluminum alloy turning by applying a recently developed MCDM method, the weighted aggregated sum product assessment (WASPAS) method. The MCDM model was defined using the available catalogue data from cutting tool manufacturers.
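
    The WASPAS ranking step can be sketched compactly: normalize the decision matrix, compute the weighted sum model (WSM) and weighted product model (WPM) scores, and combine them with a trade-off parameter λ. The inserts, criteria, weights and values below are invented for illustration, not the paper's catalogue data.

```python
# WASPAS on a made-up decision matrix (rows: inserts, columns: criteria).
rows = ["insert_A", "insert_B", "insert_C"]
X = [[35.0, 0.8, 4], [40.0, 0.6, 5], [30.0, 0.9, 3]]  # e.g. cost, wear, rating
benefit = [False, False, True]   # True: larger is better; False: smaller
w = [0.4, 0.3, 0.3]              # criteria weights
lam = 0.5                        # WSM/WPM trade-off

cols = list(zip(*X))
# linear normalization: v/max for benefit criteria, min/v for cost criteria
norm = [[(v / max(c) if b else min(c) / v) for v in c]
        for c, b in zip(cols, benefit)]
scores = []
for i, name in enumerate(rows):
    wsm = sum(w[j] * norm[j][i] for j in range(3))
    wpm = 1.0
    for j in range(3):
        wpm *= norm[j][i] ** w[j]
    scores.append((name, lam * wsm + (1 - lam) * wpm))
ranking = sorted(scores, key=lambda t: -t[1])
print(ranking)
```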

  18. An Universal Modeling Method for Enhancing the Volumetric Accuracy of CNC Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The volumetric error modeling method is an important technique for enhancing the accuracy of CNC machine tools by error compensation. In this research field, the main question is how to find a universal kinematic modeling method for different kinds of NC machine tools. Multi-body system theory is often used to solve the dynamics problems of complex physical systems, but until now the error factors that always exist in practical systems have not been considered. In this paper, the accuracy kinematics of MB...

  19. Detecting Milling Deformation in 7075 Aluminum Alloy Aeronautical Monolithic Components Using the Quasi-Symmetric Machining Method

    Directory of Open Access Journals (Sweden)

    Qiong Wu

    2016-04-01

    Full Text Available The deformation of aeronautical monolithic components due to CNC machining is a bottleneck issue in the aviation industry. Residual stress is released and redistributed in the process of material removal, and distortion of the monolithic component is generated. The traditional one-side machining method produces oversized deformation. Based on the three-stage CNC machining method, the quasi-symmetric machining method is developed in this study to reduce deformation through symmetric material removal, using the M-symmetry distribution law of residual stress. The mechanism of milling deformation due to residual stress is investigated. A deformation experiment was conducted using the traditional one-side machining method and the quasi-symmetric machining method for comparison with the finite element method (FEM). The deformation parameters are validated by the comparative results, with most errors within 10%; the sources of these errors are determined to improve the reliability of the method. Moreover, the maximum deformation value obtained with the quasi-symmetric machining method is within 20% of that of the traditional one-side machining method. This result shows that the quasi-symmetric machining method is effective in reducing deformation caused by residual stress. Thus, this research introduces an effective method for reducing the deformation of monolithic thin-walled components in the CNC milling process.

  20. Multi-Stage Convex Relaxation Methods for Machine Learning

    Science.gov (United States)

    2013-03-01

    Compared with a single-stage convex relaxation such as the Lasso (L1 regularization), the multi-stage convex relaxation method proceeds iteratively: initialize v̂ = 1, then repeat two steps until convergence, estimating ŵ from the observations using the sparse regression method ŵ = arg min_w [ (1/n) ‖Xw − y‖₂² + λ Σ_{j=1}^d g(|w_j|) ], where g(|w_j|) is a (possibly non-convex) regularizer applied to each coefficient.
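
    A multi-stage scheme of this kind can be illustrated with a capped-L1 penalty g(|w_j|) = min(|w_j|, θ): solve a weighted Lasso, drop the penalty on coordinates that already exceed θ, and repeat. The ISTA solver, data, and parameters below are all toy choices made for this sketch, not the report's implementation.

```python
# Multi-stage convex relaxation with a capped-L1 penalty, solved by ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [3.0, -2.0, 1.5]
y = X @ w_true + rng.normal(0, 0.1, n)

def weighted_lasso(X, y, lam_vec, iters=2000):
    """ISTA for (1/n)||Xw - y||^2 + sum_j lam_vec[j] * |w_j|."""
    w = np.zeros(X.shape[1])
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 / len(y))  # 1/Lipschitz
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam_vec, 0.0)
    return w

lam, theta = 0.2, 0.5
lam_vec = np.full(d, lam)            # stage 1: plain Lasso
for stage in range(3):               # multi-stage refinement
    w = weighted_lasso(X, y, lam_vec)
    # capped-L1 reweighting: large coefficients are no longer penalized
    lam_vec = np.where(np.abs(w) > theta, 0.0, lam)
print(np.round(w[:5], 2))
```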

  1. Application of atmospheric pressure plasma polishing method in machining of silicon ultra-smooth surfaces

    Institute of Scientific and Technical Information of China (English)

    Jufan ZHANG; Bo WANG; Shen DONG

    2008-01-01

    The modern optics industry demands rigorous surface quality with minimum defects, which presents challenges to optics machining technologies. Certain defects always form on the final surfaces of components in conventional contact machining processes, such as micro-cracks and lattice disturbances. This is especially serious for hard-brittle functional materials such as crystals, glass and ceramics because of their special characteristics. To solve these problems, the atmospheric pressure plasma polishing (APPP) method was developed. It utilizes chemical reactions between reactive plasma and surface atoms to perform atom-scale material removal. Since the machining process is chemical in nature, APPP avoids the surface/subsurface defects mentioned above. As the key component, a capacitively coupled radio-frequency plasma torch is first introduced. In initial operations, silicon wafers were machined as samples. Before these operations, both the temperature distribution on the workpiece surface and the spatial gas diffusion in the machining process were studied qualitatively by finite element analysis. The subsequent temperature measurement experiments demonstrated the formation of the temperature gradient on the wafer surface predicted by the theoretical analysis and indicated a peak temperature of about 90℃ in the center. Using a commercial Form Talysurf, the machined surface was measured, and the result shows a regular removal profile that corresponds well to the flow field model. The removal profile also indicates a 32 mm3/min removal rate. Using atomic force microscopy (AFM), the surface roughness was also measured, and the result demonstrates an Ra 0.6 nm surface roughness. The element composition of the machined surface was then detected and analyzed by X-ray photoelectron spectroscopy (XPS), and the results demonstrate the occurrence of the anticipated main reactions. All the experiments have proved that

  2. A Two-stage Tuning Method of Servo Parameters for Feed Drives in Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the evaluation of dynamic performance for feed drives in machine tools, this paper presents a two-stage tuning method of servo parameters. In the first stage, the evaluation of dynamic performance, parameter tuning and optimization on a mechatronic integrated system simulation platform of feed drives are performed. As a result, a servo parameter combination is acquired. In the second stage, the servo parameter combination from the first stage is set and tuned further in a real machine tool whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and results in a good dynamic performance in a real machine tool.

  3. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    Science.gov (United States)

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads or even disasters in product operation. It is important to help people avoid human-machine interaction confusion and difficulties in today's mentally demanding society. The objective is to improve the usability of a product and minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is uniquely determined first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental load minimization problem in human-machine interaction design.
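
    The selection step at the end of the abstract reduces to a distance computation: score each candidate design against the ideal design and pick the nearest. The features, values, and ideal point in this sketch are invented for illustration.

```python
# Pick the candidate interface design closest to the ideal design
# by Euclidean distance over load-related feature scores.
import math

ideal = [1.0, 1.0, 1.0]             # minimal-thinking-load reference design
candidates = {
    "design_A": [0.9, 0.7, 0.8],
    "design_B": [0.6, 0.9, 0.5],
    "design_C": [0.8, 0.8, 0.95],
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

best = min(candidates, key=lambda k: dist(candidates[k], ideal))
print(best)
```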

  4. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  5. An Approximation Method of NURBS Curves in NC Machining

    Institute of Scientific and Technical Information of China (English)

    YUE Ying; HAN Qingyao; WANG Zhangqi

    2006-01-01

    An algorithm for approximating an arbitrary NURBS curve with straight lines is presented. Firstly, the NURBS curve is acquired according to data points on the curve. Secondly, the approximation of the NURBS curve is based on dichotomy. The resulting polyline approximates the original curve with relatively few segments within the required tolerance. The example shows that the algorithm is simple and its approximation precision is high. The method is most useful in numerical control, to drive the cutter along straight-line or circular paths.
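
    The dichotomy idea can be sketched as recursive bisection of the parameter interval: a chord replaces the curve segment once its deviation (checked here only at the midpoint, a simplification) is within tolerance. A cubic Bézier stands in for a general NURBS curve, and the control points are arbitrary.

```python
# Approximate a parametric curve by line segments via recursive bisection.
import math

P = [(0, 0), (1, 2), (3, 3), (4, 0)]          # control points

def curve(t):
    """Point on the cubic Bezier at parameter t."""
    s = 1 - t
    b = [s**3, 3 * s**2 * t, 3 * s * t**2, t**3]
    return (sum(c * p[0] for c, p in zip(b, P)),
            sum(c * p[1] for c, p in zip(b, P)))

def approximate(t0, t1, tol, out):
    """Append segment endpoints covering [t0, t1] within tol."""
    tm = (t0 + t1) / 2
    x0, y0 = curve(t0)
    x1, y1 = curve(t1)
    xm, ym = curve(tm)
    # deviation of the curve midpoint from the chord midpoint
    dev = math.hypot(xm - (x0 + x1) / 2, ym - (y0 + y1) / 2)
    if dev <= tol:
        out.append((x1, y1))
    else:
        approximate(t0, tm, tol, out)
        approximate(tm, t1, tol, out)

segments = [curve(0.0)]
approximate(0.0, 1.0, 0.001, segments)
print(len(segments) - 1, "line segments")
```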

  6. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    Science.gov (United States)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

    The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces normal contour errors in the fine polishing of complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning the normal spacing between the workpiece and the range finder with a laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing complex surfaces. The normal contour errors are measured dynamically, by which the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine are verified and checked for the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 µm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology described in the paper has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optical engineering projects for processing ultra-precision optical parts.

  7. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Full Text Available Selection of the most suitable nonconventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, operational competitiveness ratings analysis (OCRA), for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated by solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates its usefulness in solving complex NCMP selection problems.
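
    One common formulation of the OCRA procedure aggregates cost (input) and benefit (output) criteria separately and then combines the shifted ratings; the decision matrix, weights, and criterion types below are invented for illustration, and the exact normalization varies between OCRA descriptions.

```python
# One common OCRA formulation on an invented NCMP decision matrix.
rows = ["EDM", "ECM", "USM"]
X = [[4.0, 30.0, 7.0], [6.0, 20.0, 8.0], [5.0, 25.0, 6.0]]
cost = [True, True, False]           # True: input/cost criterion
w = [0.3, 0.4, 0.3]

cols = list(zip(*X))
I = []  # aggregate ratings for cost criteria
O = []  # aggregate ratings for benefit criteria
for i in range(len(rows)):
    I.append(sum(w[j] * (max(cols[j]) - X[i][j]) / min(cols[j])
                 for j in range(3) if cost[j]))
    O.append(sum(w[j] * (X[i][j] - min(cols[j])) / min(cols[j])
                 for j in range(3) if not cost[j]))
I = [v - min(I) for v in I]          # shift so the best cost rating is 0
O = [v - min(O) for v in O]
P = [a + b for a, b in zip(I, O)]
P = [v - min(P) for v in P]          # overall preference ratings
ranking = sorted(zip(rows, P), key=lambda t: -t[1])
print(ranking)
```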

  8. Machine learning a probabilistic perspective

    CERN Document Server

    Murphy, Kevin P

    2012-01-01

    Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic method...

  9. Applying the Support Vector Machine Method to Matching IRAS and SDSS Catalogues

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2007-10-01

    Full Text Available This paper presents results of applying a machine learning technique, the Support Vector Machine (SVM), to the astronomical problem of matching the Infra-Red Astronomical Satellite (IRAS) and Sloan Digital Sky Survey (SDSS) object catalogues. In this study, the IRAS catalogue has much larger positional uncertainties than the SDSS. A model was constructed by applying the supervised learning algorithm (SVM) to a set of training data. Validation of the model shows a good identification performance (∼90% correct), better than that derived from classical cross-matching algorithms, such as the likelihood-ratio method used in previous studies.
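
    The matching setup can be sketched as a binary classification problem: candidate IRAS/SDSS pairs are described by simple features (positional offset, magnitude difference) and an SVM learns to separate true matches from chance alignments. The data here are synthetic; real features would come from the two catalogues.

```python
# SVM separating true catalogue matches from chance alignments.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
# true matches: small positional offsets; chance alignments: larger, uniform
offset_true = np.abs(rng.normal(0, 5.0, n))        # arcsec
offset_false = rng.uniform(0, 60.0, n)
dmag_true = rng.normal(0, 0.5, n)
dmag_false = rng.normal(0, 2.0, n)

X = np.column_stack([np.concatenate([offset_true, offset_false]),
                     np.concatenate([dmag_true, dmag_false])])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_tr, y_tr)
acc = (clf.predict(X_te) == y_te).mean()
print(f"match accuracy: {acc:.2f}")
```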

  10. A method for classification of network traffic based on C5.0 Machine Learning Algorithm

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    current network traffic. To overcome the drawbacks of existing methods for traffic classification, the use of the C5.0 Machine Learning Algorithm (MLA) was proposed. On the basis of statistical traffic information received from volunteers and the C5.0 algorithm, we constructed a boosted classifier, which was shown...
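
    C5.0 itself is a standalone tool without a standard Python binding; as a hedged stand-in, a boosted decision-tree classifier from scikit-learn can illustrate classification from per-flow statistics. The flow features and class means below are invented for the sketch, not the volunteers' data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Invented per-flow statistics:
# [mean packet size (B), payload std (B), mean inter-arrival time (ms)]
def flows(n, mu):
    return rng.standard_normal((n, 3)) * np.array([50.0, 20.0, 5.0]) + np.array(mu)

X = np.vstack([
    flows(300, [1400.0, 100.0, 10.0]),   # bulk transfer
    flows(300, [160.0, 30.0, 20.0]),     # interactive / VoIP-like
    flows(300, [700.0, 250.0, 50.0]),    # web browsing
])
y = np.repeat([0, 1, 2], 300)

# Boosted decision trees as a stand-in for the boosted C5.0 classifier
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
acc = float(clf.score(X[1::2], y[1::2]))
```

    With well-separated flow statistics the boosted trees classify held-out flows almost perfectly; real traffic classes overlap far more.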

  11. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance VoxelMachine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to ...

  12. MOTION ERROR ESTIMATION OF 5-AXIS MACHINING CENTER USING DBB METHOD

    Institute of Scientific and Technical Information of China (English)

    CHEN Huawei; ZHANG Dawei; TIAN Yanling; ICHIRO Hagiwara

    2006-01-01

    In order to estimate the motion errors of a 5-axis machining center, the double ball bar (DBB) method is adopted to realize the diagnosis procedure. The motion error sources of the rotary axes in a 5-axis machining center comprise the alignment errors of the rotary axes and the angular errors due to various factors, e.g. the inclination of the rotary axes. From a sensitivity viewpoint, each motion error may have a particular sensitive direction in which the deviation of the DBB error trace arises from only some specific error sources. The model of the DBB error trace is established according to spatial geometry theory. Accordingly, the sensitive direction of each motion error source is made clear through numerical simulation and used as the reference pattern for rotational error estimation. The estimation method is proposed to easily estimate the motion error sources of the rotary axes in a quantitative manner. To verify the proposed DBB method for rotational error estimation, experimental tests are carried out on a 5-axis machining center M-400 (MORI SEIKI). The effect of the mismatch of the DBB is also studied to guarantee the estimation accuracy. From the experimental data, it is noted that the proposed estimation method for the 5-axis machining center is feasible and effective.

  13. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods

    OpenAIRE

    Zhang, Tong

    2001-01-01

    This book is an introduction to support vector machines and related kernel methods in supervised learning, whose task is to estimate an input-output functional relationship from a training set of examples. A learning problem is referred to as classification if its output takes discrete values in a set of possible categories, and as regression if it has continuous real-valued output.

  14. Machine Learning Method for Pattern Recognition in Volcano Seismic Spectra

    Science.gov (United States)

    Radic, V.; Unglert, K.; Jellinek, M.

    2016-12-01

    Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as Self-Organizing Maps (SOM), Principal Component Analysis (PCA) and clustering methods can help to quickly and automatically identify important patterns related to impending eruptions. In this study we develop and evaluate an algorithm applied on a set of synthetic volcano seismic spectra as well as observed spectra from Kīlauea Volcano, Hawai'i. Our goal is to retrieve a set of known spectral patterns that are associated with dominant phases of volcanic tremor before, during, and after periods of volcanic unrest. The algorithm is based on training a SOM on the spectra and then identifying local maxima and minima on the SOM 'topography'. The topography is derived from the first two PCA modes so that the maxima represent the SOM patterns that carry most of the variance in the spectra. Patterns identified in this way reproduce the known set of spectra. Our results show that, regardless of the level of white noise in the spectra, the algorithm can accurately reproduce the characteristic spectral patterns and their occurrence in time. The ability to rapidly classify spectra of volcano seismic data without prior knowledge of the character of the seismicity at a given volcanic system holds great potential for real time or near-real time applications, and thus ultimately for eruption forecasting.
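
    A minimal numpy sketch of the described pipeline: train a small SOM on synthetic spectra, then project the trained unit weights on the first two PCA modes to obtain the 'topography'. The map size, learning-rate schedule and the synthetic Gaussian spectral peaks are assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tremor spectra": three characteristic patterns (Gaussian peaks) plus white noise
n_bins = 64
freq = np.linspace(0, 1, n_bins)
patterns = np.stack([np.exp(-((freq - c) / 0.05) ** 2) for c in (0.2, 0.5, 0.8)])
labels = rng.integers(0, 3, size=300)
spectra = patterns[labels] + 0.05 * rng.standard_normal((300, n_bins))

# Train a small self-organizing map (6x6 grid; schedules are arbitrary choices)
grid = 6
weights = 0.1 * rng.standard_normal((grid * grid, n_bins))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

for epoch in range(20):
    sigma = 0.5 + 2.0 * (1 - epoch / 20)   # shrinking neighbourhood radius
    lr = 0.01 + 0.5 * (1 - epoch / 20)     # decaying learning rate
    for x in spectra:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

# "Topography": projection of the trained unit weights on the first two PCA modes;
# its local maxima/minima indicate the dominant spectral patterns
centered = weights - weights.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
topography = centered @ vt[:2].T
```

    After training, the three clean patterns map to distinct units, which is what makes peak-picking on the topography meaningful.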

  15. An improved method of support vector machine and its applications to financial time series forecasting

    Institute of Scientific and Technical Information of China (English)

    LIANG Yanchun; SUN Yanfeng

    2003-01-01

    A novel method for constructing the kernel function of a support vector machine is presented based on information geometry theory. The kernel function is modified using a conformal mapping to make the kernel data-dependent, so as to improve the method's ability to predict noisy data. Numerical simulations demonstrate the effectiveness of the method. Simulated results on the prediction of stock prices show that the improved approach possesses better forecasting precision and generalization ability than the conventional models.
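
    The general conformal-mapping idea can be sketched as rescaling a base kernel, K̃(x, x') = c(x)·c(x')·K(x, x'), where the factor c is built from the data (here, from distances to the support vectors of a first-pass SVM). The conformal factor, its scale parameter and the synthetic data are assumptions; the paper's information-geometric construction is not reproduced:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((240, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.2).astype(int)  # circular decision boundary
Xtr, ytr, Xte, yte = X[:180], y[:180], X[180:], y[180:]

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Step 1: a standard RBF SVM locates support vectors near the boundary
svm1 = SVC(kernel="precomputed").fit(rbf(Xtr, Xtr), ytr)
sv = Xtr[svm1.support_]

# Step 2: data-dependent conformal factor, largest near the support vectors
def c(A, tau=1.0):
    d2 = ((A[:, None, :] - sv[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / tau ** 2).sum(axis=1)

def conformal(A, B):
    return c(A)[:, None] * c(B)[None, :] * rbf(A, B)  # still a valid PSD kernel

# Step 3: retrain with the conformally modified kernel
svm2 = SVC(kernel="precomputed").fit(conformal(Xtr, Xtr), ytr)
acc = float((svm2.predict(conformal(Xte, Xtr)) == yte).mean())
```

    Multiplying a PSD kernel by c(x)·c(x') preserves positive semi-definiteness, so the modified kernel remains admissible for the SVM.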

  16. Novel Method of Predicting Network Bandwidth Based on Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    沈伟; 冯瑞; 邵惠鹤

    2004-01-01

    In order to solve the problems of small-sample over-fitting and local minima when neural networks learn online, a novel method of predicting network bandwidth based on support vector machines (SVM) is proposed. Online prediction and learning are performed by the proposed moving-window learning algorithm (MWLA). Simulations are carried out to validate the proposed method, which is compared with a method based on neural networks.
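
    The moving-window idea can be sketched as: before each one-step-ahead prediction, refit a support vector regressor on only the most recent `window` samples, so stale data are forgotten. The window size, lag, SVR settings and the synthetic "bandwidth" trace are assumptions, not the paper's MWLA specifics:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(400)
# Stand-in for a measured bandwidth trace: periodic load plus noise
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)

lag, window = 5, 100   # 5 past samples as features, 100-sample moving window

def make_xy(s):
    X = np.stack([s[i:i + lag] for i in range(len(s) - lag)])
    return X, s[lag:]

preds, actual = [], []
for end in range(window, window + 120):
    Xw, yw = make_xy(series[end - window:end])   # drop samples older than the window
    model = SVR(C=10.0, gamma=0.5).fit(Xw, yw)   # re-learn online on the window
    preds.append(model.predict(series[end - lag:end][None, :])[0])
    actual.append(series[end])

rmse = float(np.sqrt(np.mean((np.array(preds) - np.array(actual)) ** 2)))
```

    Keeping the window small bounds the training-set size, which is what keeps each online refit cheap.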

  17. Optimization of the Machining Parameters of LM6 Aluminium Alloy in CNC Turning Using the Taguchi Method

    Science.gov (United States)

    Arunkumar, S.; Muthuraman, V.; Baskaralal, V. P. M.

    2017-03-01

    Due to the widespread use of highly automated machine tools in industry, manufacturing requires reliable models and methods for predicting the output performance of machining processes. In the machining of parts, surface quality is one of the most commonly specified customer requirements. In order for manufacturers to maximize their gains from utilizing CNC turning, accurate predictive models for surface roughness must be constructed. The prediction of optimum machining conditions for good surface finish plays an important role in process planning. This work deals with the study and development of a surface roughness prediction model for machining LM6 aluminium alloy. Two important tools used in parameter design are Taguchi orthogonal arrays and the signal-to-noise (S/N) ratio. Speed, feed, depth of cut and coolant are taken as process parameters at three levels. Taguchi parameter design is employed to perform the experiments based on the various levels of the chosen parameters. The statistical analysis yields the optimum combination of speed, feed, depth of cut and coolant for obtaining good surface roughness on the cylindrical components. The optimum obtained through the Taguchi analysis is confirmed with real experimental work.
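
    Surface roughness is a smaller-the-better characteristic, so the Taguchi S/N ratio for each experimental run is S/N = -10·log10(mean(y²)) over the replicates; the run with the highest S/N has the lowest, most consistent roughness. A sketch with invented Ra replicates for an L9 array (the paper's measured values are not reproduced here):

```python
import numpy as np

# Hypothetical L9 experiment: three surface-roughness replicates (Ra, µm) per run
ra = np.array([
    [3.1, 3.3, 3.2], [2.4, 2.5, 2.6], [1.9, 2.0, 1.8],
    [2.8, 2.7, 2.9], [1.6, 1.7, 1.5], [2.2, 2.1, 2.3],
    [3.5, 3.4, 3.6], [2.0, 2.1, 1.9], [1.4, 1.5, 1.6],
])

# Smaller-the-better S/N ratio: higher S/N means lower, more consistent roughness
sn = -10.0 * np.log10((ra ** 2).mean(axis=1))
best_run = int(np.argmax(sn))   # here the last run, with the lowest Ra values
```

    In a full Taguchi analysis the S/N values would then be averaged per factor level to pick the optimum speed/feed/depth/coolant combination.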

  18. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    Science.gov (United States)

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new
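
    The manual hyper-parameter iteration this proposal targets is the kind of search that off-the-shelf tools already automate on a small scale. A hedged scikit-learn sketch (not the proposed Auto-ML software) of randomized hyper-parameter selection with cross-validation, on synthetic stand-ins for aggregated clinical attributes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
# Synthetic stand-in for temporally aggregated clinical attributes
X = rng.standard_normal((500, 8))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # outcome with an interaction term

# Randomized search over a small hyper-parameter space, 3-fold cross-validation
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [3, 5, None],
        "min_samples_leaf": [1, 5, 20],
    },
    n_iter=10, cv=3, random_state=0,
).fit(X, y)
```

    Each of the 10 sampled configurations replaces one of the "hundreds to thousands of manual iterations" the proposal describes; Auto-ML systems extend the same loop to algorithm choice and temporal aggregation operators.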

  19. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
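
    The misspecification point can be illustrated on synthetic data where the true probability depends on an interaction term: a logistic model with main effects only is biased, while random forest class frequencies estimate the probability nonparametrically. A sketch, not the paper's simulation design:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
# True probability depends on x1*x2, so a main-effects logistic model is misspecified
p_true = 1.0 / (1.0 + np.exp(-4.0 * X[:, 0] * X[:, 1]))
y = (rng.random(2000) < p_true).astype(int)

Xtr, ytr, Xte, pte = X[:1500], y[:1500], X[1500:], p_true[1500:]

lr = LogisticRegression().fit(Xtr, ytr)
rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                            random_state=0).fit(Xtr, ytr)

# Mean squared error against the *true* probabilities (known here by construction)
mse_lr = float(np.mean((lr.predict_proba(Xte)[:, 1] - pte) ** 2))
mse_rf = float(np.mean((rf.predict_proba(Xte)[:, 1] - pte) ** 2))
```

    The leaf-size floor (`min_samples_leaf`) is one of the tuning choices the paper warns about: without it, forest probability estimates are far noisier.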

  20. Preventing IP Source Address Spoofing: A Two-Level,State Machine-Based Method

    Institute of Scientific and Technical Information of China (English)

    BI Jun; LIU Bingyang; WU Jianping; SHEN Yan

    2009-01-01

    A signature-and-verification-based method, automatic peer-to-peer anti-spoofing (APPA), is proposed to prevent IP source address spoofing. In this method, signatures are tagged into the packets at the source peer, and verified and removed at the verification peer, where packets with incorrect signatures are filtered. A unique state machine, which is used to generate signatures, is associated with each ordered pair of APPA peers. As the state machine automatically transits, the signature changes accordingly. The KISS random number generator is used as the signature generating algorithm, which makes the state machine very small and fast and requires very low management costs. APPA has an intra-AS (autonomous system) level and an inter-AS level. In the intra-AS level, signatures are tagged into each departing packet at the host and verified at the gateway to achieve finer-grained anti-spoofing than ingress filtering. In the inter-AS level, signatures are tagged at the source AS border router and verified at the destination AS border router to achieve prefix-level anti-spoofing, and the automatic state machine enables the peers to change signatures without negotiation, which makes APPA attack-resilient compared with the spoofing prevention method. The results show that both levels provide deployment incentives, and together they make APPA an integrated anti-spoofing solution.
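
    For illustration, a 32-bit KISS-style generator, after Marsaglia's KISS99 construction (multiply-with-carry + 3-shift register + linear congruential, combined each step), sketched in Python. The shift and multiplier constants follow one published variant, and the APPA seeding and packet-tagging protocol is not reproduced:

```python
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2**32

class KISS:
    """KISS-style 32-bit generator: multiply-with-carry (z, w) + 3-shift
    register (jsr) + linear congruential (jcong), combined at each step."""

    def __init__(self, z=362436069, w=521288629, jsr=123456789, jcong=380116160):
        self.z, self.w, self.jsr, self.jcong = z, w, jsr, jcong

    def next(self):
        # Two 16-bit multiply-with-carry generators combined into 32 bits
        self.z = (36969 * (self.z & 0xFFFF) + (self.z >> 16)) & MASK
        self.w = (18000 * (self.w & 0xFFFF) + (self.w >> 16)) & MASK
        mwc = ((self.z << 16) + self.w) & MASK
        # 3-shift xorshift register
        self.jsr ^= (self.jsr << 17) & MASK
        self.jsr ^= self.jsr >> 13
        self.jsr ^= (self.jsr << 5) & MASK
        # Linear congruential generator
        self.jcong = (69069 * self.jcong + 1234567) & MASK
        return ((mwc ^ self.jcong) + self.jsr) & MASK

# Two peers seeded identically step their state machines in lockstep, so the
# verifier can reproduce each signature without any per-packet negotiation.
tagger, verifier = KISS(), KISS()
signatures = [tagger.next() for _ in range(5)]
checks = [verifier.next() for _ in range(5)]
```

    The tiny state (four 32-bit words per peer pair) is what keeps the per-pair management cost low.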

  1. Preparation of Machinable Bioactive Glass-ceramics by Sol-gel Method

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The purpose of this research was to prepare machinable bioactive glass-ceramics by the sol-gel method. A multi-component composite sol with great uniformity and stability was first prepared by a two-step method. The composite sol was then transformed into a gel by aging at different temperatures. The gel was finally dried by a supercritical drying method and sintered to obtain the machinable bioactive glass-ceramics. The effect of thermal treatment on crystallization of the glass-ceramics was investigated by X-ray diffraction (XRD) analysis. The microstructure of the glass-ceramics was observed by scanning electron microscopy (SEM) and the mechanism of machinability was discussed. Phlogopite and hydroxylapatite were identified as the main crystal phases by XRD analysis after thermal treatment at 750 ℃ and 950 ℃ for 1.5 h, respectively. The relative bulk density could reach 99% after sintering at 1050 ℃ for 4 h. The microstructure of the glass-ceramics showed that the randomly distributed phlogopite and hydroxylapatite phases were favorable to the machinability of the glass-ceramics. A mean bending strength of about 160-180 MPa and a fracture toughness KIC of about 2.1-2.3 were determined for the glass-ceramics.

  2. Novel fabrication method for zirconia restorations: bonding strength of machinable ceramic to zirconia with resin cements.

    Science.gov (United States)

    Kuriyama, Soichi; Terui, Yuichi; Higuchi, Daisuke; Goto, Daisuke; Hotta, Yasuhiro; Manabe, Atsufumi; Miyazaki, Takashi

    2011-01-01

    A novel method was developed to fabricate all-ceramic restorations in which a CAD/CAM-fabricated machinable ceramic is bonded to a CAD/CAM-fabricated zirconia framework using resin cement. The feasibility of this fabrication method was assessed in this study by investigating the bonding strength of a machinable ceramic to zirconia. A machinable ceramic was bonded to a zirconia plate using three kinds of resin cements: ResiCem (RE), Panavia (PA), and Multilink (ML). Conventional porcelain-fused-to-zirconia specimens were also prepared to serve as control. A shear bond strength test (SBT) and the Schwickerath crack initiation test (SCT) were carried out. The SBT revealed that PA (40.42 MPa) yielded a significantly higher bonding strength than RE (28.01 MPa) and ML (18.89 MPa). The SCT revealed that the bonding strengths of the test groups using resin cement were significantly higher than those of the control. Notably, the bonding strengths of RE and ML remained above 25 MPa even after 10,000 thermal cycles, adequately meeting the ISO 9693 standard for metal-ceramic restorations. These results affirmed the feasibility of the novel fabrication method, in which a CAD/CAM-fabricated machinable ceramic is bonded to a CAD/CAM-fabricated zirconia framework using a resin cement.

  3. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving machine tool accuracy by increasing the precision of key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it appropriate to relax the tolerance ranges, so as to reduce the manufacturing cost of machine tools.

  4. Gamma/hadron segregation for a ground based imaging atmospheric Cherenkov telescope using machine learning methods: Random Forest leads

    CERN Document Server

    Sharma, Mradul; Koul, M K; Bose, S; Mitra, Abhas

    2014-01-01

    A detailed case study of $\gamma$-hadron segregation for a ground based atmospheric Cherenkov telescope is presented. We have evaluated and compared various supervised machine learning methods such as the Random Forest method, Artificial Neural Network, Linear Discriminant method, Naive Bayes Classifiers, Support Vector Machines, as well as the conventional dynamic supercut method, by simulating triggering events with the Monte Carlo method and applying the results to a Cherenkov telescope. It is demonstrated that the Random Forest method is the most sensitive machine learning method for $\gamma$-hadron segregation.

  5. Vibration reliability analysis for aeroengine compressor blade based on support vector machine response surface method

    Institute of Scientific and Technical Information of China (English)

    GAO Hai-feng; BAI Guang-chen

    2015-01-01

    To improve the efficiency of reliability analysis for aeroengine components, such as compressor blades, a support vector machine response surface method (SRSM) is proposed. SRSM integrates the advantages of the support vector machine (SVM) and the traditional response surface method (RSM), and utilizes experimental samples to construct a suitable response surface function (RSF) to replace the complicated and abstract finite element model. Moreover, the randomness of material parameters, structural dimensions and operating conditions is considered when extracting data, so that the response surface function agrees more closely with the practical model. The results indicate that, based on the same experimental data, the reliability estimate from SRSM approximates that of the Monte Carlo method (MCM) more closely than the RSM estimate does, while SRSM (17.296 s) needs far less running time than MCM (10958 s) and RSM (9840 s). Therefore, under the same simulation conditions, SRSM has the highest analysis efficiency and can be considered a feasible and valid method for analyzing structural reliability.
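
    The surrogate idea can be sketched generically: fit an SVM regression to a modest number of "expensive" limit-state evaluations, then run cheap Monte Carlo on the surrogate to estimate the failure probability. The limit-state function, sample sizes and SVR settings below are illustrative assumptions, not the blade model:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit-state function (failure when g < 0); a stand-in
    for an expensive finite element model."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

# Small experimental design: the only "expensive" model evaluations
X_design = 1.5 * rng.standard_normal((150, 2))
surrogate = SVR(C=100.0, gamma=0.3).fit(X_design, g(X_design))

# Cheap Monte Carlo on the surrogate (random inputs ~ standard normal)
X_mc = rng.standard_normal((50000, 2))
pf_srsm = float((surrogate.predict(X_mc) < 0).mean())
pf_mcm = float((g(X_mc) < 0).mean())   # direct MC, for comparison only
```

    In a real application `g` would be the finite element analysis, so direct MC would cost tens of thousands of FE runs while the surrogate route costs only the design points.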

  6. Compensation method for temperature error of fiber optical gyroscope based on relevance vector machine.

    Science.gov (United States)

    Wang, Guochen; Wang, Qiuying; Zhao, Bo; Wang, Zhenpeng

    2016-02-10

    Aiming to improve the bias stability of the fiber optical gyroscope (FOG) in an environment with changing ambient temperature, a temperature-compensation method based on the relevance vector machine (RVM) under a Bayesian framework is proposed and applied. Compared with other temperature models such as quadratic polynomial regression, neural networks, and the support vector machine, the proposed RVM method is more accurate in explaining the temperature dependence of the FOG bias. Experimental results indicate that, with the proposed RVM method, the bias drift of an FOG can be markedly reduced over the whole temperature range from -40°C to 60°C. Therefore, the proposed method can effectively improve the adaptability of the FOG in a changing temperature environment.

  7. Design of a new torque standard machine based on a torque generation method using electromagnetic force

    Science.gov (United States)

    Nishino, Atsuhiro; Ueda, Kazunaga; Fujii, Kenichi

    2017-02-01

    To allow the application of torque standards in various industries, we have been developing torque standard machines based on a lever deadweight system, i.e. a torque generation method using gravity. However, this method is not suitable for expanding the low end of the torque range, because of the limitations to the sizes of the weights and moment arms. In this study, the working principle of the torque generation method using an electromagnetic force was investigated by referring to watt balance experiments used for the redefinition of the kilogram. Applying this principle to a rotating coordinate system, an electromagnetic force type torque standard machine was designed and prototyped. It was experimentally demonstrated that SI-traceable torque could be generated by converting electrical power to mechanical power. Thus, for the first time, SI-traceable torque was successfully realized using a method other than that based on the force of gravity.

  8. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences.

  9. Implementation Methods of Computer Aided Design-Drawing and Drawing Management for Plate Cutting-Machine

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-de; ZHAO Han; TAN Jian-rong

    2002-01-01

    The implementation methods of computer aided design, drawing and drawing management for a plate cutting machine are discussed. The system structure for plate cutting machine design is put forward first; then some key technologies and their implementation methods are introduced, including the structural management of graphics, the unification of graphics and design calculations, information sharing among the part, assembly and drawing management systems, and movement simulation of key components.

  10. Book Recommendation Using Machine Learning Methods Based on Library Loan Records and Bibliographic Information

    OpenAIRE

    Tsuji, Keita; Yoshikane, Fuyuki; Sato, Sho; Itsumura, Hiroshi

    2015-01-01

    In this paper, we propose a method to recommend Japanese books to university students through machine learning modules based on several features, including library loan records. We determine the most effective method among the ones that used (a) a support vector machine (SVM), (b) a random forest, and (c) AdaBoost. Furthermore, we assess the most effective combination of relevant features among (1) the association rules derived from library loan records, (2) book titles, (3) Nippon Decimal Classif...

  11. A Method for Identification and Compensation of Machining Errors of Digital Gear Tooth Surfaces

    Institute of Scientific and Technical Information of China (English)

    WANG Fulin; YI Chuanyun; CHEN Jing; YANG Shuzi

    2006-01-01

    In order to generate the digital gear tooth surfaces (DGTS) with high efficiency and high precision, a method for identification and compensation of machining errors is demonstrated in this paper. Machining errors are analyzed directly from the real tooth surfaces. The topography data of the part are off-line measured in the post-process. A comparison is made between two models: the CAD model of the DGTS and a virtual model of the physically measured surface. And a matching rule is given to determine these two surfaces in an appropriate fashion. The developed error estimation model creates a point-to-point map of the real surface to the theoretical surface in the normal direction. A "pre-calibration error compensation" strategy is presented. By processing the results of the first trial cutting, the total compensation error is predicted and an imaginary digital tooth surface is reconstructed. The machining errors in the final manufactured surfaces are minimized by generating this imaginary surface. An example of machining a 2-D DGTS verifies the developed method. The research is of important theoretical and practical value for manufacturing the DGTS and other digital conjugate surfaces.

  12. Modelling synchronous machines using the finite elements method; Modelagem de maquinas sincronas utilizando o metodo de elementos finitos

    Energy Technology Data Exchange (ETDEWEB)

    Sadowski, Nelson; Bastos, J.P. Assumpcao; Carlson, R. [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Eletrica; Lajoie-Mazenc, M. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France). Lab. d`Eletrotechnique et d`Eletronique Industrielle

    1995-12-31

    The finite element method is used in the analysis of electric machines, with special emphasis on the synchronous machine. The developed computational method enables the determination of global quantities such as torque, electromotive force and inductance, among others. The methodology is presented.

  13. Provision of Controlled Motion Accuracy of Industrial Robots and Multiaxis Machines by the Method of Integrated Deviations Correction

    Science.gov (United States)

    Krakhmalev, O. N.; Petreshin, D. I.; Fedonin, O. N.

    2016-04-01

    A method is developed for correcting the integrated motion deviations of industrial robots and multiaxis machines that are caused by the primary geometrical deviations of their segments. This method can be used to develop a control system providing motion correction for industrial robots and multiaxis machines.

  14. A newly conceived cylinder measuring machine and methods that eliminate the spindle errors

    Science.gov (United States)

    Vissiere, A.; Nouira, H.; Damak, M.; Gibaru, O.; David, J.-M.

    2012-09-01

    Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved using error separation methods (multi-step, reversal and multi-probe error separation methods, etc), enabling identification of the systematic (synchronous or repeatable) guiding system motion errors as well as the form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the 'dissociated metrological technique' principle and contains reference probes and a reference cylinder. The form errors of both the cylindrical artifact and the reference cylinder are obtained after a mathematical combination of the information given by the probe sensing the artifact and the information given by the probe sensing the reference cylinder, by applying the modified multi-step separation method.
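
    Among the error separation methods mentioned, the classical reversal method is the easiest to sketch: reversing the artifact and probe flips the sign of the spindle error contribution, so two measurement runs suffice to separate the part form error from the spindle motion error. The synthetic harmonics below are assumptions for illustration (the LNE machine uses the modified multi-step method instead):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
# Synthetic harmonics (in metres): artifact form error and spindle motion error,
# both unknown in practice
part = 2e-6 * np.cos(3 * theta) + 1e-6 * np.sin(5 * theta)
spindle = 1.5e-6 * np.sin(2 * theta)

# Run 1: the probe reads part form error plus spindle error.
# Run 2: after reversing part and probe, the spindle term changes sign.
m1 = part + spindle
m2 = part - spindle

part_est = (m1 + m2) / 2.0      # recovers the artifact form error
spindle_est = (m1 - m2) / 2.0   # recovers the spindle motion error
```

    The separation is exact for the repeatable (synchronous) error component; the non-repeatable part of the spindle motion is precisely what limits such machines, as the abstract notes.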

  15. Periodical capacity setting methods for make-to-order multi-machine production systems.

    Science.gov (United States)

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-08-18

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times, with the aim of improving service level and tardiness. These methods are developed as decision support for situations where capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study, the methods' impact on service level and tardiness is compared to a constant provided capacity in a single-machine and a multi-machine setting. It is shown that the tested capacity setting methods can increase the service level and decrease the average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity.

  16. Object-Based Image Classification of Summer Crops with Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    José M. Peña

    2014-05-01

    Full Text Available The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods. In this investigation, we evaluated the C4.5 decision tree, logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) neural network methods, both as single classifiers and combined in a hierarchical classification, for the mapping of nine major summer crops (both woody and herbaceous) from ASTER satellite images captured on two different dates. Each method was built with different combinations of spectral and textural features obtained after the segmentation of the remote images in an object-based framework. As single classifiers, MLP and SVM obtained a maximum overall accuracy of 88%, slightly higher than LR (86%) and notably higher than C4.5 (79%). The SVM+SVM classifier (the best method) improved these results to 89%. In most cases, the hierarchical classifiers considerably increased the accuracy of the most poorly classified class (minimum sensitivity). The SVM+SVM method offered a significant improvement in classification accuracy for all of the studied crops compared to the conventional decision tree classifier, ranging between 4% for safflower and 29% for corn, which suggests the application of object-based image analysis and advanced machine learning methods in complex crop classification tasks.

  17. Identification of Village Building via Google Earth Images and Supervised Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Zhiling Guo

    2016-03-01

    Full Text Available In this study, a method based on supervised machine learning is proposed to identify village buildings from open high-resolution remote sensing images. We select Google Earth (GE) RGB images to perform the classification in order to examine their suitability for village mapping, and investigate the feasibility of using machine learning methods to provide automatic classification in such fields. By analyzing the characteristics of GE images, we design different features on the basis of two kinds of supervised machine learning methods for classification: adaptive boosting (AdaBoost) and convolutional neural networks (CNN). To recognize village buildings via their color and texture information, the RGB color features and a large number of Haar-like features in a local window are utilized in the AdaBoost method; with multilayer networks trained by gradient descent and backpropagation, the CNN performs the identification by mining deeper information from buildings and their neighborhood. Experimental results from the testing area in Savannakhet province, Laos, show that our proposed AdaBoost method achieves an overall accuracy of 96.22%, and the CNN method is also competitive with an overall accuracy of 96.30%.
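
    A window-level AdaBoost classifier of the kind described above can be sketched with scikit-learn. The RGB means and "Haar-like" contrast features below are synthetic stand-ins for the GE image features (building windows are given stronger edge contrasts than background), and all numbers are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

# Illustrative window-level features: mean R, G, B reflectance plus two
# simple Haar-like contrasts (left-right and top-bottom differences).
n = 300
buildings = np.column_stack([rng.normal(0.55, 0.08, (n, 3)),
                             rng.normal(0.30, 0.10, (n, 2))])
background = np.column_stack([rng.normal(0.45, 0.08, (n, 3)),
                              rng.normal(0.05, 0.10, (n, 2))])
X = np.vstack([buildings, background])
y = np.array([1] * n + [0] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=11)
clf = AdaBoostClassifier(n_estimators=100, random_state=11).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"held-out accuracy: {acc:.3f}")
```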

  18. Fractional Slot Concentrated Windings: A New Method to Manage the Mutual Inductance between Phases in Three-Phase Electrical Machines and Multi-Star Electrical Machines

    Directory of Open Access Journals (Sweden)

    Olivier Barre

    2015-06-01

    Full Text Available Mutual inductance is a phenomenon caused by the circulation of the magnetic flux in the core of an electrical machine. It is the result of the effect of the current flowing in one phase on the other phases. In conventional three-phase machines, such an effect has no influence on the electrical behaviour of the device; even when these machines are fed by power inverters, no problem occurs. The situation is different for multi-star machines. If these machines use a conventional winding structure, namely distributed windings, and are powered by voltage source converters, current ripples appear in the power supply lines. These current ripples are related to magnetic couplings between the stars. Designers should check these current ripples in order to stay within the limits imposed by the specifications. These current disturbances also produce torque ripples. With concentrated windings, a new degree of freedom appears: the slot/pole configuration can have a positive impact. The circulation of the magnetic flux is the underlying phenomenon that produces the mutual inductance. The main goal of this discussion is to describe a design method able to produce not only a machine with low mutual inductance between phases, but also a multi-star machine whose stars and phases are magnetically decoupled or less coupled. This discussion only considers machines that use permanent magnets mounted on the rotor surface. This article is part of a study aimed at designing a high-efficiency generator using fractional-slot concentrated windings (FSCW).

  19. Feature Subset Selection for Hot Method Prediction using Genetic Algorithm wrapped with Support Vector Machines

    Directory of Open Access Journals (Sweden)

    S. Johnson

    2011-01-01

    Full Text Available Problem statement: All compilers have simple profiling-based heuristics to identify and predict program hot methods and to make optimization decisions. The major challenge in profile-based optimization is addressing the problem of overhead. The aim of this work is to perform feature subset selection using Genetic Algorithms (GA) to improve and refine the machine-learnt static hot method predictive technique and to compare the performance of the new models against the simple heuristics. Approach: The relevant features for training the predictive models are extracted from an initial set of ninety randomly selected static program features, with the help of the GA wrapped with the predictive model using the Support Vector Machine (SVM), a Machine Learning (ML) algorithm. Results: The GA-generated feature subsets, containing thirty and twenty-nine features respectively for the two predictive models, when tested on MiBench predict Long Running Hot Methods (LRHM) and Frequently Called Hot Methods (FCHM) with respective accuracies of 71% and 80%, achieving increases of 19% and 22%. Further, inlining of the predicted LRHM and FCHM improves program performance by 3% and 5%, as against 4% and 6% with the Low Level Virtual Machine (LLVM) default heuristics. When intra-procedural optimizations (IPO) are performed on the predicted hot methods, this system offers a performance improvement of 5% and 4%, as against 0% and 3% by the LLVM default heuristics on LRHM and FCHM respectively. However, we observe an improvement of 36% in certain individual programs. Conclusion: Overall, the results indicate that GA-wrapped SVM feature reduction improves the hot method prediction accuracy and that hot-method-prediction-based optimization is potentially useful in selective optimization.
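
    The wrapper idea above, a genetic algorithm whose fitness function is the cross-validated accuracy of an SVM trained on the selected feature subset, can be sketched as follows. This is a deliberately small GA (bitmask chromosomes, elitism, uniform crossover, bit-flip mutation) on synthetic data standing in for the ninety static program features; the population size, rates and dataset are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for the static program features:
# 20 features, a few informative, the rest noise.
X, y = make_classification(n_samples=120, n_features=20, n_informative=4,
                           n_redundant=2, random_state=1)

def fitness(mask):
    """Wrapper fitness: 3-fold CV accuracy of an SVM on the selected subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

# Minimal GA: bitmask chromosomes, elitist survival, uniform crossover,
# bit-flip mutation.
pop = rng.integers(0, 2, size=(12, X.shape[1]))
scores = np.array([fitness(m) for m in pop])
history = [scores.max()]
for gen in range(10):
    order = np.argsort(scores)[::-1]
    pop, scores = pop[order], scores[order]
    children = []
    for _ in range(6):
        a, b = pop[rng.integers(0, 6, 2)]            # parents from the elite half
        cross = rng.integers(0, 2, X.shape[1]).astype(bool)
        child = np.where(cross, a, b)                # uniform crossover
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([pop[:6]] + children)            # elites survive
    scores = np.array([fitness(m) for m in pop])
    history.append(max(history[-1], scores.max()))

best = pop[np.argmax(scores)]
print(f"selected {int(best.sum())} of {X.shape[1]} features, "
      f"CV accuracy {history[-1]:.2f}")
```

Because the elite chromosomes always survive, the best fitness in `history` is non-decreasing across generations.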

  20. Analysis of the Thermal Characteristics of Machine Tool Feed System Based on Finite Element Method

    Science.gov (United States)

    Mao, Xiaobo; Mao, Kuanmin; Du, Yikang; Wang, Fengyun; Yan, Bo

    2017-09-01

    The loading of a moving heat source and the setting of boundary conditions are difficult problems in the analysis of the thermal characteristics of machine tools. Taking the machine tool feed system as an example, a novel method for loading a moving heat source is proposed by establishing a function of the heat source and time. The convective heat transfer coefficient is the key parameter of the boundary conditions, and it varies with temperature. In this paper, a model with a "variable convective heat transfer coefficient" is proposed, so that the boundary conditions of the thermal analysis are closer to the real situation. Finally, by comparing the results of the above method with experimental data, the accuracy and validity of the method are demonstrated; meanwhile, the simulation time is greatly reduced.

  1. The modified nodal analysis method applied to the modeling of the thermal circuit of an asynchronous machine

    Science.gov (United States)

    Nedelcu, O.; Salisteanu, C. I.; Popa, F.; Salisteanu, B.; Oprescu, C. V.; Dogaru, V.

    2017-01-01

    The complexity of the electrical circuits, or of the equivalent thermal circuits, to be analyzed and solved must be taken into account when choosing a solution method, since this choice determines the amount of calculation required. Modeling the heating and ventilation systems of electrical machines results in complex equivalent electrical circuits of large dimensions, which requires the most efficient solution methods. The purpose of the thermal calculation of electrical machines is to establish the heating, i.e. the temperature rises of parts of the machine above the ambient temperature, in a given operating mode of the machine. The paper presents the application of the modified nodal analysis method to the modeling of the thermal circuit of an asynchronous machine.
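
    The core of such a thermal-network calculation is a nodal analysis: assemble a conductance matrix from the branch and ambient thermal conductances and solve a linear system for the temperature rises above ambient. A minimal sketch on a hypothetical three-node lumped network (all conductances and losses are invented, not machine data):

```python
import numpy as np

# Hypothetical 3-node lumped thermal network of a machine
# (node 0: stator winding, node 1: stator core, node 2: frame).
# g[(i, j)] is the thermal conductance (W/K) between nodes i and j,
# g_amb[i] the conductance from node i to ambient.
g = {(0, 1): 2.0, (1, 2): 5.0}
g_amb = np.array([0.1, 0.2, 8.0])
p = np.array([120.0, 40.0, 0.0])   # losses injected at each node (W)

# Assemble the nodal conductance matrix G.
G = np.diag(g_amb.astype(float))
for (i, j), gij in g.items():
    G[i, i] += gij
    G[j, j] += gij
    G[i, j] -= gij
    G[j, i] -= gij

# Solve G * dT = P for the temperature rises above ambient (K).
dT = np.linalg.solve(G, p)
print(dT)
```

A quick sanity check: in steady state all injected losses must leave through the ambient conductances, so `g_amb @ dT` equals the total injected power.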

  2. Methods, systems and apparatus for optimization of third harmonic current injection in a multi-phase machine

    Science.gov (United States)

    Gallegos-Lopez, Gabriel

    2012-10-02

    Methods, systems and apparatus are provided for increasing voltage utilization in a five-phase vector-controlled machine drive system that employs third harmonic current injection to increase the torque and power output of a five-phase machine. To do so, the fundamental current angle of the fundamental current vector is optimized for each particular torque-speed operating point of the five-phase machine.

  3. e-Learning Application for Machine Maintenance Process using Iterative Method in XYZ Company

    Science.gov (United States)

    Nurunisa, Suaidah; Kurniawati, Amelia; Pramuditya Soesanto, Rayinda; Yunan Kurnia Septo Hediyanto, Umar

    2016-02-01

    XYZ Company is a manufacturer of airplane parts; one of the machines categorized as a key facility in the company is the Millac 5H6P. As a key facility, the machine should be assured to work well and in peak condition; therefore, periodic maintenance is needed. From the data gathered, it is known that maintenance staff lack the competency to maintain machine types not assigned to them by the supervisor, which indicates that the knowledge possessed by the maintenance staff is uneven. The purpose of this research is to create a knowledge-based e-learning application as a realization of the externalization step of the knowledge-transfer process for machine maintenance. The application features are tailored to maintenance purposes using an e-learning framework for the maintenance process, and the content supports multimedia for learning. QFD is used in this research to understand the needs of the users. The application is built with Moodle, using the iterative method as the software development cycle and UML diagrams. The result of this research is an e-learning application serving as a knowledge-sharing medium for the maintenance staff of the company. Testing showed that the application makes it easy for maintenance staff to understand the required competencies.

  4. A learning-based automatic spinal MRI segmentation

    Science.gov (United States)

    Liu, Xiaoqing; Samarabandu, Jagath; Garvin, Greg; Chhem, Rethy; Li, Shuo

    2008-03-01

    Image segmentation plays an important role in medical image analysis and visualization, since it greatly enhances clinical diagnosis. Although many algorithms have been proposed, it is still challenging to achieve an automatic clinical segmentation, which requires speed and robustness. Automatically segmenting the vertebral column in Magnetic Resonance Imaging (MRI) is extremely challenging, as variations in soft tissue contrast and radio-frequency (RF) inhomogeneities cause image intensity variations; moreover, little work has been done in this area. We propose a generic, slice-independent, learning-based method to automatically segment the vertebrae in spinal MRI images. A main feature of our contribution is that the proposed method is able to segment multiple images of different slices simultaneously. It also has the potential to be imaging-modality independent, as it is not specific to a particular imaging modality. The method consists of two stages: candidate generation and verification. The candidate generation stage is aimed at obtaining the segmentation through energy minimization. In this stage, images are first partitioned into a number of image regions. Then, a Support Vector Machine (SVM) is applied to those pre-partitioned image regions to obtain the class conditional distributions, which are then fed into an energy function and optimized with the graph-cut algorithm. The verification stage applies domain knowledge to verify the segmented candidates and reject unsuitable ones. Experimental results show that the proposed method is very efficient and robust with respect to image slices.

  5. Rotating electrical machines, pt.2: Methods for determining losses and efficiency of rotating electrical machinery from tests (excl. machines for traction vehicles), 1st suppl. Measurement of losses by the calorimetric method

    CERN Document Server

    International Electrotechnical Commission. Geneva

    1974-01-01

    Describes methods for measuring the efficiency of rotating electrical machines, either by determining the total losses on load or by determining the segregated losses, for air and water cooling media. Applies to large generators but may also be used for other machines.

  6. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography

    Directory of Open Access Journals (Sweden)

    İlhan Umut

    2016-01-01

    Full Text Available The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aim to detect periodic leg movements (PLM) in sleep using the channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records; it utilizes machine learning algorithms, statistical methods, and DSP methods. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), whereas the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results show that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present.
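
    The best-performing classifier above, K-nearest neighbour on features derived from non-EMG channels, can be sketched as follows. The feature matrix is a synthetic stand-in for the PSG-derived features (the real features and their separation are not reproduced here), and the RMSE is computed on the binary predictions, as in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

# Illustrative stand-in features from non-EMG PSG channels
# (e.g. band powers, heart-rate change): PLM epochs are shifted
# relative to normal epochs.
n = 300
X_normal = rng.standard_normal((n, 4))
X_plm = rng.standard_normal((n, 4)) + 1.5
X = np.vstack([X_normal, X_plm])
y = np.array([0] * n + [1] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=2)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
pred = knn.predict(Xte)

acc = (pred == yte).mean()
rmse = np.sqrt(((pred - yte) ** 2).mean())   # for 0/1 labels: sqrt(error rate)
print(f"accuracy {acc:.3f}, RMSE {rmse:.3f}")
```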

  7. A Novel Bearing Fault Diagnosis Method Based on Gaussian Restricted Boltzmann Machine

    Directory of Open Access Journals (Sweden)

    Xiao-hui He

    2016-01-01

    Full Text Available To diagnose bearing faults effectively, this paper presents a novel bearing fault diagnosis method based on the Gaussian restricted Boltzmann machine (Gaussian RBM). Vibration signals are first resampled to the same equivalent speed. Subsequently, the envelope spectra of the resampled data are used directly as feature vectors to represent the fault types of the bearing. Finally, in order to deal with the high-dimensional feature vectors based on the envelope spectrum, a classifier model based on the Gaussian RBM is applied. The Gaussian RBM can provide a closed-form representation of the distribution underlying the training data, and it is very convenient for modeling high-dimensional real-valued data. Experiments on 10 different datasets verify the performance of the proposed method. The superiority of the Gaussian RBM classifier is also confirmed by comparison with other classifiers, such as the extreme learning machine, support vector machine, and deep belief network. The robustness of the proposed method is also studied. It can be concluded that the proposed method realizes bearing fault diagnosis accurately and effectively.
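
    The envelope-spectrum features that feed the classifier are commonly computed from the Hilbert-transform envelope of the vibration signal. A minimal sketch on a simulated bearing signal, a resonance carrier amplitude-modulated at an assumed fault frequency; all parameters are illustrative, not taken from the paper's data:

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000                      # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)

# Simulated signal: a 3 kHz resonance amplitude-modulated at a
# hypothetical fault frequency of 100 Hz, plus noise.
fault_hz = 100
carrier = np.sin(2 * np.pi * 3000 * t)
signal = (1 + 0.8 * np.sin(2 * np.pi * fault_hz * t)) * carrier
signal += 0.1 * np.random.default_rng(3).standard_normal(t.size)

# Envelope spectrum: magnitude spectrum of the Hilbert-transform envelope.
envelope = np.abs(hilbert(signal))
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

peak_hz = freqs[np.argmax(spec)]
print(f"dominant envelope frequency: {peak_hz:.1f} Hz")
```

The dominant peak of the envelope spectrum recovers the modulation (fault) frequency even though the raw spectrum is dominated by the 3 kHz carrier; `spec` would serve as the feature vector in the method above.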

  8. A novel method for machine performance degradation assessment based on fixed cycle features test

    Science.gov (United States)

    Liao, Linxia; Lee, Jay

    2009-10-01

    This paper presents a novel machine performance degradation assessment scheme based on a fixed cycle features test (FCFT). Instead of monitoring the machine under a constant working load, FCFT introduces a new testing method which obtains data during the transient periods between different working loads. A novel performance assessment method based on these transient data, without failure history, is proposed. Wavelet packet analysis (WPA) is applied to extract features which capture the dynamic characteristics of the non-stationary vibration data. Principal component analysis (PCA) is used to reduce the dimension of the feature space. A Gaussian mixture model (GMM) is utilized to approximate the density distribution of the lower-dimensional feature space formed by the major principal components. The performance index of the machine is calculated from the overlap between the distribution of the baseline feature space and that of the testing feature space. The Bayesian information criterion (BIC) is used to determine the number of mixtures for the GMM, and a density boosting method is applied to achieve better accuracy of the distribution estimation. A case study of a chiller system performance assessment is used as an example to validate the effectiveness of the proposed method.
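
    The PCA → GMM part of the pipeline can be approximated in a few lines: reduce the features, fit a GMM to the baseline, and score test data against the baseline distribution. The overlap-based performance index of the paper is replaced here by a simpler likelihood-ratio proxy, and the feature matrices are synthetic stand-ins for the wavelet-packet features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Illustrative feature matrices (e.g. wavelet-packet energies per transient):
# baseline = healthy machine, test_bad = degraded (shifted) behaviour.
baseline = rng.standard_normal((200, 8))
test_good = rng.standard_normal((50, 8))
test_bad = rng.standard_normal((50, 8)) + 2.0

pca = PCA(n_components=3).fit(baseline)
gmm = GaussianMixture(n_components=2, random_state=4).fit(pca.transform(baseline))

def health_index(X):
    """Mean baseline log-likelihood, squashed to (0, ~1]; a simple proxy
    for the distribution-overlap index described in the abstract."""
    ll = gmm.score(pca.transform(X))           # mean log-likelihood
    ll_ref = gmm.score(pca.transform(baseline))
    return float(np.exp(ll - ll_ref))          # near 1.0 = healthy

print(f"good: {health_index(test_good):.2f}, bad: {health_index(test_bad):.3f}")
```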

  9. Research on control method for machining non-cylinder pin hole of piston

    Institute of Scientific and Technical Information of China (English)

    WU Yi-jie; LENG Hong-bin; ZHAO Zhang-rong; CHEN Jun-hua

    2006-01-01

    The control method for machining the non-cylindrical pin hole of a piston was studied systematically. A new method was presented by embedding giant magnetostrictive material (GMM) at the proper position in the tool bar. A model was established to characterize the relation between the control current of the coil and the deformation of the tool bar. A series of deformation tests on the giant magnetostrictive tool bar was carried out, and the results validated the feasibility of the principle. The methods of measuring the magnetostrictive coefficient of rare-earth GMM were analyzed, and a measuring device with a bias field and prestress was designed. A series of experiments was done to measure the magnetostrictive coefficient, and the results supplied accurate characteristic parameters for designing GMM application devices. The constitution of the developed control system, comprising displacement detection and temperature detection for thermal deformation compensation, is also introduced. The developed machine tool for boring the non-cylindrical pin hole of a piston has micron-order accuracy. This control method can be applied in other areas for machining precision or complex parts.

  10. Hippocampal shape analysis of Alzheimer disease based on machine learning methods.

    Science.gov (United States)

    Li, S; Shi, F; Pu, F; Li, X; Jiang, T; Xie, S; Wang, Y

    2007-08-01

    Alzheimer disease (AD) is a neurodegenerative disease characterized by progressive dementia. The hippocampus is particularly vulnerable to damage at the very earliest stages of AD. This article seeks to evaluate critical AD-associated regional changes in the hippocampus using machine learning methods. High-resolution MR images were acquired from 19 patients with AD and 20 age- and sex-matched healthy control subjects. Regional changes of bilateral hippocampi were characterized using computational anatomic mapping methods. A feature selection method for support vector machine and leave-1-out cross-validation was introduced to determine regional shape differences that minimized the error rate in the datasets. Patients with AD showed significant deformations in the CA1 region of bilateral hippocampi, as well as the subiculum of the left hippocampus. There were also some changes in the CA2-4 subregions of the left hippocampus among patients with AD. Moreover, the left hippocampal surface showed greater variations than the right compared with those in healthy control subjects. The accuracies of leave-1-out cross-validation and 3-fold cross-validation experiments for assessing the reliability of these subregions were more than 80% in bilateral hippocampi. Subtle and spatially complex deformation patterns of hippocampus between patients with AD and healthy control subjects can be detected by machine learning methods.
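
    The leave-one-out validation scheme used above is straightforward to reproduce. The sketch below runs a linear SVM with leave-one-out cross-validation on synthetic stand-ins for the 19 AD and 20 control subjects' shape features; the features, their dimensionality and the group separation are all invented for illustration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(10)

# Illustrative stand-in for per-subject hippocampal shape features:
# 19 "AD" vs 20 "control" subjects, two discriminative dimensions.
X_ad = rng.standard_normal((19, 6)) + [1.5, 1.2, 0, 0, 0, 0]
X_hc = rng.standard_normal((20, 6))
X = np.vstack([X_ad, X_hc])
y = np.array([1] * 19 + [0] * 20)

# Leave-one-out CV: each subject is held out once, matching the
# small-sample validation scheme used in the abstract.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f} over {len(y)} subjects")
```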

  11. Use of maximum entropy method with parallel processing machine. [for x-ray object image reconstruction

    Science.gov (United States)

    Yin, Lo I.; Bielefeld, Michael J.

    1987-01-01

    The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.

  12. A nonparametric Bayesian method of translating machine learning scores to probabilities in clinical decision support.

    Science.gov (United States)

    Connolly, Brian; Cohen, K Bretonnel; Santel, Daniel; Bayram, Ulya; Pestian, John

    2017-08-07

    Probabilistic assessments of clinical care are essential for quality care. Yet machine learning, which supports this care process, has been limited to categorical results. To maximize its usefulness, it is important to find novel approaches that calibrate the ML output to a likelihood scale. Current state-of-the-art calibration methods are generally accurate and applicable to many ML models, but improved granularity and accuracy of such methods would increase the information available for clinical decision making. This novel nonparametric Bayesian approach is demonstrated on a variety of datasets, including simulated classifier outputs, biomedical datasets from the University of California, Irvine (UCI) Machine Learning Repository, and a clinical dataset built to determine suicide risk from the language of emergency department patients. The method is first demonstrated on support vector machine (SVM) models, which generally produce well-behaved, well-understood scores. It produces calibrations comparable to the state-of-the-art Bayesian Binning in Quantiles (BBQ) method when the SVM models are able to effectively separate cases and controls. However, as the SVM models' ability to discriminate classes decreases, our approach yields more granular and dynamic calibrated probabilities than the BBQ method. Improvements in granularity and range are even more dramatic when the discrimination between the classes is artificially degraded by replacing the SVM model with an ad hoc k-means classifier. The method allows both clinicians and patients to have a more nuanced view of the output of an ML model, allowing better decision making. Trivially extending the method to (non-ML) clinical scores is also discussed.

  13. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances

    Directory of Open Access Journals (Sweden)

    Abut F

    2015-08-01

    Full Text Available Fatih Abut, Mehmet Fatih Akay; Department of Computer Engineering, Çukurova University, Adana, Turkey. Abstract: Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating a person's disease risk. In general, direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite the high level of accuracy, practical limitations associated with direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment and trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, non-expert swimmers and cross-country skiers to healthy, fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including the support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of the various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely the multiple correlation coefficient (R) and the standard error of estimate (SEE). The survey results reveal that, with respect to the regression methods used to develop prediction models, the support vector machine in general shows better performance than the other methods, whereas multiple linear regression exhibits the worst performance.
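
    The two survey metrics are straightforward to compute for any fitted model. The sketch below fits an ordinary least-squares model to a synthetic VO2max-like dataset (the predictors, coefficients and noise level are invented for illustration) and reports the multiple correlation coefficient R and the standard error of estimate SEE:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative predictors (age, BMI, a self-reported activity score) and a
# synthetic VO2max-like response; coefficients are made up.
n = 100
X = rng.uniform([20, 18, 1], [60, 35, 10], size=(n, 3))
y = 60 - 0.3 * X[:, 0] - 0.5 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 2, n)

A = np.column_stack([np.ones(n), X])            # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# The two survey metrics: multiple correlation coefficient R and
# standard error of estimate (SEE).
ss_res = ((y - pred) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
R = np.sqrt(1 - ss_res / ss_tot)
p = X.shape[1]
SEE = np.sqrt(ss_res / (n - p - 1))

print(f"R = {R:.3f}, SEE = {SEE:.2f} mL/kg/min")
```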

  14. A Novel Cogging Torque Simulation Method for Permanent-Magnet Synchronous Machines

    OpenAIRE

    Chun-Yu Hsiao; Jonq-Chin Hwang; Sheng-Nian Yeh

    2011-01-01

    Cogging torque exists between the rotor-mounted permanent magnets and the stator teeth due to magnetic attraction; it is an undesired phenomenon that produces output ripple, vibration and noise in machines. The purpose of this paper is to study the existence and effects of cogging torque, and to present a novel, rapid, half-magnet-pole-pair technique for forecasting and evaluating cogging torque. The technique uses the finite element method as well as Matlab research and development oriented so...

  15. Comparison between 2D and 3D Modelling of Induction Machine Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Zelmira Ferkova

    2015-01-01

    Full Text Available The paper compares two different ways (2D and 3D) of modelling a two-phase squirrel-cage induction machine using the finite element method (FEM). It focuses mainly on the differences between the starting characteristics obtained from the two types of model. It also discusses the influence of skewed rotor slots on the harmonic content of the air-gap flux density and summarizes some issues of both approaches.

  16. An integrated multidisciplinary design optimization method for computer numerical control machine tool development

    Directory of Open Access Journals (Sweden)

    Zaifang Zhang

    2015-02-01

    Full Text Available The computer numerical control machine tool is a typical complex product involving multiple disciplines, a complex structure, and high performance requirements. It is difficult to identify the overall optimal solution of the machine tool structure for its multiple objectives. A new integrated multidisciplinary design optimization method is therefore proposed, using Latin hypercube sampling, a Kriging approximate model, and a multi-objective genetic algorithm. The design space and a parametric model are built by choosing appropriate design variables and their value ranges. Samples in the design space are generated by the optimal Latin hypercube method, and the contributions of the design variables to the design performance are discussed to aid the designer's judgment. The Kriging model is built by polynomial approximation according to the response outputs of these samples. The multidisciplinary design model is established with three optimization objectives, that is, mass, deformation, and first-order natural frequency, and two constraints, that is, the second-order and third-order natural frequencies. The optimal solution is identified by a multi-objective genetic algorithm. The proposed method is applied in a multidisciplinary optimization case study for a typical computer numerical control machine tool. In the optimal solution, the mass decreases by 3.35% and the first-order natural frequency increases by 4.34% compared to the original solution.
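
    The first two steps of the proposed workflow, Latin hypercube sampling of the design space and a Kriging (Gaussian process) surrogate of an expensive response, can be sketched with SciPy and scikit-learn. The response function, bounds and sample sizes below are illustrative stand-ins for the FEM outputs, and the multi-objective GA step is only indicated, not implemented:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for one machine-tool response surface (e.g. deformation as a
# function of two normalized design variables); purely illustrative.
def response(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

l_bounds, u_bounds = [0.0, 0.0], [2.0, 1.0]

# Step 1: space-filling design with Latin hypercube sampling.
X_train = qmc.scale(qmc.LatinHypercube(d=2, seed=6).random(40),
                    l_bounds, u_bounds)
y_train = response(X_train)

# Step 2: Kriging surrogate (a Gaussian process with an RBF kernel).
gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-8,
                              normalize_y=True).fit(X_train, y_train)

# Step 3 (not shown): a multi-objective GA such as NSGA-II would now search
# the cheap surrogate instead of the expensive FEM model.
X_test = qmc.scale(qmc.LatinHypercube(d=2, seed=7).random(200),
                   l_bounds, u_bounds)
err = np.abs(gp.predict(X_test) - response(X_test)).max()
print(f"max surrogate error on 200 held-out points: {err:.4f}")
```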

  17. Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.

    Science.gov (United States)

    Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin

    2017-04-01

    As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only way to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. Two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease than the other selected methods.
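
    A filter-approach pipeline of the kind described, ranking features with a univariate score and training the SVM on the reduced subset, can be sketched with scikit-learn. The dataset is a synthetic stand-in for the CKD data, and SelectKBest with the ANOVA F-score substitutes for the Weka-style evaluators named in the abstract:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the CKD dataset (24 clinical features, only a few
# informative); values are illustrative.
X, y = make_classification(n_samples=400, n_features=24, n_informative=5,
                           n_redundant=3, random_state=8)

# Filter approach: rank features by a univariate score (ANOVA F-value),
# keep the best k, then train the SVM on the reduced set.
full = make_pipeline(StandardScaler(), SVC())
reduced = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), SVC())

acc_full = cross_val_score(full, X, y, cv=5).mean()
acc_reduced = cross_val_score(reduced, X, y, cv=5).mean()
print(f"all 24 features: {acc_full:.3f}, best 8 features: {acc_reduced:.3f}")
```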

  18. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xinrong Ji

    2016-07-01

    Full Text Available In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption, caused by the need to transmit scattered training examples from the various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm obtains approximately the same prediction accuracy as the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it converges in fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.
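
    The centralized version of the underlying model, an ℓ1-regularized kernel minimum-mean-squared-error fit, can be sketched by running the Lasso on the kernel (Gram) matrix: the ℓ1 penalty on the expansion coefficients is what produces the sparse model that keeps inter-node messages small. The data and hyperparameters are illustrative, and the distributed, in-network part of the algorithm is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(9)

# Toy regression task standing in for the sensor-network data.
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(150)

# Centralized l1-regularized KMSE sketch: minimize
#   ||y - K a||^2 + lam * ||a||_1
# over expansion coefficients a; the l1 term drives most of a to zero.
K = rbf_kernel(X, X, gamma=1.0)
model = Lasso(alpha=1e-3, max_iter=50_000).fit(K, y)

sparse_rate = np.mean(model.coef_ == 0)
pred = K @ model.coef_ + model.intercept_
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"{int(np.sum(model.coef_ != 0))} nonzero coefficients "
      f"({sparse_rate:.0%} sparse), training RMSE {rmse:.3f}")
```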

  19. Optimization of image capturing method of wear particles for condition diagnosis of machine parts

    Institute of Scientific and Technical Information of China (English)

    Yon-Sang CHO; Heung-Sik PARK

    2009-01-01

    Wear particles inevitably occur between moving parts, such as a piston and cylinder made from steel or hybrid materials, and the durability of these parts must be evaluated. Wear particle analysis is known to be a very effective method for predicting and assessing the operating condition and damage of machine parts using digital computer image processing. However, no standard has been laid down for calculating the shape parameters of wear particles and the wear volume. In order to apply the image processing method to the durability evaluation of machine parts, the reliability of the data calculated by image processing must be verified, and the number of images and the amount of wear particles in one image must be laid down. In this work, a lubricated friction experiment was carried out in order to establish the optimum image capture conditions with a 1045 steel specimen. The calculated wear particle data differed according to the number of images and the amount of wear particles in one image. The results show that reliable data require more than 140 wear particles in one image and more than 40 images. Thus, the method of capturing wear particle images was optimized for the condition diagnosis of moving machine parts.

  20. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    Science.gov (United States)

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
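The ℓ1-regularized kernel machine described above can be sketched, in centralized batch form, as an iterative soft-thresholding (ISTA) loop over a Gaussian kernel expansion. This is only a minimal illustration of the objective that the paper distributes over sensor nodes; the toy data, kernel width, step size and regularization weight below are illustrative assumptions, not settings from the paper.

```python
import math

def gaussian_kernel(xi, xj, gamma=1.0):
    return math.exp(-gamma * (xi - xj) ** 2)

def soft_threshold(v, t):
    # proximal operator of the l1 penalty: shrinks v toward zero by t
    return math.copysign(max(abs(v) - t, 0.0), v)

def l1_kernel_fit(x, y, lam=1e-3, gamma=1.0, lr=0.2, iters=500):
    """ISTA for min_a 0.5*||K a - y||^2 + lam*||a||_1 (centralized sketch)."""
    n = len(x)
    K = [[gaussian_kernel(x[i], x[j], gamma) for j in range(n)] for i in range(n)]
    a = [0.0] * n
    for _ in range(iters):
        # residual r = K a - y, gradient g = K^T r (K is symmetric)
        r = [sum(K[i][j] * a[j] for j in range(n)) - y[i] for i in range(n)]
        g = [sum(K[j][i] * r[j] for j in range(n)) for i in range(n)]
        a = [soft_threshold(a[i] - lr * g[i], lr * lam) for i in range(n)]
    return K, a

x = [0.0, 1.0, 2.0, 3.0]                      # toy 1-D inputs
y = [math.sin(v) for v in x]                  # toy regression targets
K, a = l1_kernel_fit(x, y)
pred = [sum(K[i][j] * a[j] for j in range(len(x))) for i in range(len(x))]
```

On this toy problem the fitted expansion reproduces the targets closely; the ℓ1 term can drive small coefficients exactly to zero, which is the source of the sparse model that the distributed algorithm transmits between single-hop neighbors.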

  2. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....

  3. Multi-Machine Controller Design of Permanent Magnet Wind Generators using Hamiltonian Energy Method

    Directory of Open Access Journals (Sweden)

    Bing Wang

    2013-07-01

    In this paper, the nonlinear control problem of permanent magnet wind generators is investigated based on the Hamiltonian energy method. A nonlinear design method is proposed for the multi-machine system such that the closed-loop system is simultaneously stable. Moreover, in the presence of disturbances, the closed loop is finite-gain L2 stable under the action of the Hamiltonian controller. In order to illustrate the effectiveness of the proposed method, simulations are performed; they show that the resulting controller can improve the transient behavior and robustness of the system.

  4. Investigation of Unbalanced Magnetic Force in Magnetic Geared Machine Using Analytical Methods

    DEFF Research Database (Denmark)

    Zhang, Xiaoxu; Liu, Xiao; Chen, Zhe

    2016-01-01

    The electromagnetic structure of the magnetic geared machine (MGM) may induce a significant unbalanced magnetic force (UMF). However, few methods have been developed to theoretically reveal the essential reasons for this issue in the MGM. In this paper, an analytical method based on an air-gap...... relative permeance theory is first developed to qualitatively study the origins of the UMF in the MGM. By means of formula derivations, three kinds of magnetic field behaviors in the air gaps are found to be the potential sources of UMF. It is also proved that the UMF is possible to avoid by design choices...... the results achieved by the developed analytical methods....

  5. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    Science.gov (United States)

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends highly on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.

  6. Application of PROMETHEE-GAIA method for non-traditional machining processes selection

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2012-10-01

    With the ever increasing demand for manufactured products of hard alloys and metals with high surface finish and complex shape geometry, more interest is now being paid to non-traditional machining (NTM) processes, where energy in its direct form is used to remove material from the workpiece surface. Compared to conventional machining processes, NTM processes possess almost unlimited capabilities, and there is a strong belief that the use of NTM processes will keep increasing in a diverse range of applications. The presence of a large number of NTM processes with complex characteristics and capabilities, together with a lack of experts in the NTM process selection domain, calls for a structured approach to NTM process selection for a given machining application. Past researchers have attempted to solve NTM process selection problems using various complex mathematical approaches which often require profound knowledge of mathematics/artificial intelligence on the part of process engineers. In this paper, four NTM process selection problems are solved using an integrated PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) method, which acts as a visual decision aid for process engineers. The observed results are quite satisfactory and exactly match the expected solutions.
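The ranking core of PROMETHEE II can be sketched in a few lines: pairwise preferences are aggregated with criterion weights into net outranking flows. The sketch below uses the simplest ("usual") preference function and entirely made-up process names, scores and weights; the GAIA visual analysis is a separate projection step and is omitted here.

```python
def promethee_net_flows(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function: alternative a is preferred to b on a criterion whenever
    its score is strictly higher (all criteria are to be maximized)."""
    n = len(scores)
    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = sum(w for sa, sb, w in zip(scores[a], scores[b], weights) if sa > sb)
            pi_ba = sum(w for sa, sb, w in zip(scores[a], scores[b], weights) if sb > sa)
            phi[a] += (pi_ab - pi_ba) / (n - 1)
    return phi

# hypothetical NTM alternatives scored on three criteria
# (removal rate, surface finish, cost efficiency); weights sum to 1
scores = [
    [0.9, 0.6, 0.7],   # illustrative "process A"
    [0.5, 0.8, 0.6],   # illustrative "process B"
    [0.7, 0.7, 0.9],   # illustrative "process C"
]
weights = [0.5, 0.3, 0.2]
phi = promethee_net_flows(scores, weights)
ranking = sorted(range(len(phi)), key=lambda i: -phi[i])  # best alternative first
```

A positive net flow means an alternative outranks its competitors more than it is outranked; the full PROMETHEE family also supports linear, level and Gaussian preference functions with indifference/preference thresholds.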

  7. The reduction methods of operator's radiation dose for portable dental X-ray machines

    Directory of Open Access Journals (Sweden)

    Jeong-Yeon Cho

    2012-08-01

    Objectives This study aimed to investigate methods to reduce the operator's radiation dose when taking intraoral radiographs with portable dental X-ray machines. Materials and Methods Two kinds of portable dental X-ray machines (DX3000, Dexcowin and Rextar, Posdion) were used. The operator's radiation dose was measured with a 1,800 cc ionization chamber (RadCal Corp.) at the hand level of the X-ray tubehead and at the operator's chest and waist levels, with and without the backscatter shield. The operator's radiation dose at the hand level was measured with and without lead gloves and with long and short cones. Results The backscatter shield reduced the operator's radiation dose at the hand level of the X-ray tubehead to 23 - 32%, the lead gloves to 26 - 31%, and the long cone to 48 - 52%. The backscatter shield reduced the operator's radiation dose at the chest and waist levels to 0.1 - 37%. Conclusions When portable dental X-ray systems are used, it is recommended to select an X-ray machine fitted with a backscatter shield and a long cone and to wear lead gloves.

  8. Improved machine learning method for analysis of gas phase chemistry of peptides

    Directory of Open Access Journals (Sweden)

    Ahn Natalie

    2008-12-01

    Background Accurate peptide identification is important to high-throughput proteomics analyses that use mass spectrometry. Search programs compare fragmentation spectra (MS/MS) of peptides from complex digests with theoretically derived spectra from a database of protein sequences. Improved discrimination is achieved with theoretical spectra that are based on simulating the gas phase chemistry of the peptides, but the limited understanding of those processes affects the accuracy of predictions from theoretical spectra. Results We employed a robust data mining strategy using new feature annotation functions of MAE software, which revealed under-prediction of the frequency of fragmentation at the second peptide bond. We applied methods of exploratory data analysis to pre-process the information in the MS/MS spectra, including data normalization and attribute selection, to reduce the attributes to a smaller, less correlated set for machine learning studies. We then compared our rule-building machine learning program, DataSqueezer, with commonly used association rule and decision tree algorithms. All of the machine learning algorithms produced similar results that were consistent with the expected properties of a second gas phase mechanism at the second peptide bond. Conclusion The results provide compelling evidence that we have identified underlying chemical properties in the data that suggest the existence of an additional gas phase mechanism for the second peptide bond. The methods described in this study thus provide a valuable approach for analyses of this kind in the future.

  9. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
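The cross-validated feature-to-function prediction described above can be illustrated with a deliberately tiny stand-in: one hypothetical antibody feature, one hypothetical functional readout, an ordinary least-squares line, and k-fold cross-validated R². The actual study used many features and richer models; everything below, including the variable names and data, is an invented illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

def kfold_r2(x, y, k=4):
    """Cross-validated R^2: fit on k-1 folds, predict the held-out fold."""
    n = len(x)
    preds = [0.0] * n
    for fold in range(k):
        train = [i for i in range(n) if i % k != fold]
        b0, b1 = fit_line([x[i] for i in train], [y[i] for i in train])
        for i in range(n):
            if i % k == fold:
                preds[i] = b0 + b1 * x[i]
    my = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, preds))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# hypothetical data: feature = antigen-specific IgG level, outcome = effector score
feature = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
outcome = [1.1, 2.0, 3.1, 4.0, 5.1, 6.0, 7.1, 8.0]  # roughly 2*feature
r2 = kfold_r2(feature, outcome, k=4)
```

Because every prediction is made on data held out of the fit, the resulting R² estimates how well the feature would predict the functional outcome for unseen vaccinees, which is the point of the cross-validation in the abstract.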

  10. Design Method for Fast Switching Seat Valves for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Roemer, Daniel Beck; Johansen, Per; Pedersen, Henrik C.;

    2014-01-01

    Digital Displacement (DD) machines are upcoming technology where the displacement of each pressure chamber is controlled electronically by use of two fast switching seat valves. The effective displacement and operation type (pumping/motoring) may be controlled by manipulating the seat valves...... operation, where switching times must be performed within a few milliseconds. These valve requirements make a simulation based design approach essential, where mechanical strength, thermal dissipation, fluid dynamics and electro-magnetic dynamics must be taken into account. In this paper a complete design...... of the valves. A coupled optimization is finally conducted to optimize the electro-magnetic actuator, leading to a valve design based on the chosen valve topology. The design method is applied to an example DD machine and the resulting valve design fulfilling the requirements is presented....

  11. Machine Learning methods in fitting first-principles total energies for substitutionally disordered solid

    Science.gov (United States)

    Gao, Qin; Yao, Sanxi; Widom, Michael

    2015-03-01

    Density functional theory (DFT) provides an accurate, first-principles description of solid structures and total energies. However, it is highly time-consuming to calculate structures with hundreds of atoms in the unit cell, and practically impossible to calculate thousands of atoms. We apply and adapt machine learning algorithms, including compressive sensing, support vector regression and artificial neural networks, to fit the DFT total energies of substitutionally disordered boron carbide. A nonparametric kernel method is also included in our models. Our fitted total energy model reproduces the DFT energies with a prediction error of around 1 meV/atom. The assumptions of these machine learning models and applications of the fitted total energies will also be discussed. Financial support from the McWilliams Fellowship and the ONR-MURI under Grant No. N00014-11-1-0678 is gratefully acknowledged.

  13. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Science.gov (United States)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  14. Simulation of the Carton Erection for the Rubber Glove Packing Machine Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    Jewsuwun Kawin

    2017-01-01

    The rubber glove packing machine was designed with an important function that works with folding cartons. Each folded paper carton is pulled erect by vacuum cups. Some cartons could not form completely because of an unsuitable design of the erector: cartons collapsed or buckled while being pulled by the vacuum cups, causing sudden stops of the packing process and affecting the output and cost of rubber glove production. This research aimed to use a simulation method for erecting the folded carton. A finite element (FE) model of the rubber glove carton was created with shell elements. Orthotropic material properties were employed in the FE model to analyze the erection behavior of the folding carton. The number, positions, and rotation points of the vacuum cups were simulated until a satisfactory erection of the folding carton was obtained. Subsequently, the finite element analysis results will be used to fabricate the erector of the rubber glove packing machine in further work.

  15. Greedy and Linear Ensembles of Machine Learning Methods Outperform Single Approaches for QSPR Regression Problems.

    Science.gov (United States)

    Kew, William; Mitchell, John B O

    2015-09-01

    The application of machine learning to cheminformatics is a large and active field of research, but few papers discuss whether ensembles of different machine learning methods can improve upon the performance of their component methodologies. Here we investigated a variety of methods, including kernel-based, tree, linear, and neural network methods, together with both greedy and linear ensemble methods. These were all tested against a standardised methodology for regression with data relevant to the pharmaceutical development process. This investigation focused on QSPR problems within drug-like chemical space. We aimed to investigate which methods perform best, and how the 'wisdom of crowds' principle can be applied to ensemble predictors. It was found that no single method performs best for all problems, but that a dynamic, well-structured ensemble predictor performs very well across the board, usually providing an improvement in performance over the best single method. Its use of weighting factors allows the greedy ensemble to acquire a bigger contribution from the better performing models, and this helps the greedy ensemble generally to outperform the simpler linear ensemble. The choice of data preprocessing methodology was also found to be crucial to the performance of each method. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
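Greedy ensemble selection of the kind discussed above can be sketched as repeatedly adding (with replacement) the model whose inclusion most lowers the validation error of the ensemble average; repeated selection of a model is what gives it a larger effective weight. The model names and predictions below are invented toy values, not results from the paper.

```python
def rmse(pred, y):
    return (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5

def greedy_ensemble(model_preds, y, rounds=5):
    """Greedy selection with replacement: each round, add the model whose
    inclusion gives the lowest RMSE of the uniform ensemble average."""
    chosen = []
    for _ in range(rounds):
        best_name, best_err = None, float("inf")
        for name in sorted(model_preds):
            trial = [model_preds[n] for n in chosen] + [model_preds[name]]
            avg = [sum(col) / len(trial) for col in zip(*trial)]
            err = rmse(avg, y)
            if err < best_err:
                best_err, best_name = err, name
        chosen.append(best_name)
    final = [model_preds[n] for n in chosen]
    ens = [sum(col) / len(final) for col in zip(*final)]
    return chosen, rmse(ens, y)

y_true = [1.0, 2.0, 3.0, 4.0]
preds = {
    "kernel": [1.1, 2.1, 2.9, 3.9],   # small symmetric errors
    "tree":   [1.5, 2.5, 3.5, 4.5],   # biased high
    "linear": [0.5, 1.5, 2.5, 3.5],   # biased low
}
chosen, ens_err = greedy_ensemble(preds, y_true)
```

Because selection is with replacement, the ensemble can never end up worse on the validation data than the best single model, which matches the paper's observation that the greedy ensemble usually improves on its best component.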

  16. A Method for Extracting Important Segments from Documents Using Support Vector Machines

    Science.gov (United States)

    Suzuki, Daisuke; Utsumi, Akira

    In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not by Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the classification result by SVMs. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. The result was that our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.

  17. Less is more: regularization perspectives on large scale machine learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.

  18. Identifying Structural Flow Defects in Disordered Solids Using Machine-Learning Methods

    Science.gov (United States)

    Cubuk, E. D.; Schoenholz, S. S.; Rieser, J. M.; Malone, B. D.; Rottler, J.; Durian, D. J.; Kaxiras, E.; Liu, A. J.

    2015-03-01

    We use machine-learning methods on local structure to identify flow defects—or particles susceptible to rearrangement—in jammed and glassy systems. We apply this method successfully to two very different systems: a two-dimensional experimental realization of a granular pillar under compression and a Lennard-Jones glass in both two and three dimensions above and below its glass transition temperature. We also identify characteristics of flow defects that differentiate them from the rest of the sample. Our results show it is possible to discern subtle structural features responsible for heterogeneous dynamics observed across a broad range of disordered materials.

  19. Neutron–gamma discrimination based on the support vector machine method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xunzhen [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Key Laboratory of High Energy Density Physics and Technology (Ministry of Education ), Sichuan University, Chengdu 610064 (China); Zhu, Jingjun [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Lin, ShinTed [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Key Laboratory of High Energy Density Physics and Technology (Ministry of Education ), Sichuan University, Chengdu 610064 (China); Wang, Li [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Xing, Haoyang, E-mail: xhy@scu.edu.cn [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Key Laboratory of High Energy Density Physics and Technology (Ministry of Education ), Sichuan University, Chengdu 610064 (China); Zhang, Caixun; Xia, Yuxi; Liu, Shukui [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Yue, Qian [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Wei, Weiwei; Du, Qiang [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Tang, Changjian [School of Physical Science and Technology, Sichuan University, Chengdu 610041, Sichuan (China); Key Laboratory of High Energy Density Physics and Technology (Ministry of Education ), Sichuan University, Chengdu 610064 (China)

    2015-03-21

    In this study, the combination of the support vector machine (SVM) method with the moment analysis method (MAM) is proposed and utilized to perform neutron/gamma (n/γ) discrimination of the pulses from an organic liquid scintillator (OLS). Neutron and gamma events, which can be firmly separated on the scatter plot drawn by the charge comparison method (CCM), are selected to form the training and test data sets for the SVM, and the MAM is used to create the feature vectors for individual events in the data sets. Compared to traditional methods such as CCM, the proposed method can not only discriminate the neutron and gamma signals, even at lower energy levels, but also provide the corresponding classification accuracy for each event, which is useful in validating the discrimination. Meanwhile, the proposed method can also offer a prediction of the classification for events below the energy limit.
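The supervised classification step above can be illustrated with a dependency-free stand-in: instead of a full SVM (which needs a quadratic-programming solver), a perceptron is trained on two illustrative pulse-shape features, total charge and tail-to-total charge ratio, in the spirit of moment analysis and charge comparison. The feature values, labels and training setup below are invented assumptions, not the paper's data.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_perceptron(X, y, epochs=500):
    """Linear classifier trained by the perceptron rule; it converges on
    linearly separable data (labels: +1 for neutron, -1 for gamma)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (dot(w, xi) + b) <= 0:   # misclassified: update
                w = [wk + yi * xk for wk, xk in zip(w, xi)]
                b += yi
                mistakes += 1
        if mistakes == 0:
            break
    return w, b

# illustrative features: (total charge, tail/total charge ratio)
events = [(0.8, 0.35), (1.1, 0.40), (0.9, 0.38),   # neutron-like: larger tail
          (0.7, 0.10), (1.2, 0.12), (1.0, 0.08)]   # gamma-like: smaller tail
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(events, labels)
correct = all(l * (dot(w, (q, r)) + b) > 0 for (q, r), l in zip(events, labels))
```

A real SVM would additionally maximize the margin and, via its decision value, supply the per-event confidence that the abstract highlights; the linear decision rule itself is the same shape.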

  20. Thermal Error Modeling of the CNC Machine Tool Based on Data Fusion Method of Kalman Filter

    Directory of Open Access Journals (Sweden)

    Haitong Wang

    2017-01-01

    This paper presents a modeling methodology for the thermal error of machine tools. The temperatures predicted by a modified lumped-mass method and the temperatures measured by sensors are fused by the Kalman filter data fusion method. The fused temperatures, instead of the measured temperatures used in traditional methods, are applied to predict the thermal error. A genetic algorithm is implemented to optimize the parameters of the modified lumped-mass method and the covariances in the Kalman filter. The simulations indicate that the proposed method performs much better than the traditional MRA method, in terms of prediction accuracy and robustness under a variety of operating conditions. A compensation system was developed based on the Siemens 840D controller. Validated by the compensation experiment, the thermal error after compensation is reduced dramatically.
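The fusion step can be sketched with a scalar Kalman filter: the thermal model supplies the predict step and the sensor reading drives the update, with the two covariances weighting how much each source is trusted (these are the quantities the paper tunes with a genetic algorithm). The temperatures and variances below are illustrative assumptions.

```python
def kalman_fuse(model_temps, sensor_temps, q=0.05, r=0.2):
    """Scalar Kalman filter fusing model-predicted and measured temperatures.
    q: variance added when trusting the thermal model; r: sensor noise variance."""
    p = 1.0                       # initial estimate variance
    fused = []
    for pred, z in zip(model_temps, sensor_temps):
        x, p = pred, p + q        # predict: take the lumped-mass model output
        k = p / (p + r)           # Kalman gain, always in (0, 1)
        x = x + k * (z - x)       # update: blend in the sensor reading
        p = (1.0 - k) * p
        fused.append(x)
    return fused

model_temps  = [20.0, 21.0, 22.5, 24.0, 25.0]   # illustrative model output (degrees C)
sensor_temps = [20.4, 21.6, 22.1, 24.6, 25.5]   # illustrative noisy readings
fused = kalman_fuse(model_temps, sensor_temps)
```

Each fused temperature lies between the model prediction and the measurement, with the gain deciding the blend; the fused series is then what feeds the thermal-error model instead of the raw sensor data.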

  1. A Method for Anomaly Detection of User Behaviors Based on Machine Learning

    Institute of Scientific and Technical Information of China (English)

    TIAN Xin-guang; GAO Li-zhi; SUN Chun-lai; DUAN Mi-yi; ZHANG Er-yang

    2006-01-01

    This paper presents a new anomaly detection method based on machine learning. Applicable to host-based intrusion detection systems, the method uses shell commands as audit data. It employs shell command sequences of different lengths to characterize the behavioral patterns of a network user, and constructs multiple sequence libraries to represent the user's normal behavior profile. In the detection stage, the behavioral patterns in the audit data are mined by a sequence-matching algorithm, and the similarities between the mined patterns and the historical profile are evaluated. These similarities are then smoothed with sliding windows, and the smoothed similarities are used to determine whether the monitored user's behavior is normal or anomalous. Experimental results show that the method achieves higher detection accuracy and shorter detection time than the instance-based method presented by Lane T. The method has been successfully applied in practical host-based intrusion detection systems.
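The pipeline above (sequence library, pattern matching, sliding-window smoothing) can be sketched with fixed-length command n-grams. The commands and the single library below are invented toy data; the paper uses multiple libraries over sequences of different lengths.

```python
def ngram_library(commands, n=3):
    """Library of length-n command patterns mined from normal training data."""
    return {tuple(commands[i:i + n]) for i in range(len(commands) - n + 1)}

def match_scores(commands, library, n=3):
    """1.0 if the length-n pattern at each position is in the library, else 0.0."""
    return [1.0 if tuple(commands[i:i + n]) in library else 0.0
            for i in range(len(commands) - n + 1)]

def smooth(scores, window=3):
    """Sliding-window average of the raw match scores."""
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

normal = ["ls", "cd", "vim", "make", "ls", "cd", "vim", "make", "ls", "cd"]
library = ngram_library(normal)

# the user's usual session matches the profile; an attack-like burst does not
usual  = ["ls", "cd", "vim", "make", "ls", "cd", "vim"]
attack = ["nc", "chmod", "wget", "nc", "chmod", "wget", "nc"]
s_usual  = smooth(match_scores(usual, library))
s_attack = smooth(match_scores(attack, library))
```

Thresholding the smoothed similarity (here 1.0 for the usual session versus 0.0 for the attack-like one) gives the normal/anomalous decision; the smoothing window suppresses isolated mismatches that would otherwise cause false alarms.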

  3. A NEW GENERATING METHOD FOR THE MACHINING OF A CYLINDRICAL GEAR WITH SYMMETRIC ARCUATE TOOTH TRACE

    Institute of Scientific and Technical Information of China (English)

    马振群; 龚堰珏; 王小椿

    2004-01-01

    Objective To introduce a new generating method for machining a cylindrical gear with a symmetric arcuate tooth trace. Methods Adopting this method, the key problems of mismatch control and manufacturing of symmetric arcuate tooth trace gears are solved by using a suitable cutter tilt and a new generating method with a double-edged gear-wheel cutter. The machining principle is analyzed and the mathematical model of the generating motion is established. Then the tooth flank equation and differential geometrical parameters are discussed. Results A minimal alteration of the cutter tilt changes the contact flank area so as to satisfy special requirements. The tip relief of the gearing is easy to realize by altering the coefficients of every moving axis. Because the tooth has an arc shape, symmetric arcuate cylindrical gears have higher overall strength, and flank grinding for high precision is easy to perform. Conclusion This new generating method has higher productivity. It is easy to obtain a perfect contact zone and to fully exploit the potential load-bearing capacity of the gears. Symmetric arcuate cylindrical gears can be used in highly durable, heavy-duty gearing applications.

  4. Learning-Based Curriculum Development

    Science.gov (United States)

    Nygaard, Claus; Hojlt, Thomas; Hermansen, Mads

    2008-01-01

    This article is written to inspire curriculum developers to centre their efforts on the learning processes of students. It presents a learning-based paradigm for higher education and demonstrates the close relationship between curriculum development and students' learning processes. The article has three sections: Section "The role of higher…

  6. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget...

  7. Study of geometric errors detection method for NC machine tools based on non-contact circular track

    Science.gov (United States)

    Yan, Kejun; Liu, Jun; Gao, Feng; Wang, Huan

    2008-12-01

    This paper presents a non-contact method for measuring the geometric errors of NC machine tools, based on a circular track test. The machine spindle is moved along a circular path, and the position error at every tested point on the circle is obtained using two laser interferometers. With a volumetric error model, the 12 geometric error components other than the angular error components can be derived. The method features a wide detection range and high precision. Since the geometric errors are obtained individually, the method is of great significance for the error compensation of NC machine tools. It was tested on an MCV-510 NC machine tool, and the experimental results prove its feasibility.

  8. Uniform surface polished method of complex holes in abrasive flow machining

    Institute of Scientific and Technical Information of China (English)

    A-Cheng WANG; Lung TSAI; Kuo-Zoo LIANG; Chun-Ho LIU; Shi-Hong WENG

    2009-01-01

    Abrasive flow machining (AFM) is an effective method for removing the recast layer produced by wire electrical discharge machining (WEDM). However, the surface roughness is not easily made uniform when a complex hole is polished by this method. CFD numerical simulation is used to design passageways that yield a smooth, uniform roughness over the complex hole in AFM. The simulations reveal that the shear forces in the polishing process and the flow properties of the medium control the roughness over the entire surface. A power-law model was first set up from the effect of shear rate on medium viscosity, and the coefficients of the power law were found by solving the algebraic equations relating shear rates and viscosities. The velocities, strain rates and shear forces of the medium acting on the surface were then obtained at constant pressure with CFD software. Finally, an optimal mold core to be placed in the complex hole could be designed from these simulations. The results show that the shear forces and strain rates change sharply over the surface if no mold core is inserted into the complex hole, whereas they hardly vary when the core shape is similar to the hole. Three types of mold core were tested experimentally; the results demonstrate that inserting a mold core of similar shape into the hole yields uniform roughness on the surface.
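
    The power-law fit described above can be sketched as follows. The shear-rate/viscosity measurements here are invented for illustration; the paper's actual medium data are not given. Taking logarithms turns the power law into a straight line, so the coefficients follow from an ordinary least-squares fit.

```python
import numpy as np

# Hypothetical shear-rate / viscosity measurements for an AFM medium
# (illustrative values only, generated from a known shear-thinning law).
shear_rate = np.array([1.0, 10.0, 100.0, 1000.0])      # 1/s
viscosity = 500.0 * shear_rate ** (0.4 - 1.0)          # Pa*s

# Power-law model: mu = K * gamma_dot**(n - 1).
# In log-log space this is linear, so fit a first-degree polynomial.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0          # flow-behaviour index
K = np.exp(intercept)    # consistency coefficient

print(n, K)
```

    On exact power-law data the fit recovers the generating coefficients; with real measurements the residuals of the log-log fit indicate how well the power-law model describes the medium.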

  9. A divide-and-combine method for large scale nonparallel support vector machines.

    Science.gov (United States)

    Tian, Yingjie; Ju, Xuchan; Shi, Yong

    2016-03-01

    Nonparallel Support Vector Machine (NPSVM), which is more flexible and generalizes better than the typical SVM, is widely used for classification. Although solvers and toolboxes such as SMO and libsvm can be applied to NPSVM, it is hard to scale up when facing millions of samples. In this paper, we propose a divide-and-combine method for large-scale nonparallel support vector machines (DCNPSVM). In the division step, DCNPSVM divides the samples into smaller subsets so that smaller subproblems can be solved independently. We prove theoretically and show experimentally that the objective function value, solutions, and support vectors obtained by DCNPSVM are close to those of the whole NPSVM problem. In the combination step, the sub-solutions are used as initial iteration points to solve the whole problem by global coordinate descent, which converges quickly. To balance accuracy and efficiency, we adopt a multi-level structure that outperforms state-of-the-art methods. Moreover, DCNPSVM can handle unbalanced problems efficiently by tuning its parameters. Experimental results on many large data sets show the effectiveness of our method in terms of memory usage, classification accuracy, and running time.
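
    The divide-and-combine idea can be illustrated with standard tools (this is not the authors' NPSVM solver): train an SVM on each chunk, pool the support vectors of the subproblems, and refit once on the pooled set as the combination step.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in data; the paper works with much larger sets.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Division step: solve four independent subproblems.
chunks = np.array_split(np.arange(len(X)), 4)
sv_idx = []
for idx in chunks:
    sub = SVC(kernel="linear", C=1.0).fit(X[idx], y[idx])
    sv_idx.extend(idx[sub.support_])       # keep each chunk's support vectors

# Combination step (simplified): refit on the pooled support vectors.
final = SVC(kernel="linear", C=1.0).fit(X[sv_idx], y[sv_idx])
print(round(final.score(X, y), 3))
```

    Because support vectors summarize each subproblem's decision boundary, the pooled refit approximates the full solution at a fraction of the training cost.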

  10. Machine learning and statistical methods for the prediction of maximal oxygen uptake: recent advances.

    Science.gov (United States)

    Abut, Fatih; Akay, Mehmet Fatih

    2015-01-01

    Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume per minute in a state of intense exercise. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating a person's disease risk. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite its high accuracy, practical limitations associated with direct measurement, such as the need for expensive and sophisticated laboratory equipment and trained staff, have led to the development of various regression models for predicting VO2max. Consequently, many studies have been conducted in recent years to predict the VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, and cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machines, multilayer perceptrons, general regression neural networks, and multiple linear regression. The purpose of this study is to give a detailed overview of the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of the various VO2max prediction models reported in the related literature in terms of two well-known metrics, namely, the multiple correlation coefficient (R) and the standard error of estimate. The survey results reveal that, with respect to the regression methods used to develop prediction models, support vector machines generally show better performance than the other methods, whereas multiple linear regression exhibits the worst performance.
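
    The comparison of regression methods by R and standard error of estimate can be sketched as below. The predictor variables and target here are synthetic stand-ins (age, BMI, heart rate with an assumed linear relation), not data from any of the surveyed studies.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for a VO2max dataset: columns are age, BMI, heart rate.
rng = np.random.default_rng(0)
X = rng.uniform([20, 18, 60], [60, 35, 200], size=(300, 3))
y = 60 - 0.3 * X[:, 0] - 0.5 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 1.5, 300)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
results = {}
for name, model in [("MLR", LinearRegression()),
                    ("SVR", SVR(C=10.0, gamma="scale"))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    r = np.corrcoef(yte, pred)[0, 1]              # multiple correlation R
    see = np.sqrt(np.mean((yte - pred) ** 2))     # standard error of estimate
    results[name] = (r, see)
    print(f"{name}: R={r:.2f} SEE={see:.2f}")
```

    On real VO2max data the survey's conclusion is the reverse of what this linear toy data would suggest: SVM tends to win because the true predictor-response relation is nonlinear.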

  11. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    Science.gov (United States)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multilayer perceptron, multilayer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. Five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the random…
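
    A benchmark of this kind, scoring several of the listed classifiers with 10-fold cross-validation, can be sketched as follows (on the small iris dataset as a stand-in for the Sentinel-2 pixels):

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
models = {
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(),
    "rf": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}
means = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold accuracy
    means[name] = scores.mean()
    print(f"{name}: {means[name]:.3f}")
```

    Accuracy on the training pixels (the paper's "train" validation) is optimistic; the cross-validated and independent-control scores are the ones that reveal real differences between methods.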

  12. An Auto-flag Method of Radio Visibility Data Based on Support Vector Machine

    Science.gov (United States)

    Hui-mei, Dai; Ying, Mei; Wei, Wang; Hui, Deng; Feng, Wang

    2017-01-01

    The Mingantu Ultrawide Spectral Radioheliograph (MUSER) has entered its test observation stage. With the data acquisition and storage system in place, it is urgent to automatically flag and eliminate abnormal visibility data so as to improve imaging quality. In this paper, guided by the observational records, we create a credible visibility set and train a corresponding flagging model for the visibility data using the support vector machine (SVM) technique. The results show that the SVM is a robust approach for flagging MUSER visibility data and can attain an accuracy of about 86%. Moreover, the method is not affected by solar activity such as flare eruptions.

  13. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized support vector machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are comprehensively applied for data preprocessing and parameter optimization, respectively. The main goal of this study is to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, ROC curves, and other measures are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM achieves better accuracy and remarkably higher efficiency than traditional SVM algorithms.
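
    The normalization + PCA + grid-search pipeline can be sketched with standard scikit-learn parts; the multilevel (coarse-to-fine) grid refinement of PMSVM is not shown here, only a single-level search over the SVM parameters.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Normalize, reduce dimensionality with PCA, then classify with an SVM.
pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),
                 ("svm", SVC())])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(Xtr, ytr)
print(round(grid.score(Xte, yte), 3))
```

    A multilevel search would re-run the grid on a finer mesh around `grid.best_params_`, trading a few extra fits for a better-tuned classifier.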

  14. Research of Multi-axis NC Machining Method of Cylindrical Cam Based on UG NX

    Directory of Open Access Journals (Sweden)

    Liang Qianhua

    2017-01-01

    Full Text Available We have focused significant effort on developing solutions for the precision machining of cylindrical cams based on UG NX. Starting from a digital model of a cylindrical cam derived through parametric design, a variety of processing methods are put forward, compared in detail, analyzed, and elaborated. Simulation machining, post-processing, and NC programming are carried out with the optimized processing scheme. This provides a reference for the numerical control programming of four-axis coordinated machining.

  15. A Comparative Study of Three Machine Learning Methods for Software Fault Prediction

    Institute of Scientific and Technical Information of China (English)

    WANG Qi; ZHU Jie; YU Bo

    2005-01-01

    The contribution of this paper is a comparison of three popular machine learning methods for software fault prediction: classification trees, neural networks, and case-based reasoning. First, three classifiers are built based on these three approaches. Second, the classifiers use the same product metrics as predictor variables to identify fault-prone components. Third, the prediction results are compared on two aspects: how good the prediction capabilities of the models are, and how well the models support understanding of the process represented by the data.

  16. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multilayer perceptron, multilayer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. Five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the…

  17. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially clustered sets of voxels that are particularly suited for clinical interpretation. RVoxM automatically tunes all its free parameters during the training phase, and offers the additional advantage of producing probabilistic prediction outcomes. Experiments on age prediction from structural brain MRI indicate that RVoxM yields biologically meaningful models that provide excellent predictive accuracy.
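
    RVoxM itself is not available in standard libraries; as a loose analogue of its sparsity-inducing Bayesian machinery, automatic relevance determination (ARD) regression prunes irrelevant features. The sketch below shows ARD recovering which of 20 synthetic "voxels" actually drive the target.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Synthetic data: only features 0 and 4 influence the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(0, 0.1, 200)

# ARD fits a per-feature prior precision, shrinking irrelevant
# coefficients toward zero during training.
model = ARDRegression().fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.5)
print(relevant)
```

    Unlike ARD's per-feature sparsity, RVoxM additionally encourages the retained voxels to form spatial clusters, which is what makes its models clinically interpretable.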

  18. Enhanced needle localization in ultrasound using beam steering and learning-based segmentation.

    Science.gov (United States)

    Hatt, Charles R; Ng, Gary; Parthasarathy, Vijay

    2015-04-01

    Segmentation of needles in ultrasound images remains a challenging problem. In this paper, we introduce a machine learning-based method for needle segmentation in 2D beam-steered ultrasound images. We used a statistical boosting approach to train a pixel-wise classifier for needle segmentation. The Radon transform was then used to find the needle position and orientation from the segmented image. We validated our method with data from ex vivo specimens and clinical nerve block procedures, and compared the results to those obtained using previously reported needle segmentation methods. The results show improved localization success and accuracy using the proposed method. For the ex vivo datasets, assuming that the needle orientation was known a priori, the needle was successfully localized in 86.2% of the images, with a mean targeting error of 0.48 mm. The robustness of the proposed method to a lack of a priori knowledge of needle orientation was also demonstrated. For the clinical datasets, assuming that the needle orientation was closely aligned with the beam steering angle selected by the physician, the needle was successfully localized in 99.8% of the images, with a mean targeting error of 0.19 mm. These results indicate that the learning-based segmentation method may allow for increased targeting accuracy and enhanced visualization during ultrasound-guided needle procedures.
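
    The orientation-finding step can be illustrated with a toy Radon-style search: after pixel-wise segmentation, the needle angle is the rotation that collapses the segmented line into a single sharp projection column. The mask below is synthetic, not ultrasound data.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy segmented image: a horizontal "needle" mask.
img = np.zeros((64, 64))
img[32, 10:54] = 1.0

# For each candidate angle, rotate the mask and take the sharpest
# column sum; the maximum occurs when the line becomes vertical.
angles = np.arange(0, 180, 1)
peaks = [rotate(img, a, reshape=False, order=1).sum(axis=0).max()
         for a in angles]
best = int(angles[int(np.argmax(peaks))])
print(best)
```

    In practice one would use an optimized Radon transform over the classifier's probability map rather than a binary mask, but the peak-finding principle is the same.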

  19. An Active Instance-based Machine Learning method for Stellar Population Studies

    CERN Document Server

    Solorio, T; Terlevich, R J; Terlevich, E; Solorio, Thamar; Fuentes, Olac; Terlevich, Roberto; Terlevich, Elena

    2005-01-01

    We have developed a method for fast and accurate determination of stellar population parameters, intended for application to high-resolution galaxy spectra. The method is based on an optimization technique that combines active learning with an instance-based machine learning algorithm. We tested the method by retrieving the star-formation history and dust content of "synthetic" galaxies with a wide range of S/N ratios. The "synthetic" galaxies were constructed using two different grids of high-resolution theoretical population synthesis models. The results of our controlled experiment show that our method can estimate the parameters of the stellar populations that make up a galaxy with good speed and accuracy, even for very low S/N input. For a spectrum with S/N=5, the typical average deviation between the input and fitted spectrum is less than 10^-5. Additional improvements are achieved using prior knowledge.

  20. Study of the machining process of nano-electrical discharge machining based on combined atomistic-continuum modeling method

    Science.gov (United States)

    Zhang, Guojun; Guo, Jianwen; Ming, Wuyi; Huang, Yu; Shao, Xinyu; Zhang, Zhen

    2014-01-01

    Nano-electrical discharge machining (nano-EDM) is an attractive way to manufacture parts with nanoscale precision; however, the incompleteness of its theory impedes the development of more advanced nano-EDM technology. In this paper, a computational simulation model combining molecular dynamics with the two-temperature model is constructed for the single-discharge process in nano-EDM, to study the machining mechanism from a thermal point of view. The melting process is analyzed: before the heated material melts, a thermal compressive stress higher than 3 GPa is induced; after melting, the compressive stress is relieved. The cooling and solidification processes are also analyzed. During cooling of the melted material, a tensile stress higher than 3 GPa arises, which leads to the disintegration of the material. The formation of the white layer is attributed to homogeneous solidification, and the resultant residual stress is analyzed.
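
    The two-temperature coupling at the heart of such models can be sketched in zero dimensions: electrons and lattice exchange energy at a rate proportional to their temperature difference. The parameter values below are illustrative, not the paper's.

```python
# 0D two-temperature model: Ce dTe/dt = -G (Te - Tl),
#                           Cl dTl/dt = +G (Te - Tl).
Ce, Cl = 2.0e4, 2.5e6        # electron / lattice heat capacities (J/m^3/K)
G = 1.0e17                   # electron-phonon coupling (W/m^3/K)
Te, Tl = 20000.0, 300.0      # initial temperatures after the discharge (K)
dt = 1e-15                   # 1 fs explicit-Euler time step

for _ in range(200_000):     # integrate 200 ps of relaxation
    dTe = -G * (Te - Tl) / Ce * dt   # electrons lose energy to the lattice
    dTl = G * (Te - Tl) / Cl * dt    # lattice heats up
    Te, Tl = Te + dTe, Tl + dTl

print(round(Te), round(Tl))          # both approach the equilibrium temperature
```

    In the full simulation this electron-lattice energy exchange drives the molecular dynamics, with heat diffusion terms added on both equations.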

  1. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. Within machine learning, classification and prediction form a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  2. Building blocks for automated elucidation of metabolites: Machine learning methods for NMR prediction

    Science.gov (United States)

    Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph

    2008-01-01

    Background Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for the identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules that predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. Results A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, the J48 decision tree, and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. Conclusion The NMR prediction methods applied in the course of this work delivered precise predictions that can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites. PMID:18817546
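
    The regression setup (atomic descriptors in, proton shift in ppm out, scored by mean absolute error) can be sketched as below. The features and target are synthetic stand-ins, not NMRShiftDB data or the paper's descriptor set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Fake per-atom descriptors and a fake shift in the 0-11 ppm range.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 6))
y = 1.0 + 8.0 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2]

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
mae = mean_absolute_error(yte, rf.predict(Xte))
print(f"MAE = {mae:.2f} ppm")
```

    With real descriptors the achievable error (0.18 ppm in the paper) depends heavily on how well the features encode each proton's chemical environment.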

  3. Building blocks for automated elucidation of metabolites: Machine learning methods for NMR prediction

    Directory of Open Access Journals (Sweden)

    Neumann Steffen

    2008-09-01

    Full Text Available Abstract Background Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for the identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules that predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. Results A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, the J48 decision tree, and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. Conclusion The NMR prediction methods applied in the course of this work delivered precise predictions that can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.

  4. A New Method for Machining Concave Profile of the Worms' Thread

    Directory of Open Access Journals (Sweden)

    Tareq Abu Shreehah

    2010-09-01

    Full Text Available Research and development of worm-gear drives has focused significantly on geometrical accuracy, load-capacity tests, wear resistance, and efficiency, and has proceeded in several directions. One approach is the development of new worm-gear sets together with the tools for manufacturing them. The present study considers worms with a concave thread profile. To avoid the technological difficulties of applying special cutting tools to machine such gear sets, a rigid incongruent generating pair consisting of a standard hob and a toroidal tool has been developed for processing the concave worm profile. The generating surface of the toroidal tool, which is essential for tool manufacturing, was modeled on the basis of the hob-toroidal tool interaction. The proposed modeling method was divided into three steps: first, the common surface of the hobbing and toroidal tools was found in terms of a one-sheet hyperboloid of revolution; then, the matrix method of coordinate transformation from the hob-axis reference frame to the toroidal tool-axis reference frame was applied; finally, an equation describing the generating surface of the toroidal tool was derived and presented. Using the proposed model and the final equation, the worm thread surface machined by this tool can be defined and verified experimentally.

  5. An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.

    Science.gov (United States)

    Liu, Yuzhe; Gopalakrishnan, Vanathi

    2017-03-01

    Many clinical research datasets have a large percentage of missing values, which directly impacts their usefulness for training high-accuracy classifiers in supervised machine learning. While missing-value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine whether increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
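
    Two of the reviewed strategies, mean imputation and k-nearest-neighbour imputation, can be illustrated on a toy matrix with missing values:

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

# Toy dataset with two missing entries.
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 9.0],
              [7.0, 8.0, 12.0]])

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)
knn_filled = KNNImputer(n_neighbors=2).fit_transform(X)

print(mean_filled[0, 2])   # column mean of the observed values
print(knn_filled[0, 2])    # estimate from the nearest complete rows
```

    Mean imputation ignores the relationships between features, while kNN imputation borrows values from rows that look similar on the observed features, which is why the two estimates generally differ.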

  6. Dynamic Allocation Method For Efficient Load Balancing In Virtual Machines For Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Bhaskar. R

    2012-10-01

    Full Text Available This paper proposes a dynamic resource allocation method for cloud computing. Cloud computing is a model for delivering information technology services in which resources are retrieved from the internet through web-based tools and applications, rather than through a direct connection to a server. Users can set up and boot the required resources and pay only for what they use. Providing a mechanism for efficient resource management and assignment will therefore be an important objective of cloud computing. In this project we propose a dynamic scheduling and consolidation mechanism that allocates resources based on the load of virtual machines (VMs) on Infrastructure as a Service (IaaS). This method enables users to dynamically add and/or delete one or more instances on the basis of the load and the conditions specified by the user. Our objective is to develop an effective load-balancing algorithm using virtual machine monitoring to maximize or minimize different performance parameters (throughput, for example) for clouds of different sizes (virtual topology) depending on the application requirement.
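
    A minimal sketch of such load-based scaling, assuming a simple threshold policy (the class name and thresholds are invented, not the paper's mechanism): instances are added when average VM load exceeds an upper bound and consolidated when it falls below a lower bound.

```python
class AutoScaler:
    """Threshold-based dynamic VM allocation (hypothetical policy)."""

    def __init__(self, upper=0.8, lower=0.3, min_vms=1):
        self.upper, self.lower, self.min_vms = upper, lower, min_vms

    def decide(self, loads):
        """Return the new VM count given current per-VM loads in [0, 1]."""
        n = len(loads)
        avg = sum(loads) / n
        if avg > self.upper:                      # overloaded: scale out
            return n + 1
        if avg < self.lower and n > self.min_vms: # underloaded: consolidate
            return n - 1
        return n

scaler = AutoScaler()
print(scaler.decide([0.9, 0.95]))   # overloaded pair -> add an instance
print(scaler.decide([0.1, 0.2]))    # underloaded pair -> remove one
```

    Real IaaS schedulers add hysteresis and cooldown periods to this decision rule so that short load spikes do not cause the VM count to oscillate.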

  7. Asset Analysis Method for the Cyber Security of Man Machine Interface System

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Kon; Kim, Hun Hee; Shin, Yeong Cheol [Korea Hydro and Nuclear Power, Daejeon (Korea, Republic of)

    2010-10-15

    As digital MMIS (Man Machine Interface Systems) are applied in Nuclear Power Plants (NPPs), cyber security is becoming more and more important. The regulatory guide (KINS/GT-N27) requires that an implementation plan for cyber security be prepared for an NPP, and recommends the following four processes: 1) an asset analysis of the MMIS, 2) a vulnerability analysis of the MMIS, 3) establishment of countermeasures, and 4) establishment of an operational guideline for cyber security. The conventional method for asset analysis is mainly performed with a table for each asset and requires considerable effort because of duplicated information. This paper presents an asset analysis method for the NPP MMIS using an object-oriented approach.

  8. Cutting heat dissipation in high-speed machining of carbon steel based on the calorimetric method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The cutting heat dissipation in the chips, workpiece, tool and surroundings during the high-speed machining of carbon steel is quantitatively investigated based on the calorimetric method. Water is used as the medium to absorb the cutting heat; a self-designed container suitable for the high-speed lathe is used to collect the chips, and two other containers are used to absorb the cutting heat dissipated in the workpiece and tool, respectively. The temperature variations of the water, chips, workpiece, tool and surroundings during the closed high-speed machining are then measured. Thus, the cutting heat dissipated in each component of the cutting system, the total cutting heat, and the heat flux are calculated. Moreover, the power corresponding to the main cutting force is obtained from the measured cutting force and the predetermined cutting speed. The accuracy of the calorimetric heat measurement is finally evaluated by comparing the total cutting heat flux with the power corresponding to the main cutting force.
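
    The energy balance behind this check is simple: each water bath absorbs Q = m·c·ΔT, and the sum should approach the mechanical work done by the main cutting force. The masses, temperature rises, force and speed below are invented example numbers, not the paper's measurements.

```python
c_water = 4186.0                      # specific heat of water, J/(kg*K)

# (water mass in kg, temperature rise in K) for each collecting container
baths = {"chips": (2.0, 6.0), "workpiece": (3.0, 1.2), "tool": (1.0, 0.8)}

Q = {part: m * c_water * dT for part, (m, dT) in baths.items()}
Q_total = sum(Q.values())             # total measured cutting heat, J

Fc, v, t = 900.0, 3.0, 25.0           # main cutting force (N), speed (m/s), cut time (s)
W_cutting = Fc * v * t                # mechanical work of the main cutting force, J

print(Q_total, W_cutting, Q_total / W_cutting)
```

    A ratio near 1 validates the calorimetric measurement; the residual reflects heat lost to the surroundings and the work of the secondary force components.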

  9. BacHbpred: Support Vector Machine Methods for the Prediction of Bacterial Hemoglobin-Like Proteins

    Directory of Open Access Journals (Sweden)

    MuthuKrishnan Selvaraj

    2016-01-01

    Full Text Available The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL-encoding gene. However, the discovery of HbL proteins has been limited to a small number of bacteria. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position-specific scoring matrices (PSSM). In addition, we introduce for the first time a new prediction method based on max-to-min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models on homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation. Prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a promising predictor for the determination of HbL-related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
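
    The simplest of the listed feature sets, amino acid composition (AC), maps a protein sequence to a 20-dimensional vector of residue frequencies. The sequence fragment below is made up; real models would be trained on curated HbL and non-HbL sequences.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def amino_acid_composition(seq):
    """20-dimensional amino acid composition (AC) feature vector."""
    counts = Counter(seq)
    return [counts[aa] / len(seq) for aa in AMINO_ACIDS]

features = amino_acid_composition("MKVLGAALLWT")
print(len(features), round(sum(features), 6))   # 20 fractions summing to 1
```

    Dipeptide composition extends the same idea to the 400 ordered residue pairs, capturing local order information that AC discards.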

  10. High Accuracy On-line Measurement Method of Motion Error on Machine Tools Straight-going Parts

    Institute of Scientific and Technical Information of China (English)

    苏恒; 洪迈生; 魏元雷; 李自军

    2003-01-01

    Harmonic suppression and the non-periodic, non-closing nature of the straightness profile error, which cause harmonic component distortion in the measurement result, are analyzed. As a countermeasure, a novel accurate two-probe method in the time domain is put forward to measure the straight-going component of motion error in machine tools; it is based on the frequency-domain three-point method after symmetrical continuation of the probes' primitive signals. Both the straight-going component of the machine-tool motion error and the profile error of a workpiece manufactured on the machine can be measured at the same time. This information can be used to diagnose the fault origin of machine tools. The analysis is confirmed by experiment.

  11. Finger milling-cutter CNC generating hypoid pinion tooth surfaces based on modified-roll method and machining simulation

    Science.gov (United States)

    Li, Genggeng; Deng, Xiaozhong; Wei, Bingyang; Lei, Baozhen

    2011-05-01

    The two coordinate systems of a cradle-type hypoid generator and of a free-form CNC machine tool applying a disc milling-cutter to generate hypoid pinion tooth surfaces based on the modified-roll method were set up, and the principle and method of transforming machine-tool settings between the two coordinate systems were studied. A finger milling-cutter was mounted on the imagined disc milling-cutter, with its motion controlled directly by the CNC axes to replace the effective cutting motion of the disc milling-cutter blades. Finger milling-cutter generation accomplished by ordered circular interpolation was determined, and the interpolation centers, starting points and ending points were worked out. Finally, a hypoid pinion was virtually machined using the CNC machining simulation software VERICUT.

  12. Optimization Of MRR Of Stainless Steel 403 In Abrasive Water Jet Machining Using ANOVA And Taguchi Method

    Directory of Open Access Journals (Sweden)

    Ramprasad,

    2015-05-01

    Full Text Available Stainless steel 403 is a high-alloy steel with good corrosion resistance and is a very hard material. Abrasive water jet is an effective method for machining, cutting and drilling of stainless steel 403. In this paper we optimize the metal removal rate (MRR) of stainless steel 403 in abrasive water jet machining. The MRR is optimized using three parameters: water pressure, abrasive flow rate and stand-off distance. The nine experimental runs were designed on the L9 orthogonal array of the Taguchi method, which was also used to analyse the results.

  13. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can reduce the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.

  14. Logic Learning Machine and standard supervised methods for Hodgkin's lymphoma prognosis using gene expression data and clinical variables.

    Science.gov (United States)

    Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco

    2016-06-27

    This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.

  15. Method of Automatic Ontology Mapping through Machine Learning and Logic Mining

    Institute of Scientific and Technical Information of China (English)

    王英林

    2004-01-01

    Ontology mapping is the bottleneck of handling conflicts among heterogeneous ontologies and of implementing reconfiguration or interoperability of legacy systems. We propose an ontology mapping method using machine learning, type constraints and logic mining techniques. This method is able to find concept correspondences through instances, with the result optimized by using an error function; it is able to find attribute correspondences between two equivalent concepts, with the mapping accuracy enhanced by combining instance learning, type constraints and the logic relations that are embedded in instances; moreover, it solves the most common kind of categorization conflict. We then propose a merging algorithm to generate the shared ontology and a reconfigurable architecture for interoperation based on multi-agents. The legacy systems are encapsulated as information agents to participate in the integration system. Finally, we give a simplified case study.

  16. A machine learning approach to the potential-field method for implicit modeling of geological structures

    Science.gov (United States)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates which side of a geological boundary a given point belongs to, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
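
The multi-class recast of the potential-field interpolation can be illustrated with a toy stand-in: predict the geological unit at a query location from labelled observation points. The k-nearest-neighbour rule and the sample points below are assumptions for illustration, not the paper's maximum-likelihood classifier.

```python
# Rock-unit prediction phrased as plain multi-class classification
# over 3D coordinates (a k-NN stand-in; data invented).
from collections import Counter

def knn_unit(points, labels, query, k=3):
    """Majority label among the k nearest labelled points
    (squared Euclidean distance in model coordinates)."""
    order = sorted(range(len(points)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(points[i], query)))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 2), (6, 5, 2), (5, 6, 2)]
units = ["shale", "shale", "shale", "granite", "granite", "granite"]
unit = knn_unit(pts, units, (5.5, 5.5, 2.0))
print(unit)
```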

  17. An Evaluation of Machine Learning Methods to Detect Malicious SCADA Communications

    Energy Technology Data Exchange (ETDEWEB)

    Beaver, Justin M [ORNL; Borges, Raymond Charles [ORNL; Buckner, Mark A [ORNL

    2013-01-01

    Critical infrastructure Supervisory Control and Data Acquisition (SCADA) systems were designed to operate on closed, proprietary networks where a malicious insider posed the greatest threat potential. The centralization of control and the movement towards open systems and standards has improved the efficiency of industrial control, but has also exposed legacy SCADA systems to security threats that they were not designed to mitigate. This work explores the viability of machine learning methods in detecting the new threat scenarios of command and data injection. Similar to network intrusion detection systems in the cyber security domain, the command and control communications in a critical infrastructure setting are monitored, and vetted against examples of benign and malicious command traffic, in order to identify potential attack events. Multiple learning methods are evaluated using a dataset of Remote Terminal Unit communications, which included both normal operations and instances of command and data injection attack scenarios.

  18. Flame image recognition of alumina rotary kiln by artificial neural network and support vector machine methods

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hong-liang; ZOU Zhong; LI Jie; CHEN Xiang-tao

    2008-01-01

    Based on the Fourier transform, a new shape descriptor was proposed to represent the flame image. By employing the shape descriptor as the input, flame image recognition was studied by the methods of the artificial neural network (ANN) and the support vector machine (SVM), respectively. Recognition experiments were carried out using flame image data sampled from an alumina rotary kiln to evaluate their effectiveness. The results show that the two recognition methods can achieve good results, which verifies the effectiveness of the shape descriptor. The highest recognition rate is 88.83% for the SVM and 87.38% for the ANN, which means that the performance of the SVM is better than that of the ANN.
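
A minimal sketch of a Fourier-based shape descriptor of the kind the abstract builds on: the boundary is treated as a sequence of complex numbers, and normalized DFT magnitudes give translation-, scale- and rotation-invariant features. The square contour and the descriptor length are illustrative assumptions, not the authors' exact formulation.

```python
# Fourier shape descriptor sketch for a closed boundary.
import cmath

def fourier_descriptor(boundary, n_coeffs=8):
    """Normalized DFT magnitudes of the boundary seen as complex points.
    Dropping F0 removes translation; dividing by |F1| removes scale;
    taking magnitudes removes rotation and starting-point phase."""
    z = [complex(x, y) for x, y in boundary]
    n = len(z)
    mags = []
    for k in range(n_coeffs + 1):
        fk = sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                 for t in range(n)) / n
        mags.append(abs(fk))
    return [m / mags[1] for m in mags[2:]]

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
fd = fourier_descriptor(square)
print(len(fd))
```

Because of the normalization, a rescaled copy of the same contour yields the same descriptor, which is what makes it usable as a classifier input.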

  19. Predicting China’s SME Credit Risk in Supply Chain Finance Based on Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    You Zhu

    2016-05-01

    Full Text Available We propose a new integrated ensemble machine learning (ML) method, i.e., RS-RAB (Random Subspace-Real AdaBoost), for predicting the credit risk of China’s small and medium-sized enterprises (SMEs) in supply chain finance (SCF). The sample of empirical analysis is comprised of two data sets on a quarterly basis during the period of 2012–2013: one includes 48 listed SMEs obtained from the SME Board of Shenzhen Stock Exchange; the other consists of three listed core enterprises (CEs) and six listed CEs that are respectively collected from the Main Board of Shenzhen Stock Exchange and Shanghai Stock Exchange. The experimental results show that RS-RAB possesses an outstanding prediction performance and is very suitable for forecasting the credit risk of China’s SMEs in SCF by comparison with the other three ML methods.
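
The Random Subspace half of RS-RAB is easy to sketch: each base learner is trained on a random subset of the features, and the ensemble takes a majority vote. Here a decision stump stands in for the Real AdaBoost base learner, and the toy credit data are invented for illustration.

```python
# Random Subspace ensemble sketch with a decision-stump base learner.
import random

def train_stump(X, y, feats):
    """Best single-feature threshold rule (either direction) over `feats`."""
    best = None
    for f in feats:
        for t in {x[f] for x in X}:
            for pol in (True, False):
                pred = [int((x[f] >= t) == pol) for x in X]
                err = sum(p != yy for p, yy in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    _, f, t, pol = best
    return lambda x: int((x[f] >= t) == pol)

def random_subspace(X, y, n_learners=9, subspace=2, seed=0):
    rng = random.Random(seed)
    d = len(X[0])
    models = [train_stump(X, y, rng.sample(range(d), subspace))
              for _ in range(n_learners)]
    return lambda x: int(sum(m(x) for m in models) > n_learners / 2)

# toy SME features [liquidity, leverage, profit margin]; 1 = default
X = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1], [0.8, 0.2, 0.6], [0.9, 0.1, 0.7]]
y = [1, 1, 0, 0]
clf = random_subspace(X, y)
preds = [clf(x) for x in X]
print(preds)
```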

  20. An Illustration of New Methods in Machine Condition Monitoring, Part II: Adaptive outlier detection

    Science.gov (United States)

    Antoniadou, I.; Worden, K.; Marchesiello, S.; Mba, C.; Garibaldi, L.

    2017-05-01

    There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The second paper in the pair will deal with novelty detection. Although there has been considerable progress in the use of outlier analysis for novelty detection, most of the papers produced so far have suffered from the fact that simple algorithms break down if multiple outliers are present or if damage is already present in a training set. The objective of the current paper is to illustrate the use of phase-space thresholding; an algorithm which has the ability to detect multiple outliers inclusively in a data set.
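
The masking problem mentioned above, where simple discordancy tests break down with multiple outliers because the outliers inflate the sample statistics themselves, can be sidestepped with robust location and scale estimates. The median/MAD rule below is a simple stand-in for illustration, not the paper's phase-space thresholding algorithm.

```python
# Robust outlier flagging: the median and MAD are not dragged
# towards the outliers the way the mean and std are.
import statistics

def robust_outliers(data, threshold=3.5):
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data) or 1e-12
    # 0.6745 makes the MAD comparable with a standard deviation
    return [x for x in data if abs(0.6745 * (x - med) / mad) > threshold]

# condition-monitoring-like readings with two simultaneous outliers
data = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 25.0, 26.0]
print(robust_outliers(data))
```

A mean/std z-score on the same data would be inflated by the two outliers and could easily miss one of them; the robust version flags both.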

  1. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, for predicting environmental hazards or for forecasting fossil resources. This paper proposes a structural complexity index which can be used to help in defining the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where addition efforts are required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the difficulty degree associated to the prediction of the geological objects distribution calculated based on partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as number of faults, number of part in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity in a 3D model can be approximated by the quantity of partial data necessaries to simulated at a given precision the actual 3D model without error using machine learning algorithms.

  2. Support vector machine as an alternative method for lithology classification of crystalline rocks

    Science.gov (United States)

    Deng, Chengxiang; Pan, Heping; Fang, Sinan; Amara Konaté, Ahmed; Qin, Ruidong

    2017-03-01

    With the expansion of machine learning algorithms, automatic lithology classification that uses well logging data is becoming significant in formation evaluation and reservoir characterization. In fact, the complicated composition and structural variations of metamorphic rocks result in more nonlinear features in well logging data and raise the requirements on algorithms. Herein, the application of the support vector machine (SVM) to classifying crystalline rocks from Chinese Continental Scientific Drilling Main Hole (CCSD-MH) data is reported. We found that the SVM performs poorly on the lithology classification of crystalline rocks when training samples are imbalanced. In practice, training samples are generally limited and imbalanced, since cores cannot be recovered in balanced proportions or with complete coverage. In this paper, we introduce the synthetic minority over-sampling technique (SMOTE) and Borderline-SMOTE to deal with imbalanced data. After experiments generating different quantities of training samples by SMOTE and Borderline-SMOTE, the most suitable classifier was selected to overcome the disadvantage of the SVM. Then, the popular supervised classifier back-propagation neural network (BPNN), which has proved competent for lithology classification of crystalline rocks in previous studies, was compared to evaluate the performance of the SVM. Results show that Borderline-SMOTE can improve the SVM with substantially increased accuracy, even for minority classes, in a reasonable manner, while the SVM outperforms BPNN in lithology prediction and CCSD-MH data generalization. We demonstrate the potential of the SVM as an alternative to current methods for lithology identification of crystalline rocks.
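
The core interpolation step of SMOTE, used above to rebalance the crystalline-rock classes, can be sketched in a few lines: each synthetic sample lies on the segment between a minority sample and one of its nearest minority neighbours. The toy well-log values are invented for illustration.

```python
# Minimal SMOTE sketch: interpolate between minority neighbours.
import random

def smote(minority, n_new, k=2, seed=0):
    """Each synthetic sample interpolates between a minority sample
    and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        a = rng.choice(minority)
        nbrs = sorted((b for b in minority if b is not a),
                      key=lambda b: sum((u - v) ** 2 for u, v in zip(a, b)))[:k]
        b = rng.choice(nbrs)
        lam = rng.random()
        synth.append([u + lam * (v - u) for u, v in zip(a, b)])
    return synth

# invented well-log vectors [sonic, density] for a sparse rock class
logs = [[110.0, 2.6], [112.0, 2.7], [108.0, 2.5]]
synth = smote(logs, 4)
print(len(synth))
```

Because every synthetic point is a convex combination of two real minority samples, the new points always stay inside the minority class's bounding box.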

  3. Concept of automatic programming of NC machine for metal plate cutting by genetic algorithm method

    Directory of Open Access Journals (Sweden)

    B. Vaupotic

    2005-12-01

    Full Text Available Purpose: In this paper the concept of automatic programming of NC machines for metal plate cutting by the genetic algorithm method is presented.Design/methodology/approach: The paper was limited to automatic creation of NC programs for two-dimensional cutting of material by means of adaptive heuristic search algorithms.Findings: Automatic creation of NC programs in laser cutting of materials combines the CAD concepts, the recognition of features and the creation and optimization of NC programs. The proposed intelligent system is capable of automatically recognizing the nesting of products in the layout and of determining the incisions and sequences of cuts forming the laid-out products. The position of each incision is determined at the relevant place on the cut. The system is capable of finding the shortest path between individual cuts and of recording the NC program.Research limitations/implications: It would be appropriate to orient future research towards conceiving an improved system for three-dimensional cutting with optional determination of the positions of incisions, with the capability to sense collisions and with optimization of the speed and acceleration during cutting.Practical implications: The proposed system assures automatic preparation of the NC program without an NC programmer.Originality/value: The proposed concept shows a high degree of universality, efficiency and reliability and it can be simply adapted to other NC machines.
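
The "shortest path between individual cuts" step is a travelling-salesman-style ordering problem, which the paper's genetic algorithm searches globally. A greedy nearest-neighbour ordering, shown below with invented cut positions, is a much simpler baseline that illustrates the objective.

```python
# Greedy nearest-neighbour ordering of cut start points
# (a baseline, not the paper's genetic algorithm).
def order_cuts(cuts):
    path, rest = [cuts[0]], list(cuts[1:])
    while rest:
        last = path[-1]
        nxt = min(rest,
                  key=lambda c: (c[0] - last[0]) ** 2 + (c[1] - last[1]) ** 2)
        path.append(nxt)
        rest.remove(nxt)
    return path

cuts = [(0, 0), (5, 5), (1, 0), (5, 6)]  # invented cut positions (mm)
path = order_cuts(cuts)
print(path)
```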

  4. A Machine Learning Nowcasting Method based on Real-time Reanalysis Data

    CERN Document Server

    Han, Lei; Zhang, Wei; Xiu, Yuanyuan; Feng, Hailei; Lin, Yinjing

    2016-01-01

    Despite marked progress over the past several decades, convective storm nowcasting remains a challenge because most nowcasting systems are based on linear extrapolation of radar reflectivity without much consideration for other meteorological fields. The variational Doppler radar analysis system (VDRAS) is an advanced convective-scale analysis system capable of providing analysis of 3-D wind, temperature, and humidity by assimilating Doppler radar observations. Although potentially useful, it is still an open question as to how to use these fields to improve nowcasting. In this study, we present results from our first attempt at developing a Support Vector Machine (SVM) Box-based nOWcasting (SBOW) method under the machine learning framework using VDRAS analysis data. The key design points of SBOW are as follows: 1) The study domain is divided into many position-fixed small boxes and the nowcasting problem is transformed into one question, i.e., will a radar echo > 35 dBZ appear in a box in 30 minutes? 2) Box-...
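
SBOW's first design point, reducing nowcasting to a per-box yes/no question, amounts to a simple label-construction step. The sketch below derives the binary labels from a future reflectivity grid; the 35 dBZ threshold follows the text, while the grid values and box size are made up.

```python
# One binary target per fixed box: "does a radar echo > 35 dBZ
# appear in this box 30 minutes from now?" (grid values in dBZ)
def box_labels(future_grid, box=2, thresh=35):
    n = len(future_grid)
    labels = {}
    for i in range(0, n, box):
        for j in range(0, n, box):
            cells = [future_grid[a][b]
                     for a in range(i, min(i + box, n))
                     for b in range(j, min(j + box, n))]
            labels[(i, j)] = int(max(cells) > thresh)
    return labels

future = [[10, 12, 38, 40],
          [11, 13, 36, 41],
          [ 9, 10, 12, 14],
          [ 8,  9, 11, 13]]
labels = box_labels(future)
print(labels)
```

In the full method, each box's label would be paired with VDRAS-derived features (wind, temperature, humidity) to train the per-box SVM.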

  5. A Novel Gravity Compensation Method for High Precision Free-INS Based on "Extreme Learning Machine".

    Science.gov (United States)

    Zhou, Xiao; Yang, Gongliu; Cai, Qingzhong; Wang, Jing

    2016-11-29

    In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), the gravity disturbance has become a major error source influencing the navigation accuracy of inertial navigation systems (INS), especially for high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. Meanwhile, this paper proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on measured gravity data on the geoid, upward-continues the gravity disturbance to the height of the INS, and then feeds the obtained gravity disturbance into the error equations of the INS to restrain the INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method can be improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions: Test 1 in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve the positioning accuracy. During the 2-h field experiments, the positioning accuracy was improved by 13% and 29% in Tests 1 and 2, respectively, when the navigation scheme was compensated by the proposed gravity compensation method.
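
An extreme learning machine of the kind used for the gravity-disturbance estimation above keeps its random hidden layer fixed and fits only the output weights in a single least-squares step, which is what makes it fast. The smooth toy surface standing in for measured gravity data, and all parameter choices, are assumptions.

```python
# ELM sketch: random fixed hidden layer, least-squares output weights.
import numpy as np

def elm_fit(X, y, hidden=20, seed=1):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], hidden))  # fixed input weights
    b = rng.uniform(-1, 1, hidden)                # fixed hidden biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one least-squares solve
    return lambda Q: np.tanh(Q @ W + b) @ beta

# smooth toy "gravity disturbance" surface over a (lat, lon)-like grid
g = np.linspace(0.0, 1.0, 12)
X = np.array([(a, c) for a in g for c in g])
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1])
f = elm_fit(X, y)
err = float(np.max(np.abs(f(X) - y)))
print(round(err, 4))
```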

  6. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    Energy Technology Data Exchange (ETDEWEB)

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  7. A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits

    Science.gov (United States)

    Moradi, Behzad; Mirzaei, Abdolreza

    2016-11-01

    A new simulation-based automated CMOS analog circuit design method which applies a multi-objective non-Darwinian-type evolutionary algorithm based on the Learnable Evolution Model (LEM) is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that makes a distinction between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and lead to large steps in the evolution of the individuals. The learning phase shortens the evolution process and makes a remarkable reduction in the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in this proposed design method. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.

  8. Thutmose - Investigation of Machine Learning-Based Intrusion Detection Systems

    Science.gov (United States)

    2016-06-01


  9. Counter-forensics in machine learning based forgery detection

    Science.gov (United States)

    Marra, Francesco; Poggi, Giovanni; Roli, Fabio; Sansone, Carlo; Verdoliva, Luisa

    2015-03-01

    With the powerful image editing tools available today, it is very easy to create forgeries without leaving visible traces. Boundaries between host image and forgery can be concealed, illumination changed, and so on, in a naive form of counter-forensics. For this reason, most modern techniques for forgery detection rely on the statistical distribution of micro-patterns, enhanced through high-level filtering, and summarized in some image descriptor used for the final classification. In this work we propose a strategy to modify the forged image at the level of micro-patterns to fool a state-of-the-art forgery detector. Then, we investigate the effectiveness of the proposed strategy as a function of the level of knowledge of the forgery detection algorithm. Experiments show this approach to be quite effective, especially if good prior knowledge of the detector is available.

  10. Predicting metabolic syndrome using decision tree and support vector machine methods

    Science.gov (United States)

    Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh

    2016-01-01

    BACKGROUND Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome. METHODS This study aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This is a practical study in which data from 2107 participants of the Isfahan Cohort Study were utilized. Subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria, and the two methods of decision tree and SVM were selected to predict it. The criteria of sensitivity, specificity and accuracy were used for validation. RESULTS SVM and decision tree methods were examined according to the criteria of sensitivity, specificity and accuracy. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) for the SVM (decision tree) method. CONCLUSION The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The results of the decision tree method show that the TG is the most important feature in
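
The sensitivity, specificity and accuracy figures quoted above come directly from a 2x2 confusion matrix, so the computation is worth stating concretely. The counts below are invented for illustration and merely produce values of the same order as the study's.

```python
# Validation metrics from a 2x2 confusion matrix.
def binary_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# invented counts, roughly the order of the study's SVM results
sens, spec, acc = binary_metrics(tp=77, fn=23, fp=26, tn=74)
print(round(sens, 3), round(spec, 3), round(acc, 3))
```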

  11. A control system for and a method of controlling a superconductive rotating electrical machine

    DEFF Research Database (Denmark)

    2014-01-01

    This invention relates to a method of controlling and a control system (100) for a superconductive rotating electric machine (200) comprising at least one superconductive winding (102; 103), where the control system (100) is adapted to control a power unit (101) supplying during use the at least one superconductive winding (102; 103) with power or receiving during use power from the at least one superconductive winding (102; 103), wherein the control system (100) is further adapted to, for at least one superconductive winding (102; 103), dynamically receive one or more representations of one ... superconductive winding (102; 103) by the power unit (101), where the one or more electrical current values is/are derived taking into account the received one or more actual values (110, 111). In this way, greater flexibility and more precise control of the performance of the superconducting rotating electrical...

  12. Water Quantity Prediction Using Least Squares Support Vector Machines (LS-SVM) Method

    Directory of Open Access Journals (Sweden)

    Nian Zhang

    2014-08-01

    Full Text Available Reliable estimation of stream flows in highly urbanized areas and the associated receiving waters is very important for water resources analysis and design. We used the least squares support vector machine (LS-SVM) based algorithm to forecast future streamflow discharge. A Gaussian Radial Basis Function (RBF) kernel framework was built on the data set to optimize the tuning parameters and to obtain the moderated output. The training process of LS-SVM was designed to select both kernel parameters and regularization constants. The USGS real-time water data were used as the time series input; 50% of the data were used for training and 50% for testing. The experimental results showed that the LS-SVM algorithm is a reliable and efficient method for streamflow prediction, which has an important impact on the water resource management field.
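
What distinguishes LS-SVM from the standard SVM is that training reduces to one linear (KKT) system instead of a quadratic program. A minimal regression sketch with a Gaussian RBF kernel follows; the made-up discharge series and the gamma/sigma settings are illustrative assumptions.

```python
# LS-SVM regression sketch: training is a single linear solve.
import numpy as np

def lssvm_fit(x, y, gamma=100.0, sigma=1.0):
    n = len(x)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))
    # KKT system: [[K + I/gamma, 1], [1^T, 0]] [alpha; b] = [y; 0]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K + np.eye(n) / gamma   # kernel matrix plus ridge term
    A[:n, n] = 1.0                      # bias column
    A[n, :n] = 1.0                      # equality-constraint row
    sol = np.linalg.solve(A, np.append(y, 0.0))
    alpha, b = sol[:n], sol[n]
    return lambda t: np.exp(-((t - x) ** 2) / (2 * sigma ** 2)) @ alpha + b

t = np.arange(10.0)                # hourly time steps
flow = 50 + 10 * np.sin(0.5 * t)   # made-up discharge record
model = lssvm_fit(t, flow)
err = float(max(abs(model(ti) - fi) for ti, fi in zip(t, flow)))
print(round(err, 3))
```

The regularization constant `gamma` trades training fit against smoothness, which is the parameter the paper's tuning procedure selects.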

  13. Prediction of Student Dropout in E-Learning Program Through the Use of Machine Learning Method

    Directory of Open Access Journals (Sweden)

    Mingjie Tan

    2015-02-01

    Full Text Available The high rate of dropout is a serious problem in E-learning programs, and it has received extensive concern from education administrators and researchers. Predicting the potential dropout students is a workable solution to prevent dropout. Based on the analysis of related literature, this study selected students' personal characteristics and academic performance as input attributes. Prediction models were developed using Artificial Neural Network (ANN), Decision Tree (DT) and Bayesian Networks (BNs). A large sample of 62375 students was utilized in the procedures of model training and testing. The results of each model were presented in a confusion matrix, and analyzed by calculating the rates of accuracy, precision, recall, and F-measure. The results suggested that all three machine learning methods were effective for student dropout prediction, and DT presented a better performance. Finally, some suggestions were made for future research.

  14. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Felix F. Gonzalez-Navarro

    2016-10-01

    Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
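
The optimization half of the study pairs the fitted response model with a stochastic search such as simulated annealing. The sketch below anneals over a made-up smooth response surface standing in for the fitted GOB model; the surface, its peak near (37 °C, pH 7), and all schedule parameters are assumptions.

```python
# Simulated annealing over an assumed biosensor response surface.
import math, random

def response(temp, ph):
    """Assumed surrogate for the fitted GOB model, peaking near (37, 7)."""
    return math.exp(-((temp - 37) ** 2 / 200 + (ph - 7) ** 2 / 8))

def anneal(f, x0, steps=3000, t0=0.5, seed=0):
    rng = random.Random(seed)
    cur, fcur = x0, f(*x0)
    best, fbest = cur, fcur
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-9  # linear cooling schedule
        cand = (cur[0] + rng.gauss(0, 1.0), cur[1] + rng.gauss(0, 0.3))
        fc = f(*cand)
        # always accept improvements; accept worsening moves with
        # probability exp(delta / t), which shrinks as t cools
        if fc > fcur or rng.random() < math.exp((fc - fcur) / t):
            cur, fcur = cand, fc
        if fcur > fbest:
            best, fbest = cur, fcur
    return best

temp, ph = anneal(response, (25.0, 5.0))
print(round(response(temp, ph), 2))
```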

  15. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Science.gov (United States)

    Gonzalez-Navarro, Felix F.; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A.; Flores-Rios, Brenda L.; Ibarra-Esquer, Jorge E.

    2016-01-01

    Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding of their behavior is still the subject of ongoing research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization. PMID:27792165

  16. ANNz2 - Photometric redshift and probability density function estimation using machine learning methods

    CERN Document Server

    Sadeh, Iftach; Lahav, Ofer

    2015-01-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. Furthermore, estimators are incorporated to mitigate possible problems of spectroscopic training samples that are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...
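
    One of the estimators mentioned, k-nearest neighbours, lends itself to a compact illustration: a galaxy's photo-z point estimate, and an empirical sample standing in for its PDF, can both be read off the spectroscopic redshifts of its nearest neighbours in colour space. The synthetic training set below is invented for illustration and is not ANNz2's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic training set: 4 photometric colours with a linear trend in z,
    # standing in for a spectroscopic training sample.
    z_train = rng.uniform(0.0, 2.0, size=2000)
    colours_train = z_train[:, None] * np.array([0.8, 0.5, 0.3, 0.2]) \
        + rng.normal(0, 0.05, size=(2000, 4))

    def knn_photoz(colours, k=50):
        """Point estimate plus neighbour redshifts (an empirical PDF sample)."""
        d = np.linalg.norm(colours_train - colours, axis=1)
        nbrs = np.argsort(d)[:k]
        zs = z_train[nbrs]
        return zs.mean(), zs

    true_z = 1.0
    query = true_z * np.array([0.8, 0.5, 0.3, 0.2])
    z_hat, z_pdf_sample = knn_photoz(query)
    ```

    Histogramming `z_pdf_sample` gives a crude per-object PDF; the spread of the neighbour redshifts is one simple proxy for the photo-z uncertainty.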

  17. New machining and testing method of large angle infrared wedge mirror parts

    Science.gov (United States)

    Su, Ying; Guo, Rui; Zhang, Fumei; Zhang, Zheng; Liu, Xuanmin; Zengqi, Xu; Li, Wenting; Zhang, Feng

    2016-10-01

    Large-angle wedge parts are widely used in optical systems that must achieve a wide scanning range. Because these parts combine a large thickness difference between the two ends with high material density, it is hard to ensure that the wedge angle reaches arc-second accuracy in optical processing. Moreover, wedge mirror angles are generally measured by a contact comparison method, which easily damages the surface. In view of these two practical problems, and based on theoretical analysis, three key measures were taken: accurate positioning of the central position of the large-angle wedge part, control of the angle accuracy during precision machining of the wedge mirror, and fast, non-destructive, laser-assisted absolute measurement of the large wedge angle. As a result, the qualified rate of parts was increased to 100%, and a feasible, controllable and efficient process route for large-angle infrared wedge parts was established.

  18. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images.

    Science.gov (United States)

    Zhang, Jianfeng; Xu, Jiatuo; Hu, Xiaojuan; Chen, Qingguang; Tu, Liping; Huang, Jingbin; Cui, Ji

    2017-01-01

    Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue images as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of the tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate rose from about 72% to 83.06%. Finally, we compared with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis.
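
    The PCA step used to compress the tongue features before SVM training can be sketched with plain NumPy; the random feature matrix below is a stand-in for the extracted colour/texture features, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(827, 60))   # 827 subjects x 60 tongue features (synthetic)

    # Centre the features, then obtain principal directions via SVD.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)          # per-component variance fraction

    # Keep enough components to retain 90% of the variance.
    k = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
    X_reduced = Xc @ Vt[:k].T                # input to the downstream SVM
    ```

    As the abstract notes, such a reduction can cut SVM training time substantially without hurting accuracy or AUC, because the discarded components carry little variance.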

  19. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images

    Science.gov (United States)

    Hu, Xiaojuan; Chen, Qingguang; Tu, Liping; Huang, Jingbin; Cui, Ji

    2017-01-01

    Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue images as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of the tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate rose from about 72% to 83.06%. Finally, we compared with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis. PMID:28133611

  20. Time and spectral analysis methods with machine learning for the authentication of digital audio recordings.

    Science.gov (United States)

    Korycki, Rafal

    2013-07-10

    This paper addresses the problem of tampering detection and discusses new methods that can be used for authenticity analysis of digital audio recordings. At present, the only method for digital audio files commonly approved by forensic experts is the ENF criterion, which analyzes fluctuations of the mains frequency induced in the electronic circuits of recording devices. Its effectiveness is therefore strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. This article presents the existing methods of time and spectral analysis along with the author's proposed modifications, which involve spectral analysis of the residual signal enhanced by machine learning algorithms. The effectiveness of the tampering detection methods described in this paper is tested on a predefined music database, and the results are compared graphically using ROC-like curves. Furthermore, time-frequency plots enhanced by the reassignment method are presented for visual inspection of modified recordings. This solution enables analysis of minimal changes in background sounds, which may indicate tampering.
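
    The ENF criterion can be illustrated by tracking the dominant frequency near the nominal mains frequency frame by frame; an edit in the recording shows up as a discontinuity in the track. The synthetic 50 Hz hum with a slow drift below is invented for illustration and is far simpler than forensic-grade ENF extraction:

    ```python
    import numpy as np

    fs = 1000                         # sample rate (Hz), synthetic example
    t = np.arange(0, 10, 1 / fs)
    drift = 0.05 * np.sin(2 * np.pi * 0.1 * t)   # slow mains-frequency wander
    # Phase is the running integral of instantaneous frequency 50 + drift.
    hum = np.sin(2 * np.pi * np.cumsum(50 + drift) / fs)

    frame = 2000                      # 2-second analysis frames
    enf = []
    for i in range(0, len(hum) - frame, frame):
        seg = hum[i:i + frame] * np.hanning(frame)
        spec = np.abs(np.fft.rfft(seg, n=8 * frame))     # zero-pad for resolution
        freqs = np.fft.rfftfreq(8 * frame, 1 / fs)
        band = (freqs > 49) & (freqs < 51)               # search near nominal 50 Hz
        enf.append(freqs[band][np.argmax(spec[band])])

    enf = np.array(enf)               # per-frame ENF track; jumps suggest edits
    ```

    Comparing such a track against a reference log of the actual grid frequency is what lets forensic examiners date a recording or spot splices.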

  1. Time-frequency atoms-driven support vector machine method for bearings incipient fault diagnosis

    Science.gov (United States)

    Liu, Ruonan; Yang, Boyuan; Zhang, Xiaoli; Wang, Shibin; Chen, Xuefeng

    2016-06-01

    Bearings play an essential role in the performance of mechanical systems, and fault diagnosis of a mechanical system is inseparable from diagnosis of its bearings. However, it is a challenge to detect weak faults in complex, non-stationary vibration signals with a large amount of noise, especially at the early stage. To improve noise robustness and detect incipient faults, a novel fault detection method based on a short-time matching method and Support Vector Machine (SVM) is proposed. In this paper, the fault mechanism of roller bearings is discussed, and an impact time-frequency dictionary is constructed targeting the multi-component characteristics and fault features of roller bearing fault vibration signals. Then, a short-time matching method is described, and simulation results show excellent feature extraction at extremely low signal-to-noise ratio (SNR). After extracting the most relevant atoms as features, an SVM is trained for fault recognition. Finally, practical bearing experiments indicate that the proposed method is more effective and efficient than traditional methods in extracting weak oscillatory impact signatures and diagnosing incipient faults.
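
    The dictionary-matching idea can be conveyed with a one-step matching pursuit: correlate the signal against a small dictionary of damped-sinusoid impact atoms and keep the best-matching atom as a feature for the classifier. The dictionary parameters and noise level below are invented for illustration, not the paper's construction:

    ```python
    import numpy as np

    fs, n = 10_000, 1024
    t = np.arange(n) / fs

    def impact_atom(f, damp):
        """Unit-norm damped sinusoid modelling a bearing impact response."""
        a = np.exp(-damp * t) * np.sin(2 * np.pi * f * t)
        return a / np.linalg.norm(a)

    # Small hypothetical dictionary over resonance frequency and damping.
    dictionary = [(f, d) for f in (1000, 2000, 3000) for d in (200, 500, 800)]

    # Noisy test signal containing a weak 2 kHz impact.
    rng = np.random.default_rng(3)
    signal = 0.5 * impact_atom(2000, 500) + 0.05 * rng.normal(size=n)

    # One matching-pursuit step: the atom with the largest correlation wins.
    scores = {p: abs(np.dot(impact_atom(*p), signal)) for p in dictionary}
    best = max(scores, key=scores.get)    # most relevant atom -> SVM feature
    ```

    In the full method, several of the most relevant atoms (and their amplitudes) would form the feature vector fed to the SVM.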

  2. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images

    Directory of Open Access Journals (Sweden)

    Jianfeng Zhang

    2017-01-01

    Full Text Available Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue images as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of the tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate rose from about 72% to 83.06%. Finally, we compared with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis.

  3. Development of a Moodle Course for Schoolchildren's Table Tennis Learning Based on Competence Motivation Theory: Its Effectiveness in Comparison to Traditional Training Method

    Science.gov (United States)

    Zou, Junhua; Liu, Qingtang; Yang, Zongkai

    2012-01-01

    Based on Competence Motivation Theory (CMT), a Moodle course for schoolchildren's table tennis learning was developed (The URL is http://www.bssepp.com, and this course allows guest access). The effects of the course on students' knowledge, perceived competence and interest were evaluated through quantitative methods. The sample of the study…

  5. A distortion-correction method for workshop machine vision measurement system

    Science.gov (United States)

    Chen, Ruwen; Huang, Ren; Zhang, Zhisheng; Shi, Jinfei; Chen, Zixin

    2008-12-01

    The application of machine vision measurement systems is developing rapidly in industry owing to their non-contact, high-speed, and automated operation. However, the images contain nonlinear distortions that are critical to measuring precision, since object dimensions are determined from image properties. This problem has attracted wide interest, and physical-model-based correction methods have been proposed and are widely applied in engineering. These methods, however, are difficult to apply on the workshop floor, because the images suffer non-repetitive interference from coupled dynamic factors, which makes the imaging in effect a stochastic process. A new nonlinear distortion correction method based on a VNAR model (Volterra-series-based nonlinear auto-regressive time series model) is proposed to describe the distorted image edge series. The model parameter vectors are estimated from the data. Distortion-free edges are obtained after model filtering, and the image dimensions are then converted to measured dimensions. Experimental results show that the method is reliable and applicable to engineering.

  6. A machine learning nowcasting method based on real-time reanalysis data

    Science.gov (United States)

    Han, Lei; Sun, Juanzhen; Zhang, Wei; Xiu, Yuanyuan; Feng, Hailei; Lin, Yinjing

    2017-04-01

    Despite marked progress over the past several decades, convective storm nowcasting remains a challenge because most nowcasting systems are based on linear extrapolation of radar reflectivity without much consideration of other meteorological fields. The variational Doppler radar analysis system (VDRAS) is an advanced convective-scale analysis system capable of providing analyses of 3-D wind, temperature, and humidity by assimilating Doppler radar observations. Although potentially useful, it is still an open question how to use these fields to improve nowcasting. In this study, we present results from our first attempt at developing a support vector machine (SVM) box-based nowcasting (SBOW) method under the machine learning framework using VDRAS analysis data. The key design points of SBOW are as follows: (1) the study domain is divided into many position-fixed small boxes, and the nowcasting problem is transformed into a single question per box, i.e., will a radar echo > 35 dBZ appear in the box in 30 min? (2) Box-based temporal and spatial features, which include time trends and surrounding environmental information, are constructed. (3) The constructed box-based features are used to train the SVM classifier, which then makes the predictions. Compared with complicated and expensive expert systems, this design keeps SBOW small, compact, straightforward, and easy to maintain and expand at low cost. The experimental results show that, although no complicated tracking algorithm is used, SBOW can predict the storm movement trend and storm growth with reasonable skill.
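
    The per-box decision ("will an echo > 35 dBZ appear within 30 min?") is a standard two-class problem. A minimal linear SVM trained by sub-gradient descent on the hinge loss illustrates the classifier side; the two synthetic features below stand in for the VDRAS-derived box features, and the training scheme is a generic sketch, not SBOW's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic box features (e.g. reflectivity trend, low-level convergence);
    # label +1 if an echo > 35 dBZ appears in the box within 30 min.
    n = 400
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

    w, b, lam, lr = np.zeros(2), 0.0, 1e-3, 0.1
    for epoch in range(200):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                     # hinge-loss sub-gradient step
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                              # only the regularizer acts
                w -= lr * lam * w

    accuracy = np.mean(np.sign(X @ w + b) == y)
    ```

    Applied per box over the whole domain, the sign of `X @ w + b` yields a 30-minute nowcast map without any explicit storm tracking.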

  7. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available Artificial immune systems are among the widely used methods for classification, a decision-making process. Based on the natural immune system, they can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach, which uses reinforcement learning to find better antibodies with immune operators. Compared with other methods in the literature, the proposed approach offers several advantages, including effectiveness, fewer memory cells, high accuracy, speed, and adaptability to data. Its performance is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA, with benchmark data and remote image data used in the experiments. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource-limited artificial immune classifier demonstrate the effectiveness of the proposed new method.

  8. Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results

    Science.gov (United States)

    Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.

    2014-03-01

    The evaluation and treatment of acute cerebral ischemia require a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI). Each of these parameters has a distinct radiological-pathological meaning: for example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow, both critical measures during the evolution of stroke. To integrate these data and estimate the tissue that is damaged or at risk, we have developed advanced machine learning methods based on unsupervised nonlinear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy; they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we apply NLDR methods to high-dimensional MRI data sets from preclinical animals and clinical patients with stroke. In analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC map and perfusion map. We also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
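
    As one concrete instance of unsupervised NLDR, a minimal Laplacian-eigenmaps embedding (not necessarily the specific algorithm used in the paper) can be written with plain NumPy: build a nearest-neighbour graph over the high-dimensional samples and embed them with the bottom non-trivial eigenvectors of the normalized graph Laplacian. The synthetic 5-D samples below stand in for per-voxel multiparametric MRI feature vectors:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Points on a noisy 1-D curve in 5-D, standing in for voxel features.
    t = np.sort(rng.uniform(0, 3, 200))
    X = np.stack([np.cos(t), np.sin(t), t, 0.1 * t**2, np.sqrt(t + 1)], axis=1)
    X += rng.normal(0, 0.01, X.shape)

    # k-nearest-neighbour adjacency (symmetrized, unit weights).
    k = 10
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]       # skip self at index 0
    W = np.zeros((200, 200))
    for i in range(200):
        W[i, nn[i]] = 1.0
    W = np.maximum(W, W.T)

    # Normalized Laplacian; bottom non-trivial eigenvectors give the map.
    deg = W.sum(1)
    Dm = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(200) - Dm @ W @ Dm
    vals, vecs = np.linalg.eigh(L)
    embedding = vecs[:, 1:3]        # 2-D embedded image of the 5-D data
    ```

    Plotting `embedding` coloured by tissue class is the kind of "embedded scattergram" the abstract refers to: abnormal tissue separates from normal tissue in the low-dimensional map.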

  9. Fusion of HJ1B and ALOS PALSAR data for land cover classification using machine learning methods

    Science.gov (United States)

    Wang, X. Y.; Guo, Y. G.; He, J.; Du, L. T.

    2016-10-01

    Image classification from remote sensing is becoming increasingly important for monitoring environmental change, so exploring effective algorithms to increase classification accuracy is critical. This paper explores the use of multispectral HJ1B and ALOS (Advanced Land Observing Satellite) PALSAR L-band (Phased Array type L-band Synthetic Aperture Radar) data for land cover classification using learning-based algorithms. Pixel-based and object-based image analysis approaches for classifying the HJ1B data and the HJ1B and ALOS/PALSAR fused images were compared using two machine learning algorithms, support vector machine (SVM) and random forest (RF), to test which algorithm achieves the best classification accuracy in arid and semiarid regions. The overall accuracies of the pixel-based (fused data: 79.0%; HJ1B data: 81.46%) and object-based classifications (fused data: 80.0%; HJ1B data: 76.9%) were relatively close when using the SVM classifier. The pixel-based classification achieved a high overall accuracy (85.5%) using the RF algorithm on the fused data, whereas the RF classifier with object-based image analysis produced a lower overall accuracy (70.2%). The study demonstrates that the pixel-based classification used fewer variables and performed relatively better than the object-based classification on the HJ1B imagery and the fused data. Overall, integrating the HJ1B and ALOS/PALSAR imagery improved the overall accuracy by 5.7% when using pixel-based image analysis and the RF classifier.

  10. Soft computing in machine learning

    CERN Document Server

    Park, Jooyoung; Inoue, Atsushi

    2014-01-01

    As users and consumers demand smarter devices, intelligent systems are being revolutionized by machine learning. Machine learning, as part of intelligent systems, is already one of the most critical components in everyday tools ranging from search engines and credit card fraud detection to stock market analysis. Machines can be trained to perform tasks so that they automatically detect, diagnose, and solve a variety of problems. Intelligent systems have made rapid progress in advancing the state of the art in machine learning based on smart and deep perception, and they are widely applied in automated speech recognition, natural language processing, medical diagnosis, bioinformatics, and robot locomotion. This book introduces how to handle substantial amounts of data, teach machines, and improve decision-making models, and it specializes in the development of advanced intelligent systems through machine learning. It...

  11. Teamwork: improved eQTL mapping using combinations of machine learning methods.

    Directory of Open Access Journals (Sweden)

    Marit Ackermann

    Full Text Available Expression quantitative trait loci (eQTL) mapping is a widely used technique to uncover regulatory relationships between genes, and a range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods) initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination of existing machine learning algorithms (a committee). Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer, at the cost of slightly lower average sensitivity.
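
    The committee idea, averaging the outputs of heterogeneous learners, can be sketched in a few lines. For brevity the members here are a closed-form ridge regressor and a k-nearest-neighbour regressor rather than the Random Forests and LASSO used by the authors, and the data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic regression data standing in for genotype -> expression links.
    X = rng.normal(size=(300, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)
    Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

    def ridge_predict(Xtr, ytr, Xte, alpha=1.0):
        """Closed-form ridge regression (committee member 1)."""
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1]),
                            Xtr.T @ ytr)
        return Xte @ w

    def knn_predict(Xtr, ytr, Xte, k=15):
        """k-nearest-neighbour regression (committee member 2)."""
        d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
        return ytr[np.argsort(d, axis=1)[:, :k]].mean(axis=1)

    # Committee: simple average of the two members' predictions.
    pred = 0.5 * ridge_predict(Xtr, ytr, Xte) + 0.5 * knn_predict(Xtr, ytr, Xte)
    rmse = np.sqrt(np.mean((pred - yte) ** 2))
    ```

    Averaging tends to raise precision because members rarely make the same mistake on the same example, which matches the committee's behaviour reported in the abstract.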

  12. Bayesian zero-failure reliability modeling and assessment method for multiple numerical control (NC) machine tools

    Institute of Scientific and Technical Information of China (English)

    阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲

    2016-01-01

    A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, a zero-failure data form and corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no suitable statistical model had previously been developed. An expert-judgment process that incorporates prior information is presented to overcome the difficulty of obtaining reliable prior distributions of the Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to overcome the difficulty of calculating high-dimensional integrals and to obtain parameter estimators. The proposed method is applied to a real case; corresponding code and implementation techniques are developed to run the MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method for NCMTs under zero failures is validated.
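
    The flavour of the MCMC step can be conveyed with a small random-walk Metropolis sampler for Weibull parameters under all-censored (zero-failure) data; with zero failures the likelihood is just a product of Weibull survival functions at the censoring times. The test durations and priors below are invented for illustration, not those of the paper (which uses WinBUGS):

    ```python
    import math, random

    random.seed(4)

    # Hypothetical censoring times (h): each NCMT survived its whole test
    # with zero failures; values are invented for illustration.
    taus = [500.0, 650.0, 700.0, 800.0, 550.0]

    def log_post(lb, le):
        """Log-posterior over log-shape lb and log-scale le of a Weibull."""
        beta, eta = math.exp(lb), math.exp(le)
        ll = -sum((t / eta) ** beta for t in taus)   # all-censored likelihood
        # Assumed weakly informative normal priors on the log-parameters.
        lp = -lb ** 2 / 2 - (le - math.log(1000.0)) ** 2 / 2
        return ll + lp

    lb, le = 0.0, math.log(1000.0)
    cur = log_post(lb, le)
    samples = []
    for it in range(20000):
        nlb, nle = lb + random.gauss(0, 0.1), le + random.gauss(0, 0.1)
        cand = log_post(nlb, nle)
        if math.log(random.random()) < cand - cur:   # Metropolis accept step
            lb, le, cur = nlb, nle, cand
        if it >= 5000:                               # discard burn-in
            samples.append((math.exp(lb), math.exp(le)))

    # Posterior-mean MTBF from the Weibull mean eta * Gamma(1 + 1/beta).
    mtbf = sum(e * math.gamma(1 + 1 / b) for b, e in samples) / len(samples)
    ```

    The posterior draws also yield credible intervals for the MTBF directly, which is exactly what the high-dimensional integrals would otherwise have to provide.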

  13. Comparison of Machine Learning Methods for the Purpose Of Human Fall Detection

    Directory of Open Access Journals (Sweden)

    Strémy Maximilián

    2014-12-01

    Full Text Available According to several studies, the European population has been aging rapidly over the last years. It is therefore important to ensure that the aging population is able to live independently without the support of the working-age population. According to these studies, falls are the most dangerous and most frequent accidents in the everyday life of the aging population. In our paper, we present a system that detects human falls visually, i.e., using no wearable equipment. For this purpose, we used a Kinect sensor, which provides the human body position in Cartesian coordinates; it can capture the human body directly because it includes a depth camera as well as an infrared camera. The first step in our research was to detect postures and classify the fall accident. We experimented with and compared selected machine learning methods, including Naive Bayes, decision trees and the SVM method, on the task of recognizing human postures (standing, sitting and lying). The highest classification accuracy, over 93.3%, was achieved by the decision tree method.
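
    The posture-classification step can be illustrated with a toy threshold rule in the spirit of a learned decision tree; the thresholds and the two skeleton-derived inputs below are hand-chosen for illustration, not the tree learned in the study:

    ```python
    def classify_posture(head_y, torso_vertical):
        """Toy decision-tree-style posture rule.

        head_y: head height above the floor (m), from Kinect skeleton data.
        torso_vertical: |cos| of the torso-to-vertical angle,
                        1.0 = upright, 0.0 = horizontal.
        Thresholds are invented; a real tree would learn them from data.
        """
        if head_y < 0.4:                 # head near the floor -> lying
            return "lying"
        if torso_vertical > 0.8:         # upright torso
            return "standing" if head_y > 1.2 else "sitting"
        return "sitting"

    # Three synthetic skeleton observations: standing, sitting, lying.
    observations = [(1.6, 0.95), (0.9, 0.90), (0.2, 0.10)]
    labels = [classify_posture(*o) for o in observations]
    ```

    A fall event then corresponds to a rapid transition from "standing" to "lying" in the frame-by-frame label sequence.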

  14. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    Fault feature extraction and diagnostic techniques for reciprocating compressors are currently among the hot research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarming and diagnostic accuracy have not been effectively improved; developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and existing data from an online monitoring system are used to extract 15 types of fault feature parameters in total. The sensitive connections between faults and the feature parameters are clarified using the distance evaluation technique, and the characteristic parameters sensitive to different faults are obtained. On this basis, a method based on fault feature parameters and support vector machine (SVM) is developed and applied to practical fault diagnosis. Experiments and practical fault cases demonstrate an improved ability of early fault warning, and automatic classification of the fault alarm data with the SVM achieves good diagnostic accuracy.
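
    The distance evaluation technique scores each candidate feature by how far apart the class means sit relative to the within-class scatter; a minimal two-class sketch with a simplified criterion and synthetic data (not the monitoring-system features of the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic feature matrices: two fault classes x 50 samples x 4 features;
    # feature 2 is made discriminative on purpose.
    A = rng.normal(0, 1, (50, 4))
    B = rng.normal(0, 1, (50, 4))
    B[:, 2] += 3.0

    def distance_evaluation(A, B):
        """Between-class mean separation over within-class scatter, per feature."""
        within = 0.5 * (A.std(axis=0) + B.std(axis=0))
        between = np.abs(A.mean(axis=0) - B.mean(axis=0))
        return between / within          # larger = more sensitive feature

    scores = distance_evaluation(A, B)
    most_sensitive = int(np.argmax(scores))
    ```

    Ranking the 15 candidate parameters by such a score and keeping the top ones per fault type is what yields the sensitive feature subset fed to the SVM.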

  15. On plant detection of intact tomato fruits using image analysis and machine learning methods.

    Science.gov (United States)

    Yamamoto, Kyosuke; Guo, Wei; Yoshioka, Yosuke; Ninomiya, Seishi

    2014-07-09

    Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.

  16. On Plant Detection of Intact Tomato Fruits Using Image Analysis and Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Kyosuke Yamamoto

    2014-07-01

    Full Text Available Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.

  17. A newly conceived cylinder measuring machine and methods that eliminate the spindle errors

    OpenAIRE

    Vissiere, Alain; Nouira, H; Damak, Mohamed; Gibaru, Olivier; David, Jean-Marie

    2012-01-01

    International audience; Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reduci...

  18. Semi-Supervised Learning Based on Manifold in BCI

    Institute of Scientific and Technical Information of China (English)

    Ji-Ying Zhong; Xu Lei; De-Zhong Yao

    2009-01-01

A Laplacian support vector machine (LapSVM) algorithm, a semi-supervised learning method based on manifolds, is introduced to the brain-computer interface (BCI) to raise the classification precision and reduce the subjects' training complexity. The data are collected from three subjects in a three-task mental imagery experiment. LapSVM and transductive SVM (TSVM) are trained with a few labeled samples and a large number of unlabeled samples. The results confirm that LapSVM achieves much better classification than TSVM.
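The semi-supervised idea, letting unlabeled samples reshape the decision boundary via pseudo-labeling, can be sketched in a toy form. The self-training loop below with a 1-NN base learner is only an illustration of the principle, not the LapSVM algorithm itself; the 2D "trial features" are hypothetical:

```python
import math

def nn_label(x, labeled):
    """Label of the nearest labeled sample (1-NN)."""
    return min(labeled, key=lambda p: math.dist(x, p[0]))[1]

def self_train(labeled, unlabeled, rounds=3):
    """Each round, pseudo-label the unlabeled point closest to the
    labeled set and add it to the training data."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(min(rounds, len(pool))):
        x = min(pool, key=lambda u: min(math.dist(u, p[0]) for p in labeled))
        labeled.append((x, nn_label(x, labeled)))
        pool.remove(x)
    return labeled

# two labeled mental-imagery "trials" plus three unlabeled ones (toy data)
labeled = [((0.0, 0.0), "rest"), ((10.0, 0.0), "imagery")]
unlabeled = [(1.0, 0.0), (9.0, 0.0), (5.0, 0.0)]
print(self_train(labeled, unlabeled))
```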

  19. A Hybrid Prediction Method of Thermal Extension Error for Boring Machine Based on PCA and LS-SVM

    Directory of Open Access Journals (Sweden)

    Cheng Qiang

    2017-01-01

Full Text Available Thermal extension error of the boring bar in the z-axis is one of the key factors that degrade the machining accuracy of a boring machine, so exactly establishing the relationship between thermal extension length and temperature, and predicting the changing rule of the thermal error, are the premise of thermal extension error compensation. In this paper, a prediction method for the thermal extension length of the boring bar is proposed based on principal component analysis (PCA) and a least squares support vector machine (LS-SVM) model. In order to avoid multiple correlation and coupling among the large number of temperature input variables, PCA is first introduced to extract the principal components of the temperature data samples. Then, LS-SVM is used to predict the changing tendency of the thermally induced extension error of the boring bar. Finally, experiments were conducted on a boring machine; the results show that the residual axial thermal elongation error of the boring bar dropped below 5 μm, with a minimum residual error of only 0.5 μm. This method not only effectively improves the efficiency of temperature data acquisition and analysis but also improves the modeling accuracy and robustness.
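PCA's role here is to collapse many correlated temperature readings into a few uncorrelated components before regression. A minimal sketch of extracting the first principal component by power iteration, on hypothetical sensor data rather than the paper's measurements:

```python
import math

def first_pc(rows, iters=100):
    """First principal component of mean-centered data via power iteration."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix of the centered data
    cov = [[sum(xi[a] * xi[b] for xi in x) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# two perfectly correlated "temperature sensors"; PC1 ≈ (0.707, 0.707)
readings = [(20.0 + 0.1 * t, 22.0 + 0.1 * t) for t in range(10)]
print(first_pc(readings))
```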

  20. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. 
This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to

  1. Three Phase Motor Centrifugal Machines Speed Control Using Pid Fuzzy Method

    Directory of Open Access Journals (Sweden)

    Trio Yus Peristiaferi

    2015-03-01

Full Text Available Induction motor speed settings are still made manually by changing the position of the shaft or the size of the pulley of the centrifugal machine. With this method, the motor speed is difficult to control as expected, and inappropriate speed settings can also lead to reduced sugar production. It is therefore necessary to control the motor speed when load is added during the starting, spinning and braking phases. The controller used is a PID fuzzy controller. In simulation and in implementation, the PID fuzzy controller yields average errors during the starting, spinning and braking processes of about 0.51% and about 1.06%, respectively. It is hoped that this final project can help increase the efficiency of the centrifugal machines in the sugar factory.
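A discrete PID step of the kind a fuzzy supervisor would tune can be sketched as follows; the gains, setpoint and sample time below are hypothetical, and the fuzzy gain-adaptation layer is omitted:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # accumulate integral term
        deriv = (err - self.prev_err) / self.dt  # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# hypothetical speed loop: target 1000 rpm, current reading 950 rpm
pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=0.01)
u = pid.step(setpoint=1000.0, measured=950.0)
print(u)
```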

  2. Enhanced computation method of topological smoothing on shared memory parallel machines

    Directory of Open Access Journals (Sweden)

    Mahmoudi Ramzi

    2011-01-01

Full Text Available Abstract To prepare images for better segmentation, we need preprocessing applications, such as smoothing, to reduce noise. In this paper, we present an enhanced computation method for smoothing 2D objects in the binary case. Unlike existing approaches, the proposed method provides parallel computation and better memory management, while preserving the topology (number of connected components) of the original image by using homotopic transformations defined in the framework of digital topology. We introduce an adapted parallelization strategy called split, distribute and merge (SDM), which allows efficient parallelization of a large class of topological operators. To achieve a good speedup and better memory allocation, we took care of task scheduling and management. The work distributed during the smoothing process is done by a variable number of threads. Tests on a 2D grayscale image (512×512), using a shared memory parallel machine (SMPM) with 8 CPU cores (2× Xeon E5405 running at a frequency of 2 GHz), showed a speedup of 5.2 with a cache success rate of 70%.
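The split, distribute and merge (SDM) strategy can be sketched with a thread pool: split the image into row bands, smooth each band in its own thread, and merge the results in order. A plain three-point mean filter stands in here for the homotopic (topology-preserving) operators of the paper, and the image is a toy array:

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_rows(image, lo, hi):
    """Smooth the horizontal band of rows [lo, hi) with a 1D 3-point mean."""
    out = []
    for r in range(lo, hi):
        row = image[r]
        out.append([
            (row[max(c - 1, 0)] + row[c] + row[min(c + 1, len(row) - 1)]) / 3
            for c in range(len(row))
        ])
    return out

def smooth_parallel(image, workers=4):
    # split: divide rows into bands; distribute: one band per thread;
    # merge: concatenate band results in their original order
    n = len(image)
    step = -(-n // workers)  # ceiling division
    bands = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda b: smooth_rows(image, *b), bands)
    return [row for part in parts for row in part]

img = [[float((r + c) % 2) for c in range(8)] for r in range(8)]
smoothed = smooth_parallel(img)
```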

  3. Direct Surge Margin Control for Aeroengines Based on Improved SVR Machine and LQR Method

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2013-01-01

Full Text Available A novel scheme of high stability engine control (HISTEC) based on an improved linear quadratic regulator (ILQR), called direct surge margin control, is derived for super-maneuver flights. Direct surge margin control, unlike conventional control schemes, puts the surge margin into the engine closed-loop system and takes the surge margin directly as the controlled variable. In this way, direct surge margin control can exploit the potential performance of the engine more effectively under the decrease of engine stability margin that usually occurs in super-maneuver flights. To overcome the difficulty that the aeroengine surge margin is not directly measurable, an approach based on an improved support vector regression (SVR) machine is proposed to construct a surge margin prediction model. The surge margin model contains two parts: a baseline model for states with no inlet distortion, and the calculation of the surge margin loss under super-maneuver flight conditions. The former is developed using a neural network method, whose inputs are selected by a weighted feature selection algorithm. Considering the hysteresis between pilot input and angle-of-attack output, an online scrolling window least squares support vector regression (LSSVR) method is employed to first estimate the inlet distortion index and then compute the surge margin loss via empirical look-up tables.

  4. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    Science.gov (United States)

    Sudhakara, Dara; Prasanthi, Guvvala

    2016-08-01

Wire cut EDM is an unconventional machining process used to build components of complex shape. The current work deals with the optimization of surface roughness while machining P/M cold worked (CW) tool steel by wire cut EDM using the Taguchi method. The process parameters of the wire cut EDM are ON, OFF, IP, SV, WT and WP. An L27 orthogonal array (OA) is used to design the experiments. In order to find out the parameters affecting the surface roughness, ANOVA analysis is employed. The optimum levels for obtaining minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.
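Taguchi analysis ranks parameter levels by signal-to-noise ratio; for surface roughness, where smaller is better, S/N = −10·log10(mean(y²)). A minimal sketch with hypothetical roughness replicates (not the paper's measurements):

```python
import math

def sn_smaller_better(values):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y^2))."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# hypothetical surface-roughness replicates (µm) at two parameter settings;
# the setting with lower roughness gets the higher S/N ratio
print(sn_smaller_better([2.1, 2.3, 2.0]))
print(sn_smaller_better([3.4, 3.6, 3.1]))
```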

  5. Comparison of two different methods for the uncertainty estimation of circle diameter measurements using an optical coordinate measuring machine

    DEFF Research Database (Denmark)

    Morace, Renata Erica; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2005-01-01

    This paper deals with the uncertainty estimation of measurements performed on optical coordinate measuring machines (CMMs). Two different methods were used to assess the uncertainty of circle diameter measurements using an optical CMM: the sensitivity analysis developing an uncertainty budget and...

  6. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

Full Text Available Rapid and accurate lifetime prediction of critical components in a system is important to maintaining the system's reliable operation. To this end, many lifetime prediction methods have been developed to handle various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular ones. In this article, a Bayesian least-squares support vector machine method that combines least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component's power gain over time is developed, and the point and interval remaining useful life estimates are obtained considering a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of component.
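The point estimate of remaining useful life comes from extrapolating the fitted degradation trend to the failure threshold. A minimal sketch with an ordinary least squares line standing in for the Bayesian LS-SVM model, on hypothetical power-gain data:

```python
def fit_line(ts, ys):
    """Ordinary least squares fit y = a + b*t."""
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    b = sum((t - tm) * (y - ym) for t, y in zip(ts, ys)) / \
        sum((t - tm) ** 2 for t in ts)
    return ym - b * tm, b

def remaining_useful_life(ts, ys, threshold):
    """Time until the fitted degradation trend crosses the failure
    threshold, measured from the last observation."""
    a, b = fit_line(ts, ys)
    return (threshold - a) / b - ts[-1]

# hypothetical power-gain loss (dB) drifting toward a -3 dB failure threshold
ts = [0.0, 100.0, 200.0, 300.0]
gain_loss = [0.0, -0.5, -1.0, -1.5]
print(remaining_useful_life(ts, gain_loss, -3.0))  # ≈ 300.0 hours remaining
```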

  7. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  8. Parameter Identification of Ship Maneuvering Models Using Recursive Least Square Method Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Man Zhu

    2017-03-01

Full Text Available Determination of ship maneuvering models is a tough task in ship maneuverability prediction. Among the prime approaches for estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as the recursive least squares method (RLS), are applied for on-line identification of ship maneuvering models. However, this method strongly depends on the objects of study and on the initial values of the identified parameters. To overcome this, an intelligent technique, i.e., support vector machines (SVM), is first used to estimate the initial values of the identified parameters with finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used, respectively, to filter the noise-corrupted data. The choice of the sample number for SVM to decide the initial values of the identified parameters is extensively discussed and analyzed. With the de-noised motion data as input-output training samples, the parameters of the ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between the identification results and the true values of the parameters demonstrates that the ship maneuvering models identified by both RLS and SVM-RLS agree reasonably with the simulated motions of the ship, and that increasing the number of samples for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and best convergence.
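The RLS core of the scheme is a per-sample update of the parameter vector and covariance matrix. A minimal sketch identifying a hypothetical two-parameter linear model y = 2·x1 + 3·x2 (the SVM initialization step is omitted):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step for y ≈ x·theta.
    K = Px / (lam + x·Px); theta += K*err; P = (P - K(Px)^T) / lam."""
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    K = [Px[i] / denom for i in range(n)]            # gain vector
    err = y - sum(x[i] * theta[i] for i in range(n))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# identify the hypothetical model y = 2*x1 + 3*x2 from streaming samples
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for x in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]:
    theta, P = rls_update(theta, P, list(x), 2 * x[0] + 3 * x[1])
print(theta)  # close to [2.0, 3.0]
```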

  9. Real-time optical path control method that utilizes multiple support vector machines for traffic prediction

    Science.gov (United States)

    Kawase, Hiroshi; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

An effective solution to the continuous expansion of Internet traffic is to offload traffic to lower layers such as the L2 or L1 optical layers. One possible approach is to introduce dynamic optical path operations such as adaptive establishment/tear down according to traffic variation. Path operations cannot be done instantaneously; hence, traffic prediction is essential. Conventional prediction techniques need optimal parameter values to be determined in advance by averaging long-term variations from the past. However, this does not allow adaptation to the ever-changing short-term variations expected to be common in future networks. In this paper, we propose a real-time optical path control method based on a machine-learning technique involving support vector machines (SVMs). An SVM learns the most recent traffic characteristics, and so enables better adaptation to temporal traffic variations than conventional techniques. The difficulty lies in determining how to minimize the time gap between optical path operation and buffer management at the originating points of those paths. The gap makes the required learning data set enormous and the learning process costly. To resolve the problem, we propose the adoption of multiple SVMs running in parallel, trained with non-overlapping subsets of the original data set. The maximum value of the outputs of these SVMs is taken as the estimated number of necessary paths. Numerical experiments prove that our proposed method outperforms a conventional prediction method, the autoregressive moving average method with optimal parameter values determined by Akaike's information criterion, and reduces the packet-loss ratio by up to 98%.
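The key design choice, training several predictors on non-overlapping subsets and taking the maximum of their outputs as the required path count, can be sketched as follows. Subset means stand in for the SVMs, and the traffic values are hypothetical:

```python
def subset_predictors(samples, k):
    """Train one trivial predictor (here: the subset mean, standing in
    for an SVM) on each of k non-overlapping subsets of the history."""
    subsets = [samples[i::k] for i in range(k)]
    return [sum(s) / len(s) for s in subsets]

def required_paths(samples, k=4):
    # conservative estimate: maximum over the parallel predictors
    return max(subset_predictors(samples, k))

traffic = [3.0, 5.0, 4.0, 6.0, 2.0, 7.0, 5.0, 4.0]  # hypothetical load history
print(required_paths(traffic))  # → 6.0
```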

  10. High aspect ratio microstructuring of transparent dielectrics using femtosecond laser pulses: method for optimization of the machining throughput

    Science.gov (United States)

    Hendricks, F.; der Au, J. Aus; Matylitsky, V. V.

    2014-10-01

High average power, high repetition rate femtosecond lasers with μJ pulse energies are increasingly used for material processing applications. The unique advantage of material processing with sub-picosecond lasers is efficient, fast and localized energy deposition, which leads to high ablation efficiency and accuracy in nearly all kinds of solid materials. This work focuses on the machining of high aspect ratio structures in transparent dielectrics, in particular chemically strengthened Xensation™ glass from Schott, using multi-pass ablative material removal. For the machining of high aspect ratio structures, needed among others for cutting applications, a novel method to determine the best relation between kerf width and number of overscans is presented. The importance of this relation for optimizing the machining throughput is demonstrated.

  11. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining.

    Science.gov (United States)

    Liang, Qiaokang; Zhang, Dan; Wu, Wanneng; Zou, Kunlin

    2016-11-16

    Multi-component cutting force sensing systems in manufacturing processes applied to cutting tools are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  12. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining

    Directory of Open Access Journals (Sweden)

    Qiaokang Liang

    2016-11-01

Full Text Available Multi-component cutting force sensing systems in manufacturing processes applied to cutting tools are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems with capability of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  13. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    Science.gov (United States)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in the simulation of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry plains of Iraq, known as the main origin of recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: a linear regression model (LM), a support vector machine (SVM), an artificial neural network (ANN), and multivariate adaptive regression splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation and the 9-month SPEI drought index are selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLAs' predictions.
The data set was divided

  14. Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method

    Science.gov (United States)

    Khandelwal, Manoj; Monjezi, M.

    2013-03-01

    Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling down of machinery, improper fragmentation, reduced efficiency of drilling, etc. The existence of various effective parameters and their unknown relationships are the main reasons for inaccuracy of the empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM have been compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 and 0.89 by SVM and MVRA, respectively, whereas the MAE was 0.29 and 1.07 by SVM and MVRA, respectively.
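The two performance measures used for the SVM/MVRA comparison, the coefficient of determination and the mean absolute error, can be computed as below; the measured/predicted values in the usage example are hypothetical, not the mine data:

```python
def cod_mae(measured, predicted):
    """Coefficient of determination (R^2) and mean absolute error."""
    n = len(measured)
    mean = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    mae = sum(abs(m - p) for m, p in zip(measured, predicted)) / n
    return 1 - ss_res / ss_tot, mae

# hypothetical measured vs. predicted backbreak values (m)
cod, mae = cod_mae([0.5, 1.0, 1.5, 2.0], [0.6, 0.9, 1.6, 1.9])
print(round(cod, 3), round(mae, 2))  # → 0.968 0.1
```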

  15. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  16. Application of symbolic representation method to the analysis of machine errors

    Science.gov (United States)

    Chen, Cha'o.-Kuang; Wu, Tzong-Mou

    1993-09-01

A symbolic representation of machine errors for the open-loop chain and the closed-loop chain in position and orientation is presented. This representation does away with cumbersome matrix multiplications and is able to omit the zero-valued terms of matrix multiplication. A program is also developed by the symbolic representation method which is applicable to the analysis of machine errors. An example is given to illustrate the use of this program for the analysis of machine errors. It is hoped that the method presented in this study will provide an easy and powerful tool for the analysis of machine errors. Introduction: Mechanisms are commonly used in … a specified position and orientation in two- or three-dimensional space. Inaccuracies introduced by clearances in the mechanism connections and errors in manufacturing are one of the prin… [SPIE Vol. 2101, Measurement Technology and Intelligent Instruments (1993), p. 155]

  17. Machine vision method for online surface inspection of easy open can ends

    Science.gov (United States)

    Mariño, Perfecto; Pastoriza, Vicente; Santamaría, Miguel

    2006-10-01

The easy open can end manufacturing process in the food canning sector currently makes use of a manual, non-destructive testing procedure to guarantee can end repair coating quality. This surface inspection is based on visual inspection by human inspectors. Due to the high production rate (100 to 500 ends per minute) only a small part of each lot is verified (statistical sampling); therefore an automatic, online inspection system based on machine vision has been developed to improve this quality control. The inspection system uses a fuzzy model to make the acceptance/rejection decision for each can end from the information obtained by the vision sensor. In this work, the inspection method is presented. This surface inspection system checks the total production, classifies the ends in agreement with an expert human inspector, supplies interpretability to the operators in order to find out failure causes and reduce the mean time to repair during failures, and allows the minimum can end repair coating quality to be modified.

  18. Advanced three-dimensional scan methods in the nanopositioning and nanomeasuring machine

    Science.gov (United States)

    Hausotte, T.; Percle, B.; Jäger, G.

    2009-08-01

    The nanopositioning and nanomeasuring machine developed at the Ilmenau University of Technology was originally designed for surface measurements within a measuring volume of 25 mm × 25 mm × 5 mm. The interferometric length measuring and drive systems make it possible to move the stage with a resolution of 0.1 nm and a positioning uncertainty of less than 10 nm in all three axes. Various measuring tasks are possible depending on the installed probe system. Most of the sensors utilized are one-dimensional surface probes; however, some tasks require measuring sidewalls and other three-dimensional features. A new control system, based on the I++ DME specification, was implemented in the device. The I++ DME scan functions were improved and special scan functions added to allow advanced three-dimensional scan methods, further fulfilling the demands of scanning force microscopy and micro-coordinate measurements. This work gives an overview of these new functions and the application of them for several different measurements.

  19. Comparison of machine-learning methods for above-ground biomass estimation based on Landsat imagery

    Science.gov (United States)

    Wu, Chaofan; Shen, Huanhuan; Shen, Aihua; Deng, Jinsong; Gan, Muye; Zhu, Jinxia; Xu, Hongwei; Wang, Ke

    2016-07-01

    Biomass is one significant biophysical parameter of a forest ecosystem, and accurate biomass estimation on the regional scale provides important information for carbon-cycle investigation and sustainable forest management. In this study, Landsat satellite imagery data combined with field-based measurements were integrated through comparisons of five regression approaches [stepwise linear regression, K-nearest neighbor, support vector regression, random forest (RF), and stochastic gradient boosting] with two different candidate variable strategies to implement the optimal spatial above-ground biomass (AGB) estimation. The results suggested that RF algorithm exhibited the best performance by 10-fold cross-validation with respect to R2 (0.63) and root-mean-square error (26.44 ton/ha). Consequently, the map of estimated AGB was generated with a mean value of 89.34 ton/ha in northwestern Zhejiang Province, China, with a similar pattern to the distribution mode of local forest species. This research indicates that machine-learning approaches associated with Landsat imagery provide an economical way for biomass estimation. Moreover, ensemble methods using all candidate variables, especially for Landsat images, provide an alternative for regional biomass simulation.
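The model comparison above rests on k-fold cross-validation: hold out one fold, fit on the rest, score, and average across folds. A generic sketch comparing two toy regressors (a global-mean baseline and a slope-through-origin fit) on hypothetical data, standing in for the five regression approaches of the study:

```python
def k_fold_scores(xs, ys, k, fit, predict, err):
    """Generic k-fold cross-validation returning one error score per fold."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    scores = []
    for test_idx in folds:
        train_idx = [i for i in range(len(xs)) if i not in test_idx]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        preds = [predict(model, xs[i]) for i in test_idx]
        scores.append(err([ys[i] for i in test_idx], preds))
    return scores

# two toy "regressors": a global-mean baseline and a slope-through-origin fit
fit_mean = lambda X, Y: sum(Y) / len(Y)
pred_mean = lambda m, x: m
fit_slope = lambda X, Y: sum(a * b for a, b in zip(X, Y)) / sum(a * a for a in X)
pred_slope = lambda m, x: m * x
rmse = lambda Y, P: (sum((y - p) ** 2 for y, p in zip(Y, P)) / len(Y)) ** 0.5

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]  # hypothetical, roughly y = 2x
mean_cv = k_fold_scores(xs, ys, 3, fit_mean, pred_mean, rmse)
slope_cv = k_fold_scores(xs, ys, 3, fit_slope, pred_slope, rmse)
```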

  20. Analysis of machinable structures and their wettability of rotary ultrasonic texturing method

    Science.gov (United States)

    Xu, Shaolin; Shimada, Keita; Mizutani, Masayoshi; Kuriyagawa, Tsunemoto

    2016-10-01

Tailored surface textures at the micro- or nanoscale are widely used to obtain required functional performance. The rotary ultrasonic texturing (RUT) technique has been proved capable of fabricating periodic micro- and nanostructures. In the present study, diamond tools with geometrically defined cutting edges were designed for fabricating different types of tailored surface textures using the RUT method. Surface generation mechanisms and machinable structures of the RUT process are analyzed and simulated with a 3D-CAD program. Textured surfaces generated by using a triangular pyramid cutting tip are constructed. Different textural patterns from several micrometers to several tens of micrometers with few burrs were successfully fabricated, which proved that tools with a proper two-rake-face design are capable of removing cutting chips efficiently along a sinusoidal cutting locus in the RUT process. Technical applications of the textured surfaces are also discussed. The wetting properties of textured aluminum surfaces were evaluated in combination with tests of surface roughness features. The results show that the real surface area of the textured aluminum surfaces almost doubled compared with that of a flat surface, and anisotropic wetting properties were obtained due to the pronounced directional textural features.

  1. Scale effects and a method for similarity evaluation in micro electrical discharge machining

    Science.gov (United States)

    Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua

    2016-08-01

Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performances of micro EDM. A new concept of "scale effects" in micro EDM is proposed; the scale effects can reveal the difference in machining performances between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output results of scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time significantly increase when scaling down the capacitance or open-circuit voltage. It is indicated that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, which means that a micro EDM system with lower discharge energy experiences more scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that similarity precision is more effective in reflecting the scale effects and their fluctuation than similarity difference. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performances of micro EDM.

  2. A Sensor-less Method for Online Thermal Monitoring of Switched Reluctance Machine

    DEFF Research Database (Denmark)

    Wang, Chao; Liu, Hui; Liu, Xiao

    2015-01-01

    Stator winding is one of the most vulnerable parts in Switched Reluctance Machine (SRM), especially under thermal stresses during frequently changing operation circumstances and susceptible heat dissipation conditions. Thus real-time online thermal monitoring of the stator winding is of great......, neither machine parameters nor thermal impedance parameters are required in the scheme. Simulation results under various operating conditions confirm the proposed sensor-less online thermal monitoring approach....

  3. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in these types of activity recognition problems do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the
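    The core trick in this record is representing a raw 1-D signal as an image so a vision network can classify it. The following sketch (my illustration, not the authors' pipeline, which plots with standard charting tools before retraining Inception) rasterizes a signal into a binary image array with pure NumPy; the function name and sizes are assumptions.

```python
import numpy as np

def signal_to_image(signal, height=64, width=64):
    """Rasterize a 1-D sensor trace into a binary image, mimicking a
    saved time-series plot: resample to `width` columns, scale the
    amplitude to `height` rows, and set one pixel per column."""
    sig = np.interp(np.linspace(0, len(signal) - 1, width),
                    np.arange(len(signal)), np.asarray(signal, float))
    lo, hi = sig.min(), sig.max()
    rows = np.zeros(width, dtype=int) if hi == lo else \
        np.round((sig - lo) / (hi - lo) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[height - 1 - rows, np.arange(width)] = 1  # plot-style y axis (up)
    return img

# One synthetic "accelerometer" trace; in the described workflow such
# images would be written to per-exercise-label folders for retraining.
t = np.linspace(0, 1, 500)
img = signal_to_image(np.sin(2 * np.pi * 5 * t))
print(img.shape)  # (64, 64)
```

Because each column carries exactly one lit pixel, the image encodes the signal's shape while discarding absolute scale, which is what a transfer-learned vision model would key on.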

  4. Optimization of Surface Finish in Turning Operation by Considering the Machine Tool Vibration using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Muhammad Munawar

    2012-01-01

    Optimization of surface roughness has been one of the primary objectives in most machining operations. Poor control of the desired surface roughness generates non-conforming parts, increasing cost and causing loss of productivity due to rework or scrap. The surface roughness value results from several process variables, among which machine tool condition is a significant one. In this study, experimentation was carried out to investigate the effect of machine tool condition on surface roughness. The variable used to represent the machine tool's condition was vibration amplitude. The input parameters used, besides vibration amplitude, were feed rate and insert nose radius. Cutting speed and depth of cut were kept constant. Based on a Taguchi orthogonal array, a series of experiments was designed and performed on AISI 1040 carbon steel bar at default and induced machine tool vibration amplitudes. ANOVA (Analysis of Variance) revealed that vibration amplitude and feed rate had a moderate effect on the surface roughness, while insert nose radius had the most significant effect. It was also found that a machine tool with low vibration amplitude produced better surface roughness. An insert with a larger nose radius produced better surface roughness at low feed rate.
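    A standard step in a Taguchi analysis like the one above is converting replicate roughness measurements into a signal-to-noise (S/N) ratio, using the smaller-is-better form since low roughness is desired. The sketch below shows that textbook formula on hypothetical replicate values (not data from this study).

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio, used for
    responses like surface roughness: S/N = -10*log10(mean(y^2)).
    Higher S/N (in dB) indicates a better parameter setting."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical roughness replicates (um) for two parameter settings;
# the first setting wins because its S/N ratio is higher.
print(sn_smaller_is_better([0.8, 0.9]))  # ≈ 1.4 dB
print(sn_smaller_is_better([2.1, 2.3]))  # ≈ -6.9 dB
```

Averaging these S/N values per factor level gives the main-effect ranking that ANOVA then tests for significance.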

  5. Using Standard-Sole Cost Method for Performance Gestion Accounting and Calculation Cost in the Machine Building Industry

    Directory of Open Access Journals (Sweden)

    Cleopatra Sendroiu

    2006-07-01

    The main purpose of improving and varying cost calculation methods in the machine building industry is to make them more operational and efficient in supplying the information the management needs for its decisions. The cost calculation methods currently used in machine building plants - the global method and the per-order method - determine a historical cost a posteriori, serving only to record and justify manufacturing expenses after the fact, and so do not fully satisfy the management's need for information. What is required is a change of conception in applying systems, methods and work techniques, according to the needs of efficient administration of production and of the plant as a whole. The standard-cost method best answers the needs of effective management of the value side of the manufacturing process and of raising economic efficiency. We consider that, in the machine building industry, these objectives can be achieved by using the standard sole cost variant of the standard-cost method.

  6. Using Standard-Sole Cost Method for Performance Gestion Accounting and Calculation Cost in the Machine Building Industry

    Directory of Open Access Journals (Sweden)

    Aureliana Geta Roman

    2006-09-01

    The main purpose of improving and varying cost calculation methods in the machine building industry is to make them more operational and efficient in supplying the information the management needs for its decisions. The cost calculation methods currently used in machine building plants – the global method and the per-order method – determine a historical cost a posteriori, serving only to record and justify manufacturing expenses after the fact, and so do not fully satisfy the management's need for information. What is required is a change of conception in applying systems, methods and work techniques, according to the needs of efficient administration of production and of the plant as a whole. The standard-cost method best answers the needs of effective management of the value side of the manufacturing process and of raising economic efficiency. We consider that, in the machine building industry, these objectives can be achieved by using the standard sole cost variant of the standard-cost method.

  7. Machining error compensation methods using on-machine measurement

    Science.gov (United States)

    Guiassa, Rachid

    On-machine measurement is used to inspect the part immediately after the cut, without part removal and additional setups. It detects the machining defects visible to the machine tool. The machine-tool-part system deflection and the cutting tool dimension inaccuracy are the most important sources of these defects. The machined part can be inspected at the semi-finishing cut level to identify systematic defects that may recur at the finishing cut. Therefore, corrective actions can be derived to anticipate the expected error and produce a part with acceptable accuracy. For industrial profitability, the measurement and compensation tasks must be done under the closed-door machining requirement, without human intervention. This thesis aims to develop mathematical models that use the inspection data of previous cuts to formulate the compensation of the finishing cut. The goal of the compensation is to anticipate the expected error, which is identified under two components. One is independent of the depth of cut and is related to the cutting tool dimension, such as wear. The other is dependent on the cutting depth, such as deflection. A general model is presented which relies solely on on-machine probing data from semi-finishing cuts to compensate the final cut. A variable cutting compliance coefficient relates the total system deflection to the depth of cut in a multi-cut process. It is used to estimate the compensation of the tool path. The model is able to take into account the effect of the cutting depth variation and the material removal in the estimation of the error at the finishing cut. In order to generate a continuous compensated tool path from discrete measurements, a B-Spline deformation technique is adapted to the available data and applied to compute the compensated tool path according to a restricted number of discrete compensation vectors. 
The results show that the on-machine probed errors can be significantly reduced using the
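    The two-component error model described above (a depth-independent tool-dimension term plus a compliance term proportional to the depth of cut) can be sketched as a least-squares fit over semi-finishing probe data. All numbers and symbol names below are hypothetical illustrations of that decomposition, not values from the thesis.

```python
import numpy as np

# Assumed error model: probed error at a point is
#   e = e_tool + c * depth_of_cut
# where e_tool is depth-independent (tool dimension/wear) and c is the
# cutting compliance coefficient (deflection per unit depth).
depths = np.array([0.8, 0.6, 0.4, 0.3])          # semi-finishing depths (mm)
errors = np.array([0.050, 0.042, 0.034, 0.030])  # probed errors (mm)

# Linear least squares for [e_tool, c].
A = np.column_stack([np.ones_like(depths), depths])
(e_tool, c), *_ = np.linalg.lstsq(A, errors, rcond=None)

# Anticipate the finishing-cut error and offset the tool path against it.
finish_depth = 0.2
compensation = -(e_tool + c * finish_depth)
print(round(e_tool, 4), round(c, 4), round(compensation, 4))
# 0.018 0.04 -0.026
```

In the thesis the discrete compensation vectors obtained this way are then blended into a continuous tool path with a B-Spline deformation, which this sketch does not attempt.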

  8. On the Use of Machine Learning Methods for Characterization of Contaminant Source Zone Architecture

    Science.gov (United States)

    Zhang, H.; Mendoza-Sanchez, I.; Christ, J.; Miller, E. L.; Abriola, L. M.

    2011-12-01

    Recent research has identified the importance of DNAPL mass distribution in the evolution of down-gradient contaminant plumes and the control of source zone remediation effectiveness. Advances in the management of sites containing DNAPL source zones, however, are currently limited by the difficulty associated with characterizing subsurface DNAPL source zone 'architecture'. Specifically, knowledge of the ganglia-to-pool ratio (GTP) has been demonstrated to be useful in the assessment and prediction of system behavior. In this paper, we present an approach to the estimation of a quantity related to GTP, the pool fraction (PF), defined as the percentage of the source zone volume occupied by pools, based on observations of plume concentrations. Here we discuss the development and initial validation of an approach for PF estimation based on machine learning methods. The algorithm is constructed so that, given new concentration data, it predicts the PF of the associated source zone. An ideal solution would make use of the concentration signals to estimate a single value for PF. Unfortunately, this problem is not well-posed given the data at our disposal. Thus, we relax the regression approach to one of classification. We quantize pool fraction (i.e., the interval between zero and one) into a number of intervals and employ machine learning methods to use the concentration data to determine the interval containing the PF for a given set of data. This approach is predicated on the assumption that quantities (i.e., features) derived from the concentration data of evolving plumes with similar source zone PFs will in fact be similar to one another. Thus, within the training process we must determine a suitable collection of features and build methods for evaluating and optimizing similarity in feature space that result in high accuracy in terms of predicting the correct PF interval. Moreover, the number and boundaries of these intervals must also be

  9. Big data analysis using modern statistical and machine learning methods in medicine.

    Science.gov (United States)

    Yoo, Changwon; Ramirez, Luis; Liuzzi, Juan

    2014-06-01

    In this article we introduce modern statistical machine learning and bioinformatics approaches that have been used to learn statistical relationships from big data in medicine and behavioral science, data that typically include clinical, genomic (and proteomic), and environmental variables. Every year, the data collected in biomedical and behavioral science grow larger and more complicated. Thus, in medicine, we need to be aware of this trend and understand the statistical tools that are available to analyze these datasets. Many statistical analyses aimed at such big datasets have been introduced recently. However, given the many different types of clinical, genomic, and environmental data, it is rather uncommon to see statistical methods that combine knowledge resulting from those different data types. To this end, we introduce big data in terms of clinical data, single nucleotide polymorphism and gene expression studies, and their interactions with the environment. We describe well-known regression analyses, such as linear and logistic regression, that have been widely used in clinical data analyses, and modern statistical models, such as Bayesian networks, that have been introduced to analyze more complicated data. We also discuss how to represent the interactions among clinical, genomic, and environmental data using modern statistical models. We conclude with a promising modern statistical method, Bayesian networks, that is suitable for analyzing big datasets consisting of different types of large-scale data from clinical, genomic, and environmental sources. Such statistical models built from big data will provide us with a more comprehensive understanding of human physiology and disease.

  10. A MACHINE-LEARNING METHOD TO INFER FUNDAMENTAL STELLAR PARAMETERS FROM PHOTOMETRIC LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Miller, A. A. [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, MS 169-506, Pasadena, CA 91109 (United States); Bloom, J. S.; Richards, J. W.; Starr, D. L. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Lee, Y. S. [Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764 (Korea, Republic of); Butler, N. R. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85281 (United States); Tokarz, S. [Smithsonian Astrophysical Observatory, Cambridge, MA 02138 (United States); Smith, N.; Eisner, J. A., E-mail: amiller@astro.caltech.edu [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States)

    2015-01-10

    A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are >10^9 photometrically cataloged sources, yet modern spectroscopic surveys are limited to ∼ a few × 10^6 targets. As we approach the Large Synoptic Survey Telescope era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (T_eff, log g, and [Fe/H]) using photometric-brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/Multi-Mirror Telescope. In sum, the training set includes ∼9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts T_eff, log g, and [Fe/H] from photometric time-domain observations. Our final optimized model produces a cross-validated rms error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for T_eff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ≈12%-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ∼54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
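    The regression step above (random forest mapping photometric features to a stellar parameter, scored by cross-validated RMSE) can be sketched as follows. The features and target here are synthetic stand-ins for the Hectospec/SSPP training set, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for time-domain features (amplitude, period,
# color, ...) and one stellar parameter (e.g. T_eff in kelvin).
X = rng.uniform(size=(1000, 4))
y = 4000 + 3000 * X[:, 0] + 500 * X[:, 1] ** 2 + rng.normal(0, 50, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(rmse)  # small relative to the ~3000 K spread in the target
```

As in the paper, a non-parametric ensemble like this needs no physical model of the parameter-feature relation; the training set alone defines the mapping.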

  11. Discrimination of Maize Haploid Seeds from Hybrid Seeds Using Vis Spectroscopy and Support Vector Machine Method.

    Science.gov (United States)

    Liu, Jin; Guo, Ting-ting; Li, Hao-chuan; Jia, Shi-qiang; Yan, Yan-lu; An, Dong; Zhang, Yao; Chen, Shao-jiang

    2015-11-01

    Doubled haploid (DH) lines are routinely applied in the hybrid maize breeding programs of many institutes and companies for their advantages of complete homozygosity and short breeding cycle length. A key issue in this approach is an efficient screening system to identify haploid kernels from the hybrid kernels crossed with the inducer. At present, haploid kernel selection is carried out manually using the "red-crown" kernel trait (the haploid kernel has a non-pigmented embryo and pigmented endosperm) controlled by the R1-nj gene. Manual selection is time-consuming and unreliable. Furthermore, the color of the kernel embryo is concealed by the pericarp. Here, we establish a novel approach for identifying maize haploid kernels based on visible (Vis) spectroscopy and support vector machine (SVM) pattern recognition technology. The diffuse transmittance spectra of individual kernels (141 haploid kernels and 141 hybrid kernels from 9 genotypes) were collected using a portable UV-Vis spectrometer and integrating sphere. The raw spectral data were preprocessed using smoothing and vector normalization methods. The desired feature wavelengths were selected based on the results of the Kolmogorov-Smirnov test. The wavelengths with p values above 0.05 were eliminated because the distributions of absorbance data at these wavelengths show no significant difference between haploid and hybrid kernels. Principal component analysis was then performed to reduce the number of variables. The SVM model was evaluated by 9-fold cross-validation. In each round, samples of one genotype were used as the testing set, while those of the other genotypes were used as the training set. The mean rate of correct discrimination was 92.06%. This result demonstrates the feasibility of using Vis spectroscopy to identify haploid maize kernels. The method would help develop a rapid and accurate automated screening system for haploid kernels.
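    The classification chain described (preprocess spectra, reduce dimensionality, classify with an SVM, score by cross-validation) can be sketched on synthetic "spectra". The band where the classes differ, the sample counts, and the pipeline settings below are illustrative assumptions, not the study's actual data or tuning.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic "spectra": 282 kernels x 200 wavelengths; the haploid class
# is shifted in one absorbance band (a stand-in for the pigment signal).
n = 282
X = rng.normal(size=(n, 200))
y = np.repeat([0, 1], n // 2)
X[y == 1, 50:70] += 1.0

# Normalize, compress to a few components, then classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=9).mean()
print(acc)  # high on this cleanly separated synthetic set
```

The study's leave-one-genotype-out scheme is stricter than the stratified 9-fold split used here; with real spectra a grouped split per genotype would be the faithful evaluation.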

  12. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-earth environment that have been found increasingly important in affecting human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the wide use of CCD cameras in the 1990s, 35-mm films were the major media for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which is almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically: Optical Character Recognition (OCR), Classification Tree, and TensorFlow. The latter two are machine learning approaches that are currently very popular in the pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.

  13. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    An important issue for agricultural planning purposes is accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that the M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
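    The four validation metrics used in this comparison are easy to state concretely. A minimal sketch of their textbook definitions (with RRSE taken relative to the mean predictor, and toy numbers standing in for real yields):

```python
import numpy as np

def rmse(y, p):
    """Root mean square error."""
    return np.sqrt(np.mean((y - p) ** 2))

def rrse(y, p):
    """Root relative squared error, relative to predicting the mean."""
    return np.sqrt(np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2))

def mae(y, p):
    """Mean absolute error."""
    return np.mean(np.abs(y - p))

def corr(y, p):
    """Pearson correlation factor R."""
    return np.corrcoef(y, p)[0, 1]

y = np.array([3.0, 4.0, 5.0, 6.0])   # observed yields (toy values)
p = np.array([2.8, 4.3, 5.1, 5.6])   # model predictions
print(rmse(y, p), rrse(y, p), mae(y, p), corr(y, p))
```

RRSE below 100% means the model beats the naive mean predictor, which is why values near 80% in the study still indicate modest but real skill.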

  14. Comparison of Machine Learning methods for incipient motion in gravel bed rivers

    Science.gov (United States)

    Valyrakis, Manousos

    2013-04-01

    Soil erosion and sediment transport in natural gravel bed streams are important processes which affect both the morphology and the ecology of earth's surface. For gravel bed rivers at near incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods implemented for short-term prediction of entrainment instances of individual grains exposed in fully developed near-boundary turbulent flows. Results obtained by network architectures of variable complexity based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity, as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which synchronously acquire data for the instantaneous flow and the particle response, respectively. The former is used to record the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into training and validation subsets used to perform the corresponding learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events. 
Such a tool may find application in management and regulation of stream flows downstream of dams for stream

  15. Machine Learning and Radiology

    Science.gov (United States)

    Wang, Shijun; Summers, Ronald M.

    2012-01-01

    In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077

  16. Preliminary Study on Machining Condition Monitoring System Using 3-Channel Force Sensor Analyzed by I-kaz Multilevel Method

    Directory of Open Access Journals (Sweden)

    Z. Karim

    2016-08-01

    Cutting tool wear is one of the major problems affecting the finished product in terms of surface finish quality, dimensional precision and the cost of defects. This paper discusses a preliminary study on a machining condition monitoring system using force data captured with a 3-channel force sensor. The data were analyzed by the I-kaz multilevel method to monitor flank wear progression during machining. The flank wear of the cutting insert was measured using a Moticom magnifier under two different operational conditions in a turning process. A 3-channel Kistler force sensor was assembled to hold the tool holder and measure the force on the cutting tool in the tangential, radial and feed directions during the machining process. The signals were transmitted to the data acquisition equipment, and finally to the computer system. The I-kaz multilevel method was used to identify and characterize the changes in the signals from the sensors under the two different experimental setups. The values of the I-kaz multilevel coefficients for all channels are strongly correlated with the cutting tool wear condition. This preliminary study can be further developed to efficiently monitor and predict flank wear level, which can be used in the real machining industry.

  17. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods.

    Science.gov (United States)

    Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira

    2016-04-01

    This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS) to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADS categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84, from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with an AUC of 0.83. Lesion margin and orientation were optimum features common to all of the different machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions.
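    A "bottom-up" feature selection of the kind described is typically a greedy forward search scored by cross-validation. The sketch below is a generic illustration of that procedure on synthetic features (the data, classifier choice, and stopping rule are my assumptions, not the paper's exact protocol).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 283
X = rng.normal(size=(n, 6))  # stand-ins for computable BI-RADS features
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 0.5, n) > 0).astype(int)

def forward_select(X, y, model):
    """Greedy 'bottom-up' selection: repeatedly add the single feature
    that most improves cross-validated accuracy; stop when none does."""
    chosen, best = [], 0.0
    while True:
        scores = {j: cross_val_score(model, X[:, chosen + [j]], y, cv=10).mean()
                  for j in range(X.shape[1]) if j not in chosen}
        if not scores:
            return chosen, best
        j, s = max(scores.items(), key=lambda kv: kv[1])
        if s <= best:
            return chosen, best
        chosen.append(j)
        best = s

feats, acc = forward_select(X, y, SVC(kernel="linear"))
print(feats, round(acc, 3))  # the two informative features are selected
```

On real lesion features this search would surface margin and orientation first, mirroring the paper's finding that those two features were optimal across classifiers.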

  18. Forecasting Urban Water Demand via Machine Learning Methods Coupled with a Bootstrap Rank-Ordered Conditional Mutual Information Input Variable Selection Method

    Science.gov (United States)

    Adamowski, J. F.; Quilty, J.; Khalil, B.; Rathinasamy, M.

    2014-12-01

    This paper explores forecasting short-term urban water demand (UWD) (using only historical records) through a variety of machine learning techniques coupled with a novel input variable selection (IVS) procedure. The proposed IVS technique, termed bootstrap rank-ordered conditional mutual information for real-valued signals (brCMIr), is multivariate, nonlinear, nonparametric, and probabilistic. The brCMIr method was tested in a case study using water demand time series for two urban water supply system pressure zones in Ottawa, Canada, to select the most important historical records for use with each machine learning technique, in order to generate forecasts of average and peak UWD for the respective pressure zones at lead times of 1, 3, and 7 days ahead. All lead time forecasts are computed using Artificial Neural Networks (ANN) as the base model, and are compared with Least Squares Support Vector Regression (LSSVR), as well as a novel machine learning method for UWD forecasting: the Extreme Learning Machine (ELM). Results from one-way analysis of variance (ANOVA) and Tukey Honest Significant Difference (HSD) tests indicate that the LSSVR and ELM models are the best machine learning techniques to pair with brCMIr. However, ELM has significant computational advantages over LSSVR (and ANN) and provides a new and promising technique to explore in UWD forecasting.
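    The computational advantage of the ELM mentioned above comes from its structure: the hidden layer is random and fixed, so training reduces to one linear solve. A minimal sketch, with a toy periodic "demand" series and lagged inputs standing in for the brCMIr-selected records (all names and settings are illustrative):

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: a random, fixed tanh hidden
    layer; only the output weights are trained, via one ridge solve."""
    def __init__(self, n_hidden=50, ridge=1e-4, seed=0):
        self.n_hidden, self.ridge, self.seed = n_hidden, ridge, seed

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._h(X)
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)  # the only trained weights
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# Toy demand series with weekly and monthly cycles; forecast from lags.
t = np.arange(400, dtype=float)
demand = np.sin(2 * np.pi * t / 7) + 0.5 * np.sin(2 * np.pi * t / 30)
X = np.column_stack([demand[i:i - 7] for i in range(7)])  # lags t-7..t-1
y = demand[7:]
model = ELM().fit(X[:300], y[:300])
rmse = np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2))
print(rmse)  # well below the signal amplitude (~1)
```

Because no iterative backpropagation is needed, training cost is one matrix factorization, which is the advantage the paper reports over LSSVR and ANN.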

  19. Positioning method of a cylindrical cutter for ruled surface machining based on minimizing one-sided Hausdorff distance

    Institute of Scientific and Technical Information of China (English)

    Cao Lixin; Dong Lei

    2015-01-01

    Motivated by the definition of the machining errors induced by tool path planning methods, a mapping curve of the tool axis of a cylindrical cutter is constructed on the tool surface. The mapping curve is a typical one that can be used to express the closeness between the tool surface and the surface to be machined. A novel tool path planning method is proposed for flank or plunge milling ruled surfaces based on the minimization of the one-sided Hausdorff distance (HD) from the mapping curve to the surface to be machined. It is a nonlinear optimization problem in best uniform approximation (BUA) or Chebyshev sense. A mathematical programming model for computing the minimum one-sided HD is proposed. The linearization method of the programming model is provided and the final optimal solutions are obtained by simplex method. The effectiveness of the proposed BUA method is verified by two numerical examples and compared with the least squares (LS) and double point offset (DPO) methods. The variation in tool orientation induced by the optimization of the tool positions is also evaluated.
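    The objective minimized in this record, the one-sided (directed) Hausdorff distance from the mapping curve to the surface, has a direct discrete form. A small sketch on sampled point sets (the sample points are illustrative, not from the paper's examples):

```python
import numpy as np

def one_sided_hausdorff(A, B):
    """One-sided (directed) Hausdorff distance from point set A to B:
    max over a in A of (min over b in B of ||a - b||). Here A samples
    the tool-axis mapping curve and B samples the surface to machine."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

# Mapping-curve samples hovering above a straight surface strip.
curve = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [2.0, 0.0, 0.1]])
surface = np.array([[x, 0.0, 0.0] for x in np.linspace(0, 2, 21)])
print(one_sided_hausdorff(curve, surface))  # 0.2, the worst-case gap
```

Minimizing this maximum deviation over tool positions is what makes the formulation a Chebyshev (best uniform approximation) problem rather than a least-squares one.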

  20. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care

  1. Methods, systems and apparatus for controlling operation of two alternating current (AC) machines

    Science.gov (United States)

    Gallegos-Lopez, Gabriel; Nagashima, James M.; Perisic, Milun; Hiti, Silva

    2012-02-14

    A system is provided for controlling two AC machines. The system comprises a DC input voltage source that provides a DC input voltage, a voltage boost command control module (VBCCM), a five-phase PWM inverter module coupled to the two AC machines, and a boost converter coupled to the inverter module and the DC input voltage source. The boost converter is designed to supply a new DC input voltage to the inverter module having a value that is greater than or equal to a value of the DC input voltage. The VBCCM generates a boost command signal (BCS) based on modulation indexes from the two AC machines. The BCS controls the boost converter such that the boost converter generates the new DC input voltage in response to the BCS. When the two AC machines require additional voltage that exceeds the DC input voltage required to meet a combined target mechanical power required by the two AC machines, the BCS controls the boost converter to drive the new DC input voltage generated by the boost converter to a value greater than the DC input voltage.

  2. Multipolar electrostatics based on the Kriging machine learning method: an application to serine.

    Science.gov (United States)

    Yuan, Yongna; Mills, Matthew J L; Popelier, Paul L A

    2014-04-01

    A multipolar, polarizable electrostatic method for future use in a novel force field is described. Quantum Chemical Topology (QCT) is used to partition the electron density of a chemical system into atoms, then the machine learning method Kriging is used to build models that relate the multipole moments of the atoms to the positions of their surrounding nuclei. The pilot system serine is used to study both the influence of the level of theory and the set of data generator methods used. The latter consists of: (i) sampling of protein structures deposited in the Protein Data Bank (PDB), or (ii) normal mode distortion along either (a) Cartesian coordinates, or (b) redundant internal coordinates. Wavefunctions for the sampled geometries were obtained at the HF/6-31G(d,p), B3LYP/apc-1, and MP2/cc-pVDZ levels of theory, prior to calculation of the atomic multipole moments by volume integration. The average absolute error (over an independent test set of conformations) in the total atom-atom electrostatic interaction energy of serine, using Kriging models built with the three data generator methods is 11.3 kJ mol⁻¹ (PDB), 8.2 kJ mol⁻¹ (Cartesian distortion), and 10.1 kJ mol⁻¹ (redundant internal distortion) at the HF/6-31G(d,p) level. At the B3LYP/apc-1 level, the respective errors are 7.7 kJ mol⁻¹, 6.7 kJ mol⁻¹, and 4.9 kJ mol⁻¹, while at the MP2/cc-pVDZ level they are 6.5 kJ mol⁻¹, 5.3 kJ mol⁻¹, and 4.0 kJ mol⁻¹. The ranges of geometries generated by the redundant internal coordinate distortion and by extraction from the PDB are much wider than the range generated by Cartesian distortion. The atomic multipole moment and electrostatic interaction energy predictions for the B3LYP/apc-1 and MP2/cc-pVDZ levels are similar, and both are better than the corresponding predictions at the HF/6-31G(d,p) level.
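A minimal one-dimensional Kriging (Gaussian-process regression) sketch illustrates the kind of model described above: an atomic property learned as a smooth function of a geometric feature. The RBF kernel, length scale, and training data below are illustrative assumptions, not the paper's actual setup.

```python
import math

LENGTH = 0.1  # kernel length scale (assumed, not from the paper)

def rbf(x1, x2, length=LENGTH):
    """Squared-exponential covariance between two feature values."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Training data: a geometric feature (e.g. a bond length, in angstrom)
# mapped to an atomic property (e.g. one multipole moment, arbitrary units).
xs = [0.9, 1.0, 1.1, 1.3]
ys = [0.42, 0.40, 0.37, 0.30]

jitter = 1e-10  # small diagonal term for numerical stability
K = [[rbf(a, c) + (jitter if i == j else 0.0) for j, c in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, ys)

def predict(x):
    """Kriging mean prediction at a new feature value."""
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

print(predict(1.0))   # interpolates the training value closely
print(predict(1.05))  # smooth estimate between training points
```

Noise-free Kriging interpolates its training points exactly (up to the jitter), which is why the method's accuracy is assessed on an independent test set of conformations, as in the abstract.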

  3. ENVELOPING THEORY BASED METHOD FOR THE DETERMINATION OF PATH INTERVAL AND TOOL PATH OPTIMIZATION FOR SURFACE MACHINING

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

An enveloping theory based method for the determination of path interval in three-axis NC machining of free-form surfaces is presented, together with a practical algorithm and measures for improving its computational efficiency. Not only can the given algorithm be used for ball-end, flat-end, torus and drum cutters, but the proposed method can also be extended to arbitrary milling cutters. Thus, the problem of rigorously calculating the path interval in three-axis NC machining of free-form surfaces with non-ball-end cutters is resolved effectively. On this basis, the factors that affect the path interval are analyzed, and methods for optimizing the tool path are explored.
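For context, the classical baseline case that the enveloping-theory method generalizes is the flat-surface relation between path interval (step-over) and scallop height for a ball-end cutter. The closed form below is only that simple baseline, sketched with assumed numeric values; curved surfaces and non-ball-end cutters need the more general treatment described in the abstract.

```python
import math

def path_interval_ball_end(radius, scallop_height):
    """Step-over g for a ball-end cutter of radius R on a flat surface,
    leaving at most scallop height h: g = 2*sqrt(h*(2R - h))."""
    h, R = scallop_height, radius
    if not 0 < h < R:
        raise ValueError("scallop height must satisfy 0 < h < R")
    return 2.0 * math.sqrt(h * (2.0 * R - h))

# Example: 10 mm diameter ball-end cutter, 10 um allowed scallop height.
g = path_interval_ball_end(radius=5.0, scallop_height=0.01)  # mm
print(round(g, 4))  # ~0.6321 mm between adjacent tool paths
```

Tightening the allowed scallop height shrinks the step-over roughly with the square root of h, which is why accurate path-interval calculation matters for machining time.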

  4. SU-D-204-06: Integration of Machine Learning and Bioinformatics Methods to Analyze Genome-Wide Association Study Data for Rectal Bleeding and Erectile Dysfunction Following Radiotherapy in Prostate Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Oh, J; Deasy, J [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Kerns, S [University of Rochester Medical Center, Rochester, NY (United States); Ostrer, H [Albert Einstein College of Medicine, Bronx, NY (United States); Rosenstein, B [Mount Sinai School of Medicine, New York, NY (United States)

    2016-06-15

Purpose: We investigated whether integration of machine learning and bioinformatics techniques on genome-wide association study (GWAS) data can improve the performance of predictive models in predicting the risk of developing radiation-induced late rectal bleeding and erectile dysfunction in prostate cancer patients. Methods: We analyzed a GWAS dataset generated from 385 prostate cancer patients treated with radiotherapy. Using genotype information from these patients, we designed a machine learning-based predictive model of late radiation-induced toxicities: rectal bleeding and erectile dysfunction. The model building process was performed using 2/3 of the samples (training) and the predictive model was tested with the remaining 1/3 of the samples (validation). To identify important single nucleotide polymorphisms (SNPs), we computed the SNP importance score resulting from our random forest regression model. We performed gene ontology (GO) enrichment analysis for genes near the important SNPs. Results: After univariate analysis on the training dataset, we filtered out SNPs with p>0.001, leaving 749 and 367 SNPs for use in the model building process for rectal bleeding and erectile dysfunction, respectively. On the validation dataset, our random forest regression model achieved an area under the curve (AUC) of 0.70 and 0.62 for rectal bleeding and erectile dysfunction, respectively. We performed GO enrichment analysis for the top 25%, 50%, 75%, and 100% of the SNPs selected in the univariate analysis. When we used the top 50% of SNPs, more plausible biological processes were obtained for both toxicities. An additional test with the top 50% of SNPs improved predictive power, with AUC=0.71 and 0.65 for rectal bleeding and erectile dysfunction. A better performance was achieved with AUC=0.67 when age and androgen deprivation therapy were added to the model for erectile dysfunction. Conclusion: Our approach that combines machine learning and bioinformatics techniques
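The two-stage pipeline described above, a univariate filter over SNPs followed by a model evaluated by AUC, can be sketched in miniature. Real GWAS analyses use proper association tests and random forests; here a mean-difference filter and an allele-count risk score stand in, and all genotype data are made up.

```python
# Illustrative stand-in for a univariate SNP filter + scored predictor + AUC.

labels = [1, 1, 0, 0]                # 1 = toxicity, 0 = no toxicity
genotypes = {                        # minor-allele counts per patient (invented)
    "rs_a": [2, 1, 0, 0],
    "rs_b": [1, 1, 1, 1],            # uninformative, should be filtered out
    "rs_c": [1, 1, 0, 0],
}

def mean(v):
    return sum(v) / len(v)

def univariate_gap(alleles):
    """Case/control mean-difference, a toy stand-in for a p-value filter."""
    cases = [a for a, y in zip(alleles, labels) if y == 1]
    ctrls = [a for a, y in zip(alleles, labels) if y == 0]
    return abs(mean(cases) - mean(ctrls))

selected = [snp for snp, alleles in genotypes.items() if univariate_gap(alleles) > 0.5]

# Risk score per patient: total minor alleles over the selected SNPs.
scores = [sum(genotypes[s][i] for s in selected) for i in range(len(labels))]

def auc(scores, labels):
    """AUC via the Mann-Whitney pairwise comparison of cases vs controls."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(sorted(selected), auc(scores, labels))
```

On this deliberately separable toy data the filter keeps the two informative SNPs and the score reaches AUC 1.0; on real data, as the abstract shows, realistic AUCs are far lower (0.62-0.71).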

  5. Reinforcement learning based sensing policy optimization for energy efficient cognitive radio networks

    CERN Document Server

    Oksanen, Jan; Koivunen, Visa

    2011-01-01

This paper introduces a machine learning based collaborative multiband spectrum sensing policy for cognitive radios. The proposed sensing policy guides secondary users to focus the search for unused radio spectrum on those frequencies that persistently provide them a high data rate. The proposed policy is based on machine learning, which makes it adaptive to the temporally and spatially varying radio spectrum. Furthermore, there is no need for dynamic modeling of the primary activity, since it is implicitly learned over time. Energy efficiency is achieved by minimizing the number of assigned sensors per subband under a constraint on the miss detection probability. It is important to control missed detections because they cause collisions with primary transmissions and lead to retransmissions for both the primary and secondary users. The minimization of the number of active sensors is formulated as a binary integer programming problem. Simulations show that the proposed machine learning based sensing policy ...
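The core learning idea above, secondary users discovering which subband persistently offers free spectrum without an explicit model of primary activity, can be sketched with a simple bandit learner. An epsilon-greedy policy stands in for the paper's sensing policy, and the per-subband idle probabilities are invented for illustration.

```python
import random

random.seed(7)
IDLE_PROB = [0.2, 0.9, 0.5]   # true (unknown to the learner) idle rates per subband
estimates = [0.0] * 3          # learned idle-rate estimates
counts = [0] * 3
EPS = 0.1                      # exploration probability

for _ in range(3000):
    if random.random() < EPS:
        band = random.randrange(3)                        # explore a random subband
    else:
        band = max(range(3), key=lambda b: estimates[b])  # exploit the best estimate
    idle = 1.0 if random.random() < IDLE_PROB[band] else 0.0  # sensing outcome
    counts[band] += 1
    estimates[band] += (idle - estimates[band]) / counts[band]  # running mean

best = max(range(3), key=lambda b: estimates[b])
print(best, [round(e, 2) for e in estimates])
```

After enough sensing rounds the estimates track the true idle rates, so the policy concentrates sensing on the most persistently free subband, with no explicit primary-user model required.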

  6. Extensions and applications of ensemble-of-trees methods in machine learning

    Science.gov (United States)

    Bleich, Justin

Ensemble-of-trees algorithms have emerged at the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First, we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of

  7. Identifying essential genes in bacterial metabolic networks with machine learning methods

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2010-05-01

Full Text Available Abstract Background Identifying essential genes in bacteria supports the identification of potential drug targets and an understanding of the minimal requirements for a synthetic cell. However, experimentally assaying the essentiality of genes is resource intensive and not feasible for all bacterial organisms, in particular if they are infective. Results We developed a machine learning technique to identify essential genes, using experimental data from genome-wide knock-out screens of one bacterial organism to infer the essential genes of another, related bacterial organism. We used a broad variety of topological features, sequence characteristics and co-expression properties potentially associated with essentiality, such as flux deviations, centrality, codon frequencies of the sequences, co-regulation and phyletic retention. An organism-wise cross-validation on bacterial species yielded reliable results with good accuracies (area under the receiver-operator curve of 75%-81%). Finally, the method was applied to drug target prediction for Salmonella typhimurium. We compared our predictions to the viability of experimental knock-outs of S. typhimurium and identified 35 enzymes that are highly relevant as potential drug targets. Specifically, we detected promising drug targets in the non-mevalonate pathway. Conclusions Using elaborated features characterizing network topology, sequence information and microarray data makes it possible to predict essential genes from a bacterial reference organism in a related query organism without any knowledge about the essentiality of the query organism's genes. In general, such a method is beneficial for inferring drug targets when experimental data from genome-wide knockout screens are not available for the investigated organism.

  8. Investigating Effect of Machining Parameters of CNC Milling on Surface Finish by Taguchi Method

    Directory of Open Access Journals (Sweden)

    Amit Joshi

    2012-08-01

Full Text Available CNC end milling is a unique adaptation of the conventional milling process which uses an end mill tool for the machining process. CNC vertical end milling is a widely accepted material removal process used to manufacture components with complicated shapes and profiles. During the end milling process, the material is removed by the end mill cutter. The effects of various end milling parameters, such as spindle speed, depth of cut and feed rate, have been investigated to reveal their impact on surface finish using the Taguchi methodology. The experimental plan is based on a standard orthogonal array. The results of the analysis of variance (ANOVA) indicate that the feed rate is the most influential factor for modeling surface finish. The S/N ratio graph indicates the optimal settings of the machining parameters which give the optimum value of surface finish. The optimal set of process parameters has also been predicted to maximize the surface finish.
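The S/N ratios behind analyses like the one above are simple to compute. Surface finish is usually optimized by treating roughness Ra as a "smaller-the-better" response; the trial data below are invented for illustration.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N = -10*log10(mean of y^2); higher is better (less roughness)."""
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

def sn_larger_the_better(values):
    """Taguchi S/N = -10*log10(mean of 1/y^2) for responses to be maximized."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

# Replicated roughness measurements (um) at two feed-rate levels (invented):
low_feed = [1.02, 0.98, 1.05]
high_feed = [2.10, 2.25, 1.95]

print(round(sn_smaller_the_better(low_feed), 2))
print(round(sn_smaller_the_better(high_feed), 2))
# The level with the higher S/N ratio (low feed here) is preferred.
```

Averaging such S/N values per factor level across the orthogonal-array trials yields the S/N graphs the abstract refers to, from which the optimal parameter setting is read off.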

  9. Investigating Effect of Machining Parameters of CNC Milling on Surface Finish by Taguchi Method

    Directory of Open Access Journals (Sweden)

    Amit Joshi

    2013-08-01

Full Text Available CNC end milling is a unique adaptation of the conventional milling process which uses an end mill tool for the machining process. CNC vertical end milling is a widely accepted material removal process used to manufacture components with complicated shapes and profiles. During the end milling process, the material is removed by the end mill cutter. The effects of various end milling parameters, such as spindle speed, depth of cut and feed rate, have been investigated to reveal their impact on surface finish using the Taguchi methodology. The experimental plan is based on a standard orthogonal array. The results of the analysis of variance (ANOVA) indicate that the feed rate is the most influential factor for modelling surface finish. The S/N ratio graph indicates the optimal settings of the machining parameters which give the optimum value of surface finish. The optimal set of process parameters has also been predicted to maximize the surface finish.

  10. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

Full Text Available Feature-based NC machining, which requires a high-quality 3D CAD model, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to check for errors manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to the machining coordinate systems. Then, the CAD model is partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  11. A decoupling method for turbo-machines aero-acoustic simulations; Une methode de decouplage pour des simulations aeroacoustiques de turbomachines

    Energy Technology Data Exchange (ETDEWEB)

    Couaillier, V.; Rahier, G.; Fotso, P.; Greffeuille, G.

    2002-07-01

A method for decoupling the flow calculation upstream of the compressor or fan from the full turbo-machine calculation is proposed for predicting acoustic propagation in the air inlet. The interest of this approach is the ability to split the calculations in the nonlinear domain of the flow. The decoupling technique is described in detail. Its validation for steady and unsteady flows is presented, with an application example to a turbo-machine. (A.L.B.)

  12. Effect of dielectric fluid with surfactant and graphite powder on Electrical Discharge Machining of titanium alloy using Taguchi method

    Directory of Open Access Journals (Sweden)

    Murahari Kolli

    2015-12-01

Full Text Available In this paper, the Taguchi method was employed to optimize the surfactant and graphite powder concentration in the dielectric fluid for the machining of Ti-6Al-4V using Electrical Discharge Machining (EDM). The process parameters, such as discharge current, surfactant concentration and powder concentration, were changed to explore their effects on Material Removal Rate (MRR), Surface Roughness (SR), Tool Wear Rate (TWR) and Recast Layer Thickness (RLT). Detailed analysis of the structural features of the machined surface was carried out using a Scanning Electron Microscope (SEM) to observe the influence of the surfactant and graphite powder on the machining process. It was observed from the experimental results that the graphite powder and surfactant added to the dielectric fluid significantly improved the MRR and reduced the SR, TWR and RLT under various conditions. Analysis of Variance (ANOVA) and F-tests of the experimental data related to the important EDM process parameters revealed that the discharge current and surfactant concentration have a higher percentage contribution to the MRR and TWR, whereas the SR and RLT were found to be affected most by the discharge current and graphite powder concentration.
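The "percentage contribution" figures that such ANOVA/F-test analyses report are simply each factor's sum of squares as a share of the total. The sums of squares below are invented placeholders, not the paper's values.

```python
# Sketch: percentage contributions read off an ANOVA table (invented numbers).

sums_of_squares = {
    "discharge current": 48.0,
    "surfactant concentration": 22.0,
    "powder concentration": 18.0,
    "error": 12.0,
}

total = sum(sums_of_squares.values())
contribution = {k: 100.0 * ss / total for k, ss in sums_of_squares.items()}

# Print factors from largest to smallest contribution.
for factor, pct in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {pct:.1f}%")
```

A large error-term share would indicate that unmodeled factors or noise dominate, so the factor rankings should be read alongside the F-test significance.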

  13. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    Science.gov (United States)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-03-01

To address the poor robustness of thermal error compensation models for CNC machine tools, the mechanism for improving model robustness is studied using the Leaderway-V450 machining center as the test object. Analysis of actual spindle air-cutting experimental data from the Leaderway-V450 machine shows that the temperature-sensitive points used for modeling are volatile, and this volatility directly leads to large changes in the degree of collinearity among the modeling variables. Thus, the forecasting accuracy of a multivariate regression model is severely affected, and its forecasting robustness becomes poor. To overcome this effect, a method of establishing thermal error models with a single temperature variable under the disturbance of temperature-sensitive point volatility is put forward. Using actual thermal error data measured in different seasons, it is shown that the single-temperature-variable model can reduce the loss of forecasting accuracy resulting from the volatility of the temperature-sensitive points; in particular, for prediction on cross-quarter data, the improvement in forecasting accuracy is about 5 μm or more. The goal of improving the robustness of thermal error models is thus achieved, which can provide a reference for selecting the modeling variable in thermal error compensation applications for CNC machine tools.
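The single-temperature-variable model the study argues for reduces, in its simplest form, to ordinary least squares on one sensor reading, avoiding the collinearity among multiple volatile temperature-sensitive points. The temperature and error data below are synthetic.

```python
# Minimal sketch: thermal error modeled from a single temperature variable.

temps = [18.0, 20.0, 23.0, 26.0, 30.0]     # one temperature sensor, deg C (synthetic)
errors = [11.0, 12.0, 13.5, 15.0, 17.0]    # measured spindle thermal error, um

n = len(temps)
mx = sum(temps) / n
my = sum(errors) / n
slope = sum((x - mx) * (y - my) for x, y in zip(temps, errors)) / \
        sum((x - mx) ** 2 for x in temps)
intercept = my - slope * mx

def predict(temp):
    """Compensation model: predicted thermal error at a given temperature."""
    return intercept + slope * temp

print(round(slope, 3), round(intercept, 3))
print(round(predict(25.0), 2))
```

With a single predictor there is no collinearity to destabilize the coefficients, which is the robustness argument the abstract makes against volatile multivariate fits.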

  14. Applying machine learning to identify autistic adults using imitation: An exploratory study.

    Science.gov (United States)

    Li, Baihua; Sharma, Arjun; Meng, James; Purushwalkam, Senthil; Gowen, Emma

    2017-01-01

Autism spectrum condition (ASC) is primarily diagnosed by behavioural symptoms including social, sensory and motor aspects. Although stereotyped, repetitive motor movements are considered during diagnosis, quantitative measures that identify kinematic characteristics in the movement patterns of autistic individuals are poorly studied, preventing advances in understanding the aetiology of motor impairment, or whether a wider range of motor characteristics could be used for diagnosis. The aim of this study was to investigate whether data-driven machine learning-based methods could be used to address some fundamental problems with regard to identifying discriminative test conditions and kinematic parameters to classify between ASC and neurotypical controls. Data were based on a previous task where 16 ASC participants and 14 age- and IQ-matched controls observed then imitated a series of hand movements. 40 kinematic parameters extracted from eight imitation conditions were analysed using machine learning-based methods. Two optimal imitation conditions and nine most significant kinematic parameters were identified and compared with some standard attribute evaluators. To our knowledge, this is the first attempt to apply machine learning to kinematic movement parameters measured during imitation of hand movements to investigate the identification of ASC. Although based on a small sample, the work demonstrates the feasibility of applying machine learning methods to analyse high-dimensional data and suggests the potential of machine learning for identifying kinematic biomarkers that could contribute to the diagnostic classification of autism.

  15. Cost-Sensitive Support Vector Machine Using Randomized Dual Coordinate Descent Method for Big Class-Imbalanced Data Classification

    Directory of Open Access Journals (Sweden)

    Mingzhu Tang

    2014-01-01

Full Text Available The cost-sensitive support vector machine is one of the most popular tools for dealing with class-imbalanced problems such as fault diagnosis. However, such data appear with a huge number of examples as well as features. Aiming at the class-imbalanced problem on big data, a cost-sensitive support vector machine using a randomized dual coordinate descent method (CSVM-RDCD) is proposed in this paper. The solution of the subproblem at each iteration is derived in closed form, and the computational cost is decreased through an accelerating strategy and cheap computation. The four constraint conditions of CSVM-RDCD are derived. Experimental results illustrate that the proposed method increases the recognition rate of the positive class and reduces the average misclassification cost on real big class-imbalanced data.
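How per-class misclassification costs enter the SVM objective can be shown with a toy class-weighted linear SVM trained by subgradient descent on the hinge loss. This is a deliberately simple stand-in for the paper's randomized dual coordinate descent solver; the data, costs and step sizes are invented.

```python
# Toy cost-sensitive linear SVM: the minority (positive) class gets a higher
# misclassification cost, so its hinge violations produce larger updates.

X = [(2.0, 2.0), (3.0, 3.0),                          # minority (positive) class
     (-1.0, -1.0), (-2.0, -2.0), (-1.0, -2.0), (-2.0, -1.0)]
y = [1, 1, -1, -1, -1, -1]
COST = {1: 2.0, -1: 1.0}     # higher cost for misclassifying the minority class
LAM, LR, EPOCHS = 0.01, 0.05, 500

w = [0.0, 0.0]
b = 0.0
for _ in range(EPOCHS):
    for (x1, x2), yi in zip(X, y):
        margin = yi * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1.0:                   # hinge violation: cost-weighted step
            w[0] += LR * (COST[yi] * yi * x1 - LAM * w[0])
            w[1] += LR * (COST[yi] * yi * x2 - LAM * w[1])
            b += LR * COST[yi] * yi
        else:                              # no violation: regularization shrinkage only
            w[0] -= LR * LAM * w[0]
            w[1] -= LR * LAM * w[1]

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in X]
print(preds == y, [round(v, 2) for v in w], round(b, 2))
```

Scaling the positive-class cost upward shifts the decision boundary toward the majority class, trading a few extra majority errors for better minority recognition, which is the effect the abstract reports for CSVM-RDCD on imbalanced data.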

  16. The Librarian Leading the Machine: A Reassessment of Library Instruction Methods

    Science.gov (United States)

    Greer, Katie; Hess, Amanda Nichols; Kraemer, Elizabeth W.

    2016-01-01

    This article builds on the 2007 College and Research Libraries article, "The Librarian, the Machine, or a Little of Both." Since that time, Oakland University Libraries implemented changes to its instruction program that reflect larger trends in teaching and assessment throughout the profession; following these revisions, librarians…

  17. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  18. Determination of optimal parameters in drilling composite materials to minimize the machining temperature using the Taguchi method

    OpenAIRE

    Lopes, Ana C.; Fernandes, Maria G.A.; Ribeiro, J. E.; Fonseca, E.M.M.

    2016-01-01

Dental implants are used to replace the natural dental root. Fixing a dental implant in the maxillary bone requires a prior drilling operation. This machining operation increases the temperature in the drilled region, which can reach values above 47°C, at which osseous necrosis can occur [1]. The main goal of this work is to implement an optimization method to define the optimal drilling parameters that cou...

  19. A Simple ERP Method for Quantitative Analysis of Cognitive Workload in Myoelectric Prosthesis Control and Human-Machine Interaction

    OpenAIRE

    Sean Deeny; Caitlin Chicoine; Levi Hargrove; Todd Parrish; Arun Jayaraman

    2014-01-01

    Common goals in the development of human-machine interface (HMI) technology are to reduce cognitive workload and increase function. However, objective and quantitative outcome measures assessing cognitive workload have not been standardized for HMI research. The present study examines the efficacy of a simple event-related potential (ERP) measure of cortical effort during myoelectric control of a virtual limb for use as an outcome tool. Participants trained and tested on two methods of contro...

20. A robust morphological classification of high-redshift galaxies using support vector machines on seeing limited images. I. Method description

    CERN Document Server

    Huertas-Company, M; Tasca, L; Soucail, G; Le Fèvre, O

    2007-01-01

We present a new non-parametric method to quantify morphologies of galaxies based on a particular family of learning machines called support vector machines. The method, which can be seen as a generalization of the classical CAS classification but with an unlimited number of dimensions and non-linear boundaries between decision regions, is fully automated and thus particularly well adapted to large cosmological surveys. The source code is available for download at http://www.lesia.obspm.fr/~huertas/galsvm.html To test the method, we use a seeing limited near-infrared ($K_s$ band, $2.16\mu m$) sample observed with WIRCam at CFHT at a median redshift of $z\sim0.8$. The machine is trained with a simulated sample built from a local visually classified sample from the SDSS chosen in the high-redshift sample's rest-frame (i band, $0.77\mu m$) and artificially redshifted to match the observing conditions. We use a 12-dimensional volume, including 5 morphological parameters and other characteristics of galaxies such as...

  1. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

Full Text Available A classification rule set, which refers to features and decision rules, is important for land cover classification. In GEOBIA, the selection of features and decision rules is usually based on an iterative trial-and-error approach; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, overcoming the iterative trial-and-error approach, while human knowledge addresses the insufficient use of prior knowledge in existing machine learning methods and improves the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule is built based on Random Forest and a CART decision tree; second, the initial rule is analyzed and validated based on human knowledge, using a statistical confidence interval to determine its threshold. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.
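The first step above, letting a learned tree propose a threshold rule that an analyst then validates, can be sketched in miniature. A one-level decision stump stands in for the Random Forest/CART stage, and the NDVI values and class names are invented.

```python
# Sketch: a decision stump proposes a threshold rule for one feature.

ndvi = [0.10, 0.20, 0.25, 0.60, 0.70, 0.80]
label = [0, 0, 0, 1, 1, 1]        # 0 = non-vegetation, 1 = vegetation (invented)

def stump_threshold(values, labels):
    """Pick the midpoint threshold minimizing training errors of the rule
    'value > t -> class 1'."""
    pts = sorted(zip(values, labels))
    best_t, best_err = None, len(labels) + 1
    for i in range(len(pts) - 1):
        t = (pts[i][0] + pts[i + 1][0]) / 2
        err = sum(1 for v, yv in pts if (v > t) != (yv == 1))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

t, err = stump_threshold(ndvi, label)
print(f"rule: NDVI > {t:.3f} -> vegetation (training errors: {err})")
```

In the paper's workflow a threshold proposed this way would then be validated and adjusted against a statistical confidence interval using human knowledge, rather than taken directly from the learner.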

  2. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    Science.gov (United States)

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.

  3. Analytical method for coupled transmission error of helical gear system with machining errors, assembly errors and tooth modifications

    Science.gov (United States)

    Lin, Tengjiao; He, Zeyin

    2017-07-01

We present a method for analyzing the transmission error of a helical gear system with errors. First, a finite element method is used to model the gear transmission system with machining errors, assembly errors and tooth modifications, and the static transmission error is obtained. Then, the bending-torsional-axial coupled dynamic model of the transmission system is established based on the lumped mass method, and the dynamic transmission error of the gear transmission system is calculated, providing error excitation data for the analysis and control of vibration and noise in the gear system.

4. Comparison of some Structural Analyses Methods used for the Test Pavement in the Danish Road Testing Machine

    DEFF Research Database (Denmark)

    Baltzer, S.; Zhang, W.; Macdonald, R.;

    1998-01-01

A flexible test pavement, instrumented to measure stresses and strains in the three primary axes within the upper 400 mm of the subgrade, has been constructed and load tested in the Danish Road Testing Machine (RTM). One objective of this research, which is part of the International Pavement Subgrade...... methods used for the RTM test pavement with data from FWD testing undertaken after the construction and loading programmes. Multilayer linear elastic forward and backcalculation methods, a finite element program and MS Excel spreadsheet based methods are compared.

  5. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    coverage and organ-at-risk sparing, and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrate the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload while sustaining plan quality.

  6. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) are measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and, if mutated, they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and that it compares favourably to other available methods. In particular, we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall of 56% and 65%, respectively. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been
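
    The core idea of this record, treating per-residue energy terms as classifier features, can be sketched in a few lines. The data below is synthetic and a standard RBF-kernel `SVC` stands in for the paper's transductive SVM (which scikit-learn does not provide); the feature weights and the ΔΔG threshold are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch: energy terms (vdW, solvation, H-bond, Coulomb) as
# input features of a hot spot classifier. Synthetic data; a plain SVC
# replaces the paper's Transductive Support Vector Machine.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))  # columns: vdW, solvation, H-bond, Coulomb
# Simulated DDG of binding; residues above a threshold are labelled "hot",
# mimicking the paper's DDG >= 2 kcal/mol criterion on real mutation data.
ddg = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 2] + 0.4 * X[:, 3]
ddg += rng.normal(scale=0.3, size=n)
y = (ddg >= 1.0).astype(int)

scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print(round(float(scores.mean()), 2))
```

    The point of the hybrid scheme is that the physics is encoded in the features while the classifier only learns how to weigh them.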

  7. Exposure assessment method for products containing carbon nanotubes inside a test chamber with a Taber abrasion machine

    Science.gov (United States)

    Matsui, Yasuto; Nagaya, Taiki; Kato, Nobuyuki; Ishibashi, Tomonori; Yoneda, Minoru

    2017-06-01

    Polymer/carbon nanotube (CNT) composites exhibit distinguished properties, but more quantitative risk assessments on CNTs are necessary as research and development advances. One method to assess the exact risk is to evaluate the characteristics of nanoparticles generated from CNT composites during sanding or Taber abrasion tests. Some researchers have applied loads to CNT composites using Taber machines and analysed the particles using aerosol-measuring instruments and electron microscopes. However, employing aerosol-measuring instruments is challenging due to the small amount of generated particles. Additionally, the presence of abundant background nanoparticles in testing environments creates issues in quantitative measurements. Our research strives to develop an examination method to measure even very small amounts of nanoparticles generated by Taber abrasion. In this study, a Taber abrasion machine is miniaturised so that it fits inside a small chamber. A high-efficiency particulate air filter is attached to the chamber to eliminate background nanoparticles. Then CNT composites are abraded with the miniaturised Taber abrasion machine inside the chamber and the generated particles are analysed.

  8. Programming Methods and Application for Numerical Control Machining

    Institute of Scientific and Technical Information of China (English)

    贾利晓; 黄广霞

    2015-01-01

    Programming is the basis of numerical control machining, and choosing a suitable programming method is fundamental to improving working efficiency. Three common programming methods are illustrated and compared in this paper. Manual programming is mainly suitable for machining parts with simple profiles, since the calculations involved are small and the programs are short. Automatic programming, with its powerful numerical processing, convenient error correction and self-checking functions, is mainly suitable for machining parts with complicated profiles and elaborate processes. Parametric programming simplifies manual programming and makes it easier to use.

  9. A novel local learning based approach with application to breast cancer diagnosis

    Science.gov (United States)

    Xu, Songhua; Tourassi, Georgia

    2012-03-01

    In this paper, we introduce a new local learning based approach and apply it for the well-studied problem of breast cancer diagnosis using BIRADS-based mammographic features. To learn from our clinical dataset the latent relationship between these features and the breast biopsy result, our method first dynamically partitions the whole sample population into multiple sub-population groups through stochastically searching the sample population clustering space. Each encountered clustering scheme in our online searching process is then used to create a certain sample population partition plan. For every resultant sub-population group identified according to a partition plan, our method then trains a dedicated local learner to capture the underlying data relationship. In our study, we adopt the linear logistic regression model as our local learning method's base learner. Such a choice is made both due to the well-understood linear nature of the problem, which is compellingly revealed by a rich body of prior studies, and the computational efficiency of linear logistic regression--the latter feature allows our local learning method to more effectively perform its search in the sample population clustering space. Using a database of 850 biopsy-proven cases, we compared the performance of our method with a large collection of publicly available state-of-the-art machine learning methods and successfully demonstrated its performance advantage with statistical significance.
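
    The local-learning idea above can be sketched compactly: partition the sample population and fit one logistic-regression base learner per sub-population. The sketch below uses a fixed KMeans clustering in place of the paper's stochastic search of the clustering space, and synthetic stand-ins for BIRADS-style features; both are assumptions for illustration only.

```python
# Sketch of local learning: KMeans partitions the population (stand-in for
# the paper's stochastic clustering search); each cluster gets its own
# logistic-regression base learner. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                   # BIRADS-like feature stand-ins
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic biopsy outcome

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
local = {}
for c in range(3):
    mask = km.labels_ == c
    if len(set(y[mask])) == 1:                  # degenerate cluster: constant rule
        local[c] = int(y[mask][0])
    else:
        local[c] = LogisticRegression().fit(X[mask], y[mask])

def predict(x):
    c = km.predict(x.reshape(1, -1))[0]         # route the case to its sub-population
    m = local[c]
    return m if isinstance(m, int) else int(m.predict(x.reshape(1, -1))[0])

acc = float(np.mean([predict(xi) == yi for xi, yi in zip(X, y)]))
print(round(acc, 2))
```

    The linear base learner keeps each local fit cheap, which is exactly what makes the search over partitions affordable.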

  10. Research on Design Method of Intelligent Vending Machine for Cupped Beverage

    Directory of Open Access Journals (Sweden)

    Xiaowei Jiang

    2014-09-01

    Full Text Available The purpose of this study is to design an intelligent vending machine for cupped beverages, specifically researching its humanized design, shape design, color design, and the main mechanism design, including the beverage powder transporting mechanism, the paper cup detaching mechanism, and the paper cup slide mechanism. The study elaborates that the design of the beverage powder transporting mechanism mainly involves the selection of the electromagnet and the determination of the electromagnet stroke, requiring that the stroke and the maximum load the electromagnet can bear be chosen rationally to ensure safe operation; the design of the paper cup detaching mechanism mainly includes selecting the electric motor and V-belt; and the design of the paper cup slide mechanism mainly presents the design of the slide structure. The design of the control modules of the intelligent vending machine for cupped beverages is then introduced, on the basis of which the conclusion is reached.

  11. System and method for smoothing a salient rotor in electrical machines

    Energy Technology Data Exchange (ETDEWEB)

    Raminosoa, Tsarafidy; Alexander, James Pellegrino; El-Refaie, Ayman Mohamed Fawzi; Torrey, David A.

    2016-12-13

    An electrical machine exhibiting reduced friction and windage losses is disclosed. The electrical machine includes a stator and a rotor assembly configured to rotate relative to the stator, wherein the rotor assembly comprises a rotor core including a plurality of salient rotor poles that are spaced apart from one another around an inner hub such that an interpolar gap is formed between each adjacent pair of salient rotor poles, with an opening being defined by the rotor core in each interpolar gap. Electrically non-conductive and non-magnetic inserts are positioned in the gaps formed between the salient rotor poles, with each of the inserts including a mating feature formed on an axially inner edge thereof that is configured to mate with a respective opening defined by the rotor core, so as to secure the insert to the rotor core against the centrifugal force experienced during rotation of the rotor assembly.

  12. Comparison between Genetic Algorithms and Particle Swarm Optimization Methods on Standard Test Functions and Machine Design

    DEFF Research Database (Denmark)

    Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika

    2013-01-01

    Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.

  14. Big Data Analysis Using Modern Statistical and Machine Learning Methods in Medicine

    OpenAIRE

    Yoo, Changwon; Ramirez, Luis; Liuzzi, Juan

    2014-01-01

    In this article we introduce modern statistical machine learning and bioinformatics approaches that have been used in learning statistical relationships from big data in medicine and behavioral science that typically include clinical, genomic (and proteomic) and environmental variables. Every year, data collected from biomedical and behavioral science is getting larger and more complicated. Thus, in medicine, we also need to be aware of this trend and understand the statistical tools that are...

  15. Support patient search on pathology reports with interactive online learning based data extraction

    Directory of Open Access Journals (Sweden)

    Shuai Zheng

    2015-01-01

    Full Text Available Background: Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and are not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. Methods: We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on the extracted structured data. Results: We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of
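
    The incremental loop described above (annotate, collect the user's corrections, refine the model) can be sketched with an out-of-core linear classifier. The token features and labels below are invented; IDEAL-X itself uses richer text features and controlled vocabularies.

```python
# Sketch of online learning from user corrections: each batch plays the role
# of one reviewed report, evaluated test-then-train so accuracy reflects the
# model *before* it sees the corrections. Features/labels are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier(random_state=2)
classes = np.array([0, 1])  # e.g. token is / is not part of a target field

accs = []
for report in range(20):                    # each loop = one reviewed report
    X = rng.normal(size=(10, 8))            # token feature vectors (invented)
    y = (X[:, 0] > 0).astype(int)           # the user's corrected annotations
    if report > 0:
        accs.append(clf.score(X, y))        # measure first...
    clf.partial_fit(X, y, classes=classes)  # ...then refine the model online

print(round(float(np.mean(accs)), 2))
```

    As in the paper's workflow, accuracy rises as reports are processed, so the human correction effort shrinks over time.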

  16. Estimating Corn Yield in the United States with Modis Evi and Machine Learning Methods

    Science.gov (United States)

    Kuwata, K.; Shibasaki, R.

    2016-06-01

    Satellite remote sensing is commonly used to monitor crop yield in wide areas. Because many parameters are necessary for crop yield estimation, modelling the relationships between parameters and crop yield is generally complicated. Several methodologies using machine learning have been proposed to solve this issue, but the accuracy of county-level estimation remains to be improved. In addition, estimating county-level crop yield across an entire country has not yet been achieved. In this study, we applied a deep neural network (DNN) to estimate corn yield. We evaluated the estimation accuracy of the DNN model by comparing it with other models trained by different machine learning algorithms. We also prepared two time-series datasets differing in duration and confirmed the feature extraction performance of models by inputting each dataset. As a result, the DNN estimated county-level corn yield for the entire area of the United States with a determination coefficient (R2) of 0.780 and a root mean square error (RMSE) of 18.2 bushels/acre. In addition, our results showed that estimation models that were trained by a neural network extracted features from the input data better than an existing machine learning algorithm.
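
    A comparison of the kind described, a small neural network against another regressor, scored by R² and RMSE, can be sketched as follows. The EVI time-series inputs and the yield relationship are simulated; the study's county-level MODIS data and deep network are not reproduced here.

```python
# Sketch: compare an MLP regressor against SVR on simulated EVI-to-yield
# data, reporting R^2 and RMSE as in the study. All numbers are toy values.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(400, 12))   # e.g. 12 EVI composites per season
y = 120 * X.mean(axis=1) + rng.normal(scale=3, size=400)  # yield, bushels/acre

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=3)
results = {}
for name, model in [
    ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16), solver="lbfgs",
                         max_iter=2000, random_state=3)),
    ("svr", SVR()),
]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    results[name] = (r2_score(yte, pred),
                     mean_squared_error(yte, pred) ** 0.5)  # RMSE
for name, (r2, rmse) in results.items():
    print(name, round(r2, 2), round(rmse, 1))
```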

  17. Classification of follicular lymphoma images: a holistic approach with symbol-based machine learning methods.

    Science.gov (United States)

    Zorman, Milan; Sánchez de la Rosa, José Luis; Dinevski, Dejan

    2011-12-01

    Symbol-based machine learning approaches are not often used for image classification and recognition. In this paper we present such an approach, which we first used on follicular lymphoma images. Lymphoma is a broad term encompassing a variety of cancers of the lymphatic system. Lymphoma is differentiated by the type of cell that multiplies and by how the cancer presents itself. It is very important to get an exact diagnosis regarding lymphoma and to determine the treatments that will be most effective for the patient's condition. Our work focused on the identification of lymphomas by finding follicles in microscopy images provided by the Laboratory of Pathology in the University Hospital of Tenerife, Spain. We divided our work into two stages: in the first stage we did image pre-processing and feature extraction, and in the second stage we used different symbolic machine learning approaches for pixel classification. Symbolic machine learning approaches are often neglected when looking for image analysis tools. Although known for very appropriate knowledge representation, they are also claimed to lack computational power. The results we obtained are very promising and show that symbolic approaches can be successful in image analysis applications.

  18. When Machines Design Machines!

    DEFF Research Database (Denmark)

    2011-01-01

    Until recently we were the sole designers, alone in the driving seat, making all the decisions. But we have created a world of complexity way beyond human ability to understand, control, and govern. Machines now do more trades than humans on stock markets, they control our power, water, gas and food supplies, manage our elevators, microclimates, automobiles and transport systems, and manufacture almost everything. It should come as no surprise that machines are now designing machines. The chips that power our computers and mobile phones, the robots and commercial processing plants on which we depend, all are now largely designed by machines. So what of us - will we be totally usurped, or are we looking at a new symbiosis, with human and artificial intelligences combined to realise the best outcomes possible? In most respects we have no choice! Human abilities alone cannot solve any of the major

  19. Characterizing EMG data using machine-learning tools.

    Science.gov (United States)

    Yousefi, Jamileh; Hamilton-Wright, Andrew

    2014-08-01

    Effective electromyographic (EMG) signal characterization is critical in the diagnosis of neuromuscular disorders. Machine-learning based pattern classification algorithms are commonly used to produce such characterizations. Several classifiers have been investigated to develop accurate and computationally efficient strategies for EMG signal characterization. This paper provides a critical review of some of the classification methodologies used in EMG characterization, and presents the state-of-the-art accomplishments in this field, emphasizing neuromuscular pathology. The techniques studied are grouped by their methodology, and a summary of the salient findings associated with each method is presented.

  20. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01)-based tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithm improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
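
    The progressive-approximation core of LSPIA is small enough to sketch: control points are repeatedly moved by weighted averages of the fitting residuals. The energy term, chord-error refinement, and knot insertion of the paper's ELSPIA are omitted; knot layout, iteration count and data are illustrative choices.

```python
# Minimal LSPIA iteration for cubic B-spline fitting of G01-like point data.
# Cox-de Boor basis evaluation plus the residual-averaging update; no energy
# term or chord-error refinement (those are the paper's contributions).
import numpy as np

def bspline_basis(i, p, t, knots):
    # Cox-de Boor recursion for the i-th degree-p basis function at t
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

ts = np.linspace(0, 1, 60, endpoint=False)          # data parameters
Q = np.column_stack([ts, np.sin(2 * np.pi * ts)])   # G01-like sample points

p, n_ctrl = 3, 10
knots = np.concatenate([[0] * p, np.linspace(0, 1, n_ctrl - p + 1), [1] * p])
N = np.array([[bspline_basis(j, p, t, knots) for j in range(n_ctrl)]
              for t in ts])                          # collocation matrix
P = Q[np.linspace(0, len(Q) - 1, n_ctrl).astype(int)].astype(float)

for _ in range(50):                   # LSPIA iterations
    err = Q - N @ P                   # residual at every data parameter
    col = N.sum(axis=0)
    col[col == 0] = 1.0               # guard against empty basis support
    P += (N.T @ err) / col[:, None]   # move each control point by its
                                      # weighted residual average

rms = float(np.sqrt(np.mean(np.sum((Q - N @ P) ** 2, axis=1))))
print(round(rms, 4))
```

    Because each update only averages residuals, the iteration never solves a linear system, which is the source of LSPIA's numerical stability.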

  1. Design of Demining Machines

    CERN Document Server

    Mikulic, Dinko

    2013-01-01

    In a constant effort to eliminate mine danger, the international mine action community has been developing the safety, efficiency and cost-effectiveness of clearance methods. Demining machines have become necessary when conducting humanitarian demining, where mechanization provides greater safety and productivity. Design of Demining Machines describes the development and testing of modern demining machines in humanitarian demining. Relevant data for the design of demining machines are included to explain the machinery implemented and some innovative and inspiring development solutions. Development technologies, companies and projects are discussed to provide a comprehensive estimate of the effects of various design factors and to support proper selection of optimal parameters when designing demining machines. Covering the dynamic processes occurring in machine assemblies and their components to give a broader understanding of the demining machine as a whole, Design of Demining Machines is primarily tailored as a tex...

  2. Representation Learning Based Speech Assistive System for Persons With Dysarthria.

    Science.gov (United States)

    Chandrakala, S; Rajeswari, Natarajan

    2017-09-01

    An assistive system for persons with vocal impairment due to dysarthria converts dysarthric speech to normal speech or text. Because of the articulatory deficits, dysarthric speech recognition needs a robust learning technique. Representation learning is significant for complex tasks such as dysarthric speech recognition. We focus on robust representation for dysarthric speech recognition, which involves recognizing sequential patterns of varying-length utterances. We propose a hybrid framework that uses a generative learning based data representation with a discriminative learning based classifier. In this hybrid framework, we propose to use Example Specific Hidden Markov Models (ESHMMs) to obtain log-likelihood scores for a dysarthric speech utterance to form a fixed-dimensional score vector representation. This representation is used as an input to a discriminative classifier such as a support vector machine. The performance of the proposed approach is evaluated using the UA-Speech database. The recognition accuracy is much better than the conventional hidden Markov model based approach and the Deep Neural Network-Hidden Markov Model (DNN-HMM). The efficiency of the discriminative nature of the score vector representation is proved for "very low" intelligibility words.
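
    The score-vector idea can be sketched generically: fit one generative model per word class, stack the per-model log-likelihoods of an utterance into a fixed-dimensional vector, and classify that vector discriminatively. Below, a Gaussian mixture stands in for the paper's example-specific HMMs and the acoustic features are simulated, so this shows only the representation scheme, not the ESHMM itself.

```python
# Sketch of generative score vectors feeding a discriminative classifier.
# GaussianMixture is a stand-in for the paper's ESHMMs; data is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_classes, d = 3, 4
means = rng.normal(scale=3, size=(n_classes, d))    # per-word-class centers
X = np.vstack([rng.normal(loc=means[c], size=(60, d)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 60)

# One generative model per class, fit on that class's examples only
models = [GaussianMixture(n_components=1, random_state=6).fit(X[y == c])
          for c in range(n_classes)]
# Fixed-dimensional score vector: log-likelihood under every class model
scores = np.column_stack([m.score_samples(X) for m in models])

acc = float(SVC().fit(scores, y).score(scores, y))
print(round(acc, 2))
```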

  3. An Android malware detection system based on machine learning

    Science.gov (United States)

    Wen, Long; Yu, Haiyang

    2017-08-01

    The Android smartphone, with its open source character and excellent performance, has attracted many users. However, the convenience of the Android platform has also motivated the development of malware. The traditional method, which detects malware based on signatures, is unable to detect unknown applications. This article proposes a machine learning-based lightweight system that is capable of identifying malware on Android devices. In this system we extract features based on static analysis and dynamic analysis; then a new feature selection approach based on principal component analysis (PCA) and Relief is presented to decrease the dimensionality of the features. After that, a model is constructed with a support vector machine (SVM) for classification. Experimental results show that our system provides an effective method for Android malware detection.
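
    The pipeline above (feature vectors, dimension reduction, SVM) can be sketched as follows. The binary feature flags and labels are random stand-ins for permission/API-call features, and the Relief-based selection step is omitted; only PCA is shown.

```python
# Sketch of the detection pipeline: feature vectors -> PCA -> SVM.
# Synthetic binary flags stand in for real static/dynamic app features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 50)).astype(float)  # permission/API flags
w = rng.normal(size=50)
y = (X @ w > np.median(X @ w)).astype(int)            # malware / benign labels

pipe = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
acc = float(cross_val_score(pipe, X, y, cv=5).mean())
print(round(acc, 2))
```

    On random features the PCA projection discards part of the signal; the paper's point is that on real app features the reduced representation keeps the system lightweight with little accuracy loss.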

  4. Texture Analysis using The Neutron Diffraction Method on The Non Standardized Austenitic Steel Process by Machining,Annealing, and Rolling

    Directory of Open Access Journals (Sweden)

    Tri Hardi Priyanto

    2016-04-01

    Full Text Available Austenitic steel is one type of stainless steel which is widely used in industry. Many studies on austenitic stainless steel have been performed to determine its physical properties using various types of equipment and methods. In this study, the neutron diffraction method is used to characterize materials made from minerals extracted from mines in Indonesia. The materials consist of granular ferro-scrap, nickel, ferro-chrome, ferro-manganese, and ferro-silicon, with a little titanium added. Characterization of the materials was carried out for three processes, namely machining, annealing, and rolling. Experimental results show that the machining process generally produces a texture in the 〈100〉 direction. From the machining to the annealing process, the texture index decreases from 3.0164 to 2.434. The texture strength in the machining process (BA2N sample) is 8.13 mrd, and it decreases to 6.99 in the annealing process (A2DO sample). In the annealing process a three-component texture appears: a cube-on-edge type texture {110}〈001〉, a cube-type texture {001}〈100〉, and a brass-type texture {110}〈112〉. The texture is very strong in the orientation {100}〈001〉, while the {011}〈100〉 orientation is weaker than the {001}, and the texture with orientation {110}〈112〉 is weak. In the annealing process stress release occurred, shown by a more random pole distribution compared to that of the machining process. In the rolling process a brass-type texture {110}〈112〉 with a spread towards the goss-type texture {110}〈001〉 appeared, and the brass component is markedly reinforced compared to the undeformed state (before rolling). Moreover, the presence of an additional {110} component was observed at the center of the (110) pole figure. The pole density of the three components increases with the increasing degree of thickness reduction. By increasing degrees

  5. Ozone Monitoring Using Support Vector Machine and K-Nearest Neighbors Methods

    Directory of Open Access Journals (Sweden)

    FALEH Rabeb

    2017-05-01

    Full Text Available Due to the health impacts caused by pollutant gases, monitoring and controlling air quality is an important field of interest. This paper deals with ozone monitoring in four air quality measuring stations located in several Tunisian cities, using numerous measuring instruments and polluting gas analyzers. Prediction of ozone concentrations in two Tunisian cities, Tunis and Sfax, is screened based on supervised classification models. The K-Nearest Neighbors classifier reached a 98.7% success rate in ozone recognition and identification. Support Vector Machines (SVM) with the linear, polynomial and RBF kernels were applied to build a classifier, and full accuracy (100%) was again achieved with the RBF kernel.
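
    The two classifiers compared in this record are standard; a minimal side-by-side run looks like the following. The six-dimensional feature vectors (standing in for meteorological and gas-analyzer readings) and the class boundary are synthetic, so the toy accuracies have no relation to the paper's 98.7% and 100% figures.

```python
# Sketch: KNN vs RBF-kernel SVM on synthetic "ozone episode" data with a
# nonlinear class boundary. All inputs are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 6))                       # sensor-reading stand-ins
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(int)   # nonlinear boundary

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=5)
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr).score(Xte, yte)
svm_acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
print(round(knn_acc, 2), round(svm_acc, 2))
```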

  6. A Method for Identifying the Mechanical Parameters in Resistance Spot Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    Mechanical dynamic responses of a resistance welding machine have a significant influence on weld quality and electrode service life, and must be considered when real welding production is carried out or the welding process is simulated. The mathematical models for characterizing the mechanical dynamic responses are normally a few coupled differential equations which can easily be created according to the theories of kinematics and dynamics; however, the problem is that the parameters contained in the equations are unavailable and hard to determine directly due to the complexities

  7. Learning-Based Visual Saliency Model for Detecting Diabetic Macular Edema in Retinal Image.

    Science.gov (United States)

    Zou, Xiaochun; Zhao, Xinbo; Yang, Yongjia; Li, Na

    2016-01-01

    This paper brings forth a learning-based visual saliency method for detecting diagnostic diabetic macular edema (DME) regions of interest (RoIs) in retinal images. The method models the cognitive process of visual selection of relevant regions that arises during an ophthalmologist's image examination. To record the process, we collected eye-tracking data from 10 ophthalmologists on 100 images and used this database as training and testing examples. Based on analysis, two properties (a Feature Property and a Position Property) can be derived and combined by a simple intersection operation to obtain a saliency map. The Feature Property is implemented by a support vector machine (SVM) technique using the diagnosis as supervisor; the Position Property is implemented by statistical analysis of training samples. This technique is able to learn the preferences of ophthalmologists' visual behavior while simultaneously considering feature uniqueness. The method was evaluated using three popular saliency model evaluation scores (AUC, EMD, and SS) and three quality measurements (classical sensitivity, specificity, and Youden's J statistic). The proposed method outperforms 8 state-of-the-art saliency models and 3 salient region detection approaches devised for natural images. Furthermore, our model successfully detects the DME RoIs in retinal images without sophisticated image processing such as region segmentation.

  8. A Hybrid Machine Learning Method for Fusing fMRI and Genetic Data: Combining both Improves Classification of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Honghui Yang

    2010-10-01

    Full Text Available We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully-validated leave-one-out method using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than by using either alone, indicating that genetics and brain function represent different, but partially complementary, aspects of schizophrenia etiopathology. This study suggests an effective way to reassess biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.
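
    The fusion stage, a hard majority vote over classifiers trained on different data views, is simple to sketch. The three feature "views" below are synthetic stand-ins for the SNP, voxel and ICA-component representations; no real SNP or fMRI data is involved.

```python
# Sketch of majority-voting fusion: three SVMs, each trained on a different
# feature view, combined by a hard vote. All data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 9))
y = (X[:, 0] + X[:, 3] + X[:, 6] > 0).astype(int)
views = [slice(0, 3), slice(3, 6), slice(6, 9)]    # three modalities

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=7)
preds = np.array([SVC().fit(Xtr[:, v], ytr).predict(Xte[:, v]) for v in views])
vote = (preds.sum(axis=0) >= 2).astype(int)        # majority of three votes

accs = [float((p == yte).mean()) for p in preds]
fused = float((vote == yte).mean())
print([round(a, 2) for a in accs], round(fused, 2))
```

    Each view sees only part of the signal, which is why fusing partially complementary classifiers can beat any single one, the paper's central observation.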

  9. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    Science.gov (United States)

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter, and retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including the back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are used in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near-infrared (NIR), and shortwave-infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single-band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
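
Of the four algorithms, the GRNN is the easiest to sketch: it is essentially Nadaraya-Watson kernel regression, predicting a Gaussian-kernel weighted average of training targets. The toy data and bandwidth below are illustrative, not MODIS values:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.1):
    """General regression neural network in its Nadaraya-Watson form:
    each prediction is a Gaussian-weighted average of the training
    targets, so "training" amounts to storing the samples."""
    # squared distances between every test and training sample
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return (w @ y_train) / w.sum(axis=1)           # weighted average

# toy 1-D stand-in for a reflectance-to-LAI regression: learn y = x^2
X = np.linspace(0.0, 1.0, 21)[:, None]
y = X.ravel() ** 2
pred = grnn_predict(X, y, np.array([[0.5]]))
```

The lack of iterative weight fitting is what gives the GRNN its low sensitivity to training sample size, at the cost of keeping the whole training set at prediction time.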

  10. Application of generalized Hough transform for detecting sugar beet plant from weed using machine vision method

    Directory of Open Access Journals (Sweden)

    A Bakhshipour Ziaratgahi

    2017-05-01

    Full Text Available Introduction Sugar beet (Beta vulgaris L.), the world's second most important sugar source after sugarcane, is one of the major industrial crops. The presence of weeds in sugar beet fields, especially at early growth stages, results in a substantial decrease in crop yield, so it is very important to eliminate weeds efficiently at early growing stages. The first step of precision weed control is accurate detection of weed locations in the field, an operation that can be performed by machine vision techniques. The Hough transform is a shape-feature extraction method for object tracking in image processing, basically used to identify lines or other geometrical shapes in an image. The generalized Hough transform (GHT) is a modified version of the Hough transform used to detect not only geometrical forms but any arbitrary shape. It is based on a pattern-matching principle that constructs a pattern from a set of vectors from feature points (usually object edge points) to a reference point; by comparing this pattern with a stored reference pattern, the desired shape is detected. The aim of this study was to distinguish the sugar beet plant from some common weeds in a field using the GHT. Materials and Methods Images required for this study were taken at the four-leaf stage of sugar beet, the beginning of the critical period of weed control. A shelter was used to avoid direct sunlight and to prevent leaves shadowing each other. The images were then introduced to the Image Processing Toolbox of MATLAB for further processing. Green and red color components were extracted from the primary RGB images. In the first step, binary images were obtained by applying the optimal threshold to the G-R images. A comprehensive study of several sugar beet images revealed that there is a unique feature in sugar beet leaves which makes them differentiable from the weeds. The feature observed in all sugar beet plants at the four
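
The first processing step, thresholding the G-R image to separate vegetation from background, can be sketched as follows (the pixel values and the fixed threshold are illustrative; the paper derives an optimal threshold rather than using a fixed one):

```python
import numpy as np

# tiny synthetic "image": one green (plant) pixel, one reddish (soil) pixel
rgb = np.array([[[ 40, 180, 30],     # vegetation: G much larger than R
                 [120,  90, 60]]],   # soil/background
               dtype=np.uint8)

# G - R index; cast first so uint8 subtraction cannot wrap around
g_minus_r = rgb[..., 1].astype(np.int16) - rgb[..., 0].astype(np.int16)

# binarize: vegetation pixels have strongly positive G - R
binary = g_minus_r > 20
```

The resulting binary mask is what subsequent shape analysis, such as the GHT, operates on.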

  11. On-machine characterization of moving paper using a photo-emf laser ultrasonics method

    Science.gov (United States)

    Pouet, Bruno F.; Lafond, Emmanuel F.; Pufahl, Brian; Bacher, Gerald D.; Brodeur, Pierre H.; Klein, Marvin B.

    1999-02-01

    Stiffness properties of paper materials can readily be characterized in the laboratory using conventional ultrasonic techniques. For on-line inspection on a paper machine, due to the high translation velocity and the somewhat fragile nature of the moving paper web, contact ultrasonic techniques using piezoelectric transducers are of limited use. To overcome this limitation, non-contact laser-based ultrasonic techniques can be used. Due to the rough surface of the paper, the reflected light is composed of many speckles, and for efficient detection the receiver must be able to process as many speckles as possible. Adaptive receivers using the photorefractive or photo-emf effects are characterized by a large étendue and thus are well suited for detection on paper and paperboard. Moreover, the translation velocity of the moving web means that the detection system must adapt extremely quickly to the changing speckle pattern. In this work, a photo-emf receiver was used to detect Lamb waves excited by a pulsed Nd:YAG laser in moving paper. Experiments were performed using a variable-speed web simulator at speeds well above 1 m/s. Results corresponding to various translation speeds are shown, demonstrating the feasibility of laser-based ultrasound for on-machine inspection of paper and paperboard during production.

  12. A profilometry-based dentifrice abrasion Method for V8 brushing machines. Part I: Introduction to RDA-PE.

    Science.gov (United States)

    White, Donald J; Schneiderman, Eva; Colón, Ellen; St John, Samuel

    2015-01-01

    This paper describes the development and standardization of a profilometry-based method for assessing dentifrice abrasivity, called Radioactive Dentin Abrasivity - Profilometry Equivalent (RDA-PE). Human dentin substrates are mounted in acrylic blocks of precise, standardized dimensions, permitting mounting and brushing in V8 brushing machines. The dentin blocks are masked to create an area of "contact brushing." Brushing is carried out in V8 brushing machines, and dentifrices are tested as slurries. An abrasive standard is prepared by diluting the ISO 11609 abrasivity reference calcium pyrophosphate abrasive into carboxymethyl cellulose/glycerin, just as in the RDA method. Following brushing, the masking is removed and profilometric analysis is carried out on the treated specimens, measuring average abrasion depth by contact or optical profilometry. Inclusion of the standard calcium pyrophosphate abrasive permits a direct RDA-equivalent assessment of abrasion, characterized with profilometry as Depth(test)/Depth(control) x 100. Within the test, the maximum abrasivity standard of 250 can be created in situ simply by including a treatment group brushed with the standard abrasive for 2.5x the number of brushing strokes. RDA-PE is enabled in large part by the availability of easy-to-use, well-standardized modern profilometers, but its use in V8 brushing machines depends on the specific conditions described herein. RDA-PE permits the evaluation of dentifrice abrasivity to dentin without the requirement of irradiated teeth and the infrastructure for handling them. In direct comparisons, the RDA-PE method provides dentifrice abrasivity assessments comparable to the industry gold-standard RDA technique.
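
The scoring formula reduces to a ratio of mean abrasion depths; a trivial sketch (the function name and example depths are ours):

```python
def rda_pe(depth_test, depth_control):
    """RDA-PE score: mean abrasion depth of the test dentifrice relative
    to the ISO 11609 calcium pyrophosphate reference, times 100."""
    return depth_test / depth_control * 100

# a test slurry abrading 1.5x as deep as the reference scores 150
print(rda_pe(3.0, 2.0))  # -> 150.0
```

By construction, the reference abrasive itself scores 100, mirroring the RDA convention.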

  13. Diagnosis of Dementia by Machine learning methods in Epidemiological studies: a pilot exploratory study from south India.

    Science.gov (United States)

    Bhagyashree, Sheshadri Iyengar Raghavan; Nagaraj, Kiran; Prince, Martin; Fall, Caroline H D; Krishna, Murali

    2017-07-11

    There are limited data on the use of artificial intelligence methods for the diagnosis of dementia in epidemiological studies in low- and middle-income country (LMIC) settings. A culture- and education-fair battery of cognitive tests was developed and validated for population-based studies in low- and middle-income countries, including India, by the 10/66 Dementia Research Group. We explored machine learning methods based on the 10/66 battery of cognitive tests for the diagnosis of dementia in a birth-cohort study in South India. The data sets for the 466 men and women in this study were obtained from the ongoing Mysore Studies of Natal effect of Health and Ageing (MYNAH) in south India, and included demographics, performance on the 10/66 cognitive function tests, the 10/66 diagnosis of mental disorders, and population-based normative data for the 10/66 battery of cognitive function tests. Diagnosis of dementia from the rule-based approach was compared against the 10/66 diagnosis of dementia. We applied machine learning techniques to identify the minimal number of the 10/66 cognitive function tests required for diagnosing dementia and derived an algorithm to improve the accuracy of dementia diagnosis. Of the 466 subjects, 27 had a 10/66 diagnosis of dementia, 19 of whom were correctly identified as having dementia by JRip classification with 100% accuracy. This pilot exploratory study indicates that machine learning methods can help identify community-dwelling older adults with a 10/66 criterion diagnosis of dementia with good accuracy in an LMIC setting such as India. This should reduce the duration of the diagnostic assessment, make the process easier and quicker for clinicians and patients, and be useful for 'case' ascertainment in population-based epidemiological studies.
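
JRip learns an ordered list of if-then rules over the test scores. The actual learned rules are not given in the abstract, so the hand-written stand-in below only conveys the shape of such a classifier; the test names and cut-offs are hypothetical, not items or norms from the 10/66 battery:

```python
def classify_dementia(scores, cutoffs):
    """Flag probable dementia when enough cognitive scores fall below
    normative cut-offs. A rough stand-in for a learned JRip rule list;
    test names and thresholds here are hypothetical."""
    flags = sum(scores[test] < cutoffs[test] for test in cutoffs)
    return "dementia" if flags >= 2 else "no dementia"

cutoffs = {"verbal_fluency": 15, "word_recall": 4}
print(classify_dementia({"verbal_fluency": 10, "word_recall": 2}, cutoffs))  # -> dementia
```

Rule lists of this form are attractive in field settings precisely because a clinician can read and apply them without a computer.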

  14. Novel temperature modeling and compensation method for bias of ring laser gyroscope based on least-squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    Xudong Yu; Yu Wang; Guo Wei; Pengfei Zhang; Xingwu Long

    2011-01-01

    Bias of a ring laser gyroscope (RLG) changes with temperature in a nonlinear way, which is an important factor restraining improvement of RLG accuracy. Considering the limitations of least-squares regression and neural networks, we propose a new method of temperature compensation of RLG bias that builds a function regression model using a least-squares support vector machine (LS-SVM). Static and dynamic temperature experiments on RLG bias are carried out to validate the effectiveness of the proposed method. Moreover, the traditional least-squares regression method is compared with the LS-SVM-based method. The results show that the maximum error of RLG bias drops by almost two orders of magnitude after static temperature compensation, while the bias stability of the RLG improves by one order of magnitude after dynamic temperature compensation. Thus, the proposed method effectively reduces the influence of temperature variation on RLG bias and considerably improves the accuracy of the gyroscope.
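
LS-SVM regression replaces the SVM's quadratic program with a single linear system, which makes it easy to sketch in a few lines. The kernel width, regularization, and the synthetic bias-versus-temperature curve below are illustrative, not values from the paper:

```python
import numpy as np

def lssvm_fit(X, y, gamma=1000.0, sigma=4.0):
    """LS-SVM regression with an RBF kernel: training reduces to solving
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=4.0):
    K = np.exp(-(X_new[:, None] - X_train[None, :]) ** 2 / (2 * sigma ** 2))
    return K @ alpha + b

# synthetic nonlinear bias-vs-temperature curve standing in for RLG data
T = np.linspace(-10.0, 40.0, 60)
bias = 0.02 * np.sin(T / 8.0) + 0.001 * T
b, alpha = lssvm_fit(T, bias)
pred = lssvm_predict(T, b, alpha, np.array([15.0]))
```

Once `alpha` and `b` are found, compensation is just subtracting the predicted bias at the measured temperature from the gyro output.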

  15. Precision machine design

    CERN Document Server

    Slocum, Alexander H

    1992-01-01

    This book is a comprehensive engineering exploration of all aspects of precision machine design, covering both component and system design considerations for precision machines. It addresses both theoretical analysis and practical implementation, providing many real-world design case studies as well as numerous examples of existing components and their characteristics. Fast becoming a classic, this book includes examples of analysis techniques along with the philosophy of the solution method. It explores the physics of errors in machines, how such knowledge can be used to build an error budget for a machine, and how error budgets can be used to design more accurate machines.

  16. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA), and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM), and Neural Networks (NN). The comparison concerns the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, of which 9 are ML methods. 12 simulation experiments are performed, each using 2 000 time series of 310 observations simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation, and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance across the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts
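
The experimental protocol (simulate a series of 310 observations, fit on the first 300, forecast the last 10 recursively) can be sketched for the simplest case, an AR(1) process; the seed and parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.8, 310
x = np.zeros(n)
for t in range(1, n):                      # simulate an AR(1) process
    x[t] = phi * x[t - 1] + rng.standard_normal()

fit, test = x[:300], x[300:]               # 300-point fitting set, 10-step testing set
phi_hat = fit[1:] @ fit[:-1] / (fit[:-1] @ fit[:-1])   # OLS estimate of phi

forecast, last = [], fit[-1]
for _ in range(len(test)):                 # multi-step ahead: iterate the recursion
    last = phi_hat * last
    forecast.append(last)
rmse = np.sqrt(np.mean((np.array(forecast) - test) ** 2))
```

Because the innovations are unpredictable, the recursive forecast decays toward the process mean, which is one reason "all forecasts are wrong" at longer horizons.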

  17. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection.

    Science.gov (United States)

    Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho

    2017-03-01

    Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance compared to other machine learning methods in applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a large dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method that calculates the network weights between the output and hidden layers in a single iteration and thus can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM applies only to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of the hidden layers and effectively learn the network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
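
The core of a conventional ELM is that the input-to-hidden weights are random and only the output weights are learned, in closed form via a pseudoinverse. A minimal regression sketch (the architecture sizes and the toy task are ours, not the paper's):

```python
import numpy as np

def elm_fit(X, y, n_hidden=60, seed=0):
    """Extreme learning machine: random hidden layer, output weights
    solved in a single shot by least squares (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = 2.0 * rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                      # random biases
    H = np.tanh(X @ W + b)                                 # hidden-layer features
    beta = np.linalg.pinv(H) @ y                           # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 100)[:, None]
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

The single pseudoinverse replaces many gradient-descent epochs, which is the source of the training-time savings the paper exploits inside the CNN framework.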

  18. INTEGRATION OF OVERALL EQUIPMENT EFFECTIVENESS (OEE) AND RELIABILITY METHOD FOR MEASURING MACHINE EFFECTIVENESS

    Directory of Open Access Journals (Sweden)

    H. Abdul Samat

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Maintenance is an important process in a manufacturing system; it should therefore be conducted and measured effectively to ensure efficient performance. A variety of studies have been conducted on maintenance as affected by factors such as productivity, cost, employee skills, resource utilisation, equipment, processes, and maintenance task planning and scheduling [1,2]. According to Coetzee [3], equipment is the most significant factor affecting maintenance performance because it is directly influenced by maintenance activities. This paper proposes an equipment performance and reliability (EPR) model for measuring maintenance performance based on machine effectiveness. The model is developed in four phases, using Pareto analysis for machine selection and failure mode and effect analysis (FMEA) for failure analysis. Machine effectiveness is measured by integrating overall equipment effectiveness with the reliability principle, and the result is interpreted in terms of maintenance effectiveness using five health-index levels. The model was implemented in a semiconductor company, and the outcomes confirm the practicality of the EPR model in helping companies measure maintenance effectiveness.

    AFRIKAANSE OPSOMMING: Maintenance is an important process in a manufacturing environment. It must therefore be undertaken and managed effectively with a view to efficient performance. Several studies have already been undertaken to determine the impact on maintenance of factors such as productivity, cost, employee skills, resource utilisation, equipment, processes, and maintenance planning and scheduling [1,2]. According to Coetzee [3], equipment has the most significant impact on maintenance performance, since it is directly influenced by maintenance activities. This article presents a model for equipment performance and reliability that can be used to
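
The OEE half of the integration is a standard product of three factors; a trivial sketch (the example values are illustrative):

```python
def oee(availability, performance, quality):
    """Overall equipment effectiveness: the product of availability,
    performance (speed relative to ideal), and quality (good-part rate)."""
    return availability * performance * quality

# e.g. 90% available, 95% of ideal speed, 99% good parts
print(oee(0.90, 0.95, 0.99))  # approximately 0.846
```

Because the factors multiply, a modest loss in each compounds into a much lower overall effectiveness, which is why OEE is a sensitive maintenance indicator.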

  19. EVALUATION OF MACHINE TOOL QUALITY

    Directory of Open Access Journals (Sweden)

    Ivan Kuric

    2011-12-01

    Full Text Available The paper deals with aspects of the quality and accuracy of machine tools. As the accuracy of a machine tool is a key factor in product quality, it is important to know the methods for evaluating machine tool quality and accuracy. Several aspects of machine tool diagnostics, such as reliability, are described.

  20. Spoke permanent magnet machine with reduced torque ripple and method of manufacturing thereof

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Patel Bhageerath; EL-Refaie, Ayman Mohamed Fawzi; Huh, Kum-Kang; Alexander, James Pellegrino

    2016-03-15

    An internal permanent magnet machine includes a rotor assembly having a shaft comprising a plurality of protrusions extending radially outward from a main shaft body and being formed circumferentially about the main shaft body and along an axial length of the main shaft body. A plurality of stacks of laminations are arranged circumferentially about the shaft to receive the plurality of protrusions therein, with each stack of laminations including a plurality of lamination groups arranged axially along a length of the shaft and with permanent magnets being disposed between the stacks of laminations. Each of the laminations includes a shaft protrusion cut formed therein to receive a respective shaft protrusion and, for each of the stacks of laminations, the shaft protrusion cuts formed in the laminations of a respective lamination group are angularly offset from the shaft protrusion cuts formed in the laminations in an adjacent lamination group.