WorldWideScience

Sample records for learning ml techniques

  1. OpenML : An R package to connect to the machine learning platform OpenML

    NARCIS (Netherlands)

    Casalicchio, G.; Bossek, J.; Lang, M.; Kirchhoff, D.; Kerschke, P.; Hofner, B.; Seibold, H.; Vanschoren, J.; Bischl, B.

    2017-01-01

    OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its
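
    As a quick orientation for readers of this listing: the paper describes an R interface, but OpenML datasets can also be pulled from Python. The sketch below uses scikit-learn's fetch_openml loader; the dataset name and version are just an illustrative choice, not something taken from the record above.

```python
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Download a dataset from openml.org by name; "iris" / version 1 is just an
# illustrative choice of an OpenML dataset.
data = fetch_openml(name="iris", version=1, as_frame=True)
X, y = data.data, data.target

# Run a simple experiment on the downloaded data.
score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"5-fold accuracy on the OpenML copy of iris: {score:.2f}")
```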

  2. ML Confidential : machine learning on encrypted data

    NARCIS (Netherlands)

    Graepel, T.; Lauter, K.; Naehrig, M.

    2012-01-01

    We demonstrate that by using a recently proposed somewhat homomorphic encryption (SHE) scheme it is possible to delegate the execution of a machine learning (ML) algorithm to a compute service while retaining confidentiality of the training and test data. Since the computational complexity of the
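
    A minimal sketch of the idea that makes such delegation possible: some encryption schemes let a party compute on ciphertexts without decrypting them. The toy below uses textbook (unpadded) RSA, which is only multiplicatively homomorphic; it is a stand-in for the concept, not the somewhat homomorphic scheme used in the paper, and the parameters are deliberately tiny.

```python
# Toy multiplicatively homomorphic encryption with textbook RSA: the server
# multiplies ciphertexts without ever seeing the plaintexts. Parameters are
# deliberately tiny and insecure; this only illustrates the concept.
p, q, e = 1009, 1013, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 12, 34
c = (enc(a) * enc(b)) % n              # computed "server side" on ciphertexts only
assert dec(c) == (a * b) % n           # decrypts to the product of the plaintexts
print(dec(c))
```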

  3. HLS4ML: deploying deep learning on FPGAs for L1 trigger and Data Acquisition

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performance with it. However, the exploration of such techniques within the field, in low-latency/low-power FPGAs, has just begun. We present HLS4ML, a user-friendly software package, based on High-Level Synthesis (HLS), designed to deploy network architectures on FPGAs. As a case study, we use HLS4ML for boosted-jet tagging with deep networks at the LHC. We show how neural networks can be made to fit the resources available on modern FPGAs, thanks to network pruning and quantization. We map out resource usage and latency versus network architecture, to identify the typical problem complexity that HLS4ML could deal with. We discuss possible applications in current and future HEP experiments.
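
    A hedged illustration of the two compression steps mentioned above (pruning and quantization), applied to a single dense-layer weight matrix with NumPy. This is not the hls4ml tool itself; the sparsity level and bit width are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(64, 32)).astype(np.float32)   # one dense-layer weight matrix

# Magnitude pruning: zero the smallest 70% of weights so the FPGA design can
# drop the corresponding multipliers (the sparsity level is an example choice).
threshold = np.quantile(np.abs(W), 0.70)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Uniform fixed-point quantization to 8 bits (1 sign bit + 7 fractional bits),
# mimicking a narrow fixed-point representation on the FPGA.
scale = 2 ** 7
W_quant = np.clip(np.round(W_pruned * scale), -128, 127) / scale

print("nonzero weights:", np.count_nonzero(W_quant), "of", W.size)
print("max quantization error:", float(np.abs(W_quant - W_pruned).max()))
```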

  4. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
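
    A rough sketch of the workflow described above using scikit-learn: the three classifiers (DT, NN, SVM), an information-gain-style ranking via mutual information, and recursive feature elimination. The attribute table below is synthetic stand-in data, not the Belgrade dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic per-cell attribute table (e.g. distance to roads, slope, current
# land-use class, neighbourhood composition) and the class at the next step.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = rng.integers(0, 9, 3000)                     # nine land-use classes

# The three classifiers used in the study.
for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("NN", MLPClassifier((50,), max_iter=1000, random_state=0)),
                  ("SVM", SVC())]:
    print(name, "CV accuracy:", round(cross_val_score(clf, X, y, cv=3).mean(), 2))

# Information-gain-style ranking and recursive attribute elimination.
print("mutual information per attribute:", np.round(mutual_info_classif(X, y), 3))
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
print("attributes kept by RFE:", np.where(rfe.support_)[0])
```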

  5. AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning

    Science.gov (United States)

    Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob

    2015-01-01

    We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).

  6. Machine learning (ML)-guided OPC using basis functions of polar Fourier transform

    Science.gov (United States)

    Choi, Suhyeong; Shim, Seongbo; Shin, Youngsoo

    2016-03-01

    With shrinking feature size, runtime has become a limitation of model-based OPC (MB-OPC). A few machine learning-guided OPC (ML-OPC) approaches have been studied as candidates for next-generation OPC, but they all employ too many parameters (e.g. local densities), which sets their own limitations. We propose to use basis functions of the polar Fourier transform (PFT) as parameters of ML-OPC. Since PFT functions are orthogonal to each other and reflect light phenomena well, the number of parameters can be significantly reduced without loss of OPC accuracy. Experiments demonstrate that our new ML-OPC achieves an 80% reduction in OPC time and a 35% reduction in the error of predicted mask bias when compared to conventional ML-OPC.
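
    A hedged sketch of the core idea: describe each local layout clip by a handful of polar-Fourier-style coefficients and regress the mask bias on them. The clip data, basis orders, and regression model below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical 64x64 layout clips centred on OPC control points, plus a mask
# bias for each from a reference MB-OPC run (both random stand-ins here).
rng = np.random.default_rng(0)
clips = (rng.random((200, 64, 64)) > 0.7).astype(float)
bias = rng.normal(0.0, 1.0, 200)

# Polar-Fourier-style basis on the clip grid, truncated to the unit disk.
y, x = np.mgrid[-32:32, -32:32] + 0.5
r = np.hypot(x, y) / 32.0
theta = np.arctan2(y, x)
disk = r <= 1.0

def pft_coeff(img, n, m):
    basis = np.where(disk, np.cos(2 * np.pi * n * r) * np.exp(1j * m * theta), 0)
    return np.vdot(basis, img) / disk.sum()

# A small set of (radial, angular) orders gives a compact feature vector.
orders = [(n, m) for n in range(4) for m in range(-3, 4)]
X = np.array([[abs(pft_coeff(c, n, m)) for n, m in orders] for c in clips])

model = Ridge(alpha=1.0).fit(X, bias)            # coefficients -> predicted bias
print("features per clip:", X.shape[1])
```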

  7. RED-ML

    DEFF Research Database (Denmark)

    Xiong, Heng; Liu, Dongbing; Li, Qiye

    2017-01-01

    Using diverse RNA-seq datasets, we have developed a software tool, RED-ML: RNA Editing Detection based on Machine Learning (pronounced "red ML"). The input to RED-ML can be as simple as a single BAM file, while it can also take advantage of matched genomic variant information when available … accurately detect novel RNA editing sites without relying on curated RNA editing databases. We have also made this tool freely available via GitHub. We have developed a highly accurate, speedy and general-purpose tool for RNA editing detection using RNA-seq data … With the availability of RED-ML, it is now possible to conveniently make RNA editing a routine analysis of RNA-seq. We believe this can greatly benefit the RNA editing research community and have a profound impact in accelerating our understanding of this intriguing post-transcriptional modification process …

  8. Wind Power Ramp Events Prediction with Hybrid Machine Learning Regression Techniques and Reanalysis Data

    Directory of Open Access Journals (Sweden)

    Laura Cornejo-Bueno

    2017-11-01

    Wind Power Ramp Events (WPREs) are large fluctuations of wind power in a short time interval, which lead to strong, undesirable variations in the electric power produced by a wind farm. Their accurate prediction is important in the effort of efficiently integrating wind energy in the electric system, without considerably affecting its stability, robustness and resilience. In this paper, we tackle the problem of predicting WPREs by applying Machine Learning (ML) regression techniques. Our approach consists of using variables from atmospheric reanalysis data as predictive inputs for the learning machine, which opens the possibility of hybridizing numerical-physical weather models with ML techniques for WPRE prediction in real systems. Specifically, we have explored the feasibility of a number of state-of-the-art ML regression techniques, such as support vector regression, artificial neural networks (multi-layer perceptrons and extreme learning machines) and Gaussian processes, to solve the problem. Furthermore, the ERA-Interim reanalysis from the European Centre for Medium-Range Weather Forecasts is used in this paper because of its accuracy and high resolution (in both spatial and temporal domains). Aiming to validate the feasibility of our prediction approach, we have carried out an extensive experimental evaluation using real data from three wind farms in Spain, discussing the performance of the different ML regression techniques tested on this wind power ramp event prediction problem.
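
    A minimal sketch of the regression comparison described above using scikit-learn (SVR, a multi-layer perceptron, and a Gaussian process). The predictor matrix stands in for reanalysis-derived variables; it is synthetic, not ERA-Interim data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

# Synthetic predictors standing in for reanalysis variables (wind components,
# pressure, temperature, ...) near a wind farm, and a ramp-index target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 8))
ramp = 1.5 * X[:, 0] - X[:, 1] + 0.3 * X[:, 2] ** 2 + rng.normal(0, 0.3, 1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, ramp, random_state=0)
models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "MLP": make_pipeline(StandardScaler(), MLPRegressor((64, 32), max_iter=2000, random_state=0)),
    "GP": GaussianProcessRegressor(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```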

  9. Examining Mobile Learning Trends 2003-2008: A Categorical Meta-Trend Analysis Using Text Mining Techniques

    Science.gov (United States)

    Hung, Jui-Long; Zhang, Ke

    2012-01-01

    This study investigated the longitudinal trends of academic articles in Mobile Learning (ML) using text mining techniques. One hundred and nineteen (119) refereed journal articles and proceedings papers from the SCI/SSCI database were retrieved and analyzed. The taxonomies of ML publications were grouped into twelve clusters (topics) and four…
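
    A small sketch of the text-mining step described above: TF-IDF features plus k-means clustering to group publications into topics. The mini-corpus and the number of clusters are invented for illustration; the study used 119 papers and twelve clusters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented mini-corpus of mobile-learning titles, reduced to two clusters.
docs = [
    "Mobile learning with SMS quizzes in secondary school classrooms",
    "Podcasting lectures for distance learners on handheld devices",
    "PDA-based field data collection for science education",
    "Student attitudes toward mobile phone assisted language learning",
    "Context-aware museum guides on mobile devices for informal learning",
    "Text messaging interventions to support nursing students",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, doc in zip(km.labels_, docs):
    print(label, doc)
```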

  10. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions.

    Science.gov (United States)

    Kassahun, Yohannes; Yu, Bingbin; Tibebu, Abraham Temesgen; Stoyanov, Danail; Giannarou, Stamatia; Metzen, Jan Hendrik; Vander Poorten, Emmanuel

    2016-04-01

    Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is an increasing interest in using ML for developing tools to understand and model surgical skill and competence or to extract surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is believed to also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team also on a cognitive level, for example by lowering the mental load of the team. For example, ML could help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical

  11. ML at ATLAS&CMS : setting the stage

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    In the early days of the LHC, the canonical problems of classification and regression were mostly addressed using simple cut-based techniques. Today, ML techniques (some already pioneered in pre-LHC or non-collider experiments) play a fundamental role in the toolbox of any experimentalist. The talk will introduce, through a representative collection of examples, the problems addressed with ML techniques at the LHC. The goal of the talk is to set the stage for a constructive discussion with non-HEP ML practitioners, focusing on the specificities of HEP applications.

  12. A FIRST LOOK AT CREATING MOCK CATALOGS WITH MACHINE LEARNING TECHNIQUES

    International Nuclear Information System (INIS)

    Xu Xiaoying; Ho, Shirley; Trac, Hy; Schneider, Jeff; Ntampaka, Michelle; Poczos, Barnabas

    2013-01-01

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ∼0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ∼5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
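
    A hedged sketch of the regression setup described above: predict N_gal from six halo properties with SVM and kNN regression and report the mean-squared error. The halo catalog below is synthetic stand-in data, not the Millennium simulation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic halo catalog: six properties per halo (particle count, M_200,
# sigma_v, v_max, half-mass radius, spin) and the galaxy count N_gal.
rng = np.random.default_rng(0)
n = 3000
logM = rng.uniform(11.5, 14.5, n)
halos = np.column_stack([
    10 ** (logM - 9),                       # particle-count proxy
    10 ** logM,                             # M_200
    150 * 10 ** (0.3 * (logM - 12.0)),      # sigma_v
    200 * 10 ** (0.3 * (logM - 12.0)),      # v_max
    rng.uniform(0.1, 1.0, n),               # half-mass radius
    rng.uniform(0.01, 0.1, n),              # spin
])
n_gal = rng.poisson(10 ** (logM - 12.5))

X_tr, X_te, y_tr, y_te = train_test_split(np.log10(halos), n_gal, random_state=0)
for name, reg in [("SVM", make_pipeline(StandardScaler(), SVR(C=10.0))),
                  ("kNN", KNeighborsRegressor(n_neighbors=10))]:
    reg.fit(X_tr, y_tr)
    print(name, "MSE:", round(mean_squared_error(y_te, reg.predict(X_te)), 3))
```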

  13. A FIRST LOOK AT CREATING MOCK CATALOGS WITH MACHINE LEARNING TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Xu Xiaoying; Ho, Shirley; Trac, Hy; Schneider, Jeff; Ntampaka, Michelle [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Poczos, Barnabas [School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States)

    2013-08-01

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ~0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ~5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.

  14. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry ford exercIse testing (FIT) project.

    Science.gov (United States)

    Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H

    2017-12-19

    Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness, and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). In order to handle the imbalanced dataset, the Synthetic Minority Over-sampling Technique (SMOTE) was used. Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that the various ML techniques can vary significantly in their performance across the different evaluation metrics. It is also not necessarily the case that a more complex ML model yields higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness
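
    A rough sketch of the evaluation protocol described above: oversample the minority class with SMOTE, train several classifiers, and compare AUC on a held-out set. It assumes the imbalanced-learn package for SMOTE; the cohort data are synthetic stand-ins for the FIT records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE   # requires the imbalanced-learn package

# Synthetic stand-in for the cohort: tabular fitness/clinical features with a
# rare positive class (all-cause mortality).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority class

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(probability=True, random_state=0)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_bal, y_bal)
    print(name, "AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```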

  15. The jmzQuantML programming interface and validator for the mzQuantML data standard.

    Science.gov (United States)

    Qi, Da; Krishna, Ritesh; Jones, Andrew R

    2014-03-01

    The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Machine Learning of Musical Gestures

    OpenAIRE

    Caramiaux, Baptiste; Tanaka, Atau

    2013-01-01

    We present an overview of machine learning (ML) techniques and their application in interactive music and new digital instrument design. We first give the non-specialist reader an introduction to two ML tasks, classification and regression, that are particularly relevant for gestural interaction. We then present a review of the literature in current NIME research that uses ML in musical gesture analysis and gestural sound control. We describe the ways in which machine learning is useful for cre...

  17. MACHINE LEARNING TECHNIQUES APPLIED TO LIGNOCELLULOSIC ETHANOL IN SIMULTANEOUS HYDROLYSIS AND FERMENTATION

    Directory of Open Access Journals (Sweden)

    J. Fischer

    This paper investigates the use of machine learning (ML) techniques to study the effect of different process conditions on ethanol production from lignocellulosic sugarcane bagasse biomass using S. cerevisiae in a simultaneous hydrolysis and fermentation (SHF) process. The effects of temperature, enzyme concentration, biomass load, inoculum size and time were investigated using artificial neural networks, a C5.0 classification tree and random forest algorithms. The optimization of ethanol production was also evaluated. The results clearly show that ML techniques can be used to evaluate the SHF (R² between actual and model predictions higher than 0.90, absolute average deviation lower than 8.1% and RMSE lower than 0.80) and predict optimized conditions which are in close agreement with those found experimentally. Optimal conditions were found to be a temperature of 35 ºC, an SHF time of 36 h, enzymatic load of 99.8%, inoculum size of 29.5 g/L and bagasse concentration of 24.9%. The ethanol concentration and volumetric productivity for these conditions were 12.1 g/L and 0.336 g/L.h, respectively.
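
    A hedged sketch of the modeling-plus-optimization idea described above: fit a random forest to (condition, ethanol) pairs and search a grid of process conditions for the model-predicted optimum. The experiment table below is synthetic, not the paper's data.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor

# Synthetic SHF experiment table. Column order: temperature (C), time (h),
# enzymatic load (%), inoculum size (g/L), bagasse concentration (%).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(30, 40, 60),
    rng.uniform(12, 48, 60),
    rng.uniform(50, 100, 60),
    rng.uniform(5, 30, 60),
    rng.uniform(10, 25, 60),
])
ethanol = 12 - 0.1 * (X[:, 0] - 35) ** 2 + 0.05 * X[:, 1] + rng.normal(0, 0.5, 60)

model = RandomForestRegressor(n_estimators=400, random_state=0).fit(X, ethanol)

# Coarse grid search over conditions to locate the model-predicted optimum,
# analogous to the optimization step reported above.
grid = np.array(list(product(np.linspace(30, 40, 5), np.linspace(12, 48, 5),
                             np.linspace(50, 100, 5), np.linspace(5, 30, 5),
                             np.linspace(10, 25, 5))))
best = grid[int(np.argmax(model.predict(grid)))]
print("predicted best (T, t, enzyme, inoculum, bagasse):", best)
```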

  18. International Conference ML4CPS 2016

    CERN Document Server

    Niggemann, Oliver; Kühnert, Christian

    2017-01-01

    The work presents new approaches to Machine Learning for Cyber Physical Systems, experiences and visions. It contains some selected papers from the international Conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Karlsruhe, September 29th, 2016. Cyber Physical Systems are characterized by their ability to adapt and to learn: They analyze their environment and, based on observations, they learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments. The Editors Prof. Dr.-Ing. Jürgen Beyerer is Professor at the Department for Interactive Real-Time Systems at the Karlsruhe Institute of Technology. In addition he manages the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB. Prof. Dr. Oliver Niggemann is Professor for Embedded Software Engineering. His research interests are in the field of Di...

  19. Introducing the Collaborative Learning Modeling Language (ColeML)

    DEFF Research Database (Denmark)

    Bundsgaard, Jeppe

    2014-01-01

    in this area, represented by, for example, the Workflow Management Coalition (Hollingsworth, 1995) and the very widespread standard Business Process Modeling and Notation (BPMN), has been criticized on the basis of research in knowledge work processes. Inspiration for ColeML is found in this research area...

  20. CellML, SED-ML, and the Physiome Model Repository

    OpenAIRE

    Nickerson, David

    2016-01-01

    Invited presentation delivered at COMBINE 2016. CellML, SED-ML, and the Physiome Model Repository. David Nickerson, Auckland Bioengineering Institute, University of Auckland, New Zealand. CellML is an XML-based protocol for storing and exchanging computer-based mathematical models in an unambiguous, modular, and reusable manner. In addition to introducing CellML, in this presentation I will provide some of the physiological examples that have helped drive the development and adoption of CellML. I will...

  1. GeoSciML and EarthResourceML Update, 2012

    Science.gov (United States)

    Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI) Interoperability Working Group

    2012-12-01

    CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS rdf or html representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.

  2. Machine Learning Techniques in Clinical Vision Sciences.

    Science.gov (United States)

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, adding objectivity to the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches are presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow creating homogeneous groups (unsupervised learning), or creating a classifier that predicts group membership of new cases (supervised learning), when a group label is available for each case. To ensure good performance of the machine learning techniques on a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, the noise should be removed, the missing data should be treated and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques in ocular disease diagnosis and monitoring is presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples are presented in glaucoma, age-related macular degeneration
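
    A small sketch of the preprocessing and modeling steps listed above, using scikit-learn: imputation of missing values, standardization, dimensionality reduction, then a supervised classifier, plus an unsupervised clustering of the same records. The ocular dataset is a synthetic stand-in.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic table of ocular measurements with missing values and a binary
# disease label for the supervised case.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
X[rng.random(X.shape) < 0.05] = np.nan           # simulate missing data
y = rng.integers(0, 2, 200)

# Supervised: impute, standardize, reduce dimensionality, predict membership.
clf = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("model", LogisticRegression(max_iter=1000)),
]).fit(X, y)

# Unsupervised: form homogeneous groups without using the labels.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    SimpleImputer(strategy="median").fit_transform(X))
print("training accuracy:", clf.score(X, y), "| group sizes:", np.bincount(groups))
```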

  3. Using VS30 to Estimate Station ML Adjustments (dML)

    Science.gov (United States)

    Yong, A.; Herrick, J.; Cochran, E. S.; Andrews, J. R.; Yu, E.

    2017-12-01

    Currently, new seismic stations added to a regional seismic network cannot be used to calculate local or Richter magnitude (ML) until a revised region-wide amplitude decay function is developed. The new station must record a minimum number of local and regional events that meet specific amplitude requirements prior to re-calibration of the amplitude decay function. Therefore, there can be a significant delay between when a new station starts contributing real-time waveform packets and when its data can be included in magnitude estimation. The station component adjustments (dML; Uhrhammer et al., 2011) are calculated after first inverting for a new regional amplitude decay function, constrained by the sum of dML for long-running stations. Here, we propose a method to calculate an initial dML using known or proxy values of seismic site conditions. For site conditions, we use the time-averaged shear-wave velocity (VS) of the upper 30 m (VS30). We solve for dML as described in Equation (1) of Uhrhammer et al. (2011): ML = log(A) - log A0(r) + dML, where A is the maximum Wood and Anderson (1925) trace amplitude (mm), r is the distance (km), and dML is the station adjustment. The measured VS30 and estimated dML data comprise records from 887 horizontal components (east-west and north-south orientations) from 93 seismic monitoring stations in the California Integrated Seismic Network. VS30 values range from 202 m/s to 1464 m/s and dML values range from -1.10 to 0.39. VS30 and dML exhibit a positive correlation coefficient (R = 0.72), indicating that as VS30 increases, dML increases. This implies that greater site amplification (i.e., lower VS30) results in smaller ML. This relationship allows new stations to contribute to regional network ML estimates immediately, without the need to wait until a minimum set of earthquake data has been recorded.
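
    A hedged sketch of the two pieces described above: the station ML relation with a dML term, and a simple log-linear fit of dML against VS30 that could supply an initial adjustment for a new station. The attenuation term is a Hutton-and-Boore-style placeholder and all numbers are invented for illustration.

```python
import numpy as np

# (1) Station ML from the relation ML = log10(A) - log10(A0(r)) + dML, with a
#     Hutton-and-Boore-style -log10(A0) term used purely as a placeholder.
def neg_log_a0(r_km):
    return 1.11 * np.log10(r_km) + 0.00189 * r_km + 0.591

amp_mm, r_km, dml = 2.5, 80.0, -0.15             # hypothetical observation
ml = np.log10(amp_mm) + neg_log_a0(r_km) + dml
print(f"ML = {ml:.2f}")

# (2) Proxy dML from VS30 via a log-linear fit, mirroring the reported
#     positive correlation (higher VS30 -> higher dML). Numbers are invented.
vs30 = np.array([250.0, 360.0, 520.0, 760.0, 1100.0, 1400.0])
dml_obs = np.array([-0.60, -0.35, -0.10, 0.05, 0.20, 0.30])
slope, intercept = np.polyfit(np.log10(vs30), dml_obs, 1)
print("initial dML for a new VS30 = 400 m/s site:",
      round(slope * np.log10(400.0) + intercept, 2))
```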

  4. Quantum machine learning: a classical perspective

    Science.gov (United States)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  5. Quantum machine learning: a classical perspective.

    Science.gov (United States)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  6. Machine learning in geosciences and remote sensing

    Directory of Open Access Journals (Sweden)

    David J. Lary

    2016-01-01

    Learning incorporates a broad range of complex procedures. Machine learning (ML) is a subdivision of artificial intelligence based on the biological learning process. The ML approach deals with the design of algorithms that learn from machine-readable data. ML covers main domains such as data mining, difficult-to-program applications, and software applications. It is a collection of a variety of algorithms (e.g. neural networks, support vector machines, self-organizing maps, decision trees, random forests, case-based reasoning, genetic programming, etc.) that can provide multivariate, nonlinear, nonparametric regression or classification. The modeling capabilities of ML-based methods have resulted in their extensive application in science and engineering. Herein, the role of ML as an effective approach for solving problems in geosciences and remote sensing is highlighted. The unique features of some ML techniques are outlined, with specific attention to the genetic programming paradigm. Furthermore, nonparametric regression and classification examples are presented to demonstrate the efficiency of ML for tackling geoscience and remote sensing problems.

  7. E-learning systems intelligent techniques for personalization

    CERN Document Server

    Klašnja-Milićević, Aleksandra; Ivanović, Mirjana; Budimac, Zoran; Jain, Lakhmi C

    2017-01-01

    This monograph provides a comprehensive research review of intelligent techniques for the personalisation of e-learning systems. Special emphasis is given to intelligent tutoring systems as a particular class of e-learning systems, which support and improve the learning and teaching of domain-specific knowledge. A new approach to performing effective personalization based on Semantic Web technologies within a tutoring system is presented. This approach incorporates a recommender system based on collaborative tagging techniques that adapts to the interests and level of students' knowledge. These innovations are important contributions of this monograph. Theoretical models and techniques are illustrated on a real personalised tutoring system for teaching the Java programming language. The monograph is directed to students and researchers interested in e-learning and personalization techniques.

  8. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    Science.gov (United States)

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…

  9. Machine-learning techniques for family demography: an application of random forests to the analysis of divorce determinants in Germany

    OpenAIRE

    Arpino, Bruno; Le Moglie, Marco; Mencarini, Letizia

    2018-01-01

    Demographers often analyze the determinants of life-course events with parametric regression-type approaches. Here, we present a class of nonparametric approaches, broadly defined as machine learning (ML) techniques, and discuss advantages and disadvantages of a popular type known as random forest. We argue that random forests can be useful either as a substitute, or a complement, to more standard parametric regression modeling. Our discussion of random forests is intuitive and...

  10. Machine learning techniques for optical communication system optimization

    DEFF Research Database (Denmark)

    Zibar, Darko; Wass, Jesper; Thrane, Jakob

    In this paper, machine learning techniques relevant to optical communication are presented and discussed. The focus is on applying machine learning tools to optical performance monitoring and performance prediction.

  11. Storytelling: a teaching-learning technique.

    Science.gov (United States)

    Geanellos, R

    1996-03-01

    Nurses' stories, arising from the practice world, reconstruct the essence of experience as lived and provide vehicles for learning about nursing. The learning process is forwarded by combining storytelling and reflection. Reflection represents an active, purposive, contemplative and deliberative approach to learning through which learners create meaning from the learning experience. The combination of storytelling and reflection allows the creation of links between the materials at hand and prior and future learning. As a teaching-learning technique storytelling engages learners; organizes information; allows exploration of shared lived experiences without the demands, responsibilities and consequences of practice; facilitates remembering; enhances discussion, problem posing and problem solving; and aids understanding of what it is to nurse and to be a nurse.

  12. Less is more: Sampling chemical space with active learning

    Science.gov (United States)

    Smith, Justin S.; Nebgen, Ben; Lubbers, Nicholas; Isayev, Olexandr; Roitberg, Adrian E.

    2018-06-01

    The development of accurate and transferable machine learning (ML) potentials for predicting molecular energetics is a challenging task. The process of data generation to train such ML potentials is a task neither well understood nor researched in detail. In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. It is based on the concept of active learning (AL) via Query by Committee (QBC), which uses the disagreement between an ensemble of ML potentials to infer the reliability of the ensemble's prediction. QBC allows the presented AL algorithm to automatically sample regions of chemical space where the ML potential fails to accurately predict the potential energy. AL improves the overall fitness of ANAKIN-ME (ANI) deep learning potentials in rigorous test cases by mitigating human biases in deciding what new training data to use. AL also reduces the training set size to a fraction of the data required when using naive random sampling techniques. To provide validation of our AL approach, we develop the COmprehensive Machine-learning Potential (COMP6) benchmark (publicly available on GitHub) which contains a diverse set of organic molecules. Active learning-based ANI potentials outperform the original random sampled ANI-1 potential with only 10% of the data, while the final active learning-based model vastly outperforms ANI-1 on the COMP6 benchmark after training to only 25% of the data. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark. This universal ML potential achieves a level of accuracy on par with the best ML potentials for single molecules or materials, while remaining applicable to the general class of organic molecules composed of the elements CHNO.
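
    A minimal sketch of Query by Committee on a toy one-dimensional energy function: train a small committee on bootstrap resamples, score a candidate pool by committee disagreement, and add the most uncertain point to the training set. This only illustrates the AL loop, not the ANI/ANAKIN-ME models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy one-dimensional "potential energy surface"; the committee members are
# small neural networks trained on bootstrap resamples of the current data.
rng = np.random.default_rng(0)
def energy(x):
    return np.sin(3 * x) + 0.5 * x ** 2

pool = np.linspace(-3, 3, 600).reshape(-1, 1)    # candidate configurations
X = rng.uniform(-3, 3, 10).reshape(-1, 1)        # small initial training set
y = energy(X).ravel()

for _ in range(5):                               # a few active-learning rounds
    committee = []
    for seed in range(4):
        idx = rng.integers(0, len(X), len(X))    # bootstrap resample
        committee.append(MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                      random_state=seed).fit(X[idx], y[idx]))
    preds = np.stack([m.predict(pool) for m in committee])
    disagreement = preds.std(axis=0)             # committee variance per candidate
    new = pool[np.argmax(disagreement)]          # query where the ensemble disagrees most
    X = np.vstack([X, [new]])
    y = np.append(y, energy(new))

print("final training set size:", len(X))
```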

  13. Modern machine learning techniques and their applications in cartoon animation research

    CERN Document Server

    Yu, Jun

    2013-01-01

    The integration of machine learning techniques and cartoon animation research is fast becoming a hot topic. This book helps readers learn the latest machine learning techniques, including patch alignment framework; spectral clustering, graph cuts, and convex relaxation; ensemble manifold learning; multiple kernel learning; multiview subspace learning; and multiview distance metric learning. It then presents the applications of these modern machine learning techniques in cartoon animation research. With these techniques, users can efficiently utilize the cartoon materials to generate animations

  14. Prostate Cancer Probability Prediction By Machine Learning Technique.

    Science.gov (United States)

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

    The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models. If one makes a relevant prediction of prostate cancer, it is easier to devise a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.

  15. Conformal technique dose escalation in prostate cancer: improved cancer control with higher doses in patients with pretreatment PSA ≥ 10 ngm/ml

    Energy Technology Data Exchange (ETDEWEB)

    Hanks, G E; Lee, W R; Hanlon, A L; Kaplan, E; Epstein, B; Schultheiss, T

    1995-07-01

    Purpose: Single institutions and an NCI-supported group of institutions have been investigating the value of dose escalation in patients with prostate cancer treated by conformal treatment techniques. Improvement in morbidity has been previously established, while this report identifies the pretreatment PSA level subgroups of patients who benefited in cancer control from higher dose. Materials and Methods: We report actuarial bNED survival rates for 375 consecutive patients with known pretreatment PSA levels treated with conformal technique between 5/89 and 12/93. The whole pelvis was treated to 45 Gy in 25 fractions in all T2C,3, all Gleason 8, 9, 10 and all patients with pretreatment PSA ≥20. The prostate ± seminal vesicles was boosted at 2.1 Gy/day to the center of the prostate to 65-79 Gy (65-69 N=50, 70-72.49 N=94, 72.5-74.9 N=82, 75-77.49 N=129 and ≥77.5 N=20). The median follow-up is 21 mos with a range of 3 to 67 mos. The highest-dose patients have the least follow-up, reducing the impact of the highest dose levels at this time. Patients are analyzed for the entire group divided at 71 Gy and at 73 Gy calculated at the center of the prostate. Each dose group is then subdivided by pretreatment PSA levels <10, 10-19.9, and ≥20 ngm/ml, and dose levels are compared within each pretreatment PSA level group. bNED failure is defined as PSA ≥1.5 ngm/ml and rising on two consecutive values. Results: Table 1 shows the bNED survival rates at 24 and 36 mos for all patients and the three pretreatment PSA level groups. For all patients pooled, there is an overall advantage to using doses ≥71 Gy (64% vs 85% at 36 mo, p=.006) and ≥73 Gy (71% vs 86% at 36 mo, p=.07). The subgroup of PSA <10 ngm/ml, however, shows no benefit in bNED survival when using doses over 71 Gy (90% vs 93% at 36 mo) or 73 Gy (91% vs 94% at 36 mo). The subgroup PSA 10 ngm/ml to 19.9 ngm/ml shows improved cancer control when using doses over 71 Gy (61% vs 88% at 36 mo, p=.03) and over 73

  16. On the use of successive data in the ML-EM algorithm in Positron Emission Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Desmedt, P.; Lemahieu, I. [University of Ghent, ELIS Department, Sint-Pietersnieuwstraat 41, B-9000 Gent (Belgium)]

    1994-12-31

    The Maximum Likelihood-Expectation Maximization (ML-EM) algorithm is the most popular statistical reconstruction technique for Positron Emission Tomography (PET). The ML-EM algorithm is however also renowned for its long reconstruction times. An acceleration technique for this algorithm is studied in this paper. The proposed technique starts the ML-EM algorithm before the measurement process is completed. Since the reconstruction is initiated during the scan of the patient, the time elapsed before a reconstruction becomes available is reduced. Experiments with software phantoms indicate that the quality of the reconstructed image using successive data is comparable to the quality of the reconstruction with the normal ML-EM algorithm. (authors). 7 refs, 3 figs.
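
    A small NumPy sketch of the classic ML-EM multiplicative update on a toy emission-tomography problem; the system matrix and counts are random stand-ins, and a real PET reconstruction would use a large sparse system matrix and corrections the abstract does not cover.

```python
import numpy as np

# Toy setup: A maps image pixels to detector bins (system matrix), counts are
# the measured coincidences. Both are random stand-ins for illustration.
rng = np.random.default_rng(0)
n_pix, n_det = 64, 96
A = rng.random((n_det, n_pix))
true_img = rng.gamma(2.0, 1.0, n_pix)
counts = rng.poisson(A @ true_img)

# Classic ML-EM multiplicative update:
#   lambda_j <- lambda_j / sum_i A_ij * sum_i A_ij * y_i / (A @ lambda)_i
sens = A.sum(axis=0)                  # sensitivity image, sum_i A_ij
img = np.ones(n_pix)                  # uniform initial estimate
for _ in range(50):
    proj = A @ img
    img *= (A.T @ (counts / np.maximum(proj, 1e-12))) / sens

print("relative error:", np.linalg.norm(img - true_img) / np.linalg.norm(true_img))
```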

  17. m-Learning and holography: Compatible techniques?

    Science.gov (United States)

    Calvo, Maria L.

    2014-07-01

    Over the last decades, cell phones have become increasingly popular and are nowadays ubiquitous. New generations of cell phones are equipped with text messaging, internet, and camera features, and they are now making their way into the classroom. This is creating a new teaching and learning technique, so-called m-Learning (mobile learning). Because of the many benefits that cell phones offer, teachers could easily use them as a teaching and learning tool. However, additional work by teachers to introduce their students to m-Learning in the classroom needs to be defined and developed. As an example, optical techniques based upon interference and diffraction phenomena, such as holography, appear to be convenient topics for m-Learning. They can be approached with simple examples and experiments within the capabilities of cell phones and the accessibility of the classroom. We present some results obtained at the Faculty of Physical Sciences at UCM on very simple holographic recordings made with cell phones. The activities were carried out within the course on Optical Coherence and Lasers, offered to students in the fourth year of the Degree in Physical Sciences. Some open conclusions and proposals will be presented.

  18. Maximizing Reading Narrative Text Ability by Probing Prompting Learning Technique

    Directory of Open Access Journals (Sweden)

    Wiwied Pratiwi

    2017-12-01

    The objective of this research was to determine whether the Probing Prompting Learning Technique can be used to maximize students' ability to read narrative texts in the teaching and learning process. The research applied collaborative action research and was carried out in two cycles. The subjects were 23 tenth-grade students at SMA Kartikatama Metro. The results showed that the Probing Prompting Learning Technique is useful and effective in helping students get the most out of their reading. Based on the questionnaire results, with an average percentage of 95%, the application of the Probing Prompting Learning Technique in teaching reading was appropriate, and students' responses toward the technique were positive. In conclusion, the Probing Prompting Learning Technique can maximize students' reading ability. In relation to these results, it is suggested that English teachers use the Probing Prompting Learning Technique in teaching reading to obtain the maximum effect on students' reading ability.

  19. APPLICABILITY OF COOPERATIVE LEARNING TECHNIQUES IN DIFFERENT CLASSROOM CONTEXTS

    Directory of Open Access Journals (Sweden)

    Dr. Issy Yuliasri

    2017-04-01

    This paper is based on the results of a pre-test, post-test, feedback questionnaire and observation during a community service program entitled "Training on English Teaching using Cooperative Learning Techniques for Elementary and Junior High School Teachers of Sekolah Alam Arridho Semarang". It was an English teaching training program intended to equip the teachers with the knowledge and skills of using different cooperative learning techniques such as jigsaw, think-pair-share, three-step interview, round-robin brainstorming, three-minute review, numbered heads together, team-pair-solo, circle the sage, and partners. Eight teachers of different subjects (not only English) participated in this program, most of them with a good mastery of English. The objective of the program was to improve teachers' skills in using the different cooperative learning techniques to vary their teaching, so that students would be more motivated to learn and improve their English skills. Besides, the training also gave the teachers the knowledge and skills to adjust their techniques to the basic competence and learning objectives to be achieved, as well as to the teaching materials to be used. This was done through workshops using cooperative learning techniques, so that the participants had real experience of using cooperative learning techniques (learning by doing). The participants were also encouraged to explore the applicability of the techniques in their classroom contexts, in different areas of their teaching. This community service program showed very positive results. The pre-test and post-test results showed that before the training program the participants did not know the nine cooperative techniques to be trained, but after the program they mastered the techniques, as shown by the teaching-learning scenarios they developed following the test instructions. In addition, the anonymous questionnaires showed that all the participants

  1. Data Mining Practical Machine Learning Tools and Techniques

    CERN Document Server

    Witten, Ian H; Hall, Mark A

    2011-01-01

    Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place

  2. qcML

    DEFF Research Database (Denmark)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara

    2014-01-01

    … provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible … use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml.

  3. Opportunities to Create Active Learning Techniques in the Classroom

    Science.gov (United States)

    Camacho, Danielle J.; Legare, Jill M.

    2015-01-01

    The purpose of this article is to contribute to the growing body of research that focuses on active learning techniques. Active learning techniques require students to consider a given set of information, analyze, process, and prepare to restate what has been learned--all strategies are confirmed to improve higher order thinking skills. Active…

  4. ML Confidential : machine learning on encrypted data

    NARCIS (Netherlands)

    Graepel, T.; Lauter, K.; Naehrig, M.; Kwon, T.; Lee, M.-K.; Kwon, D.

    2013-01-01

    We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic

  5. Machine learning versus knowledge based classification of legal texts

    NARCIS (Netherlands)

    de Maat, E.; Krabben, K.; Winkels, R.; Winkels, R.G.F.

    2010-01-01

    This paper presents results of an experiment in which we used machine learning (ML) techniques to classify sentences in Dutch legislation. These results are compared to the results of a pattern-based classifier. Overall, the ML classifier performs as accurately (>90%) as the pattern-based one, but
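
    A hedged sketch of the comparison described above: a bag-of-words ML classifier next to a single hand-written pattern rule. The example sentences, labels, and regular expression are invented for illustration (the study used Dutch legislation and a richer label set).

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Invented training sentences with a sentence-type label.
sentences = [
    "In this Act, 'provider' means any natural or legal person offering services.",
    "The provider shall submit an annual report to the supervisory authority.",
    "For the purposes of this chapter, 'consumer' means a natural person.",
    "The authority shall publish the register referred to in section 3.",
]
labels = ["definition", "obligation", "definition", "obligation"]

# ML classifier: bag-of-words features plus a linear SVM.
ml_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC()).fit(sentences, labels)

# Pattern-based baseline: one hand-written rule.
def pattern_clf(sentence):
    return "definition" if re.search(r"\bmeans\b", sentence) else "obligation"

test = "The operator shall notify the authority within thirty days."
print("ML:", ml_clf.predict([test])[0], "| pattern:", pattern_clf(test))
```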

  6. Active learning techniques for librarians practical examples

    CERN Document Server

    Walsh, Andrew

    2010-01-01

    A practical work outlining the theory and practice of using active learning techniques in library settings. It explains the theory of active learning and argues for its importance in our teaching and is illustrated using a large number of examples of techniques that can be easily transferred and used in teaching library and information skills to a range of learners within all library sectors. These practical examples recognise that for most of us involved in teaching library and information skills the one off session is the norm, so we need techniques that allow us to quickly grab and hold our

  7. The Effect of Group Investigation Learning Model with Brainstroming Technique on Students Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti Kade kAyu

    2018-01-01

    This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO), compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are outcomes in the cognitive domain. The method used is an experiment with a Randomised Posttest-Only Control Group Design. The population is all students of class XI IPA of SMA Negeri 9 Kupang in the 2015/2016 academic year. The selected sample consists of 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item essay test. The first hypothesis was tested using a two-tailed t-test; H0 was rejected, which means there is a difference in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was rejected, which means the students' PLO in the experimental class was higher than in the control class. Based on the results of this study, the researchers recommend the use of the GI learning model with brainstorming techniques to improve PLO, especially in the cognitive domain.

  8. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for students' intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics learning outcomes of students taught with the environmental-utilization learning model are higher than those of students taught with animated simulations, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics learning outcomes, after controlling for student intelligence. Based on these findings, thermodynamics lectures should use the environmental-utilization learning model together with the project assessment technique.

  9. Unintended consequences of machine learning in medicine?

    Science.gov (United States)

    McDonald, Laura; Ramagopalan, Sreeram V; Cox, Andrew P; Oguz, Mustafa

    2017-01-01

    Machine learning (ML) has the potential to significantly aid medical practice. However, a recent article highlighted some negative consequences that may arise from using ML decision support in medicine. We argue here that whilst the concerns raised by the authors may be appropriate, they are not specific to ML, and thus the article may lead to an adverse perception about this technique in particular. Whilst ML is not without its limitations like any methodology, a balanced view is needed in order to not hamper its use in potentially enabling better patient care.

  10. Use of machine learning approaches for novel drug discovery.

    Science.gov (United States)

    Lima, Angélica Nakagawa; Philot, Eric Allison; Trossini, Gustavo Henrique Goulart; Scott, Luis Paulo Barbour; Maltarollo, Vinícius Gonçalves; Honorio, Kathia Maria

    2016-01-01

    The use of computational tools in the early stages of drug development has increased in recent decades. Machine learning (ML) approaches have been of special interest, since they can be applied in several steps of the drug discovery methodology, such as prediction of target structure, prediction of biological activity of new ligands through model construction, discovery or optimization of hits, and construction of models that predict the pharmacokinetic and toxicological (ADMET) profile of compounds. This article presents an overview of some applications of ML techniques in drug design. These techniques can be employed in ligand-based drug design (LBDD) and structure-based drug design (SBDD) studies, such as similarity searches, construction of classification and/or prediction models of biological activity, prediction of secondary structures and binding sites, docking, and virtual screening. Successful cases have been reported in the literature, demonstrating the efficiency of ML techniques combined with traditional approaches to study medicinal chemistry problems. Some ML techniques used in drug design are support vector machines, random forests, decision trees and artificial neural networks. Currently, an important application of ML techniques is related to the calculation of scoring functions used in docking and virtual screening assays from a consensus, combining traditional and ML techniques in order to improve the prediction of binding sites and docking solutions.
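
    As a hedged illustration of the ligand-based modelling the review describes, the sketch below trains a random forest to classify "active" versus "inactive" compounds from binary fingerprint-like descriptors; the fingerprints and activity labels are randomly generated stand-ins for real data.

    ```python
    # Minimal ligand-based activity-classification sketch: a random forest trained on
    # binary fingerprint-like descriptors. The data are randomly generated stand-ins
    # for real molecular fingerprints and activity labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_compounds, n_bits = 500, 128
    X = rng.integers(0, 2, size=(n_compounds, n_bits))            # fake fingerprints
    y = (X[:, :10].sum(axis=1) + rng.normal(0, 1, n_compounds) > 5).astype(int)  # fake "active" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    proba = model.predict_proba(X_test)[:, 1]
    print("ROC AUC on held-out compounds:", round(roc_auc_score(y_test, proba), 3))
    ```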

  11. Machine learning techniques to examine large patient databases.

    Science.gov (United States)

    Meyfroidt, Geert; Güiza, Fabian; Ramon, Jan; Bruynooghe, Maurice

    2009-03-01

    Computerization in healthcare in general, and in the operating room (OR) and intensive care unit (ICU) in particular, is on the rise. This leads to large patient databases, with specific properties. Machine learning techniques are able to examine and to extract knowledge from large databases in an automatic way. Although the number of potential applications for these techniques in medicine is large, few medical doctors are familiar with their methodology, advantages and pitfalls. A general overview of machine learning techniques, with a more detailed discussion of some of these algorithms, is presented in this review.

  12. eLearning techniques supporting problem based learning in clinical simulation.

    Science.gov (United States)

    Docherty, Charles; Hoy, Derek; Topp, Helena; Trinder, Kathryn

    2005-08-01

    This paper details the results of the first phase of a project using eLearning to support students' learning within a simulated environment. The locus was a purpose-built clinical simulation laboratory (CSL) where the School's philosophy of problem-based learning (PBL) was challenged by lecturers using traditional teaching methods. The response was a student-centred, problem-based approach to the acquisition of clinical skills that used high-quality learning objects embedded within web pages, substituting for lecturers providing instruction and demonstration. This encouraged student nurses to explore, analyse and make decisions within the safety of a clinical simulation. Learning was facilitated through network communications and reflection on video performances of self and others. Evaluations were positive, with students demonstrating increased satisfaction with PBL, improved performance in exams, and increased self-efficacy in the performance of nursing activities. These results indicate that eLearning techniques can help students acquire clinical skills in the safety of a simulated environment within the context of a problem-based learning curriculum.

  13. Comparative Performance Analysis of Machine Learning Techniques for Software Bug Detection

    OpenAIRE

    Saiqa Aleem; Luiz Fernando Capretz; Faheem Ahmed

    2015-01-01

    Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information. Machine learning techniques have proven useful for software bug prediction. In this paper, a comparative performance analysis of different machine learning techniques is explored for software bug prediction on publicly available data sets. Results showed most of the mac ...
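
    A minimal sketch of such a comparative evaluation is shown below: several scikit-learn classifiers are scored with the same cross-validation folds on a synthetic, imbalanced data set standing in for a public defect data set.

    ```python
    # Sketch of a comparative evaluation: several classifiers scored with the same
    # cross-validation split on a synthetic "software metrics" dataset. Real studies
    # would substitute public defect data sets for make_classification.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
        print(f"{name:20s} mean F1 = {scores.mean():.3f}")
    ```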

  14. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on Social Learning Networks is still not widely explored, especially when the network focuses on an e-learning system. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  15. White cell labeling: 20 mL vs 4 mL of blood volume - case reports

    International Nuclear Information System (INIS)

    Imam, S.K.

    1998-01-01

    Full text: Sometimes it becomes difficult to draw 20 mL of blood from a patient with bad veins. On two occasions, we could collect only about 4 mL of blood, and that with a great deal of struggle, and then carried out the routine labelling procedure. A labelling efficiency of 98.2% and 95.6% was achieved. The white cell scan was negative in one patient, but positive in the other. In a third patient, labelling efficiency was compared between 5 mL and 20 mL blood volumes separately, and the results were found to be identical, 98.5% and 98.4%, respectively. As we have obtained the usual pattern of white cell scan with as little as 4-5 mL of blood, it appears that enough white cells are present even in 4-5 mL of blood to generate a white cell scan, so it seems rational to reduce the blood volume from 20 mL to 4 or 5 mL. However, further studies are warranted before adopting this modification. The procedure appears to carry the following advantages: ease of blood collection, handling and re-injection, and less risk to the patient

  16. Learning Programming Technique through Visual Programming Application as Learning Media with Fuzzy Rating

    Science.gov (United States)

    Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad

    2017-01-01

    Programming technique is one of the subjects at Vocational High Schools in Indonesia. This subject covers the theory and application of programming using Visual Programming. Students experience some difficulties with textual learning. Therefore, it is necessary to develop media as a tool to transfer learning materials. The objectives of this…

  17. BENCHMARKING MACHINE LEARNING TECHNIQUES FOR SOFTWARE DEFECT DETECTION

    OpenAIRE

    Saiqa Aleem; Luiz Fernando Capretz; Faheem Ahmed

    2015-01-01

    Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as learning processes that depend on various circumstances and change accordingly. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers to retrieve useful information after the classification and enable them to analyse data...

  18. Learning Physics through Project-Based Learning Game Techniques

    Science.gov (United States)

    Baran, Medine; Maskan, Abdulkadir; Yasar, Seyma

    2018-01-01

    The aim of the present study, in which Project and game techniques are used together, is to examine the impact of project-based learning games on students' physics achievement. Participants of the study consist of 34 9th grade students (N = 34). The data were collected using achievement tests and a questionnaire. Throughout the applications, the…

  19. Robustness and prediction accuracy of machine learning for objective visual quality assessment

    OpenAIRE

    HINES, ANDREW

    2014-01-01

    Lisbon, Portugal. Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specific...

  20. IoT Security Techniques Based on Machine Learning

    OpenAIRE

    Xiao, Liang; Wan, Xiaoyue; Lu, Xiaozhen; Zhang, Yanyong; Wu, Di

    2018-01-01

    The Internet of Things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing, denial of service, jamming and eavesdropping. In this article, we investigate the attack model for IoT systems, and review the IoT security solutions based on machine learning techniques including supervised learning, unsupervised learning and reinforcement learning. We focus on the machine le...
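
    As one hedged example of the unsupervised branch of such solutions, the sketch below uses an Isolation Forest to flag anomalous traffic records; the per-device feature values are simulated and purely illustrative.

    ```python
    # Unsupervised anomaly-detection sketch in the spirit of the IoT security survey:
    # an Isolation Forest flags unusual traffic records. Feature values are synthetic
    # stand-ins for per-device statistics (packet rate, payload size, signal strength).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    normal_traffic = rng.normal(loc=[50, 300, -60], scale=[5, 30, 3], size=(500, 3))
    attack_traffic = rng.normal(loc=[90, 900, -40], scale=[5, 50, 3], size=(20, 3))

    detector = IsolationForest(contamination=0.05, random_state=1).fit(normal_traffic)
    labels = detector.predict(attack_traffic)   # -1 = anomaly, +1 = normal
    print("flagged as anomalous:", int((labels == -1).sum()), "of", len(attack_traffic))
    ```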

  1. Machine learning techniques in optical communication

    DEFF Research Database (Denmark)

    Zibar, Darko; Piels, Molly; Jones, Rasmus Thomas

    2015-01-01

    Techniques from the machine learning community are reviewed and employed for laser characterization, signal detection in the presence of nonlinear phase noise, and nonlinearity mitigation. Bayesian filtering and expectation maximization are employed within a nonlinear state-space framework...

  2. jmzML, an open-source Java API for mzML, the PSI standard for MS data.

    Science.gov (United States)

    Côté, Richard G; Reisinger, Florian; Martens, Lennart

    2010-04-01

    We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based, random-access XML indexing parser, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.

  3. Machine learning techniques in optical communication

    DEFF Research Database (Denmark)

    Zibar, Darko; Piels, Molly; Jones, Rasmus Thomas

    2016-01-01

    Machine learning techniques relevant for nonlinearity mitigation, carrier recovery, and nanoscale device characterization are reviewed and employed. Markov Chain Monte Carlo in combination with Bayesian filtering is employed within the nonlinear state-space framework and demonstrated for parameter...

  4. ROBUSTNESS AND PREDICTION ACCURACY OF MACHINE LEARNING FOR OBJECTIVE VISUAL QUALITY ASSESSMENT

    OpenAIRE

    Hines, Andrew; Kendrick, Paul; Barri, Adriaan; Narwaria, Manish; Redi, Judith A.

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptim...

  5. ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines

    Science.gov (United States)

    2014-05-16


  6. The application of machine learning techniques in the clinical drug therapy.

    Science.gov (United States)

    Meng, Huan-Yu; Jin, Wan-Lin; Yan, Cheng-Kai; Yang, Huan

    2018-05-25

    The development of a novel drug is an extremely complicated process that includes target identification, design and manufacture, and proper therapy of the novel drug, as well as drug dose selection, drug efficacy evaluation, and adverse drug reaction control. Due to limited resources, high costs, long duration, and a low hit-to-lead ratio in drug development, and with the development of pharmacogenetics and computer technology, machine learning techniques have come to assist novel drug development and have gradually received more attention from researchers. According to current research, machine learning techniques are widely applied in the discovery of new drugs and novel drug targets, decisions surrounding proper therapy and drug dose, and the prediction of drug efficacy and adverse drug reactions. In this article, we discuss the history, workflow, and advantages and disadvantages of machine learning techniques in the processes mentioned above. Although the advantages of machine learning techniques are fairly obvious, their application is currently limited. With further research, the application of machine learning techniques in drug development could become much more widespread and could potentially be one of the major methods used in drug development.

  7. Three visual techniques to enhance interprofessional learning.

    Science.gov (United States)

    Parsell, G; Gibbs, T; Bligh, J

    1998-07-01

    Many changes in the delivery of healthcare in the UK have highlighted the need for healthcare professionals to learn to work together as teams for the benefit of patients. Whatever the profession or level, whether for postgraduate education and training, continuing professional development, or for undergraduates, learners should have an opportunity to learn about and with, other healthcare practitioners in a stimulating and exciting way. Learning to understand how people think, feel, and react, and the parts they play at work, both as professionals and individuals, can only be achieved through sensitive discussion and exchange of views. Teaching and learning methods must provide opportunities for this to happen. This paper describes three small-group teaching techniques which encourage a high level of learner collaboration and team-working. Learning content is focused on real-life health-care issues and strong visual images are used to stimulate lively discussion and debate. Each description includes the learning objectives of each exercise, basic equipment and resources, and learning outcomes.

  8. Contemporary machine learning: techniques for practitioners in the physical sciences

    Science.gov (United States)

    Spears, Brian

    2017-10-01

    Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
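
    A compact sketch of the tutorial's two themes, using scikit-learn on synthetic data: a supervised regression model fit to a noisy function, followed by unsupervised dimensionality reduction and clustering.

    ```python
    # A compact tour of the tutorial's two themes: supervised learning (fit a noisy
    # function with a random forest) and unsupervised learning (reduce dimensionality
    # with PCA, then cluster). All data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)

    # Supervised: learn y = sin(x0) + 0.5*x1 from noisy samples.
    X = rng.uniform(-3, 3, size=(400, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 400)
    reg = RandomForestRegressor(n_estimators=200, random_state=3).fit(X, y)
    print("training R^2:", round(reg.score(X, y), 3))

    # Unsupervised: project 10-D blobs to 2-D and cluster them.
    blobs = np.vstack([rng.normal(mu, 0.5, size=(100, 10)) for mu in (0, 3, 6)])
    embedded = PCA(n_components=2).fit_transform(blobs)
    labels = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(embedded)
    print("cluster sizes:", np.bincount(labels))
    ```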

  9. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques that provide suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then build the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  10. Machine Learning Techniques for Stellar Light Curve Classification

    Science.gov (United States)

    Hinners, Trisha A.; Tat, Kevin; Thorp, Rachel

    2018-07-01

    We apply machine learning techniques in an attempt to predict and classify stellar properties from noisy and sparse time-series data. We preprocessed over 94 GB of Kepler light curves from the Mikulski Archive for Space Telescopes (MAST) to classify according to 10 distinct physical properties using both representation learning and feature engineering approaches. Studies using machine learning in the field have been primarily done on simulated data, making our study one of the first to use real light-curve data for machine learning approaches. We tuned our data using previous work with simulated data as a template and achieved mixed results between the two approaches. Representation learning using a long short-term memory recurrent neural network produced no successful predictions, but our work with feature engineering was successful for both classification and regression. In particular, we were able to achieve values for stellar density, stellar radius, and effective temperature with low error (∼2%–4%) and good accuracy (∼75%) for classifying the number of transits for a given star. The results show promise for improvement for both approaches upon using larger data sets with a larger minority class. This work has the potential to provide a foundation for future tools and techniques to aid in the analysis of astrophysical data.
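
    The feature-engineering route described above can be sketched as follows: summary statistics are extracted from simulated flux series (not Kepler data) and passed to a random forest classifier; the transit injection and feature choices are illustrative assumptions.

    ```python
    # Feature-engineering sketch for light-curve classification: summary statistics
    # are extracted from simulated flux series and fed to a random forest, mirroring
    # the feature-based approach described above. No Kepler data are used here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def simulate_curve(has_transit):
        flux = 1.0 + rng.normal(0, 0.001, 500)
        if has_transit:
            flux[100:110] -= 0.01          # a crude box-shaped transit dip
        return flux

    def features(flux):
        return [flux.mean(), flux.std(), flux.min(), flux.max(),
                np.percentile(flux, 5), np.percentile(flux, 95)]

    y = rng.integers(0, 2, 300)
    X = np.array([features(simulate_curve(label)) for label in y])

    clf = RandomForestClassifier(n_estimators=200, random_state=4)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```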

  11. Exploration of Machine Learning Approaches to Predict Pavement Performance

    Science.gov (United States)

    2018-03-23

    Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...

  12. jqcML: an open-source java API for mass spectrometry quality control data in the qcML format.

    Science.gov (United States)

    Bittremieux, Wout; Kelchtermans, Pieter; Valkenborg, Dirk; Martens, Lennart; Laukens, Kris

    2014-07-03

    The awareness that systematic quality control is an essential factor to enable the growth of proteomics into a mature analytical discipline has increased over the past few years. To this aim, a controlled vocabulary and document structure have recently been proposed by Walzer et al. to store and disseminate quality-control metrics for mass-spectrometry-based proteomics experiments, called qcML. To facilitate the adoption of this standardized quality control routine, we introduce jqcML, a Java application programming interface (API) for the qcML data format. First, jqcML provides a complete object model to represent qcML data. Second, jqcML provides the ability to read, write, and work in a uniform manner with qcML data from different sources, including the XML-based qcML file format and the relational database qcDB. Interaction with the XML-based file format is obtained through the Java Architecture for XML Binding (JAXB), while generic database functionality is obtained by the Java Persistence API (JPA). jqcML is released as open-source software under the permissive Apache 2.0 license and can be downloaded from https://bitbucket.org/proteinspector/jqcml .

  13. Precision Learning Assessment: An Alternative to Traditional Assessment Techniques.

    Science.gov (United States)

    Caltagirone, Paul J.; Glover, Christopher E.

    1985-01-01

    A continuous and curriculum-based assessment method, Precision Learning Assessment (PLA), which integrates precision teaching and norm-referenced techniques, was applied to a math computation curriculum for 214 third graders. The resulting districtwide learning curves defining average annual progress through the computation curriculum provided…

  14. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    International Nuclear Information System (INIS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-01-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate detector misalignment and high precision physics.

  15. Practising What We Teach: Vocational Teachers Learn to Research through Applying Action Learning Techniques

    Science.gov (United States)

    Lasky, Barbara; Tempone, Irene

    2004-01-01

    Action learning techniques are well suited to the teaching of organisation behaviour students because of their flexibility, inclusiveness, openness, and respect for individuals. They are no less useful as a tool for change for vocational teachers, learning, of necessity, to become researchers. Whereas traditional universities have always had a…

  16. Into the Bowels of Depression: Unravelling Medical Symptoms Associated with Depression by Applying Machine-Learning Techniques to a Community Based Population Sample

    Science.gov (United States)

    Dipnall, Joanna F.

    2016-01-01

    Background Depression is commonly comorbid with many other somatic diseases and symptoms. Identification of individuals in clusters with comorbid symptoms may reveal new pathophysiological mechanisms and treatment targets. The aim of this research was to combine machine-learning (ML) algorithms with traditional regression techniques by utilising self-reported medical symptoms to identify and describe clusters of individuals with increased rates of depression from a large cross-sectional community based population epidemiological study. Methods A multi-staged methodology utilising ML and traditional statistical techniques was performed using the community based population National Health and Nutrition Examination Study (2009–2010) (N = 3,922). A Self-organised Mapping (SOM) ML algorithm, combined with hierarchical clustering, was performed to create participant clusters based on 68 medical symptoms. Binary logistic regression, controlling for sociodemographic confounders, was used to then identify the key clusters of participants with higher levels of depression (PHQ-9≥10, n = 377). Finally, a Multiple Additive Regression Tree boosted ML algorithm was run to identify the important medical symptoms for each key cluster within 17 broad categories: heart, liver, thyroid, respiratory, diabetes, arthritis, fractures and osteoporosis, skeletal pain, blood pressure, blood transfusion, cholesterol, vision, hearing, psoriasis, weight, bowels and urinary. Results Five clusters of participants, based on medical symptoms, were identified to have significantly increased rates of depression compared to the cluster with the lowest rate: odds ratios ranged from 2.24 (95% CI 1.56, 3.24) to 6.33 (95% CI 1.67, 24.02). The ML boosted regression algorithm identified three key medical condition categories as being significantly more common in these clusters: bowel, pain and urinary symptoms. Bowel-related symptoms was found to dominate the relative importance of symptoms within the

  17. Into the Bowels of Depression: Unravelling Medical Symptoms Associated with Depression by Applying Machine-Learning Techniques to a Community Based Population Sample.

    Science.gov (United States)

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

    Depression is commonly comorbid with many other somatic diseases and symptoms. Identification of individuals in clusters with comorbid symptoms may reveal new pathophysiological mechanisms and treatment targets. The aim of this research was to combine machine-learning (ML) algorithms with traditional regression techniques by utilising self-reported medical symptoms to identify and describe clusters of individuals with increased rates of depression from a large cross-sectional community based population epidemiological study. A multi-staged methodology utilising ML and traditional statistical techniques was performed using the community based population National Health and Nutrition Examination Study (2009-2010) (N = 3,922). A Self-organised Mapping (SOM) ML algorithm, combined with hierarchical clustering, was performed to create participant clusters based on 68 medical symptoms. Binary logistic regression, controlling for sociodemographic confounders, was used to then identify the key clusters of participants with higher levels of depression (PHQ-9≥10, n = 377). Finally, a Multiple Additive Regression Tree boosted ML algorithm was run to identify the important medical symptoms for each key cluster within 17 broad categories: heart, liver, thyroid, respiratory, diabetes, arthritis, fractures and osteoporosis, skeletal pain, blood pressure, blood transfusion, cholesterol, vision, hearing, psoriasis, weight, bowels and urinary. Five clusters of participants, based on medical symptoms, were identified to have significantly increased rates of depression compared to the cluster with the lowest rate: odds ratios ranged from 2.24 (95% CI 1.56, 3.24) to 6.33 (95% CI 1.67, 24.02). The ML boosted regression algorithm identified three key medical condition categories as being significantly more common in these clusters: bowel, pain and urinary symptoms. Bowel-related symptoms was found to dominate the relative importance of symptoms within the five key clusters. This
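
    A simplified sketch of such a multi-staged pipeline is given below, with agglomerative clustering standing in for the SOM stage; the symptom indicators and depression flags are simulated, and the three stages (cluster, relate clusters to depression, rank symptom importance) are only loosely modelled on the study.

    ```python
    # Multi-stage sketch: (1) cluster participants on binary symptom indicators,
    # (2) relate cluster membership to a depression flag with logistic regression,
    # (3) rank symptom importance with boosted trees. All data are simulated.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(5)
    n, n_symptoms = 1000, 20
    symptoms = rng.integers(0, 2, size=(n, n_symptoms))
    depressed = (symptoms[:, :3].sum(axis=1) + rng.normal(0, 0.8, n) > 2).astype(int)

    # Stage 1: cluster participants on their symptom profiles.
    clusters = AgglomerativeClustering(n_clusters=5).fit_predict(symptoms)

    # Stage 2: which clusters carry higher odds of depression?
    cluster_dummies = np.eye(5)[clusters][:, 1:]   # one-hot, first cluster as reference
    logit = LogisticRegression(max_iter=1000).fit(cluster_dummies, depressed)
    print("log-odds relative to reference cluster:", np.round(logit.coef_[0], 2))

    # Stage 3: which symptoms matter most for the depression flag?
    gbm = GradientBoostingClassifier(random_state=5).fit(symptoms, depressed)
    print("top symptom indices:", np.argsort(gbm.feature_importances_)[::-1][:5])
    ```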

  18. A Comparative Analysis of Machine Learning Techniques for Credit Scoring

    OpenAIRE

    Nwulu, Nnamdi; Oroja, Shola; İlkan, Mustafa

    2012-01-01

    Credit scoring has become an oft-researched topic in light of the increasing volatility of the global economy and the recent world financial crisis. Amidst the many methods used for credit scoring, machine learning techniques are becoming increasingly popular due to their efficient and accurate nature and relative simplicity. Furthermore, machine learning techniques minimize the risk of human bias and error and maximize speed as they are able to perform computation...

  19. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....
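
    A hedged sketch of the boosted-regression-tree modelling is shown below: a gradient-boosted regressor predicts a fabricated abundance index from invented satellite-derived covariates (NDVI, land-surface temperature, forest cover).

    ```python
    # Boosted-regression-tree sketch: predict a (simulated) tick abundance index from
    # satellite-derived covariates. The covariates and response are fabricated.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n = 800
    ndvi = rng.uniform(0, 1, n)            # vegetation index
    lst = rng.uniform(5, 25, n)            # land-surface temperature, deg C
    forest = rng.uniform(0, 1, n)          # forest-cover fraction
    X = np.column_stack([ndvi, lst, forest])
    abundance = 10 * ndvi * forest + 0.3 * lst + rng.normal(0, 1, n)

    X_train, X_test, y_train, y_test = train_test_split(X, abundance, random_state=6)
    brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3,
                                    random_state=6).fit(X_train, y_train)
    print("held-out R^2:", round(brt.score(X_test, y_test), 3))
    ```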

  20. Machine learning & artificial intelligence in the quantum domain: a review of recent progress.

    Science.gov (United States)

    Dunjko, Vedran; Briegel, Hans J

    2018-03-05

    Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research-quantum information versus machine learning (ML) and artificial intelligence (AI)-have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our 'big data' world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement-exploring what ML/AI can do for quantum physics and vice versa-researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. In this review, we describe the main ideas, recent developments and

  1. ML 3.1 developer's guide.

    Energy Technology Data Exchange (ETDEWEB)

    Sala, Marzio; Hu, Jonathan Joseph (Sandia National Laboratories, Livermore, CA); Tuminaro, Raymond Stephen (Sandia National Laboratories, Livermore, CA)

    2004-05-01

    ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the Users Guide [SHT04], and to the ML web site, http://software.sandia.gov/ml.

  2. Biodegradation of malathion by Bacillus licheniformis strain ML-1

    Directory of Open Access Journals (Sweden)

    Khan Sara

    2016-01-01

    Full Text Available Malathion, a well-known organophosphate pesticide, has been used in agriculture over the last two decades for controlling pests of economically important crops. In the present study, a single bacterium, ML-1, was isolated by a soil-enrichment technique and identified as Bacillus licheniformis on the basis of the 16S rRNA technique. The bacterium was grown in carbon-free minimal salt medium (MSM) and was found to be very efficient in utilizing malathion as the sole source of carbon. Biodegradation experiments were performed in MSM without carbon source to determine the malathion degradation by the selected strain, and the residues of malathion were determined quantitatively using HPLC techniques. Bacillus licheniformis showed very promising results and efficiently consumed malathion as the sole carbon source via malathion carboxylesterase (MCE), and about 78% of the malathion was degraded within 5 days. The carboxylesterase activity was determined by using crude extract with malathion as substrate, and the residues were determined by HPLC. It was found that the MCE hydrolyzed 87% of the malathion within 96 h of incubation. Characterization of crude MCE revealed that the enzyme is robust in nature in terms of organic solvents, as it was found to be stable in various concentrations of ethanol and acetonitrile. Similarly, it can work over a wide pH and temperature range. The results of this study highlighted the potential of Bacillus licheniformis strain ML-1 as a biodegrader that can be used for the bioremediation of malathion-contaminated soil.

  3. Machine learning applications in cancer prognosis and prediction.

    Science.gov (United States)

    Kourou, Konstantina; Exarchos, Themis P; Exarchos, Konstantinos P; Karamouzis, Michalis V; Fotiadis, Dimitrios I

    2015-01-01

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized with the aim of modeling the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques with the aim of modeling cancer risk or patient outcomes.
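
    As a toy illustration of the supervised techniques surveyed, the sketch below cross-validates a support vector machine on scikit-learn's bundled Wisconsin breast-cancer data set; it is a benchmark exercise, not a clinically validated model.

    ```python
    # Cross-validated SVM sketch on scikit-learn's bundled breast-cancer dataset,
    # illustrating the kind of supervised model the review surveys.
    from sklearn.datasets import load_breast_cancer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=5)
    print("mean cross-validated accuracy: %.3f" % scores.mean())
    ```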

  4. mzML2ISA & nmrML2ISA: generating enriched ISA-Tab metadata files from metabolomics XML data.

    Science.gov (United States)

    Larralde, Martin; Lawson, Thomas N; Weber, Ralf J M; Moreno, Pablo; Haug, Kenneth; Rocca-Serra, Philippe; Viant, Mark R; Steinbeck, Christoph; Salek, Reza M

    2017-08-15

    Submission to the MetaboLights repository for metabolomics data currently places the burden of reporting instrument and acquisition parameters in ISA-Tab format on users, who have to do it manually, a process that is time consuming and prone to user input error. Since the large majority of these parameters are embedded in instrument raw data files, an opportunity exists to capture this metadata more accurately. Here we report a set of Python packages that can automatically generate ISA-Tab metadata file stubs from raw XML metabolomics data files. The parsing packages are separated into mzML2ISA (encompassing mzML and imzML formats) and nmrML2ISA (nmrML format only). Overall, the use of mzML2ISA & nmrML2ISA reduces the time needed to capture metadata substantially (capturing 90% of metadata on assay and sample levels), is much less prone to user input errors, improves compliance with minimum information reporting guidelines and facilitates more finely grained data exploration and querying of datasets. mzML2ISA & nmrML2ISA are available under version 3 of the GNU General Public Licence at https://github.com/ISA-tools. Documentation is available from http://2isa.readthedocs.io/en/latest/. reza.salek@ebi.ac.uk or isatools@googlegroups.com. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  5. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    Science.gov (United States)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

    Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interactions can be done via the drug synergy score. It needs efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet the requirement mentioned above. However, these techniques individually do not provide significant accuracy in predicting the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. adding more weights to the model with a higher prediction score) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms others in terms of accuracy, root mean square error and coefficient of correlation.
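
    The biased weighted aggregation can be sketched as below, with readily available scikit-learn regressors standing in for the fuzzy models (GFS.GCCL, ANFIS, DENFIS) named above; weights are taken proportional to each model's cross-validated score, and the data are synthetic.

    ```python
    # Weighted-ensemble sketch: regressors are scored by cross-validation and their
    # predictions combined with weights proportional to those scores.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.linear_model import Ridge

    X, y = make_regression(n_samples=600, n_features=15, noise=10.0, random_state=7)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

    models = [RandomForestRegressor(n_estimators=200, random_state=7),
              GradientBoostingRegressor(random_state=7),
              Ridge(alpha=1.0)]

    # Weight each model by its cross-validated R^2 on the training data.
    weights = np.array([cross_val_score(m, X_train, y_train, cv=5).mean() for m in models])
    weights = np.clip(weights, 0, None) / np.clip(weights, 0, None).sum()

    preds = np.column_stack([m.fit(X_train, y_train).predict(X_test) for m in models])
    ensemble_pred = preds @ weights
    rmse = np.sqrt(np.mean((ensemble_pred - y_test) ** 2))
    print("weights:", np.round(weights, 3), "ensemble RMSE:", round(rmse, 2))
    ```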

  6. Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.

    Science.gov (United States)

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Machine Learning (ML) is one of the fastest-developing techniques in the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust in and authenticity of ML approaches. A series of prediction models has been generated and used for rapid screening of compounds on the basis of absorption over the last decade. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative for predicting absorption. However, these prediction models still have far to go to achieve confidence similar to that of conventional experimental methods for the estimation of drug absorption. Some of the general concerns are the selection of appropriate ML methods and validation techniques, in addition to selecting relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published ML models for the prediction of absorption using physicochemical properties as descriptors, and their important conclusions. In addition, some critical challenges in the acceptance of ML models for absorption are also discussed.

  7. A preclustering-based ensemble learning technique for acute appendicitis diagnoses.

    Science.gov (United States)

    Lee, Yen-Hsien; Hu, Paul Jen-Hwa; Cheng, Tsang-Hsiang; Huang, Te-Chia; Chuang, Wei-Yao

    2013-06-01

    Acute appendicitis is a common medical condition, whose effective, timely diagnosis can be difficult. A missed diagnosis not only puts the patient in danger but also requires additional resources for corrective treatments. An acute appendicitis diagnosis constitutes a classification problem, for which a further fundamental challenge pertains to the skewed outcome class distribution of instances in the training sample. A preclustering-based ensemble learning (PEL) technique aims to address the associated imbalanced sample learning problems and thereby support the timely, accurate diagnosis of acute appendicitis. The proposed PEL technique employs undersampling to reduce the number of majority-class instances in a training sample, uses preclustering to group similar majority-class instances into multiple groups, and selects from each group representative instances to create more balanced samples. The PEL technique thereby reduces potential information loss from random undersampling. It also takes advantage of ensemble learning to improve performance. We empirically evaluate this proposed technique with 574 clinical cases obtained from a comprehensive tertiary hospital in southern Taiwan, using several prevalent techniques and a salient scoring system as benchmarks. The comparative results show that PEL is more effective and less biased than any benchmarks. The proposed PEL technique seems more sensitive to identifying positive acute appendicitis than the commonly used Alvarado scoring system and exhibits higher specificity in identifying negative acute appendicitis. In addition, the sensitivity and specificity values of PEL appear higher than those of the investigated benchmarks that follow the resampling approach. Our analysis suggests PEL benefits from the more representative majority-class instances in the training sample. According to our overall evaluation results, PEL records the best overall performance, and its area under the curve measure reaches 0.619. The
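
    A simplified reading of the preclustering idea is sketched below: majority-class instances are grouped with k-means, each group contributes the instances nearest its centroid, and the rebalanced sample trains an ensemble classifier; the data and cluster count are illustrative assumptions, not the study's configuration.

    ```python
    # Cluster-based undersampling sketch inspired by the PEL idea: group the majority
    # class with k-means, keep representatives near each centroid, then train an
    # ensemble classifier on the more balanced sample. All data are synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    X, y = make_classification(n_samples=1500, weights=[0.9, 0.1], random_state=8)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=8)

    maj, mino = X_train[y_train == 0], X_train[y_train == 1]
    k = 10
    km = KMeans(n_clusters=k, n_init=10, random_state=8).fit(maj)

    # Keep roughly as many majority representatives as there are minority instances.
    per_cluster = max(1, len(mino) // k)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(maj[members] - km.cluster_centers_[c], axis=1)
        keep.extend(members[np.argsort(dist)[:per_cluster]])

    X_bal = np.vstack([maj[keep], mino])
    y_bal = np.concatenate([np.zeros(len(keep), dtype=int), np.ones(len(mino), dtype=int)])

    clf = RandomForestClassifier(n_estimators=200, random_state=8).fit(X_bal, y_bal)
    print("minority-class recall:", round(recall_score(y_test, clf.predict(X_test)), 3))
    ```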

  8. Allelism of Genes in the Ml-a locus

    DEFF Research Database (Denmark)

    Giese, Nanna Henriette; Jensen, Hans Peter; Jørgensen, Jørgen Helms

    1980-01-01

    Seven barley lines or varieties, each with a different gene at the Ml-a locus for resistance to Erysiphe graminis were intercrossed. Progeny testing of the F2s using two different fungal isolates per cross provided evidence that there are two or more loci in the Ml-a region. Apparent recombinants...... were also screened for recombination between the Hor1 and Hor2 loci which are situated either side of the Ml-a locus. The cross between Ricardo and Iso42R (Rupee) yielded one possible recombinant, with Ml-a3 and Ml-a(Rul) in the coupling phase; other recombinants had wild-type genes in the coupling...... phase. Iso20R, derived from Hordeum spontaneum 'H204', carrying Ml-a6, had an additional gene, in close coupling with Ml-a6, tentatively named Ml-aSp2 or Reglv, causing an intermediate infection type with isolate EmA30. It is suggested that Ml-a(Ar) in Emir and Ml-a(Rul), shown to differ from other Ml...

  9. Machine learning techniques for gait biometric recognition using the ground reaction force

    CERN Document Server

    Mason, James Eric; Woungang, Isaac

    2016-01-01

    This book focuses on how machine learning techniques can be used to analyze and make use of one particular category of behavioral biometrics known as the gait biometric. A comprehensive Ground Reaction Force (GRF)-based Gait Biometrics Recognition framework is proposed and validated by experiments. In addition, an in-depth analysis of existing recognition techniques that are best suited for performing footstep GRF-based person recognition is also proposed, as well as a comparison of feature extractors, normalizers, and classifiers configurations that were never directly compared with one another in any previous GRF recognition research. Finally, a detailed theoretical overview of many existing machine learning techniques is presented, leading to a proposal of two novel data processing techniques developed specifically for the purpose of gait biometric recognition using GRF. This book · introduces novel machine-learning-based temporal normalization techniques · bridges research gaps concerning the effect of ...

  10. MACHINE LEARNING TECHNIQUES USED IN BIG DATA

    Directory of Open Access Journals (Sweden)

    STEFANIA LOREDANA NITA

    2016-07-01

    Full Text Available The classical tools used in data analysis are not enough to benefit from all the advantages of big data. The amount of information is too large for a complete investigation, and possible connections and relations between data could be missed, because it is difficult or even impossible to verify every assumption about the information. Machine learning is a great solution for finding concealed correlations or relationships between data, because it runs at machine scale and works very well with large data sets. The more data we have, the more useful the machine learning algorithm is, because it “learns” from the existing data and applies the rules it finds to new entries. In this paper, we present some machine learning algorithms and techniques used in big data.

  11. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
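
    As a minimal taste of the reinforcement-learning side of the book, the sketch below runs tabular Q-learning on a toy corridor environment; the environment and hyperparameters are invented for illustration.

    ```python
    # Tabular Q-learning sketch: an agent learns to walk right along a short corridor
    # to reach a rewarding terminal state. The environment is a toy stand-in for the
    # simulation models discussed in the text.
    import numpy as np

    n_states, n_actions = 6, 2          # states 0..5, actions: 0 = left, 1 = right
    alpha, gamma, epsilon = 0.1, 0.95, 0.1
    rng = np.random.default_rng(9)
    Q = np.zeros((n_states, n_actions))

    for _ in range(2000):               # episodes
        s = 0
        while s != n_states - 1:        # state 5 is the rewarding terminal state
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print("greedy policy (1 = move right):", Q.argmax(axis=1))
    ```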

  12. Overview of deep learning in medical imaging.

    Science.gov (United States)

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a
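
    To illustrate the "image input" class of models discussed above, the sketch below builds a very small convolutional network with Keras (assuming TensorFlow is installed) and fits it on random arrays standing in for medical image patches; it is not the MTANN or any specific model from the review.

    ```python
    # Minimal CNN sketch: image data go in directly, with no hand-crafted feature
    # extraction. The random "images" and labels below are placeholders only.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    X = np.random.rand(100, 32, 32, 1).astype("float32")   # fake grayscale patches
    y = np.random.randint(0, 2, 100)                        # fake lesion / no-lesion labels

    model = keras.Sequential([
        layers.Input(shape=(32, 32, 1)),
        layers.Conv2D(8, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=1, batch_size=16, verbose=0)
    model.summary()
    ```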

  13. Figure analysis: A teaching technique to promote visual literacy and active Learning.

    Science.gov (United States)

    Wiles, Amy M

    2016-07-08

    Learning often improves when active learning techniques are used in place of traditional lectures. For many of these techniques, however, students are expected to apply concepts that they have already grasped. A challenge, therefore, is how to incorporate active learning into the classroom of courses with heavy content, such as molecular-based biology courses. An additional challenge is that visual literacy is often overlooked in undergraduate science education. To address both of these challenges, a technique called figure analysis was developed and implemented in three different levels of undergraduate biology courses. Here, students learn content while gaining practice in interpreting visual information by discussing figures with their peers. Student groups also make connections between new and previously learned concepts on their own while in class. The instructor summarizes the material for the class only after students grapple with it in small groups. Students reported a preference for learning by figure analysis over traditional lecture, and female students in particular reported increased confidence in their analytical abilities. There is not a technology requirement for this technique; therefore, it may be utilized both in classrooms and in nontraditional spaces. Additionally, the amount of preparation required is comparable to that of a traditional lecture. © 2016 by The International Union of Biochemistry and Molecular Biology, 44(4):336-344, 2016.

  14. Intra-articular sodium hyaluronate 2 mL versus physiological saline 20 mL versus physiological saline 2 mL for painful knee osteoarthritis: a randomized clinical trial

    DEFF Research Database (Denmark)

    Lundsgaard, C.; Dufour, N.; Fallentin, E.

    2008-01-01

    Objective: Methodological constraints weaken previous evidence on intra-articular viscosupplementation and physiological saline distention for osteoarthritis. We conducted a randomized, patient- and observer-blind trial to evaluate these interventions in patients with painful knee osteoarthritis. Methods: We centrally randomized 251 patients with knee osteoarthritis to four weekly intra-articular injections of sodium hyaluronate 2 mL (Hyalgan(R) 10.3 mg/mL) versus physiological saline 20 mL (distention) versus physiological saline 2 mL (placebo) and followed patients for 26 weeks. Inclusion... Outcome measures included the Knee Injury and Osteoarthritis Outcome Score (KOOS), Osteoarthritis Research Society International (OARSI) criteria, and global assessment of the patient's condition. Results: The mean age of the patients was 69.4 years; 55% were women. The effects of hyaluronate 2 mL, physiological saline 20 m...

  15. Techniques to Promote Reflective Practice and Empowered Learning.

    Science.gov (United States)

    Nguyen-Truong, Connie Kim Yen; Davis, Andra; Spencer, Cassius; Rasmor, Melody; Dekker, Lida

    2018-02-01

    Health care environments are fraught with fast-paced critical demands and ethical dilemmas requiring decisive nursing actions. Nurse educators must prepare nursing students to practice skills, behaviors, and attitudes needed to meet the challenges of health care demands. Evidence-based, innovative, multimodal techniques with novice and seasoned nurses were incorporated into a baccalaureate (BSN) completion program (RN to-BSN) to deepen learning, complex skill building, reflective practice, teamwork, and compassion toward the experiences of others. Principles of popular education for engaged teaching-learning were applied. Nursing students experience equitable access to content through co-constructing knowledge with four creative techniques. Four creative techniques include poem reading aloud to facilitate connectedness; mindfulness to cultivate self-awareness; string figure activities to demonstrate indigenous knowledge and teamwork; and cartooning difficult subject matter. Nursing school curricula can promote a milieu for developing organizational skills to manage simultaneous priorities, practice reflectively, and develop empathy and the authenticity that effective nursing requires. [J Nurs Educ. 2018;57(2):115-120.]. Copyright 2018, SLACK Incorporated.

  16. MPS and ML

    Science.gov (United States)

    ... individuals about MPS and ML, the National MPS Society has created a central location for more information on MPS (the MPS Library). The National MPS Society exists to cure, support and advocate for MPS ...

  17. Higgs Machine Learning Challenge 2014

    CERN Document Server

    Olivier, A-P; Bourdarios, C ; LAL / Orsay; Goldfarb, S ; University of Michigan

    2014-01-01

    High Energy Physics (HEP) has been using Machine Learning (ML) techniques such as boosted decision trees and neural nets since the 90s. These techniques are now routinely used for difficult tasks such as the Higgs boson search. Nevertheless, formal connections between the two research fields are rather scarce, with some exceptions such as the AppStat group at LAL, founded in 2006. In collaboration with INRIA, AppStat promotes interdisciplinary research on machine learning, computational statistics, and high-energy particle and astroparticle physics. We are now exploring new ways to improve the cross-fertilization of the two fields by setting up a data challenge, following the footsteps of, among others, the astrophysics community (dark matter and galaxy zoo challenges) and neurobiology (connectomics and decoding the human brain). The organization committee consists of ATLAS physicists and machine learning researchers. The Challenge will run from Monday 12th to September 2014.

  18. Comparative exploration of learning styles and teaching techniques between Thai and Vietnamese EFL students and instructors

    Directory of Open Access Journals (Sweden)

    Supalak Nakhornsri

    2016-09-01

    Full Text Available Learning styles have been a particular focus of a number of researchers over the past decades. Findings from various studies researching into how students learn highlight significant relationships between learners’ styles of learning and their language learning processes and achievement. This research focuses on a comparative analysis of the preferences of English learning styles and teaching techniques perceived by students from Thailand and Vietnam, and the teaching styles and techniques practiced by their instructors. The purposes were 1) to investigate the learning styles and teaching techniques students from both countries preferred, 2) to investigate the compatibility of the teaching styles and techniques practiced by instructors and those preferred by the students, 3) to specify the learning styles and teaching techniques students with a high level of English proficiency preferred, and 4) to investigate the similarities of Thai and Vietnamese students’ preferences for learning styles and teaching techniques. The sample consisted of two main groups: 1) undergraduate students from King Mongkut’s University of Technology North Bangkok (KMUTNB), Thailand and Thai Nguyen University (TNU), Vietnam and 2) English instructors from both institutions. The instruments employed comprised the Students’ Preferred English Learning Style and Teaching Technique Questionnaire and the Teachers’ Practiced English Teaching Style and Technique Questionnaire. The collected data were analyzed using arithmetic means and standard deviation. The findings can contribute to curriculum development and assist teachers to teach outside their comfort level to match the students’ preferred learning styles. In addition, the findings could better promote the courses provided for students. By understanding the learning style make-up of the students enrolled in the courses, faculty can adjust their modes of content delivery to match student preferences and maximize

  19. The training and learning process of transseptal puncture using a modified technique.

    Science.gov (United States)

    Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom

    2013-12-01

    As the transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, initially reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization and one experienced trainer as a controller. We analysed the following parameters: one-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum ultimately could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economic, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest area of the learning curve.
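
    The record above fits an inverse curve to per-case procedure times to characterise the learning plateau and learning rate. As a rough illustration only, the sketch below fits the simple inverse model t(n) = plateau + b / n to made-up procedure times with SciPy; the functional form, data, and parameter names are assumptions, not the exact model used in the study.

        # Illustrative sketch only: fit an inverse learning curve t(n) = plateau + b / n
        # to per-case procedure times. Data are synthetic; the model form is assumed.
        import numpy as np
        from scipy.optimize import curve_fit

        case_number = np.arange(1, 41)                         # 40 consecutive cases
        proc_time = 1.2 + 3.0 / case_number + np.random.default_rng(1).normal(0, 0.2, 40)

        def inverse_curve(n, plateau, b):
            return plateau + b / n

        (plateau, b), _ = curve_fit(inverse_curve, case_number, proc_time, p0=(1.0, 1.0))
        start_time = inverse_curve(1, plateau, b)              # predicted first-case time
        print(f"estimated plateau: {plateau:.2f} min, first-case time: {start_time:.2f} min")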

  20. The XBabelPhish MAGE-ML and XML translator.

    Science.gov (United States)

    Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A

    2008-01-18

    MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.

  1. The difference of contrast effects of myelography in normal dogs: Comparison of iohexol (180 mgI/ml), iohexol (240 mgI/ml) and iotrolan (240 mgI/ml)

    International Nuclear Information System (INIS)

    Shimizu, J.; Yamada, K.; Kishimoto, M.; Iwasaki, T.; Miyake, Y.

    2008-01-01

    The contrast effects of three different contrast media preparations (iohexol 180 mgI/ml, iohexol 240 mgI/ml and iotrolan 240 mgI/ml) in conventional and CT myelography were compared. Three beagle dogs were used and the study employed a cross-over method (total of 9) for each contrast medium. The results of CT myelography showed that the contrast effect of iohexol (180 mgI/ml), which had low viscosity, was highest in cranial sites, and the contrast effect of high-viscosity iotrolan (240 mgI/ml) was highest in caudal sites 5 min after injection of the contrast media preparations. This shows that the diffusion of contrast media preparations in the subarachnoid space is influenced by viscosity. The results of conventional myelography also showed that the diffusion of contrast media preparations is influenced by viscosity. Therefore, it is important to identify the location of spinal lesions in veterinary practice, and a low-viscosity contrast medium preparation with a widely spreading contrast effect is considered suitable for myelography

  2. QuakeML 2.0: Recent developments

    Science.gov (United States)

    Euchner, Fabian; Kästli, Philipp; Heiniger, Lukas; Saul, Joachim; Schorlemmer, Danijel; Clinton, John

    2016-04-01

    QuakeML is a community-backed data model for seismic event parameter description. Its current version 1.2, released in 2013, has become the gold standard for parametric data dissemination at seismological data centers, and has been adopted as an FDSN standard. It is supported by several popular software products and data services, such as FDSN event web services, QuakePy, and SeisComP3. Work on the successor version 2.0 is under way since 2015. The scope of QuakeML has been expanded beyond event parameter description. Thanks to a modular architecture, many thematic packages have been added, which cover peak ground motion, site and station characterization, hydraulic parameters of borehole injection processes, and macroseismics. The first three packages can be considered near final and implementations of program codes and SQL databases are in productive use at various institutions. A public community review process has been initiated in order to turn them into community-approved standards. The most recent addition is a package for single station quake location, which allows a detailed probabilistic description of event parameters recorded at a single station. This package adds some information elements such as angle of incidence, frequency-dependent phase picks, and dispersion relations. The package containing common data types has been extended with a generic type for probability density functions. While on Earth, single station methods are niche applications, they are of prominent interest in planetary seismology, e.g., the NASA InSight mission to Mars. So far, QuakeML is lacking a description of seismic instrumentation (inventory). There are two existing standards of younger age (FDSN StationXML and SeisComP3 Inventory XML). We discuss their respective strengths, differences, and how they could be combined into an inventory package for QuakeML, thus allowing full interoperability with other QuakeML data types. QuakeML is accompanied by QuakePy, a Python package

  3. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
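
    This record (and its duplicates below) describes maximum likelihood fitting of learning-curve models to accident occurrence data. As an illustration only, the following sketch fits a hypothetical exponentially decaying accident-rate model to invented count data by maximising a Poisson likelihood; the model form, data and parameter names are assumptions and not the paper's actual models or results.

        # Illustrative sketch: fit lambda(T) = lam0 * exp(-k*T) over cumulative
        # reactor-years T to accident counts by Poisson maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical data: reactor-years accumulated per period, accidents per period.
        exposure = np.array([100.0, 300.0, 600.0, 1000.0, 1500.0])
        accidents = np.array([2, 1, 1, 0, 0])

        t_end = np.cumsum(exposure)            # cumulative reactor-years at period end
        t_start = t_end - exposure

        def neg_log_likelihood(log_params):
            lam0, k = np.exp(log_params)       # keep both parameters positive
            # Expected accidents per period for the decaying rate lambda(T)
            mu = lam0 / k * (np.exp(-k * t_start) - np.exp(-k * t_end))
            mu = np.clip(mu, 1e-12, None)
            return np.sum(mu - accidents * np.log(mu))

        fit = minimize(neg_log_likelihood, x0=np.log([0.04, 1e-3]), method="Nelder-Mead")
        lam0_hat, k_hat = np.exp(fit.x)
        print(f"initial rate: {lam0_hat:.4f} per reactor-year, learning constant: {k_hat:.2e}")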

  4. Learning-curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  5. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  6. Competitive debate classroom as a cooperative learning technique for the human resources subject

    Directory of Open Access Journals (Sweden)

    Guillermo A. SANCHEZ PRIETO

    2018-01-01

    Full Text Available The paper presents an academic debate model as a cooperative learning technique for teaching human resources at university. The general objective of this paper is to determine whether academic debate can be included in the category of cooperative learning. The specific objective is to present a model for implementing this technique. Thus the first part of the paper presents the concept of cooperative learning and its main characteristics. The second part presents the debate model believed to qualify as cooperative learning. The last part concludes with the characteristics of the model that do or do not match different aspects of cooperative learning.

  7. Analysing CMS transfers using Machine Learning techniques

    CERN Document Server

    Diotalevi, Tommaso

    2016-01-01

    LHC experiments transfer more than 10 PB/week between all grid sites using the FTS transfer service. In particular, CMS manages almost 5 PB/week of FTS transfers with PhEDEx (Physics Experiment Data Export). FTS sends metrics about each transfer (e.g. transfer rate, duration, size) to a central HDFS storage at CERN. The work done during these three months as a Summer Student involved the usage of ML techniques, using a CMS framework called DCAFPilot, to process these new data and generate predictions of transfer latencies on all links between Grid sites. This analysis will provide, as a future service, the information necessary to proactively identify, and possibly fix, latency-affected transfers over the WLCG.

  8. CLAS App ML

    NARCIS (Netherlands)

    Maher, Bridget; Hartkopf, Kathleen; Stieger, Lina; Schroeder, Hanna; Sopka, Sasa; Orrego, Carola; Drachsler, Hendrik

    2014-01-01

    This is a multi-language (ML) update of the CLAS App, originally designed by Bridget Maher from the School of Medicine at University College Cork, Ireland. The current version has an improved counting mechanism and has been translated from English to Spanish, Catalan and German languages within the

  9. An introduction and overview of machine learning in neurosurgical care.

    Science.gov (United States)

    Senders, Joeky T; Zaki, Mark M; Karhade, Aditya V; Chang, Bliss; Gormley, William B; Broekman, Marike L; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Machine learning (ML) is a branch of artificial intelligence that allows computers to learn from large complex datasets without being explicitly programmed. Although ML is already widely manifest in our daily lives in various forms, the considerable potential of ML has yet to find its way into mainstream medical research and day-to-day clinical care. The complex diagnostic and therapeutic modalities used in neurosurgery provide a vast amount of data that is ideally suited for ML models. This systematic review explores ML's potential to assist and improve neurosurgical care. A systematic literature search was performed in the PubMed and Embase databases to identify all potentially relevant studies up to January 1, 2017. All studies were included that evaluated ML models assisting neurosurgical treatment. Of the 6,402 citations identified, 221 studies were selected after subsequent title/abstract and full-text screening. In these studies, ML was used to assist surgical treatment of patients with epilepsy, brain tumors, spinal lesions, neurovascular pathology, Parkinson's disease, traumatic brain injury, and hydrocephalus. Across multiple paradigms, ML was found to be a valuable tool for presurgical planning, intraoperative guidance, neurophysiological monitoring, and neurosurgical outcome prediction. ML has started to find applications aimed at improving neurosurgical care by increasing the efficiency and precision of perioperative decision-making. A thorough validation of specific ML models is essential before implementation in clinical neurosurgical care. To bridge the gap between research and clinical care, practical and ethical issues should be considered parallel to the development of these techniques.

  10. Applying machine learning techniques for forecasting flexibility of virtual power plants

    DEFF Research Database (Denmark)

    MacDougall, Pamela; Kosek, Anna Magdalena; Bindner, Henrik W.

    2016-01-01

    ... approach to investigating the longevity of aggregated response of a virtual power plant using historic bidding and aggregated behaviour with machine learning techniques. The two supervised machine learning techniques investigated and compared in this paper are multivariate linear regression and single ... network as well as the multi-variant linear regression. It is found that it is possible to estimate the longevity of flexibility with machine learning. The linear regression algorithm is, on average, able to estimate the longevity with a 15% error. However, there was a significant improvement with the ANN ... algorithm achieving, on average, a 5.3% error. This is lowered to 2.4% when learning for the same virtual power plant. With this information it would be possible to accurately offer residential VPP flexibility for market operations to safely avoid causing further imbalances and financial penalties....
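
    The record above compares multivariate linear regression with a neural network for predicting how long aggregated flexibility can be sustained. As a structural illustration only, the sketch below trains both model families on synthetic data with scikit-learn; the feature names, data and error metric are assumptions, not the paper's setup.

        # Illustrative sketch: linear regression vs. a single-hidden-layer neural
        # network for predicting flexibility longevity (minutes). Data are synthetic.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_percentage_error

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.uniform(0, 24, n),        # hour of day (assumed feature)
            rng.uniform(-5, 30, n),       # outdoor temperature [C] (assumed feature)
            rng.uniform(0, 1, n),         # requested power reduction, normalised
        ])
        y = 120 + 3 * X[:, 1] - 40 * X[:, 2] + rng.normal(0, 5, n)   # synthetic target

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        models = [("linear regression", LinearRegression()),
                  ("single-hidden-layer ANN",
                   MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))]
        for name, model in models:
            model.fit(X_tr, y_tr)
            err = mean_absolute_percentage_error(y_te, model.predict(X_te))
            print(f"{name}: {100 * err:.1f}% mean absolute percentage error")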

  11. Machine learning techniques for persuasion detection in conversation

    OpenAIRE

    Ortiz, Pedro.

    2010-01-01

    Approved for public release; distribution is unlimited. We determined that it is possible to automatically detect persuasion in conversations using three traditional machine learning techniques: naive Bayes, maximum entropy, and support vector machines. These results are the first of their kind and serve as a baseline for all future work in this field. The three techniques consistently outperformed the baseline F-score, but not at a level that would be useful for real-world applications. The...

  12. A Machine Learning Approach for Hot-Spot Detection at Protein-Protein Interfaces

    NARCIS (Netherlands)

    Melo, Rita; Fieldhouse, Robert; Melo, André; Correia, João D G; Cordeiro, Maria Natália D S; Gümüş, Zeynep H; Costa, Joaquim; Bonvin, Alexandre M J J; de Sousa Moreira, Irina

    2016-01-01

    Understanding protein-protein interactions is a key challenge in biochemistry. In this work, we describe a more accurate methodology to predict Hot-Spots (HS) in protein-protein interfaces from their native complex structure compared to previous published Machine Learning (ML) techniques. Our model

  13. Absorption kinetics of two highly concentrated preparations of growth hormone: 12 IU/ml compared to 56 IU/ml

    DEFF Research Database (Denmark)

    Laursen, Torben; Susgaard, Søren; Jensen, Flemming Steen

    1994-01-01

    The purpose of this study was to compare the relative bioavailability of two highly concentrated (12 IU/ml versus 56 IU/ml) formulations of biosynthetic human growth hormone administered subcutaneously. After pretreatment with growth hormone for at least four weeks, nine growth hormone deficient patients with a mean age of 26.2 years (range 17-43) were studied two times in a randomized design, the two studies being separated by at least one week. At the start of each study period (7 p.m.), growth hormone was injected subcutaneously in a dosage of 3 IU/m2. The 12 IU/ml preparation of growth hormone was administered on one occasion...

  14. An empirical study on the performance of spectral manifold learning techniques

    DEFF Research Database (Denmark)

    Mysling, Peter; Hauberg, Søren; Pedersen, Kim Steenstrup

    2011-01-01

    In recent years, there has been a surge of interest in spectral manifold learning techniques. Despite the interest, only little work has focused on the empirical behavior of these techniques. We construct synthetic data of variable complexity and observe the performance of the techniques as they ...

  15. Comparative study of induction of labour with Foley’s catheter inflated to 30 mL versus 60 mL

    OpenAIRE

    Indira I; Latha G; Lakshmi Narayanamma V

    2016-01-01

    Background: The ripeness of the cervix is an important determinant of the success of induction of labour. One of the mechanical methods of cervical ripening is the use of a transcervical Foley catheter. In this study we compared the efficacy in induction of labour of two insufflation volumes of Foley catheter bulb 30 mL and 60mL. Methods: This was a randomized, single-blind study conducted in 100 women, randomly allocated to the 30 mL group (n=50) and 60 mL group (n=50). Foley’s cath...

  16. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    Science.gov (United States)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
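
    The record above replaces lookup-table energy estimation with a machine-learned regressor. Purely as an illustration of that idea, the sketch below trains a gradient boosting regressor on a few hypothetical detector-level summary features; the features, data and resolution figure are invented and do not reflect the TITUS study.

        # Illustrative sketch: regress lepton energy from hypothetical detector features.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 5000
        true_energy = rng.uniform(100, 1000, n)                    # MeV
        total_charge = true_energy * rng.normal(1.0, 0.05, n)      # roughly proportional
        n_hit_pmts = (true_energy * rng.normal(0.8, 0.1, n)).astype(int)
        dist_to_wall = rng.uniform(0.5, 15.0, n)                   # metres

        X = np.column_stack([total_charge, n_hit_pmts, dist_to_wall])
        X_tr, X_te, y_tr, y_te = train_test_split(X, true_energy, random_state=0)

        model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
        resolution = np.std((model.predict(X_te) - y_te) / y_te)
        print(f"fractional energy resolution: {100 * resolution:.1f}%")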

  17. An Effective Performance Analysis of Machine Learning Techniques for Cardiovascular Disease

    Directory of Open Access Journals (Sweden)

    Vinitha DOMINIC

    2015-03-01

    Full Text Available Machine learning techniques will help in deriving hidden knowledge from clinical data, which can be of great benefit for society, such as reducing the number of clinical trials required for precise diagnosis of a disease of a person etc. Various areas of study are available in the healthcare domain, like cancer, diabetes, drugs etc. This paper focuses on a heart disease dataset and how machine learning techniques can help in understanding the level of risk associated with heart diseases. Initially, data are preprocessed, then analysis is done in two stages: in the first stage, feature selection techniques are applied to 13 commonly used attributes, and in the second stage, feature selection techniques are applied to 75 attributes which are related to the anatomic structure of the heart, like blood vessels of the heart, arteries etc. Finally, validation of the reduced set of features using an exhaustive list of classifiers is done. In parallel, a study of the anatomy of the heart is done using the identified features and the characteristics of each class are understood. It is observed that these reduced sets of features are anatomically relevant. Thus, it can be concluded that applying machine learning techniques on clinical data is beneficial and necessary.
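
    The record above describes a two-stage pipeline: select the most informative attributes, then validate the reduced feature set against a list of classifiers. A minimal sketch of that pipeline structure, using univariate feature selection and three standard classifiers on synthetic stand-in data (the attribute values and results are placeholders, not the study's), is shown below.

        # Illustrative sketch: feature selection followed by classifier validation.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 13))                    # 13 commonly used attributes
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)

        for clf in [LogisticRegression(max_iter=1000), RandomForestClassifier(), SVC()]:
            pipe = make_pipeline(SelectKBest(f_classif, k=5), clf)   # keep 5 best features
            acc = cross_val_score(pipe, X, y, cv=10).mean()
            print(f"{clf.__class__.__name__}: {acc:.2f} mean CV accuracy")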

  18. Description of the sodium loop ML-3

    International Nuclear Information System (INIS)

    Torre, de la M.; Melches, I; Lapena, J.; Martinez, T.A.; Miguel, de D.; Duran, F.

    1979-01-01

    The sodium loop ML-3 is described. The main objective of this facility is to obtain mechanical property data for LMFBR materials in creep and low cycle fatigue testing in flowing sodium. ML-3 includes 10 test stations for creep and two for fatigue. It is possible to operate simultaneously at three different temperature levels. The maximum operating temperature is 650 deg C at flow velocities up to 5 m/s. The ML-3 loop has been located in a manner that permits the fill/dump tank cover gas and security systems to be shared with an earlier circuit, the ML-1. (author)

  19. E-Learning System Using Segmentation-Based MR Technique for Learning Circuit Construction

    Science.gov (United States)

    Takemura, Atsushi

    2016-01-01

    This paper proposes a novel e-Learning system using the mixed reality (MR) technique for technical experiments involving the construction of electronic circuits. The proposed system comprises experimenters' mobile computers and a remote analysis system. When constructing circuits, each learner uses a mobile computer to transmit image data from the…

  20. RuleML-Based Learning Object Interoperability on the Semantic Web

    Science.gov (United States)

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  1. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  2. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    Directory of Open Access Journals (Sweden)

    Philippe Burlina

    Full Text Available To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  3. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    Science.gov (United States)

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.
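
    The conventional ML arm of this study is a random forest over "engineered" per-image features, assessed with repeated cross-validation. The sketch below shows only that structure; the feature values and labels are random placeholders rather than real ultrasound measurements, and the folds and estimator counts are assumptions.

        # Illustrative sketch: random forest with repeated stratified 10-fold CV.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(320, 12))                       # placeholder image features
        y = (X[:, 0] + rng.normal(0, 1, 320) > 0).astype(int)  # 0 = normal, 1 = affected

        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
        scores = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                                 X, y, cv=cv, scoring="accuracy")
        print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")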

  4. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a sound simulator equipped mannequin is used to group teach auscultation techniques via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated for verifying the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  5. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    Science.gov (United States)

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
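
    The method above estimates the time difference between two vibration signals via generalised cross-correlation with a modified ML prefilter. As a loose illustration of that family of estimators, the sketch below applies a textbook coherence-based (ML-type) weighting with a simple regularisation constant eps; the paper's exact modified prefilter and its optimal regularisation factor differ, and the signals here are synthetic.

        # Illustrative sketch: time-delay estimation by generalised cross-correlation
        # with a coherence-based weighting plus a regularisation constant (assumed).
        import numpy as np
        from scipy.signal import csd, welch

        fs = 10_000                                   # sampling rate [Hz]
        rng = np.random.default_rng(0)
        leak = rng.normal(size=30_000)                # broadband leak-like noise
        delay = 40                                    # true delay in samples
        x1 = leak[delay:] + 0.1 * rng.normal(size=30_000 - delay)   # nearer sensor
        x2 = leak[:-delay] + 0.1 * rng.normal(size=30_000 - delay)  # farther sensor (delayed)

        f, g12 = csd(x1, x2, fs=fs, nperseg=1024)
        _, g11 = welch(x1, fs=fs, nperseg=1024)
        _, g22 = welch(x2, fs=fs, nperseg=1024)

        coh = np.abs(g12) ** 2 / (g11 * g22)          # estimated coherence
        eps = 1e-3                                    # regularisation constant (assumed)
        weight = coh / (np.abs(g12) * (1.0 - coh) + eps)

        r = np.fft.irfft(weight * g12)                # weighted cross-correlation
        lag = np.argmax(r)                            # positive lags sit at the start of r
        print(f"estimated delay: {lag} samples ({lag / fs * 1e3:.2f} ms)")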

  6. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    Directory of Open Access Journals (Sweden)

    Jihoon Choi

    2017-09-01

    Full Text Available This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.

  7. Clinical application of I-123 MIBG cardiac imaging

    International Nuclear Information System (INIS)

    Kang, Do Young

    2004-01-01

    Cardiac neurotransmission imaging allows in vivo assessment of presynaptic reuptake, neurotransmitter storage and postsynaptic receptors. Among the various neurotransmitter tracers, I-123 MIBG is the most available and relatively well-established. Metaiodobenzylguanidine (MIBG) is an analogue of the false neurotransmitter guanethidine. It is taken up into adrenergic neurons by the uptake-1 mechanism, the same as norepinephrine. Tagged with I-123, it can be used to image sympathetic function in various organs including the heart with planar or SPECT techniques. I-123 MIBG imaging has the unique advantage of evaluating myocardial neuronal activity where the heart has no significant structural abnormality, or even no functional derangement measurable with other conventional examinations. In patients with cardiomyopathy and heart failure, this imaging is the most sensitive technique to predict prognosis and the treatment response to beta-blockers or ACE inhibitors. In diabetic patients, it allows very early detection of autonomic neuropathy. In patients with dangerous arrhythmias such as ventricular tachycardia or fibrillation, MIBG imaging may be the only abnormal result among various examinations. In patients with ischemic heart disease, sympathetic derangement may be used as a method of risk stratification. In heart-transplanted patients, sympathetic reinnervation is well evaluated. Adriamycin-induced cardiotoxicity is detected earlier with sympathetic dysfunction than with ventricular dysfunction. Neurodegenerative disorders such as Parkinson's disease or dementia with Lewy bodies also involve cardiac sympathetic dysfunction. Noninvasive assessment of cardiac sympathetic nerve activity with I-123 MIBG imaging may improve understanding of the pathophysiology of cardiac disease and contribute to predicting survival and therapy efficacy

  8. Clinical application of I-123 MIBG cardiac imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Do Young [College of Medicine, Donga Univ., Busan (Korea, Republic of)

    2004-10-01

    Cardiac neurotransmission imaging allows in vivo assessment of presynaptic reuptake, neurotransmitter storage and postsynaptic receptors. Among the various neurotransmitter tracers, I-123 MIBG is the most available and relatively well-established. Metaiodobenzylguanidine (MIBG) is an analogue of the false neurotransmitter guanethidine. It is taken up into adrenergic neurons by the uptake-1 mechanism, the same as norepinephrine. Tagged with I-123, it can be used to image sympathetic function in various organs including the heart with planar or SPECT techniques. I-123 MIBG imaging has the unique advantage of evaluating myocardial neuronal activity where the heart has no significant structural abnormality, or even no functional derangement measurable with other conventional examinations. In patients with cardiomyopathy and heart failure, this imaging is the most sensitive technique to predict prognosis and the treatment response to beta-blockers or ACE inhibitors. In diabetic patients, it allows very early detection of autonomic neuropathy. In patients with dangerous arrhythmias such as ventricular tachycardia or fibrillation, MIBG imaging may be the only abnormal result among various examinations. In patients with ischemic heart disease, sympathetic derangement may be used as a method of risk stratification. In heart-transplanted patients, sympathetic reinnervation is well evaluated. Adriamycin-induced cardiotoxicity is detected earlier with sympathetic dysfunction than with ventricular dysfunction. Neurodegenerative disorders such as Parkinson's disease or dementia with Lewy bodies also involve cardiac sympathetic dysfunction. Noninvasive assessment of cardiac sympathetic nerve activity with I-123 MIBG imaging may improve understanding of the pathophysiology of cardiac disease and contribute to predicting survival and therapy efficacy.

  9. Journaling; an active learning technique.

    Science.gov (United States)

    Blake, Tim K

    2005-01-01

    Journaling is a method frequently discussed in nursing literature and educational literature as an active learning technique that is meant to enhance reflective practice. Reflective practice is a means of self-examination that involves looking back over what has happened in practice in an effort to improve, or encourage professional growth. Some of the benefits of reflective practice include discovering meaning, making connections between experiences and the classroom, instilling values of the profession, gaining the perspective of others, reflection on professional roles, and development of critical thinking. A review of theory and research is discussed, as well as suggestions for implementation of journaling into coursework.

  10. Novel Breast Imaging and Machine Learning: Predicting Breast Lesion Malignancy at Cone-Beam CT Using Machine Learning Techniques.

    Science.gov (United States)

    Uhlig, Johannes; Uhlig, Annemarie; Kunze, Meike; Beissbarth, Tim; Fischer, Uwe; Lotz, Joachim; Wienbeck, Susanne

    2018-05-24

    The purpose of this study is to evaluate the diagnostic performance of machine learning techniques for malignancy prediction at breast cone-beam CT (CBCT) and to compare them to human readers. Five machine learning techniques, including random forests, back propagation neural networks (BPN), extreme learning machines, support vector machines, and K-nearest neighbors, were used to train diagnostic models on a clinical breast CBCT dataset with internal validation by repeated 10-fold cross-validation. Two independent blinded human readers with profound experience in breast imaging and breast CBCT analyzed the same CBCT dataset. Diagnostic performance was compared using AUC, sensitivity, and specificity. The clinical dataset comprised 35 patients (American College of Radiology density type C and D breasts) with 81 suspicious breast lesions examined with contrast-enhanced breast CBCT. Forty-five lesions were histopathologically proven to be malignant. Among the machine learning techniques, BPNs provided the best diagnostic performance, with AUC of 0.91, sensitivity of 0.85, and specificity of 0.82. The diagnostic performance of the human readers was AUC of 0.84, sensitivity of 0.89, and specificity of 0.72 for reader 1 and AUC of 0.72, sensitivity of 0.71, and specificity of 0.67 for reader 2. AUC was significantly higher for BPN when compared with both reader 1 (p = 0.01) and reader 2 (p ...). Machine learning techniques provide a high and robust diagnostic performance in the prediction of malignancy in breast lesions identified at CBCT. BPNs showed the best diagnostic performance, surpassing human readers in terms of AUC and specificity.

  11. Effect of active learning techniques on students' choice of approach ...

    African Journals Online (AJOL)

    The purpose of this article is to report on empirical work, related to a techniques module, undertaken with the dental students of the University of the Western Cape, South Africa. I will relate how a range of different active learning techniques (tutorials; question papers and mock tests) assisted students to adopt a deep ...

  12. Instructional Television: Visual Production Techniques and Learning Comprehension.

    Science.gov (United States)

    Silbergleid, Michael Ian

    The purpose of this study was to determine if increasing levels of complexity in visual production techniques would increase the viewer's learning comprehension and the degree of likeness expressed for a college level instructional television program. A total of 119 mass communications students at the University of Alabama participated in the…

  13. QuakeML - An XML Schema for Seismology

    Science.gov (United States)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.

  14. Using DSM and MDM Methodologies to Analyze Structural SysML Models

    OpenAIRE

    Maisenbacher, S.; Kernschmidt, Konstantin; Kasperek, Daniel; Vogel-Heuser, B.; Maurer, M.

    2014-01-01

    Matrices and graph-based representations are commonly used visual models of system structures. Depending on the objective of the observer, both representations offer different opportunities and advantages. A standardized graph-based modeling language is SysML, while the design structure matrix (DSM) and the multiple domain matrix (MDM) are typical matrices used during the development of complex systems. Although both modeling techniques are widespread, up to now they are hardly used conjoint...

  15. Machine-Learning-Based Future Received Signal Strength Prediction Using Depth Images for mmWave Communications

    OpenAIRE

    Okamoto, Hironao; Nishio, Takayuki; Nakashima, Kota; Koda, Yusuke; Yamamoto, Koji; Morikura, Masahiro; Asai, Yusuke; Miyatake, Ryo

    2018-01-01

    This paper discusses a machine-learning (ML)-based future received signal strength (RSS) prediction scheme using depth camera images for millimeter-wave (mmWave) networks. The scheme provides the future RSS prediction of any mmWave links within the camera's view, including links where nodes are not transmitting frames. This enables network controllers to conduct network operations before line-of-sight path blockages degrade the RSS. Using the ML techniques, the prediction scheme automatically...

  16. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  17. Proceedings ML Family/OCaml Users and Developers workshops

    OpenAIRE

    Kiselyov, Oleg; Garrigue, Jacques

    2015-01-01

    This volume collects the extended versions of selected papers originally presented at the two ACM SIGPLAN workshops: ML Family Workshop 2014 and OCaml 2014. Both were affiliated with ICFP 2014 and took place on two consecutive days, on September 4 and 5, 2014 in Gothenburg, Sweden. The ML Family workshop aims to recognize the entire extended family of ML and ML-like languages: languages that are Higher-order, Typed, Inferred, and Strict. It provides the forum to discuss common issues, both pr...

  18. BER PERFORMANCE COMPARISON OF MIMO SYSTEMS USING OSTBC WITH ZF AND ML DECODING

    Directory of Open Access Journals (Sweden)

    Zenitha Rehman

    2014-12-01

    Full Text Available Multiple Input Multiple Output (MIMO) systems with multiple antenna elements at both transmitter and receiver ends are an efficient solution for wireless communication systems. They provide high data rates by exploiting the spatial domain under the constraints of limited bandwidth and transmit power. Space-Time Block Coding (STBC) is a MIMO transmit strategy which exploits transmit diversity and provides high reliability. Implementation of orthogonal space-time block codes (OSTBCs) for a two transmitter–two receiver system under AWGN (Additive White Gaussian Noise) channel and flat fading channel is performed. Alamouti code is employed for the STBC. The modulation techniques used are BPSK, QPSK and 16-QAM. Decoding is done using the Zero Forcing (ZF) algorithm and Maximum Likelihood (ML) algorithm. The BER Performance of each modulation scheme is compared with the un-coded version of the same. Performance comparison between the two decoding techniques is also done. It is found that ML detection offers a slightly better performance for BPSK and QPSK system than ZF detection.
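
    As a rough illustration of the Alamouti STBC simulated in the record above, the sketch below measures the BER of a 2x1 Alamouti scheme with BPSK over flat Rayleigh fading plus AWGN, using the standard Alamouti combiner (which is the ML detector for this orthogonal code). The paper's 2x2 ZF-versus-ML comparison follows the same structure with more receive antennas; the SNR normalisation here is a simplifying assumption.

        # Illustrative sketch: 2x1 Alamouti STBC with BPSK over Rayleigh fading.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pairs, snr_db = 100_000, 10
        bits = rng.integers(0, 2, size=(n_pairs, 2))
        s = 1.0 - 2.0 * bits                                  # BPSK symbols s1, s2

        h = (rng.normal(size=(n_pairs, 2)) + 1j * rng.normal(size=(n_pairs, 2))) / np.sqrt(2)
        noise_std = np.sqrt(0.5 / (2 * 10 ** (snr_db / 10)))  # per-dimension noise (assumed normalisation)
        noise = noise_std * (rng.normal(size=(n_pairs, 2)) + 1j * rng.normal(size=(n_pairs, 2)))

        # Alamouti transmission: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)
        r1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + noise[:, 0]
        r2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + noise[:, 1]

        # Combining step (equivalent to ML detection for this orthogonal code)
        s1_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)
        s2_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)

        decided = np.column_stack([s1_hat.real, s2_hat.real]) < 0    # BPSK decision -> bit
        ber = np.mean(decided != bits.astype(bool))
        print(f"BER at {snr_db} dB: {ber:.2e}")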

  19. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  20. Swarm Intelligence: New Techniques for Adaptive Systems to Provide Learning Support

    Science.gov (United States)

    Wong, Lung-Hsiang; Looi, Chee-Kit

    2012-01-01

    The notion of a system adapting itself to provide support for learning has always been an important issue of research for technology-enabled learning. One approach to provide adaptivity is to use social navigation approaches and techniques which involve analysing data of what was previously selected by a cluster of users or what worked for…

  1. Data mining practical machine learning tools and techniques

    CERN Document Server

    Witten, Ian H

    2005-01-01

    As with any burgeoning technology that enjoys commercial attention, the use of data mining is surrounded by a great deal of hype. Exaggerated reports tell of secrets that can be uncovered by setting algorithms loose on oceans of data. But there is no magic in machine learning, no hidden power, no alchemy. Instead there is an identifiable body of practical techniques that can extract useful information from raw data. This book describes these techniques and shows how they work. The book is a major revision of the first edition that appeared in 1999. While the basic core remains the same

  2. Considerations for Task Analysis Methods and Rapid E-Learning Development Techniques

    Directory of Open Access Journals (Sweden)

    Dr. Ismail Ipek

    2014-02-01

    Full Text Available The purpose of this paper is to provide basic dimensions for rapid training development in e-learning courses in education and business. Principally, it starts with defining task analysis and how to select tasks for analysis and task analysis methods for instructional design. To do this, first, learning and instructional technologies as visions of the future were discussed. Second, the importance of task analysis methods in rapid e-learning was considered, with learning technologies as asynchronous and synchronous e-learning development. Finally, rapid instructional design concepts and e-learning design strategies were defined and clarified with examples, that is, all steps for effective task analysis and rapid training development techniques based on learning and instructional design approaches were discussed, such as m-learning and other delivery systems. As a result, the concept of task analysis, rapid e-learning development strategies and the essentials of online course design were discussed, alongside learner interface design features for learners and designers.

  3. A Learning Method for Neural Networks Based on a Pseudoinverse Technique

    Directory of Open Access Journals (Sweden)

    Chinmoy Pal

    1996-01-01

    Full Text Available A theoretical formulation of a fast learning method based on a pseudoinverse technique is presented. The efficiency and robustness of the method are verified with the help of an Exclusive OR problem and a dynamic system identification of a linear single degree of freedom mass–spring problem. It is observed that, compared with the conventional backpropagation method, the proposed method has a better convergence rate and a higher degree of learning accuracy with a lower equivalent learning coefficient. It is also found that unlike the steepest descent method, the learning capability of which is dependent on the value of the learning coefficient ν, the proposed pseudoinverse based backpropagation algorithm is comparatively robust with respect to its equivalent variable learning coefficient. A combination of the pseudoinverse method and the steepest descent method is proposed for a faster, more accurate learning capability.
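
    The record above describes a pseudoinverse-based learning method verified on the Exclusive OR problem. The sketch below conveys only the general idea of pseudoinverse-based learning, solving the output weights of a single-hidden-layer network for XOR in closed form with the Moore-Penrose pseudoinverse instead of gradient descent; it is not the paper's specific backpropagation/pseudoinverse algorithm.

        # Illustrative sketch: solve output-layer weights for XOR with a pseudoinverse.
        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

        rng = np.random.default_rng(3)
        W_hidden = rng.normal(size=(2, 8))                           # random hidden layer
        b_hidden = rng.normal(size=8)
        H = np.tanh(X @ W_hidden + b_hidden)                         # hidden activations

        W_out = np.linalg.pinv(H) @ y                                # least-squares output weights
        pred = H @ W_out
        print(np.round(pred, 2).ravel())                             # approx [0, 1, 1, 0]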

  4. Model-driven Service Engineering with SoaML

    Science.gov (United States)

    Elvesæter, Brian; Carrez, Cyril; Mohagheghi, Parastoo; Berre, Arne-Jørgen; Johnsen, Svein G.; Solberg, Arnor

    This chapter presents a model-driven service engineering (MDSE) methodology that uses OMG MDA specifications such as BMM, BPMN and SoaML to identify and specify services within a service-oriented architecture. The methodology takes advantage of business modelling practices and provides a guide to service modelling with SoaML. The presentation is case-driven and illuminated using the telecommunication example. The chapter focuses in particular on the use of the SoaML modelling language as a means for expressing service specifications that are aligned with business models and can be realized in different platform technologies.

  5. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    Science.gov (United States)

    S K, Somasundaram; P, Alli

    2017-11-09

    The main complication of diabetes is diabetic retinopathy (DR), a retinal vascular disease that leads to blindness. Regular screening for early DR detection is an intensive, labor- and resource-oriented task; therefore, automatic detection of DR using computational techniques is a valuable solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification process is often poorly performed. Recently, a few research works have been designed for analyzing texture discrimination capacity in FI to distinguish healthy images. However, the feature extraction (FE) process was not performed well due to the high dimensionality. Therefore, to identify retinal features for DR disease diagnosis and early detection, a Machine Learning and Ensemble Classification method, called the Machine Learning Bagging Ensemble Classifier (ML-BEC), is designed. The ML-BEC method comprises two stages. The first stage comprises extraction of the candidate objects from Retinal Images (RI). The candidate objects, or features, for DR disease diagnosis include blood vessels, the optic nerve, neural tissue, the neuroretinal rim, optic disc size, thickness and variance. These features are initially extracted by applying a Machine Learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across high-dimensional images where the images are separated into similar and dissimilar pairs. Then, t-SNE describes a similar probability distribution across the points in the low-dimensional map. This lessens the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage comprises the application of ensemble classifiers to the extracted features for providing accurate analysis of digital FI using machine learning. In this stage, an automatic detection
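
    Structurally, ML-BEC combines a t-SNE embedding of extracted features with an ensemble (bagging) classifier. The sketch below mirrors only that structure on synthetic placeholder data; note that fitting t-SNE on the full dataset before cross-validation leaks information, so this is purely an illustration of the pipeline shape, not a faithful reproduction of the method.

        # Illustrative sketch: t-SNE embedding followed by a bagging ensemble classifier.
        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (150, 20)), rng.normal(1.5, 1, (150, 20))])
        y = np.array([0] * 150 + [1] * 150)            # 0 = healthy, 1 = DR (placeholders)

        X_embedded = TSNE(n_components=2, random_state=0).fit_transform(X)
        clf = BaggingClassifier(n_estimators=50, random_state=0)   # bags of decision trees
        print(cross_val_score(clf, X_embedded, y, cv=10).mean())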

  6. An experimental result of estimating an application volume by machine learning techniques.

    Science.gov (United States)

    Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko

    2015-01-01

    In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and highest flexibility of its development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume can be easily forgotten because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, thereby estimating whether the volume should be adjusted using machine learning techniques via Weka.

  7. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by the growing delay of CPU-based computing after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for some other imaging geometries
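
    A minimal NumPy sketch of the ML-EM update described above, as a toy CPU-only example; the system matrix, simulated counts and iteration count are placeholders, not the paper's GPU/CUDA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((64, 32))               # system matrix: detector bins x image pixels
x_true = rng.random(32)
y = rng.poisson(A @ x_true * 100)      # simulated measured counts

x = np.ones(32)                        # uniform initial image estimate
sens = A.sum(axis=0)                   # sensitivity image: sum_i a_ij
for _ in range(32):                    # 32 ML-EM iterations
    forward = A @ x                    # projection of the current estimate
    ratio = y / np.maximum(forward, 1e-12)
    x = x / sens * (A.T @ ratio)       # backproject the ratio, then normalize
```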

  8. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by the growing delay of CPU-based computing after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for some other imaging geometries

  9. A SysML-based Integration Framework for the Engineering of Mechatronic Systems

    OpenAIRE

    Chami, Muhammad; Seemüller, Holger; Voos, Holger

    2010-01-01

    The engineering discipline of mechatronics is one of the main innovation drivers in industry today. Given the need for an optimal, synergetic integration of the disciplines involved, the engineering process for mechatronic systems faces increasing complexity arising from the interdisciplinary nature of these systems. New methods and techniques have to be developed to deal with these challenges. This document presents an approach for a SysML-based integration framework that s...

  10. Promoting Cooperative Learning in the Classroom: Comparing Explicit and Implicit Training Techniques

    Directory of Open Access Journals (Sweden)

    Anne Elliott

    2003-07-01

    Full Text Available In this study, we investigated whether providing 4th and 5th-grade students with explicit instruction in prerequisite cooperative-learning skills and techniques would enhance their academic performance and promote in them positive attitudes towards cooperative learning. Overall, students who received explicit training outperformed their peers on both the unit project and test and presented more favourable attitudes towards cooperative learning. The findings of this study support the use of explicitly instructing students about the components of cooperative learning prior to engaging in collaborative activities. Implications for teacher-education are discussed.

  11. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    Science.gov (United States)

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k -nearest neighbor ( k -NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
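
    A small sketch of the two fixed-size sliding-window baselines (FNSW and FOSW) that EvenT-ML is compared against, assuming NumPy; the window length, overlap and synthetic signal are arbitrary examples:

```python
import numpy as np

def sliding_windows(signal, size, overlap=0.0):
    """Split a 1-D accelerometer signal into fixed-size windows.
    overlap=0.0 gives non-overlapping windows (FNSW); 0 < overlap < 1 gives FOSW."""
    step = max(1, int(size * (1.0 - overlap)))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

acc = np.random.default_rng(2).normal(size=1000)     # stand-in acceleration magnitude
fnsw = sliding_windows(acc, size=100)                # non-overlapping segments
fosw = sliding_windows(acc, size=100, overlap=0.5)   # 50% overlapping segments
print(len(fnsw), len(fosw))
```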

  12. Stacking machine learning classifiers to identify Higgs bosons at the LHC

    International Nuclear Information System (INIS)

    Alves, A.

    2017-01-01

    Machine learning (ML) algorithms have been employed in the problem of classifying signal and background events with high accuracy in particle physics. In this paper, we compare the performance of a widespread ML technique, namely, stacked generalization, against the results of two state-of-the-art algorithms: (1) a deep neural network (DNN) in the task of discovering a new neutral Higgs boson and (2) a scalable machine learning system for tree boosting, in the Standard Model Higgs to tau leptons channel, both at the 8 TeV LHC. In a cut-and-count analysis, stacking three algorithms performed around 16% worse than the DNN while demanding far less computational effort; however, the same stacking outperformed boosted decision trees. Using the stacked classifiers in a multivariate statistical analysis (MVA), on the other hand, significantly enhances the statistical significance compared to cut-and-count in both Higgs processes, suggesting that combining an ensemble of simpler and faster ML algorithms with MVA tools is a better approach than building a complex state-of-the-art algorithm for cut-and-count.
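
    A minimal sketch of stacked generalization with scikit-learn's StackingClassifier; the base learners, meta-learner and synthetic data are illustrative, not the classifiers or the simulated LHC samples used in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # toy signal/background
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),   # meta-learner combines the base predictions
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```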

  13. Generating a Spanish Affective Dictionary with Supervised Learning Techniques

    Science.gov (United States)

    Bermudez-Gonzalez, Daniel; Miranda-Jiménez, Sabino; García-Moreno, Raúl-Ulises; Calderón-Nepamuceno, Dora

    2016-01-01

    Nowadays, machine learning techniques are being used in several Natural Language Processing (NLP) tasks such as Opinion Mining (OM). OM is used to analyse and determine the affective orientation of texts. Usually, OM approaches use affective dictionaries in order to conduct sentiment analysis. These lexicons are labeled manually with affective…

  14. Exploring machine-learning-based control plane intrusion detection techniques in software defined optical networks

    Science.gov (United States)

    Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie

    2017-12-01

    In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats that compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON, with properly selected features such as bandwidth, route length, etc. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based control plane intrusion detection techniques.

  15. Using the IGCRA (individual, group, classroom reflective action) technique to enhance teaching and learning in large accountancy classes

    Directory of Open Access Journals (Sweden)

    Cristina Poyatos

    2011-02-01

    Full Text Available First-year accounting has generally been perceived as one of the more challenging first-year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach and examines the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first-year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on its effectiveness, as well as student and teacher satisfaction. Moreover, the results indicated that formative feedback can help improve learning and the learning environment for a large group of first-year accounting students. Clear guidelines for its implementation are provided in the paper.

  16. Sentiment Analysis in Geo Social Streams by using Machine Learning Techniques

    OpenAIRE

    Twanabasu, Bikesh

    2018-01-01

    Master's thesis, Erasmus Mundus Master's programme in Geospatial Technologies (2013 curriculum). Code: SIW013. Academic year 2017-2018. Massive amounts of sentiment-rich data are generated on social media in the form of tweets, status updates, blog posts, reviews, etc. Different people and organizations use this user-generated content for decision making. Symbolic (knowledge-based) approaches and machine learning techniques are the two main techniques used for sentiment analysis...

  17. MLitB: machine learning in the browser

    Directory of Open Access Journals (Sweden)

    Edward Meeds

    2015-07-01

    Full Text Available With few exceptions, the field of Machine Learning (ML) research has largely ignored the browser as a computational engine. Beyond an educational resource for ML, the browser has vast potential to not only improve the state-of-the-art in ML research, but also, inexpensively and on a massive scale, to bring sophisticated ML learning and prediction to the public at large. This paper introduces MLitB, a prototype ML framework written entirely in Javascript, capable of performing large-scale distributed computing with heterogeneous classes of devices. The development of MLitB has been driven by several underlying objectives whose aim is to make ML learning and usage ubiquitous (by using ubiquitous compute devices), cheap and effortlessly distributed, and collaborative. This is achieved by allowing every internet-capable device to run training algorithms and predictive models with no software installation and by saving models in universally readable formats. Our prototype library is capable of training deep neural networks with synchronized, distributed stochastic gradient descent. MLitB offers several important opportunities for novel ML research, including: development of distributed learning algorithms, advancement of web GPU algorithms, novel field and mobile applications, privacy preserving computing, and green grid-computing. MLitB is available as open source software.

  18. Deep Learning Techniques for Top-Quark Reconstruction

    CERN Document Server

    Naderi, Kiarash

    2017-01-01

    Top quarks are unique probes of standard model (SM) predictions and have the potential to be a window onto physics beyond the SM (BSM). Top quarks decay to a $Wb$ pair, and the $W$ can decay into leptons or jets. In a top-pair event, assigning jets to their correct source is a challenge. In this study, I investigated different methods for improving top reconstruction. The main motivation was to use deep learning techniques to enhance the precision of top reconstruction.

  19. Quantum Machine Learning

    OpenAIRE

    Romero García, Cristian

    2017-01-01

    [EN] In a world in which accessible information grows exponentially, the selection of the appropriate information turns out to be an extremely relevant problem. In this context, the idea of Machine Learning (ML), a subfield of Artificial Intelligence, emerged to face problems in data mining, pattern recognition, automatic prediction, among others. Quantum Machine Learning is an interdisciplinary research area combining quantum mechanics with methods of ML, in which quantum properties allow fo...

  20. jmzIdentML API: A Java interface to the mzIdentML standard for peptide and protein identification data.

    Science.gov (United States)

    Reisinger, Florian; Krishna, Ritesh; Ghali, Fawaz; Ríos, Daniel; Hermjakob, Henning; Vizcaíno, Juan Antonio; Jones, Andrew R

    2012-03-01

    We present a Java application programming interface (API), jmzIdentML, for the Human Proteome Organisation (HUPO) Proteomics Standards Initiative (PSI) mzIdentML standard for peptide and protein identification data. The API combines the power of the Java Architecture for XML Binding (JAXB) and an XPath-based random-access indexer to allow fast and efficient mapping of extensible markup language (XML) elements to Java objects. The internal references in the mzIdentML files are resolved in an on-demand manner, where the whole file is accessed as a random-access swap file, and only the relevant piece of XML is selected for mapping to its corresponding Java object. The API is highly efficient in its memory usage and can handle files of arbitrary sizes. The API follows the official release of the mzIdentML (version 1.1) specifications and is available in the public domain under a permissive licence at http://www.code.google.com/p/jmzidentml/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Driver drowsiness detection using behavioral measures and machine learning techniques: A review of state-of-art techniques

    CSIR Research Space (South Africa)

    Ngxande, Mkhuseli

    2017-11-01

    Full Text Available This paper presents a literature review of driver drowsiness detection based on behavioral measures using machine learning techniques. Faces contain information that can be used to interpret levels of drowsiness. There are many facial features...

  2. Adductor Canal Block With 10 mL Versus 30 mL Local Anesthetics and Quadriceps Strength

    DEFF Research Database (Denmark)

    Jæger, Pia; Koscielniak-Nielsen, Zbigniew J; Hilsted, Karen Lisa

    2015-01-01

    weakness. METHODS: We performed a paired, blinded, randomized trial including healthy men. All subjects received bilateral ACBs with ropivacaine 0.1%; 10 mL in 1 leg and 30 mL in the other leg. The primary outcome was the difference in number of subjects with quadriceps strength reduced by more than 25...... of the predefined time points or in sensory block. The only statistically significant difference between volumes was found in the 30-Second Chair Stand Test at 2 hours (P = 0.02), but this difference had disappeared at 4 hours (P = 0.06). CONCLUSIONS: Varying the volume of ropivacaine 0.1% used for ACB between 10...

  3. PepArML: A Meta-Search Peptide Identification Platform for Tandem Mass Spectra.

    Science.gov (United States)

    Edwards, Nathan J

    2013-12-01

    The PepArML meta-search peptide identification platform for tandem mass spectra provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; Tandem with native, k-score and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores—reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies two to three times more spectra than individual search engines at 10% FDR.

  4. The colloquial approach: An active learning technique

    Science.gov (United States)

    Arce, Pedro

    1994-09-01

    This paper addresses the very important problem of the effectiveness of teaching methodologies in fundamental engineering courses such as transport phenomena. An active learning strategy, termed the colloquial approach, is proposed in order to increase student involvement in the learning process. This methodology is a considerable departure from traditional methods that use solo lecturing. It is based on guided discussions, and it promotes student understanding of new concepts by directing the student to construct new ideas by building upon the current knowledge and by focusing on key cases that capture the essential aspects of new concepts. The colloquial approach motivates the student to participate in discussions, to develop detailed notes, and to design (or construct) his or her own explanation for a given problem. This paper discusses the main features of the colloquial approach within the framework of other current and previous techniques. Problem-solving strategies and the need for new textbooks and for future investigations based on the colloquial approach are also outlined.

  5. EnzML: multi-label prediction of enzyme classes using InterPro signatures

    Directory of Open Access Journals (Sweden)

    De Ferrari Luna

    2012-04-01

    Full Text Available Abstract Background Manual annotation of enzymatic functions cannot keep up with automatic genome sequencing. In this work we explore the capacity of InterPro sequence signatures to automatically predict enzymatic function. Results We present EnzML, a multi-label classification method that can also efficiently account for proteins with multiple enzymatic functions: 50,000 in UniProt. EnzML was evaluated using a standard set of 300,747 proteins for which the manually curated Swiss-Prot and KEGG databases have agreeing Enzyme Commission (EC) annotations. EnzML achieved more than 98% subset accuracy (exact match of all correct Enzyme Commission classes of a protein) for the entire dataset and between 87% and 97% subset accuracy in reannotating eight entire proteomes: human, mouse, rat, mouse-ear cress, fruit fly, the S. pombe yeast, the E. coli bacterium and the M. jannaschii archaebacterium. To understand the role played by dataset size, we compared the cross-evaluation results of smaller datasets, either constructed at random or from specific taxonomic domains such as archaea, bacteria, fungi, invertebrates, plants and vertebrates. The results were confirmed even when the redundancy in the dataset was reduced using UniRef100, UniRef90 or UniRef50 clusters. Conclusions InterPro signatures are a compact and powerful attribute space for the prediction of enzymatic function. This representation makes multi-label machine learning feasible in reasonable time (30 minutes to train on 300,747 instances with 10,852 attributes and 2,201 class values) using the Mulan Binary Relevance Nearest Neighbours (BR-kNN) algorithm implementation.
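
    A minimal sketch of the binary-relevance k-NN idea over binary signature attributes, assuming scikit-learn rather than the Mulan (Java) implementation used by EnzML; the InterPro signatures and EC classes below are invented placeholders:

```python
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Each protein: a set of InterPro signatures (attributes) and a set of EC classes (labels).
proteins = [({"IPR000001", "IPR000003"}, {"1.1.1.1"}),
            ({"IPR000002"},              {"2.7.1.1"}),
            ({"IPR000001", "IPR000002"}, {"1.1.1.1", "2.7.1.1"}),
            ({"IPR000003"},              {"1.1.1.1"})]

feat_bin = MultiLabelBinarizer().fit([p[0] for p in proteins])
lab_bin = MultiLabelBinarizer().fit([p[1] for p in proteins])
X = feat_bin.transform([p[0] for p in proteins])
Y = lab_bin.transform([p[1] for p in proteins])

# Binary relevance: one k-NN classifier per EC class.
brknn = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=3)).fit(X, Y)
print(lab_bin.inverse_transform(brknn.predict(feat_bin.transform([{"IPR000001"}]))))
```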

  6. AllerML: markup language for allergens.

    Science.gov (United States)

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards that can be used to encode data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/) database. General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Geminivirus data warehouse: a database enriched with machine learning approaches.

    Science.gov (United States)

    Silva, Jose Cleydson F; Carvalho, Thales F M; Basso, Marcos F; Deguchi, Michihito; Pereira, Welison A; Sobrinho, Roberto R; Vidigal, Pedro M P; Brustolini, Otávio J B; Silva, Fabyano F; Dal-Bianco, Maximiller; Fontes, Renildes L F; Santos, Anésia A; Zerbini, Francisco Murilo; Cerqueira, Fabio R; Fontes, Elizabeth P B

    2017-05-05

    The Geminiviridae family encompasses a group of single-stranded DNA viruses with twinned and quasi-isometric virions, which infect a wide range of dicotyledonous and monocotyledonous plants and are responsible for significant economic losses worldwide. Geminiviruses are divided into nine genera, according to their insect vector, host range, genome organization, and phylogeny reconstruction. Using rolling-circle amplification approaches along with high-throughput sequencing technologies, thousands of full-length geminivirus and satellite genome sequences were amplified and have become available in public databases. As a consequence, many important challenges have emerged, namely, how to classify, store, and analyze massive datasets as well as how to extract information or new knowledge. Data mining approaches, mainly supported by machine learning (ML) techniques, are a natural means for high-throughput data analysis in the context of genomics, transcriptomics, proteomics, and metabolomics. Here, we describe the development of a data warehouse enriched with ML approaches, designated geminivirus.org. We implemented search modules, bioinformatics tools, and ML methods to retrieve high precision information, demarcate species, and create classifiers for genera and open reading frames (ORFs) of geminivirus genomes. The use of data mining techniques such as ETL (Extract, Transform, Load) to feed our database, as well as algorithms based on machine learning for knowledge extraction, allowed us to obtain a database with quality data and suitable tools for bioinformatics analysis. The Geminivirus Data Warehouse (geminivirus.org) offers a simple and user-friendly environment for information retrieval and knowledge discovery related to geminiviruses.

  8. Minimal-Learning-Parameter Technique Based Adaptive Neural Sliding Mode Control of MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2017-01-01

    Full Text Available This paper investigates an adaptive neural sliding mode controller for MEMS gyroscopes based on a minimal-learning-parameter technique. To handle the uncertainty in the system dynamics, a neural network is employed for approximation. The minimal-learning-parameter technique is constructed to decrease the number of update parameters, which greatly reduces the computational burden. Sliding mode control is designed to cancel the effect of time-varying disturbance. Closed-loop stability analysis is established via a Lyapunov approach. Simulation results are presented to demonstrate the effectiveness of the method.

  9. Exploring Characterizations of Learning Object Repositories Using Data Mining Techniques

    Science.gov (United States)

    Segura, Alejandra; Vidal, Christian; Menendez, Victor; Zapata, Alfredo; Prieto, Manuel

    Learning object repositories provide a platform for the sharing of Web-based educational resources. As these repositories evolve independently, it is difficult for users to have a clear picture of the kind of contents they give access to. Metadata can be used to automatically extract a characterization of these resources by using machine learning techniques. This paper presents an exploratory study carried out on the contents of four public repositories that uses clustering and association rule mining algorithms to extract characterizations of repository contents. The results of the analysis include potential relationships between different attributes of learning objects that may be useful to gain an understanding of the kind of resources available and eventually to develop search mechanisms that consider repository descriptions as a criterion in federated search.

  10. Leveraging knowledge engineering and machine learning for microbial bio-manufacturing.

    Science.gov (United States)

    Oyetunde, Tolutola; Bao, Forrest Sheng; Chen, Jiung-Wen; Martin, Hector Garcia; Tang, Yinjie J

    2018-05-03

    Genome scale modeling (GSM) predicts the performance of microbial workhorses and helps identify beneficial gene targets. GSM integrated with intracellular flux dynamics, omics, and thermodynamics has shown remarkable progress in both elucidating complex cellular phenomena and computational strain design (CSD). Nonetheless, these models still show high uncertainty due to a poor understanding of innate pathway regulations, metabolic burdens, and other factors (such as stress tolerance and metabolite channeling). In addition, the engineered hosts may have genetic mutations or non-genetic variations in bioreactor conditions, and thus CSD rarely foresees fermentation rate and titer. Metabolic models play an important role in design-build-test-learn cycles for strain improvement, and machine learning (ML) may provide a viable complementary approach for driving strain design and deciphering cellular processes. In order to develop quality ML models, knowledge engineering leverages and standardizes the wealth of information in the literature (e.g., genomic/phenomic data, synthetic biology strategies, and bioprocess variables). Data-driven frameworks can offer new constraints for mechanistic models to describe cellular regulations, to design pathways, to search gene targets, and to estimate fermentation titer/rate/yield under specified growth conditions (e.g., mixing, nutrients, and O2). This review highlights the scope of information collections, database constructions, and machine learning techniques (such as deep learning and transfer learning), which may facilitate "Learn and Design" for strain development. Copyright © 2018. Published by Elsevier Inc.

  11. Stimulating Deep Learning Using Active Learning Techniques

    Science.gov (United States)

    Yew, Tee Meng; Dawood, Fauziah K. P.; a/p S. Narayansany, Kannaki; a/p Palaniappa Manickam, M. Kamala; Jen, Leong Siok; Hoay, Kuan Chin

    2016-01-01

    When students and teachers behave in ways that reinforce learning as a spectator sport, the result can often be a classroom and overall learning environment that is mostly limited to transmission of information and rote learning rather than deep approaches towards meaningful construction and application of knowledge. A group of college instructors…

  12. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Science.gov (United States)

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. Summative assessment included discussion leadership, exams, homeworks, group projects, in-class exercises, field trips, and pre-discussion reading exercises

  13. Using the 5E Learning Cycle with Metacognitive Technique to Enhance Students’ Mathematical Critical Thinking Skills

    Directory of Open Access Journals (Sweden)

    Runisah Runisah

    2017-02-01

    Full Text Available This study aims to describe the enhancement and achievement of mathematical critical thinking skills of students who received the 5E Learning Cycle with the Metacognitive technique, the 5E Learning Cycle alone, and conventional learning. The study uses an experimental method with a pretest-posttest control group design. The population comprises junior high school students in Indramayu city, Indonesia. The sample consists of three classes of eighth-grade students from a high-level school and three classes from a medium-level school. The study reveals that, overall, the enhancement and achievement of mathematical critical thinking skills of students who received the 5E Learning Cycle with the Metacognitive technique are better than those of students who received the 5E Learning Cycle and conventional learning. Mathematical critical thinking skills of students who received the 5E Learning Cycle are better than those of students who received conventional learning. There is no interaction effect between learning model and school level on the enhancement and achievement of students' mathematical critical thinking skills.

  14. The impact of machine learning techniques in the study of bipolar disorder: A systematic review.

    Science.gov (United States)

    Librenza-Garcia, Diego; Kotzian, Bruno Jaskulski; Yang, Jessica; Mwangi, Benson; Cao, Bo; Pereira Lima, Luiza Nunes; Bermudez, Mariane Bagatin; Boeira, Manuela Vianna; Kapczinski, Flávio; Passos, Ives Cavalcante

    2017-09-01

    Machine learning techniques provide new methods to predict diagnosis and clinical outcomes at an individual level. We aim to review the existing literature on the use of machine learning techniques in the assessment of subjects with bipolar disorder. We systematically searched PubMed, Embase and Web of Science for articles published in any language up to January 2017. We found 757 abstracts and included 51 studies in our review. Most of the included studies used multiple levels of biological data to distinguish the diagnosis of bipolar disorder from other psychiatric disorders or healthy controls. We also found studies that assessed the prediction of clinical outcomes and studies using unsupervised machine learning to build more consistent clinical phenotypes of bipolar disorder. We concluded that given the clinical heterogeneity of samples of patients with BD, machine learning techniques may provide clinicians and researchers with important insights in fields such as diagnosis, personalized treatment and prognosis orientation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Comparison of Machine Learning Techniques in Inferring Phytoplankton Size Classes

    Directory of Open Access Journals (Sweden)

    Shuibo Hu

    2018-03-01

    Full Text Available The size of phytoplankton not only influences its physiology, metabolic rates and the marine food web, but also serves as an indicator of phytoplankton functional roles in ecological and biogeochemical processes. Therefore, some algorithms have been developed to infer the synoptic distribution of phytoplankton cell size, denoted as phytoplankton size classes (PSCs), in surface ocean waters by means of remotely sensed variables. This study, using the NASA bio-Optical Marine Algorithm Data set (NOMAD) high performance liquid chromatography (HPLC) database and satellite match-ups, aimed to compare the effectiveness of modeling techniques, including partial least squares (PLS), artificial neural networks (ANN), support vector machines (SVM) and random forests (RF), and feature selection techniques, including the genetic algorithm (GA), the successive projection algorithm (SPA) and recursive feature elimination based on support vector machines (SVM-RFE), for inferring PSCs from remote sensing data. Results showed that: (1) SVM-RFE worked better in selecting sensitive features; (2) RF performed better than PLS, ANN and SVM in calibrating PSC retrieval models; (3) machine learning techniques produced better performance than the chlorophyll-a based three-component method; (4) sea surface temperature, wind stress, and spectral curvature derived from the remote sensing reflectance at 490, 510, and 555 nm were among the most sensitive features for PSCs; and (5) the combination of SVM-RFE feature selection and random forest regression is recommended for inferring PSCs. This study demonstrated the effectiveness of machine learning techniques in selecting sensitive features and calibrating models for PSC estimation from remote sensing.
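
    A minimal sketch of the recommended combination (SVM-based recursive feature elimination followed by random forest regression), assuming scikit-learn; the predictors and the size-class target are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))        # stand-ins for SST, wind stress, Rrs-derived features, ...
y = 2 * X[:, 0] + X[:, 3] + rng.normal(scale=0.1, size=300)   # toy size-class fraction

# SVM-RFE: rank features with a linear SVR and keep the most sensitive ones.
selector = RFE(estimator=SVR(kernel="linear"), n_features_to_select=4).fit(X, y)
X_sel = selector.transform(X)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_sel, y)
print("selected features:", np.where(selector.support_)[0], "R^2:", rf.score(X_sel, y))
```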

  16. Is a volume of 3.6 mL better than 1.8 mL for inferior alveolar nerve blocks in patients with symptomatic irreversible pulpitis?

    Science.gov (United States)

    Fowler, Sara; Reader, Al

    2013-08-01

    The purpose of this retrospective study was to determine the success of the inferior alveolar nerve (IAN) block using either 3.6 mL or 1.8 mL 2% lidocaine with 1:100,000 epinephrine in patients presenting with symptomatic irreversible pulpitis. As part of 7 previously published studies, 319 emergency patients presenting with symptomatic irreversible pulpitis received either a 1.8-mL volume or 3.6-mL volume of 2% lidocaine with 1:100,000 epinephrine in an IAN block. One hundred ninety patients received a 1.8-mL volume, and 129 received a 3.6-mL volume. Endodontic emergency treatment was completed on each subject. Success was defined as the ability to access and instrument the tooth without pain (visual analog scale score of 0) or mild pain (VAS rating ≤54 mm). Success of the 1.8-mL volume was 28%, and for the 3.6-mL volume it was 39%. There was no statistically significant difference between the 2 volumes. In conclusion, for patients presenting with irreversible pulpitis, success was not significantly different between a 3.6-mL volume and a 1.8-mL volume of 2% lidocaine with 1:100,000 epinephrine. The success rates (28%-39%) with either volume were not high enough to ensure complete pulpal anesthesia. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  17. ML-MG: Multi-label Learning with Missing Labels Using a Mixed Graph

    KAUST Repository

    Wu, Baoyuan; Lyu, Siwei; Ghanem, Bernard

    2015-01-01

    This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e. some

  18. nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data.

    Science.gov (United States)

    Schober, Daniel; Jacob, Daniel; Wilson, Michael; Cruz, Joseph A; Marcu, Ana; Grant, Jason R; Moing, Annick; Deborde, Catherine; de Figueiredo, Luis F; Haug, Kenneth; Rocca-Serra, Philippe; Easton, John; Ebbels, Timothy M D; Hao, Jie; Ludwig, Christian; Günther, Ulrich L; Rosato, Antonio; Klein, Matthias S; Lewis, Ian A; Luchinat, Claudio; Jones, Andrew R; Grauslys, Arturas; Larralde, Martin; Yokochi, Masashi; Kobayashi, Naohiro; Porzel, Andrea; Griffin, Julian L; Viant, Mark R; Wishart, David S; Steinbeck, Christoph; Salek, Reza M; Neumann, Steffen

    2018-01-02

    NMR is a widely used analytical technique with a growing number of repositories available. As a result, demands for a vendor-agnostic, open data format for long-term archiving of NMR data have emerged with the aim to ease and encourage sharing, comparison, and reuse of NMR data. Here we present nmrML, an open XML-based exchange and storage format for NMR spectral data. The nmrML format is intended to be fully compatible with existing NMR data for chemical, biochemical, and metabolomics experiments. nmrML can capture raw NMR data, spectral data acquisition parameters, and where available spectral metadata, such as chemical structures associated with spectral assignments. The nmrML format is compatible with pure-compound NMR data for reference spectral libraries as well as NMR data from complex biomixtures, i.e., metabolomics experiments. To facilitate format conversions, we provide nmrML converters for Bruker, JEOL and Agilent/Varian vendor formats. In addition, easy-to-use Web-based spectral viewing, processing, and spectral assignment tools that read and write nmrML have been developed. Software libraries and Web services for data validation are available for tool developers and end-users. The nmrML format has already been adopted for capturing and disseminating NMR data for small molecules by several open source data processing tools and metabolomics reference spectral libraries, e.g., serving as storage format for the MetaboLights data repository. The nmrML open access data standard has been endorsed by the Metabolomics Standards Initiative (MSI), and we here encourage user participation and feedback to increase usability and make it a successful standard.

  19. The ATLAS Higgs Machine Learning Challenge

    CERN Document Server

    Cowan, Glen; The ATLAS collaboration; Bourdarios, Claire

    2015-01-01

    High Energy Physics has been using Machine Learning techniques (commonly known as Multivariate Analysis) since the 1990s, with Artificial Neural Nets and more recently with Boosted Decision Trees, Random Forests, etc. Meanwhile, Machine Learning has become a full-blown field of computer science. With the emergence of Big Data, data scientists are developing new Machine Learning algorithms to extract meaning from large heterogeneous data. HEP has exciting and difficult problems, like the extraction of the Higgs boson signal, and at the same time data scientists have advanced algorithms: the goal of the HiggsML project was to bring the two together through a “challenge”: participants from all over the world and from any scientific background could compete online to obtain the best Higgs to tau tau signal significance on a set of ATLAS fully simulated Monte Carlo signal and background. Instead of HEP physicists browsing through machine learning papers and trying to infer which new algorithms might be useful for HEP, then c...

  20. Machine Learning-based Virtual Screening and Its Applications to Alzheimer's Drug Discovery: A Review.

    Science.gov (United States)

    Carpenter, Kristy A; Huang, Xudong

    2018-06-07

    Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds, comprised of known actives and inactives. After training, the model is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with the desired drug target binding activity. This study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential AD therapeutics; AD is a complex neurodegenerative disease with no known cure or prevention. This both serves as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques are powerful tools for VS, each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  1. Malignant lymphomas (ML) and HIV infection in Tanzania

    Directory of Open Access Journals (Sweden)

    Mwakigonja Amos R

    2008-06-01

    Full Text Available Abstract Background HIV infection is reported to be associated with some malignant lymphomas (ML), so-called AIDS-related lymphomas (ARL), with an aggressive behavior and poor prognosis. The ML frequency, pathogenicity, clinical patterns and possible association with AIDS in Tanzania are not well documented, impeding the development of preventive and therapeutic strategies. Methods Sections of 176 archival formalin-fixed paraffin-embedded biopsies of ML patients at Muhimbili National Hospital (MNH)/Muhimbili University of Health and Allied Sciences (MUHAS), Tanzania from 1996–2001 were stained for hematoxylin and eosin and selected (70) cases for expression of pan-leucocytic (CD45), B-cell (CD20), T-cell (CD3), Hodgkin/RS cell (CD30), histiocyte (CD68) and proliferation (Ki-67) antigen markers. Corresponding clinical records were also evaluated. Available sera from 38 ML patients were screened (ELISA) for HIV antibodies. Results The proportion of ML out of all diagnosed tumors at MNH during the 6 year period was 4.2% (176/4200), comprising 77.84% non-Hodgkin (NHL), including 19.32% Burkitt's (BL), and 22.16% Hodgkin's disease (HD). The ML tumors frequency increased from 0.42% (1997) to 0.70% (2001) and 23.7% of tested sera from these patients were HIV positive. The mean age for all ML was 30, age-range 3–91 and peak age was 1–20 years. The male:female ratio was 1.8:1. Supra-diaphragmatic presentation was commonest and histological sub-types were mostly aggressive B-cell lymphomas; however, no clear cases of primary effusion lymphoma (PEL) and primary central nervous system lymphoma (PCNSL) were diagnosed. Conclusion Malignant lymphomas apparently increased significantly among diagnosed tumors at MNH between 1996 and 2001, predominantly among the young, HIV infected and AIDS patients. The frequent aggressive clinical and histological presentation as well as the dominant B-immunophenotype and the HIV serology indicate a pathogenic association with AIDS. Therefore

  2. Malignant lymphomas (ML) and HIV infection in Tanzania.

    Science.gov (United States)

    Mwakigonja, Amos R; Kaaya, Ephata E; Mgaya, Edward M

    2008-06-10

    HIV infection is reported to be associated with some malignant lymphomas (ML) so called AIDS-related lymphomas (ARL), with an aggressive behavior and poor prognosis. The ML frequency, pathogenicity, clinical patterns and possible association with AIDS in Tanzania, are not well documented impeding the development of preventive and therapeutic strategies. Sections of 176 archival formalin-fixed paraffin-embedded biopsies of ML patients at Muhimbili National Hospital (MNH)/Muhimbili University of Health and Allied Sciences (MUHAS), Tanzania from 1996-2001 were stained for hematoxylin and eosin and selected (70) cases for expression of pan-leucocytic (CD45), B-cell (CD20), T-cell (CD3), Hodgkin/RS cell (CD30), histiocyte (CD68) and proliferation (Ki-67) antigen markers. Corresponding clinical records were also evaluated. Available sera from 38 ML patients were screened (ELISA) for HIV antibodies. The proportion of ML out of all diagnosed tumors at MNH during the 6 year period was 4.2% (176/4200) comprising 77.84% non-Hodgkin (NHL) including 19.32% Burkitt's (BL) and 22.16% Hodgkin's disease (HD). The ML tumors frequency increased from 0.42% (1997) to 0.70% (2001) and 23.7% of tested sera from these patients were HIV positive. The mean age for all ML was 30, age-range 3-91 and peak age was 1-20 years. The male:female ratio was 1.8:1. Supra-diaphragmatic presentation was commonest and histological sub-types were mostly aggressive B-cell lymphomas however, no clear cases of primary effusion lymphoma (PEL) and primary central nervous system lymphoma (PCNSL) were diagnosed. Malignant lymphomas apparently, increased significantly among diagnosed tumors at MNH between 1996 and 2001, predominantly among the young, HIV infected and AIDS patients. The frequent aggressive clinical and histological presentation as well as the dominant B-immunophenotype and the HIV serology indicate a pathogenic association with AIDS. Therefore, routine HIV screening of all malignant lymphoma

  3. Post-treatment PSA nadirs support continuing dose escalation study in patients with pretreatment PSA levels >10 ng/ml, but not in those with PSA <10 ng/ml

    International Nuclear Information System (INIS)

    Herold, D.H.; Hanlon, A.L.; Movsas, B.; Hanks, G.E.

    1996-01-01

    Purpose: We have recently shown that ICRU reporting point radiation doses above 71 Gy are not associated with improved bNED survival in prostate cancer patients with pretreatment PSA level 20 ng/ml we found a strong correlation between dose and nadir values < 1.0 ng/ml (p=.003) as well as for nadirs < 0.5 ng/ml (p=.04). This dose/nadir effect held at several dose levels, but 74 Gy for nadir values < 1.0 ng/ml and 72 Gy for nadirs < 0.5 ng/ml remained the most significant. 32% of these patients achieved a nadir < 1.0 ng/ml and 15% < 0.5 ng/ml. Conclusions: This analysis provides strong additional support that patients with pretreatment PSA values of < 10 ng/ml do not benefit from dose escalation beyond an ICRU reporting point dose of 71 Gy. For patients with pretreatment PSAs of 10-19.9 ng/ml there is no dose/nadir response evaluated at a nadir of 1.0 ng/ml; however, there is a borderline effect observed at a nadir of 0.5 ng/ml. Patients with pretreatment PSAs of 20 ng/ml or greater clearly benefit from higher doses as evaluated by PSA nadirs of 1.0 ng/ml and 0.5 ng/ml. These studies support the continued investigation of dose escalation in treating patients with PSA levels over 10 ng/ml; they do not support continued investigation of dose escalation beyond 71 Gy in patients with pretreatment PSA levels < 10 ng/ml. The failure to demonstrate any dose response for the low PSA group and the finding of only a borderline effect for the intermediate PSA group may be influenced by the relatively small number of patients in our series treated to doses < 70 Gy and the fact that none of our patients were treated to doses below 65.98 Gy. The lower limit of acceptable dose has yet to be defined

  4. Using Machine Learning Techniques in the Analysis of Oceanographic Data

    Science.gov (United States)

    Falcinelli, K. E.; Abuomar, S.

    2017-12-01

    Acoustic Doppler Current Profilers (ADCPs) are oceanographic tools capable of collecting large amounts of current profile data. Using unsupervised machine learning techniques such as principal component analysis, fuzzy c-means clustering, and self-organizing maps, patterns and trends in an ADCP dataset are found. Cluster validity algorithms such as visual assessment of cluster tendency and clustering index are used to determine the optimal number of clusters in the ADCP dataset. These techniques prove to be useful in analysis of ADCP data and demonstrate potential for future use in other oceanographic applications.
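
    A minimal sketch of an unsupervised workflow of this kind, assuming scikit-learn, with PCA followed by k-means as a simple stand-in for fuzzy c-means (which is not in scikit-learn) and the silhouette score as a cluster-validity index; the ADCP profiles are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
profiles = rng.normal(size=(500, 40))        # 500 current profiles x 40 depth bins (toy data)

scores = PCA(n_components=3).fit_transform(profiles)   # reduce to the leading modes
# Choose the number of clusters with a simple validity index (silhouette score).
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    print(k, round(silhouette_score(scores, labels), 3))
```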

  5. Study of CT image texture using deep learning techniques

    Science.gov (United States)

    Dutta, Sandeep; Fan, Jiahua; Chevalier, David

    2018-03-01

    For CT imaging, reducing radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative Reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio than conventional Filtered Back Projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantitate the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning methods to the study of CT image texture. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The generated images were randomly allotted (80:20) to a training and a validation set. An independent test set of 256-512 images/class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully connected, global average pooling and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated. Original pixel data as well as normalized image data were evaluated. DL models were reliably able to classify CT image texture with accuracy up to 99%. The results from the deep learning techniques suggest that CT IR techniques may help lower the radiation dose compared to FBP.
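
    A minimal PyTorch sketch of a small CNN of the kind described (convolution/ReLU/max-pooling, global average pooling, and a final classification layer); the layer sizes, patch size and five texture classes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TextureCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.classifier = nn.Linear(32, n_classes)  # softmax is applied inside the loss

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

patches = torch.randn(8, 1, 64, 64)                 # stand-in CT image patches
logits = TextureCNN()(patches)                      # shape: (8, 5) class scores
```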

  6. Development of mechanoluminescence technique for impact studies

    International Nuclear Information System (INIS)

    Chandra, B.P.

    2011-01-01

    A new technique, the mechanoluminescence technique, is developed for measuring the parameters of impact. This technique is based on the phenomenon of mechanoluminescence (ML), in which light emission takes place during any mechanical action on solids. When a small solid ball impacts a mechanoluminescent thin film coated on a solid, the elastico-ML (EML) intensity initially increases with time, attains a maximum value I_m at a particular time t_m, and later decreases with time. The contact time T_c of the ball can be determined from the relation T_c = 2t_c, where t_c is the time at which the EML emission due to compression of the sample becomes negligible. The area from which the EML emission occurs can be taken as the contact area A_c. The maximum compression h is given by h = A_c/(π r), where r is the radius of the impacting ball, and thus h can be determined from the known values of A_c and r. The maximum force at contact is given by F_m = 2mU_0/T_c, where m is the mass of the impacting ball and U_0 is the velocity of the ball at impact. The maximum impact stress σ_m can be obtained from the relation σ_m = F_m/A_c = 2mU_0/(T_c A_c). Thus, ML provides a real-time technique for determining impact parameters such as T_c, A_c, h, F_m and σ_m. Using the ML technique, the impact parameters of a SrAl2O4:Eu film and a ZnS:Mn coating are determined. The ML technique can be used to determine the impact parameters in the elastic region and the plastic region, as well as at fracture. ML can also be used to determine the impact parameters for a collision between a solid and a liquid, if the mechanoluminescent material is coated on the surface of the solid. The measurement of fracto-ML in the microsecond and nanosecond range may provide a tool for studying fragmentation in solids by impact. Using a fast camera, the contact area and the depth of compression can be determined at different intervals of time. - Research highlights: → A
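
    A worked numerical example of the impact relations quoted above (T_c = 2t_c, h = A_c/(π r), F_m = 2mU_0/T_c, σ_m = F_m/A_c); all input values are invented for illustration:

```python
import math

m = 5e-3        # ball mass, kg (assumed)
r = 2.5e-3      # ball radius, m (assumed)
U0 = 1.0        # impact velocity, m/s (assumed)
t_c = 50e-6     # time at which the compression EML becomes negligible, s (assumed)
A_c = 1.0e-6    # contact area taken from the EML-emitting region, m^2 (assumed)

T_c = 2 * t_c                  # contact time
h = A_c / (math.pi * r)        # maximum compression
F_m = 2 * m * U0 / T_c         # maximum contact force
sigma_m = F_m / A_c            # maximum impact stress
print(T_c, h, F_m, sigma_m)
```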

  7. Superconducting magnet for 'ML-100'

    Energy Technology Data Exchange (ETDEWEB)

    Saito, R; Fujinaga, T; Tada, N; Kimura, H

    1974-07-01

    A magnetically levitated experimental vehicle (ML-100) was designed and constructed to commemorate the centenary of the Japanese National Railways. For magnetic levitation the vehicle is provided with two superconducting magnets. In test operation of the vehicle, these superconducting magnets showed stable performance in levitating the vehicle body.

  8. Challenges in the Verification of Reinforcement Learning Algorithms

    Science.gov (United States)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  9. Toward accelerating landslide mapping with interactive machine learning techniques

    Science.gov (United States)

    Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne

    2013-04-01

    Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples, but may require relatively large amounts of training data. To reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images, with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also
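
    A minimal sketch of a generic pool-based active-learning loop with uncertainty (least-confidence) sampling; it illustrates the general concept only, not the region-based query heuristic developed in the study, and the data and classifier are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small initial training set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                                          # 10 query rounds
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)                    # least-confidence sampling
    query = pool[int(np.argmax(uncertainty))]
    labeled.append(query)                                    # the "oracle" supplies y[query]
    pool.remove(query)

print("labelled samples after querying:", len(labeled))
```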

  10. Improving orbit prediction accuracy through supervised machine learning

    Science.gov (United States)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the required accuracy for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on an RSO can be applied to other RSOs that share some common features.
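
    A minimal sketch of the hybrid idea described above, under purely synthetic data: a crude stand-in for a physics-based propagator produces a baseline prediction, and a supervised regressor is trained on historical prediction errors so that later physics-based predictions can be corrected. The toy signal, feature set and regressor choice are assumptions, not the paper's catalog simulation.

        # Hedged sketch: learn the error of a physics-based prediction and subtract it
        # from future predictions. Synthetic data stands in for a space catalog.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        t = np.linspace(0, 10, 500)                       # epochs (arbitrary units)
        truth = np.sin(t) + 0.05 * t**2                   # "true" along-track position
        physics = np.sin(t)                               # imperfect physics-based prediction
        error = truth - physics                           # residual the ML model should capture

        X = t.reshape(-1, 1)                              # features: here just the epoch
        train = t < 7.0                                   # learn on past epochs only
        model = GradientBoostingRegressor().fit(X[train], error[train])

        corrected = physics[~train] + model.predict(X[~train])
        print("RMS error, physics only :", np.sqrt(np.mean((truth[~train] - physics[~train]) ** 2)))
        print("RMS error, physics + ML :", np.sqrt(np.mean((truth[~train] - corrected) ** 2)))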

  11. A practical guide to SysML the systems modeling language

    CERN Document Server

    Friedenthal, Sanford; Steiner, Rick

    2009-01-01

    This book is the bestselling, authoritative guide to SysML for systems and software engineers, providing a comprehensive and practical resource for modeling systems with SysML. Fully updated to cover newly released version 1.3, it includes a full description of the modeling language along with a quick reference guide, and shows how an organization or project can transition to model-based systems engineering using SysML, with considerations for processes, methods, tools, and training. Numerous examples help readers understand how SysML can be used in practice, while reference material facilitates studying for the OMG Systems Modeling Professional (OCSMP) Certification Program, designed to test candidates' knowledge of SysML and their ability to use models to represent real-world systems.

  12. Aggressiveness of powdery mildew on 'ml-o'- resistant barley

    International Nuclear Information System (INIS)

    Andersen, Lars

    1990-01-01

    The ml-o genes in barley are important sources in breeding for resistance against the barley powdery mildew fungus (Erysiphe graminis). The resistance mechanism is a rapid formation of a large callose-containing cell wall apposition at the site of the pathogen's infection attempt. This reduces the chances of infection to almost nil in all epidermal cells, except in the small subsidiary cells, in which appositions are rarely formed. Small mildew colonies from infections in subsidiary cells may be seen on the otherwise resistant leaf. This is described by the infection type 0/(4). Mildew isolate HL 3, selected by SCHWARZBACH, has increased aggressiveness. No ml-o-virulent isolates are known. However, ml-o-resistant varieties, when grown extensively in Europe, will introduce field selection for mildew pathotypes with aggressiveness or virulence to ml-o resistance. Studies on increased aggressiveness require new methods. The material comprises two powdery mildew isolates: GE 3 without ml-o aggressiveness and the aggressive HL 3/5; and two near-isogenic barley lines in Carlsberg II: Riso 5678(R) with the recessive mutant resistance gene ml-o5 and Riso 5678(S) with the wild-type gene for susceptibility. Latent period and disease efficiency show no significant differences between the two isolates on the susceptible barley line (S), but the isolates differ from each other on the resistant barley line

  13. Machine learning for epigenetics and future medical applications.

    Science.gov (United States)

    Holder, Lawrence B; Haque, M Muksitul; Skinner, Michael K

    2017-07-03

    Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML, to develop a more efficient feature selection process, and to address the imbalance problem present in all genomic data sets. These results suggest the power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of advanced machine learning analysis of molecular epigenetic data to medicine is the focus of this review.

  14. Modelling CRM implementation services with SysML

    OpenAIRE

    Bibiano, Luis H.; Pastor Collado, Juan Antonio; Mayol Sarroca, Enric

    2009-01-01

    CRM information systems are valuable tools for enterprises. But CRM implementation projects are risky and present a high failure rate. In this paper we regard CRM implementation projects as services that could be greatly improved by addressing them in a methodological way that can be designed with the help of tools such as SysML. Here we introduce and comment on our first experience on the use of SysML language, not very well known, for modelling the elements involved in the CRM implementatio...

  15. IrML – a gene encoding a new member of the ML protein family from the hard tick, Ixodes ricinus

    Czech Academy of Sciences Publication Activity Database

    Horáčková, J.; Rudenko, Natalia; Golovchenko, Maryna; Havlíková, S.; Grubhoffer, Libor

    2010-01-01

    Roč. 35, č. 2 (2010), s. 410-418 ISSN 1081-1710 R&D Projects: GA ČR(CZ) GA524/06/1479; GA MŠk(CZ) LC06009 Institutional research plan: CEZ:AV0Z60220518 Keywords : Ixodes ricinus * tick * ML-domain containing protein * in situ hybridization * gene expression * ML (MD-2-related lipid-recognition) domain Subject RIV: GJ - Animal Vermins ; Diseases, Veterinary Medicine Impact factor: 1.256, year: 2010

  16. A Comprehensive Review and meta-analysis on Applications of Machine Learning Techniques in Intrusion Detection

    Directory of Open Access Journals (Sweden)

    Manojit Chattopadhyay

    2018-05-01

    Full Text Available Securing a machine from various cyber-attacks has been of serious concern for researchers, statutory bodies such as governments, business organizations and users in both wired and wireless media. However, during the last decade, the amount of data handled by any device, particularly servers, has increased exponentially, and hence the security of these devices has become a matter of utmost concern. This paper attempts to examine the challenges in the application of machine learning techniques to intrusion detection. We review different inherent issues in defining and applying machine learning techniques to intrusion detection. We also attempt to identify the best technological solution for changing usage patterns by comparing different machine learning techniques on different datasets and summarizing their performance using various performance metrics. This paper highlights the research challenges and future trends of intrusion detection in dynamic scenarios of intrusion detection problems in diverse network technologies.

  17. Machine learning for medical ultrasound: status, methods, and future opportunities.

    Science.gov (United States)

    Brattain, Laura J; Telfer, Brian A; Dhyani, Manish; Grajo, Joseph R; Samir, Anthony E

    2018-04-01

    Ultrasound (US) imaging is the most commonly performed cross-sectional diagnostic imaging modality in the practice of medicine. It is low-cost, non-ionizing, portable, and capable of real-time image acquisition and display. US is a rapidly evolving technology with significant challenges and opportunities. Challenges include high inter- and intra-operator variability and limited image quality control. Tremendous opportunities have arisen in the last decade as a result of exponential growth in available computational power coupled with progressive miniaturization of US devices. As US devices become smaller, enhanced computational capability can contribute significantly to decreasing variability through advanced image processing. In this paper, we review leading machine learning (ML) approaches and research directions in US, with an emphasis on recent ML advances. We also present our outlook on future opportunities for ML techniques to further improve clinical workflow and US-based disease diagnosis and characterization.

  18. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    Science.gov (United States)

    Holzinger, Andreas

    2016-06-01

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.

  19. The share capital of ML Arvutid increased / Anne Oja

    Index Scriptorium Estoniae

    Oja, Anne, 1970-

    2006-01-01

    The owner of ML Arvutid, Aivar Paalberg, raised the company's share capital from the previous 10 million to 24 million, with the aim of strengthening its position in the Estonian market and growing faster than the market as a whole. Diagram: financial indicators

  20. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    International Nuclear Information System (INIS)

    Dral, Pavlo O.; Lilienfeld, O. Anatole von; Thiel, Walter

    2015-01-01

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
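
    The paper itself learns corrections to the SQC parameters; a simpler, related delta-learning sketch is shown below, in which a kernel model learns the difference between a cheap (semiempirical-like) energy and a reference value from molecular descriptors. All data and dimensions are synthetic assumptions for illustration only.

        # Hedged sketch of delta-learning: correct a cheap (semiempirical-like) energy
        # towards a reference value using a kernel model on molecular descriptors.
        # Synthetic data; the paper above learns the SQC parameters, not a correction.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(2)
        descriptors = rng.normal(size=(600, 20))             # hypothetical molecular descriptors
        e_reference = descriptors @ rng.normal(size=20)      # stand-in "ab initio" enthalpies
        e_cheap = e_reference + 0.3 * np.tanh(descriptors[:, 0]) + rng.normal(0, 0.05, 600)

        delta = e_reference - e_cheap                        # systematic error to be learned
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
        model.fit(descriptors[:400], delta[:400])

        e_corrected = e_cheap[400:] + model.predict(descriptors[400:])
        print("MAE cheap    :", np.mean(np.abs(e_reference[400:] - e_cheap[400:])))
        print("MAE corrected:", np.mean(np.abs(e_reference[400:] - e_corrected)))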

  1. Novel Machine Learning-Based Techniques for Efficient Resource Allocation in Next Generation Wireless Networks

    KAUST Repository

    AlQuerm, Ismail A.

    2018-02-21

    There is a large demand for applications of high data rates in wireless networks. These networks are becoming more complex and challenging to manage due to the heterogeneity of users and applications, specifically in sophisticated networks such as the upcoming 5G. Energy efficiency in the future 5G network is one of the essential problems that needs consideration due to the interference and heterogeneity of the network topology. Smart resource allocation, environmental adaptivity, user-awareness and energy efficiency are essential features in future networks. It is important to support these features across different network topologies with various applications. Cognitive radio has been found to be the paradigm that is able to satisfy the above requirements. It is a very interdisciplinary topic that incorporates flexible system architectures, machine learning, context awareness and cooperative networking. Mitola's vision of cognitive radio was to build context-sensitive smart radios that are able to adapt to wireless environment conditions while maintaining quality-of-service support for different applications. Artificial intelligence techniques, including heuristic algorithms and machine learning, are the tools employed to serve this new vision of cognitive radio. In addition, these techniques show potential to be utilized for efficient resource allocation in upcoming 5G network structures, such as heterogeneous multi-tier 5G networks and heterogeneous cloud radio access networks, due to their capability to allocate resources according to real-time data analytics. In this thesis, we study cognitive radio from a system point of view, focusing closely on architectures and artificial intelligence techniques that can enable intelligent radio resource allocation and efficient radio parameter reconfiguration. We propose a modular cognitive resource management architecture, which facilitates the development of flexible control for

  2. CytometryML and other data formats

    Science.gov (United States)

    Leif, Robert C.

    2006-02-01

    Cytology automation and research will be enhanced by the creation of a common data format. This data format would provide the pathology and research communities with a uniform way for annotating and exchanging images, flow cytometry, and associated data. This specification and/or standard will include descriptions of the acquisition device, staining, the binary representations of the image and list-mode data, the measurements derived from the image and/or the list-mode data, and descriptors for clinical/pathology and research. An international, vendor-supported, non-proprietary specification will allow pathologists, researchers, and companies to develop and use image capture/analysis software, as well as list-mode analysis software, without worrying about incompatibilities between proprietary vendor formats. Presently, efforts to create specifications and/or descriptions of these formats include the Laboratory Digital Imaging Project (LDIP) Data Exchange Specification; extensions to the Digital Imaging and Communications in Medicine (DICOM); Open Microscopy Environment (OME); Flowcyt, an extension to the present Flow Cytometry Standard (FCS); and CytometryML. The feasibility of creating a common data specification for digital microscopy and flow cytometry in a manner consistent with its use for medical devices and interoperability with both hospital information and picture archiving systems has been demonstrated by the creation of the CytometryML schemas. The feasibility of creating a software system for digital microscopy has been demonstrated by the OME. CytometryML consists of schemas that describe instruments and their measurements. These instruments include digital microscopes and flow cytometers. Optical components including the instruments' excitation and emission parts are described. The description of the measurements made by these instruments includes the tagged molecule, data acquisition subsystem, and the format of the list-mode and/or image data. Many

  3. Prediction of lung cancer patient survival via supervised machine learning classification techniques.

    Science.gov (United States)

    Lynch, Chip M; Abdollahi, Behnaz; Fuqua, Joshua D; de Carlo, Alexandra R; Bartholomai, James A; Balgemann, Rayeanne N; van Berkel, Victor H; Frieboes, Hermann B

    2017-12-01

    Outcomes for cancer patients have been previously estimated by applying various machine learning techniques to large datasets such as the Surveillance, Epidemiology, and End Results (SEER) program database. In particular for lung cancer, it is not well understood which types of techniques would yield more predictive information, and which data attributes should be used in order to determine this information. In this study, a number of supervised learning techniques are applied to the SEER database to classify lung cancer patients in terms of survival, including linear regression, Decision Trees, Gradient Boosting Machines (GBM), Support Vector Machines (SVM), and a custom ensemble. Key data attributes in applying these methods include tumor grade, tumor size, gender, age, stage, and number of primaries, with the goal of enabling comparison of predictive power between the various methods. The prediction is treated as a continuous target, rather than a classification into categories, as a first step towards improving survival prediction. The results show that the predicted values agree with actual values for low to moderate survival times, which constitute the majority of the data. The best performing technique was the custom ensemble with a Root Mean Square Error (RMSE) value of 15.05. The most influential model within the custom ensemble was GBM, while Decision Trees may be inapplicable as they had too few discrete outputs. The results further show that among the five individual models generated, the most accurate was GBM with an RMSE value of 15.32. Although SVM underperformed with an RMSE value of 15.82, statistical analysis singles out the SVM as the only model that generated a distinctive output. The results of the models are consistent with a classical Cox proportional hazards model used as a reference technique. We conclude that application of these supervised learning techniques to lung cancer data in the SEER database may be of use to estimate patient survival time
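
    A hedged sketch of this kind of comparison, using scikit-learn regressors and RMSE on placeholder data: the six SEER attributes are assumed to be already encoded numerically, and the simple averaging ensemble stands in for, but is not, the authors' custom ensemble.

        # Hedged sketch: compare regressors for survival-time prediction by RMSE.
        # X would hold encoded SEER attributes (grade, size, gender, age, stage, primaries).
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LinearRegression
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 6))                                  # placeholder for SEER features
        y = 20 + 5 * X[:, 0] - 3 * X[:, 3] + rng.normal(0, 4, 1000)     # placeholder survival months

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        models = {
            "linear": LinearRegression(),
            "tree": DecisionTreeRegressor(max_depth=5),
            "gbm": GradientBoostingRegressor(),
            "svm": SVR(),
        }
        preds = {}
        for name, m in models.items():
            preds[name] = m.fit(X_tr, y_tr).predict(X_te)
            print(name, "RMSE:", mean_squared_error(y_te, preds[name]) ** 0.5)

        ensemble = np.mean(list(preds.values()), axis=0)                # naive averaging "ensemble"
        print("ensemble RMSE:", mean_squared_error(y_te, ensemble) ** 0.5)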

  4. New Techniques for Deep Learning with Geospatial Data using TensorFlow, Earth Engine, and Google Cloud Platform

    Science.gov (United States)

    Hancher, M.

    2017-12-01

    Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
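
    A minimal sketch of the kind of model such a pipeline might train: a small convolutional network classifying multi-band image patches assumed to have been exported (e.g., from Earth Engine) as arrays. The patch size, band count, class count and data are placeholder assumptions, and the Earth Engine export and cloud-scale training steps are omitted.

        # Hedged sketch: a small CNN for land-cover classification of image patches.
        # Assumes patches have already been exported (e.g., from Earth Engine) as arrays.
        import numpy as np
        import tensorflow as tf

        NUM_CLASSES, PATCH, BANDS = 5, 64, 6                            # illustrative assumptions
        x = np.random.rand(256, PATCH, PATCH, BANDS).astype("float32")  # placeholder patches
        y = np.random.randint(0, NUM_CLASSES, size=256)                 # placeholder labels

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(PATCH, PATCH, BANDS)),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x, y, epochs=2, batch_size=32)                        # toy training run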

  5. Machine learning for epigenetics and future medical applications

    OpenAIRE

    Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.

    2017-01-01

    Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems w...

  6. ML-MG: Multi-label Learning with Missing Labels Using a Mixed Graph

    KAUST Repository

    Wu, Baoyuan

    2015-12-07

    This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e. some of their labels are missing). To handle missing labels, we propose a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges. Unlike most MLML methods, we formulate this learning problem transductively as a convex quadratic matrix optimization problem that encourages training label consistency and encodes both types of label dependencies (i.e. undirected and directed edges) using quadratic terms and hard linear constraints. The alternating direction method of multipliers (ADMM) can be used to exactly and efficiently solve this problem. To evaluate our proposed method, we consider two popular applications (image and video annotation), where the label hierarchy can be derived from Wordnet. Experimental results show that our method achieves a significant improvement over state-of-the-art methods in performance and robustness to missing labels.

  7. Prediction of mortality after radical cystectomy for bladder cancer by machine learning techniques.

    Science.gov (United States)

    Wang, Guanjin; Lam, Kin-Man; Deng, Zhaohong; Choi, Kup-Sze

    2015-08-01

    Bladder cancer is a common genitourinary malignancy. For muscle-invasive bladder cancer, surgical removal of the bladder, i.e. radical cystectomy, is in general the definitive treatment, which unfortunately carries significant morbidity and mortality. Accurate prediction of the mortality of radical cystectomy is therefore needed. Statistical methods have conventionally been used for this purpose, despite the complex interactions of high-dimensional medical data. Machine learning has emerged as a promising technique for handling high-dimensional data, with increasing application in clinical decision support, e.g. cancer prediction and prognosis. Its ability to reveal hidden nonlinear interactions and interpretable rules between dependent and independent variables is favorable for constructing models with effective generalization performance. In this paper, seven machine learning methods are utilized to predict the 5-year mortality of radical cystectomy, including the back-propagation neural network (BPN), radial basis function network (RBFN), extreme learning machine (ELM), regularized ELM (RELM), support vector machine (SVM), naive Bayes (NB) classifier and k-nearest neighbour (KNN), applied to a clinicopathological dataset of 117 patients from the urology unit of a hospital in Hong Kong. The experimental results indicate that RELM achieved the highest average prediction accuracy of 0.8 at a fast learning speed. The research findings demonstrate the potential of applying machine learning techniques to support clinical decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
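
    A hedged sketch of such a comparison with scikit-learn, on placeholder data of the same size (117 patients): cross-validated accuracy for several of the classifiers named above. ELM and RELM are omitted because scikit-learn has no built-in implementation; all features and labels are synthetic assumptions.

        # Hedged sketch: cross-validated accuracy for several of the classifiers named
        # above, on placeholder clinicopathological data (ELM/RELM omitted).
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(4)
        X = rng.normal(size=(117, 12))                                        # placeholder features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 117) > 0).astype(int)  # placeholder 5-year mortality

        classifiers = {
            "BPN (MLP)": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
            "SVM": SVC(),
            "NB": GaussianNB(),
            "KNN": KNeighborsClassifier(n_neighbors=5),
        }
        for name, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
            print(f"{name}: mean accuracy {scores.mean():.3f}")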

  8. A comparison of machine learning techniques for predicting downstream acid mine drainage

    CSIR Research Space (South Africa)

    van Zyl, TL

    2014-07-01

    Full Text Available A windowing approach over historical values is used to generate a prediction for the current value. We evaluate a number of Machine Learning techniques as regressors, including Support Vector Regression, Random Forests, Stochastic Gradient Descent Regression, Linear...
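
    A minimal sketch of the windowing idea on a synthetic series: each training example is the previous k observations and the target is the current value, with several scikit-learn regressors compared by RMSE. The series, window length and regressor settings are illustrative assumptions, not the study's data.

        # Hedged sketch of the windowing idea: predict the current value of a series
        # from the previous k observations, comparing a few regressors. Synthetic data.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import SGDRegressor, LinearRegression

        rng = np.random.default_rng(5)
        series = np.sin(np.linspace(0, 40, 800)) + 0.1 * rng.normal(size=800)  # placeholder water-quality signal
        k = 10                                                                  # window length (assumed)
        X = np.array([series[i:i + k] for i in range(len(series) - k)])
        y = series[k:]

        split = 600
        regressors = {
            "SVR": SVR(),
            "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
            "SGD": SGDRegressor(max_iter=2000),
            "Linear": LinearRegression(),
        }
        for name, reg in regressors.items():
            reg.fit(X[:split], y[:split])
            rmse = np.sqrt(np.mean((reg.predict(X[split:]) - y[split:]) ** 2))
            print(f"{name}: RMSE {rmse:.4f}")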

  9. Exploring the Earth Using Deep Learning Techniques

    Science.gov (United States)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Research using deep neural networks has significantly matured in recent times, and there is now a surge in interest to apply such methods to Earth systems science and the geosciences. When combined with Big Data, we believe there are opportunities for significantly transforming a number of areas relevant to researchers and policy makers. In particular, by using a combination of data from a range of satellite Earth observations as well as computer simulations from climate models and reanalysis, we can gain new insights into the information that is locked within the data. Global geospatial datasets describe a wide range of physical and chemical parameters, which are mostly available using regular grids covering large spatial and temporal extents. This makes them perfect candidates to apply deep learning methods. So far, these techniques have been successfully applied to image analysis through the use of convolutional neural networks. However, this is only one field of interest, and there is potential for many more use cases to be explored. Deep learning algorithms require fast access to large amounts of data in the form of tensors and make intensive use of CPU in order to train their models. The Australian National Computational Infrastructure (NCI) has recently augmented its Raijin 1.2 PFlop supercomputer with hardware accelerators. Together with NCI's 3000 core high performance OpenStack cloud, these computational systems have direct access to NCI's 10+ PBytes of datasets and associated Big Data software technologies (see http://geonetwork.nci.org.au/ and http://nci.org.au/systems-services/national-facility/nerdip/). Using these computing infrastructures effectively requires that both the data and software are organised in a way that readily supports the deep learning software ecosystem. Deep learning software, such as the open source TensorFlow library, has allowed us to demonstrate the possibility of generating geospatial models by combining information from

  10. Machine-learning techniques applied to antibacterial drug discovery.

    Science.gov (United States)

    Durrant, Jacob D; Amaro, Rommie E

    2015-01-01

    The emergence of drug-resistant bacteria threatens to revert humanity back to the preantibiotic era. Even now, multidrug-resistant bacterial infections annually result in millions of hospital days, billions in healthcare costs, and, most importantly, tens of thousands of lives lost. As many pharmaceutical companies have abandoned antibiotic development in search of more lucrative therapeutics, academic researchers are uniquely positioned to fill the pipeline. Traditional high-throughput screens and lead-optimization efforts are expensive and labor intensive. Computer-aided drug-discovery techniques, which are cheaper and faster, can accelerate the identification of novel antibiotics, leading to improved hit rates and faster transitions to preclinical and clinical testing. The current review describes two machine-learning techniques, neural networks and decision trees, that have been used to identify experimentally validated antibiotics. We conclude by describing the future directions of this exciting field. © 2015 John Wiley & Sons A/S.

  11. ML-Ask: Open Source Affect Analysis Software for Textual Input in Japanese

    Directory of Open Access Journals (Sweden)

    Michal Ptaszynski

    2017-06-01

    Full Text Available We present ML-Ask – the first Open Source Affect Analysis system for textual input in Japanese. ML-Ask analyses the contents of an input (e.g., a sentence) and annotates it with information regarding the contained general emotive expressions, specific emotional words, valence-activation dimensions of the overall expressed affect, and particular emotion types expressed with their respective expressions. ML-Ask also incorporates the Contextual Valence Shifters model for handling negation in sentences to deal with grammatically expressible shifts in the conveyed valence. The system, designed to work mainly under Linux and MacOS, can be used for research on, or application of, Affect Analysis techniques within the framework of the Japanese language. It can also be used as an experimental baseline for specific research in Affect Analysis, and as a practical tool for written-content annotation.   Funding statement: This research has been supported by: a Research Grant from the Nissan Science Foundation (years 2009–2010), the GCOE Program founded by Japan's Ministry of Education, Culture, Sports, Science and Technology (years 2009–2010), a JSPS KAKENHI Grant-in-Aid for JSPS Fellows (Project Number: 22-00358) (years 2010–2012), a JSPS KAKENHI Grant-in-Aid for Scientific Research (Project Number: 24600001) (years 2012–2015), a JSPS KAKENHI Grant-in-Aid for Research Activity Start-up (Project Number: 25880003) (years 2013–2015), and a JSPS KAKENHI Grant-in-Aid for Encouragement of Young Scientists (B) (Project Number: 15K16044) (years 2015–present, project estimated to end in March 2018).

  12. Suppression of phase separation in $(AlAs)_{2ML}(InAs)_{2ML}$ superlattices using $Al_{0.48}In_{0.52}As$ monolayer insertions

    CERN Document Server

    Lee, S R; Follstaedt, D M

    2001-01-01

    $Al_{0.48}In_{0.52}As$ monolayers (ML) are inserted at the binary-compound interfaces of $(AlAs)_{2ML}(InAs)_{2ML}$ short-period superlattices (SPSs) during growth on (001) InP. The insertion of $Al_{0.48}In_{0.52}As$ interlayers greater than 2 ML thick tends to suppress the phase separation that normally occurs during molecular beam epitaxy of the SPS. The degree of suppression is a sensitive function of both the monolayer-scale thickness and the intraperiod growth sequence of the interlayers in the SPS. Given this sensitivity to monolayer-scale variations in the surface-region composition, we propose that cyclical phase transition of the reconstructed surface initiates SPS decomposition. (21 refs).

  13. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    Science.gov (United States)

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have yielded good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forests models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species
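
    A hedged sketch of random-forest identification from FAME-like profiles, reporting per-class sensitivity (recall). The number of species, feature dimension and profiles below are placeholder assumptions, not the BAME@LMG data.

        # Hedged sketch: random-forest identification from FAME-like profiles, with
        # per-class sensitivity (recall). All shapes and data are illustrative only.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(6)
        n_species = 10                                        # placeholder number of species
        X = rng.normal(size=(900, 25))                        # placeholder FAME peak areas
        y = rng.integers(0, n_species, size=900)
        X[np.arange(900), y] += 2.0                           # make classes separable for the demo

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        sens = recall_score(y_te, clf.predict(X_te), average=None)   # sensitivity per species
        print("mean sensitivity:", sens.mean())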

  14. SED-ML web tools: generate, modify and export standard-compliant simulation studies.

    Science.gov (United States)

    Bergmann, Frank T; Nickerson, David; Waltemath, Dagmar; Scharm, Martin

    2017-04-15

    The Simulation Experiment Description Markup Language (SED-ML) is a standardized format for exchanging simulation studies independently of software tools. We present the SED-ML Web Tools, an online application for creating, editing, simulating and validating SED-ML documents. The Web Tools implement all current SED-ML specifications and thus support complex modifications and co-simulation of models in SBML and CellML formats. Ultimately, the Web Tools lower the barrier to working with SED-ML documents and help users create valid simulation descriptions. http://sysbioapps.dyndns.org/SED-ML_Web_Tools/ . fbergman@caltech.edu . © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique

    Directory of Open Access Journals (Sweden)

    Manoja Kumar Behera

    2018-06-01

    Full Text Available Prediction of photovoltaic power is a significant research area, with different forecasting techniques mitigating the effects of the uncertainty of photovoltaic generation. Increasingly high penetration levels of photovoltaic (PV) generation arise in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and penetration into the conventional power system introduces significant challenges to microgrid and smart grid energy management. It is very critical to forecast solar power/irradiance accurately in order to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. Here the model is associated with an incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional-integral (PI) controller, which is simulated in MATLAB/SIMULINK software. To train the single-layer feed-forward network (SLFN), the ELM algorithm is implemented, whose weights are updated by different particle swarm optimization (PSO) techniques, and their performance is compared with existing models such as the back-propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network
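
    A minimal sketch of a basic extreme learning machine: the hidden-layer weights of a single-layer feed-forward network are drawn at random and only the output weights are solved in closed form by least squares. The PSO-based weight tuning discussed in the abstract is omitted, and the PV-like inputs and target are placeholder assumptions.

        # Hedged sketch of a basic extreme learning machine (ELM): random hidden-layer
        # weights, output weights solved by least squares. PSO tuning is omitted and
        # the PV-style data are placeholders.
        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.uniform(size=(500, 3))                 # placeholder: irradiance, temperature, hour
        y = 2.0 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * np.sin(6 * X[:, 2])  # placeholder PV power

        n_hidden = 50
        W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (kept fixed)
        b = rng.normal(size=n_hidden)                  # random biases (kept fixed)

        def hidden(X):
            return np.tanh(X @ W + b)                  # SLFN hidden-layer activations

        H = hidden(X[:400])
        beta, *_ = np.linalg.lstsq(H, y[:400], rcond=None)   # output weights in closed form

        pred = hidden(X[400:]) @ beta
        print("test RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))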

  16. SysML for systems engineering a model-based approach

    CERN Document Server

    Holt, Jon

    2013-01-01

    This new edition of this popular text has been fully updated to reflect SysML 1.3, the latest version of the standard, and the discussion has been extended to show the power of SysML as a tool for systems engineering in an MBSE context.

  17. The ATLAS Higgs machine learning challenge

    CERN Document Server

    Davey, W; The ATLAS collaboration; Rousseau, D; Cowan, G; Kegl, B; Germain-Renaud, C; Guyon, I

    2014-01-01

    High Energy Physics has been using Machine Learning techniques (commonly known as Multivariate Analysis) since the 90's, with Artificial Neural Nets for example, and more recently with Boosted Decision Trees, Random Forests, etc. Meanwhile, Machine Learning has become a full-blown field of computer science. With the emergence of Big Data, data scientists are developing new Machine Learning algorithms to extract sense from large heterogeneous data. HEP has exciting and difficult problems like the extraction of the Higgs boson signal, and data scientists have advanced algorithms: the goal of the HiggsML project is to bring the two together by a "challenge": participants from all over the world and any scientific background can compete online ( https://www.kaggle.com/c/higgs-boson ) to obtain the best Higgs to tau tau signal significance on a set of ATLAS fully simulated Monte Carlo signal and background. Winners with the best scores will receive money prizes; authors of the best method (most usable) will be invited t...

  18. Exploring Graduate Students' Perspectives towards Using Gamification Techniques in Online Learning

    Directory of Open Access Journals (Sweden)

    Daniah ALABBASI

    2017-07-01

    Full Text Available Teachers and educational institutions are attempting to find an appropriate strategy to motivate as well as engage students in the learning process. Institutions are encouraging the use of gamification in education for the purpose of improving the intrinsic motivation as well as engagement. However, the students’ perspective of the issue is under-investigated. The purpose of this research study was to explore graduate students’ perspectives toward the use of gamification techniques in online learning. The study used exploratory research and survey as the data collection tool. Forty-seven graduate students (n = 47 enrolled in an instructional technology program studied in a learning management system that supports gamification (TalentLMS. The average total percentages were calculated for each survey section to compose the final perspective of the included students. The results showed a positive perception toward the use of gamification tools in online learning among graduate students. Students require effort-demanding, challenging, sophisticated learning systems that increase competency, enhance recall memory, concentration, attentiveness, commitment, and social interaction. Limitations of the study are identified, which highlights the need for further research on the subject matter.

  19. 76 FR 45334 - Innovative Techniques for Delivering ITS Learning; Request for Information

    Science.gov (United States)

    2011-07-28

    DEPARTMENT OF TRANSPORTATION, Research and Innovative Technology Administration. Innovative Techniques for Delivering ITS Learning; Request for Information. AGENCY: Research and Innovative Technology... Excerpted questions from the notice include: "... adult learners? 5. Are you aware of any ITS training applications that work on a mobile phone or smart..."

  20. QuakeML: Status of the XML-based Seismological Data Exchange Format

    Science.gov (United States)

    Euchner, Fabian; Schorlemmer, Danijel; Kästli, Philipp; Quakeml Working Group

    2010-05-01

    QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. The current release (version 1.2) is based on a public Request for Comments process that included contributions from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, Nanometrics, and ISTI. QuakeML has mainly been funded through the EC FP6 infrastructure project NERIES, in which it was endorsed as the preferred data exchange format. Currently, QuakeML services are being installed at several institutions around the globe, including EMSC, ORFEUS, ETH, Geoazur (Europe), NEIC, ANSS, SCEC/SCSN (USA), and GNS Science (New Zealand). Some of these institutions already provide QuakeML earthquake catalog web services. Several implementations of the QuakeML data model have been made. QuakePy, an open-source Python-based seismicity analysis toolkit using the QuakeML data model, is being developed at ETH. QuakePy is part of the software stack used in the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center installations, developed by SCEC. Furthermore, the QuakeML data model is part of the SeisComP3 package from GFZ Potsdam. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, seismic inventory, and resource metadata) has been started, but is at an early stage. Contributions from the community that help to widen the thematic coverage of QuakeML are highly welcome. Online resources: http://www.quakeml.org, http://www.quakepy.org

  1. WaterML: an XML Language for Communicating Water Observations Data

    Science.gov (United States)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of the Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.

  2. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    Full Text Available In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, Template Matching techniques like Dynamic Time Warping (DTW), Statistical Pattern Matching techniques such as the Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM), and Machine Learning techniques such as Neural Networks (NN), Support Vector Machines (SVM), and Decision Trees (DT) are most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy is achieved with the HMM technique. It offered 100% accuracy during the training process and 97.92% for the testing process.
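
    Of the techniques listed above, Dynamic Time Warping is the simplest to illustrate; the sketch below computes the classic DTW distance between two one-dimensional sequences (in a real ASR system this would be applied to frames of acoustic features such as MFCCs). The toy template and utterance are assumptions for illustration only.

        # Hedged sketch: classic dynamic time warping (DTW) distance between two
        # sequences, as used for template-matching ASR (normally on MFCC frames).
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])          # local distance
                    D[i, j] = cost + min(D[i - 1, j],        # insertion
                                         D[i, j - 1],        # deletion
                                         D[i - 1, j - 1])    # match
            return D[n, m]

        template = np.sin(np.linspace(0, 3, 40))             # stored word template (toy)
        utterance = np.sin(np.linspace(0, 3, 55)) + 0.05     # same "word", different speed
        print("DTW distance:", dtw_distance(template, utterance))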

  3. Comment on 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study'.

    Science.gov (United States)

    Valdes, Gilmer; Interian, Yannet

    2018-03-15

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study' with great interest. In this article, the authors used state-of-the-art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, dropout and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics, and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from the fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow-up studies that could help the community understand the limitations and nuances of deep learning techniques.
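
    A minimal sketch of the techniques named in the comment (a frozen pre-trained VGG-16 base, a small new head with dropout, and early stopping), written with Keras on placeholder data; it is not the original study's architecture, and the input shape, label semantics and hyperparameters are assumptions.

        # Hedged sketch: frozen pre-trained VGG-16 base, small new head with dropout,
        # and early stopping. Data shapes and labels are placeholders, not the study's.
        import numpy as np
        import tensorflow as tf

        x = np.random.rand(64, 224, 224, 3).astype("float32")   # placeholder images
        y = np.random.randint(0, 2, size=64)                     # placeholder binary labels

        base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                           input_shape=(224, 224, 3))
        base.trainable = False                                   # transfer learning: freeze the base

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dropout(0.5),                        # dropout regularization
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        stop = tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True)
        model.fit(x, y, validation_split=0.25, epochs=5, callbacks=[stop])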

  4. Improving face image extraction by using deep learning technique

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
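
    A hedged sketch of the first stage only: Viola-Jones face detection with OpenCV's bundled Haar cascade. The input path is a placeholder, and the deep-learning step used to filter false positives is not shown.

        # Hedged sketch of the first stage only: Viola-Jones face detection with
        # OpenCV's bundled Haar cascade. "article_figure.png" is a placeholder path;
        # the deep-learning false-positive filter described above is not shown.
        import cv2

        cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(cascade_path)

        image = cv2.imread("article_figure.png")                # placeholder input image
        if image is None:
            raise SystemExit("placeholder image not found")
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:                              # candidate face regions
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(f"{len(faces)} candidate face region(s) detected")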

  5. Machine Learning for Healthcare: On the Verge of a Major Shift in Healthcare Epidemiology.

    Science.gov (United States)

    Wiens, Jenna; Shenoy, Erica S

    2018-01-06

    The increasing availability of electronic health data presents a major opportunity in healthcare for both discovery and practical applications to improve healthcare. However, for healthcare epidemiologists to best use these data, computational techniques that can handle large complex datasets are required. Machine learning (ML), the study of tools and methods for identifying patterns in data, can help. The appropriate application of ML to these data promises to transform patient risk stratification broadly in the field of medicine and especially in infectious diseases. This, in turn, could lead to targeted interventions that reduce the spread of healthcare-associated pathogens. In this review, we begin with an introduction to the basics of ML. We then move on to discuss how ML can transform healthcare epidemiology, providing examples of successful applications. Finally, we present special considerations for those healthcare epidemiologists who want to use and apply ML. © The Author(s) 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  6. Learning outcomes and effective communication techniques for hazard recognition learning programmes in the transportation thrust area.

    CSIR Research Space (South Africa)

    Krige, PD

    2001-12-01

    Full Text Available Table-of-contents excerpt: "... on South African mines"; "People development and training techniques associated with confidence, attitudes and leadership"; "Recommended learning..." The factors discussed include rules and procedures, safety commitment of management, supervision style, organising for safety, and equipment design and maintenance. Only the last two are engineering issues. The trend is clear. Improvements in engineering design have significantly...

  7. Student’s Perceptions on Simulation as Part of Experiential Learning in Approaches, Methods, and Techniques (AMT Course

    Directory of Open Access Journals (Sweden)

    Marselina Karina Purnomo

    2017-03-01

    Full Text Available Simulation is a part of Experiential Learning which represents certain real-life events. In this study, simulation is used as a learning activity in the Approaches, Methods, and Techniques (AMT) course, which is one of the courses in the English Language Education Study Program (ELESP) of Sanata Dharma University. Since simulation represents real-life events, it encourages students to apply the approaches, methods, and techniques being studied in a setting based on the real-life classroom. Several experts state that students are able to involve their personal experiences through simulation, which is additionally believed to create meaningful learning in class. This study aimed to discover ELESP students' perceptions toward simulation as a part of Experiential Learning in the AMT course. From the findings, it could be inferred that students agreed that in-class simulation was important for their learning because it created meaningful learning in class.  DOI: https://doi.org/10.24071/llt.2017.200104

  8. Preparing for the future: opportunities for ML in ATLAS & CMS

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    ML is an established tool in HEP and there are many examples which demonstrate its importance for the kind of classification and regression problem we have in our field. However, there is also a big potential for future applications in yet untapped areas. I will summarise these opportunities and highlight recent, ongoing and planned studies of novel ML applications in HEP. Certain aspects of the problems we are faced with in HEP are quite unique and represent interesting benchmark problems for the ML community as a whole. Hence, efficient communication and close interaction between the ML and HEP community is expected to lead to promising cross-fertilisation. This talk attempts to serve as a starting point for such a prospective collaboration.

  9. Behavioral and functional neuroanatomical correlates of anterograde autobiographical memory in isolated retrograde amnesic patient M.L.

    Science.gov (United States)

    Levine, Brian; Svoboda, Eva; Turner, Gary R; Mandic, Marina; Mackey, Allison

    2009-09-01

    Patient M.L. [Levine, B., Black, S. E., Cabeza, R., Sinden, M., Mcintosh, A. R., Toth, J. P., et al. (1998). Episodic memory and the self in a case of isolated retrograde amnesia. Brain, 121, 1951-1973], lost memory for events occurring before his severe traumatic brain injury, yet his anterograde (post-injury) learning and memory appeared intact, a syndrome known as isolated or focal retrograde amnesia. Studies with M.L. demonstrated a dissociation between episodic and semantic memory. His retrograde amnesia was specific to episodic autobiographical memory. Convergent behavioral and functional imaging data suggested that his anterograde memory, while appearing normal, was accomplished with reduced autonoetic awareness (awareness of the self as a continuous entity across time that is a crucial element of episodic memory). While previous research on M.L. focused on anterograde memory of laboratory stimuli, in this study, M.L.'s autobiographical memory for post-injury events or anterograde autobiographical memory was examined using prospective collection of autobiographical events via audio diary with detailed behavioral and functional neuroanatomical analysis. Consistent with his reports of subjective disconnection from post-injury autobiographical events, M.L. assigned fewer "remember" ratings to his autobiographical events than comparison subjects. His generation of event-specific details using the Autobiographical Interview [Levine, B., Svoboda, E., Hay, J., Winocur, G., & Moscovitch, M. (2002). Aging and autobiographical memory: dissociating episodic from semantic retrieval. Psychology and Aging, 17, 677-689] was low, but not significantly so, suggesting that it is possible to generate episodic-like details even when re-experiencing of those details is compromised. While listening to the autobiographical audio diary segments, M.L. showed reduced activation relative to comparison subjects in midline frontal and posterior nodes previously identified as part of the

  10. Using Machine Learning in Adversarial Environments.

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reactions, with the goal of computing optimal moving target defense. One important challenge is to construct a realistic model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.

  11. A MuDDy Experience-ML Bindings to a BDD Library

    DEFF Research Database (Denmark)

    Larsen, Ken Friis

    2009-01-01

    . This combination of an ML interface to a high-performance C library is surprisingly fruitful. ML allows you to quickly experiment with high-level symbolic algorithms before handing over the grunt work to the C library. I show how, with a relatively little effort, you can make a domain specific language...... for concurrent finite state-machines embedded in Standard ML and then write various custom model-checking algorithms for this domain specific embedded language (DSEL)....

  12. Microsoft Azure machine learning

    CERN Document Server

    Mund, Sumit

    2015-01-01

    The book is intended for those who want to learn how to use Azure Machine Learning. Perhaps you already know a bit about Machine Learning, but have never used ML Studio in Azure; or perhaps you are an absolute newbie. In either case, this book will get you up-and-running quickly.

  13. The evolution of the CUAHSI Water Markup Language (WaterML)

    Science.gov (United States)

    Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.

    2009-04-01

    The CUAHSI Hydrologic Information System (HIS, his.cuahsi.org) uses web services as the core data exchange mechanism which provides programmatic connection between many heterogeneous sources of hydrologic data and a variety of online and desktop client applications. The service message schema follows the CUAHSI Water Markup Language (WaterML) 1.x specification (see OGC Discussion Paper 07-041r1). Data sources that can be queried via WaterML-compliant water data services include national and international repositories such as USGS NWIS (National Water Information System), USEPA STORET (Storage & Retrieval), USDA SNOTEL (Snowpack Telemetry), NCDC ISH and ISD(Integrated Surface Hourly and Daily Data), MODIS (Moderate Resolution Imaging Spectroradiometer), and DAYMET (Daily Surface Weather Data and Climatological Summaries). Besides government data sources, CUAHSI HIS provides access to a growing number of academic hydrologic observation networks. These networks are registered by researchers associated with 11 hydrologic observatory testbeds around the US, and other research, government and commercial groups wishing to join the emerging CUAHSI Water Data Federation. The Hydrologic Information Server (HIS Server) software stack deployed at NSF-supported hydrologic observatory sites and other universities around the country, supports a hydrologic data publication workflow which includes the following steps: (1) observational data are loaded from static files or streamed from sensors into a local instance of an Observations Data Model (ODM) database; (2) a generic web service template is configured for the new ODM instance to expose the data as a WaterML-compliant water data service, and (3) the new water data service is registered at the HISCentral registry (hiscentral.cuahsi.org), its metadata are harvested and semantically tagged using concepts from a hydrologic ontology. As a result, the new service is indexed in the CUAHSI central metadata catalog, and becomes

  14. In silico machine learning methods in drug development.

    Science.gov (United States)

    Dobchev, Dimitar A; Pillai, Girinath G; Karelson, Mati

    2014-01-01

    Machine learning (ML) computational methods for predicting compounds with pharmacological activity, specific pharmacodynamic and ADMET (absorption, distribution, metabolism, excretion and toxicity) properties are being increasingly applied in drug discovery and evaluation. Recently, machine learning techniques such as artificial neural networks, support vector machines and genetic programming have been explored for predicting inhibitors, antagonists, blockers, agonists, activators and substrates of proteins related to specific therapeutic targets. These methods are particularly useful for screening compound libraries of diverse chemical structures, "noisy" and high-dimensional data to complement QSAR methods, and in cases of unavailable receptor 3D structure to complement structure-based methods. A variety of studies have demonstrated the potential of machine-learning methods for predicting compounds as potential drug candidates. The present review is intended to give an overview of the strategies and current progress in using machine learning methods for drug design and the potential of the respective model development tools. We also review a number of applications of the machine learning algorithms based on common classes of diseases.
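
    The following is a minimal, hedged sketch of the kind of workflow the review surveys: a support vector machine trained to separate "active" from "inactive" compounds. The descriptor matrix and activity labels are synthetic stand-ins for computed molecular descriptors and assay outcomes.

    ```python
    # Sketch of an ML-based activity classifier; the descriptors are synthetic
    # stand-ins for real QSAR features, not data from any cited study.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_compounds, n_descriptors = 500, 64
    X = rng.normal(size=(n_compounds, n_descriptors))   # stand-in molecular descriptors
    # stand-in activity labels, loosely dependent on the first few descriptors
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_compounds) > 0).astype(int)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("cross-validated ROC AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
    ```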

  15. Neural robust stabilization via event-triggering mechanism and adaptive learning technique.

    Science.gov (United States)

    Wang, Ding; Liu, Derong

    2018-06-01

    The robust control synthesis of continuous-time nonlinear systems with uncertain term is investigated via event-triggering mechanism and adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This can not only make better use of computation and communication resources, but also conduct controller design from the view of intelligent optimization. Through theoretical analysis, the nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator of the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities in its framework. Furthermore, this method has not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare DNN with three other ML approaches for predicting 5-year stroke occurrence. The result shows that DNN and gradient boosting decision tree (GBDT) can result in similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, DNN achieves optimal results using smaller amounts of patient data than the GBDT method.
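
    A schematic version of this comparison, on synthetic data rather than the claims database, might look like the sketch below; a scikit-learn multilayer perceptron stands in for the DNN, and cross-validated AUC is the only metric shown.

    ```python
    # Hedged sketch of the comparison protocol on synthetic imbalanced data
    # (not the EMC database): LR, SVM, GBDT, and a small neural network.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=5000, n_features=30, weights=[0.95, 0.05], random_state=0)

    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "GBDT": GradientBoostingClassifier(),
        "DNN (MLP stand-in)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=3, scoring="roc_auc")
        print(f"{name:20s} AUC = {auc.mean():.3f}")
    ```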

  17. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’

    Science.gov (United States)

    Valdes, Gilmer; Interian, Yannet

    2018-03-01

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, and thus we read ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’ with great interest. In this article, the authors used state-of-the-art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, dropout and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow-up studies that could help the community understand the limitations and nuances of deep learning techniques.
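
    For readers unfamiliar with the techniques named above, the sketch below shows a generic transfer-learning recipe (pre-trained VGG-16, dropout, early stopping). It is not the authors' model; the input shape, layer sizes, and binary toxicity label are assumptions made purely for illustration.

    ```python
    # Generic transfer-learning sketch: freeze pre-trained VGG-16 features and
    # train a small classification head with dropout and early stopping.
    import tensorflow as tf
    from tensorflow.keras import layers, models, callbacks

    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                      # transfer learning: keep convolutional features fixed

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                    # dropout regularization
        layers.Dense(1, activation="sigmoid"),  # assumed binary toxicity outcome
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])

    early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                         restore_best_weights=True)
    # model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
    ```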

  18. Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games

    KAUST Repository

    Jaleel, Hassan; Shamma, Jeff S.

    2018-01-01

    dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics have the same stochastically stable states, LLL and ML correspond to different behavioral models for decision making. Moreover, we demonstrate through
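
    A minimal sketch of log-linear learning in a 2x2 coordination game is given below; the payoff matrix and temperature are invented for illustration. At each step a randomly selected player revises its action with probabilities proportional to exp(utility / temperature); Metropolis learning differs mainly in the revision rule, while sharing the same stochastically stable states.

    ```python
    # Toy log-linear learning (LLL) in a symmetric 2x2 coordination game.
    import numpy as np

    rng = np.random.default_rng(0)
    # Payoff for coordinating on action 0 or action 1 (action 1 is payoff-dominant).
    payoff = np.array([[1.0, 0.0],
                       [0.0, 2.0]])

    def log_linear_step(actions, tau=0.1):
        i = rng.integers(2)                       # pick one player to revise
        other = actions[1 - i]
        u = np.array([payoff[a, other] for a in (0, 1)])
        p = np.exp(u / tau)
        p /= p.sum()                              # softmax over the player's actions
        actions[i] = rng.choice(2, p=p)
        return actions

    actions = [0, 0]
    for _ in range(10000):
        actions = log_linear_step(actions)
    # For small tau the process concentrates on the stochastically stable (1, 1) profile.
    print("final joint action:", actions)
    ```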

  19. An Interactive Learning Environment for Teaching the Imperative and Object-Oriented Programming Techniques in Various Learning Contexts

    Science.gov (United States)

    Xinogalos, Stelios

    The acquisition of problem-solving and programming skills in the era of knowledge society seems to be particularly important. Due to the intrinsic difficulty of acquiring such skills various educational tools have been developed. Unfortunately, most of these tools are not utilized. In this paper we present the programming microworlds Karel and objectKarel that support the procedural-imperative and Object-Oriented Programming (OOP) techniques and can be used for supporting the teaching and learning of programming in various learning contexts and audiences. The paper focuses on presenting the pedagogical features that are common to both environments and mainly on presenting the potential uses of these environments.

  20. [Learning experience of acupuncture technique from professor ZHANG Jin].

    Science.gov (United States)

    Xue, Hongsheng; Zhang, Jin

    2017-08-12

    As a famous acupuncturist in the world, professor ZHANG Jin believes the key to acupuncture technique is the use of force, and the understanding of "concentrating the force into needle body" is essential to understanding the essence of acupuncture technique. With deep study of Huangdi Neijing (The Inner Canon of Huangdi) and Zhenjiu Dacheng (Compendium of Acupuncture and Moxibustion), the author further learned professor ZHANG Jin's theory and operation specification of "concentrating force into needle body, so the force arriving before and together with needle". The whole-body force should be subtly focused on the tip of the needle, and gentle force at the tip of the needle could achieve a significant reinforcing and reducing effect. In addition, proper timing at the tip of the needle could start the reinforcing and reducing effect, lead qi to the disease location, and achieve superior clinical efficacy.

  1. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    Full Text Available An important issue for agricultural planning purposes is the accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data of an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
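
    The four accuracy metrics are straightforward to reproduce; the sketch below applies them to two of the compared method families on synthetic yield data (a regression tree stands in for M5-Prime, which is not available in scikit-learn), purely to illustrate the evaluation protocol.

    ```python
    # Sketch of the evaluation protocol: RMSE, RRSE, MAE and R on synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split

    def rmse(y, yhat):  return np.sqrt(np.mean((y - yhat) ** 2))
    def rrse(y, yhat):  return np.sqrt(np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))
    def mae(y, yhat):   return np.mean(np.abs(y - yhat))
    def corr(y, yhat):  return np.corrcoef(y, yhat)[0, 1]

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(400, 6))                                           # stand-in agro-climatic inputs
    y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=400)    # stand-in crop yield
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("tree (M5-Prime stand-in)", DecisionTreeRegressor(max_depth=6)),
                        ("kNN", KNeighborsRegressor(n_neighbors=5))]:
        yhat = model.fit(X_tr, y_tr).predict(X_te)
        print(name, "RMSE=%.2f RRSE=%.2f MAE=%.2f R=%.2f"
              % (rmse(y_te, yhat), rrse(y_te, yhat), mae(y_te, yhat), corr(y_te, yhat)))
    ```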

  2. CytometryML: a markup language for analytical cytology

    Science.gov (United States)

    Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.

    2003-06-01

    Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.

  3. VarioML framework for comprehensive variation data representation and exchange.

    Science.gov (United States)

    Byrne, Myles; Fokkema, Ivo Fac; Lancaster, Owen; Adamusiak, Tomasz; Ahonen-Bishopp, Anni; Atlan, David; Béroud, Christophe; Cornell, Michael; Dalgleish, Raymond; Devereau, Andrew; Patrinos, George P; Swertz, Morris A; Taschner, Peter Em; Thorisson, Gudmundur A; Vihinen, Mauno; Brookes, Anthony J; Muilu, Juha

    2012-10-03

    Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs) e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity.

  4. Reactive Programming in Standard ML

    OpenAIRE

    Pucella, Riccardo

    2004-01-01

    Reactive systems are systems that maintain an ongoing interaction with their environment, activated by receiving input events from the environment and producing output events in response. Modern programming languages designed to program such systems use a paradigm based on the notions of instants and activations. We describe a library for Standard ML that provides basic primitives for programming reactive systems. The library is a low-level system upon which more sophisticated reactive behavi...

  5. Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights

    Science.gov (United States)

    Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd

    2017-11-01

    This paper compares the performances of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria to select the best weights method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches, which include blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best ranked criteria weights, defined as the weights that have the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach using the TOPSIS method. The results show that the RR weights are the best, while the flipped classroom approach is the most suitable approach. This paper has developed a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
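
    The rank-based weights and the TOPSIS ranking step can be illustrated compactly; in the sketch below the 5x5 decision matrix is invented, whereas the real study used expert ratings of the five e-learning approaches against the five criteria. Rank Sum, Rank Reciprocal, and Rank Order Centroid weights follow their standard closed-form definitions.

    ```python
    # Rank-based criteria weights plus a basic TOPSIS ranking, on made-up ratings.
    import numpy as np

    def rank_sum(n):        r = np.arange(1, n + 1); return 2 * (n + 1 - r) / (n * (n + 1))
    def rank_reciprocal(n): r = np.arange(1, n + 1); return (1 / r) / np.sum(1 / r)
    def rank_centroid(n):   return np.array([np.sum(1 / np.arange(i, n + 1)) / n for i in range(1, n + 1)])

    def topsis(matrix, weights):
        norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))    # vector normalization
        v = norm * weights
        ideal, anti = v.max(axis=0), v.min(axis=0)            # all criteria treated as benefits here
        d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_neg / (d_pos + d_neg)                        # relative closeness, higher is better

    ratings = np.array([[7, 8, 6, 7, 8],    # blended learning (invented scores)
                        [8, 9, 7, 8, 7],    # flipped classroom
                        [6, 7, 8, 6, 6],    # ICT-supported face-to-face
                        [5, 6, 7, 7, 5],    # synchronous
                        [6, 5, 6, 8, 6]],   # asynchronous
                       dtype=float)
    closeness = topsis(ratings, rank_reciprocal(5))
    print("closeness scores:", np.round(closeness, 3))
    ```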

  6. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare between a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques have been applied on SST and SLP data (1948-2014) obtained from NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the result revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long-range climatic projections.
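
    A toy contrast between the two strategies might look like the sketch below, where a random forest is fitted either to same-year stand-ins for the SST/SLP predictors or to lagged rainfall values; the series is synthetic and the error numbers are meaningless beyond illustrating the setup.

    ```python
    # Predictor-based vs. time-series-based forecasting on a synthetic OND series.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    years = 67                                            # 1948-2014
    sst = rng.normal(size=years); slp = rng.normal(size=years)
    rain = 100 + 20 * sst - 10 * slp + rng.normal(scale=5, size=years)

    # (a) predictor-based: features are same-year ocean-atmosphere indices
    Xa = np.column_stack([sst, slp])
    # (b) time-series based: features are the three previous years of rainfall
    lags = 3
    Xb = np.column_stack([rain[i:years - lags + i] for i in range(lags)])
    yb = rain[lags:]

    for name, X, y in [("predictor-based", Xa[lags:], yb), ("time-series", Xb, yb)]:
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        split = len(y) - 10                               # hold out the last decade
        model.fit(X[:split], y[:split])
        err = np.abs(model.predict(X[split:]) - y[split:]).mean()
        print(f"{name:16s} mean absolute error = {err:.1f} mm")
    ```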

  7. Potentials and limitations of low-concentration contrast medium (150 mg iodine/ml) in CT pulmonary angiography

    International Nuclear Information System (INIS)

    Radon, M.R.; Kaduthodil, M.J.; Jagdish, J.; Matthews, S.; Hill, C.; Bull, M.J.; Morcos, S.K.

    2011-01-01

    Aim: To assess the feasibility of producing diagnostic multidetector computed tomography (MDCT) pulmonary angiography with low iodine concentration contrast media (150 mg iodine/ml) in patients with suspected acute pulmonary embolism. Materials and methods: Ninety-five randomized patients underwent MDCT (64 row) pulmonary angiography with 100 ml iopromide either at low concentration (LC) of 150 mg iodine/ml (n = 45) or high concentration (HC) of 300 mg iodine/ml (n = 50), delivered at the rate of 5 ml/s via a power injector. Two experienced radiologists, blinded to the concentration used, subjectively assessed the diagnostic quality and confidence using a four-point scale [1 = poor (not diagnostic), 2 = satisfactory, 3 = good, 4 = excellent]. Attenuation values (in HU) were measured in the main proximal branches of the pulmonary arteries. Results: The median diagnostic quality score for both observers was 3.5 (interquartile range 3-4) in the HC group and 2.5 (interquartile range 1.5-3) in the LC group (p < 0.01). The median diagnostic confidence score for both observers was 4 (interquartile range 3-4) in the HC group and 3 (interquartile range 1.5-4) in the LC group (p < 0.01). Both observers rated examinations as diagnostic in 69% of cases in the LC group, compared with 96% of cases in the HC group. Good interobserver agreement was found in both groups (K value 0.72 in the LC group and 0.73 in the HC). Obesity, poor scan timing, and dilution by venous return of non-opacified blood were the main reasons for a reduction in diagnostic quality of examinations in the LC group. Conclusion: Despite a 50% reduction of contrast medium dose in comparison to the standard technique, 150 mg iodine/ml can produce diagnostic MDCT pulmonary angiogram studies in the absence of obesity or high cardiac output and hyper-dynamic pulmonary circulation. Reducing the dose of contrast media would minimize the risk of contrast nephropathy in patients at risk of this complication

  8. Symbolic Machine Learning: A Different Answer to the Problem of the Acquisition of Lexical Knowledge from Corpora

    Directory of Open Access Journals (Sweden)

    Pascale Sébillot

    2008-07-01

    Full Text Available One relevant way to structure the domain of lexical knowledge (e.g. relations between lexical units) acquisition from corpora is to oppose numerical versus symbolic techniques. Numerical approaches to acquisition exploit the frequential aspect of data, have been widely used, and produce portable systems, but poor explanations of their results. Symbolic approaches exploit the structural aspect of data. Among them, the symbolic machine learning (ML) techniques can infer efficient and expressive patterns of a target relation from examples of elements that verify this relation. These methods are, however, far less known, and the aim of this paper is to point out their interest through the description of one precise experiment. To remove their supervised characteristic, and instead of opposing them to numerical approaches, we finally show that it is possible to combine one symbolic ML technique with one numerical one, and keep the advantages of both (meaningful patterns, efficient extraction, portability).

  9. THE GAME TECHNIQUE STIMULATING LEARNING ACTIVITY OF JUNIOR STUDENTS SPECIALIZING IN ECONOMICS

    Directory of Open Access Journals (Sweden)

    Juri. S. Ezrokh

    2014-01-01

    Full Text Available The research is aimed at specifying and developing the modern control system of current academic achievements of junior university students; and the main task is to find adequate ways for stimulating the junior students’ learning activities, and estimating their individual achievements. Methods: The author applies his own assessment method for estimating and stimulating students’ learning outcomes, based on the rating-point system of gradually obtained points building up a student’s integrated learning outcomes. Results: The research findings prove that implementation of the given method can increase the motivational, multiplicative and controlling components of the learning process. Scientific novelty: The method in question is based on a new original game approach to controlling procedures and stimulation of the learning motivation of economics students. Practical significance: The recommended technique can intensify the incentive-based training activities both in and outside a classroom, thereby developing students’ professional and personal qualities.

  10. QuakeML: XML for Seismological Data Exchange and Resource Metadata Description

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Becker, J.; Heinloo, A.; Kästli, P.; Saul, J.; Weber, B.; QuakeML Working Group

    2007-12-01

    QuakeML is an XML-based data exchange format for seismology that is under development. Current collaborators are from ETH, GFZ, USC, USGS, IRIS DMC, EMSC, ORFEUS, and ISTI. QuakeML development was motivated by the lack of a widely accepted and well-documented data format that is applicable to a broad range of fields in seismology. The development team brings together expertise from communities dealing with analysis and creation of earthquake catalogs, distribution of seismic bulletins, and real-time processing of seismic data. Efforts to merge QuakeML with existing XML dialects are under way. The first release of QuakeML will cover a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Further extensions are in progress or planned, e.g., for macroseismic information, location probability density functions, slip distributions, and ground motion information. The QuakeML language definition is supplemented by a concept to provide resource metadata and facilitate metadata exchange between distributed data providers. For that purpose, we introduce unique, location-independent identifiers of seismological resources. As an application of QuakeML, ETH Zurich currently develops a Python-based seismicity analysis toolkit as a contribution to CSEP (Collaboratory for the Study of Earthquake Predictability). We follow a collaborative and transparent development approach along the lines of the procedures of the World Wide Web Consortium (W3C). QuakeML currently is in working draft status. The standard description will be subjected to a public Request for Comments (RFC) process and eventually reach the status of a recommendation. QuakeML can be found at http://www.quakeml.org.

  11. Assembling Components using SysML with Non-Functional Requirements

    OpenAIRE

    Chouali , Samir; Hammad , Ahmed; Mountassir , Hassan

    2013-01-01

    International audience; Non-functional requirements of component-based systems are as important as their functional requirements; therefore, they must be considered in component assembly. These properties are specified beforehand with the SysML requirement diagram. We specify the component-based system architecture with the SysML block definition diagram, and component behaviors with sequence diagrams. We propose to formally specify component interfaces with interface automata, obtained from requirement and...

  12. Design of dialogic eLearning-to-learn: metalearning as pedagogical methodology

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard

    2008-01-01

    This paper presents a perspective emphasising metalearning (ML) as the most significant and pertinent feature for promoting a democratic, collaborative eLearning-to-Learn (eL2L) phenomenon in a global context. Through attempting to understand and clarify the powers of pedagogical design of global...... networked eLearning based on Learning-to-Learn (L2L), it makes a plea for L2L in a dialogic global learning context, offering a vision of global democratic citizens able to engage in critical dialogue with fellow learners.

  13. Machine Learning or Information Retrieval Techniques for Bug Triaging: Which is better?

    Directory of Open Access Journals (Sweden)

    Anjali Goyal

    2017-07-01

    Full Text Available Bugs are the inevitable part of a software system. Nowadays, large software development projects even release beta versions of their products to gather bug reports from users. The collected bug reports are then worked upon by various developers in order to resolve the defects and make the final software product more reliable. The high frequency of incoming bugs makes the bug handling a difficult and time consuming task. Bug assignment is an integral part of bug triaging that aims at the process of assigning a suitable developer for the reported bug who corrects the source code in order to resolve the bug. There are various semi and fully automated techniques to ease the task of bug assignment. This paper presents the current state of the art of various techniques used for bug report assignment. Through exhaustive research, the authors have observed that machine learning and information retrieval based bug assignment approaches are most popular in literature. A deeper investigation has shown that the trend of techniques is taking a shift from machine learning based approaches towards information retrieval based approaches. Therefore, the focus of this work is to find the reason behind the observed drift and thus a comparative analysis is conducted on the bug reports of the Mozilla, Eclipse, Gnome and Open Office projects in the Bugzilla repository. The results of the study show that the information retrieval based technique yields better efficiency in recommending the developers for bug reports.
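
    A minimal sketch of the information-retrieval style of assignment that the study favours is shown below: historical reports are indexed with TF-IDF and the developers who fixed the most similar past reports are recommended. The reports and developer names are toy examples, not data from the studied repositories.

    ```python
    # IR-style bug triaging sketch: TF-IDF similarity over historical bug reports.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    history = [
        ("Crash when opening malformed PNG attachment", "alice"),
        ("Toolbar icons blurry on high-DPI displays", "bob"),
        ("Memory leak in image decoder for PNG files", "alice"),
        ("Keyboard shortcuts not working in editor", "carol"),
    ]
    texts, developers = zip(*history)

    vectorizer = TfidfVectorizer(stop_words="english")
    index = vectorizer.fit_transform(texts)          # index of resolved reports

    def recommend(new_report, top_k=2):
        query = vectorizer.transform([new_report])
        sims = cosine_similarity(query, index).ravel()
        ranked = sims.argsort()[::-1][:top_k]        # most similar past reports first
        return [(developers[i], round(float(sims[i]), 2)) for i in ranked]

    print(recommend("Decoder crashes on corrupt PNG image"))
    ```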

  14. Challenges of Using Learning Analytics Techniques to Support Mobile Learning

    Science.gov (United States)

    Arrigo, Marco; Fulantelli, Giovanni; Taibi, Davide

    2015-01-01

    Evaluation of Mobile Learning remains an open research issue, especially as regards the activities that take place outside the classroom. In this context, Learning Analytics can provide answers, and offer the appropriate tools to enhance Mobile Learning experiences. In this poster we introduce a task-interaction framework, using learning analytics…

  15. Reinforcement learning for dpm of embedded visual sensor nodes

    International Nuclear Information System (INIS)

    Khani, U.; Sadhayo, I. H.

    2014-01-01

    This paper proposes an RL (Reinforcement Learning)-based DPM (Dynamic Power Management) technique to learn time-out policies during the operation of a visual sensor node that has multiple power/performance states. As opposed to the widely used static time-out policies, our proposed DPM policy, also referred to as OLTP (Online Learning of Time-out Policies), learns to dynamically change the time-out decisions in the different node states, including the non-operational states. The selection of time-out values in different power/performance states of a visual sensing platform is based on the workload estimates derived from an ML-ANN (Multi-Layer Artificial Neural Network) and an objective function given by weighted performance and power parameters. The DPM approach is also able to dynamically adjust the power-performance weights online to satisfy a given constraint of either power consumption or performance. Results show that the proposed learning algorithm explores the power-performance tradeoff with non-stationary workload and outperforms other DPM policies. It also performs the online adjustment of the tradeoff parameters in order to meet a user-specified constraint. (author)
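
    The underlying idea can be conveyed with a toy tabular Q-learning sketch (this is not the OLTP implementation, and the time-out values, states, and reward weights below are invented): the agent picks a sleep time-out for the current workload state and is rewarded for saving energy without incurring wake-up latency.

    ```python
    # Toy Q-learning over time-out choices, with a reward balancing energy and latency.
    import numpy as np

    rng = np.random.default_rng(0)
    timeouts = [10, 50, 200]                  # candidate time-out values (ms), assumed
    n_states, n_actions = 3, len(timeouts)    # e.g. low / medium / high predicted workload
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps, w_energy = 0.1, 0.9, 0.1, 0.5

    def simulate(state, action):
        """Toy environment: short time-outs save energy but risk wake-up latency."""
        workload = rng.poisson([0.2, 1.0, 3.0][state])
        energy_cost = timeouts[action] / 200.0
        latency_cost = workload * (1.0 if timeouts[action] < 50 else 0.2)
        reward = -(w_energy * energy_cost + (1 - w_energy) * latency_cost)
        return reward, rng.integers(n_states)

    state = 0
    for _ in range(20000):
        action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
        reward, nxt = simulate(state, action)
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

    print("learned time-out per workload state:", [timeouts[a] for a in Q.argmax(axis=1)])
    ```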

  16. VarioML framework for comprehensive variation data representation and exchange

    Directory of Open Access Journals (Sweden)

    Byrne Myles

    2012-10-01

    Full Text Available Abstract Background Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. Results The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. Conclusions VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity.

  17. WeedML: a Tool for Collaborative Weed Demographic Modeling

    OpenAIRE

    Holst, Niels

    2010-01-01

    WeedML is a proposed standard to formulate models of weed demography, or maybe even complex models in general, that are both transparent and straightforward to re-use as building blocks for new models. The paper describes the design and thoughts behind WeedML which relies on XML and object-oriented systems development. Proof-of-concept software is provided as open-source C++ code and executables that can be downloaded freely.

  18. Group Guidance Services with Self-Regulation Technique to Improve Student Learning Motivation in Junior High School (JHS)

    Science.gov (United States)

    Pranoto, Hadi; Atieka, Nurul; Wihardjo, Sihadi Darmo; Wibowo, Agus; Nurlaila, Siti; Sudarmaji

    2016-01-01

    This study aims at: determining students' motivation before being given group guidance with the self-regulation technique, determining students' motivation after being given group counseling with the self-regulation technique, generating a model of group counseling with the self-regulation technique to improve learning motivation, determining the…

  19. Separation of pulsar signals from noise using supervised machine learning algorithms

    Science.gov (United States)

    Bethapudi, S.; Desai, S.

    2018-04-01

    We evaluate the performance of four different machine learning (ML) algorithms: an Artificial Neural Network Multi-Layer Perceptron (ANN MLP), Adaboost, Gradient Boosting Classifier (GBC), and XGBoost, for the separation of pulsars from radio frequency interference (RFI) and other sources of noise, using a dataset obtained from the post-processing of a pulsar search pipeline. This dataset was previously used for the cross-validation of the SPINN-based machine learning engine, obtained from the reprocessing of the HTRU-S survey data (Morello et al., 2014). We have used the Synthetic Minority Over-sampling Technique (SMOTE) to deal with the high class imbalance in the dataset. We report a variety of quality scores from all four of these algorithms on both the non-SMOTE and SMOTE datasets. For all the above ML methods, we report high accuracy and G-mean for both the non-SMOTE and SMOTE cases. We study the feature importances using Adaboost, GBC, and XGBoost, and also with the minimum Redundancy Maximum Relevance approach, to report an algorithm-agnostic feature ranking. From these methods, we find the signal-to-noise ratio of the folded profile to be the best feature. We find that all the ML algorithms report FPRs about an order of magnitude lower than the corresponding FPRs obtained in Morello et al. (2014), for the same recall value.
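
    The evaluation loop described above is easy to reproduce on synthetic imbalanced data; in the sketch below, SMOTE (from the external imbalanced-learn package) oversamples only the training split, and a scikit-learn gradient boosting classifier stands in for the boosted-tree learners used in the paper.

    ```python
    # SMOTE on the training folds, then recall, FPR and G-mean on the held-out test set.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix
    from imblearn.over_sampling import SMOTE   # external package: imbalanced-learn

    X, y = make_classification(n_samples=4000, n_features=8, weights=[0.98, 0.02], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance only the training set
    clf = GradientBoostingClassifier().fit(X_res, y_res)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    recall, specificity = tp / (tp + fn), tn / (tn + fp)
    print("recall=%.3f  FPR=%.3f  G-mean=%.3f"
          % (recall, fp / (fp + tn), np.sqrt(recall * specificity)))
    ```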

  20. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Directory of Open Access Journals (Sweden)

    Sally Krasne

    2013-01-01

    Full Text Available Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g. diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for 1st-year students, but not significantly so for 2nd-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both 1st- and 2nd-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.

  1. An Ontology for State Analysis: Formalizing the Mapping to SysML

    Science.gov (United States)

    Wagner, David A.; Bennett, Matthew B.; Karban, Robert; Rouquette, Nicolas; Jenkins, Steven; Ingham, Michel

    2012-01-01

    State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To effectively apply the State Analysis methodology in this context it became necessary to provide ontological definitions of the concepts and relations in State Analysis and greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper will discuss the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML and approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.

  2. Approximate multi-state reliability expressions using a new machine learning technique

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Muselli, Marco

    2005-01-01

    The machine-learning-based methodology, previously proposed by the authors for approximating binary reliability expressions, is now extended to develop a new algorithm, based on the procedure of Hamming Clustering, which is capable of dealing with multi-state systems and any success criterion. The proposed technique is presented in detail and verified on literature cases: experimental results show that the new algorithm yields excellent predictions.

  3. An overview of the CellML API and its implementation

    Directory of Open Access Journals (Sweden)

    Halstead Matt

    2010-04-01

    Full Text Available Abstract Background CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API, and a good implementation of that API, upon which tools can base their support for CellML. Results We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Conclusions Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions.

  4. Theoretical analysis of hydrogen chemisorption on Pd(111), Re(0001) and PdML/Re(0001), ReML/Pd(111) pseudomorphic overlayers

    DEFF Research Database (Denmark)

    Pallassana, Venkataraman; Neurock, Matthew; Hansen, Lars Bruno

    1999-01-01

    not appear to provide an independent parameter for assessing surface reactivity. The weak chemisorption of hydrogen on the Pd-ML/Re(0001) surface relates to substantial lowering of the d-band center of Pd, when it is pseudomorphically deposited as a monolayer on a Re substrate. [S0163-1829(99)00331-2].......Gradient-corrected density-functional theory (DFT-GGA) periodic slab calculations have been used to analyze the binding of atomic hydrogen on monometallic Pd(111), Re(0001), and bimetallic Pd-mL/Re(0001) [pseudomorphic monolayer of Pd(111) on Re(0001)] and Re-ML/Pd(111) surfaces. The computed...

  5. Polymorphic New World monkeys with more than three M/L cone types

    Science.gov (United States)

    Jacobs, Gerald H.; Deegan, Jess F.

    2005-10-01

    Most New World (platyrrhine) monkeys have M/L cone photopigment polymorphisms that map directly into individual variations in visual sensitivity and color vision. We used electroretinogram flicker photometry to examine M/L cone photopigments in the New World monkey Callicebus moloch (the dusky Titi). Like other New World monkeys, this species has an M/L cone photopigment polymorphism that reflects the presence of X-chromosome opsin gene alleles. However, unlike other platyrrhines in which three M/L photopigments are typical, Callicebus has a total of five M/L cone photopigments. The peak sensitivity values for these pigments extend across the range from 530 to 562 nm. The result is an enhanced array of potential color vision phenotypes in this species.

  6. CRDM motion analysis using machine learning technique

    International Nuclear Information System (INIS)

    Nishimura, Takuya; Nakayama, Hiroyuki; Saitoh, Mayumi; Yaguchi, Seiji

    2017-01-01

    The magnetic jack type Control Rod Drive Mechanism (CRDM) for pressurized water reactor (PWR) plants operates control rods in response to electrical signals from the reactor control system. CRDM operability is evaluated by quantifying the armature's closed/opened response times, that is, the intervals between the coil energizing/de-energizing points and the armature closed/opened points. MHI has already developed an automatic CRDM motion analysis and applied it to actual plants. However, CRDM operational data show wide variation depending on characteristics such as plant condition, plant, etc. In the existing motion analysis, applying a single analysis technique to all plant conditions, plants, etc. limits the analysis accuracy. In this study, MHI investigated motion analysis using machine learning (Random Forests), which flexibly accommodates CRDM operational data with wide variation and improves analysis accuracy. (author)

  7. SPAM CLASSIFICATION BASED ON SUPERVISED LEARNING USING MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    T. Hamsapriya

    2011-12-01

    Full Text Available E-mail is one of the most popular and frequently used ways of communication due to its worldwide accessibility, relatively fast message transfer, and low sending cost. The flaws in the e-mail protocols and the increasing amount of electronic business and financial transactions directly contribute to the increase in e-mail-based threats. Email spam is one of the major problems of today’s Internet, bringing financial damage to companies and annoying individual users. Spam emails are invading users without their consent and filling their mail boxes. They consume more network capacity as well as time in checking and deleting spam mails. The vast majority of Internet users are outspoken in their disdain for spam, although enough of them respond to commercial offers that spam remains a viable source of income to spammers. While most users want to do the right thing to avoid and get rid of spam, they need clear and simple guidelines on how to behave. In spite of all the measures taken to eliminate spam, it is not yet eradicated. Also, when the countermeasures are oversensitive, even legitimate emails will be eliminated. Among the approaches developed to stop spam, filtering is one of the most important techniques. Much research in spam filtering has centered on the more sophisticated classifier-related issues. In recent years, machine learning for spam classification has become an important research issue. The proposed work explores and identifies the use of different learning algorithms for classifying spam messages from e-mail. A comparative analysis among the algorithms has also been presented.
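
    As a minimal illustration of the filtering approach surveyed above, the sketch below trains a bag-of-words Naive Bayes spam classifier; the four messages are toy examples standing in for a labelled e-mail corpus.

    ```python
    # Tiny bag-of-words Naive Bayes spam filter on toy data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = ["Win a free prize now, click here",
              "Meeting agenda for Monday attached",
              "Cheap meds, limited offer, buy now",
              "Can you review my draft before Friday?"]
    labels = ["spam", "ham", "spam", "ham"]

    spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
    spam_filter.fit(emails, labels)
    print(spam_filter.predict(["Click here for a free offer", "Lunch on Friday?"]))
    ```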

  8. Fostering students’ thinking skill and social attitude through STAD cooperative learning technique on tenth grade students of chemistry class

    Science.gov (United States)

    Kriswintari, D.; Yuanita, L.; Widodo, W.

    2018-04-01

    The aim of this study was to develop a chemistry learning package using the Student Teams Achievement Division (STAD) cooperative learning technique to foster students’ thinking skills and social attitudes. The chemistry learning package, consisting of a lesson plan, handout, students’ worksheet, thinking skill test, and social attitude observation sheet, was developed using the Dick and Carey model. The research subject of this study was the chemistry learning package using STAD, which was tried out on tenth grade students of SMA Trimurti Surabaya. The tryout was conducted using the one-group pre-test post-test design. Data were collected through observation, tests, and a questionnaire, and analyzed using descriptive qualitative analysis. The findings of this study revealed that the developed chemistry learning package using the STAD cooperative learning technique was categorized as valid, practical, and effective to be implemented in the classroom to foster students’ thinking skill and social attitude.

  9. ISAC's Gating-ML 2.0 data exchange standard for gating description.

    Science.gov (United States)

    Spidlen, Josef; Moore, Wayne; Brinkman, Ryan R

    2015-07-01

    The lack of software interoperability with respect to gating has traditionally been a bottleneck preventing the use of multiple analytical tools and reproducibility of flow cytometry data analysis by independent parties. To address this issue, ISAC developed Gating-ML, a computer file format to encode and interchange gates. Gating-ML 1.5 was adopted and published as an ISAC Candidate Recommendation in 2008. Feedback during the probationary period from implementors, including major commercial software companies, instrument vendors, and the wider community, has led to a streamlined Gating-ML 2.0. Gating-ML has been significantly simplified and therefore easier to support by software tools. To aid developers, free, open source reference implementations, compliance tests, and detailed examples are provided to stimulate further commercial adoption. ISAC has approved Gating-ML as a standard ready for deployment in the public domain and encourages its support within the community as it is at a mature stage of development having undergone extensive review and testing, under both theoretical and practical conditions. © 2015 International Society for Advancement of Cytometry.

  10. Gating-ML: XML-based gating descriptions in flow cytometry.

    Science.gov (United States)

    Spidlen, Josef; Leif, Robert C; Moore, Wayne; Roederer, Mario; Brinkman, Ryan R

    2008-12-01

    The lack of software interoperability with respect to gating, due to the lack of a standardized mechanism for data exchange, has traditionally been a bottleneck, preventing reproducibility of flow cytometry (FCM) data analysis and the usage of multiple analytical tools. To facilitate interoperability among FCM data analysis tools, members of the International Society for the Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) have developed an XML-based mechanism to formally describe gates (Gating-ML). Gating-ML, an open specification for encoding gating, data transformations and compensation, has been adopted by the ISAC DSTF as a Candidate Recommendation. Gating-ML can facilitate exchange of gating descriptions the same way that FCS facilitated exchange of raw FCM data. Its adoption will open new collaborative opportunities as well as possibilities for advanced analyses and methods development. The ISAC DSTF is satisfied that the standard addresses the requirements for a gating exchange standard.

  11. The performance review of EEWS(Earthquake Early Warning System) about Gyeongju earthquakes with Ml 5.1 and Ml 5.8 in Korea

    Science.gov (United States)

    Park, Jung-Ho; Chi, Heon-Cheol; Lim, In-Seub; Seong, Yun-Jeong; Park, Jihwan

    2017-04-01

    EEW (Earthquake Early Warning) service to the public has been officially operated by KMA (Korea Meteorological Administration) since 2015 in Korea. For KMA's official EEW service, KIGAM has adopted ElarmS from UC Berkeley BSL and modified the local magnitude relation, 1-D travel time curves and association procedures, using real-time waveforms from about 201 seismic stations of KMA, KIGAM, KINS and KEPRI. There were two moderate-size earthquakes with magnitudes Ml 5.1 and Ml 5.8 close to Gyeongju city, located in the southeastern part of Korea, on Sep. 12, 2016. We have checked the performance of the EEWS (Earthquake Early Warning System) named TrigDB by KIGAM by reviewing these two Gyeongju earthquakes. The nearest station to the epicenters of the two earthquakes, Ml 5.1 (35.7697 N, 129.1904 E) and Ml 5.8 (35.7632 N, 129.1898 E), was MKL, which detected P phases about 2.1 and 3.6 seconds after the origin times, respectively. The first event solutions were issued 6.3 and 7.0 seconds after the respective origin times. Because results in the early steps are unstable, owing to very few stations and unexpected automated analysis, KMA has a policy of waiting a further 20 seconds to confirm reliability. For these events, KMA published EEW alarms about 26 seconds after the origin times, with M 5.3 and M 5.9, respectively.

  12. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    Science.gov (United States)

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by the multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
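
    A simplified linear analogue of the problem can be sketched as follows: a Mahalanobis metric M = L^T L is learned from must-link and cannot-link pairs by plain gradient descent on L. The paper's method additionally uses multiple kernels and an intrinsic descent on the positive definite matrix manifold, which this toy example does not attempt; all data and pairs below are invented.

    ```python
    # Toy Mahalanobis metric learning from must-link / cannot-link constraints.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    similar = [(i, i + 1) for i in range(0, 40, 2)]        # toy must-link pairs
    dissimilar = [(i, i + 50) for i in range(0, 40, 2)]    # toy cannot-link pairs

    L = np.eye(5)
    lr, margin = 0.01, 2.0
    for _ in range(200):
        grad = np.zeros_like(L)
        for (i, j) in similar:                 # pull similar pairs together
            d = (X[i] - X[j])[:, None]
            grad += 2 * L @ (d @ d.T)
        for (i, j) in dissimilar:              # push dissimilar pairs beyond a margin
            d = (X[i] - X[j])[:, None]
            if np.sum((L @ d) ** 2) < margin:
                grad -= 2 * L @ (d @ d.T)
        L -= lr * grad / (len(similar) + len(dissimilar))

    M = L.T @ L                                # learned positive semi-definite metric
    dist = lambda a, b: float((a - b) @ M @ (a - b))
    print("similar-pair distance:", dist(X[0], X[1]),
          "dissimilar-pair distance:", dist(X[0], X[50]))
    ```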

  13. HEP meets ML award talk : XGBoost

    CERN Multimedia

    CERN. Geneva; CHEN, Tianqi

    2015-01-01

    Tianqi Chen and Tong He (team crowwork) provided XGBoost (eXtreme Gradient Boosting) to all participants very early in the challenge. It is parallelised software for training boosted decision trees, which was used effectively by many participants in the challenge. For this, they have won the "HEP meets ML" award, which is the invitation to CERN taking place today.
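
    A minimal usage sketch of the XGBoost Python package is shown below; it assumes the scikit-learn-style XGBClassifier interface and replaces the challenge data with a synthetic classification set.

    ```python
    # Training a boosted-decision-tree classifier with XGBoost on synthetic data.
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=10000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Boosted decision trees trained in parallel over all available cores.
    model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, n_jobs=-1)
    model.fit(X_tr, y_tr)
    print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```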

  14. Revitalizing pathology laboratories in a gastrointestinal pathophysiology course using multimedia and team-based learning techniques.

    Science.gov (United States)

    Carbo, Alexander R; Blanco, Paola G; Graeme-Cooke, Fiona; Misdraji, Joseph; Kappler, Steven; Shaffer, Kitt; Goldsmith, Jeffrey D; Berzin, Tyler; Leffler, Daniel; Najarian, Robert; Sepe, Paul; Kaplan, Jennifer; Pitman, Martha; Goldman, Harvey; Pelletier, Stephen; Hayward, Jane N; Shields, Helen M

    2012-05-15

    In 2008, we changed the gastrointestinal pathology laboratories in a gastrointestinal pathophysiology course to a more interactive format using modified team-based learning techniques and multimedia presentations. The results were remarkably positive and can be used as a model for pathology laboratory improvement in any organ system. Over a two-year period, engaging and interactive pathology laboratories were designed. The initial restructuring of the laboratories included new case material, Digital Atlas of Video Education Project videos, animations and overlays. Subsequent changes included USMLE board-style quizzes at the beginning of each laboratory, with individual readiness assessment testing and group readiness assessment testing, incorporation of a clinician as a co-teacher and role playing for the student groups. Student responses for pathology laboratory contribution to learning improved significantly compared to baseline. Increased voluntary attendance at pathology laboratories was observed. Spontaneous student comments noted the positive impact of the laboratories on their learning. Pathology laboratory innovations, including modified team-based learning techniques with individual and group self-assessment quizzes, multimedia presentations, and paired teaching by a pathologist and clinical gastroenterologist led to improvement in student perceptions of pathology laboratory contributions to their learning and better pathology faculty evaluations. These changes can be universally applied to other pathology laboratories to improve student satisfaction. Copyright © 2012 Elsevier GmbH. All rights reserved.

  15. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications in which this problem arises. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML solutions is the use of heuristic hand-crafted or engineered features with shallow feature-learning architectures. The problem lies in the risk of not finding the most appropriate features to give high classification accuracy for this ECG problem. One proposed solution is to use deep learning architectures in which the first layers of convolutional neurons act as feature extractors and some fully-connected (FCN) layers at the end are used to make the final decision about the ECG classes. In this work a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
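
    The architecture family described above can be sketched as follows; the segment length, number of rhythm classes, and layer sizes are assumptions made for illustration rather than the configuration used in the paper.

    ```python
    # 1D CNN feature extractors followed by fully-connected decision layers.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    n_samples_per_beat, n_classes = 360, 4        # assumed ECG segment length and class count

    model = models.Sequential([
        layers.Input(shape=(n_samples_per_beat, 1)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),   # learned feature extraction
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),                    # fully-connected decision layers
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.summary()
    ```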

  16. QualityML: a dictionary for quality metadata encoding

    Science.gov (United States)

    Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan

    2014-05-01

    The scenario of rapidly growing geodata catalogues requires tools that help users choose products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode the quality of datasets in the corresponding metadata schemas. For data producers, the benefits of having encoded quality are improved product discovery and better communication of product characteristics. Data users, particularly decision-makers, would find quality and uncertainty measures with which to make the best decisions as well as perform dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, QualityML is a profile of the ISO geospatial metadata standards providing a set of rules, structured in 6 levels, for precisely documenting quality indicator parameters. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org) encoding. However, it also extends UncertML to provide a list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative

  17. IMPROVING THE STUDENTS' READING COMPREHENSION THROUGH KNOW-WANT-LEARN (KWL) TECHNIQUE TO TEACH ANALYTICAL EXPOSITION (Class Action Research)

    Directory of Open Access Journals (Sweden)

    Meike Imelda Wachyu

    2017-12-01

    Full Text Available This study is aimed at finding out the impact of the use of the Know-Want-Learn technique in improving reading comprehension in teaching analytical exposition among eleventh grade students of SMA N 2 Indramayu in the academic year of 2017/2018. The study was action research in two research cycles. In the study, the researcher collaborated with the English teachers and the students. The data of this study were qualitative in nature supported by quantitative data. Qualitative data were obtained from the results of classroom observation and collaborators' discussion. Quantitative data were obtained from pre-test and post-test results. The instruments for collecting the data were observation guides, interview guides, and the pre-test and post-test. The data were in the form of field notes, interview transcripts, and the scores of the students' pre-test and post-test. The results of the two cycles show that the use of the Know-Want-Learn technique is effective in improving the students' reading comprehension. This is supported by the qualitative data, which show that (1) the Know-Want-Learn technique can help the teacher scaffold the students' comprehension of the text by focusing on the steps before, during, and after reading; (2) the Know-Want-Learn technique can help the students preview the text, assess what they have learned after reading, and motivate their interest in reading; (3) the kinds of activities given, such as pre-teaching vocabulary, using skimming and scanning, using fix-up strategies, and guessing meaning, can help the students read the text efficiently.

  18. Evaluation of ML-MC as a Depth Discriminant in Yellowstone, USA and Italy

    Science.gov (United States)

    Li, Z.; Koper, K. D.; Burlacu, R.; Sun, D.; D'Amico, S.

    2017-12-01

    Recent work has shown that the difference between two magnitude scales, ML (local Richter magnitude) and MC (coda/duration magnitude), acts as a depth discriminant in Utah. Shallow seismic sources, such as mining induced earthquakes and explosions, have strongly negative ML-MC values, while deeper tectonic earthquakes have ML-MC values near zero. These observations imply that ML-MC might be effective at discriminating small explosions from deeper natural earthquakes at local distances. In this work, we examine seismicity catalogs for the Yellowstone region and Italy to determine if ML-MC acts as a depth discriminant in these regions as well. We identified 4,780 earthquakes that occurred in the Yellowstone region between Sept. 24, 1994 and March 31, 2017 for which both ML and MC were calculated. The ML-MC distribution is well described by a Gaussian function with a mean of 0.102 and a standard deviation of 0.326. We selected a subset of these events with accurate depths and determined mean ML-MC values in various depth bins. An event depth was considered accurate if the formal depth error was less than 2 km and either (1) the nearest station was within one focal depth or (2) the distance to the nearest station was smaller than the bin size. We find that ML-MC decreases as event depths become shallower than about 10 km. Similar to the results for Utah, the decrease is statistically significant and is robust with respect to small changes in bin size and the criteria used to define accurate depths. We used a similar process to evaluate whether ML-MC was a function of source depth for 63,555 earthquakes that occurred between April 16, 2005 and April 30, 2012 in Italy. The ML-MC values in Italy are also well described by a normal distribution, with a mean of -0.477 and standard deviation of 0.315. We again find a statistically significant decrease in ML-MC for shallow earthquakes. In contrast to the Yellowstone results, for Italy ML-MC decreases at a nearly constant rate
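
    A hedged sketch of this kind of analysis (synthetic values; column names, bin width, and the depth dependence are assumptions, not the catalog statistics above): fit a normal distribution to the ML-MC differences and compare their means across focal-depth bins.

        import numpy as np
        from scipy import stats

        # Synthetic catalog: focal depth (km) and the ML - MC magnitude difference.
        rng = np.random.default_rng(1)
        depth = rng.uniform(0, 20, size=5000)
        ml_minus_mc = rng.normal(loc=0.1, scale=0.33, size=5000)
        ml_minus_mc[depth < 10] -= 0.02 * (10 - depth[depth < 10])   # shallower -> more negative

        # Gaussian description of the whole distribution.
        mu, sigma = stats.norm.fit(ml_minus_mc)
        print(f"mean = {mu:.3f}, std = {sigma:.3f}")

        # Mean ML - MC per 2 km depth bin.
        bins = np.arange(0, 22, 2)
        idx = np.digitize(depth, bins)
        for k in range(1, len(bins)):
            sel = idx == k
            if sel.any():
                print(f"{bins[k-1]:>2}-{bins[k]:>2} km: mean ML-MC = {ml_minus_mc[sel].mean():+.3f}")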

  19. Introduction to Machine Learning: Class Notes 67577

    OpenAIRE

    Shashua, Amnon

    2009-01-01

    Introduction to Machine learning covering Statistical Inference (Bayes, EM, ML/MaxEnt duality), algebraic and spectral methods (PCA, LDA, CCA, Clustering), and PAC learning (the Formal model, VC dimension, Double Sampling theorem).

  20. [Motor capacities involved in the psychomotor skills of the cardiopulmonary resuscitation technique: recommendations for the teaching-learning process].

    Science.gov (United States)

    Miyadahira, A M

    2001-12-01

    It is a bibliographic study on the identification of the motor capacities involved in the psychomotor skills of cardiopulmonary resuscitation (CPR), which aims to obtain input for the planning of the teaching-learning process of this skill. It was found that the motor capacities involved in the psychomotor skill of the CPR technique are predominantly cognitive and motor, involving 9 perceptive-motor capacities and 8 physical proficiency capacities. The CPR technique is a psychomotor skill classified as open, performed in series, and categorized as a fine and global skill, and the teaching-learning process of the CPR technique has a high degree of complexity.

  1. Markerless gating for lung cancer radiotherapy based on machine learning techniques

    International Nuclear Information System (INIS)

    Lin Tong; Li Ruijiang; Tang Xiaoli; Jiang, Steve B; Dy, Jennifer G

    2009-01-01

    In lung cancer radiotherapy, radiation to a mobile target can be delivered by respiratory gating, for which we need to know whether the target is inside or outside a predefined gating window at any time point during the treatment. This can be achieved by tracking one or more fiducial markers implanted inside or near the target, either fluoroscopically or electromagnetically. However, the clinical implementation of marker tracking is limited for lung cancer radiotherapy mainly due to the risk of pneumothorax. Therefore, gating without implanted fiducial markers is a promising clinical direction. We have developed several template-matching methods for fluoroscopic marker-less gating. Recently, we have modeled the gating problem as a binary pattern classification problem, in which principal component analysis (PCA) and support vector machine (SVM) are combined to perform the classification task. Following the same framework, we investigated different combinations of dimensionality reduction techniques (PCA and four nonlinear manifold learning methods) and two machine learning classification methods (artificial neural networks-ANN and SVM). Performance was evaluated on ten fluoroscopic image sequences of nine lung cancer patients. We found that among all combinations of dimensionality reduction techniques and classification methods, PCA combined with either ANN or SVM achieved a better performance than the other nonlinear manifold learning methods. ANN when combined with PCA achieves a better performance than SVM in terms of classification accuracy and recall rate, although the target coverage is similar for the two classification methods. Furthermore, the running time for both ANN and SVM with PCA is within tolerance for real-time applications. Overall, ANN combined with PCA is a better candidate than other combinations we investigated in this work for real-time gated radiotherapy.
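
    A hedged sketch of the PCA-plus-classifier framing described above (not the authors' code; image size, component count, and labels are placeholders), treating each fluoroscopic frame as a flattened feature vector with a binary inside/outside-gating-window label:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for flattened fluoroscopic frames (64x64 pixels each)
        # and binary labels: 1 = target inside the gating window, 0 = outside.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 64 * 64))
        y = rng.integers(0, 2, size=600)

        # Dimensionality reduction with PCA followed by an SVM classifier.
        clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated accuracy:", scores.mean())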

  2. Rare A2ML1 variants confer susceptibility to otitis media

    Science.gov (United States)

    Santos-Cortez, Regie Lyn P.; Chiong, Charlotte M.; Reyes-Quintos, Ma. Rina T.; Tantoco, Ma. Leah C.; Wang, Xin; Acharya, Anushree; Abbe, Izoduwa; Giese, Arnaud P.; Smith, Joshua D.; Allen, E. Kaitlynn; Li, Biao; Cutiongco-de la Paz, Eva Maria; Garcia, Marieflor Cristy; Llanes, Erasmo Gonzalo D.V.; Labra, Patrick John; Gloria-Cruz, Teresa Luisa I.; Chan, Abner L.; Wang, Gao T.; Daly, Kathleen A.; Shendure, Jay; Bamshad, Michael J.; Nickerson, Deborah A.; Patel, Janak A.; Riazuddin, Saima; Sale, Michele M.; Chonmaitree, Tasnee; Ahmed, Zubair M.; Abes, Generoso T.; Leal, Suzanne M.

    2015-01-01

    A duplication variant within middle-ear-specific gene A2ML1 co-segregates with otitis media in an indigenous Filipino pedigree (LOD score=7.5 at reduced penetrance) and lies within a founder haplotype that is also shared by three otitis-prone European- and Hispanic-American children, but is absent in non-otitis-prone children and >62,000 next-generation sequences. Seven additional A2ML1 variants were identified in six otitis-prone children. Collectively our studies support a role for A2ML1 in the pathophysiology of otitis media. PMID:26121085

  3. qcML: an exchange format for quality control metrics from mass spectrometry experiments.

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-08-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  4. qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments*

    Science.gov (United States)

    Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart

    2014-01-01

    Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958

  5. Classification of Phishing Email Using Random Forest Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Andronicus A. Akinyelu

    2014-01-01

    Full Text Available Phishing is one of the major challenges faced by the world of e-commerce today. Thanks to phishing attacks, billions of dollars have been lost by many companies and individuals. In 2012, an online report put the loss due to phishing attacks at about $1.5 billion. The global impact of phishing attacks will continue to increase and thus requires more efficient phishing detection techniques to curb the menace. This paper investigates and reports the use of the random forest machine learning algorithm in the classification of phishing attacks, with the major objective of developing an improved phishing email classifier with better prediction accuracy and fewer features. From a dataset consisting of 2000 phishing and ham emails, a set of prominent phishing email features (identified from the literature) was extracted and used by the machine learning algorithm, with a resulting classification accuracy of 99.7% and low false negative (FN) and false positive (FP) rates.
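
    A hedged sketch of this kind of classifier (the feature matrix and labels are placeholders, not the paper's feature set): train a random forest on binary phishing-indicator features and report accuracy and FP/FN rates.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix, accuracy_score

        # Placeholder feature matrix: one row per email, one column per binary
        # phishing indicator (e.g. "contains IP-based URL", "HTML form present", ...).
        rng = np.random.default_rng(42)
        X = rng.integers(0, 2, size=(2000, 15))
        y = rng.integers(0, 2, size=2000)          # 1 = phishing, 0 = ham

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

        clf = RandomForestClassifier(n_estimators=200, random_state=42)
        clf.fit(X_tr, y_tr)

        pred = clf.predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
        print("accuracy:", accuracy_score(y_te, pred))
        print("false positive rate:", fp / (fp + tn), "false negative rate:", fn / (fn + tp))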

  6. Comparison of dural puncture epidural technique versus conventional epidural technique for labor analgesia in primigravida

    Directory of Open Access Journals (Sweden)

    Pritam Yadav

    2018-01-01

    Full Text Available Background: Dural puncture epidural (DPE) is a method in which a dural hole is created prior to epidural injection. This study was planned to evaluate whether dural puncture improves the onset and duration of labor analgesia when compared to the conventional epidural technique. Methods and Materials: Sixty term primigravida parturients of ASA grade I and II were randomly assigned to two groups of 30 each (Group E for conventional epidural and Group DE for dural puncture epidural). In group E, the epidural space was identified and an 18-gauge multi-orifice catheter was threaded 5 cm into the epidural space. In group DE, the dura was punctured using the combined spinal epidural (CSE) spinal needle and the epidural catheter was threaded as in group E, followed by injection of 10 ml of Ropivacaine (0.2%) with 20 mcg of Fentanyl (2 mcg/ml) in fractions of 2.5 ml. Later, Ropivacaine 10 ml was given as a top-up on patient request. Onset, visual analogue scale (VAS) scores, sensory and motor block, haemodynamic variables, and duration of analgesia of the initial dose were noted, along with the mode of delivery and the neonatal outcome. Results: Six parturients in group DE achieved adequate analgesia in 5 minutes while none of those in group E did (P 0.05). Conclusions: Both techniques of labor analgesia are efficacious; dural puncture epidural has the potential to hasten onset and improve the quality of labor analgesia when compared with the conventional epidural technique.

  7. Memorization techniques: Using mnemonics to learn fifth grade science terms

    Science.gov (United States)

    Garcia, Juan O.

    The purpose of this study was to determine whether mnemonic instruction could assist students in learning fifth-grade science terminology more effectively than the traditional-study methods of recall currently in practice. The task was to examine if fifth-grade students were able to learn a mnemonic and then use it to understand science vocabulary; subsequently, to determine if students were able to remember the science terms after a period of time. The problem is that, in general, elementary school students are not being successful in science achievement at the fifth grade level. In view of this problem, if science performance is increased at the elementary level, then it is likely that students will be successful when tested at the 8th and 10th grade in science with the Texas Assessment of Knowledge and Skills (TAKS) in the future. Two research questions were posited: (1) Is there a difference in recall achievement when a mnemonic such as the method of loci, pegword method, or keyword method is used in learning fifth-grade science vocabulary as compared to the traditional-study method? (2) If using a mnemonic in learning fifth-grade science vocabulary was effective for recall achievement, would this achievement be maintained over a span of time? The need for this study was to assist students in learning science terms and concepts for state accountability purposes. The first assumption was that memorization techniques are not commonly applied in fifth-grade science classes in elementary schools. A second assumption was that mnemonic devices could be used successfully in learning science terms and increase long-term retention. The first limitation was that the study was conducted on one campus in one school district in South Texas, which limited the generalization of the study. The second limitation was that it included randomly assigned intact groups as opposed to random student assignment to fifth-grade classroom groups.

  8. Comparison of two different techniques of cooperative learning approach: Undergraduates' conceptual understanding in the context of hormone biochemistry.

    Science.gov (United States)

    Mutlu, Ayfer

    2018-03-01

    The purpose of the research was to compare the effects of two different techniques of the cooperative learning approach, namely Team-Game Tournament and Jigsaw, on undergraduates' conceptual understanding in a Hormone Biochemistry course. Undergraduates were randomly assigned to Group 1 (N = 23) and Group 2 (N = 29). Instructions were accomplished using Team-Game Tournament in Group 1 and Jigsaw in Group 2. Before the instructions, all groups were informed about cooperative learning and techniques, their responsibilities in the learning process and accessing of resources. Instructions were conducted under the guidance of the researcher for nine weeks and the Hormone Concept Test developed by the researcher was used before and after the instructions for data collection. According to the results, while both techniques improved students' understanding, Jigsaw was more effective than Team-Game Tournament. © 2017 by The International Union of Biochemistry and Molecular Biology, 46(2):114-120, 2018. © 2017 The International Union of Biochemistry and Molecular Biology.

  9. Application of Machine Learning Techniques in Aquaculture

    OpenAIRE

    Rahman, Akhlaqur; Tasnim, Sumaira

    2014-01-01

    In this paper we present applications of different machine learning algorithms in aquaculture. Machine learning algorithms learn models from historical data. In aquaculture historical data are obtained from farm practices, yields, and environmental data sources. Associations between these different variables can be obtained by applying machine learning algorithms to historical data. In this paper we present applications of different machine learning algorithms in aquaculture applications.

  10. Machine Learning and Inverse Problem in Geodynamics

    Science.gov (United States)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.

    2017-12-01

    During the past few decades numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solutions. However, in recent years the rapid emergence of machine learning (ML), a subfield of the artificial intelligence (AI), in many fields of sciences, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high pressure mineral physics, geochemistry, and petrology where the number of the mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters that are incorporated in the numerical models as input parameters are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties with the emphasis on SML techniques in solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by the numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes which can be considered as measures for the degree of stagnation are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms we show that SML techniques
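
    A hedged sketch of the supervised setup described here (synthetic numbers; in the study the features come from convection simulations): train an SVM on simulation-derived features labelled by the known density-anomaly magnitude, then invert by predicting the label for an observed feature vector.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Synthetic training set: each row holds features from one convection run
        # (e.g. volume fractions of stagnated slabs and plumes), labelled by the
        # magnitude class of the spin-transition-induced density anomaly.
        rng = np.random.default_rng(7)
        X_train = rng.uniform(0.0, 1.0, size=(300, 2))
        y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)   # 0 = weak, 1 = strong anomaly

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(X_train, y_train)

        # "Inverse problem" step: infer the anomaly class from an observed feature vector.
        observed = np.array([[0.7, 0.5]])
        print("predicted anomaly class:", model.predict(observed)[0])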

  11. Learning Machine Learning: A Case Study

    Science.gov (United States)

    Lavesson, N.

    2010-01-01

    This correspondence reports on a case study conducted in the Master's-level Machine Learning (ML) course at Blekinge Institute of Technology, Sweden. The students participated in a self-assessment test and a diagnostic test of prerequisite subjects, and their results on these tests are correlated with their achievement of the course's learning…

  12. QuakeML: status of the XML-based seismological data exchange format

    Directory of Open Access Journals (Sweden)

    Joachim Saul

    2011-04-01

    Full Text Available QuakeML is an XML-based data exchange standard for seismology that is in its fourth year of active community-driven development. Its development was motivated by the need to consolidate existing data formats for applications in statistical seismology, as well as setting a cutting-edge, community-agreed standard to foster interoperability of distributed infrastructures. The current release (version 1.2 is based on a public Request for Comments process and accounts for suggestions and comments provided by a broad international user community. QuakeML is designed as an umbrella schema under which several sub-packages are collected. The present scope of QuakeML 1.2 covers a basic description of seismic events including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Work on additional packages (macroseismic information, ground motion, seismic inventory, and resource metadata has been started, but is at an early stage. Several applications based on the QuakeML data model have been created so far. Among these are earthquake catalog web services at the European Mediterranean Seismological Centre (EMSC, GNS Science, and the Southern California Earthquake Data Center (SCEDC, and QuakePy, an open-source Python-based seismicity analysis toolkit. Furthermore, QuakeML is being used in the SeisComP3 system from GFZ Potsdam, and in the Collaboratory for the Study of Earthquake Predictability (CSEP testing center installations, developed by Southern California Earthquake Center (SCEC. QuakeML is still under active and dynamic development. Further contributions from the community are crucial to its success and are highly welcome.

  13. GeoSciML version 3: A GML application for geologic information

    Science.gov (United States)

    International Union of Geological Sciences., I. C.; Richard, S. M.

    2011-12-01

    After 2 years of testing and development, XML schema for GeoSciML version 3 are now ready for application deployment. GeoSciML draws from many geoscience data modelling efforts to establish a common suite of feature types to represent information associated with geologic maps (materials, structures, and geologic units) and observations including structure data, samples, and chemical analyses. After extensive testing and use case analysis, in December 2008 the CGI Interoperability Working Group (IWG) released GeoSciML 2.0 as an application schema for basic geological information. GeoSciML 2.0 is in use to deliver geologic data by the OneGeology Europe portal, the Geological Survey of Canada Groundwater Information Network (wet GIN), and the Auscope Mineral Resources portal. GeoSciML version 3.0 is updated to OGC Geography Markup Language v3.2, with re-engineered patterns for association of element values with controlled vocabulary concepts, incorporation of ISO19156 Observation and Measurement constructs for representing numeric and categorical values and for representing analytical data, incorporation of EarthResourceML to represent mineral occurrences and mines, incorporation of the GeoTime model to represent GSSP and the stratigraphic time scale, and refactoring of the GeoSciML namespace to follow emerging ISO practices for decoupling of dependencies between standardized namespaces. These changes will make it easier for data providers to link to standard vocabulary and registry services. The depth and breadth of GeoSciML remains largely unchanged, covering the representation of geologic units, earth materials and geologic structures. ISO19156 elements and patterns are used to represent sampling features such as boreholes and rock samples, as well as geochemical and geochronologic measurements. Geologic structures include shear displacement structures (brittle faults and ductile shears), contacts, folds, foliations, lineations and structures with no preferred

  14. Big data - modelling of midges in Europa using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik

    2017-01-01

    coordinates of each trap, start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month...... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability...

  15. Introducing E-Learning in a Norwegian Service Company with Participatory Design and Evolutionary Prototyping Techniques

    OpenAIRE

    Mørch, Anders I.; Engen, Bård Ketil; Hansen Åsand, Hege-René; Brynhildsen, Camilla; Tødenes, Ida

    2004-01-01

    Over a 2-year period, we have participated in the introduction of e-learning in a Norwegian service company, a gas station division of an oil company. This company has an advanced computer network infrastructure for communication and information sharing, but the primary task of the employees is serving customers. We identify some challenges to introducing e-learning in this kind of environment. A primary emphasis has been on using participatory design techniques during the planning stages and...

  16. A framework for detection of malicious software in Android handheld systems using machine learning techniques

    OpenAIRE

    Torregrosa García, Blas

    2015-01-01

    The present study aims at designing and developing new approaches to detect malicious applications in Android-based devices. More precisely, MaLDroide (Machine Learning-based Detector for Android malware), a framework for detection of Android malware based on machine learning techniques, is introduced here. It is devised to identify malicious applications. This work aims at the design and development of new ways of detecting malicious applications on devices...

  17. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
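
    A hedged sketch of the comparison described (synthetic data; the real predictors are surface-surveillance and weather variables): fit linear regression and a random forest to taxi-out times and compare root-mean-square errors.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        # Synthetic stand-in: encoded predictors (runway, spot, fix, traffic counts, ...)
        # with taxi-out time in minutes as the target.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(5000, 6))
        y = 15 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=2.0, size=5000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

        for name, model in [("linear regression", LinearRegression()),
                            ("random forest", RandomForestRegressor(n_estimators=200, random_state=3))]:
            model.fit(X_tr, y_tr)
            rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
            print(f"{name}: RMSE = {rmse:.2f} min")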

  18. Quantification of gastrointestinal liquid volumes and distribution following a 240 mL dose of water in the fasted state.

    Science.gov (United States)

    Mudie, Deanna M; Murray, Kathryn; Hoad, Caroline L; Pritchard, Susan E; Garnett, Martin C; Amidon, Gordon L; Gowland, Penny A; Spiller, Robin C; Amidon, Gregory E; Marciani, Luca

    2014-09-02

    The rate and extent of drug dissolution and absorption from solid oral dosage forms is highly dependent upon the volumes and distribution of gastric and small intestinal water. However, little is known about the time courses and distribution of water volumes in vivo in an undisturbed gut. Previous imaging studies offered a snapshot of water distribution in fasted humans and showed that water in the small intestine is distributed in small pockets. This study aimed to quantify the volume and number of water pockets in the upper gut of fasted healthy humans following ingestion of a glass of water (240 mL, as recommended for bioavailability/bioequivalence (BA/BE) studies), using recently validated noninvasive magnetic resonance imaging (MRI) methods. Twelve healthy volunteers underwent upper and lower abdominal MRI scans before drinking 240 mL (8 fluid ounces) of water. After ingesting the water, they were scanned at intervals for 2 h. The drink volume, inclusion criteria, and fasting conditions matched the international standards for BA/BE testing in healthy volunteers. The images were processed for gastric and intestinal total water volumes and for the number and volume of separate intestinal water pockets larger than 0.5 mL. The fasted stomach contained 35 ± 7 mL (mean ± SEM) of resting water. Upon drinking, the gastric fluid rose to 242 ± 9 mL. The gastric water volume declined rapidly after that with a half emptying time (T50%) of 13 ± 1 min. The mean gastric volume returned back to baseline 45 min after the drink. The fasted small bowel contained a total volume of 43 ± 14 mL of resting water. Twelve minutes after ingestion of water, small bowel water content rose to a maximum value of 94 ± 24 mL contained within 15 ± 2 pockets of 6 ± 2 mL each. At 45 min, when the glass of water had emptied completely from the stomach, total intestinal water volume was 77 ± 15 mL distributed into 16 ± 3 pockets of 5 ± 1 mL each. MRI provided unprecedented insights into

  19. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data

    Science.gov (United States)

    Furquim, Gustavo; Filho, Geraldo P. R.; Pessin, Gustavo; Pazzi, Richard W.

    2018-01-01

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos - Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN. PMID:29562657

  20. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data

    Directory of Open Access Journals (Sweden)

    Gustavo Furquim

    2018-03-01

    Full Text Available The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos - Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN.

  1. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data.

    Science.gov (United States)

    Furquim, Gustavo; Filho, Geraldo P R; Jalali, Roozbeh; Pessin, Gustavo; Pazzi, Richard W; Ueyama, Jó

    2018-03-19

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos - Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN.

  2. Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games

    KAUST Repository

    Jaleel, Hassan

    2018-04-08

    Stochastic stability is a popular solution concept for stochastic learning dynamics in games. However, a critical limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We address this limitation for the first time and develop a framework for the comparative analysis of stochastic learning dynamics with different update rules but same steady-state behavior. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics have the same stochastically stable states, LLL and ML correspond to different behavioral models for decision making. Moreover, we demonstrate through an example setup of sensor coverage game that for each of these dynamics, the paths to stochastically stable states exhibit distinctive behaviors. Therefore, we propose multiple criteria to analyze and quantify the differences in the short and medium run behavior of stochastic learning dynamics. We derive and compare upper bounds on the expected hitting time to the set of Nash equilibria for both LLL and ML. For the medium to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that result in a hierarchical decomposition of the state space into collections of states called cycles. We compare LLL and ML based on the proposed criteria and develop invaluable insights into the comparative behavior of the two dynamics.
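
    A hedged sketch of the log-linear learning update referred to above (a generic two-player coordination game with an assumed payoff matrix, not the paper's sensor-coverage setup): a randomly chosen player revises its action with probabilities proportional to the exponentiated payoffs at temperature tau.

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed 2x2 coordination game: payoff[a1, a2] is the common payoff
        # when player 1 plays a1 and player 2 plays a2.
        payoff = np.array([[1.0, 0.0],
                           [0.0, 2.0]])
        tau = 0.1                      # noise/temperature parameter
        actions = np.array([0, 0])     # current joint action

        def log_linear_step(actions):
            i = rng.integers(2)                     # pick a player uniformly at random
            j = 1 - i
            # Payoff of each candidate action against the opponent's current action.
            u = np.array([payoff[a, actions[j]] if i == 0 else payoff[actions[j], a]
                          for a in (0, 1)])
            p = np.exp(u / tau)
            p /= p.sum()                            # log-linear (softmax) choice rule
            actions[i] = rng.choice(2, p=p)
            return actions

        for _ in range(10_000):
            actions = log_linear_step(actions)
        print("joint action after 10000 steps:", actions)   # concentrates on the efficient equilibrium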

  3. Development and Experimental Evaluation of Machine-Learning Techniques for an Intelligent Hairy Scalp Detection System

    Directory of Open Access Journals (Sweden)

    Wei-Chien Wang

    2018-05-01

    Full Text Available Deep learning has become the most popular research subject in the fields of artificial intelligence (AI) and machine learning. In October 2013, MIT Technology Review commented that deep learning was a breakthrough technology. Deep learning has made progress in voice and image recognition, image classification, and natural language processing. Prior to deep learning, decision trees, linear discriminant analysis (LDA), support vector machines (SVM), the k-nearest neighbors algorithm (K-NN), and ensemble learning were popular in solving classification problems. In this paper, we applied the previously mentioned and deep learning techniques to hairy scalp images. Hairy scalp problems are usually diagnosed by non-professionals in hair salons, and people with such problems may be advised by these non-professionals. Additionally, several common scalp problems are similar; therefore, non-experts may provide incorrect diagnoses. Hence, scalp problems have worsened. In this work, we implemented and compared the deep-learning method (the ImageNet-VGG-f model), Bag of Words (BOW) with machine-learning classifiers, and histogram of oriented gradients (HOG)/pyramid histogram of oriented gradients (PHOG) with machine-learning classifiers. The tools from the classification learner apps were used for hairy scalp image classification. The results indicated that deep learning can achieve an accuracy of 89.77% when the learning rate is 1 × 10^-4, and this accuracy is far higher than those achieved by BOW with SVM (80.50%) and PHOG with SVM (53.0%).
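
    A hedged sketch of one of the baseline pipelines mentioned above, HOG features fed to an SVM (synthetic images; the parameters are assumptions, not the paper's settings):

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Synthetic grayscale stand-ins for scalp images, with binary condition labels.
        rng = np.random.default_rng(5)
        images = rng.random(size=(200, 64, 64))
        labels = rng.integers(0, 2, size=200)

        # Extract a HOG descriptor from every image.
        X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                      for img in images])

        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=5)
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        print("HOG + SVM accuracy:", accuracy_score(y_te, clf.predict(X_te)))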

  4. Machine Learning of Fault Friction

    Science.gov (United States)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025
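
    A hedged sketch of this decision-tree regression framing (synthetic signals; window length and feature choices are assumptions, not the experimental setup): compute statistical features of the continuous acoustic signal in moving windows and train a gradient-boosted-tree regressor to predict the concurrent shear stress.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic continuous acoustic emission signal and a slowly varying shear stress.
        rng = np.random.default_rng(11)
        n = 200_000
        stress = 5 + np.sin(np.linspace(0, 40, n))                 # stand-in for shear stress
        ae = rng.normal(scale=0.1 + 0.05 * (stress - 4), size=n)   # AE variance tracks stress

        win = 1000
        windows = ae[: n - n % win].reshape(-1, win)
        targets = stress[: n - n % win].reshape(-1, win).mean(axis=1)

        # Statistical features of each window: variance, fourth-moment proxy, amplitude range.
        X = np.column_stack([windows.var(axis=1),
                             ((windows - windows.mean(axis=1, keepdims=True)) ** 4).mean(axis=1),
                             np.ptp(windows, axis=1)])

        X_tr, X_te, y_tr, y_te = train_test_split(X, targets, test_size=0.3, random_state=11)
        reg = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X_tr, y_tr)
        print("R^2 on held-out windows:", reg.score(X_te, y_te))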

  5. Analysed potential of big data and supervised machine learning techniques in effectively forecasting travel times from fused data

    Directory of Open Access Journals (Sweden)

    Ivana Šemanjski

    2015-12-01

    Full Text Available Travel time forecasting is an interesting topic for many ITS services. Increased availability of data collection sensors increases the availability of predictor variables but also highlights the heavy processing issues related to this big data availability. In this paper we aimed to analyse the potential of big data and supervised machine learning techniques in effectively forecasting travel times. For this purpose we used fused data from three data sources (Global Positioning System vehicle tracks, road network infrastructure data and meteorological data) and four machine learning techniques (k-nearest neighbours, support vector machines, boosting trees and random forest). To evaluate the forecasting results we compared them between different road classes in the context of absolute values, measured in minutes, and the mean squared percentage error. For the road classes with high average speeds and long road segments, machine learning techniques forecasted travel times with small relative error, while for the road classes with small average speeds and segment lengths this was a more demanding task. All three data sources proved to have a high impact on travel time forecast accuracy, and the best results (taking into account all road classes) were achieved for the k-nearest neighbours and random forest techniques.

  6. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  7. Multiple-Choice Testing Using Immediate Feedback--Assessment Technique (IF AT®) Forms: Second-Chance Guessing vs. Second-Chance Learning?

    Science.gov (United States)

    Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A.

    2015-01-01

    Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback Assessment Technique (IF AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…

  8. Classification of breast tumour using electrical impedance and machine learning techniques.

    Science.gov (United States)

    Al Amin, Abdullah; Parvin, Shahnaj; Kadir, M A; Tahmid, Tasmia; Alam, S Kaisar; Siddique-e Rabbani, K

    2014-06-01

    When a breast lump is detected through palpation, mammography or ultrasonography, the final test for characterization of the tumour, whether it is malignant or benign, is biopsy. This is invasive and carries hazards associated with any surgical procedures. The present work was undertaken to study the feasibility for such characterization using non-invasive electrical impedance measurements and machine learning techniques. Because of changes in cell morphology of malignant and benign tumours, changes are expected in impedance at a fixed frequency, and versus frequency of measurement. Tetrapolar impedance measurement (TPIM) using four electrodes at the corners of a square region of sides 4 cm was used for zone localization. Data of impedance in two orthogonal directions, measured at 5 and 200 kHz from 19 subjects, and their respective slopes with frequency were subjected to machine learning procedures through the use of feature plots. These patients had single or multiple tumours of various types in one or both breasts, and four of them had malignant tumours, as diagnosed by core biopsy. Although size and depth of the tumours are expected to affect the measurements, this preliminary work ignored these effects. Selecting 12 features from the above measurements, feature plots were drawn for the 19 patients, which displayed considerable overlap between malignant and benign cases. However, based on observed qualitative trend of the measured values, when all the feature values were divided by respective ages, the two types of tumours separated out reasonably well. Using K-NN classification method the results obtained are, positive prediction value: 60%, negative prediction value: 93%, sensitivity: 75%, specificity: 87% and efficacy: 84%, which are very good for such a test on a small sample size. Study on a larger sample is expected to give confidence in this technique, and further improvement of the technique may have the ability to replace biopsy.
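
    A hedged sketch of the classification step described (synthetic values; the real features are impedance magnitudes and their frequency slopes): divide each feature by patient age and classify with k-nearest neighbours under leave-one-out cross-validation.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        # Synthetic stand-in: 19 patients, 12 impedance-derived features, patient age,
        # and a benign (0) / malignant (1) label from biopsy.
        rng = np.random.default_rng(9)
        features = rng.normal(loc=100, scale=20, size=(19, 12))
        ages = rng.integers(25, 70, size=19)
        labels = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0])

        # Age-normalised features, following the qualitative trend reported above.
        X = features / ages[:, None]

        clf = KNeighborsClassifier(n_neighbors=3)
        pred = cross_val_predict(clf, X, labels, cv=LeaveOneOut())
        print("leave-one-out accuracy:", (pred == labels).mean())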

  9. Classification of breast tumour using electrical impedance and machine learning techniques

    International Nuclear Information System (INIS)

    Amin, Abdullah Al; Parvin, Shahnaj; Kadir, M A; Tahmid, Tasmia; Alam, S Kaisar; Siddique-e Rabbani, K

    2014-01-01

    When a breast lump is detected through palpation, mammography or ultrasonography, the final test for characterization of the tumour, whether it is malignant or benign, is biopsy. This is invasive and carries hazards associated with any surgical procedures. The present work was undertaken to study the feasibility for such characterization using non-invasive electrical impedance measurements and machine learning techniques. Because of changes in cell morphology of malignant and benign tumours, changes are expected in impedance at a fixed frequency, and versus frequency of measurement. Tetrapolar impedance measurement (TPIM) using four electrodes at the corners of a square region of sides 4 cm was used for zone localization. Data of impedance in two orthogonal directions, measured at 5 and 200 kHz from 19 subjects, and their respective slopes with frequency were subjected to machine learning procedures through the use of feature plots. These patients had single or multiple tumours of various types in one or both breasts, and four of them had malignant tumours, as diagnosed by core biopsy. Although size and depth of the tumours are expected to affect the measurements, this preliminary work ignored these effects. Selecting 12 features from the above measurements, feature plots were drawn for the 19 patients, which displayed considerable overlap between malignant and benign cases. However, based on observed qualitative trend of the measured values, when all the feature values were divided by respective ages, the two types of tumours separated out reasonably well. Using K-NN classification method the results obtained are, positive prediction value: 60%, negative prediction value: 93%, sensitivity: 75%, specificity: 87% and efficacy: 84%, which are very good for such a test on a small sample size. Study on a larger sample is expected to give confidence in this technique, and further improvement of the technique may have the ability to replace biopsy. (paper)

  10. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon). In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required, and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that "big" ML systems can benefit greatly from ML-rooted statistical and algorithmic insights, and that ML researchers should therefore not shy away from such systems design, we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area

  11. An in vivo technique for the measurement of bone blood flow in animals

    International Nuclear Information System (INIS)

    Rosenthal, M.S.; DeLuca, P.M. Jr.; Pearson, D.W.; Nickles, R.J.; Lehner, C.E.; Lanphier, E.H.

    1987-01-01

    A new technique to measure the in vivo clearance of 41Ar from the bone mineral matrix is demonstrated following fast neutron production of 41Ar in bone via the 44Ca(n,α) reaction at 14.1 MeV. At the end of irradiation, the 41Ar activity is assayed with a Ge(Li) detector where sequential gamma-ray spectra are taken. Following full-energy peak integration, background and dead time correction, the activity of 41Ar as a function of time is determined. Results indicated that the Ar washout from bone in rats using this technique was approximately 16 ml (100 ml min)^-1 and in agreement with other measurement techniques. For sheep the bone perfusion in the tibia was approximately 1.9 ± 0.2 ml (100 ml min)^-1. (author)

  12. pymzML--Python module for high-throughput bioinformatics on mass spectrometry data.

    Science.gov (United States)

    Bald, Till; Barth, Johannes; Niehues, Anna; Specht, Michael; Hippler, Michael; Fufezan, Christian

    2012-04-01

    pymzML is an extension to Python that offers (i) easy access to mass spectrometry (MS) data that allows the rapid development of tools, (ii) a very fast parser for mzML data, the standard data format in MS, and (iii) a set of functions to compare or handle spectra. pymzML requires Python 2.6.5+ and is fully compatible with Python 3. The module is freely available on http://pymzml.github.com or pypi, is published under the LGPL license and requires no additional modules to be installed. christian@fufezan.net.
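
    A minimal usage sketch, assuming the pymzml package is installed; the file path is a placeholder and the exact spectrum accessors (e.g. ms_level) vary between pymzML releases, so treat them as assumptions rather than a definitive API reference.

      # Count MS2 spectra in an mzML file (path is a placeholder).
      import pymzml

      run = pymzml.run.Reader("example.mzML")
      ms2_count = 0
      for spectrum in run:
          if spectrum.ms_level == 2:   # attribute name assumed from recent pymzML releases
              ms2_count += 1
      print("MS2 spectra:", ms2_count)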

  13. Spectra, chromatograms, Metadata: mzML-the standard data format for mass spectrometer output.

    Science.gov (United States)

    Turewicz, Michael; Deutsch, Eric W

    2011-01-01

    This chapter describes Mass Spectrometry Markup Language (mzML), an XML-based and vendor-neutral standard data format for storage and exchange of mass spectrometer output such as raw spectra and peak lists. It is intended to replace its two precursor data formats (mzData and mzXML), which had been developed independently a few years earlier. Hence, with the release of mzML, the problem of having two different formats for the same purposes is solved, and with it the duplicated effort of maintaining and supporting two data formats. The new format has been developed by a broad-based consortium of major instrument vendors, software vendors, and academic researchers under the aegis of the Human Proteome Organisation (HUPO) Proteomics Standards Initiative (PSI), with full participation of the main developers of the precursor formats. This comprehensive approach helped mzML to become a generally accepted standard. Furthermore, the collaborative development ensured that mzML has adopted the best features of its precursor formats. In this chapter, we discuss mzML's development history, its design principles and use cases, as well as its main building components. We also present the available documentation, an example file, and validation software for mzML.

  14. Evaluation of undergraduate clinical learning experiences in the subject of pediatric dentistry using critical incident technique.

    Science.gov (United States)

    Vyawahare, S; Banda, N R; Choubey, S; Parvekar, P; Barodiya, A; Dutta, S

    2013-01-01

    In pediatric dentistry, the experiences of dental students may help dental educators better prepare graduates to treat children. Research suggests that students' perceptions should be considered in any discussion of their education, but there has been no systematic examination of the learning experiences of India's undergraduate dental students. This qualitative investigation aimed to gather and analyze information about experiences in pediatric dentistry from the students' viewpoint using the critical incident technique (CIT). The sample group for this investigation comprised all 240 3rd and 4th year dental students from the four dental colleges in Indore. Using CIT, participants were asked to describe at least one positive and one negative experience in detail. They described 308 positive and 359 negative experiences related to the pediatric dentistry clinic. Analysis of the data resulted in the identification of four key factors related to their experiences: 1) the instructor; 2) the patient; 3) the learning process; and 4) the learning environment. The CIT is a useful data collection and analysis technique that provides rich, useful data and has many potential uses in dental education.

  15. Learning mediastinoscopy: the need for education, experience and modern techniques--interdependency of the applied technique and surgeon's training level.

    Science.gov (United States)

    Walles, Thorsten; Friedel, Godehard; Stegherr, Tobias; Steger, Volker

    2013-04-01

    Mediastinoscopy represents the gold standard for invasive mediastinal staging. While learning and teaching the surgical technique are challenging due to the limited accessibility of the operative field, both have benefited from the implementation of video-assisted techniques. However, it has not yet been established whether video-assisted mediastinoscopy in itself improves mediastinal staging. A retrospective single-centre cohort analysis of 657 mediastinoscopies performed at a specialized tertiary care thoracic surgery unit from 1994 to 2006 was conducted. The number of specimens obtained per procedure and per lymph node station (2, 4, 7, 8 for mediastinoscopy and 2-9 for open lymphadenectomy), the number of lymph node stations examined, sensitivity and negative predictive value were calculated, with a focus on the technique employed (video-assisted vs standard technique) and the surgeon's experience. Overall sensitivity was 60%, accuracy was 90% and negative predictive value 88%. With the conventional technique, experience alone improved sensitivity from 49 to 57%, most markedly in the right paratracheal region (from 62 to 82%). But with the video-assisted technique, experienced surgeons raised sensitivity from 57 to 79%, in contrast to inexperienced surgeons, who lowered sensitivity from 49 to 33%. We found significant differences concerning (i) the total number of specimens taken, (ii) the number of lymph node stations examined, (iii) the number of specimens taken per lymph node station and (iv) true positive mediastinoscopies. The video-assisted technique can significantly improve the results of mediastinoscopy. A thorough education on the modern video-assisted technique is mandatory for thoracic surgeons until they can fully exhaust its potential.

  16. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Bojan Nemec

    2014-10-01

    Full Text Available High-precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined by the antenna, which is typically placed behind the skier’s neck. A key issue is how to estimate other, more relevant parameters of the skier’s body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier’s body with an inverted-pendulum model that oversimplified the skier’s body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate the COM and ski trajectories based on a more faithful approximation of the skier’s body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our methods outperform the commonly used inverted-pendulum approach and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing.
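
    A hedged sketch of the neural-network variant of such an estimator: regress centre-of-mass coordinates from a short window of GNSS antenna positions with a multilayer perceptron. The window length, feature layout and data below are hypothetical stand-ins, not the authors' nine-degree-of-freedom setup.

      # Regress COM (x, y, z) from a 5-sample window of antenna positions.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n, window = 5000, 5
      antenna = rng.normal(size=(n, window * 3))              # x, y, z over the window
      com = antenna[:, -3:] + 0.1 * rng.normal(size=(n, 3))   # synthetic target

      X_tr, X_te, y_tr, y_te = train_test_split(antenna, com, test_size=0.2, random_state=0)
      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
      model.fit(X_tr, y_tr)
      rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
      print("RMSE:", rmse)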

  17. Using machine learning techniques to differentiate acute coronary syndrome

    Directory of Open Access Journals (Sweden)

    Sougand Setareh

    2015-02-01

    Full Text Available Background: Acute coronary syndrome (ACS) is an unstable and dynamic process that includes unstable angina, ST-elevation myocardial infarction, and non-ST-elevation myocardial infarction. Despite recent technological advances in the early diagnosis of ACS, differentiating between the different types of coronary disease in the early hours of admission remains controversial. The present study aimed to accurately differentiate between various coronary events using machine learning techniques. Such methods, a subset of artificial intelligence, include algorithms that allow computers to learn and can play a major role in treatment decisions. Methods: 1902 patients diagnosed with ACS and admitted to hospital were selected according to the Euro Heart Survey on ACS. Patients were classified using the J48 decision tree, and a bagging aggregation algorithm was implemented to increase the efficiency of the classifier. Results: The performance of the classifiers was estimated and compared based on their accuracy, computed from the confusion matrix. The accuracy rates of the decision tree and the bagging algorithm were 91.74% and 92.53%, respectively. Conclusion: The proposed methods proved able to identify the various forms of ACS. In addition, using the confusion matrix, an acceptable number of subjects with acute coronary syndrome were identified in each class.
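
    An illustrative sketch of the same pattern (a decision tree wrapped in a bagging ensemble, evaluated by cross-validated accuracy), assuming scikit-learn and synthetic placeholder data rather than the Euro Heart Survey records.

      # Compare a single decision tree with a bagged ensemble of trees.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import BaggingClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1902, 20))            # placeholder clinical attributes
      y = rng.integers(0, 3, size=1902)          # three hypothetical ACS classes

      tree = DecisionTreeClassifier(random_state=0)
      bagged = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=50, random_state=0)

      print("tree accuracy   :", cross_val_score(tree, X, y, cv=10).mean())
      print("bagging accuracy:", cross_val_score(bagged, X, y, cv=10).mean())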

  18. Learning L2 German vocabulary through reading: the effect of three enhancement techniques compared

    NARCIS (Netherlands)

    Peters, E.; Hulstijn, J.H.; Sercu, L.; Lutjeharms, M.

    2009-01-01

    This study investigated three techniques designed to increase the chances that second language (L2) readers look up and learn unfamiliar words during and after reading an L2 text. Participants in the study, 137 college students in Belgium (L1 = Dutch, L2 = German), were randomly assigned to one of

  19. Understanding a Deep Learning Technique through a Neuromorphic System a Case Study with SpiNNaker Neuromorphic Platform

    Directory of Open Access Journals (Sweden)

    Sugiarto Indar

    2018-01-01

    Full Text Available Deep learning (DL) has been considered a breakthrough technique in the field of artificial intelligence and machine learning. Conceptually, it relies on a many-layer network that exhibits a hierarchically non-linear processing capability. Some DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with incredible results, even comparable to human intelligence. However, many researchers are still sceptical about its true capability: can the intelligence demonstrated by the deep learning technique be applied to general tasks? This question motivates the emergence of another research discipline: neuromorphic computing (NC). In NC, researchers try to identify the most fundamental ingredients that construct the intelligent behaviour produced by the brain itself. To achieve this, neuromorphic systems are developed to mimic the brain's functionality down to the cellular level. In this paper, a neuromorphic platform called SpiNNaker is described and evaluated in order to understand its potential use as a platform for a deep learning approach. This paper is a literature review that contains a comparative study of algorithms that have been implemented on SpiNNaker.

  20. Alkaline protease production from industrial wastes by bacillus subtilis ML-4

    International Nuclear Information System (INIS)

    Sher, M.G.; Nadeem, M.; Syed, Q.; Irfan, M.; Baig, S.

    2010-01-01

    The influence of various culture conditions on protease production by Bacillus subtilis ML-4 was studied in the presence of growth medium containing poultry feed waste (5%), K₂HPO₄ (0.3%), CaCl₂ (0.03%) and MgSO₄ (0.015%). Maximum protease production (264.25 ± 1.86 U/ml) was observed at initial pH 9 with 3% (v/v) of inoculum size after 48 h of incubation at 37 °C. The alkaline protease was stable over a broad range of temperature (30 to 60 °C) and pH (8 to 11). However, maximum activity (155.45 U/ml) was observed at 50 °C and pH 10. (author)

  1. Model-driven development of smart grid services using SoaML

    DEFF Research Database (Denmark)

    Kosek, Anna Magdalena; Gehrke, Oliver

    2014-01-01

    This paper presents a model-driven software development process which can be applied to the design of smart grid services. The Service Oriented Architecture Modelling Language (SoaML) is used to describe the architecture as well as the roles and interactions between service participants. The individual modelling steps and an example design of a SoaML model for a voltage control service are presented and explained. Finally, the paper discusses a proof-of-concept implementation of the modelled service in a smart grid testing laboratory.

  2. Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.

    Science.gov (United States)

    Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa

    2014-06-01

    Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance a ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), but this was achieved by both NBM and NBG when the features used in the training set were restricted to those with high values of IG. The most informative features for the patient versus control task were low frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA) may shed further light on this little understood distinction. Copyright © 2013 Elsevier Ltd. All rights reserved.
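
    A hedged sketch of the two naive Bayes variants on word-count features, with a mutual-information filter standing in for the information-gain ranking; the texts and labels are placeholders, not the study's picture-description transcripts.

      # Word-count features, a mutual-information feature filter, and the two
      # naive Bayes variants (multinomial and Gaussian).
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB, GaussianNB
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score

      texts = ["the picture shows a busy kitchen scene",
               "a woman is washing dishes at the sink"] * 20   # placeholder transcripts
      labels = np.array([0, 1] * 20)                           # e.g. patient vs control

      X = CountVectorizer().fit_transform(texts)
      X_top = SelectKBest(mutual_info_classif, k=5).fit_transform(X, labels)

      print("NBM accuracy:", cross_val_score(MultinomialNB(), X_top, labels, cv=5).mean())
      print("NBG accuracy:", cross_val_score(GaussianNB(), X_top.toarray(), labels, cv=5).mean())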

  3. HepML, an XML-based format for describing simulated data in high energy physics

    Science.gov (United States)

    Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.

    2010-10-01

    In this paper we describe a HepML format and a corresponding C++ library developed for keeping complete description of parton level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community. Program summary: Program title: libhepml. Catalogue identifier: AEGL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPLv3. No. of lines in distributed program, including test data, etc.: 138 866. No. of bytes in distributed program, including test data, etc.: 613 122. Distribution format: tar.gz. Programming language: C++, C. Computer: PCs and workstations. Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10. RAM: 1 073 741 824 bytes (1 Gb). Classification: 6.2, 11.1, 11.2. External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/). Nature of problem: Monte Carlo simulation in high

  4. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  5. Diguanylate cyclase activity of the Mycobacterium leprae T cell antigen ML1419c.

    Science.gov (United States)

    Rotcheewaphan, Suwatchareeporn; Belisle, John T; Webb, Kristofor J; Kim, Hee-Jin; Spencer, John S; Borlee, Bradley R

    2016-09-01

    The second messenger, bis-(3',5')-cyclic dimeric guanosine monophosphate (cyclic di-GMP), is involved in the control of multiple bacterial phenotypes, including those that impact host-pathogen interactions. Bioinformatics analyses predicted that Mycobacterium leprae, an obligate intracellular bacterium and the causative agent of leprosy, encodes three active diguanylate cyclases. In contrast, the related pathogen Mycobacterium tuberculosis encodes only a single diguanylate cyclase. One of the M. leprae unique diguanylate cyclases (ML1419c) was previously shown to be produced early during the course of leprosy. Thus, functional analysis of ML1419c was performed. The gene encoding ML1419c was cloned and expressed in Pseudomonas aeruginosa PAO1 to allow for assessment of cyclic di-GMP production and cyclic di-GMP-mediated phenotypes. Phenotypic studies revealed that ml1419c expression altered colony morphology, motility and biofilm formation of P. aeruginosa PAO1 in a manner consistent with increased cyclic di-GMP production. Direct measurement of cyclic di-GMP levels by liquid chromatography-mass spectrometry confirmed that ml1419c expression increased cyclic di-GMP production in P. aeruginosa PAO1 cultures in comparison to the vector control. The observed phenotypes and increased levels of cyclic di-GMP detected in P. aeruginosa expressing ml1419c could be abrogated by mutation of the active site in ML1419c. These studies demonstrated that ML1419c of M. leprae functions as diguanylate cyclase to synthesize cyclic di-GMP. Thus, this protein was renamed DgcA (Diguanylate cyclase A). These results also demonstrated the ability to use P. aeruginosa as a heterologous host for characterizing the function of proteins involved in the cyclic di-GMP pathway of a pathogen refractory to in vitro growth, M. leprae.

  6. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research

  7. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  8. Body surface area adapted iopromide 300 mg/ml versus 370 mg/ml contrast medium injection protocol: Influence on quantitative and clinical assessment in combined PET/CT

    Energy Technology Data Exchange (ETDEWEB)

    Verburg, Frederik A., E-mail: fverburg@ukaachen.de [RWTH Aachen University Hospital, Department of Nuclear Medicine, Pauwelsstraße 30, 52074 Aachen (Germany); Maastricht University Medical Center, Department of Nuclear Medicine, P. Debyelaan 25, 6229 HX Maastricht (Netherlands); Apitzsch, Jonas [RWTH Aachen University Hospital, Department of Diagnostic and Interventional Radiology, Pauwelsstraße 30, 52074 Aachen (Germany); Lensing, Carina [RWTH Aachen University Hospital, Department of Nuclear Medicine, Pauwelsstraße 30, 52074 Aachen (Germany); Kuhl, Christiane K. [RWTH Aachen University Hospital, Department of Diagnostic and Interventional Radiology, Pauwelsstraße 30, 52074 Aachen (Germany); Pietsch, Hubertus [Bayer Pharma AG, Müllerstrasse 178, 13353 Berlin (Germany); Mottaghy, Felix M. [RWTH Aachen University Hospital, Department of Nuclear Medicine, Pauwelsstraße 30, 52074 Aachen (Germany); Maastricht University Medical Center, Department of Nuclear Medicine, P. Debyelaan 25, 6229 HX Maastricht (Netherlands); Behrendt, Florian F. [RWTH Aachen University Hospital, Department of Nuclear Medicine, Pauwelsstraße 30, 52074 Aachen (Germany)

    2013-12-01

    Purpose: To investigate the quantitative and qualitative differences between combined positron emission tomography and computed X-ray tomography (PET/CT) enhanced with contrast medium with either an iodine concentration 300 mg/ml or 370 mg/ml. Materials and methods: 120 consecutive patients scheduled for F-18-Fluorodeoxyglucose (FDG) PET/CT were included. The first (second) 60 patients received contrast medium with 300 (370) mg iodine/ml. Intravenous injection protocols were adapted for an identical iodine delivery rate (1.3 mg/s) and body surface area (BSA) adapted iodine dose (22.26 g I/m²). Maximum and mean standardized uptake values (SUVmax; SUVmean) and contrast enhancement (HU) were determined in the ascending aorta, the abdominal aorta, the inferior vena cava, the portal vein, the liver and the right kidney in the venous contrast medium phase. PET data were evaluated visually for the presence of malignancy and image quality. Results: Both media caused significantly higher values for HU, SUVmean and SUVmax for the enhanced PET/CT than the non-enhanced one (all p < 0.01). There were no significant differences in the degree of increase of HU, SUVmean and SUVmax between the two contrast media at any anatomic site (all p > 0.05). Visual evaluation of lesions showed no differences between contrast and non-contrast PET/CT or between the two different contrast media (p = 0.77). Conclusion: When using a constant iodine delivery rate and total iodine dose in a BSA adapted injection protocol, there are no quantitative or qualitative differences in either CT or PET between contrast media with an iodine concentration of 300 mg/ml and 370 mg/ml, respectively.

  9. NASA Engineering and Safety Center (NESC) Enhanced Melamine (ML) Foam Acoustic Test (NEMFAT)

    Science.gov (United States)

    McNelis, Anne M.; Hughes, William O.; McNelis, Mark E.

    2014-01-01

    The NASA Engineering and Safety Center (NESC) funded a proposal to achieve initial basic acoustic characterization of ML (melamine) foam, which could serve as a starting point for a future, more comprehensive acoustic test program for ML foam. A project plan was developed and implemented to obtain acoustic test data for both normal and enhanced ML foam. This project became known as the NESC Enhanced Melamine Foam Acoustic Test (NEMFAT). This document contains the outcome of the NEMFAT project.

  10. Case-Based Reasoning in Mixed Paradigm Settings and with Learning

    Science.gov (United States)

    1994-04-30

    Learning Prototypical Cases OFF-BROADWAY, MCI and RMHC -* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the...at Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI and RMHC -* made the following notable

  11. Classifying Structures in the ISM with Machine Learning Techniques

    Science.gov (United States)

    Beaumont, Christopher; Goodman, A. A.; Williams, J. P.

    2011-01-01

    The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.
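
    A minimal sketch of the approach, assuming scikit-learn: a Support Vector Machine trained on per-patch features to flag one morphological class. The features and labels below are synthetic placeholders, not the ISM data.

      # Train an RBF-kernel SVM on per-patch features and report test accuracy.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 16))                    # e.g. texture/gradient features
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic "filament" label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
      print("patch accuracy:", accuracy_score(y_te, clf.predict(X_te)))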

  12. Does Teaching Mnemonics for Vocabulary Learning Make a Difference? Putting the Keyword Method and the Word Part Technique to the Test

    Science.gov (United States)

    Wei, Zheng

    2015-01-01

    The present research tested the effectiveness of the word part technique in comparison with the keyword method and self-strategy learning. One hundred and twenty-one Chinese year-one university students were randomly assigned to one of the three learning conditions: word part, keyword or self-strategy learning condition. Half of the target words…

  13. CytometryML: a data standard which has been designed to interface with other standards

    Science.gov (United States)

    Leif, Robert C.

    2007-02-01

    Because of the differences in the requirements, needs, and past histories including existing standards of the creating organizations, a single encompassing cytology-pathology standard will not, in the near future, replace the multiple existing or under development standards. Except for DICOM and FCS, these standardization efforts are all based on XML. CytometryML is a collection of XML schemas, which are based on the Digital Imaging and Communications in Medicine (DICOM) and Flow Cytometry Standard (FCS) datatypes. The CytometryML schemas contain attributes that link them to the DICOM standard and FCS. Interoperability with DICOM has been facilitated by, wherever reasonable, limiting the difference between CytometryML and the previous standards to syntax. In order to permit the Resource Description Framework, RDF, to reference the CytometryML datatypes, id attributes have been added to many CytometryML elements. The Laboratory Digital Imaging Project (LDIP) Data Exchange Specification and the Flowcyt standards development effort employ RDF syntax. Documentation from DICOM has been reused in CytometryML. The unity of analytical cytology was demonstrated by deriving a microscope type and a flow cytometer type from a generic cytometry instrument type. The feasibility of incorporating the Flowcyt gating schemas into CytometryML has been demonstrated. CytometryML is being extended to include many of the new DICOM Working Group 26 datatypes, which describe patients, specimens, and analytes. In situations where multiple standards are being created, interoperability can be facilitated by employing datatypes based on a common set of semantics and building in links to standards that employ different syntax.

  14. Evaluation of undergraduate clinical learning experiences in the subject of pediatric dentistry using critical incident technique

    Directory of Open Access Journals (Sweden)

    S Vyawahare

    2013-01-01

    Full Text Available Introduction: In pediatric dentistry, the experiences of dental students may help dental educators better prepare graduates to treat children. Research suggests that students' perceptions should be considered in any discussion of their education, but there has been no systematic examination of India's undergraduate dental students' learning experiences. Aim: This qualitative investigation aimed to gather and analyze information about experiences in pediatric dentistry from the students' viewpoint using the critical incident technique (CIT). Study Design: The sample group for this investigation came from all 240 3rd and 4th year dental students from the four dental colleges in Indore. Using CIT, participants were asked to describe at least one positive and one negative experience in detail. Results: They described 308 positive and 359 negative experiences related to the pediatric dentistry clinic. Analysis of the data resulted in the identification of four key factors related to their experiences: 1) the instructor; 2) the patient; 3) the learning process; and 4) the learning environment. Conclusion: The CIT is a useful data collection and analysis technique that provides rich, useful data and has many potential uses in dental education.

  15. Modern Languages and Specific Learning Difficulties (SpLD): Implications of Teaching Adult Learners with Dyslexia in Distance Learning

    Science.gov (United States)

    Gallardo, Matilde; Heiser, Sarah; Arias McLaughlin, Ximena

    2015-01-01

    In modern language (ML) distance learning programmes, teachers and students use online tools to facilitate, reinforce and support independent learning. This makes it essential for teachers to develop pedagogical expertise in using online communication tools to perform their role. Teachers frequently raise questions of how best to support the needs…

  16. Anesthetic success of 1.8ml lidocaine 2% for mandibular tooth extraction. A pilot study

    Directory of Open Access Journals (Sweden)

    Pedro Aravena

    2013-04-01

    Full Text Available Aim: To determine the anesthetic effect of a 1.8 ml cartridge of 2% lidocaine with 1:100,000 epinephrine in inferior alveolar nerve block (NAI) for the extraction of mandibular teeth. Material and methods: A pilot study with an analytic design. Volunteer patients attending the Dental Emergency Service in Valdivia, Chile, for mandibular tooth extraction between May and July 2010 participated. The anesthetic technique was performed by a dentist using a single cartridge of anesthetic for the NAI block. After 15 minutes, the effect was considered successful if no additional anesthesia was required during extraction of the tooth. The relationship between anesthetic success and sex, age, tooth diagnosis and the level of pain observed was analyzed (chi-square and logistic regression, p<0.05). Results: 62 patients were selected, in 47 (75.8%) of whom anesthetic success was achieved. There was no statistical association with sex, age, tooth diagnosis or perceived pain. Conclusion: A single 1.8 ml cartridge of anesthetic was effective in three of four patients undergoing extraction of mandibular teeth. Further research on the clinical effectiveness of other anesthetics at the same dose in NAI block is suggested.

  17. Tracking Active Learning in the Medical School Curriculum: A Learning-Centered Approach.

    Science.gov (United States)

    McCoy, Lise; Pettit, Robin K; Kellar, Charlyn; Morgan, Christine

    2018-01-01

    Medical education is moving toward active learning during large group lecture sessions. This study investigated the saturation and breadth of active learning techniques implemented in first year medical school large group sessions. Data collection involved retrospective curriculum review and semistructured interviews with 20 faculty. The authors piloted a taxonomy of active learning techniques and mapped learning techniques to attributes of learning-centered instruction. Faculty implemented 25 different active learning techniques over the course of 9 first year courses. Of 646 hours of large group instruction, 476 (74%) involved at least 1 active learning component. The frequency and variety of active learning components integrated throughout the year 1 curriculum reflect faculty familiarity with active learning methods and their support of an active learning culture. This project has sparked reflection on teaching practices and facilitated an evolution from teacher-centered to learning-centered instruction.

  18. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis

    NARCIS (Netherlands)

    Cheplygina, Veronika; de Bruijne, Marleen; Pluim, Josien P. W.

    2018-01-01

    Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods which can learn

  19. Understanding a Deep Learning Technique through a Neuromorphic System a Case Study with SpiNNaker Neuromorphic Platform

    OpenAIRE

    Sugiarto Indar; Pasila Felix

    2018-01-01

    Deep learning (DL) has been considered as a breakthrough technique in the field of artificial intelligence and machine learning. Conceptually, it relies on a many-layer network that exhibits a hierarchically non-linear processing capability. Some DL architectures such as deep neural networks, deep belief networks and recurrent neural networks have been developed and applied to many fields with incredible results, even comparable to human intelligence. However, many researchers are still scept...

  20. The Impact of Using Note Taking's Techniques on the Students' Learning

    Directory of Open Access Journals (Sweden)

    Asrar Jabir Edan

    2017-03-01

    Full Text Available It is often said that the worst pen is better than the best memory; regardless of how good students' memory might be, they need to take notes during the lesson or lecture because it is impossible to remember all the details later on. This easy-to-use technique, which requires only a brief record of important information, can help students not only recall what has been said in class, but also achieve their learning goals and produce a useful summary of the material to be revised, especially before a test. Unfortunately, it is noticeable that most students, especially at the secondary stage, neglect this important skill. Most of them do not write notes unless they are told to do so by the teacher, or they depend only on the textbooks, forgetting that not all the material mentioned during the lesson is found in them: some of it explains complex and abstract points, and some relates to the teacher's experience of the subject matter. In fact, note taking is part of the learning process and, to be useful, students need to learn how to do it effectively and what to record, because not everything that is said is important. This requires the learners to acquire more than one skill, and the teacher to make more effort to teach them how to do it properly. For these reasons, this research sheds more light on the topic, followed by an experiment and a test to evaluate its effectiveness in learning.

  1. Interoperability between OPC UA and AutomationML

    OpenAIRE

    Henßen, Robert; Schleipen, Miriam

    2014-01-01

    OPC UA (OPC Unified Architecture) is a platform-independent standard series (IEC 62541) [1], [2] for communication of industrial automation devices and systems. The OPC Unified Architecture is an advanced communication technology for process control. Certainly the launching costs for the initial information model are quite high. AutomationML (Automation Markup Language) is an upcoming open standard series (IEC 62714) [3], [4] for describing production plants or plant components. The goal of t...

  2. Machine learning methods to predict child posttraumatic stress: a proof of concept study.

    Science.gov (United States)

    Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin

    2017-07-10

    The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to
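
    A hedged sketch of the general proof-of-concept pattern described above (predictive classification on around 100 risk-factor variables, with a feature-selection step and a comparison between the full and reduced feature sets); logistic regression and univariate selection are generic stand-ins for the causal-discovery machinery the study used, and the data are synthetic.

      # Compare an all-features model with a reduced-features model by cross-
      # validated AUC (163 subjects, 105 risk factors, roughly 7% positive).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(163, 105))
      y = np.array([1] * 12 + [0] * 151)

      full = LogisticRegression(max_iter=1000)
      reduced = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression(max_iter=1000))

      print("AUC, all features   :", cross_val_score(full, X, y, cv=5, scoring="roc_auc").mean())
      print("AUC, top-10 features:", cross_val_score(reduced, X, y, cv=5, scoring="roc_auc").mean())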

  3. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care
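
    A hedged illustration of one piece of what such automation targets, namely searching hyper-parameter values automatically instead of by manual iteration; this is a generic scikit-learn sketch, not the Auto-ML software proposed above.

      # Randomised hyper-parameter search over a gradient-boosting classifier.
      import numpy as np
      from scipy.stats import loguniform
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import RandomizedSearchCV

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 20))                                   # placeholder attributes
      y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

      search = RandomizedSearchCV(
          GradientBoostingClassifier(random_state=0),
          param_distributions={
              "learning_rate": loguniform(1e-3, 1.0),
              "n_estimators": [50, 100, 200],
              "max_depth": [2, 3, 4],
          },
          n_iter=20, cv=5, scoring="roc_auc", random_state=0,
      )
      search.fit(X, y)
      print("best AUC:", search.best_score_)
      print("best parameters:", search.best_params_)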

  4. The Effect of Semantic Mapping as a Vocabulary Instruction Technique on EFL Learners with Different Perceptual Learning Styles

    Directory of Open Access Journals (Sweden)

    Esmaeel Abdollahzadeh

    2009-05-01

    Full Text Available Traditional and modern vocabulary instruction techniques have been introduced in the past few decades to improve learners’ performance in reading comprehension. Semantic mapping, which entails drawing learners’ attention to the interrelationships among lexical items through graphic organizers, is claimed to enhance vocabulary learning significantly. However, whether this technique suits all types of learners has not been adequately investigated. This study examines the effectiveness of employing semantic mapping versus traditional approaches in vocabulary instruction to EFL learners with different perceptual modalities. A modified version of Reid’s (1987) perceptual learning style questionnaire was used to determine the learners’ modality types. The results indicate that semantic mapping, in comparison to the traditional approaches, significantly enhances the vocabulary learning of EFL learners. However, although visual learners slightly outperformed other types of learners on the post-test, no significant differences were observed among intermediate learners with different perceptual modalities employing semantic mapping for vocabulary practice.

  5. Current breathomics-a review on data pre-processing techniques and machine learning in metabolomics breath analysis

    DEFF Research Database (Denmark)

    Smolinska, A.; Hauschild, A. C.; Fijten, R. R. R.

    2014-01-01

    been extensively developed. Yet, the application of machine learning methods for fingerprinting VOC profiles in the breathomics is still in its infancy. Therefore, in this paper, we describe the current state of the art in data pre-processing and multivariate analysis of breathomics data. We start...... different conditions (e.g. disease stage, treatment). Independently of the utilized analytical method, the most important question, 'which VOCs are discriminatory?', remains the same. Answers can be given by several modern machine learning techniques (multivariate statistics) and, therefore, are the focus...

  6. Downscaling Coarse Scale Microwave Soil Moisture Product using Machine Learning

    Science.gov (United States)

    Abbaszadeh, P.; Moradkhani, H.; Yan, H.

    2016-12-01

    Soil moisture (SM) is a key variable in partitioning and examining the global water-energy cycle, agricultural planning, and water resource management. It is also strongly coupled with climate change, playing an important role in weather forecasting, drought monitoring and prediction, flood modeling and irrigation management. Although satellite retrievals can provide unprecedented information on soil moisture at the global scale, the products might be inadequate for basin-scale studies or regional assessment. To improve the spatial resolution of SM, this work presents a novel approach based on a machine learning (ML) technique that allows for downscaling of satellite soil moisture to a finer resolution. For this purpose, the SMAP L-band radiometer SM products were used and conditioned on the Variable Infiltration Capacity (VIC) model prediction to describe the relationship between coarse- and fine-scale soil moisture data. The proposed downscaling approach was applied to a western US basin and the products were compared against the available SM data from in-situ gauge stations. The results indicated the great potential of the machine learning technique to derive fine-resolution soil moisture information, which is currently used for land data assimilation applications.
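
    A hedged sketch of a regression-based downscaling step of this kind: learn a mapping from the coarse-scale soil moisture plus fine-scale predictors (the role a land-surface model such as VIC can play) to fine-scale soil moisture. The variables, predictors and data are synthetic placeholders, not the SMAP/VIC setup.

      # Learn fine-scale soil moisture from the coarse value plus fine-scale predictors.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000
      coarse_sm = rng.uniform(0.05, 0.45, size=n)      # coarse (satellite-like) pixel value
      predictors = rng.normal(size=(n, 4))             # e.g. elevation, NDVI, texture, model SM
      X = np.column_stack([coarse_sm, predictors])
      fine_sm = coarse_sm + 0.05 * predictors[:, 3] + 0.01 * rng.normal(size=n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, fine_sm, test_size=0.2, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("RMSE:", np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))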

  7. Developing an instrument to measure emotional behaviour abilities of meaningful learning through the Delphi technique.

    Science.gov (United States)

    Cadorin, Lucia; Bagnasco, Annamaria; Tolotti, Angela; Pagnucci, Nicola; Sasso, Loredana

    2017-09-01

    To identify items for a new instrument that measures emotional behaviour abilities of meaningful learning, according to Fink's Taxonomy. Meaningful learning is an active process that promotes a wider and deeper understanding of concepts. It is the result of an interaction between new and previous knowledge and produces a long-term change of knowledge and skills. To measure meaningful learning capability, it is very important in the education of health professionals to identify problems or special learning needs. For this reason, it is necessary to create valid instruments. A Delphi Study technique was implemented in four phases by means of e-mail. The study was conducted from April-September 2015. An expert panel consisting of ten researchers with experience in Fink's Taxonomy was established to identify the items of the instrument. Data were analysed for conceptual description and item characteristics and attributes were rated. Expert consensus was sought in each of these phases. An 87·5% consensus cut-off was established. After four rounds, consensus was obtained for validation of the content of the instrument 'Assessment of Meaningful learning Behavioural and Emotional Abilities'. This instrument consists of 56 items evaluated on a 6-point Likert-type scale. Foundational Knowledge, Application, Integration, Human Dimension, Caring and Learning How to Learn were the six major categories explored. This content validated tool can help educators (teachers, trainers and tutors) to identify and improve the strategies to support students' learning capability, which could increase their awareness of and/or responsibility in the learning process. © 2017 John Wiley & Sons Ltd.

  8. Critique: Can Children with AD/HD Learn Relaxation and Breathing Techniques through Biofeedback Video Games?

    Science.gov (United States)

    Wright, Craig; Conlon, Elizabeth

    2009-01-01

    This article presents a critique on K. Amon and A. Campbell's "Can children with AD/HD learn relaxation and breathing techniques through biofeedback video games?". Amon and Campbell reported a successful trial of a commercially available biofeedback program, "The Wild Divine", in reducing symptoms of Attention-Deficit/Hyperactivity Disorder (ADHD)…

  9. The enhanced callose deposition in barley with ml-o powdery mildew resistance genes

    DEFF Research Database (Denmark)

    Skou, Jens-Peder

    1985-01-01

    Carborundum treatment of barley leaves induced a callose deposition which was detected as diffuse blotches in the epidermal cells of susceptible barleys and as deeply stained tracks along the scratches in barleys with the ml-o powdery mildew resistance gene. Subsequent inoculation with powdery...... mildew resulted in appositions that enlarged inversely to their size in the respective varieties when inoculated without carborundum treatment. Aphids sucking the leaves resulted in rows of callose containing spots along the anticlinal cell walls. The spots were larger in the ml-o mutant than...... in the mother variety. Callose was deposited in connection with the pleiotropic necrotic spotting in barleys with the ml-o gene. Modification of the necrotic spotting by crossing the ml-o gene into other gene backgrounds did not result in any change in the size of appositions upon inoculation with powdery...

  10. Integrating SQ4R Technique with Graphic Postorganizers in the Science Learning of Earth and Space

    OpenAIRE

    Djudin, Tomo; Amir, R

    2018-01-01

    This study examined the effect of integrating the SQ4R reading technique with graphic postorganizers on students' Earth and Space Science learning achievement and the development of their metacognitive knowledge. A pretest-posttest non-equivalent control group design was employed in this quasi-experimental study. The sample, consisting of 103 seventh-grade students of the secondary school SMPN 1 Pontianak, was drawn using an intact-group random sampling technique. An achievement test and a questio...

  11. Tracking Active Learning in the Medical School Curriculum: A Learning-Centered Approach

    Science.gov (United States)

    McCoy, Lise; Pettit, Robin K; Kellar, Charlyn; Morgan, Christine

    2018-01-01

    Background: Medical education is moving toward active learning during large group lecture sessions. This study investigated the saturation and breadth of active learning techniques implemented in first year medical school large group sessions. Methods: Data collection involved retrospective curriculum review and semistructured interviews with 20 faculty. The authors piloted a taxonomy of active learning techniques and mapped learning techniques to attributes of learning-centered instruction. Results: Faculty implemented 25 different active learning techniques over the course of 9 first year courses. Of 646 hours of large group instruction, 476 (74%) involved at least 1 active learning component. Conclusions: The frequency and variety of active learning components integrated throughout the year 1 curriculum reflect faculty familiarity with active learning methods and their support of an active learning culture. This project has sparked reflection on teaching practices and facilitated an evolution from teacher-centered to learning-centered instruction. PMID:29707649

  12. The additional benefit of the ML Flow test to classify leprosy patients.

    Science.gov (United States)

    Bührer-Sékula, Samira; Illarramendi, Ximena; Teles, Rose B; Penna, Maria Lucia F; Nery, José Augusto C; Sales, Anna Maria; Oskam, Linda; Sampaio, Elizabeth P; Sarno, Euzenir N

    2009-08-01

    The use of the skin lesion counting classification leads to both under- and over-diagnosis of leprosy in many instances. Thus, there is a need to complement this classification with another simple and robust test for use in the field. Data from 202 untreated leprosy patients diagnosed at FIOCRUZ, Rio de Janeiro, Brazil, were analyzed. There were 90 patients classified as PB and 112 classified as MB according to the reference standard. The BI was positive in 111 (55%) patients and the ML Flow test in 116 (57.4%) patients. The ML Flow test was positive in 95 (86%) of the patients with a positive BI. The lesion counting classification was confirmed by both BI and ML Flow tests in 65% of the 92 patients with 5 or fewer lesions, and in 76% of the 110 patients with 6 or more lesions. The combination of skin lesion counting and the ML Flow test results yielded a sensitivity of 85% and a specificity of 87% for MB classification, and correctly classified 86% of the patients when compared to the standard reference. A considerable proportion of the patients (43.5%) with discordant test results in relation to the standard classification were in reaction. The use of any classification system has limitations, especially those that oversimplify a complex disease such as leprosy. In the absence of an experienced dermatologist and slit skin smear, the ML Flow test could be used to improve treatment decisions in field conditions.

  13. The learning curve of the three-port two-instrument complete thoracoscopic lobectomy for lung cancer—A feasible technique worthy of popularization

    Directory of Open Access Journals (Sweden)

    Yu-Jen Cheng

    2015-07-01

    Conclusion: Three-port complete thoracoscopic lobectomy with the two-instrument technique is feasible for lung cancer treatment. The learning curve comprised 28 cases. This TPTI technique should be popularized.

  14. Machine learning for healthcare technologies

    CERN Document Server

    Clifton, David A

    2016-01-01

    This book brings together chapters on the state-of-the-art in machine learning (ML) as it applies to the development of patient-centred technologies, with a special emphasis on 'big data' and mobile data.

  15. Machine Learning and Neurosurgical Outcome Prediction: A Systematic Review.

    Science.gov (United States)

    Senders, Joeky T; Staples, Patrick C; Karhade, Aditya V; Zaki, Mark M; Gormley, William B; Broekman, Marike L D; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Accurate measurement of surgical outcomes is highly desirable to optimize surgical decision-making. An important element of surgical decision making is identification of the patient cohort that will benefit from surgery before the intervention. Machine learning (ML) enables computers to learn from previous data to make accurate predictions on new data. In this systematic review, we evaluate the potential of ML for neurosurgical outcome prediction. A systematic search in the PubMed and Embase databases was performed to identify all potentially relevant studies up to January 1, 2017. Thirty studies were identified that evaluated ML algorithms used as prediction models for survival, recurrence, symptom improvement, and adverse events in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus. Depending on the specific prediction task evaluated and the type of input features included, ML models predicted outcomes after neurosurgery with a median accuracy and area under the receiver operating characteristic curve of 94.5% and 0.83, respectively. Compared with logistic regression, ML models performed significantly better and showed a median absolute improvement in accuracy and area under the receiver operating characteristic curve of 15% and 0.06, respectively. Some studies also demonstrated a better performance in ML models compared with established prognostic indices and clinical experts. In the research setting, ML has been studied extensively, demonstrating an excellent performance in outcome prediction for a wide range of neurosurgical conditions. However, future studies should investigate how ML can be implemented as a practical tool supporting neurosurgical care. Copyright © 2017 Elsevier Inc. All rights reserved.
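
    The review's headline comparison, ML models versus logistic regression on accuracy and area under the ROC curve, can be outlined with scikit-learn. The sketch below runs on synthetic tabular data and uses a gradient-boosted model as a generic stand-in for the reviewed algorithms; it is illustrative only, not the review's pooled analysis.

```python
# Sketch: compare a gradient-boosted model with logistic regression on the same
# (synthetic) tabular outcome-prediction task, reporting accuracy and AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]          # predicted risk scores
    print(name,
          "accuracy=%.3f" % accuracy_score(y_te, model.predict(X_te)),
          "AUC=%.3f" % roc_auc_score(y_te, proba))
```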

  16. Desempenho de idosos brasileiros no teste de deglutição de 100 ml de água Performance of Brazilian elderly on the 100 ml water swallowing test

    Directory of Open Access Journals (Sweden)

    Graziela Maria Martins Moreira

    2012-03-01

    Full Text Available PURPOSE: To compare the performance of Brazilian elderly patients living in a long-term care facility on the 100 ml water swallowing test with the results obtained with British elderly. METHODS: Eighteen elderly subjects (13 women and five men, mean age 83.46 years), residents in a long-term care facility and considered normal regarding the swallowing function, were selected to take part in this study. As in the British study, they were laterally observed by the examiner while swallowing 100 ml of water from a plastic cup. The examiner recorded the number of sips, the time taken, and complications during the test, which generated the following indices: volume per swallow (ml), time per swallow (s), and swallowing capacity (ml/s). RESULTS: The elderly men had lower swallowing capacity than the women, diverging from the original study. The mean time per swallow and the mean volume per swallow were similar for both genders. CONCLUSION: The swallowing capacity of the elderly is lower than that of normal adults, indicating slowed swallowing. The gender difference found in the original study was not reproduced; however, our sample was older.
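
    The three indices used in this test are simple ratios of the observed volume, time, and number of swallows. A small sketch of the arithmetic follows; the example values are invented, not taken from the study.

```python
# Indices from the 100 ml water swallowing test. Example values are illustrative only.
volume_ml = 100.0   # total volume swallowed
n_swallows = 8      # number of swallows observed
time_s = 12.5       # total time taken (seconds)

volume_per_swallow = volume_ml / n_swallows      # ml per swallow
time_per_swallow = time_s / n_swallows           # s per swallow
swallowing_capacity = volume_ml / time_s         # ml/s

print(f"{volume_per_swallow:.1f} ml/swallow, "
      f"{time_per_swallow:.1f} s/swallow, "
      f"{swallowing_capacity:.1f} ml/s")
```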

  17. The Bonding of P4 to d8-ML3 Complexes

    OpenAIRE

    Kang, Sung-Kwon; Albright, Thomas A.; Silvestre, Jerome

    1985-01-01

    Extended Hückel calculations were carried out on η1, η2, and η3 complexes of P4 with Rh(PH3)2Cl. The η1 square-planar complex and an η2 complex with C2v symmetry are the most stable. Geometrical optimizations and a detailed account of the bonding in each have been carried out. d10 η1-tetrahedral complexes of P4 are expected to be quite stable. The best candidate for an η3 mode of bonding is the trimer Fe3(CO)9. Alternative complexes at η3 include a d6-ML3 and d4-ML...

  18. Particle identification at LHCb: new calibration techniques and machine learning classification algorithms

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Particle identification (PID) plays a crucial role in LHCb analyses. Combining information from LHCb subdetectors allows one to distinguish between various species of long-lived charged and neutral particles. PID performance directly affects the sensitivity of most LHCb measurements. Advanced multivariate approaches are used at LHCb to obtain the best PID performance and control systematic uncertainties. This talk highlights recent developments in PID that use innovative machine learning techniques, as well as novel data-driven approaches which ensure that PID performance is well reproduced in simulation.

  19. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Directory of Open Access Journals (Sweden)

    Gaetano Luglio

    2015-06-01

    Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon.

  20. Simple ML Detector for Multiple Antennas Communication System

    Directory of Open Access Journals (Sweden)

    Ahmad Taqwa

    2010-10-01

    Full Text Available In order to provide broadband wireless communication services over limited and expensive frequency bandwidth, a bandwidth-efficient system must be developed. In this paper we therefore propose a closed-loop MIMO (Multiple-Input-Multiple-Output) system using an ML (Maximum Likelihood) detector to optimize capacity and increase system performance. What is especially attractive about MIMO is that high capacity and performance can be attained without additional frequency-spectral resources. The key idea is to use transformation matrices that allocate transmitted signal power to suit the channel; the product of these matrices forms parallel singular channels. Because these channels are uncorrelated, an ML detector can be designed to increase system performance. Finally, computer simulations validate that at 0 dB SNR our system achieves a capacity up to 1 bps/Hz higher and an SER up to 0.2 better than open-loop MIMO.
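
    For a small antenna count, the maximum-likelihood detector can be written as a brute-force search over all candidate transmit vectors for the one minimizing the Euclidean distance to the received vector. The sketch below does exactly that for a 2x2 channel with QPSK symbols; it illustrates the ML rule only and does not reproduce the paper's closed-loop precoding.

```python
# Brute-force maximum-likelihood (ML) detection for a 2x2 MIMO link with QPSK.
# Illustrative only: the paper's closed-loop SVD precoding is not reproduced here.
import itertools
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=2)]                  # transmitted symbol vector
noise = 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + noise                                     # received vector

# ML rule: argmin over all candidate vectors s of ||y - H s||^2
candidates = [np.array(s) for s in itertools.product(qpsk, repeat=2)]
x_hat = min(candidates, key=lambda s: np.linalg.norm(y - H @ s) ** 2)

print("transmitted:", x)
print("detected:   ", x_hat)
```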

  1. The simulated early learning of cervical spine manipulation technique utilising mannequins.

    Science.gov (United States)

    Chapman, Peter D; Stomski, Norman J; Losco, Barrett; Walker, Bruce F

    2015-01-01

    Trivial pain or minor soreness commonly follows neck manipulation and has been estimated at one in three treatments. In addition, rare catastrophic events can occur. Some of these incidents have been ascribed to poor technique where the neck is rotated too far. The aims of this study were to design an instrument to measure competency of neck manipulation in beginning students when using a simulation mannequin, and then examine the suitability of using a simulation mannequin to teach the early psychomotor skills for neck chiropractic manipulative therapy. We developed an initial set of questionnaire items and then used an expert panel to assess an instrument for neck manipulation competency among chiropractic students. The study sample comprised all 41 fourth year 2014 chiropractic students at Murdoch University. Students were randomly allocated into either a usual learning or mannequin group. All participants crossed over to undertake the alternative learning method after four weeks. A chi-square test was used to examine differences between groups in the proportion of students achieving an overall pass mark at baseline, four weeks, and eight weeks. This study was conducted between January and March 2014. We successfully developed an instrument of measurement to assess neck manipulation competency in chiropractic students. We then randomised 41 participants to first undertake either "usual learning" (n = 19) or "mannequin learning" (n = 22) for early neck manipulation training. There were no significant differences between groups in the overall pass rate at baseline (χ(2) = 0.10, p = 0.75), four weeks (χ(2) = 0.40, p = 0.53), and eight weeks (χ(2) = 0.07, p = 0.79). This study demonstrates that the use of a mannequin does not affect the manipulation competency grades of early learning students at short term follow up. Our findings have potentially important safety implications as the results indicate that students could initially

  2. Phishtest: Measuring the Impact of Email Headers on the Predictive Accuracy of Machine Learning Techniques

    Science.gov (United States)

    Tout, Hicham

    2013-01-01

    The majority of documented phishing attacks have been carried by email, yet few studies have measured the impact of email headers on the predictive accuracy of machine learning techniques in detecting email phishing attacks. Research has shown that the inclusion of a limited subset of email headers as features in training machine learning…

  3. A data-driven predictive approach for drug delivery using machine learning techniques.

    Directory of Open Access Journals (Sweden)

    Yuanyuan Li

    Full Text Available In drug delivery, there is often a trade-off between effective killing of the pathogen, and harmful side effects associated with the treatment. Due to the difficulty in testing every dosing scenario experimentally, a computational approach will be helpful to assist with the prediction of effective drug delivery methods. In this paper, we have developed a data-driven predictive system, using machine learning techniques, to determine, in silico, the effectiveness of drug dosing. The system framework is scalable, autonomous, robust, and has the ability to predict the effectiveness of the current drug treatment and the subsequent drug-pathogen dynamics. The system consists of a dynamic model incorporating both the drug concentration and pathogen population into distinct states. These states are then analyzed using a temporal model to describe the drug-cell interactions over time. The dynamic drug-cell interactions are learned in an adaptive fashion and used to make sequential predictions on the effectiveness of the dosing strategy. Incorporated into the system is the ability to adjust the sensitivity and specificity of the learned models based on a threshold level determined by the operator for the specific application. As a proof-of-concept, the system was validated experimentally using the pathogen Giardia lamblia and the drug metronidazole in vitro.

  4. SED-ED, a workflow editor for computational biology experiments written in SED-ML.

    Science.gov (United States)

    Adams, Richard R

    2012-04-15

    The simulation experiment description markup language (SED-ML) is a new community data standard to encode computational biology experiments in a computer-readable XML format. Its widespread adoption will require the development of software support to work with SED-ML files. Here, we describe a software tool, SED-ED, to view, edit, validate and annotate SED-ML documents while shielding end-users from the underlying XML representation. SED-ED supports modellers who wish to create, understand and further develop a simulation description provided in SED-ML format. SED-ED is available as a standalone Java application, as an Eclipse plug-in and as an SBSI (www.sbsi.ed.ac.uk) plug-in, all under an MIT open-source license. Source code is at https://sed-ed-sedmleditor.googlecode.com/svn. The application itself is available from https://sourceforge.net/projects/jlibsedml/files/SED-ED/.

  5. Comparison of Machine Learning Techniques for the Prediction of Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Palika Chopra

    2018-01-01

    Full Text Available A comparative analysis for the prediction of the compressive strength of concrete at the ages of 28, 56, and 91 days has been carried out using machine learning techniques via the "R" software environment. R is gaining a strong foothold in the statistical realm and is becoming an indispensable tool for researchers. The dataset has been generated under controlled laboratory conditions. Using R miner, the most widely used data mining techniques, the decision tree (DT) model, random forest (RF) model, and neural network (NN) model, have been used and compared with the help of the coefficient of determination (R2) and root-mean-square error (RMSE), and it is inferred that the NN model predicts the compressive strength of concrete with high accuracy.
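
    The comparison in the abstract was run in R, but the same workflow, fitting decision-tree, random-forest, and neural-network regressors and comparing R2 and RMSE, can be sketched in Python with scikit-learn. Synthetic data stand in for the laboratory dataset, so the numbers themselves are meaningless; the structure of the comparison is the point.

```python
# Sketch: compare DT, RF and NN regressors by R^2 and RMSE on synthetic data
# standing in for the concrete compressive-strength dataset.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = {
    "decision tree": DecisionTreeRegressor(random_state=1),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "neural network": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                   random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.2f}")
```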

  6. Learning the "Look-at-you-go" Moment in Corporate Governance Negotiation Techniques

    Directory of Open Access Journals (Sweden)

    Clara VOLINTIRU

    2015-06-01

    Full Text Available This article explores in an interdisciplinary manner the way concepts are learned or internalized, depending on the varying means of transmission, as well as on the sequencing in which the information is transmitted. In this sense, we build on the constructivist methodology framework in assessing concept acquisition in academic disciplines at an advanced level. We also present the evolution of certain negotiation techniques, from traditional settings to less predictable ones. This assessment is compared to a specific Pop Culture case study in which we find an expressive representation of negotiation techniques. Our methodology employs both focus groups and experimental design to test the relative positioning of theoretical concept acquisition (TCA) as opposed to expressive concept acquisition (ECA). Our findings suggest that while expressive concept acquisition (ECA) via popular culture representations enhances students' understanding of negotiation techniques, this can only happen in circumstances in which theoretical concept acquisition (TCA) is pre-existent.

  7. GeoSciML v3.0 - a significant upgrade of the CGI-IUGS geoscience data model

    Science.gov (United States)

    Raymond, O.; Duclaux, G.; Boisvert, E.; Cipolloni, C.; Cox, S.; Laxton, J.; Letourneau, F.; Richard, S.; Ritchie, A.; Sen, M.; Serrano, J.-J.; Simons, B.; Vuollo, J.

    2012-04-01

    GeoSciML version 3.0 (http://www.geosciml.org), released in late 2011, is the latest version of the CGI-IUGS* Interoperability Working Group geoscience data interchange standard. The new version is a significant upgrade and refactoring of GeoSciML v2 which was released in 2008. GeoSciML v3 has already been adopted by several major international interoperability initiatives, including OneGeology, the EU INSPIRE program, and the US Geoscience Information Network, as their standard data exchange format for geoscience data. GeoSciML v3 makes use of recently upgraded versions of several Open Geospatial Consortium (OGC) and ISO data transfer standards, including GML v3.2, SWE Common v2.0, and Observations and Measurements v2 (ISO 19156). The GeoSciML v3 data model has been refactored from a single large application schema with many packages, into a number of smaller, but related, application schema modules with individual namespaces. This refactoring allows the use and future development of modules of GeoSciML (eg; GeologicUnit, GeologicStructure, GeologicAge, Borehole) in smaller, more manageable units. As a result of this refactoring and the integration with new OGC and ISO standards, GeoSciML v3 is not backwardly compatible with previous GeoSciML versions. The scope of GeoSciML has been extended in version 3.0 to include new models for geomorphological data (a Geomorphology application schema), and for geological specimens, geochronological interpretations, and metadata for geochemical and geochronological analyses (a LaboratoryAnalysis-Specimen application schema). In addition, there is better support for borehole data, and the PhysicalProperties model now supports a wider range of petrophysical measurements. The previously used CGI_Value data type has been superseded in favour of externally governed data types provided by OGC's SWE Common v2 and GML v3.2 data standards. The GeoSciML v3 release includes worked examples of best practice in delivering geochemical

  8. Exploring Machine Learning Techniques Using Patient Interactions in Online Health Forums to Classify Drug Safety

    Science.gov (United States)

    Chee, Brant Wah Kwong

    2011-01-01

    This dissertation explores the use of personal health messages collected from online message forums to predict drug safety using natural language processing and machine learning techniques. Drug safety is defined as any drug with an active safety alert from the US Food and Drug Administration (FDA). It is believed that this is the first…

  9. Reinforcement learning techniques for controlling resources in power networks

    Science.gov (United States)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.

  10. Research On C4.5 As One Of The Inductive Learning Techniques

    OpenAIRE

    Yıldırım, Savaş

    2003-01-01

    This thesis deals with C4.5 (Decision Tree Construction Algorithm), one of the most significant techniques of machine learning, and how it differs from its older version, ID3. With this aim in mind, not only the approaches provided by C4.5 but also other approaches are examined. Decision tree algorithms are useful in a variety of spheres, from defense to medicine or economics, and bear vital importance for decision support systems in these areas. Written by Quinlan in 1993 in C p...
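
    The quantity that distinguishes C4.5 from ID3 is the gain ratio: information gain normalized by the intrinsic information of the split, which penalizes attributes with many values. A minimal sketch of that computation on a toy attribute follows; the labels and partition are illustrative only.

```python
# Information gain vs. gain ratio (the C4.5 splitting criterion) for a toy split.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, partition):
    """partition: list of label lists, one per attribute value."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partition)
    gain = entropy(labels) - remainder                        # ID3 criterion
    split_info = -sum((len(p) / n) * log2(len(p) / n) for p in partition)
    return gain / split_info if split_info > 0 else 0.0       # C4.5 criterion

labels = ["yes"] * 9 + ["no"] * 5
partition = [["yes"] * 2 + ["no"] * 3, ["yes"] * 4, ["yes"] * 3 + ["no"] * 2]
print("gain ratio:", round(gain_ratio(labels, partition), 3))
```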

  11. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented to the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walk. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)
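
    Among the biasing techniques listed, Russian roulette and splitting are the simplest to illustrate: low-weight histories are killed with a survival probability chosen so that the expected weight is unchanged, and high-weight histories are split into several lower-weight copies. The sketch below shows only this weight bookkeeping; the thresholds are arbitrary and it does not reflect MORSE's actual implementation or the self-learning optimization.

```python
# Weight-window Russian roulette and splitting: the bookkeeping only.
# Thresholds and weights are illustrative, unrelated to MORSE's actual scheme.
import random

W_LOW, W_HIGH, W_SURVIVE = 0.2, 2.0, 1.0

def roulette_or_split(weight):
    """Return the list of surviving particle weights for one history."""
    if weight < W_LOW:                       # Russian roulette
        if random.random() < weight / W_SURVIVE:
            return [W_SURVIVE]               # survives with boosted weight
        return []                            # killed; expected weight preserved
    if weight > W_HIGH:                      # splitting
        n = int(weight // W_SURVIVE)
        return [weight / n] * n              # n copies carrying the same total weight
    return [weight]                          # inside the window: unchanged

random.seed(0)
for w in (0.05, 0.5, 3.7):
    print(w, "->", roulette_or_split(w))
```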

  12. Classification of Cytochrome P450 1A2 Inhibitors and Non-Inhibitors by Machine Learning Techniques

    DEFF Research Database (Denmark)

    Vasanthanathan, Poongavanam; Taboureau, Olivier; Oostenbrink, Chris

    2009-01-01

    of CYP1A2 inhibitors and non-inhibitors. Training and test sets consisted of about 400 and 7000 compounds, respectively. Various machine learning techniques, like binary QSAR, support vector machine (SVM), random forest, k-nearest neighbors (kNN), and decision tree methods were used to develop

  13. Smart Training, Smart Learning: The Role of Cooperative Learning in Training for Youth Services.

    Science.gov (United States)

    Doll, Carol A.

    1997-01-01

    Examines cooperative learning in youth services and adult education. Discusses characteristics of cooperative learning techniques; specific cooperative learning techniques (brainstorming, mini-lecture, roundtable technique, send-a-problem problem solving, talking chips technique, and three-step interview); and the role of the trainer. (AEF)

  14. A Cultural Psychological Approach to Analyze Intercultural Learning: Potential and Limits of the Structure Formation Technique

    Directory of Open Access Journals (Sweden)

    Doris Weidemann

    2009-01-01

    Full Text Available Despite the huge interest in sojourner adjustment, there is still a lack of qualitative as well as of longitudinal research that would offer more detailed insights into intercultural learning processes during overseas stays. The present study aims to partly fill that gap by documenting changes in knowledge structures and general living experiences of fifteen German sojourners in Taiwan in a longitudinal, cultural-psychological study. As part of a multimethod design a structure formation technique was used to document subjective theories on giving/losing face and their changes over time. In a second step results from this study are compared to knowledge-structures of seven long-term German residents in Taiwan, and implications for the conceptualization of intercultural learning will be proposed. Finally, results from both studies serve to discuss the potential and limits of structure formation techniques in the field of intercultural communication research. URN: urn:nbn:de:0114-fqs0901435

  15. Machine learning techniques in searches for t t-bar h in the h  →  b b-bar decay channel

    International Nuclear Information System (INIS)

    Santos, R.; Nguyen, M.; Zhou, J.; Webster, J.; Ryu, S.; Chekanov, S.; Adelman, J.

    2017-01-01

    Study of the production of pairs of top quarks in association with a Higgs boson is one of the primary goals of the Large Hadron Collider over the next decade, as measurements of this process may help us to understand whether the uniquely large mass of the top quark plays a special role in electroweak symmetry breaking. Higgs bosons decay predominantly to b b-bar , yielding signatures for the signal that are similar to t t-bar  + jets with heavy flavor. Though particularly challenging to study due to the similar kinematics between signal and background events, such final states ( t t-bar   b b-bar ) are an important channel for studying the top quark Yukawa coupling. This paper presents a systematic study of machine learning (ML) methods for detecting t t-bar h in the h  →  b b-bar decay channel. Among the eight ML methods tested, we show that two models, extreme gradient boosted trees and neural network models, outperform alternative methods. We further study the effectiveness of ML algorithms by investigating the impact of feature set and data size, as well as the structure of the models. While extended feature set and larger training sets expectedly lead to improvement of performance, shallow models deliver comparable or better performance than their deeper counterparts. Our study suggests that ensembles of trees and neurons, not necessarily deep, work effectively for the problem of t t-bar h detection.

  16. Locomotion training of legged robots using hybrid machine learning techniques

    Science.gov (United States)

    Simon, William E.; Doerschuk, Peggy I.; Zhang, Wen-Ran; Li, Andrew L.

    1995-01-01

    In this study, artificial neural networks and fuzzy logic are used to control the jumping behavior of a three-link uniped robot. The biped locomotion control problem is an increment of the uniped locomotion control. Study of legged locomotion dynamics indicates that a hierarchical controller is required to control the behavior of a legged robot. A structured control strategy is suggested which includes navigator, motion planner, biped coordinator and uniped controllers. A three-link uniped robot simulation is developed to be used as the plant. Neurocontrollers were trained both online and offline. In the case of on-line training, a reinforcement learning technique was used to train the neurocontroller to make the robot jump to a specified height. After several hundred iterations of training, the plant output achieved an accuracy of 7.4%. However, when jump distance and body angular momentum were also included in the control objectives, training time became impractically long. In the case of off-line training, a three-layered backpropagation (BP) network was first used with three inputs, three outputs and 15 to 40 hidden nodes. Pre-generated data were presented to the network with a learning rate as low as 0.003 in order to reach convergence. The low learning rate required for convergence resulted in a very slow training process which took weeks to learn 460 examples. After training, performance of the neurocontroller was rather poor. Consequently, the BP network was replaced by a Cerebellar Model Articulation Controller (CMAC) network. Subsequent experiments described in this document show that the CMAC network is more suitable to the solution of uniped locomotion control problems in terms of both learning efficiency and performance. A new approach is introduced in this report, viz., a self-organizing multiagent cerebellar model for fuzzy-neural control of uniped locomotion is suggested to improve training efficiency. This is currently being evaluated for a possible

  17. e-Learning readiness amongst nursing students at the Durban ...

    African Journals Online (AJOL)

    Marilynne Coopasami

    c Centre for Excellence in Learning and Teaching, ML Sultan Campus, Durban University of Technology, Durban ... education, technological and equipment readiness require attention before it can be ... strategy; consider the benefits and disadvantages of e- ... using an appropriate tool to measure e-Learning readiness has.

  18. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    Science.gov (United States)

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents an ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels than the method without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.

  19. Investigating the Effects of Group Investigation (GI) and Cooperative Integrated Reading and Composition (CIRC) as the Cooperative Learning Techniques on Learner's Reading Comprehension

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Karafkan

    2015-11-01

    Full Text Available Cooperative learning consists of some techniques for helping students work together more effectively. This study investigated the effects of Group Investigation (GI) and Cooperative Integrated Reading and Composition (CIRC) as cooperative learning techniques on Iranian EFL learners' reading comprehension at an intermediate level. The participants of the study were 207 male students who studied at an intermediate level at ILI. The participants were randomly assigned into three equal groups: one control group and two experimental groups. The control group was instructed via a conventional technique following an individualistic instructional approach. One experimental group received the GI technique. The other experimental group received the CIRC technique. The findings showed that there was a meaningful difference between the mean reading comprehension scores of the GI experimental group and the CIRC experimental group. The CIRC technique is more effective than the GI technique in enhancing the reading comprehension test scores of students.

  20. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  1. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  2. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  3. A Severe Weather Laboratory Exercise for an Introductory Weather and Climate Class Using Active Learning Techniques

    Science.gov (United States)

    Grundstein, Andrew; Durkee, Joshua; Frye, John; Andersen, Theresa; Lieberman, Jordan

    2011-01-01

    This paper describes a new severe weather laboratory exercise for an Introductory Weather and Climate class, appropriate for first and second year college students (including nonscience majors), that incorporates inquiry-based learning techniques. In the lab, students play the role of meteorologists making forecasts for severe weather. The…

  4. Evaluating the Impact of Design-Driven Requirements Using SysML

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed research will develop SysML requirements modeling patterns and scripts to automate the evaluation of the impact of design driven requirements....

  5. The WUW ML bundle detector: A flow-through detector for alpha-emitters

    CERN Document Server

    Wenzel, U; Lochny, M

    1999-01-01

    Using conventional laboratory ware, we designed and manufactured a flow-through cell for monitoring alpha-bearing solutions. The cell consists of a bundle of thermoplastic, transparent tubes coated with a thin layer of the meltable scintillator MELTILEX™ on the inner surface. With appropriate energy windows set, the detector can suppress beta particles to a great extent due to its geometrical dimensions. For pure alpha solutions, the detection limit is 5 Bq/ml; for composite nuclide mixtures, the detector is capable of monitoring the decontamination of medium-active waste (<= 10^7 Bq/ml) down to 100 Bq alpha/g of solution. At a throughput of 1 ml/s, the pressure build-up amounts to approximately 2 bar. We have developed a quality control program to ensure the regularity of the individual bundle loops.

  6. Microstructures and Mechanical Properties of Co-Cr Dental Alloys Fabricated by Three CAD/CAM-Based Processing Techniques

    Directory of Open Access Journals (Sweden)

    Hae Ri Kim

    2016-07-01

    Full Text Available The microstructures and mechanical properties of cobalt-chromium (Co-Cr) alloys produced by three CAD/CAM-based processing techniques were investigated in comparison with those produced by the traditional casting technique. Four groups of disc- (microstructures) or dumbbell- (mechanical properties) specimens made of Co-Cr alloys were prepared using casting (CS), milling (ML), selective laser melting (SLM), and milling/post-sintering (ML/PS). For each technique, the corresponding commercial alloy material was used. The microstructures of the specimens were evaluated via X-ray diffractometry, optical and scanning electron microscopy with energy-dispersive X-ray spectroscopy, and electron backscattered diffraction pattern analysis. The mechanical properties were evaluated using a tensile test according to ISO 22674 (n = 6). The microstructure of the alloys was strongly influenced by the manufacturing processes. Overall, the SLM group showed superior mechanical properties, the ML/PS group being nearly comparable. The mechanical properties of the ML group were inferior to those of the CS group. The microstructures and mechanical properties of Co-Cr alloys were greatly dependent on the manufacturing technique as well as the chemical composition. The SLM and ML/PS techniques may be considered promising alternatives to the Co-Cr alloy casting process.

  7. Machine Learning Techniques for Arterial Pressure Waveform Analysis

    Directory of Open Access Journals (Sweden)

    João Cardoso

    2013-05-01

    Full Text Available The Arterial Pressure Waveform (APW) can provide essential information about arterial wall integrity and arterial stiffness. Most APW analysis frameworks individually process each hemodynamic parameter and do not evaluate inter-dependencies in the overall pulse morphology. The key contribution of this work is the use of machine learning algorithms to deal with vectorized features extracted from APW. With this purpose, we follow a five-step evaluation methodology: (1) a custom-designed, non-invasive, electromechanical device was used in the data collection from 50 subjects; (2) the acquired position and amplitude of onset, Systolic Peak (SP), Point of Inflection (Pi) and Dicrotic Wave (DW) were used for the computation of some morphological attributes; (3) pre-processing work on the datasets was performed in order to reduce the number of input features and increase the model accuracy by selecting the most relevant ones; (4) classification of the dataset was carried out using four different machine learning algorithms: Random Forest, BayesNet (probabilistic), J48 (decision tree) and RIPPER (rule-based induction); and (5) we evaluate the trained models, using the majority-voting system, comparatively to the respective calculated Augmentation Index (AIx). Classification algorithms have proved to be efficient; in particular, Random Forest has shown good accuracy (96.95%) and a high area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (0.961). Finally, during validation tests, a correlation between high-risk labels, retrieved from the multi-parametric approach, and positive AIx values was verified. This approach gives allowance for designing new hemodynamic morphology vectors and techniques for multiple APW analysis, thus improving the arterial pulse understanding, especially when compared to traditional single-parameter analysis, where the failure in one parameter measurement component, such as Pi, can jeopardize the whole evaluation.
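
    The majority-voting evaluation described in step (5) can be sketched with scikit-learn's VotingClassifier. The feature matrix here is synthetic and the member models are stand-ins (scikit-learn has no J48 or RIPPER, so a decision tree, a random forest, and Gaussian naive Bayes are used instead), so only the voting mechanics are shown.

```python
# Majority ("hard") voting over several classifiers, as a stand-in for the
# Random Forest / BayesNet / J48 / RIPPER ensemble described in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",                 # majority vote over predicted labels
)
print("mean CV accuracy: %.3f" % cross_val_score(vote, X, y, cv=5).mean())
```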

  8. Analisis Pengawasan Logistik Produk Aqua Ukuran 330ml Pada CV. Dlu'x Resto Samarinda

    OpenAIRE

    Mardiana, Ali Masuhud, H. Mulyadi Syp

    2016-01-01

    The problem addressed in this research is whether the logistics control of 330ml Aqua products at CV. DLux Resto has been optimized. This study aims to determine the inventory level of 330ml Aqua products at CV. Dlu'x Resto in Samarinda. The formulation of the problem is whether the monitoring of the 330ml Aqua product inventory carried out at CV. Dlu'x Resto Samarinda has been performed optimally. The hypothesis in this stud...

  9. Game Art Complete All-in-One; Learn Maya, 3ds Max, ZBrush, and Photoshop Winning Techniques

    CERN Document Server

    Gahan, Andrew

    2008-01-01

    A compilation of key chapters from the top Focal game art books available today - in the areas of Max, Maya, Photoshop, and ZBrush. The chapters provide the CG Artist with an excellent sampling of essential techniques that every 3D artist needs to create stunning game art. Game artists will be able to master the modeling, rendering, rigging, and texturing techniques they need - with advice from Focal's best and brightest authors. Artists can learn hundreds of tips, tricks and shortcuts in Max, Maya, Photoshop, ZBrush - all within the covers of one complete, inspiring reference

  10. Who is that masked educator? Deconstructing the teaching and learning processes of an innovative humanistic simulation technique.

    Science.gov (United States)

    McAllister, Margaret; Searl, Kerry Reid; Davis, Susan

    2013-12-01

    Simulation learning in nursing has long made use of mannequins, standardized actors and role play to allow students opportunity to practice technical body-care skills and interventions. Even though numerous strategies have been developed to mimic or amplify clinical situations, a common problem that is difficult to overcome in even the most well-executed simulation experiences, is that students may realize the setting is artificial and fail to fully engage, remember or apply the learning. Another problem is that students may learn technical competence but remain uncertain about communicating with the person. Since communication capabilities are imperative in human service work, simulation learning that only achieves technical competence in students is not fully effective for the needs of nursing education. Furthermore, while simulation learning is a burgeoning space for innovative practices, it has been criticized for the absence of a basis in theory. It is within this context that an innovative simulation learning experience named "Mask-Ed (KRS simulation)", has been deconstructed and the active learning components examined. Establishing a theoretical basis for creative teaching and learning practices provides an understanding of how, why and when simulation learning has been effective and it may help to distinguish aspects of the experience that could be improved. Three conceptual theoretical fields help explain the power of this simulation technique: Vygotskian sociocultural learning theory, applied theatre and embodiment. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Building machine learning systems with Python

    CERN Document Server

    Richert, Willi

    2013-01-01

    This is a tutorial-driven and practical, but well-grounded book showcasing good Machine Learning practices. There will be an emphasis on using existing technologies instead of showing how to write your own implementations of algorithms. This book is a scenario-based, example-driven tutorial. By the end of the book you will have learnt critical aspects of Machine Learning Python projects and experienced the power of ML-based systems by actually working on them.This book primarily targets Python developers who want to learn about and build Machine Learning into their projects, or who want to pro

  12. Physical Properties of Asteroid (10302) 1989 ML, a Potential Spacecraft Target, from Spitzer Observations

    Science.gov (United States)

    Mueller, Michael; Harris, A. W.

    2006-09-01

    We report on results from recent Spitzer observations of near-Earth asteroid (10302) 1989 ML, which is among the lowest-ranking objects in terms of the specific momentum Δv required to reach it from Earth. It was originally considered as a target for Hayabusa and is now under consideration as a target of the planned ESA mission Don Quijote. Unfortunately, little is known about the physical properties of 1989 ML; in particular, its size and albedo are unknown. It exhibits an X-type reflection spectrum, so depending on its albedo, 1989 ML may be an E, M, or P type asteroid. Provisional results from thermal-infrared observations carried out with Spitzer indicate that the albedo of 1989 ML is compatible with an M- or E-type classification. We will discuss our results and their implications for the physical properties and the rotation period of 1989 ML, and its importance as a potential spacecraft target. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.

  13. Machine learning and statistical techniques : an application to the prediction of insolvency in Spanish non-life insurance companies

    OpenAIRE

    Díaz, Zuleyka; Segovia, María Jesús; Fernández, José

    2005-01-01

    Prediction of insurance companies insolvency has arisen as an important problem in the field of financial research. Most methods applied in the past to tackle this issue are traditional statistical techniques which use financial ratios as explicative variables. However, these variables often do not satisfy statistical assumptions, which complicates the application of the mentioned methods. In this paper, a comparative study of the performance of two non-parametric machine learning techniques ...

  14. Medical students benefit from the use of ultrasound when learning peripheral IV techniques.

    Science.gov (United States)

    Osborn, Scott R; Borhart, Joelle; Antonis, Michael S

    2012-03-06

    Recent studies support high success rates after a short learning period of ultrasound IV technique, and increased patient and provider satisfaction when using ultrasound as an adjunct to peripheral IV placement. No study to date has addressed the efficacy for instructing ultrasound-naive providers. We studied the introduction of ultrasound to the teaching technique of peripheral IV insertion on first- and second-year medical students. This was a prospective, randomized, and controlled trial. A total of 69 medical students were randomly assigned to the control group with a classic, landmark-based approach (n = 36) or the real-time ultrasound-guided group (n = 33). Both groups observed a 20-min tutorial on IV placement using both techniques and then attempted vein cannulation. Students were given a survey to report their results and observations by a 10-cm visual analog scale. The survey response rate was 100%. In the two groups, 73.9% stated that they attempted an IV previously, and 63.7% of students had used an ultrasound machine prior to the study. None had used ultrasound for IV access prior to our study. The average number of attempts at cannulation was 1.42 in either group. There was no difference between the control and ultrasound groups in terms of number of attempts (p = 0.31). In both groups, 66.7% of learners were able to cannulate in one attempt, 21.7% in two attempts, and 11.6% in three attempts. The study group commented that they felt they gained more knowledge from the experience. Medical students feel they learn more when using ultrasound after a 20-min tutorial to place IVs, and cannulation of the vein feels easier. Success rates are comparable between the traditional and ultrasound teaching approaches.

  15. Implementation of a Goal-Based Systems Engineering Process Using the Systems Modeling Language (SysML)

    Science.gov (United States)

    Breckenridge, Jonathan T.; Johnson, Stephen B.

    2013-01-01

    Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFTbased systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.

  16. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    Science.gov (United States)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between crude palm oil (CPO) price, selected vegetable oil prices (such as soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), crude oil and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
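
    The error measures used in the comparison (RMSE, MAE, and MAPE) and a lag-based SVR forecast can be sketched as follows. The price series here is synthetic; the actual CPO data, the exogenous price series, and the study's SMO-specific settings are not reproduced.

```python
# Sketch: one-step-ahead forecasting of a (synthetic) monthly price series with
# support vector regression on lagged values, scored by RMSE, MAE and MAPE.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = 1000 + np.cumsum(rng.normal(2, 10, size=240))   # synthetic monthly series

lags = 6
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]                                         # value one month ahead
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((y_te - pred) ** 2))
mae = np.mean(np.abs(y_te - pred))
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  MAPE={mape:.2f}%")
```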

  17. Use of the Learning together technique associated to the theory of significative learning

    Directory of Open Access Journals (Sweden)

    Ester López Donoso

    2008-09-01

    Full Text Available This article deals with experimental research, with a qualitative and quantitative design, applied to a group of students of a General Physics course during the first semester of an Engineering degree. Historically, students of this course present learning difficulties that directly affect their performance, conceptualization, and permanence in the university. The present methodology integrates the collaborative learning technique known as "Learning Together" with the theory of significant learning to address these difficulties. The results of this research show that the proposed methodology works properly, especially in improving conceptualization.

  18. A framework for understanding outcomes of mutual learning situations in IT projects

    DEFF Research Database (Denmark)

    Hansen, Magnus Rotvit Perlt

    2012-01-01

    How do we analyse and understand design decisions derived from mutual learning (ML) situations and how may practitioners take advantage of these in IT projects? In the following we present a framework of design decisions inferred from ML situations that occurred between end-users and stakeholders...

  19. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review.

    Science.gov (United States)

    Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna

    2013-06-01

    Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well with an accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
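
    A pool-based active learning loop of the kind evaluated above can be sketched as follows: train an SVM on a small labeled seed, then repeatedly query the pool instances the model is least certain about. The data are synthetic stand-ins for the annotated sentences, and the uncertainty criterion (distance to the decision boundary) is a common choice rather than necessarily the one used in the paper.

```python
# Pool-based active learning with an SVM: repeatedly "annotate" the pool examples
# closest to the decision boundary. Synthetic data stand in for AZ-labeled sentences.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           random_state=0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                               # 20 + 10*48 = 500 labels in total
    clf = LinearSVC(max_iter=5000).fit(X[labeled], y[labeled])
    margins = np.abs(clf.decision_function(X[pool]))         # distance to boundary
    query = [pool[i] for i in np.argsort(margins)[:48]]      # most uncertain items
    labeled += query                              # add the newly labeled examples
    pool = [i for i in pool if i not in set(query)]

clf = LinearSVC(max_iter=5000).fit(X[labeled], y[labeled])
print("labeled:", len(labeled),
      " accuracy on remaining pool:", round(clf.score(X[pool], y[pool]), 3))
```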

  20. Mobile Robot Navigation Based on Q-Learning Technique

    Directory of Open Access Journals (Sweden)

    Lazhar Khriji

    2011-03-01

    Full Text Available This paper shows how the Q-learning approach can be used successfully to deal with the problem of mobile robot navigation. In real situations where a large number of obstacles are involved, the normal Q-learning approach would encounter two major problems due to the excessively large state space. First, learning the Q-values in tabular form may be infeasible because of the excessive amount of memory needed to store the table. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. In this paper, we propose a navigation approach for mobile robots in which prior knowledge is used within Q-learning. We address the issue of individual behavior design using fuzzy logic. The strategy of behavior-based navigation reduces the complexity of the navigation problem by dividing it into smaller actions that are easier to design and implement. The Q-learning algorithm is applied to coordinate between these behaviors, which greatly reduces learning convergence times. Simulation and experimental results confirm convergence to the desired results in terms of saved time and computational resources.
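
    At the core of this approach is the tabular Q-learning update Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. A minimal sketch on a toy one-dimensional corridor follows; it is not the paper's fuzzy-behavior robot setup.

```python
# Minimal tabular Q-learning on a 1-D corridor: start at cell 0, goal at cell 5.
# Illustrative only; the paper combines Q-learning with fuzzy-logic behaviors.
import random

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                       # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon \
            else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("greedy policy:", ["L" if Q[s][0] > Q[s][1] else "R" for s in range(GOAL)])
```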

  1. Architecting the Human Space Flight Program with Systems Modeling Language (SysML)

    Science.gov (United States)

    Jackson, Maddalena M.; Fernandez, Michela Munoz; McVittie, Thomas I.; Sindiy, Oleg V.

    2012-01-01

    The next generation of missions in NASA's Human Space Flight program focuses on the development and deployment of highly complex systems (e.g., Orion Multi-Purpose Crew Vehicle, Space Launch System, 21st Century Ground System) that will enable astronauts to venture beyond low Earth orbit and explore the moon, near-Earth asteroids, and beyond. Architecting these highly complex systems-of-systems requires formal systems engineering techniques for managing the evolution of the technical features in the information exchange domain (e.g., data exchanges, communication networks, ground software) and also formal correlation of the technical architecture to stakeholders' programmatic concerns (e.g., budget, schedule, risk) and design development (e.g., assumptions, constraints, trades, tracking of unknowns). This paper describes how the authors have applied the Systems Modeling Language (SysML) to implement model-based systems engineering for managing the description of the End-to-End Information System (EEIS) architecture and the associated development activities, ultimately enabling stakeholders to understand, reason about, and answer questions about the EEIS under design for the proposed lunar Exploration Missions 1 and 2 (EM-1 and EM-2).

  2. Impact of corpus domain for sentiment classification: An evaluation study using supervised machine learning techniques

    Science.gov (United States)

    Karsi, Redouane; Zaim, Mounia; El Alami, Jamila

    2017-07-01

    Thanks to the development of the internet, a large community now has the possibility to communicate and express its opinions and preferences through multiple media such as blogs, forums, social networks and e-commerce sites. Today it is increasingly clear that opinions published on the web are a very valuable source for decision-making, so a rapidly growing field of research called “sentiment analysis” has emerged to address the problem of automatically determining the polarity (positive, negative, neutral, …) of textual opinions. People expressing themselves in a particular domain often use domain-specific language, so building a classifier that performs well across different domains is a challenging problem. The purpose of this paper is to evaluate the impact of domain on sentiment classification when using machine learning techniques. In our study, three popular machine learning techniques, Support Vector Machines (SVM), Naive Bayes and K nearest neighbors (KNN), were applied to datasets collected from different domains. Experimental results show that Support Vector Machines outperforms the other classifiers in all domains, achieving at least 74.75% accuracy with a standard deviation of 4.08.
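
    A minimal sketch of the kind of comparison reported above (SVM vs. Naive Bayes vs. KNN on bag-of-words features) is shown below. The toy texts and labels are stand-ins for the multi-domain corpora used in the paper, so the scores it prints are illustrative only.

      # Compare SVM, Naive Bayes and KNN for sentiment polarity on TF-IDF features.
      # Toy texts stand in for the paper's domain corpora; illustrative only.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      texts = ["great product, works perfectly", "terrible quality, broke in a day",
               "really happy with this purchase", "awful support and slow delivery",
               "excellent value for money", "worst hotel room I have ever had",
               "friendly staff and clean rooms", "disappointing food, would not return"] * 5
      labels = [1, 0, 1, 0, 1, 0, 1, 0] * 5                  # 1 = positive, 0 = negative

      for name, clf in [("SVM", LinearSVC()),
                        ("NaiveBayes", MultinomialNB()),
                        ("KNN", KNeighborsClassifier(n_neighbors=3))]:
          pipe = make_pipeline(TfidfVectorizer(), clf)
          scores = cross_val_score(pipe, texts, labels, cv=5)
          print(f"{name}: mean accuracy {scores.mean():.2f} (+/- {scores.std():.2f})")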

  3. Machine Learning from Garden Path Sentences: The Application of Computational Linguistics

    Directory of Open Access Journals (Sweden)

    Jiali Du

    2014-12-01

    Full Text Available This paper discusses the application of computational linguistics in a machine learning (ML) system for the processing of garden path sentences. ML is closely related to artificial intelligence and linguistic cognition. The rapid and efficient processing of such complex structures is an effective way to test the system. By parsing garden path sentences, we conclude that the integration of theoretical and statistical methods is helpful for the development of the ML system.

  4. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3)

    Directory of Open Access Journals (Sweden)

    Bergmann Frank T.

    2018-03-01

    Full Text Available The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.

  5. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).

    Science.gov (United States)

    Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar

    2018-03-19

    The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
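
    For readers unfamiliar with the format, the sketch below assembles the skeleton of a SED-ML-like document (a model reference, a uniform time-course simulation and a task, i.e. items (i)-(iii) above) with Python's ElementTree. The element and attribute names reflect my reading of SED-ML Level 1 and should be checked against the specification; in practice a dedicated library such as libSEDML would normally be used instead.

      # Sketch: assembling the skeleton of a SED-ML-like document with ElementTree.
      # Element/attribute names are assumptions based on SED-ML Level 1; verify
      # against the specification or generate with a library such as libSEDML.
      import xml.etree.ElementTree as ET

      NS = "http://sed-ml.org/sed-ml/level1/version3"        # assumed L1V3 namespace
      ET.register_namespace("", NS)
      root = ET.Element(f"{{{NS}}}sedML", level="1", version="3")

      models = ET.SubElement(root, f"{{{NS}}}listOfModels")
      ET.SubElement(models, f"{{{NS}}}model", id="model1",
                    language="urn:sedml:language:sbml", source="model.xml")

      sims = ET.SubElement(root, f"{{{NS}}}listOfSimulations")
      tc = ET.SubElement(sims, f"{{{NS}}}uniformTimeCourse", id="sim1",
                         initialTime="0", outputStartTime="0",
                         outputEndTime="100", numberOfPoints="1000")
      ET.SubElement(tc, f"{{{NS}}}algorithm", kisaoID="KISAO:0000019")  # e.g. CVODE

      tasks = ET.SubElement(root, f"{{{NS}}}listOfTasks")
      ET.SubElement(tasks, f"{{{NS}}}task", id="task1",
                    modelReference="model1", simulationReference="sim1")

      ET.ElementTree(root).write("experiment.sedml", xml_declaration=True,
                                 encoding="utf-8")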

  6. Application and evaluation of a combination of Socratic and learning through discussion techniques.

    Science.gov (United States)

    van Aswegen, E J; Brink, H I; Steyn, P J

    2001-11-01

    This article has its genesis in the inquirer's interest in the need for internalizing critical thinking, creative thinking and reflective skills in adult learners. As part of a broader study the inquirer used a combination of two techniques over a period of nine months, namely: Socratic discussion/questioning and Learning Through Discussion Technique. The inquirer within this inquiry elected mainly qualitative methods, because they were seen as more adaptable to dealing with multiple realities and more sensitive and adaptable to the many shaping influences and value patterns that may be encountered (Lincoln & Guba, 1989). Purposive sampling was used and sample size (n = 10) was determined by the willingness of potential participants to enlist in the chosen techniques. Feedback from participants was obtained: (1) verbally after each discussion session, and (2) in written format after completion of the course content. The final/summative evaluation was obtained through a semi-structured questionnaire. This was deemed necessary, in that the participants were already studying for the end of the year examination. For the purpose of this condensed report the inquirer reflected only on the feedback obtained with the help of the questionnaire. The empirical study showed that in spite of various adaptation problems experienced, eight (8) of the ten (10) participants felt positive toward the applied techniques.

  7. Status of the Usage of Active Learning and Teaching Method and Techniques by Social Studies Teachers

    Science.gov (United States)

    Akman, Özkan

    2016-01-01

    The purpose of this study was to determine the active learning and teaching methods and techniques which are employed by the social studies teachers working in state schools of Turkey. This usage status was assessed using different variables. This was a case study, wherein the research was limited to 241 social studies teachers. These teachers…

  8. Conventional and Piecewise Growth Modeling Techniques: Applications and Implications for Investigating Head Start Children's Early Literacy Learning

    Science.gov (United States)

    Hindman, Annemarie H.; Cromley, Jennifer G.; Skibbe, Lori E.; Miller, Alison L.

    2011-01-01

    This article reviews the mechanics of conventional and piecewise growth models to demonstrate the unique affordances of each technique for examining the nature and predictors of children's early literacy learning during the transition from preschool through first grade. Using the nationally representative Family and Child Experiences Survey…

  9. The gel electrophoresis markup language (GelML) from the Proteomics Standards Initiative.

    Science.gov (United States)

    Gibson, Frank; Hoogland, Christine; Martinez-Bartolomé, Salvador; Medina-Aunon, J Alberto; Albar, Juan Pablo; Babnigg, Gyorgy; Wipat, Anil; Hermjakob, Henning; Almeida, Jonas S; Stanislaus, Romesh; Paton, Norman W; Jones, Andrew R

    2010-09-01

    The Human Proteome Organisation's Proteomics Standards Initiative has developed the GelML (gel electrophoresis markup language) data exchange format for representing gel electrophoresis experiments performed in proteomics investigations. The format closely follows the reporting guidelines for gel electrophoresis, which are part of the Minimum Information About a Proteomics Experiment (MIAPE) set of modules. GelML supports the capture of metadata (such as experimental protocols) and data (such as gel images) resulting from gel electrophoresis so that laboratories can be compliant with the MIAPE Gel Electrophoresis guidelines, while allowing such data sets to be exchanged or downloaded from public repositories. The format is sufficiently flexible to capture data from a broad range of experimental processes, and complements other PSI formats for MS data and the results of protein and peptide identifications to capture entire gel-based proteome workflows. GelML has resulted from the open standardisation process of PSI consisting of both public consultation and anonymous review of the specifications.

  10. Supervised Machine Learning for Population Genetics: A New Paradigm

    Science.gov (United States)

    Schrider, Daniel R.; Kern, Andrew D.

    2018-01-01

    As population genomic datasets grow in size, researchers are faced with the daunting task of making sense of a flood of information. To keep pace with this explosion of data, computational methodologies for population genetic inference are rapidly being developed to best utilize genomic sequence data. In this review we discuss a new paradigm that has emerged in computational population genomics: that of supervised machine learning (ML). We review the fundamentals of ML, discuss recent applications of supervised ML to population genetics that outperform competing methods, and describe promising future directions in this area. Ultimately, we argue that supervised ML is an important and underutilized tool that has considerable potential for the world of evolutionary genomics. PMID:29331490

  11. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as inability to store multiple clustering (including biclustering) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can be effectively used for the representation of clustering and for the validation of other biomedical and physical data that has no limitations.

  12. A Simulation of AI Programming Techniques in BASIC.

    Science.gov (United States)

    Mandell, Alan

    1986-01-01

    Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)

  13. Computational intelligence for technology enhanced learning

    Energy Technology Data Exchange (ETDEWEB)

    Xhafa, Fatos [Polytechnic Univ. of Catalonia, Barcelona (Spain). Dept. of Languages and Informatics Systems; Caballe, Santi; Daradoumis, Thanasis [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Computer Sciences Multimedia and Telecommunications; Abraham, Ajith [Machine Intelligence Research Labs (MIR Labs), Auburn, WA (United States). Scientific Network for Innovation and Research Excellence; Juan Perez, Angel Alejandro (eds.) [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Information Sciences

    2010-07-01

    E-Learning has become one of the most widespread ways of distance teaching and learning. Technologies such as Web, Grid, and Mobile and Wireless networks are pushing teaching and learning communities to find new and intelligent ways of using these technologies to enhance teaching and learning activities. Indeed, these new technologies can play an important role in increasing support for teachers and learners and in shortening the time needed for learning and teaching; yet, it is necessary to use intelligent techniques to take advantage of these new technologies to achieve the desired support for teachers and learners and enhance learners' performance in distributed learning environments. The chapters of this volume present advances in using intelligent techniques for technology-enhanced learning as well as in the development of e-Learning applications based on such techniques and supported by technology. Such intelligent techniques include clustering and classification for personalization of learning, intelligent context-aware techniques, adaptive learning, data mining techniques and ontologies in e-Learning systems, among others. Academics, scientists, software developers, teachers and tutors and students interested in e-Learning will find this book useful for their academic, research and practice activity. (orig.)

  14. THE PUZZLE TECHNIQUE, COOPERATIVE LEARNING STRATEGY TO IMPROVE ACADEMIC PERFORMANCE

    Directory of Open Access Journals (Sweden)

    M.ª José Mayorga Fernández

    2012-04-01

    Full Text Available This article presents an innovative experience carried out in the subject Pedagogical Bases of Special Education, a 4.5-credit core subject taught in the second year of the Degree in Physical Education Teacher Training (being phased out), based on the use of a methodological strategy in accordance with the new demands of the EEES. The experience pursues a double purpose: firstly, to present the jigsaw or puzzle technique as a useful methodological strategy for university learning and, secondly, to show whether this strategy improves students' results. Comparing these results with those of the previous year's students shows that the performance of students who participated in the innovative experience improved considerably, increasing their motivation and involvement in the task.

  15. Survey of Analysis of Crime Detection Techniques Using Data Mining and Machine Learning

    Science.gov (United States)

    Prabakaran, S.; Mitra, Shilpa

    2018-04-01

    Data mining is the field containing procedures for finding patterns in huge datasets; it includes strategies at the convergence of machine learning and database systems. It can be applied to various fields like future healthcare, market basket analysis, education, manufacturing engineering, crime investigation etc. Among these, crime investigation is an interesting application that processes crime characteristics to help society towards better living. This paper surveys various data mining techniques used in this domain. This study may be helpful in designing new strategies for crime prediction and analysis.

  16. The learning technique. Theoretical considerations for planning lessons with a strategic learning approach

    Directory of Open Access Journals (Sweden)

    Dania Regueira Martínez

    2014-03-01

    Full Text Available This article presents the learning task, considered as the smallest organizational unit of the teaching-learning process, which conditions, in its systemic structuring, the learning actions through which students acquire the content. It does so by developing reflection and metacognitive regulation as students consciously or partially plan different types of learning strategies in carrying out the task, with the objective of solving the professional pedagogical problems presented in the disciplines they study and in their research work during the direction of the teaching-learning process.

  17. Effect of Ability Grouping in Reciprocal Teaching Technique of Collaborative Learning on Individual Achievements and Social Skills

    Science.gov (United States)

    Sumadi; Degeng, I Nyoman S.; Sulthon; Waras

    2017-01-01

    This research focused on the effects of ability grouping in the reciprocal teaching technique of collaborative learning on individual achievement and social skills. The results showed that (1) there are significant differences in individual achievement between the homogeneous high group, homogeneous middle group, homogeneous low group,…

  18. Post-void residual urine under 150 ml does not exclude voiding dysfunction in women

    DEFF Research Database (Denmark)

    Khayyami, Yasmine; Klarskov, Niels; Lose, Gunnar

    2016-01-01

    INTRODUCTION AND HYPOTHESIS: It has been claimed that post-void residual urine (PVR) below 150 ml rules out voiding dysfunction in women with stress urinary incontinence (SUI) and provides license to perform sling surgery. The cut-off of 150 ml seems arbitrary, not evidence-based, and so we sought...

  19. WaterML, an Information Standard for the Exchange of in-situ hydrological observations

    Science.gov (United States)

    Valentine, D.; Taylor, P.; Zaslavsky, I.

    2012-04-01

    The WaterML 2.0 Standards Working Group (SWG), working within the Open Geospatial Consortium (OGC) and in cooperation with the joint OGC-World Meteorological Organization (WMO) Hydrology Domain Working Group (HDWG), has developed an open standard for the exchange of water observation data: WaterML 2.0. The focus of the standard is time-series data, commonly generated from in-situ monitoring. This is high-value data for hydrological applications such as flood forecasting, environmental reporting and supporting hydrological infrastructure (e.g. dams, supply systems); it is commonly exchanged, but a lack of standards inhibits efficient reuse and automation. Developing WaterML required a harmonization analysis of existing standards to identify overlapping concepts and come to agreement on harmonized definitions. Generally the formats captured similar requirements, all with subtle differences, such as how time-series point metadata was handled. The in-progress standard WaterML 2.0 incorporates the semantics of the hydrologic information: location, procedure, and observations, and is implemented as an application schema of the Geography Markup Language version 3.2.1, making use of the OGC Observations & Measurements standards. WaterML 2.0 is designed as an extensible schema to allow encoding of data to be used in a variety of exchange scenarios. Example areas of usage are: exchange of data for operational hydrological monitoring programs; supporting operation of infrastructure (e.g. dams, supply systems); cross-border exchange of observational data; release of data for public dissemination; enhancing disaster management through data exchange; and exchange in support of national reporting. The first phase of WaterML 2.0 focused on structural definitions allowing for the transfer of time-series, with less work on harmonization of vocabulary items such as quality codes. Vocabularies from various organizations tend to be specific and take time to

  20. Enhancing the Biological Relevance of Machine Learning Classifiers for Reverse Vaccinology

    Directory of Open Access Journals (Sweden)

    Ashley I. Heinson

    2017-02-01

    Full Text Available Reverse vaccinology (RV) is a bioinformatics approach that can predict antigens with protective potential from the protein coding genomes of bacterial pathogens for subunit vaccine design. RV has become firmly established following the development of the BEXSERO® vaccine against Neisseria meningitidis serogroup B. RV studies have begun to incorporate machine learning (ML) techniques to distinguish bacterial protective antigens (BPAs) from non-BPAs. This research contributes significantly to the RV field by using permutation analysis to demonstrate that a signal for protective antigens can be curated from published data. Furthermore, the effects of the following on an ML approach to RV were also assessed: nested cross-validation, balancing selection of non-BPAs for subcellular localization, increasing the training data, and incorporating greater numbers of protein annotation tools for feature generation. These enhancements yielded a support vector machine (SVM) classifier that could discriminate BPAs (n = 200) from non-BPAs (n = 200) with an area under the curve (AUC) of 0.787. In addition, hierarchical clustering of BPAs revealed that intracellular BPAs clustered separately from extracellular BPAs. However, no immediate benefit was derived when training SVM classifiers on data sets exclusively containing intra- or extracellular BPAs. In conclusion, this work demonstrates that ML classifiers have great utility in RV approaches and will lead to new subunit vaccines in the future.
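
    The evaluation described above (an SVM discriminating BPAs from non-BPAs under nested cross-validation, scored by AUC) can be sketched with scikit-learn as follows. Random feature vectors stand in for the protein-annotation features, so the numbers produced are illustrative only.

      # Sketch of nested cross-validation for an SVM antigen classifier with AUC scoring.
      # Random features stand in for the protein-annotation features used in the paper.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=400, n_features=50, n_informative=15,
                                 weights=[0.5, 0.5], random_state=1)   # 200 "BPAs" vs 200 "non-BPAs"

      param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.001]}
      pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

      inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)   # hyper-parameter tuning
      outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)   # unbiased performance
      search = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=inner)
      auc = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)

      print(f"nested-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")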

  1. Enhancing the Biological Relevance of Machine Learning Classifiers for Reverse Vaccinology

    KAUST Repository

    Heinson, Ashley

    2017-02-01

    Reverse vaccinology (RV) is a bioinformatics approach that can predict antigens with protective potential from the protein coding genomes of bacterial pathogens for subunit vaccine design. RV has become firmly established following the development of the BEXSERO® vaccine against Neisseria meningitidis serogroup B. RV studies have begun to incorporate machine learning (ML) techniques to distinguish bacterial protective antigens (BPAs) from non-BPAs. This research contributes significantly to the RV field by using permutation analysis to demonstrate that a signal for protective antigens can be curated from published data. Furthermore, the effects of the following on an ML approach to RV were also assessed: nested cross-validation, balancing selection of non-BPAs for subcellular localization, increasing the training data, and incorporating greater numbers of protein annotation tools for feature generation. These enhancements yielded a support vector machine (SVM) classifier that could discriminate BPAs (n = 200) from non-BPAs (n = 200) with an area under the curve (AUC) of 0.787. In addition, hierarchical clustering of BPAs revealed that intracellular BPAs clustered separately from extracellular BPAs. However, no immediate benefit was derived when training SVM classifiers on data sets exclusively containing intra- or extracellular BPAs. In conclusion, this work demonstrates that ML classifiers have great utility in RV approaches and will lead to new subunit vaccines in the future.

  2. Enhancing the Biological Relevance of Machine Learning Classifiers for Reverse Vaccinology

    KAUST Repository

    Heinson, Ashley; Gunawardana, Yawwani; Moesker, Bastiaan; Hume, Carmen; Vataga, Elena; Hall, Yper; Stylianou, Elena; McShane, Helen; Williams, Ann; Niranjan, Mahesan; Woelk, Christopher

    2017-01-01

    Reverse vaccinology (RV) is a bioinformatics approach that can predict antigens with protective potential from the protein coding genomes of bacterial pathogens for subunit vaccine design. RV has become firmly established following the development of the BEXSERO® vaccine against Neisseria meningitidis serogroup B. RV studies have begun to incorporate machine learning (ML) techniques to distinguish bacterial protective antigens (BPAs) from non-BPAs. This research contributes significantly to the RV field by using permutation analysis to demonstrate that a signal for protective antigens can be curated from published data. Furthermore, the effects of the following on an ML approach to RV were also assessed: nested cross-validation, balancing selection of non-BPAs for subcellular localization, increasing the training data, and incorporating greater numbers of protein annotation tools for feature generation. These enhancements yielded a support vector machine (SVM) classifier that could discriminate BPAs (n = 200) from non-BPAs (n = 200) with an area under the curve (AUC) of 0.787. In addition, hierarchical clustering of BPAs revealed that intracellular BPAs clustered separately from extracellular BPAs. However, no immediate benefit was derived when training SVM classifiers on data sets exclusively containing intra- or extracellular BPAs. In conclusion, this work demonstrates that ML classifiers have great utility in RV approaches and will lead to new subunit vaccines in the future.

  3. Scoping Study of Machine Learning Techniques for Visualization and Analysis of Multi-source Data in Nuclear Safeguards

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Yonggang

    2018-05-07

    In the implementation of nuclear safeguards, many different techniques are being used to monitor the operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers and digital seals to open-source searches and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have only loose correlations, or none, with one another, it could be beneficial to analyze them together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning-based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.

  4. Blood-conservation techniques in craniofacial surgery.

    Science.gov (United States)

    Meara, John G; Smith, Ebonie M; Harshbarger, Raymond J; Farlo, Joe N; Matar, Marla M; Levy, Mike L

    2005-05-01

    Attempts at reducing exposure to allogeneic transfusions, using blood-conservation techniques such as controlled hypotension and normovolemic hemodilution, have met with mixed results and are not always practical in small infants. Recombinant human erythropoietin (RHE), a hormone that stimulates RBC production, increases the hematocrit when administered to infants. A retrospective chart review of all patients undergoing fronto-orbital advancement for craniosynostosis by the same plastic surgeon between January 2002 and December 2002 was conducted. A subgroup of patients (10/19) received RHE as a blood-conservation strategy. Transfusion requirements were lower in the RHE group (5/10) versus the control group (9/9). The total volume of blood products transfused was statistically significantly lower in the RHE group (154 mL versus 421 mL in controls). The use of these blood-conservation techniques was associated with a decreased need for blood transfusion, thus exposing the patient to fewer risks associated with allogeneic transfusion.

  5. Facial rejuvenation with fillers: The dual plane technique

    Directory of Open Access Journals (Sweden)

    Giovanni Salti

    2015-01-01

    Full Text Available Background: Facial aging is characterized by skin changes, sagging and volume loss. Volume is frequently addressed with resorbable fillers like hyaluronic acid gels. Materials and Methods: From an anatomical point of view, the deep and superficial fat compartments evolve differently with aging in a rather predictable manner. Volume can therefore be restored following a technique based on restoring first the deep volumes and thereafter the superficial volumes. We called this strategy "dual plane". A series of 147 consecutive patients has been treated with fillers using the dual plane technique in the last five years. Results: An average of 4.25 sessions per patient was carried out, for a total of 625 treatment sessions. The average total amount of product used was 12 ml per patient, with an average amount per session of 3.75 ml. Few and limited adverse events occurred with this technique. Conclusion: The dual plane technique is an injection technique based on anatomical logic. Different types of products can be used according to the plane of injection and their rheology in order to obtain a natural result and few side effects.

  6. A SysML Test Model and Test Suite for the ETCS Ceiling Speed Monitor

    DEFF Research Database (Denmark)

    Braunstein, Cécile; Peleska, Jan; Schulze, Uwe

    2014-01-01

    System specification. The model is provided in SysML, and it is equipped with a formal semantics that is consistent with the (semi formal) SysML standard published by the Object Management Group (OMG). The model and its description are publicly available on http://www.mbt-benchmarks.de, a website...

  7. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.

  8. Robotic radical cystectomy and intracorporeal urinary diversion: The USC technique.

    Science.gov (United States)

    Abreu, Andre Luis de Castro; Chopra, Sameer; Azhar, Raed A; Berger, Andre K; Miranda, Gus; Cai, Jie; Gill, Inderbir S; Aron, Monish; Desai, Mihir M

    2014-07-01

    Radical cystectomy is the gold-standard treatment for muscle-invasive and refractory nonmuscle-invasive bladder cancer. We describe our technique for robotic radical cystectomy (RRC) and intracorporeal urinary diversion (ICUD), that replicates open surgical principles, and present our preliminary results. Specific descriptions for preoperative planning, surgical technique, and postoperative care are provided. Demographics, perioperative and 30-day complications data were collected prospectively and retrospectively analyzed. Learning curve trends were analyzed individually for ileal conduits (IC) and neobladders (NB). SAS(®) Software Version 9.3 was used for statistical analyses with statistical significance set at P < 0.05. Between July 2010 and September 2013, RRC and lymph node dissection with ICUD were performed in 103 consecutive patients (orthotopic NB=46, IC 57). All procedures were completed robotically replicating the open surgical principles. The learning curve trends showed a significant reduction in hospital stay for both IC (11 vs. 6-day, P < 0.01) and orthotopic NB (13 vs. 7.5-day, P < 0.01) when comparing the first third of the cohort with the rest of the group. Overall median (range) operative time and estimated blood loss was 7 h (4.8-13) and 200 mL (50-1200), respectively. Within 30-day postoperatively, complications occurred in 61 (59%) patients, with the majority being low grade (n = 43), and no patient died. Median (range) nodes yield was 36 (0-106) and 4 (3.9%) specimens had positive surgical margins. Robotic radical cystectomy with totally ICUD is safe and feasible. It can be performed using the established open surgical principles with encouraging perioperative outcomes.

  9. Students' perceptions of effective learning experiences in dental school: a qualitative study using a critical incident technique.

    Science.gov (United States)

    Victoroff, Kristin Zakariasen; Hogan, Sarah

    2006-02-01

    Students' views of their educational experience can be an important source of information for curriculum assessment. Although quantitative methods, particularly surveys, are frequently used to gather such data, fewer studies have employed qualitative methods to examine students' dental education experiences. The purpose of this study is to explore characteristics of effective learning experiences in dental school using a qualitative method. All third-year (seventy) and fourth-year (seventy) dental students enrolled in one midwestern dental school were invited to participate. Fifty-three dental students (thirty-five male and eighteen female; thirty-two third-year and twenty-one fourth-year) were interviewed using a critical incident interview technique. Each student was asked to describe a specific, particularly effective learning incident that he or she had experienced in dental school and a specific, particularly ineffective learning incident, for comparison. Each interview was audiotaped. Students were assured that only the interviewer and one additional researcher would have access to the tapes. Data analysis resulted in identification of key themes in the data describing characteristics of effective learning experiences. The following characteristics of effective learning experiences were identified: 1) instructor characteristics (personal qualities, "checking-in" with students, and an interactive style); 2) characteristics of the learning process (focus on the "big picture," modeling and demonstrations, opportunities to apply new knowledge, high-quality feedback, focus, specificity and relevance, and peer interactions); and 3) learning environment (culture of the learning environment, technology). Common themes emerged across a wide variety of learning incidents. Although additional research is needed, the characteristics of effective learning experiences identified in this study may have implications for individual course design and for the dental school

  10. [Platelet rich plasma (PRP): potentialities and techniques of extraction].

    Science.gov (United States)

    Pacifici, L; Casella, F; Maggiore, C

    2002-01-01

    This paper describes the various techniques of platelet-rich plasma (PRP) extraction codified in recent years, and their potential uses are evaluated. PRP is one of the techniques with which attempts are currently being made to modulate and facilitate wound healing. The use of PRP is based on the theoretical premise that by concentrating platelets the effects of the growth factors (PDGF, TGF-beta, IGF-I and -II) so released will be increased. Marx's original technique is described first. It prescribes the sampling of a unit of blood (450-500 ml) and the use of a cell separator. We then analysed the technique of Marx and Hannon, in which the quantity of blood sampled is reduced to 150 ml, and the two simplified techniques of the Sacchi and Bellanda groups. Finally, a new PRP extraction technique is described. We conclude that platelet gel allows access to autologous growth factors which by definition are neither toxic nor immunogenic and are capable of accelerating the normal processes of bone regeneration. PRP can thus be considered a useful instrument for increasing the quality and final quantity of regenerated bone in oral and maxillo-facial surgery operations.

  11. Safety, immunogenicity, and efficacy of the ML29 reassortant vaccine for Lassa fever in small non-human primates

    Science.gov (United States)

    Lukashevich, Igor S.; Carrion, Ricardo; Salvato, Maria S.; Mansfield, Keith; Brasky, Kathleen; Zapata, Juan; Cairo, Cristiana; Goicochea, Marco; Hoosien, Gia E.; Ticer, Anysha; Bryant, Joseph; Davis, Harry; Hammamieh, Rasha; Mayda, Maria; Jett, Marti; Patterson, Jean

    2008-01-01

    A single injection of ML29 reassortant vaccine for Lassa fever induces low, transient viremia and low or moderate levels of ML29 replication in tissues of common marmosets, depending on the vaccination dose. The vaccination elicits specific immune responses and completely protects marmosets against fatal disease by inducing sterilizing cell-mediated immunity. DNA array analysis of human peripheral blood mononuclear cells from healthy donors exposed to ML29 revealed that gene expression patterns in ML29-exposed PBMC and in control, media-exposed PBMC clustered together, confirming the safety profile of ML29 in non-human primates. The ML29 reassortant is a promising vaccine candidate for Lassa fever. PMID:18692539

  12. A Novel Semi-Supervised Electronic Nose Learning Technique: M-Training

    Directory of Open Access Journals (Sweden)

    Pengfei Jia

    2016-03-01

    Full Text Available When an electronic nose (E-nose) is used to distinguish different kinds of gases, the label information of the target gas could be lost due to some fault of the operators or some other reason, although this is not expected. Another fact is that the cost of getting labeled samples is usually higher than that of unlabeled ones. In most cases, the classification accuracy of an E-nose trained using labeled samples is higher than that of an E-nose trained with unlabeled ones, so gases without label information should not be used to train an E-nose; however, discarding them wastes resources and can even delay the progress of research. In this work a novel multi-class semi-supervised learning technique called M-training is proposed to train E-noses with both labeled and unlabeled samples. We employ M-training to train an E-nose which is used to distinguish three indoor pollutant gases (benzene, toluene and formaldehyde). Data processing results show that the classification accuracy of an E-nose trained by semi-supervised techniques (tri-training and M-training) is higher than that of an E-nose trained only with labeled samples, and the performance of M-training is better than that of tri-training because more base classifiers can be employed by M-training.
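
    M-training itself is not available as an off-the-shelf routine, but the general idea of folding unlabeled samples into training can be sketched with a standard semi-supervised baseline, scikit-learn's SelfTrainingClassifier (self-training, not the paper's M-training). Synthetic data stands in for E-nose sensor responses, so this is illustrative only.

      # Semi-supervised baseline sketch: self-training with an SVM base classifier.
      # Synthetic three-class data stands in for E-nose features; not the paper's M-training.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.semi_supervised import SelfTrainingClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)
      rng = np.random.default_rng(0)
      y_semi = y.copy()
      mask_unlabeled = rng.random(len(y)) < 0.8          # hide 80% of the labels
      y_semi[mask_unlabeled] = -1                        # -1 marks "unlabeled" for sklearn

      base = SVC(probability=True, random_state=0)
      model = SelfTrainingClassifier(base, threshold=0.8).fit(X, y_semi)

      print("accuracy on the originally unlabeled samples:",
            round(model.score(X[mask_unlabeled], y[mask_unlabeled]), 3))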

  13. Performance of Machine Learning Algorithms for Qualitative and Quantitative Prediction Drug Blockade of hERG1 channel.

    Science.gov (United States)

    Wacker, Soren; Noskov, Sergei Yu

    2018-05-01

    The drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose to, or protect from, drug-induced TdP. One unexpected outcome of the compulsory hERG screening implemented in the USA and the European Union is large datasets of IC50 values for various molecules entering the market. The abundant data now make it possible to construct predictive machine-learning (ML) models. Novel ML algorithms and techniques promise accuracy in determining IC50 values of hERG blockade that is comparable to or surpasses that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we have developed a computational platform integrating various workflows for quantitative structure-activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms, we computed IC50 values for a large dataset of hERG-blocking and non-blocking drugs and compared them to measurements from an automated patch clamp system, an industry gold standard in studies of cardiotoxicity. The optimal protocol with high sensitivity and predictive power is based on the novel eXtreme gradient boosting (XGBoost) algorithm. The ML-platform with XGBoost
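
    In the spirit of the QSAR workflow described above, the sketch below trains a gradient-boosted regressor (XGBoost, the algorithm named in the abstract) to predict pIC50-like values from descriptor vectors. Random features stand in for real molecular fingerprints or ChEMBL descriptors, so the numbers are illustrative only and this is not the authors' platform.

      # Minimal QSAR-style regression sketch with XGBoost on stand-in descriptors.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error
      from xgboost import XGBRegressor

      rng = np.random.default_rng(0)
      X = rng.random((1000, 128))                       # stand-in molecular descriptors
      coef = rng.normal(size=128)
      y = X @ coef * 0.1 + rng.normal(scale=0.3, size=1000) + 5.0   # synthetic pIC50-like target

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05,
                           subsample=0.8, colsample_bytree=0.8, random_state=0)
      model.fit(X_tr, y_tr)
      pred = model.predict(X_te)
      print("MAE on held-out molecules:", round(mean_absolute_error(y_te, pred), 3))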

  14. Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning

    Science.gov (United States)

    Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.

    2018-02-01

    We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product that facilitates cut-out magnetograms of solar active regions (AR) in the Sun in near-realtime (NRT), taken over a five-year interval (2012 - 2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude >M1 and >C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC=0.93(0.00), true skill statistic TSS=0.74(0.02), and Heidke skill score HSS=0.49(0.01) for >M1 flare prediction with probability threshold 15% and ACC=0.84(0.00), TSS=0.60(0.01), and HSS=0.59(0.01) for >C1 flare prediction with probability threshold 35%.
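
    The random-forest step with a probability threshold and a true skill statistic (TSS) computed from the confusion matrix can be sketched as below. Synthetic predictors stand in for the thirteen magnetogram-based predictors used in the paper, so the scores are illustrative only.

      # Sketch: random-forest flare/no-flare classification with a probability
      # threshold and the true skill statistic (TSS). Synthetic predictors only.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import confusion_matrix
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=5000, n_features=13, n_informative=8,
                                 weights=[0.9, 0.1], random_state=0)      # flares are rare
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
      prob = rf.predict_proba(X_te)[:, 1]
      pred = (prob >= 0.15).astype(int)                  # 15% probability threshold, as for >M1

      tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
      tss = tp / (tp + fn) - fp / (fp + tn)              # hit rate minus false-alarm rate
      acc = (tp + tn) / (tp + tn + fp + fn)
      print(f"ACC={acc:.2f}  TSS={tss:.2f}")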

  15. Distributed learning enhances relational memory consolidation.

    Science.gov (United States)

    Litman, Leib; Davachi, Lila

    2008-09-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of forgetting relative to ML. Furthermore, we demonstrate that this savings in forgetting is specific to relational, but not item, memory. In the context of extant theories and knowledge of memory consolidation, these results suggest that an important mechanism underlying the mnemonic benefit of DL is enhanced memory consolidation. We speculate that synaptic strengthening mechanisms supporting long-term memory consolidation may be differentially mediated by the spacing of memory reactivation. These findings have broad implications for the scientific study of episodic memory consolidation and, more generally, for educational curriculum development and policy.

  16. QuakeML: Recent Development and First Applications of the Community-Created Seismological Data Exchange Standard

    Science.gov (United States)

    Euchner, F.; Schorlemmer, D.; Kästli, P.; Quakeml Group, T

    2008-12-01

    QuakeML is an XML-based exchange format for seismological data which is being developed using a community-driven approach. It covers basic event description, including picks, arrivals, amplitudes, magnitudes, origins, focal mechanisms, and moment tensors. Contributions have been made from ETH, GFZ, USC, SCEC, USGS, IRIS DMC, EMSC, ORFEUS, GNS, ZAMG, BRGM, and ISTI. The current release (Version 1.1, Proposed Recommendation) reflects the results of a public Request for Comments process which has been documented online at http://quakeml.org/RFC_BED_1.0. QuakeML has recently been adopted as a distribution format for earthquake catalogs by GNS Science, New Zealand, and the European-Mediterranean Seismological Centre (EMSC). These institutions provide prototype QuakeML web services. Furthermore, integration of the QuakeML data model in the CSEP (Collaboratory for the Study of Earthquake Predictability, http://www.cseptesting.org) testing center software developed by SCEC is under way. QuakePy is a Python- based seismicity analysis toolkit which is based on the QuakeML data model. Recently, QuakePy has been used to implement the PMC method for calculating network recording completeness (Schorlemmer and Woessner 2008, in press). Completeness results for seismic networks in Southern California and Japan can be retrieved through the CompletenessWeb (http://completenessweb.org). Future QuakeML development will include an extension for macroseismic information. Furthermore, development on seismic inventory information, resource identifiers, and resource metadata is under way. Online resources: http://www.quakeml.org, http://www.quakepy.org

  17. Application and evaluation of a combination of Socratic and learning through discussion techniques

    Directory of Open Access Journals (Sweden)

    EJ van Aswegen

    2001-09-01

    Full Text Available This article has its genesis in the inquirer’s interest in the need for internalizing critical thinking, creative thinking and reflective skills in adult learners. As part of a broader study the inquirer used a combination of two techniques over a period of nine months, namely: Socratic discussion/questioning and Learning Through Discussion Technique. The inquirer within this inquiry elected mainly qualitative methods, because they were seen as more adaptable to dealing with multiple realities and more sensitive and adaptable to the many shaping influences and value patterns that may be encountered (Lincoln & Guba, 1989). Purposive sampling was used and sample size (n = 10) was determined by the willingness of potential participants to enlist in the chosen techniques. Feedback from participants was obtained: (1) verbally after each discussion session, and (2) in written format after completion of the course content. The final/summative evaluation was obtained through a semi-structured questionnaire. This was deemed necessary, in that the participants were already studying for the end of the year examination. For the purpose of this condensed report the inquirer reflected only on the feedback obtained with the help of the questionnaire. The empirical study showed that in spite of various adaptation problems experienced, eight (8) of the ten (10) participants felt positive toward the applied techniques.

  18. A hybrid stock trading framework integrating technical analysis with machine learning techniques

    Directory of Open Access Journals (Sweden)

    Rajashree Dash

    2016-03-01

    Full Text Available In this paper, a novel decision support system using a computationally efficient functional link artificial neural network (CEFLANN) and a set of rules is proposed to generate trading decisions more effectively. Here the problem of stock trading decision prediction is articulated as a classification problem with three class values representing the buy, hold and sell signals. The CEFLANN network used in the decision support system produces a set of continuous trading signals within the range 0–1 by analyzing the nonlinear relationship that exists between a few popular technical indicators. Further, the output trading signals are used to track the trend and to produce the trading decision based on that trend using some trading rules. The novelty of the approach is to generate profitable stock trading decision points through integration of the learning ability of the CEFLANN neural network with technical analysis rules. For assessing the potential use of the proposed method, the model performance is also compared with some other machine learning techniques such as the Support Vector Machine (SVM), Naive Bayesian model, K nearest neighbor (KNN) model and Decision Tree (DT) model.
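
    The CEFLANN network itself is not reproduced here, but the second stage, turning a continuous trading signal in the range 0-1 into buy/hold/sell decisions with simple rules, can be sketched as follows. The thresholds and the random-walk stand-in signal are assumptions for illustration only.

      # Sketch of the rule stage only: map a continuous trading signal in [0, 1] to
      # buy/hold/sell decisions with threshold rules. The network producing the
      # signal is not reproduced; thresholds below are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      signal = np.clip(np.cumsum(rng.normal(scale=0.05, size=60)) + 0.5, 0, 1)  # stand-in signal

      BUY_T, SELL_T = 0.7, 0.3          # assumed thresholds on the trend signal

      def decide(s_prev, s_now):
          """Simple trend-following rules on the continuous signal."""
          if s_now >= BUY_T and s_prev < BUY_T:
              return "BUY"
          if s_now <= SELL_T and s_prev > SELL_T:
              return "SELL"
          return "HOLD"

      decisions = [decide(signal[i - 1], signal[i]) for i in range(1, len(signal))]
      print(decisions[:10])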

  19. Milking technique of sup(99m)Tc generators and labeling efficiencies

    International Nuclear Information System (INIS)

    Salehi, N.; Guignard, P.A.

    1985-01-01

    Increased levels of sup(99)Tc in generator-produced sup(99m)Tc have an adverse effect on the labelling efficiency of red blood cells and human serum albumin. A two-step milking technique, in which the first 1-2 ml of eluate is discarded, has been found to produce higher and more constant labelling efficiency of lymphocytes and platelets than a one-step procedure. The binding efficiency of platelet-rich plasma and lymphocytes with sup(99m)Tc is greater in the two-step technique. The highest activity concentration in the eluate for critical labelling is found between 1.5 and 3 ml. (U.K.)

  20. Automatic Classification of Sub-Techniques in Classical Cross-Country Skiing Using a Machine Learning Algorithm on Micro-Sensor Data

    Directory of Open Access Journals (Sweden)

    Ole Marius Hoel Rindal

    2017-12-01

    Full Text Available The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs) that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier's arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier's arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into the categories training-, validation- and test-data. Overall, we achieved a classification accuracy of 93.9% on the test-data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers.
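
    The two-step idea described above, segmenting cycles from an arm gyroscope with peak detection and then classifying each cycle from chest-accelerometer features with a small neural network, can be sketched as below. Synthetic signals and features stand in for the IMU data, so this is illustrative and not the authors' pipeline.

      # Two-step sketch: (1) find cycle boundaries in an arm-gyroscope trace with peak
      # detection, (2) classify per-cycle feature vectors with a small MLP.
      import numpy as np
      from scipy.signal import find_peaks
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # (1) cycle detection on a synthetic ~1.5 Hz arm-swing signal sampled at 100 Hz
      t = np.arange(0, 30, 0.01)
      gyro = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
      peaks, _ = find_peaks(gyro, height=0.5, distance=40)       # roughly one peak per cycle
      print("detected cycles:", len(peaks))

      # (2) per-cycle classification into three sub-technique classes (e.g. diagonal
      # stride, double poling, double poling with kick) from stand-in accelerometer features
      X, y = [], []
      for label in range(3):
          X.append(rng.normal(loc=label, scale=0.7, size=(200, 12)))
          y.append(np.full(200, label))
      X, y = np.vstack(X), np.concatenate(y)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
      print("cycle classification accuracy:", round(mlp.score(X_te, y_te), 3))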

  1. The Effect of using Teams Games Tournaments (TGT Technique for Learning Mathematics in Bangladesh

    Directory of Open Access Journals (Sweden)

    Abdus Salam

    2015-07-01

    Full Text Available Games-based learning has captured the interest of educationalists and industrialists who seek to reveal the characteristics of computer games, as they are perceived by some to be a potentially effective approach for teaching and learning. Despite this interest, there is a dearth of studies on gaming and education in third world countries. This study investigated the effects of game playing on the mathematics performance and attitudes of Grade VIII students. The study was carried out by implementing the TGT technique for the experimental group and a typical lecture-based approach for the control group. The same achievement test was employed in both the pretest and the posttest; an inventory of attitudes towards mathematics was administered at pretest and posttest to the TGT experimental and control groups; an attitude scale on computer games was employed for the TGT experimental group; and a semi-structured interview for the teacher and an FGD guideline for students were applied to serve the research objectives. After three weeks of intervention, it was found that the TGT experimental group achieved significantly better learning outcomes than the lecture-based control group. Attitudes towards mathematics also improved to a certain extent in the TGT experimental group. On the basis of the findings of this study, some recommendations were made to overcome the barriers to integrating web-based game playing in the classroom.

  2. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    Science.gov (United States)

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These
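
    The bootstrap-ensemble idea at the core of the first stage (train each member on a bootstrap resample, average the predictions, and take percentile bounds as a rough confidence interval) can be sketched as below with small MLP regressors. Synthetic features stand in for oscillometric data, and the paper's DBN-DNN and AdaBoost stages are not reproduced.

      # Sketch of a bootstrap ensemble of regressors with a percentile confidence interval.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 10))                    # stand-in oscillometric features
      y = 120 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=300)   # stand-in SBP

      members = []
      for b in range(25):                               # 25 bootstrap members
          idx = rng.integers(0, len(X), len(X))         # resample with replacement
          m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=b)
          members.append(m.fit(X[idx], y[idx]))

      x_new = rng.normal(size=(1, 10))
      preds = np.array([m.predict(x_new)[0] for m in members])
      lo, hi = np.percentile(preds, [2.5, 97.5])
      print(f"SBP estimate {preds.mean():.1f} mmHg, 95% CI [{lo:.1f}, {hi:.1f}]")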

  3. Determination of Stress-Corrosion Cracking in Aluminum-Lithium Alloy ML377

    Science.gov (United States)

    Valek, Bryan C.

    1995-01-01

    The use of aluminum-lithium alloys for aerospace applications is currently being studied at NASA Langley Research Center's Metallic Materials Branch. The alloys in question will operate under stress in a corrosive environment. These conditions are ideal for the phenomenon of Stress-Corrosion Cracking (SCC) to occur. The test procedure for SCC calls for alternate immersion and breaking load tests. These tests were optimized for the lab equipment and materials available in the Light Alloy lab. Al-Li alloy ML377 specimens were then subjected to alternate immersion and breaking load tests to determine residual strength and resistance to SCC. Corrosion morphology and microstructure were examined under magnification. Data show that ML377 is highly resistant to stress-corrosion cracking.

  4. The Effect of Jigsaw Technique on 6th Graders' Learning of Force and Motion Unit and Their Science Attitudes and Motivation

    Science.gov (United States)

    Ural, Evrim; Ercan, Orhan; Gençoglan, Durdu Mehmet

    2017-01-01

    The study aims to investigate the effects of the jigsaw technique on 6th graders' learning of the "Force and Motion" unit, their science learning motivation, and their attitudes towards science classes. The sample of the study consisted of 49 6th grade students from two different classes taking the Science and Technology course at a government…

  5. Machine Learning Approaches for Predicting Radiation Therapy Outcomes: A Clinician's Perspective

    International Nuclear Information System (INIS)

    Kang, John; Schwartz, Russell; Flickinger, John; Beriwal, Sushil

    2015-01-01

    Radiation oncology has always been deeply rooted in modeling, from the early days of isoeffect curves to the contemporary Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) initiative. In recent years, medical modeling for both prognostic and therapeutic purposes has exploded thanks to increasing availability of electronic data and genomics. One promising direction that medical modeling is moving toward is adopting the same machine learning methods used by companies such as Google and Facebook to combat disease. Broadly defined, machine learning is a branch of computer science that deals with making predictions from complex data through statistical models. These methods serve to uncover patterns in data and are actively used in areas such as speech recognition, handwriting recognition, face recognition, “spam” filtering (junk email), and targeted advertising. Although multiple radiation oncology research groups have shown the value of applied machine learning (ML), clinical adoption has been slow due to the high barrier to understanding these complex models by clinicians. Here, we present a review of the use of ML to predict radiation therapy outcomes from the clinician's point of view with the hope that it lowers the “barrier to entry” for those without formal training in ML. We begin by describing 7 principles that one should consider when evaluating (or creating) an ML model in radiation oncology. We next introduce 3 popular ML methods—logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN)—and critique 3 seminal papers in the context of these principles. Although current studies are in exploratory stages, the overall methodology has progressively matured, and the field is ready for larger-scale further investigation.
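    As a concrete illustration of the first of the three methods named in this review (logistic regression), the hedged sketch below fits an LR model to synthetic dose-volume features for a binary toxicity outcome and reports cross-validated AUC. The feature names and data are invented for illustration and are not taken from the cited studies.

    ```python
    # Illustrative logistic regression for a binary radiotherapy outcome.
    # Synthetic features and labels; not a model from the review.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 300
    mean_dose = rng.normal(50, 10, n)              # hypothetical mean organ dose (Gy)
    v20 = rng.uniform(0, 60, n)                    # hypothetical volume receiving >= 20 Gy (%)
    X = np.column_stack([mean_dose, v20])
    logit = 0.08 * (mean_dose - 50) + 0.05 * (v20 - 30)
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated toxicity labels

    clf = LogisticRegression()
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC:", auc.mean().round(3))
    ```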

  6. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    Science.gov (United States)

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. ART-ML proposes a representation approach, extending ARTool, for the interoperability of the individual resources, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software

  7. iML1515, a knowledgebase that computes Escherichia coli traits

    DEFF Research Database (Denmark)

    Monk, Jonathan M.; Lloyd, Colton J.; Brunk, Elizabeth

    2017-01-01

    To the Editor: Extracting knowledge from the many types of big data produced by high-throughput methods remains a challenge, even when data are from Escherichia coli, the best characterized bacterial species. Here, we present iML1515, the most complete genome-scale reconstruction of the metabolic...

  8. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Directory of Open Access Journals (Sweden)

    Irina-Steliana STAN

    2014-09-01

    Full Text Available The present paper proposes a methodology for analyzing the metrics related to electronic business. The draft optimization models include KPIs that can highlight what is specific to the business, provided they are integrated using learning-based techniques. Having identified the most important and high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human staff will increasingly collaborate with these optimization models, which should translate into higher-quality decisions followed by increased profitability.

  9. Emerging Paradigms in Machine Learning

    CERN Document Server

    Jain, Lakhmi; Howlett, Robert

    2013-01-01

    This book presents fundamental topics and algorithms that form the core of machine learning (ML) research, as well as emerging paradigms in intelligent system design. The multidisciplinary nature of machine learning makes it a very fascinating and popular area for research. The book is aimed at students, practitioners and researchers, and captures the diversity and richness of the field of machine learning and intelligent systems. Several chapters are devoted to computational learning models such as granular computing, rough sets and fuzzy sets. An account of applications of well-known learning methods in biometrics, computational stylistics, multi-agent systems and spam classification, including an extremely well-written survey on Bayesian networks, sheds light on the strengths and weaknesses of the methods. Practical studies yielding insight into challenging problems such as learning from incomplete and imbalanced data, pattern recognition of stochastic episodic events and on-line mining of non-stationary ...

  10. libNeuroML and PyLEMS: using Python to combine procedural and declarative modeling approaches in computational neuroscience.

    Science.gov (United States)

    Vella, Michael; Cannon, Robert C; Crook, Sharon; Davison, Andrew P; Ganapathy, Gautham; Robinson, Hugh P C; Silver, R Angus; Gleeson, Padraig

    2014-01-01

    NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two Application Programming Interfaces (APIs) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment.
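    To give a flavour of the procedural API described here, the sketch below builds a minimal NeuroML document in Python and writes it to XML. It follows the usage pattern shown in the libNeuroML documentation, but the exact class and attribute names should be treated as assumptions and checked against the installed version.

    ```python
    # Minimal libNeuroML sketch (assumed API; verify against your installed version).
    import neuroml
    import neuroml.writers as writers

    doc = neuroml.NeuroMLDocument(id="example_doc")

    # A simple integrate-and-fire cell; attribute names follow the NeuroML schema.
    cell = neuroml.IafCell(id="iaf0",
                           leak_reversal="-60mV", thresh="-55mV", reset="-62mV",
                           C="1.0nF", leak_conductance="10nS")
    doc.iaf_cells.append(cell)

    # Serialize the document to NeuroML XML.
    writers.NeuroMLWriter.write(doc, "example_doc.nml")
    print("wrote example_doc.nml")
    ```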

  11. libNeuroML and PyLEMS: using Python to combine imperative and declarative modelling approaches in computational neuroscience

    Directory of Open Access Journals (Sweden)

    Michael eVella

    2014-04-01

    Full Text Available NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two APIs (Application Programming Interfaces) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment.

  12. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
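    The sliding-window variant mentioned in this abstract is easy to sketch independently of any particular network. The code below assumes a per-patch scoring function `predict_patch` (a placeholder standing in for a trained classifier, not anything from the paper) and slides it across a 2D slice to build a coarse lesion probability map.

    ```python
    # Sliding-window lesion scoring over a 2D CT slice (illustrative only).
    # `predict_patch` is a placeholder for a trained patch classifier.
    import numpy as np

    def predict_patch(patch):
        # Placeholder score: just the mean intensity of the patch.
        return float(patch.mean())

    def sliding_window_map(slice_2d, patch=32, stride=16):
        h, w = slice_2d.shape
        prob = np.zeros_like(slice_2d, dtype=float)
        count = np.zeros_like(slice_2d, dtype=float)
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                score = predict_patch(slice_2d[y:y + patch, x:x + patch])
                prob[y:y + patch, x:x + patch] += score
                count[y:y + patch, x:x + patch] += 1.0
        return prob / np.maximum(count, 1.0)    # average scores of overlapping windows

    ct_slice = np.random.rand(256, 256)          # stand-in for a lung CT slice
    lesion_map = sliding_window_map(ct_slice)
    print(lesion_map.shape, lesion_map.max())
    ```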

  13. Distance Learning

    National Research Council Canada - National Science Library

    Braddock, Joseph

    1997-01-01

    A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...

  14. Relationship Between Non-invasive Brain Stimulation-induced Plasticity and Capacity for Motor Learning.

    Science.gov (United States)

    López-Alonso, Virginia; Cheeran, Binith; Fernández-del-Olmo, Miguel

    2015-01-01

    Cortical plasticity plays a key role in motor learning (ML). Non-invasive brain stimulation (NIBS) paradigms have been used to modulate plasticity in the human motor cortex in order to facilitate ML. However, little is known about the relationship between NIBS-induced plasticity over M1 and ML capacity. We tested whether NIBS-induced MEP changes are related to ML capacity. Fifty-six subjects participated in three NIBS sessions (paired associative stimulation, anodal transcranial direct current stimulation and intermittent theta-burst stimulation) and in three lab-based ML task sessions (serial reaction time, visuomotor adaptation and sequential visual isometric pinch task). After clustering the patterns of response to the different NIBS protocols, we compared the ML variables between the different patterns found. We used regression analysis to explore further the relationship between ML capacity and summary measures of the MEP change. We ran correlations with the "responders" group only. We found no differences in ML variables between clusters. Greater response to NIBS protocols may be predictive of poor performance within certain blocks of the VAT. "Responders" to AtDCS and to iTBS showed significantly faster reaction times than "non-responders." However, the physiological significance of these results is uncertain. MEP changes induced in M1 by PAS, AtDCS and iTBS appear to have little, if any, association with the ML capacity tested with the SRTT, the VAT and the SVIPT. However, cortical excitability changes induced in M1 by AtDCS and iTBS may be related to reaction time and retention of newly acquired skills in certain motor learning tasks. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Jagiellonian University Optimizing Higgs Boson CP Measurement in $H\\to \\tau \\tau $ decay with ML Techniques

    CERN Document Server

    Barberio, Elisabetta; Richter-Wąs, Elżbieta; Wąs, Zbigniew; Zanzi, Daniele

    2017-01-01

    Current measurements of the CP state of the Higgs boson have favoured a scalar Higgs boson but are not able to exclude a mixing of scalar and pseudoscalar Higgs boson states. A measurement of possible mixed CP states of the Higgs boson is best done through the $H \to \tau\tau$ decay mode. The decay products of $\tau$ leptons, produced in $H \to \tau\tau$ decays, encode the CP information of the Higgs. This presents a challenge, as a large proportion of $\tau$ decays involve cascade decays to three pions, which increases the complexity of defining a CP-sensitive observable. Deep learning tools (through neural networks) have been employed to extract as much sensitivity as possible. This neural-network approach has been shown to effectively separate the scalar and pseudoscalar hypotheses with decays of $\tau$ to three pions. Assessing the effectiveness of this approach involves studies of detector resolution and $\tau$-decay modelling. Improvements to the approach are sought through the use of $E_T^{miss}$.
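    The neural-network separation of scalar and pseudoscalar hypotheses can be caricatured with a generic binary classifier. The sketch below trains scikit-learn's MLPClassifier on a toy acoplanarity-like angle whose modulation differs between the two hypotheses; it only illustrates the kind of approach described and is not the analysis code.

    ```python
    # Generic classifier sketch for CP-hypothesis separation (illustrative toy model).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    phi = rng.uniform(0, 2 * np.pi, n)          # toy acoplanarity-like angle
    label = rng.integers(0, 2, n)               # 0 = scalar, 1 = pseudoscalar
    # Toy model: the pseudoscalar sample gets a phase-shifted cosine modulation.
    weight = 1 + 0.4 * np.cos(phi + np.pi * label)
    keep = rng.uniform(0, 2, n) < weight        # accept-reject sampling
    X = np.column_stack([np.cos(phi), np.sin(phi)])[keep]
    y = label[keep]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("separation accuracy on the toy sample:", round(clf.score(X_te, y_te), 3))
    ```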

  16. Emerging technology and techniques

    Directory of Open Access Journals (Sweden)

    Gopi Naveen Chander

    2015-01-01

    Full Text Available A technique of fabricating feldspathic porcelain pressable ingots was proposed. A 5 ml disposable syringe was used to condense the powder slurry. The condensed porcelain was sintered at 900 °C to produce porcelain ingots. The fabricated porcelain ingots were used in pressable ceramic machines. The technological advantages of the pressable system improve the properties, and the fabricated ingot enhances the application of feldspathic porcelain.

  17. Is There a Relationship between the Usage of Active and Collaborative Learning Techniques and International Students' Study Anxiety?

    Science.gov (United States)

    Khoshlessan, Rezvan

    2013-01-01

    This study was designed to explore the relationships between the international students' perception of professors' instructional practices (the usage of active and collaborative learning techniques in class) and the international students' study anxiety. The dominant goal of this research was to investigate whether the professors' usage of active…

  18. Using the IGCRA (Individual, Group, Classroom Reflective Action) Technique to Enhance Teaching and Learning in Large Accountancy Classes

    Science.gov (United States)

    Poyatos Matas, Cristina; Ng, Chew; Muurlink, Olav

    2011-01-01

    First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to attempt to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an…

  19. Lymphocyte labelling technique for the exploration of kidney transplants

    International Nuclear Information System (INIS)

    Guey, A.; Touraine, J.L.; Collard, M.; Claveyrolas, P.; Bouteiller, O. de; Traeger, J.

    The labelling technique is developed with a precise clinical exploration in view and has to take into account the following rules or conditions: - the blood sample must be smaller than 20 ml; - the manipulation must not last more than 3 hours; - the immunological properties of the labelled lymphocytes must be kept intact; - the solution reinjected into the patient must contain no aggregates, be absolutely sterile and possess a radioactivity above 1 mCi. The technique of extraction and labelling from a sample of about 15 ml is described. The main factors responsible for the quality of the labelling are analysed, together with the labelling and irradiation dose effects on certain properties of the lymphocytes (viability, rosette E formation, proliferative response to mitogens) [fr

  20. Machine Learning for High-Throughput Stress Phenotyping in Plants.

    Science.gov (United States)

    Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh Kumar; Sarkar, Soumik

    2016-02-01

    Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
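    A minimal way to reproduce the "unweighted sum kernel" baseline discussed here is to average precomputed kernel matrices and feed them to an SVM. The sketch below does this with scikit-learn on synthetic features standing in for two image descriptor channels; it is not the paper's MKL solver.

    ```python
    # Unweighted sum-kernel SVM baseline (illustrative; not an MKL optimizer).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    # Two synthetic "feature channels" standing in for different image descriptors.
    X1 = rng.normal(size=(200, 20))
    X2 = rng.normal(size=(200, 50))
    y = (X1[:, 0] + 0.5 * X2[:, 0] > 0).astype(int)
    train, test = np.arange(150), np.arange(150, 200)

    def combined_kernel(rows, cols):
        # Fixed 1/m mixture of per-channel RBF kernels (the unweighted sum kernel).
        k1 = rbf_kernel(X1[rows], X1[cols], gamma=0.05)
        k2 = rbf_kernel(X2[rows], X2[cols], gamma=0.02)
        return 0.5 * (k1 + k2)

    svm = SVC(kernel="precomputed")
    svm.fit(combined_kernel(train, train), y[train])
    print("sum-kernel accuracy:", svm.score(combined_kernel(test, train), y[test]))
    ```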

  2. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning,""Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning,""Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  3. 3D CT cerebral angiography technique using a 320-detector machine with a time–density curve and low contrast medium volume: Comparison with fixed time delay technique

    International Nuclear Information System (INIS)

    Das, K.; Biswas, S.; Roughley, S.; Bhojak, M.; Niven, S.

    2014-01-01

    Aim: To describe a cerebral computed tomography angiography (CTA) technique using a 320-detector CT machine and a small contrast medium volume (35 ml, 15 ml for test bolus). Also, to compare the quality of these images with that of the images acquired using a larger contrast medium volume (90 or 120 ml) and a fixed time delay (FTD) of 18 s using a 16-detector CT machine. Materials and methods: Cerebral CTA images were acquired using a 320-detector machine by synchronizing the scanning time with the time of peak enhancement as determined from the time–density curve (TDC) using a test bolus dose. The quality of CTA images acquired using this technique was compared, retrospectively, with that obtained using an FTD of 18 s (by 16-detector CT). Average densities in four different intracranial arteries, overall opacification of arteries, and the degree of venous contamination were graded and compared. Results: Thirty-eight patients were scanned using the TDC technique and 40 patients using the FTD technique. The arterial densities achieved by the TDC technique were higher (significant for supraclinoid and basilar arteries, p < 0.05). The proportion of images deemed as having “good” arterial opacification was 95% for TDC and 90% for FTD. The degree of venous contamination was significantly higher in images produced by the FTD technique (p < 0.001). Conclusion: Good diagnostic quality CTA images with significant reduction of venous contamination can be achieved with a low contrast medium dose using a 320-detector machine by coupling the time of data acquisition with the time of peak enhancement
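    The key computational step in the TDC technique is extracting the time of peak enhancement from the test-bolus curve and triggering the main acquisition accordingly. A minimal sketch of that step, with invented numbers in place of ROI measurements, is shown below.

    ```python
    # Estimate time of peak arterial enhancement from a test-bolus time-density curve.
    # Times and densities are invented; in practice they come from ROI measurements.
    import numpy as np

    times = np.arange(0, 30, 2.0)                                   # s after test-bolus injection
    density = 40 + 180 * np.exp(-0.5 * ((times - 16) / 4) ** 2)     # HU, synthetic curve
    peak_time = times[np.argmax(density)]                           # time of peak enhancement
    print(f"trigger the main CTA acquisition around {peak_time:.0f} s after contrast injection")
    ```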

  4. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms allow to tackle the quality

  5. Overview of manifold learning techniques for the investigation of disruptions on JET

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2014-01-01

    Identifying a low-dimensional embedding of a high-dimensional data set allows exploration of the data structure. In this paper we tested some existing manifold learning techniques for discovering such an embedding within the multidimensional operational space of a nuclear fusion tokamak. Among the manifold learning methods, the following approaches have been investigated: linear methods, such as principal component analysis and grand tour, and nonlinear methods, such as self-organizing map and its probabilistic variant, generative topographic mapping. In particular, the last two methods allow us to obtain a low-dimensional (typically two-dimensional) map of the high-dimensional operational space of the tokamak. These maps provide a way of visualizing the structure of the high-dimensional plasma parameter space and allow discrimination between regions characterized by a high risk of disruption and those with a low risk of disruption. The data for this study come from plasma discharges at JET selected from 2005 up to 2009. The self-organizing map and generative topographic mapping provide the most benefits in the visualization of very large and high-dimensional datasets. Some measures have been used to evaluate their performance. Special emphasis has been put on the position of outliers and extreme points, map composition, quantization errors and topological errors. (paper)
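    Among the methods listed, the linear baseline (principal component analysis) is the simplest to sketch. The code below projects a synthetic "operational space" of plasma parameters onto two principal components, the kind of 2D map that the nonlinear methods (SOM, GTM) then refine; all feature names and labels are invented.

    ```python
    # 2D PCA embedding of a synthetic high-dimensional operational space (illustrative).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 12))               # invented plasma-parameter features
    disruptive = rng.integers(0, 2, n)         # stand-in labels: 1 = disruptive discharge

    X_std = StandardScaler().fit_transform(X)  # scale features before PCA
    embedding = PCA(n_components=2).fit_transform(X_std)
    print("embedding shape:", embedding.shape)
    # embedding[:, 0] and embedding[:, 1] can be scatter-plotted, coloured by
    # `disruptive`, to visualize high- vs low-risk regions of the operational space.
    ```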

  6. Machine Learning Approaches for Predicting Radiation Therapy Outcomes: A Clinician's Perspective.

    Science.gov (United States)

    Kang, John; Schwartz, Russell; Flickinger, John; Beriwal, Sushil

    2015-12-01

    Radiation oncology has always been deeply rooted in modeling, from the early days of isoeffect curves to the contemporary Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) initiative. In recent years, medical modeling for both prognostic and therapeutic purposes has exploded thanks to increasing availability of electronic data and genomics. One promising direction that medical modeling is moving toward is adopting the same machine learning methods used by companies such as Google and Facebook to combat disease. Broadly defined, machine learning is a branch of computer science that deals with making predictions from complex data through statistical models. These methods serve to uncover patterns in data and are actively used in areas such as speech recognition, handwriting recognition, face recognition, "spam" filtering (junk email), and targeted advertising. Although multiple radiation oncology research groups have shown the value of applied machine learning (ML), clinical adoption has been slow due to the high barrier to understanding these complex models by clinicians. Here, we present a review of the use of ML to predict radiation therapy outcomes from the clinician's point of view with the hope that it lowers the "barrier to entry" for those without formal training in ML. We begin by describing 7 principles that one should consider when evaluating (or creating) an ML model in radiation oncology. We next introduce 3 popular ML methods--logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN)--and critique 3 seminal papers in the context of these principles. Although current studies are in exploratory stages, the overall methodology has progressively matured, and the field is ready for larger-scale further investigation. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. A modified technique of LAVH with the Biswas uterovaginal elevator.

    Science.gov (United States)

    Lee, Eric Tat Choi; Wong, Felix Wu Shun; Lim, Chi Eung Danforn

    2009-01-01

    This was a review of 512 consecutive cases of laparoscopic-assisted vaginal hysterectomy (LAVH) for benign gynecologic conditions with the Biswas uterovaginal elevator (BUVE) from February 2003 through June 2008. A single operator, using the BUVE and a standard surgical protocol, performed all hysterectomies. Variables analyzed included patient demographics, operative times, uterine weight, hospital stay, intraoperative blood loss, and intraoperative and postoperative complications. LAVH was successfully performed for all patients. The median operative time was 62 [corrected] minutes, range 35 to 250 minutes. The median uterine weight was 231 [corrected] g (range 43-1690 g). The median estimated blood loss was 100 [corrected] mL (range 5-1600 mL). The median length of hospital stay was 1 [corrected] day (range 1-6 days). [corrected] No case sustained injury to the ureter or major vessels or required conversion. LAVH with the BUVE eliminates the need for laparotomy in performing hysterectomies for benign gynecologic disorders. The BUVE can achieve a full range of uterine manipulation. It allows safe and easy dissection of the bladder and precise colpotomy through simultaneous uterine elevation and delineation of vaginal fornices. Prevention of ureteric injury is made possible by moving the surgical field away from the ureter. The technique described can be used to handle a wide variety of diseases and situations and has been shown to be safe, fast, easy to learn, and reproducible and carries few complications.

  8. Machine learning \\& artificial intelligence in the quantum domain

    OpenAIRE

    Dunjko, Vedran; Briegel, Hans J.

    2017-01-01

    Quantum information technologies, and intelligent learning systems, are both emergent technologies that will likely have a transforming impact on our society. The respective underlying fields of research -- quantum information (QI) versus machine learning (ML) and artificial intelligence (AI) -- have their own specific challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent th...

  9. TumorML: Concept and requirements of an in silico cancer modelling markup language.

    Science.gov (United States)

    Johnson, David; Cooper, Jonathan; McKeever, Steve

    2011-01-01

    This paper describes the initial groundwork carried out as part of the European Commission funded Transatlantic Tumor Model Repositories project, to develop a new markup language for computational cancer modelling, TumorML. In this paper we describe the motivations for such a language, arguing that current state-of-the-art biomodelling languages are not suited to the cancer modelling domain. We go on to describe the work that needs to be done to develop TumorML, the conceptual design, and a description of what existing markup languages will be used to compose the language specification.

  10. Representing Misalignments of the STAR Geometry Model using AgML

    Science.gov (United States)

    Webb, Jason C.; Lauret, Jérôme; Perevotchikov, Victor; Smirnov, Dmitri; Van Buren, Gene

    2017-10-01

    The STAR Heavy Flavor Tracker (HFT) was designed to provide high-precision tracking for the identification of charmed hadron decays in heavy-ion collisions at RHIC. It consists of three independently mounted subsystems, providing four precision measurements along the track trajectory, with the goal of pointing decay daughters back to vertices displaced by less than 100 microns from the primary event vertex. The ultimate efficiency and resolution of the physics analysis will be driven by the quality of the simulation and reconstruction of events in heavy-ion collisions. In particular, it is important that the geometry model properly accounts for the relative misalignments of the HFT subsystems, along with the alignment of the HFT relative to STAR's primary tracking detector, the Time Projection Chamber (TPC). The Abstract Geometry Modeling Language (AgML) provides a single description of the STAR geometry, generating both our simulation (GEANT 3) and reconstruction geometries (ROOT). AgML implements an ideal detector model, while misalignments are stored separately in database tables. These have historically been applied at the hit level. Simulated detector hits are projected from their ideal position along the track’s trajectory, until they intersect the misaligned detector volume, where the struck detector element is calculated for hit digitization. This scheme has worked well as hit errors have been negligible compared with the size of sensitive volumes. The precision and complexity of the HFT detector require us to apply misalignments to the detector volumes themselves. In this paper we summarize the extension of the AgML language and support libraries to enable the static misalignment of our reconstruction and simulation geometries, discussing the design goals, limitations and path to full misalignment support in ROOT/VMC-based simulation.

  11. Negotiating the Rules of Engagement: Exploring Perceptions of Dance Technique Learning through Bourdieu's Concept of "Doxa"

    Science.gov (United States)

    Rimmer, Rachel

    2017-01-01

    This article presents the findings from a focus group discussion conducted with first year undergraduate dance students in March 2015. The focus group concluded a cycle of action research during which the researcher explored the use of enquiry-based learning approaches to teaching dance technique in higher education. Grounded in transformative and…

  12. Estimating Global Seafloor Total Organic Carbon Using a Machine Learning Technique and Its Relevance to Methane Hydrates

    Science.gov (United States)

    Lee, T. R.; Wood, W. T.; Dale, J.

    2017-12-01

    Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
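    The k-nearest-neighbours interpolation with ten-fold validation described here can be sketched generically with scikit-learn. The predictor names below (bathymetry, distance from coast) follow the abstract, but the data and the functional relationship are synthetic.

    ```python
    # KNN regression of seafloor TOC from geospatial predictors (synthetic data).
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    bathymetry = rng.uniform(-6000, 0, n)        # metres
    dist_coast = rng.uniform(0, 2000, n)         # km
    X = np.column_stack([bathymetry, dist_coast])
    toc = 2.0 + 0.0002 * bathymetry - 0.0005 * dist_coast + rng.normal(0, 0.1, n)

    model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
    scores = cross_val_score(model, X, toc, cv=10, scoring="r2")   # ten-fold validation
    print("mean 10-fold R^2:", scores.mean().round(3))
    ```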

  13. Airway Clearance Techniques (ACTs)

    Medline Plus

    Full Text Available ... many challenges, including medical, social, and financial. By learning more about how you can manage your disease every day, you can ultimately help find a ...

  14. Robotic radical cystectomy and intracorporeal urinary diversion: The USC technique

    Directory of Open Access Journals (Sweden)

    Andre Luis de Castro Abreu

    2014-01-01

    Full Text Available Introduction: Radical cystectomy is the gold-standard treatment for muscle-invasive and refractory nonmuscle-invasive bladder cancer. We describe our technique for robotic radical cystectomy (RRC) and intracorporeal urinary diversion (ICUD), which replicates open surgical principles, and present our preliminary results. Materials and Methods: Specific descriptions for preoperative planning, surgical technique, and postoperative care are provided. Demographics, perioperative and 30-day complications data were collected prospectively and retrospectively analyzed. Learning curve trends were analyzed individually for ileal conduits (IC) and neobladders (NB). SAS® Software Version 9.3 was used for statistical analyses with statistical significance set at P < 0.05. Results: Between July 2010 and September 2013, RRC and lymph node dissection with ICUD were performed in 103 consecutive patients (orthotopic NB = 46, IC = 57). All procedures were completed robotically, replicating the open surgical principles. The learning curve trends showed a significant reduction in hospital stay for both IC (11 vs. 6 days, P < 0.01) and orthotopic NB (13 vs. 7.5 days, P < 0.01) when comparing the first third of the cohort with the rest of the group. Overall median (range) operative time and estimated blood loss were 7 h (4.8-13) and 200 mL (50-1200), respectively. Within 30 days postoperatively, complications occurred in 61 (59%) patients, with the majority being low grade (n = 43), and no patient died. Median (range) node yield was 36 (0-106), and 4 (3.9%) specimens had positive surgical margins. Conclusions: Robotic radical cystectomy with totally intracorporeal urinary diversion is safe and feasible. It can be performed using the established open surgical principles with encouraging perioperative outcomes.

  15. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    Science.gov (United States)

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided in two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
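    Several open-source readers already handle this format; as one illustration, the sketch below uses the third-party pyimzML package (an assumption about the reader, not part of the imzML specification itself) to iterate over the spectra of an imzML/ibd pair.

    ```python
    # Reading an imzML dataset with the third-party pyimzML package (assumed API).
    # The XML part carries metadata and byte offsets; spectra live in the binary .ibd file.
    from pyimzml.ImzMLParser import ImzMLParser

    parser = ImzMLParser("example.imzML")          # expects example.ibd alongside
    for idx, (x, y, z) in enumerate(parser.coordinates):
        mzs, intensities = parser.getspectrum(idx)
        if idx == 0:
            print(f"pixel ({x}, {y}): {len(mzs)} m/z values, max intensity {max(intensities):.1f}")
    ```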

  16. A Photometric Technique for Determining Fluid Concentration using Consumer-Grade Hardware

    Science.gov (United States)

    Leslie, F.; Ramachandran, N.

    1999-01-01

    In support of a separate study to produce an exponential concentration gradient in a magnetic fluid, a noninvasive technique for determining species concentration from off-the-shelf hardware has been developed. The approach uses a backlighted fluid test cell photographed with a commercial digital camcorder. Because the light extinction coefficient is wavelength dependent, tests were conducted to determine the best filter color to use, although some guidance was also provided using an absorption spectrophotometer. With the appropriate filter in place, the attenuation of the light passing through the test cell was captured by the camcorder. The digital image was analyzed for intensity using software from Scion Image Corp. downloaded from the Internet. The analysis provides a two-dimensional array of concentration with an average error of 0.0095 ml/ml. This technique is superior to invasive techniques, which require extraction of a sample that disturbs the concentration distribution in the test cell. Refinements of this technique using a true monochromatic laser light source are also discussed.
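    The underlying relation is the Beer-Lambert law: once the product of extinction coefficient and optical path length has been calibrated, concentration follows from the ratio of transmitted to incident intensity. The sketch below illustrates this inversion with made-up calibration values.

    ```python
    # Beer-Lambert style concentration estimate from image intensity (illustrative values).
    import numpy as np

    def concentration_from_intensity(I, I0, eps_times_path):
        """Invert I = I0 * exp(-eps * L * c) for c, given a calibrated eps * L."""
        return -np.log(I / I0) / eps_times_path

    I0 = 230.0             # backlight grey level through pure carrier fluid (hypothetical)
    eps_times_path = 3.2   # hypothetical calibrated extinction coefficient x cell thickness
    pixel_intensity = np.array([230.0, 180.0, 120.0, 60.0])
    print(concentration_from_intensity(pixel_intensity, I0, eps_times_path))
    ```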

  17. Sodium corrosion tests in the ML 1 circuit

    International Nuclear Information System (INIS)

    Borgstedt, H.U.

    1977-01-01

    In the ML-1 circuit of the 'Juan Vigon' research centre in Madrid, sodium corrosion tests are being carried out on the austenitic steels DIN 1.4970 (X10NiCrMoTiB1515) and DIN 1.4301 (X5CrNi189) at temperatures between 500 and 700 °C. The exposure time of the samples now amounts to 6,000 h. Every 1,000 h, the samples were weighed in order to measure corrosion and deposition effects. After 3,000 and 6,000 h, selected samples were removed for destructive examination. The results are given. (GSC) [de

  18. Technique adaptation, strategic replanning, and team learning during implementation of MR-guided brachytherapy for cervical cancer.

    Science.gov (United States)

    Skliarenko, Julia; Carlone, Marco; Tanderup, Kari; Han, Kathy; Beiki-Ardakani, Akbar; Borg, Jette; Chan, Kitty; Croke, Jennifer; Rink, Alexandra; Simeonov, Anna; Ujaimi, Reem; Xie, Jason; Fyles, Anthony; Milosevic, Michael

    MR-guided brachytherapy (MRgBT) with interstitial needles is associated with improved outcomes in cervical cancer patients. However, there are implementation barriers, including magnetic resonance (MR) access, practitioner familiarity/comfort, and efficiency. This study explores a graded MRgBT implementation strategy that included the adaptive use of needles, strategic use of MR imaging/planning, and team learning. Twenty patients with cervical cancer were treated with high-dose-rate MRgBT (28 Gy in four fractions, two insertions, daily MR imaging/planning). A tandem/ring applicator alone was used for the first insertion in most patients. Needles were added for the second insertion based on evaluation of the initial dosimetry. An interdisciplinary expert team reviewed and discussed the MR images and treatment plans. Dosimetry-trigger technique adaptation with the addition of needles for the second insertion improved target coverage in all patients with suboptimal dosimetry initially without compromising organ-at-risk (OAR) sparing. Target and OAR planning objectives were achieved in most patients. There were small or no systematic differences in tumor or OAR dosimetry between imaging/planning once per insertion vs. daily and only small random variations. Peer review and discussion of images, contours, and plans promoted learning and process development. Technique adaptation based on the initial dosimetry is an efficient approach to implementing MRgBT while gaining comfort with the use of needles. MR imaging and planning once per insertion is safe in most patients as long as applicator shifts, and large anatomical changes are excluded. Team learning is essential to building individual and programmatic competencies. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  19. Indigenous technique of fabricating vaginal mould for vaginal reconstruction and uterine drainage in McIndoe vaginoplasty using 10 ml syringe

    Directory of Open Access Journals (Sweden)

    Brijesh Mishra

    2016-01-01

    Full Text Available Absence of the vagina poses a multitude of physical and psychosocial problems in a woman's life. 10% of Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome patients with a high vaginal septum and vaginal atresia have the additional issue of draining the uterine cavity. McIndoe vaginoplasty is a universally accepted and widely practiced procedure for neovaginal reconstruction. Simultaneous reconstruction of the vagina with continued uterine drainage presents a surgical challenge. We offer a simple solution of creating a vaginal mould using a 10 ml disposable syringe, which enables graft application to the neovaginal cavity with simultaneous protected uterine drainage per vaginam. A total of 10 patients were included in this study, of whom 4 needed a uterine drainage procedure in addition to neovaginal creation. All the patients fared well; there were no problems such as graft loss or vaginal mould extrusion. Fabrication of a mould for the graft enables easy dressing changes without disturbing the skin graft. This innovation offers a simple, easily reproducible and cheap way of fabricating a vaginal mould for McIndoe vaginoplasty. It is especially useful for neovaginal graft application and simultaneous uterine drainage.

  20. Lower tidal volume strategy (≈3 ml/kg) combined with extracorporeal CO2 removal versus "conventional" protective ventilation (6 ml/kg) in severe ARDS

    OpenAIRE

    Bein, Thomas; Weber-Carstens, Steffen; Goldmann, Anton; Müller, Thomas; Staudinger, Thomas; Brederlau, Jörg; Muellenbach, Ralf; Dembinski, Rolf; Graf, Bernhard M.; Wewalka, Marlene; Philipp, Alois; Wernecke, Klaus-Dieter; Lubnow, Matthias; Slutsky, Arthur S.

    2013-01-01

    Background Acute respiratory distress syndrome is characterized by damage to the lung caused by various insults, including ventilation itself, and tidal hyperinflation can lead to ventilator induced lung injury (VILI). We investigated the effects of a low tidal volume (VT) strategy (VT ≈ 3 ml/kg/predicted body weight [PBW]) using pumpless extracorporeal lung assist in established ARDS. Methods Seventy-nine patients were enrolled after a "stabilization period" (24 h with optimized therapy an...

  1. The Effectiveness of Using WhatsApp Messenger as One of Mobile Learning Techniques to Develop Students' Writing Skills

    Science.gov (United States)

    Fattah, Said Fathy El Said Abdul

    2015-01-01

    The present study was an attempt to determine the effectiveness of using WhatsApp Messenger as one of the mobile learning techniques to develop students' writing skills. Participants were 30 second-year college students from the English department of a private university in Saudi Arabia. The experimental group (N = 15) used WhatsApp technology to develop…

  2. Video Quality Assessment and Machine Learning: Performance and Interpretability

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    In this work we compare a simple and a complex Machine Learning (ML) method used for the purpose of Video Quality Assessment (VQA). The simple ML method chosen is the Elastic Net (EN), which is a regularized linear regression model and easier to interpret. The more complex method chosen is Support Vector Regression (SVR), which has gained popularity in VQA research. Additionally, we present an ML-based feature selection method. Also, it is investigated how well the methods perform when tested on videos from other datasets. Our results show that content-independent cross-validation performance on a single dataset can be misleading and that in the case of very limited training and test data, especially in regards to different content as is the case for many video datasets, a simple ML approach is the better choice.
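    The two model families compared here can be reproduced generically in a few lines. The sketch below fits an Elastic Net and an SVR to synthetic "video features" and compares cross-validated error; it does not attempt to reproduce the paper's datasets or its feature-selection step.

    ```python
    # Elastic Net vs. SVR for a regression target such as a subjective quality score.
    # Synthetic features; illustrative of the comparison only.
    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))                   # stand-in video quality features
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=300)

    models = {
        "ElasticNet": make_pipeline(StandardScaler(), ElasticNet(alpha=0.1)),
        "SVR (RBF)": make_pipeline(StandardScaler(), SVR(C=1.0, gamma="scale")),
    }
    for name, model in models.items():
        mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
        print(f"{name}: cross-validated MSE = {mse:.3f}")
    ```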

  3. Teacher Formation in Super Learning Techniques Applied to the Teaching of the Mathematic in the Education Secondary

    Directory of Open Access Journals (Sweden)

    Avilner Rafael Páez Pereira

    2017-11-01

    Full Text Available The purpose of the study was to train LB "José Véliz" teachers for the teaching of mathematics through the application of super-learning techniques, based on the Participatory Action Research modality proposed by López de Ceballos (2008) and following the model of Lewin's action cycles (1946, cited by Latorre, 2007), drawing on the theories of humanism (Martínez, 2009), multiple intelligences (Armstrong, 2006) and the Super learning of Sambrano and Stainer (2003). Within the framework of the critical-social paradigm and a qualitative research design, a plan of approach to the group was made, in which the main problems were listed through brainstorming and informal interviews; these were prioritized, and an awareness-raising process was then carried out, together with the formulation of an overall plan of action. Among the results were 6 training workshops on techniques of breathing, relaxation, aromatherapy, music therapy, positive programming, color in the classroom, and song in mathematical algorithms, in which processes of reflection were established on the benefits or obstacles encountered in applying these techniques to the transformation of the educational reality, producing a didactic strategy as a product of the experiences gained.

  4. Femtosecond laser-assisted cataract surgery with bimanual technique: learning curve for an experienced cataract surgeon.

    Science.gov (United States)

    Cavallini, Gian Maria; Verdina, Tommaso; De Maria, Michele; Fornasari, Elisa; Volpini, Elisa; Campi, Luca

    2017-11-29

    To describe the intraoperative complications and the learning curve of microincision cataract surgery assisted by femtosecond laser (FLACS) with bimanual technique performed by an experienced surgeon. It is a prospective, observational, comparative case series. A total of 120 eyes which underwent bimanual FLACS by the same experienced surgeon during his first experience were included in the study; we considered the first 60 cases as Group A and the second 60 cases as Group B. In both groups, only nuclear sclerosis of grade 2 or 3 was included; an intraocular lens was implanted through a 1.4-mm incision. Best-corrected visual acuity (BCVA), surgically induced astigmatism (SIA), central corneal thickness and endothelial cell loss (ECL) were evaluated before and at 1 and 3 months after surgery. Intraoperative parameters, and intra- and post-operative complications were recorded. In Group A, we had femtosecond laser-related minor complications in 11 cases (18.3%) and post-operative complications in 2 cases (3.3%); in Group B, we recorded 2 cases (3.3%) of femtosecond laser-related minor complications with no post-operative complications. Mean effective phaco time (EPT) was 5.32 ± 3.68 s in Group A and 4.34 ± 2.39 s in Group B with a significant difference (p = 0.046). We recorded a significant mean BCVA improvement at 3 months in both groups (p < 0.05). Finally, we found significant ECL in both groups with a significant difference between the two groups (p = 0.042). FLACS with bimanual technique and low-energy LDV Z8 is associated with a necessary initial learning curve. After the first adjustments in the surgical technique, this technology seems to be safe and effective with rapid visual recovery and it helps surgeons to standardize the crucial steps of cataract surgery.

  5. Enhancement of plant metabolite fingerprinting by machine learning.

    Science.gov (United States)

    Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H

    2010-08-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by (1)H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, (1)H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted.
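    The random-forest importance scores highlighted in this abstract are straightforward to obtain in practice. The sketch below trains a classifier on synthetic "fingerprint" bins and ranks them by importance, which is analogous to (but much simpler than) the metabolite-feature ranking described.

    ```python
    # Random forest classification of synthetic spectral "fingerprints" with
    # feature-importance ranking (illustrative of the approach, not the study's data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_samples, n_bins = 240, 50
    X = rng.normal(size=(n_samples, n_bins))         # stand-in spectral bins
    genotype = rng.integers(0, 3, n_samples)         # three hypothetical genotype classes
    X[:, 5] += genotype                              # make a couple of bins informative
    X[:, 17] -= 0.5 * genotype

    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(X, genotype)
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("most discriminative bins:", top)
    print("importances:", rf.feature_importances_[top].round(3))
    ```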

  6. Machine Learning Approaches for Predicting Radiation Therapy Outcomes: A Clinician's Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Kang, John [Medical Scientist Training Program, University of Pittsburgh-Carnegie Mellon University, Pittsburgh, Pennsylvania (United States); Schwartz, Russell [Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania (United States); Flickinger, John [Departments of Radiation Oncology and Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States); Beriwal, Sushil, E-mail: beriwals@upmc.edu [Department of Radiation Oncology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania (United States)

    2015-12-01

    Radiation oncology has always been deeply rooted in modeling, from the early days of isoeffect curves to the contemporary Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) initiative. In recent years, medical modeling for both prognostic and therapeutic purposes has exploded thanks to increasing availability of electronic data and genomics. One promising direction that medical modeling is moving toward is adopting the same machine learning methods used by companies such as Google and Facebook to combat disease. Broadly defined, machine learning is a branch of computer science that deals with making predictions from complex data through statistical models. These methods serve to uncover patterns in data and are actively used in areas such as speech recognition, handwriting recognition, face recognition, “spam” filtering (junk email), and targeted advertising. Although multiple radiation oncology research groups have shown the value of applied machine learning (ML), clinical adoption has been slow due to the high barrier to understanding these complex models by clinicians. Here, we present a review of the use of ML to predict radiation therapy outcomes from the clinician's point of view with the hope that it lowers the “barrier to entry” for those without formal training in ML. We begin by describing 7 principles that one should consider when evaluating (or creating) an ML model in radiation oncology. We next introduce 3 popular ML methods—logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN)—and critique 3 seminal papers in the context of these principles. Although current studies are in exploratory stages, the overall methodology has progressively matured, and the field is ready for larger-scale further investigation.

  7. Educating patients: understanding barriers, learning styles, and teaching techniques.

    Science.gov (United States)

    Beagley, Linda

    2011-10-01

    Health care delivery and education have become a challenge for providers. Nurses and other professionals are challenged daily to ensure that the patient has the necessary information to make informed decisions. Patients and their families are given a multitude of information about their health and commonly must make important decisions from these facts. Obstacles that prevent easy delivery of health care information include literacy, culture, language, and physiological barriers. It is up to the nurse to assess and evaluate the patient's learning needs and readiness to learn, because everyone learns differently. This article examines how each of these barriers affects care delivery and discusses teaching and learning strategies. Copyright © 2011 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  8. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Science.gov (United States)

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background Despite the proven benefits, laparoscopic colorectal surgery is still underutilized among surgeons. A steep learning curve is one of the causes of its limited adoption. The aim of the study is to determine the feasibility and morbidity rate of laparoscopic colorectal surgery in a single-institution, "learning curve" experience implementing a well-standardized operative technique and recovery protocol. Methods The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast-track recovery programme. Recovery parameters, short-term outcomes, morbidity, and mortality were assessed. Results Type of resections: 20 left-side resections, 8 right-side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo–Clavien III): 4%. No anastomotic leak, no mortality, no 30-day readmission. Conclusion Properly performed laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short-term outcomes, even in a learning curve setting. Key factors for better outcomes and for shortening the learning curve appear to be the adoption of a standardized technique and training model, along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  9. Teaching Techniques, Types of Personality, and English Listening Skill

    Directory of Open Access Journals (Sweden)

    Ni Made Ratminingsih

    2013-01-01

    Full Text Available Abstract: Teaching Techniques, Types of Personality, and English Listening Skill. This study investigated the effect of teaching techniques and types of personality on English listening skill. This experimental study involved 88 students, who were selected randomly through a multi-stage random sampling technique. The results of the research indicate that there is an interaction effect between teaching technique and type of personality on English listening skill; there is no significant difference in listening skill between the group of students who learn using the game technique and those who learn using the song technique; the listening skill of students with an extrovert personality is better than that of students with an introvert personality; the listening skill of extrovert students who learn using the game technique is lower than that of those who learn using the song technique; and the listening skill of introvert students who learn using the game technique is higher than that of those who learn using the song technique.

  10. The analysis results of EEWS(Earthquake Early Warning System) about Iksan(Ml4.3) and Ulsan(Ml5.0) earthquakes in Korea

    Science.gov (United States)

    Park, J. H.; Chi, H. C.; Lim, I. S.; Seong, Y. J.; Pak, J.

    2016-12-01

    EEW (Earthquake Early Warning) service to the public has been officially operated by KMA (Korea Meteorological Administration) since 2015 in Korea. For KMA's official EEW service, KIGAM has adopted ElarmS from UC Berkeley BSL and modified the local magnitude relation, 1-D travel time curves, and association procedures using real-time waveforms from about 160 seismic stations of KMA and KIGAM. We checked the performance of the EEWS (Earthquake Early Warning System) by reviewing two moderate-size earthquakes: the Iksan earthquake (Ml 4.3), inside the network, and the Ulsan earthquake (Ml 5.0), which occurred in the sea southeast of Korea, outside the network. For the Iksan earthquake, the first trigger, at station NPR, took 2.3 sec, and the BUY and JEO2 stations were associated to produce the first event version 10.07 sec after the origin time. Because the epicentral distance of JEO2 is about 30 km, the estimated travel time is 6.2 sec, so the delay including transmission and processing is estimated at 3.87 sec, assuming a P-wave velocity of 5 km/sec and a focal depth of 8 km. The first magnitude estimate was M4.9, slightly larger than the Ml 4.3 determined by KIGAM. After three more station triggers were added (CHO, KMSA, PORA), the estimated magnitude became M4.6, and it settled at M4.3 with 10 stations. For the Ulsan earthquake, the first trigger took 11.04 sec and the first alert, with 3 stations, came 14.8 sec after the origin time (OT). The first magnitude estimate was M5.2; however, the difference between the first EEW epicenter and the final manual location was about 63 km because of the poor azimuthal coverage outside the seismic network. At 16.2 sec after OT the fourth station, YSB, was used to update the location to within 6 km of the manual result, with magnitude 5.0, and both location and magnitude remained stable as more stations were added. The Ulsan earthquake was the first event announced to the public by the EEWS, and the process and result were successful; however, we have to
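
    The quoted 6.2-sec travel time and 3.87-sec delay follow from simple geometry; a short sketch of that arithmetic, using only the values stated above, is:

```python
# Reproduce the quoted delay estimate for station JEO2 (Iksan earthquake):
# travel time = hypocentral distance / P velocity, delay = alert time - travel time.
import math

epicentral_distance_km = 30.0   # stated epicentral distance of JEO2
focal_depth_km = 8.0            # assumed focal depth
p_velocity_km_s = 5.0           # assumed P-wave velocity
first_event_time_s = 10.07      # first event version, seconds after origin time

hypocentral_distance = math.hypot(epicentral_distance_km, focal_depth_km)
travel_time = hypocentral_distance / p_velocity_km_s    # about 6.2 s
delay = first_event_time_s - travel_time                # about 3.9 s
print(f"P travel time ≈ {travel_time:.1f} s, processing/transmission delay ≈ {delay:.1f} s")
```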

  11. N-grams Based Supervised Machine Learning Model for Mobile Agent Platform Protection against Unknown Malicious Mobile Agents

    Directory of Open Access Journals (Sweden)

    Pallavi Bagga

    2017-12-01

    Full Text Available For many years, detecting unknown malicious mobile agents before they invade the Mobile Agent Platform has been a challenging problem. The ever-growing threat of malicious agents calls for techniques for automated malicious agent detection. In this context, machine learning (ML) methods are acknowledged to be more effective than signature-based and behavior-based detection methods. Therefore, the prime contribution of this paper is the detection of unknown malicious mobile agents based on n-gram features and a supervised ML approach, which has not been done so far in the sphere of Mobile Agent System (MAS) security. To carry out the study, n-grams ranging from 3 to 9 are extracted from a dataset containing 40 malicious and 40 non-malicious mobile agents. Subsequently, classification is performed using different classifiers. A nested 5-fold cross-validation scheme is employed to avoid bias in the selection of the classifier's optimal parameters. The observations from extensive experiments demonstrate that the work done in this paper is suitable for the task of unknown malicious mobile agent detection in a Mobile Agent Environment, and it also adds ML to the interest list of researchers dealing with MAS security.
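
    A minimal sketch of this kind of pipeline (character n-grams from length 3 to 9 plus a supervised classifier evaluated with nested 5-fold cross-validation) is shown below. The random byte strings stand in for real mobile-agent code, and the linear-kernel SVM is only one possible classifier choice, not necessarily the one used in the paper.

```python
# Minimal sketch: n-gram features plus a supervised classifier under nested CV.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
benign = ["".join(rng.choice(list("abcdef"), 200)) for _ in range(40)]
malicious = ["".join(rng.choice(list("abcdxyz"), 200)) for _ in range(40)]
docs = benign + malicious
labels = [0] * 40 + [1] * 40

# Character n-grams from length 3 to 9, as in the paper
pipe = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 9)),
    SVC(kernel="linear"),
)
# Inner loop tunes the classifier, outer loop estimates generalization (nested CV)
inner = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10]}, cv=5)
scores = cross_val_score(inner, docs, labels, cv=5)
print("nested 5-fold accuracy:", scores.mean().round(3))
```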

  12. A best-first tree-searching approach for ML decoding in MIMO system

    KAUST Repository

    Shen, Chung-An

    2012-07-28

    In MIMO communication systems, maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close-to-ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and on complexity, in terms of search space, are also studied. Our results demonstrate that, with a proper choice of storage size, the proposed method visits 40% fewer nodes than a sphere decoding algorithm at a signal-to-noise ratio (SNR) of 20 dB, and an order of magnitude fewer nodes at an SNR of 0 dB.
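
    A toy illustration of a best-first (priority-queue) tree search for exact ML detection is sketched below. The BPSK constellation, 4x4 real-valued channel, and metric bookkeeping are simplifications chosen for illustration; they do not reproduce the paper's algorithm or its storage management.

```python
# Best-first tree search for ML detection of a small real-valued MIMO system
# with BPSK symbols; nodes are expanded in order of their partial metric, so the
# first full-length leaf popped from the queue is the exact ML solution.
import heapq
import numpy as np

rng = np.random.default_rng(7)
n_tx = 4
H = rng.normal(size=(n_tx, n_tx))             # channel matrix
s_true = rng.choice([-1.0, 1.0], size=n_tx)   # transmitted BPSK symbols
y = H @ s_true + 0.1 * rng.normal(size=n_tx)  # received vector

Q, R = np.linalg.qr(H)
y_tilde = Q.T @ y

# Node = (partial metric, depth, symbols decided so far, counter for tie-breaking);
# detection proceeds from the last layer (row n_tx-1 of R) upward.
counter = 0
heap = [(0.0, 0, (), counter)]
visited = 0
while heap:
    metric, depth, partial, _ = heapq.heappop(heap)
    visited += 1
    if depth == n_tx:                          # first full leaf popped is the ML solution
        s_hat = np.array(partial[::-1])
        break
    row = n_tx - 1 - depth
    for sym in (-1.0, 1.0):
        cand = partial + (sym,)
        # interference from the symbols already decided on the rows below
        interf = sum(R[row, n_tx - 1 - k] * cand[k] for k in range(depth + 1))
        new_metric = metric + (y_tilde[row] - interf) ** 2
        counter += 1
        heapq.heappush(heap, (new_metric, depth + 1, cand, counter))

print("visited nodes:", visited)
print("ML estimate matches transmitted symbols:", np.array_equal(s_hat, s_true))
```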

  13. Children's Negotiations of Visualization Skills during a Design-Based Learning Experience Using Nondigital and Digital Techniques

    Science.gov (United States)

    Smith, Shaunna

    2018-01-01

    In the context of a 10-day summer camp makerspace experience that employed design-based learning (DBL) strategies, the purpose of this descriptive case study was to better understand the ways in which children use visualization skills to negotiate design as they move back and forth between the world of nondigital design techniques (i.e., drawing,…

  14. Myosin light chain kinase regulates synaptic plasticity and fear learning in the lateral amygdala.

    Science.gov (United States)

    Lamprecht, R; Margulies, D S; Farb, C R; Hou, M; Johnson, L R; LeDoux, J E

    2006-01-01

    Learning and memory depend on signaling molecules that affect synaptic efficacy. The cytoskeleton has been implicated in regulating synaptic transmission but its role in learning and memory is poorly understood. Fear learning depends on plasticity in the lateral nucleus of the amygdala. We therefore examined whether the cytoskeletal-regulatory protein, myosin light chain kinase, might contribute to fear learning in the rat lateral amygdala. Microinjection of ML-7, a specific inhibitor of myosin light chain kinase, into the lateral nucleus of the amygdala before fear conditioning, but not immediately afterward, enhanced both short-term memory and long-term memory, suggesting that myosin light chain kinase is involved specifically in memory acquisition rather than in posttraining consolidation of memory. Myosin light chain kinase inhibitor had no effect on memory retrieval. Furthermore, ML-7 had no effect on behavior when the training stimuli were presented in a non-associative manner. Anatomical studies showed that myosin light chain kinase is present in cells throughout lateral nucleus of the amygdala and is localized to dendritic shafts and spines that are postsynaptic to the projections from the auditory thalamus to lateral nucleus of the amygdala, a pathway specifically implicated in fear learning. Inhibition of myosin light chain kinase enhanced long-term potentiation, a physiological model of learning, in the auditory thalamic pathway to the lateral nucleus of the amygdala. When ML-7 was applied without associative tetanic stimulation it had no effect on synaptic responses in lateral nucleus of the amygdala. Thus, myosin light chain kinase activity in lateral nucleus of the amygdala appears to normally suppress synaptic plasticity in the circuits underlying fear learning, suggesting that myosin light chain kinase may help prevent the acquisition of irrelevant fears. Impairment of this mechanism could contribute to pathological fear learning.

  15. Modelling risk of tick exposure in southern Scandinavia using machine learning techniques, satellite imagery, and human population density maps

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    …30 sites (forests and meadows) in each of Denmark, southern Norway, and south-eastern Sweden. At each site we measured presence/absence of ticks, and used the data obtained, along with environmental satellite images, to run Boosted Regression Tree machine learning algorithms to predict overall spatial … and Sweden), areas with high population densities tend to overlap with these zones. Machine learning techniques allow us to predict for larger areas without having to perform extensive sampling all over the region in question, and we were able to produce models and maps with high predictive value. The results…
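
    A rough sketch of this kind of presence/absence modelling is given below, with scikit-learn's gradient boosting standing in for the Boosted Regression Trees used in the study and entirely synthetic "satellite" covariates in place of the real imagery.

```python
# Boosted-tree style presence/absence model on synthetic environmental covariates.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_sites = 90                                   # e.g. 30 sites per country
ndvi = rng.uniform(0.2, 0.9, n_sites)          # hypothetical vegetation index
temp = rng.uniform(5, 18, n_sites)             # hypothetical mean temperature (deg C)
humidity = rng.uniform(40, 95, n_sites)        # hypothetical relative humidity (%)
p_presence = 1 / (1 + np.exp(-(4 * (ndvi - 0.5) + 0.2 * (temp - 10))))
ticks_present = rng.random(n_sites) < p_presence

X = np.column_stack([ndvi, temp, humidity])
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
auc = cross_val_score(brt, X, ticks_present, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(2))
# Once fitted on all sites, brt.predict_proba over a grid of covariates could be
# mapped to produce a predicted risk surface for a whole region.
brt.fit(X, ticks_present)
```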

  16. Application of learning techniques based on kernel methods for the fault diagnosis in industrial processes

    Directory of Open Access Journals (Sweden)

    Jose M. Bernal-de-Lázaro

    2016-05-01

    Full Text Available This article summarizes the main contributions of the PhD thesis titled "Application of learning techniques based on kernel methods for the fault diagnosis in industrial processes". The thesis focuses on the analysis and design of fault diagnosis systems (DDF) based on historical data. Specifically, it provides: (1) new criteria for tuning the kernel methods used to select features with a high discriminative capacity for fault diagnosis tasks; (2) a process-monitoring approach based on multivariate statistical techniques that incorporates reinforced information on the dynamics of the Hotelling's T2 and SPE statistics, whose combination with kernel methods improves the detection of small-magnitude faults; and (3) a robustness index to compare the performance of diagnosis classifiers, taking into account their insensitivity to possible noise and disturbances in the historical data.
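
    A minimal sketch of the PCA-based monitoring statistics mentioned in point (2), Hotelling's T2 and SPE, is given below on synthetic sensor data; the control limits and the injected fault are placeholders, and no kernel method is applied in this simplified version.

```python
# PCA-based process monitoring with Hotelling's T2 and SPE (Q) statistics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X_train = rng.normal(size=(500, 10))          # normal operating data (500 samples, 10 sensors)
x_new = rng.normal(size=10)
x_new[3] += 4.0                               # inject a fault on sensor 3

mean, std = X_train.mean(axis=0), X_train.std(axis=0)
Xs = (X_train - mean) / std
pca = PCA(n_components=4).fit(Xs)

def monitoring_stats(x):
    xs = (x - mean) / std
    scores = pca.transform(xs.reshape(1, -1))[0]
    t2 = np.sum(scores ** 2 / pca.explained_variance_)       # Hotelling's T2
    residual = xs - pca.inverse_transform(scores.reshape(1, -1))[0]
    spe = np.sum(residual ** 2)                              # SPE (Q statistic)
    return t2, spe

# Empirical control limits from the training data (e.g. 99th percentiles)
train_stats = np.array([monitoring_stats(x) for x in X_train])
t2_limit, spe_limit = np.percentile(train_stats, 99, axis=0)
t2, spe = monitoring_stats(x_new)
print(f"T2 = {t2:.1f} (limit {t2_limit:.1f}), SPE = {spe:.1f} (limit {spe_limit:.1f})")
```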

  17. Machine learning in Python essential techniques for predictive analysis

    CERN Document Server

    Bowles, Michael

    2015-01-01

    Learn a simpler and more effective way to analyze data and predict outcomes with Python Machine Learning in Python shows you how to successfully analyze data using only two core machine learning algorithms, and how to apply them using Python. By focusing on two algorithm families that effectively predict outcomes, this book is able to provide full descriptions of the mechanisms at work, and the examples that illustrate the machinery with specific, hackable code. The algorithms are explained in simple terms with no complex math and applied using Python, with guidance on algorithm selection, d

  18. ClimateNet: A Machine Learning dataset for Climate Science Research

    Science.gov (United States)

    Prabhat, M.; Biard, J.; Ganguly, S.; Ames, S.; Kashinath, K.; Kim, S. K.; Kahou, S.; Maharaj, T.; Beckham, C.; O'Brien, T. A.; Wehner, M. F.; Williams, D. N.; Kunkel, K.; Collins, W. D.

    2017-12-01

    Deep Learning techniques have revolutionized commercial applications in computer vision, speech recognition, and control systems. The key to all of these developments was the creation of a curated, labeled dataset, ImageNet, which enabled multiple research groups around the world to develop methods, benchmark performance, and compete with each other. The success of Deep Learning can be largely attributed to the broad availability of this dataset. Our empirical investigations have revealed that Deep Learning is similarly poised to benefit the task of pattern detection in climate science. Unfortunately, labeled datasets, a key prerequisite for training, are hard to find. Individual research groups are typically interested in specialized weather patterns, making it hard to unify and share datasets across groups and institutions. In this work, we propose ClimateNet: a dataset that provides labeled instances of extreme weather patterns, as well as the associated raw fields in model and observational output. We develop a schema in NetCDF to enumerate weather pattern classes/types and to store bounding boxes and pixel masks. We are also working on a TensorFlow implementation to natively import such NetCDF datasets, and are providing a reference convolutional architecture for binary classification tasks. Our hope is that researchers in climate science, as well as ML/DL, will be able to use (and extend) ClimateNet to make rapid progress in the application of Deep Learning for climate science research.
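
    As a hedged illustration of what such a labeled-event schema could look like in NetCDF, the sketch below writes a toy file with an integer class per event, a bounding box, and a per-pixel mask. The variable names, class codes, and layout are invented for illustration and are not the actual ClimateNet schema.

```python
# Toy labeled-event file in NetCDF, written with xarray.
import numpy as np
import xarray as xr

ny, nx, n_events = 64, 128, 2
masks = np.zeros((n_events, ny, nx), dtype=np.int8)
masks[0, 10:30, 20:60] = 1                      # pixels belonging to event 0
masks[1, 40:55, 70:110] = 1                     # pixels belonging to event 1

ds = xr.Dataset(
    {
        # hypothetical class codes, e.g. 0 = atmospheric river, 1 = tropical cyclone
        "event_class": ("event", np.array([0, 1], dtype=np.int32)),
        # bounding boxes as (y0, x0, y1, x1) per event
        "bbox": (("event", "corner"), np.array([[10, 20, 30, 60],
                                                 [40, 70, 55, 110]], dtype=np.int32)),
        "pixel_mask": (("event", "y", "x"), masks),
    },
    attrs={"description": "toy labeled extreme-weather events (illustrative only)"},
)
ds.to_netcdf("climatenet_toy.nc")
print(ds)
```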

  19. Transfer metrics analytics project

    CERN Document Server

    Matonis, Zygimantas

    2016-01-01

    This report describes work done towards predicting transfer rates/latencies on Worldwide LHC Computing Grid (WLCG) sites using Machine Learning techniques. Topics covered are the technologies used for the project, the preparation of data into an ML-suitable format, attribute selection, and a comparison of different ML algorithms.
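
    In the spirit of the report's algorithm comparison, the sketch below compares a few regression algorithms on an entirely synthetic transfer-rate prediction task; the features and the data-generating formula are invented and are not the WLCG data used in the report.

```python
# Illustrative comparison of regression algorithms on a synthetic transfer-rate task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(11)
n = 500
file_size_gb = rng.uniform(0.1, 50, n)          # hypothetical attributes
link_load = rng.uniform(0, 1, n)
distance_km = rng.uniform(100, 10000, n)
rate_mbps = 800 * (1 - 0.6 * link_load) / (1 + distance_km / 5000) + rng.normal(0, 20, n)

X = np.column_stack([file_size_gb, link_load, distance_km])
models = {
    "linear regression": LinearRegression(),
    "k-NN": KNeighborsRegressor(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, rate_mbps, cv=5, scoring="r2").mean()
    print(f"{name:18s} R^2 = {r2:.2f}")
```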

  20. Comparison of Two Different Techniques of Cooperative Learning Approach: Undergraduates' Conceptual Understanding in the Context of Hormone Biochemistry

    Science.gov (United States)

    Mutlu, Ayfer

    2018-01-01

    The purpose of the research was to compare the effects of two different techniques of the cooperative learning approach, namely Team-Game Tournament and Jigsaw, on undergraduates' conceptual understanding in a Hormone Biochemistry course. Undergraduates were randomly assigned to Group 1 (N = 23) and Group 2 (N = 29). Instructions were accomplished…