Graepel, T.; Lauter, K.; Naehrig, M.
We demonstrate that by using a recently proposed somewhat homomorphic encryption (SHE) scheme it is possible to delegate the execution of a machine learning (ML) algorithm to a compute service while retaining confidentiality of the training and test data. Since the computational complexity of the
Casalicchio, G.; Bossek, J.; Lang, M.; Kirchhoff, D.; Kerschke, P.; Hofner, B.; Seibold, H.; Vanschoren, J.; Bischl, B.
OpenML is an online machine learning platform where researchers can easily share data, machine learning tasks and experiments as well as organize them online to work and collaborate more efficiently. In this paper, we present an R package to interface with the OpenML platform and illustrate its
Graepel, T.; Lauter, K.; Naehrig, M.; Kwon, T.; Lee, M.-K.; Kwon, D.
We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic
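Neither truncated abstract includes the construction itself. As a hedged, toy illustration of the core idea (performing arithmetic directly on encrypted values, as needed to aggregate encrypted ML feature contributions), here is a minimal Paillier-style additively homomorphic scheme. This is a different and much simpler scheme than the somewhat/leveled homomorphic encryption these papers use, and the primes are toy values chosen for readability only:

```python
import math, random

def keygen(p=2357, q=2551):
    # Toy primes only; real deployments use moduli of 2048 bits or more.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)  # (public key, private key)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)    # assume gcd(r, n) == 1 (true w.h.p.)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return L * mu % n

pub, priv = keygen()
# Homomorphic property: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
a, b = 17, 25
c = encrypt(pub, a) * encrypt(pub, b) % (pub[0] ** 2)
print(decrypt(pub, priv, c))  # 42
```

The multiplicative combination of ciphertexts adding the underlying plaintexts is what lets a server sum encrypted contributions without seeing the data; the schemes in the abstracts additionally support a limited number of encrypted multiplications.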
in this area, represented by, for example, the Workflow Management Coalition (Hollingsworth, 1995) and the very widespread Business Process Model and Notation (BPMN) standard, has been criticized on the basis of research in knowledge work processes. Inspiration for ColeML is found in this research area...
Choi, Suhyeong; Shim, Seongbo; Shin, Youngsoo
With shrinking feature size, runtime has become a limitation of model-based OPC (MB-OPC). A few machine learning-guided OPC (ML-OPC) approaches have been studied as candidates for next-generation OPC, but they all employ too many parameters (e.g. local densities), which sets their own limitations. We propose to use basis functions of the polar Fourier transform (PFT) as parameters of ML-OPC. Since PFT functions are orthogonal to each other and reflect light phenomena well, the number of parameters can be significantly reduced without loss of OPC accuracy. Experiments demonstrate that our new ML-OPC achieves an 80% reduction in OPC time and a 35% reduction in the error of predicted mask bias when compared to conventional ML-OPC.
Machine learning is becoming ubiquitous across HEP. There is great potential to improve trigger and DAQ performance with it. However, the exploration of such techniques within the field in low latency/power FPGAs has just begun. We present HLS4ML, a user-friendly software package, based on High-Level Synthesis (HLS), designed to deploy network architectures on FPGAs. As a case study, we use HLS4ML for boosted-jet tagging with deep networks at the LHC. We show how neural networks can be made to fit the resources available on modern FPGAs, thanks to network pruning and quantization. We map out resource usage and latency versus network architecture to identify the typical problem complexity that HLS4ML can deal with. We discuss possible applications in current and future HEP experiments.
Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob
We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).
In the design of multidisciplinary complex products, model-based systems engineering methods are widely used. However, these methodologies contain only a modeling order and simple analysis steps, and lack integrated design analysis methods supporting the whole process. To solve this problem, a conceptual design analysis method integrating modern design methods has been proposed. First, based on a requirement-analysis quantization matrix, the user's needs are quantitatively evaluated and translated into system requirements. Then, through function decomposition against a function knowledge base, the total function is semi-automatically decomposed into predefined atomic functions. Each function is matched to a predefined structure through the behaviour layer using function-structure mapping based on interface matching. Finally, based on the design structure matrix (DSM), the structure reorganization is completed. The analysis process is implemented in SysML and illustrated through an aircraft air conditioning system for validation.
This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e. some of their labels are missing). To handle missing labels, we propose a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges. Unlike most MLML methods, we formulate this learning problem transductively as a convex quadratic matrix optimization problem that encourages training label consistency and encodes both types of label dependencies (i.e. undirected and directed edges) using quadratic terms and hard linear constraints. The alternating direction method of multipliers (ADMM) can be used to exactly and efficiently solve this problem. To evaluate our proposed method, we consider two popular applications (image and video annotation), where the label hierarchy can be derived from WordNet. Experimental results show that our method achieves a significant improvement over state-of-the-art methods in performance and robustness to missing labels.
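As a much-simplified sketch of the undirected (instance-similarity) part of such a model only, the following minimizes a quadratic smoothness-plus-fit objective per label by a simple Jacobi fixed-point iteration instead of ADMM, ignoring the directed hierarchy constraints; the graph and observed labels are hypothetical toy data:

```python
# Minimize sum_ij W[i][j] * (y_i - y_j)^2 + mu * sum_obs (y_i - t_i)^2
# for one label, by coordinate-wise fixed-point (Jacobi) updates.
def propagate(W, observed, mu=10.0, iters=200):
    n = len(W)
    y = [observed.get(i, 0.5) for i in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            num = sum(W[i][j] * y[j] for j in range(n))
            den = sum(W[i])
            if i in observed:                 # anchor observed labels
                num += mu * observed[i]
                den += mu
            new.append(num / den if den else y[i])
        y = new
    return y

# 4 instances on a chain-shaped similarity graph; labels known for the ends only.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
scores = propagate(W, observed={0: 1.0, 3: 0.0})
print([round(s, 2) for s in scores])
```

The recovered scores interpolate smoothly between the two anchored instances, which is the behavior the quadratic undirected term enforces in the full model.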
ML-o-scope: a diagnostic visualization system for deep machine...
Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.
Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…
Wu, Baoyuan; Lyu, Siwei; Ghanem, Bernard
Duerr, R.; Ramdeen, S.
One of the interesting features of Polar Science is that it historically has been extremely interdisciplinary, encompassing all of the physical and social sciences. Given the ubiquity of specialized terminology in each field, enabling researchers to find, understand, and use all of the heterogeneous data needed for polar research continues to be a bottleneck. Within the informatics community, semantics has been broadly accepted as a solution to these problems, yet progress in developing reusable semantic resources has been slow. The NSF-funded ClearEarth project has been adapting methods and tools from other communities, such as biomedicine, to the Earth sciences, with the goal of enhancing progress and the rate at which the needed semantic resources can be created. One of the outcomes of the project has been a better understanding of the differences in the way linguists and physical scientists understand disciplinary text. One example of these differences is the tendency for each discipline, and often each disciplinary subfield, to expend effort in creating discipline-specific glossaries whose individual terms often comprise more than one word (e.g., first-year sea ice). Often each term in a glossary is imbued with substantial contextual or physical meaning - meanings which are rarely explicitly called out within disciplinary texts, which are therefore not immediately accessible to those outside that discipline or subfield, and which can often be represented semantically. Here we show how recognition of these differences and the use of glossaries can speed up the annotation processes endemic to NLP and enable inter-community recognition and possible reconciliation of terminology differences. A number of processes and tools will be described, as will progress towards semi-automated generation of ontology structures.
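The multi-word glossary terms described above can seed annotation directly. As a minimal sketch (the glossary entries and sentence below are hypothetical, and real annotation pipelines handle inflection and overlap far more carefully), here is greedy longest-match tagging of glossary terms over a token list:

```python
def tag_terms(tokens, glossary, max_len=4):
    """Greedy longest-match of multi-word glossary terms over a token list."""
    terms = {tuple(t.lower().split()) for t in glossary}
    spans, i = [], 0
    while i < len(tokens):
        # Try the longest candidate span first so "first-year sea ice"
        # wins over the shorter glossary entry "sea ice".
        for k in range(min(max_len, len(tokens) - i), 0, -1):
            cand = tuple(w.lower() for w in tokens[i:i + k])
            if cand in terms:
                spans.append((i, i + k, " ".join(cand)))
                i += k
                break
        else:
            i += 1
    return spans

glossary = ["first-year sea ice", "sea ice", "permafrost"]   # hypothetical entries
tokens = "Thick first-year sea ice overlies permafrost near the coast".split()
print(tag_terms(tokens, glossary))
```

Each returned span carries the glossary term it matched, which is exactly the pre-annotation that can then be corrected by human annotators instead of being created from scratch.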
Sun, Zhen; Yang, Zhenyu
The paper proposes a new method for online diagnosis of a class of parametric faults jointly with state estimation. The considered fault affects not only the deterministic part of the system but also its stochastic part. The proposed method first applies a Kalman Filter (KF) and Maximum Likelihood (...
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
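PhyML as described above screens candidate topology moves with the parsimony criterion. As a hedged illustration of that criterion itself (not of PhyML's implementation), here is the classic Fitch small-parsimony count for a single site on two hypothetical 4-taxon topologies:

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony count for one site on a rooted binary tree.
    tree: nested 2-tuples with leaf-name strings at the tips.
    Returns (number of state changes, candidate state set at the root)."""
    if isinstance(tree, str):
        return 0, {leaf_states[tree]}
    (cl, sl), (cr, sr) = fitch(tree[0], leaf_states), fitch(tree[1], leaf_states)
    inter = sl & sr
    if inter:                      # children agree: no extra change needed
        return cl + cr, inter
    return cl + cr + 1, sl | sr    # children disagree: count one change

# Site pattern A, A, G, G on two candidate topologies (hypothetical example):
states = {"t1": "A", "t2": "A", "t3": "G", "t4": "G"}
good = (("t1", "t2"), ("t3", "t4"))   # groups identical states together
bad = (("t1", "t3"), ("t2", "t4"))    # mixes the two states
print(fitch(good, states)[0], fitch(bad, states)[0])  # 1 2
```

A topology requiring fewer implied changes scores better under parsimony, which is why the criterion is cheap enough to pre-filter the least promising SPR candidates before any likelihood computation.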
Xiong, Heng; Liu, Dongbing; Li, Qiye
Using diverse RNA-seq datasets, we have developed a software tool, RED-ML: RNA Editing Detection based on Machine Learning (pronounced "red ML"). The input to RED-ML can be as simple as a single BAM file, while it can also take advantage of matched genomic variant information when available. ... It can accurately detect novel RNA editing sites without relying on curated RNA editing databases. We have also made this tool freely available via GitHub. We have developed a highly accurate, speedy and general-purpose tool for RNA editing detection using RNA-seq data. With the availability of RED-ML, it is now possible to conveniently make RNA editing a routine analysis of RNA-seq. We believe this can greatly benefit the RNA editing research community and will have a profound impact in accelerating our understanding of this intriguing posttranscriptional modification process.
Karp Peter D
Background A key challenge in systems biology is the reconstruction of an organism's metabolic network from its genome sequence. One strategy for addressing this problem is to predict which metabolic pathways, from a reference database of known pathways, are present in the organism, based on the annotated genome of the organism. Results To quantitatively validate methods for pathway prediction, we developed a large "gold standard" dataset of 5,610 pathway instances known to be present or absent in curated metabolic pathway databases for six organisms. We defined a collection of 123 pathway features, whose information content we evaluated with respect to the gold standard. Feature data were used as input to an extensive collection of machine learning (ML) methods, including naïve Bayes, decision trees, and logistic regression, together with feature selection and ensemble methods. We compared the ML methods to the previous PathoLogic algorithm for pathway prediction using the gold standard dataset. We found that ML-based prediction methods can match the performance of the PathoLogic algorithm. PathoLogic achieved an accuracy of 91% and an F-measure of 0.786. The ML-based prediction methods achieved accuracy as high as 91.2% and F-measure as high as 0.787. The ML-based methods output a probability for each predicted pathway, whereas PathoLogic does not, which provides more information to the user and facilitates filtering of predicted pathways. Conclusions ML methods for pathway prediction perform as well as existing methods, and have qualitative advantages in terms of extensibility, tunability, and explainability. More advanced prediction methods and/or more sophisticated input features may improve the performance of ML methods. However, pathway prediction performance appears to be limited largely by the ability to correctly match enzymes to the reactions they catalyze based on genome annotations. PMID:20064214
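Naïve Bayes is one of the ML methods the study above compares against PathoLogic, and like those methods it outputs a probability per predicted pathway. As a hedged toy sketch (the three binary pathway features and the training labels below are hypothetical, not the study's 123 features), here is a Bernoulli naïve Bayes classifier with Laplace smoothing:

```python
import math

def train_nb(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing on binary feature vectors."""
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        prior = len(rows) / len(y)
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(len(X[0]))]
        model[c] = (math.log(prior), probs)
    return model

def predict_proba(model, x):
    """Normalized class posteriors, computed in log space for stability."""
    scores = {c: logp + sum(math.log(p if xi else 1 - p)
                            for xi, p in zip(x, probs))
              for c, (logp, probs) in model.items()}
    z = max(scores.values())
    exp = {c: math.exp(s - z) for c, s in scores.items()}
    total = sum(exp.values())
    return {c: v / total for c, v in exp.items()}

# Hypothetical features per pathway: [all enzymes annotated?, key step present?, taxon hit?]
X = [[1, 1, 1], [1, 1, 0], [0, 0, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 0, 1, 0]                       # 1 = pathway present
m = train_nb(X, y)
print(round(predict_proba(m, [1, 1, 1])[1], 3))
```

The per-pathway probability is what enables the filtering of predicted pathways that the abstract notes PathoLogic cannot offer.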
Erbil, Deniz Gökçe; Kocabas, Ayfer
In this study, the effects of applying the cooperative learning method on the students' attitude toward democracy in an elementary 3rd-grade life studies course was examined. Over the course of 8 weeks, the cooperative learning method was applied with an experimental group, and traditional methods of teaching life studies in 2009, which was still…
Machine Learning Methods for Planning provides information pertinent to learning methods for planning and scheduling. This book covers a wide variety of learning methods and learning architectures, including analogical, case-based, decision-tree, explanation-based, and reinforcement learning. Organized into 15 chapters, this book begins with an overview of planning and scheduling and describes some representative learning systems that have been developed for these tasks. This text then describes a learning apprentice for calendar management. Other chapters consider the problem of temporal credi
Zayapragassarazan, Z.; Kumar, Santosh
Present generation students are primarily active learners with varied learning experiences and lecture courses may not suit all their learning needs. Effective learning involves providing students with a sense of progress and control over their own learning. This requires creating a situation where learners have a chance to try out or test their…
Methods of learning in the workplace will be introduced. The methods are connected to competence development and to the process of conducting development discussions in a dialogical way. The tools developed and applied are a fourfold table, a cycle of work identity, a plan of personal development targets, a learning meeting and a learning map. The methods introduced aim to improve learning at work.
Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara
provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml.
... individuals about MPS and ML, the National MPS Society has created a central location for more information on MPS: the MPS Library. The National MPS Society exists to cure, support and advocate for MPS ...
Maher, Bridget; Hartkopf, Kathleen; Stieger, Lina; Schroeder, Hanna; Sopka, Sasa; Orrego, Carola; Drachsler, Hendrik
This is a multi-language (ML) update of the CLAS App, originally designed by Bridget Maher from the School of Medicine at University College Cork, Ireland. The current version has an improved counting mechanism and has been translated from English to Spanish, Catalan and German within the
Burdet, G.; Combe, Ph.; Nencka, H.
The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine.
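The extremization principle mentioned above is, for the Boltzmann machine, descent on the KL divergence between the data and model distributions, giving the classic rule Δw_ij = η(⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model). As a minimal sketch (a fully visible 3-unit machine with hypothetical data, using exact enumeration instead of the sampling a real implementation needs):

```python
import itertools, math

def model_corr(w, n):
    """Exact <s_i s_j> under p(s) ~ exp(sum_{i<j} w[i][j] s_i s_j), s_i in {-1,+1}."""
    states = list(itertools.product([-1, 1], repeat=n))
    ps = [math.exp(sum(w[i][j] * s[i] * s[j]
                       for i in range(n) for j in range(i + 1, n))) for s in states]
    Z = sum(ps)
    corr = [[0.0] * n for _ in range(n)]
    for s, p in zip(states, ps):
        for i in range(n):
            for j in range(i + 1, n):
                corr[i][j] += p / Z * s[i] * s[j]
    return corr

def train(data, n, eta=0.2, steps=2000):
    w = [[0.0] * n for _ in range(n)]
    d = [[sum(s[i] * s[j] for s in data) / len(data) if j > i else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(steps):
        m = model_corr(w, n)
        for i in range(n):
            for j in range(i + 1, n):
                w[i][j] += eta * (d[i][j] - m[i][j])   # data term minus model term
    return w

data = [(1, 1, -1), (1, 1, 1), (-1, -1, 1), (1, -1, -1)]  # units 0 and 1 correlated
w = train(data, 3)
m = model_corr(w, 3)
print(round(m[0][1], 2))
```

At the fixed point the model's pairwise correlations match the data's, which is the moment-matching view of the information-theoretic extremization the abstract refers to.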
What. This chapter concerns how visual methods and visual materials can support visually oriented, collaborative, and creative learning processes in education. The focus is on facilitation (guiding, teaching) with visual methods in learning processes that are designerly or involve design. Visual methods are exemplified through two university classroom cases about collaborative idea generation processes. The visual methods and materials in the cases are photo elicitation using photo cards, and modeling with LEGO Serious Play sets. Why. The goal is to encourage the reader, whether student or professional, to facilitate with visual methods in a critical, reflective, and experimental way. The chapter offers recommendations for facilitating with visual methods to support playful, emergent designerly processes. The chapter also has a critical, situated perspective. Where. This chapter offers case...
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor-analysis-based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding. ... The relation between decomposition methods and clustering problems is derived both in terms of classical point clustering and in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given, ranging from multi-media analysis of image and sound data to analysis of biomedical data such as electroencephalography...
Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: email@example.com
The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.
Niggemann, Oliver; Kühnert, Christian
This work presents new approaches to Machine Learning for Cyber Physical Systems, along with experiences and visions. It contains selected papers from the international conference ML4CPS – Machine Learning for Cyber Physical Systems, which was held in Karlsruhe on September 29th, 2016. Cyber Physical Systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments. The Editors: Prof. Dr.-Ing. Jürgen Beyerer is Professor at the Department for Interactive Real-Time Systems at the Karlsruhe Institute of Technology. In addition, he manages the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB. Prof. Dr. Oliver Niggemann is Professor for Embedded Software Engineering. His research interests are in the field of Di...
Strauss, John; Peguero, Arturo Martinez; Hirst, Graeme
In preparation for a clinical information system implementation, the Centre for Addiction and Mental Health (CAMH) Clinical Information Transformation project completed multiple preparation steps. An automated process was desired to supplement the onerous task of manual analysis of clinical forms. We used natural language processing (NLP) and machine learning (ML) methods for a series of 266 separate clinical forms. For the investigation, documents were represented by feature vectors. We used four ML algorithms for our examination of the forms: cluster analysis, k-nearest neighbours (kNN), decision trees and support vector machines (SVM). Parameters for each algorithm were optimized. SVM had the best performance with a precision of 64.6%. Though we did not find any method sufficiently accurate for practical use, to our knowledge this approach to forms has not been used previously in mental health.
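The feature-vector representation plus kNN described above can be sketched compactly. The snippet below uses bag-of-words counts and cosine-similarity kNN on hypothetical form snippets and categories (a stand-in for the study's 266 forms and optimized pipelines, not a reproduction of them):

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words feature vector as a term -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_label(query, labeled, k=3):
    """Majority label among the k most cosine-similar labeled documents."""
    sims = sorted(((cosine(vectorize(query), vectorize(t)), lab)
                   for t, lab in labeled), reverse=True)
    top = [lab for _, lab in sims[:k]]
    return max(set(top), key=top.count)

# Hypothetical form texts and categories:
labeled = [
    ("patient mood sleep appetite rating", "assessment"),
    ("mood anxiety rating scale score", "assessment"),
    ("consent signature witness date", "consent"),
    ("signature date consent to treatment", "consent"),
]
print(knn_label("mood and appetite rating scale", labeled, k=3))
```

Swapping the classifier for a decision tree or SVM over the same feature vectors is the comparison the study performs across its four algorithms.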
Dobchev, Dimitar A; Pillai, Girinath G; Karelson, Mati
Machine learning (ML) computational methods for predicting compounds with pharmacological activity, specific pharmacodynamic and ADMET (absorption, distribution, metabolism, excretion and toxicity) properties are being increasingly applied in drug discovery and evaluation. Recently, machine learning techniques such as artificial neural networks, support vector machines and genetic programming have been explored for predicting inhibitors, antagonists, blockers, agonists, activators and substrates of proteins related to specific therapeutic targets. These methods are particularly useful for screening compound libraries of diverse chemical structures, "noisy" and high-dimensional data to complement QSAR methods, and in cases of unavailable receptor 3D structure to complement structure-based methods. A variety of studies have demonstrated the potential of machine-learning methods for predicting compounds as potential drug candidates. The present review is intended to give an overview of the strategies and current progress in using machine learning methods for drug design and the potential of the respective model development tools. We also review a number of applications of the machine learning algorithms based on common classes of diseases.
Makridakis, Spyros; Spiliotis, Evangelos; Assimakopoulos, Vassilios
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.
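One of the accuracy measures used in the M-competitions is sMAPE. As a hedged sketch of the kind of post-sample comparison the study performs (the monthly series and the two simple benchmark forecasts below are toy stand-ins, not M3 data or the paper's ML models):

```python
def smape(actual, forecast):
    """Symmetric MAPE (%), as used in the M-competitions."""
    return 100 / len(actual) * sum(
        2 * abs(f - a) / (abs(a) + abs(f)) for a, f in zip(actual, forecast))

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]  # toy data
train, test = series[:9], series[9:]

naive = [train[-1]] * len(test)                    # repeat the last observation
trend = [train[-1] + (i + 1) * (train[-1] - train[0]) / (len(train) - 1)
         for i in range(len(test))]                # last observation plus average drift

print(round(smape(test, naive), 2), round(smape(test, trend), 2))
```

Holding out the final observations and scoring each method's forecasts over several horizons, as done here for two benchmarks, is exactly the post-sample protocol under which the paper finds statistical methods dominating the ML ones.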
Saito, R; Fujinaga, T; Tada, N; Kimura, H
A magnetically levitated experimental vehicle (ML-100) was designed and constructed in commemoration of the centenary of the Japanese National Railways. For magnetic levitation the vehicle is provided with two superconducting magnets. In test operation of the vehicle, these superconducting magnets showed stable performance in levitating the vehicle body.
The objective of this paper is to present a new e-learning method that uses databases. The solution could be implemented for any type of e-learning system in any domain. The article proposes a solution to improve the learning process for virtual classes.
Reactive systems are systems that maintain an ongoing interaction with their environment, activated by receiving input events from the environment and producing output events in response. Modern programming languages designed to program such systems use a paradigm based on the notions of instants and activations. We describe a library for Standard ML that provides basic primitives for programming reactive systems. The library is a low-level system upon which more sophisticated reactive behavi...
This opinion piece paper urges teachers and teacher educators to draw careful distinctions among four basic learning goals: learning science, learning about science, doing science and learning to address socio-scientific issues. In elaboration, the author urges that careful attention be paid to the selection of teaching/learning methods that…
Mitsel, A. A.; Cherniaeva, N. V.
The article discusses models, methods and algorithms of determining student's optimal individual educational trajectory. A new method of controlling the learning trajectory has been developed as a dynamic model of learning trajectory control, which uses score assessment to construct a sequence of studied subjects.
Thornton, James E.
This article discusses the proposition that learning is an unexplored feature of the guided autobiography method and its developmental exchange. Learning, conceptualized and explored as the embedded and embodied processes, is essential in narrative activities of the guided autobiography method leading to psychosocial development and growth in…
Christensen, Hans Peter; Vigild, Martin Etchells; Thomsen, Erik Vilain
Students’ study strategies when exposed to activating teaching methods are measured, analysed and compared to study strategies in more traditional lecture-based teaching. The resulting learning outcome is discussed.
Yong, A.; Herrick, J.; Cochran, E. S.; Andrews, J. R.; Yu, E.
Currently, new seismic stations added to a regional seismic network cannot be used to calculate local or Richter magnitude (ML) until a revised region-wide amplitude decay function is developed. The new station must record a minimum number of local and regional events that meet specific amplitude requirements prior to re-calibration of the amplitude decay function. Therefore, there can be a significant delay between when a new station starts contributing real-time waveform packets and when its data can be included in magnitude estimation. The station component adjustments (dML; Uhrhammer et al., 2011) are calculated after first inverting for a new regional amplitude decay function, constrained by the sum of dML for long-running stations. Here, we propose a method to calculate an initial dML using known or proxy values of seismic site conditions. For site conditions, we use the time-averaged shear-wave velocity (VS) of the upper 30 m (VS30). We solve for dML as described in Equation (1) by Uhrhammer et al. (2011): ML = log (A) - log A0 (r) + dML, where A is the maximum Wood and Anderson (1925) trace amplitude (mm), r is the distance (km), and dML is the station adjustment. Measured VS30 and estimated dML data comprise records from 887 horizontal components (east-west and north-south orientations) from 93 seismic monitoring stations in the California Integrated Seismic Network. VS30 values range from 202 m/s to 1464 m/s and dML ranges from -1.10 to 0.39. VS30 and dML exhibit a positive correlation coefficient (R = 0.72), indicating that as VS30 increases, dML increases. This implies that greater site amplification (i.e., lower VS30) results in smaller ML. Using this VS30-dML relation, a new station can contribute to regional network ML estimates immediately, without the need to wait until a minimum set of earthquake data has been recorded.
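The proposal above can be sketched numerically: fit dML against log10(VS30) on calibrated stations, then plug the predicted dML for a new station into Equation (1). The station pairs below are hypothetical (not the 93-station dataset), and the -log A0(r) term used is the standard Southern California attenuation form, assumed here only for illustration:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration stations: (VS30 in m/s, station adjustment dML)
stations = [(200, -0.9), (300, -0.6), (500, -0.2), (800, 0.1), (1400, 0.35)]
logv = [math.log10(v) for v, _ in stations]
dml = [d for _, d in stations]
a, b = fit_line(logv, dml)
assert b > 0                      # dML increases with VS30, as in the abstract

# Proxy dML for a new station with VS30 = 760 m/s:
dml_new = a + b * math.log10(760)

# Eq. (1): ML = log10(A) - log10(A0(r)) + dML, with an assumed -log A0(r) term.
def local_magnitude(amp_mm, dist_km, dml):
    log_a0 = -(1.11 * math.log10(dist_km) + 0.00189 * dist_km + 0.591)
    return math.log10(amp_mm) - log_a0 + dml

print(round(local_magnitude(1.0, 100.0, dml_new), 2))
```

With this proxy, a newly installed station produces usable ML estimates from its first recorded event, which is the delay the abstract sets out to eliminate.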
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
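The closure properties this abstract mentions (sums, products, and averages of positive-definite kernels remain positive definite) are easy to check numerically. A minimal sketch, using a Gaussian and a linear kernel as stand-ins for the neural-network kernels of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))

def rbf(X, gamma=0.5):
    # Gaussian (RBF) kernel Gram matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K1, K2 = rbf(X), X @ X.T          # two positive-definite Gram matrices
combos = {"sum": K1 + K2,
          "product": K1 * K2,     # elementwise (Schur) product
          "average": 0.5 * (K1 + K2)}
# All combined Gram matrices should have no significantly negative eigenvalues.
min_eigs = {name: np.linalg.eigvalsh(K).min() for name, K in combos.items()}
```

The elementwise product stays positive definite by the Schur product theorem, which is what makes kernel composition by multiplication legitimate.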
Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima
To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification: (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.
Christensen, Hans Peter; Vigild, Martin E.; Thomsen, Erik; Szabo, Peter; Horsewell, Andy
Students’ study strategies when exposed to activating teaching methods are measured, analysed and compared to study strategies in more traditional lecture-based teaching. The resulting learning outcome is discussed.
Tjalla, Awaluddin; Sofiah, Evi
This research aims to reveal the influence of learning methods and self-regulated learning on students' learning scores in Social Studies. The research was done in an Islamic Junior High School (MTs Manba'ul Ulum), Batuceper City, Tangerang, using a quasi-experimental method. The research employed a simple random sampling technique with 28 students. Data were…
Gosselin, Philippe Henri; Cord, Matthieu
Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
Prosper Harrison B.
A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.
Jackson, Dontae L.
In the world of aviation, air traffic controllers are an integral part in the overall level of safety that is provided. With a number of controllers reaching retirement age, the Air Traffic Collegiate Training Initiative (AT-CTI) was created to provide a stronger candidate pool. However, AT-CTI Instructors have found that a number of AT-CTI students are unable to memorize types of aircraft effectively. This study focused on the basic learning styles (auditory, visual, and kinesthetic) of students and created a teaching method to try to increase memorization in AT-CTI students. The participants were asked to take a questionnaire to determine their learning style. Upon knowing their learning styles, participants attended two classroom sessions. The participants were given a presentation in the first class, and divided into a control and experimental group for the second class. The control group was given the same presentation from the first classroom session while the experimental group had a group discussion and utilized Middle Tennessee State University's Air Traffic Control simulator to learn the aircraft types. Participants took a quiz and filled out a survey, which tested the new teaching method. An appropriate statistical analysis was applied to determine if there was a significant difference between the control and experimental groups. The results showed that even though the participants felt that the method increased their learning, there was no significant difference between the two groups.
Ryszard Józef Panfil
The dynamics of the environment in which educational institutions operate have a significant influence on the basic activity of these institutions, i.e. the process of educating, and particularly on the teaching and learning methods used during that process: traditional teaching, tutoring, mentoring and coaching. The identity of an educational institution and the appeal of its services depend on the flexibility, diversity and adaptability of the educational process it offers as a core element of its services. Such a process is determined by how its pragmatism is displayed in the operational relativism of methods, their applicability, as well as the practical dimension of achieved results and values. Based on the above premises, this publication offers a pragmatic-systemic identification of contemporary teaching and learning methods, taking into account the differences between them and the scope of their compatibility. Secondly, using the case of sport coaches’ education, the author exemplifies the pragmatic theory of perception of contemporary teaching and learning methods.
This paper outlines the development of a generic Business Research Methods course from a simple name in a box to a full web-based e-learning module. It highlights particular issues surrounding the nature of the discipline and the integration of a large number of cross-faculty subject-specific research methods courses into a single generic module.…
extraneous. The agent could potentially adapt these representational aspects by applying methods from feature selection (Kolter and Ng, 2009; Petrik et al...). Kolter, J. Z. and Ng, A. Y. (2009). Regularization and feature selection in least-squares temporal difference learning.
Current track reconstruction methods start with two points and then, for each layer, loop through all possible hits to find the proper hits to add to that track. Another idea would be to use the large number of already reconstructed events and/or simulated data and train a machine on these data to find tracks given hit pixels. Training time could be long, but real-time tracking is very fast. Simulation might not be as realistic as real data, but tracking has been done for it with 100 percent efficiency, whereas by using real data we would probably be limited to the current efficiency.
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new and/or leverage existing algorithms that learn from training data and give accurate predictions, or find patterns, particularly in new and unseen but similar data, fits perfectly with Non-Destructive Evaluation (NDE). The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis. The approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. The sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). The algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can subsequently be done by the machine. By integrating the test data of many components along the value chain, further optimization including lifetime and durability
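The two-stage pipeline described here, unsupervised simplification (principal component analysis) feeding a supervised classifier, can be sketched with scikit-learn. The synthetic data below are only a stand-in for the acoustic features of the secondary analysis step:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for extracted acoustic features.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised simplification (PCA) followed by a supervised classifier.
clf = make_pipeline(PCA(n_components=10), SVC())
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The PCA step reduces the data structure before training, mirroring the role it plays in the secondary analysis step described above.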
Kocabas, Ayfer; Erbil, Deniz Gokce
The cooperative learning method is an active learning method that has been studied for many years both in Turkey and around the world. Although the cooperative learning method takes place in training programs, it cannot be implemented completely in line with its principles. The results of the research point out that teachers have problems with…
Naresh N. Vempala
Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
Pamungkas, Bian Dwi
This study aims to examine the contribution of learning methods on learning output, the contribution of facilities and infrastructure on output learning, the contribution of learning resources on learning output, and the contribution of learning methods, the facilities and infrastructure, and learning resources on learning output. The research design is descriptive causative, using a goal-oriented assessment approach in which the assessment focuses on assessing the achievement of a goal. The ...
Frankenhuis, Willem E; Panchanathan, Karthik; Barto, Andrew G
This article focuses on the division of labor between evolution and development in solving sequential, state-dependent decision problems. Currently, behavioral ecologists tend to use dynamic programming methods to study such problems. These methods are successful at predicting animal behavior in a variety of contexts. However, they depend on a distinct set of assumptions. Here, we argue that behavioral ecology will benefit from drawing more than it currently does on a complementary collection of tools, called reinforcement learning methods. These methods allow for the study of behavior in highly complex environments, which conventional dynamic programming methods do not feasibly address. In addition, reinforcement learning methods are well-suited to studying how biological mechanisms solve developmental and learning problems. For instance, we can use them to study simple rules that perform well in complex environments. Or to investigate under what conditions natural selection favors fixed, non-plastic traits (which do not vary across individuals), cue-driven-switch plasticity (innate instructions for adaptive behavioral development based on experience), or developmental selection (the incremental acquisition of adaptive behavior based on experience). If natural selection favors developmental selection, which includes learning from environmental feedback, we can also make predictions about the design of reward systems. Our paper is written in an accessible manner and for a broad audience, though we believe some novel insights can be drawn from our discussion. We hope our paper will help advance the emerging bridge connecting the fields of behavioral ecology and reinforcement learning. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Enever, Janet, Ed.; Lindgren, Eva, Ed.
This is the first collection of research studies to explore the potential for mixed methods to shed light on foreign or second language learning by young learners in instructed contexts. It brings together recent studies undertaken in Cameroon, China, Croatia, Ethiopia, France, Germany, Italy, Kenya, Mexico, Slovenia, Spain, Sweden, Tanzania and…
Siadat, M. Vali; Musial, Paul M.; Sagher, Yoram
This study reports the effects of an integrated instructional program (the Keystone Method) on the students' performance in mathematics and reading, and tracks students' persistence and retention. The subject of the study was a large group of students in remedial mathematics classes at the college, willing to learn but lacking basic educational…
The aim of this study is to investigate students' ideas on the cooperative learning method. For that purpose, students studying in an elementary science education program are distributed into two groups through an experimental design. Factors threatening the internal validity are either eliminated or reduced to a minimum. Data analysis is done…
MaCoy, Katherine W.
The methods used and the results obtained by means of the accelerated language learning techniques developed by Georgi Lozanov, Director of the Institute of Suggestology in Bulgaria, are discussed. The following topics are included: (1) discussion of hypermnesia, "super memory," and the reasons foreign languages were chosen for purposes…
Dwi Nur Rachmah
Jigsaw learning as a cooperative learning method, according to the results of some studies, can improve academic skills, social competence, behavior in learning, and motivation to learn. However, in some other studies, there are different findings regarding the effect of jigsaw learning method on self-efficacy. The purpose of this study is to examine the effects of jigsaw learning method on self-efficacy and motivation to learn in psychology students at the Faculty of Medicine, Universitas La...
Liu, Di; Li, YingChun
In order to evaluate different blur levels of color images and improve image-definition evaluation, this paper proposes a method based on a deep learning framework and a BP neural network classification model, presenting a no-reference color image clarity evaluation method. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train a BP neural network, finally achieving color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, yielding 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of every 400 high-dimensional feature samples, 300 are used for training with VGG16 and the BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image-quality classification accuracy on the test data set: the accuracy rate is 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
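The overall scheme, a pretrained CNN as a fixed feature extractor followed by a BP (backpropagation) neural network classifier, can be sketched with scikit-learn. The synthetic features below merely stand in for the 4,096-dimensional VGG16 features, and the three classes mimic the three blur levels:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for VGG16 features; 3 classes = 3 blur levels.
rng = np.random.default_rng(0)
n_per, dim = 100, 64                     # reduced dimensionality for the sketch
centers = rng.normal(scale=3.0, size=(3, dim))
X = np.vstack([c + rng.normal(size=(n_per, dim)) for c in centers])
y = np.repeat([0, 1, 2], n_per)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# A backpropagation neural network as the classifier head.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In the paper's setting, `X` would instead come from running each image through VGG16 and taking the penultimate-layer activations.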
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with the passive learning that uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate different active learning and the passive learning methods and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random
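The least-confidence uncertainty sampling strategy evaluated in this abstract can be sketched in a short loop. The synthetic data and the choice of logistic regression below are stand-ins for the clinical NER setting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))                        # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):                              # 20 active-learning queries
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Least confidence: query the sample whose top class probability is lowest.
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)
    pool.remove(query)

acc = clf.score(X, y)
```

Plotting `acc` against the number of labeled samples over such runs produces the learning curves whose area (ALC) the study uses for comparison.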
Icaza, José I.; Heredia, Yolanda; Borch, Ole M.
A pedagogical approach called “project oriented immersion learning” is presented and tested on a graduate online course. The approach combines the Project Oriented Learning method with immersion learning in a virtual enterprise. Students assumed the role of authors hired by a fictitious publishing house that develops digital products including e-books, tutorials, web sites and so on. The students defined the problem that their product was to solve; chose the type of product and the content; and built the product following a strict project methodology. A wiki server was used as a platform to hold...
The authors describe the method of global learning of foreign languages, which is based on the principles of neurolinguistic programming (NLP). According to this theory, the educator should use the method of so-called periphery learning, where students learn relaxation techniques and at the same time »incidentally« or subconsciously learn a foreign language. The method of global learning imitates successful strategies of learning in early childhood and therefore creates a relaxed attitude towards learning. Global learning is also compared with standard methods.
Groselj, C.; Kukar, M.
Machine learning (ML), a rapidly growing subfield of artificial intelligence, has in the last decade already proven to be a useful tool in many fields of decision making, including some fields of medicine. Its decision accuracy usually exceeds the human one. The aim was to assess the applicability of ML in interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients undergoing planar stress myocardial perfusion scintigraphy were reevaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated by the ML program 'Naive Bayesian classifier'. As ML is able to handle any number of attributes simultaneously, all available disease-related data (regarding history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy for scintigraphy were computed in the same way. The results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
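A naive Bayesian classifier of the kind used in this study can be sketched with scikit-learn. The public breast-cancer dataset below is only a stand-in for the scintigraphy and patient attributes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Public diagnostic dataset as a stand-in for the disease-related attributes.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Naive Bayes assumes attribute independence given the class, which is
# what lets it absorb an arbitrary number of attributes cheaply.
clf = GaussianNB().fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The independence assumption is exactly why the study could add all available disease-related data without retraining cost becoming an issue.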
Today computation is an inseparable part of scientific research, especially in particle physics, where there are classification problems such as discriminating signals from backgrounds originating from particle collisions. On the other hand, Monte Carlo simulations can be used to generate a known data set of signals and backgrounds based on theoretical physics. The aim of machine learning is to train algorithms on a known data set and then apply the trained algorithms to unknown data sets. The most common framework for data analysis in particle physics is ROOT, to which a Toolkit for Multivariate Data Analysis (TMVA) has been added in order to use machine learning methods. The major consideration in this report is the parallelization of some TMVA methods, especially cross-validation and BDT.
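The cross-validated boosted-decision-tree workflow can be sketched outside ROOT/TMVA with scikit-learn analogues; `n_jobs=-1` runs the folds in parallel, which is the kind of parallelization discussed in the report. The synthetic sample is an assumption standing in for signal/background events:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic "signal vs background" sample.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Boosted decision trees (sklearn analogue of TMVA's BDT),
# 5-fold cross-validation with folds evaluated in parallel.
scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X, y, cv=5, n_jobs=-1)
```

Each entry of `scores` is one fold's accuracy; their spread gives a variance estimate that a single train/test split cannot.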
Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.
An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing the applied models. k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
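The described setup, k-Nearest Neighbors and Random Forest classifiers evaluated with precision, recall and accuracy, can be sketched with scikit-learn. The synthetic data below stand in for the production-network descriptors:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the internal production-network descriptors.
X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

metrics = {}
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    metrics[name] = {"precision": precision_score(y_te, pred),
                     "recall": recall_score(y_te, pred),
                     "accuracy": accuracy_score(y_te, pred)}
```

Reporting all three metrics side by side matters here because, for hazard identification, recall (missed hazards) and precision (false alarms) have very different operational costs.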
Tilak, Omkar; Martin, Ryan; Mukhopadhyay, Snehasis
We discuss the application of indirect learning methods in zero-sum and identical payoff learning automata games. We propose a novel decentralized version of the well-known pursuit learning algorithm. Such a decentralized algorithm has significant computational advantages over its centralized counterpart. The theoretical study of such a decentralized algorithm requires the analysis to be carried out in a nonstationary environment. We use a novel bootstrapping argument to prove the convergence of the algorithm. To our knowledge, this is the first time that such analysis has been carried out for zero-sum and identical payoff games. Extensive simulation studies are reported, which demonstrate the proposed algorithm's fast and accurate convergence in a variety of game scenarios. We also introduce the framework of partial communication in the context of identical payoff games of learning automata. In such games, the automata may not communicate with each other or may communicate selectively. This comprehensive framework has the capability to model both centralized and decentralized games discussed in this paper.
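A centralized, single-automaton pursuit learning sketch on a two-action environment illustrates the basic algorithm that the paper decentralizes: maintain running reward estimates and move the action-probability vector toward the currently best-estimated action. The environment, the number of warm-up pulls, and the learning rate below are illustrative assumptions, not the paper's configuration:

```python
import random

random.seed(0)
reward_prob = [0.3, 0.8]                 # assumed environment: action 1 is better
n = len(reward_prob)
est, cnt = [0.0] * n, [0] * n

def pull(a):
    # Bernoulli reward from the (assumed) environment
    return 1.0 if random.random() < reward_prob[a] else 0.0

for a in range(n):                       # warm-up: seed the reward estimates
    for _ in range(20):
        cnt[a] += 1
        est[a] += (pull(a) - est[a]) / cnt[a]

p = [1.0 / n] * n                        # action-probability vector
lam = 0.01                               # pursuit learning rate
for _ in range(5000):
    a = random.choices(range(n), weights=p)[0]
    cnt[a] += 1
    est[a] += (pull(a) - est[a]) / cnt[a]        # running-mean reward estimate
    best = max(range(n), key=lambda i: est[i])
    # Pursue: shift probability mass toward the best-estimated action.
    p = [(1 - lam) * pi + (lam if i == best else 0.0)
         for i, pi in enumerate(p)]
```

In the decentralized game-theoretic version studied in the paper, each player runs such an update with only its own observations, which is what makes the environment nonstationary from each automaton's point of view.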
Collaborative learning is one of several active learning methods widely acclaimed in higher education. Consequently, instructors in fields that lack pedagogical training often implement new learning methods such as collaborative learning on the basis of trial and error. Moreover, even though the benefits in academic circles are broadly touted,…
Torre, de la M.; Melches, I; Lapena, J.; Martinez, T.A.; Miguel, de D.; Duran, F.
The sodium loop ML-3 is described. The main objective of this facility is to obtain mechanical property data for LMFBR materials in creep and low cycle fatigue testing in flowing sodium. ML-3 includes 10 test stations for creep and two for fatigue. It is possible to operate simultaneously at three different temperature levels. The maximum operating temperature is 650 deg C at flow velocities up to 5 m/s. The ML-3 loop has been located in a manner that permits the fill/dump tank cover gas and security systems to be shared with an earlier circuit, the ML-1. (author)
Background and Purpose: The process of business-to-business (B2B) sales forecasting is a complex decision-making process. There are many approaches to support this process, but it is still mainly based on the subjective judgment of a decision-maker. The problem of B2B sales forecasting can be modeled as a classification problem. However, top-performing machine learning (ML) models are black boxes and do not support transparent reasoning. The purpose of this research is to develop an organizational model using an ML model coupled with general explanation methods. The goal is to support the decision-maker in the process of B2B sales forecasting.
Verpoorten, Dominique; Poumay, M; Leclercq, D
Please, cite this publication as: Verpoorten, D., Poumay, M., & Leclercq, D. (2006). The 8 Learning Events Model: a Pedagogic Conceptual Tool Supporting Diversification of Learning Methods. Proceedings of International Workshop in Learning Networks for Lifelong Competence Development, TENCompetence
Introduction: One of the most significant elements of entrepreneurship curriculum design is the teaching-learning methods, which play a key role in studies and research related to such a curriculum. It is the teaching method, and the systematic, organized and logical ways of providing lessons, that should be consistent with entrepreneurship goals and contents, and should also be developed according to the learners’ needs. Therefore, the current study aimed to introduce appropriate, modern, and effective methods of teaching entrepreneurship and their validation. Methods: This is a mixed-methods research of a sequential exploratory kind conducted in two stages: (a) developing teaching methods for the entrepreneurship curriculum, and (b) validating the developed framework. Data were collected through triangulation (study of documents, investigation of theoretical basics and the literature, and semi-structured interviews with key experts). Since the literature on this topic is very rich, and the views of the key experts are vast, directed and summative content analysis was used. In the second stage, qualitative credibility of the research findings was obtained using qualitative validation criteria (credibility, confirmability, and transferability) and applying various techniques. Moreover, in order to make sure that the qualitative part is reliable, a reliability test was used. Quantitative validation of the developed framework was conducted utilizing exploratory and confirmatory factor analysis methods and Cronbach’s alpha. The data were gathered by distributing a three-aspect questionnaire (direct presentation teaching methods, interactive, and practical-operational aspects) with 29 items among 90 curriculum scholars. The target population was selected by means of purposive sampling and a representative sample. Results: Results obtained from exploratory factor analysis showed that a three-factor structure is an appropriate method for describing elements of
Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.
We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and the visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 (62%) of patients and in 22 (22%) in supervised and unsupervised surgery respectively.
Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen
Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
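The core idea, a Newton step whose Hessian is estimated from a random subset of the data, can be sketched for one-dimensional logistic regression. This is a minimal illustration of subsampled Hessian Newton in general, not the authors' two-dimensional subproblem technique; the data and parameter names are invented for the example:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def subsampled_newton(xs, ys, iters=20, sample_frac=0.5, seed=0):
    """1-D logistic regression trained with subsampled Hessian Newton:
    the gradient uses all data, but the Hessian is approximated from a
    random subset (and rescaled), which is the core cost-saving idea."""
    rng = random.Random(seed)
    w = 0.0
    n = len(xs)
    k = max(1, int(sample_frac * n))
    for _ in range(iters):
        # full gradient of the logistic loss sum log(1 + exp(-y w x))
        grad = sum(-y * x * sigmoid(-y * w * x) for x, y in zip(xs, ys))
        # Hessian approximated on a random subsample, rescaled by n/k
        idx = rng.sample(range(n), k)
        hess = (n / k) * sum(
            xs[i] ** 2 * sigmoid(ys[i] * w * xs[i]) * sigmoid(-ys[i] * w * xs[i])
            for i in idx
        )
        w -= grad / (hess + 1e-8)  # damped Newton step
    return w

# separable toy data: positive x -> label +1, negative x -> label -1
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [-1, -1, -1, 1, 1, 1]
w = subsampled_newton(xs, ys)
```

The rescaling by `n / k` keeps the subsampled Hessian an unbiased estimate of the full one; the paper's contribution goes further, adjusting the search direction via a per-iteration two-dimensional subproblem.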
Esmi, Keramat; Marzoughi, Rahmatallah; Torkzadeh, Jafar
One of the most significant elements of entrepreneurship curriculum design is teaching-learning methods, which play a key role in studies and research related to such a curriculum. It is the teaching method, and the systematic, organized and logical ways of providing lessons, that should be consistent with entrepreneurship goals and contents, and should also be developed according to the learners' needs. Therefore, the current study aimed to introduce appropriate, modern, and effective methods of teaching entrepreneurship and to validate them. This is a mixed method research of a sequential exploratory kind conducted through two stages: a) developing teaching methods of the entrepreneurship curriculum, and b) validating the developed framework. Data were collected through "triangulation" (study of documents, investigation of theoretical basics and the literature, and semi-structured interviews with key experts). Since the literature on this topic is very rich, and the views of the key experts are vast, directed and summative content analysis was used. In the second stage, qualitative credibility of the research findings was obtained using qualitative validation criteria (credibility, confirmability, and transferability) and applying various techniques. Moreover, in order to make sure that the qualitative part is reliable, a reliability test was used. In addition, quantitative validation of the developed framework was conducted utilizing exploratory and confirmatory factor analysis methods and Cronbach's alpha. The data were gathered by distributing a three-aspect questionnaire (direct presentation teaching methods, interactive, and practical-operational aspects) with 29 items among 90 curriculum scholars. The target population was selected by means of purposive sampling and a representative sample. Results obtained from exploratory factor analysis showed that a three-factor structure is an appropriate method for describing elements of teaching-learning methods of entrepreneurship curriculum
Takeda, Kayoko; Takahashi, Kiyoshi; Masukawa, Hiroyuki; Shimamori, Yoshimitsu
Recently, the practice of active learning has spread, increasingly recognized as an essential component of academic studies. Classes incorporating small group discussion (SGD) are conducted at many universities. At present, assessments of the effectiveness of SGD have mostly involved evaluation by questionnaires conducted by teachers, by peer assessment, and by self-evaluation of students. However, qualitative data, such as open-ended descriptions by students, have not been widely evaluated. As a result, we have been unable to analyze the processes and methods involved in how students acquire knowledge in SGD. In recent years, due to advances in information and communication technology (ICT), text mining has enabled the analysis of qualitative data. We therefore investigated whether the introduction of a learning system comprising the jigsaw method and problem-based learning (PBL) would improve student attitudes toward learning; we did this by text mining analysis of the content of student reports. We found that by applying the jigsaw method before PBL, we were able to improve student attitudes toward learning and increase the depth of their understanding of the area of study as a result of working with others. The use of text mining to analyze qualitative data also allowed us to understand the processes and methods by which students acquired knowledge in SGD and also changes in students' understanding and performance based on improvements to the class. This finding suggests that the use of text mining to analyze qualitative data could enable teachers to evaluate the effectiveness of various methods employed to improve learning.
Distance learning has facilitated innovative means to include Cooperative Learning (CL) in virtual settings. This study, conducted at a Hispanic-Serving Institution, compared the effectiveness of online CL strategies in discussion forums with traditional online forums. Quantitative and qualitative data were collected from 56 graduate student participants. Quantitative results revealed no significant difference in student success between the CL and traditional formats. The qualitative data revealed that students in the cooperative learning groups found more learning benefits than the traditional group. The study will benefit instructors and students in distance learning by helping improve teaching and learning practices in a virtual classroom.
Brattain, Laura J; Telfer, Brian A; Dhyani, Manish; Grajo, Joseph R; Samir, Anthony E
Ultrasound (US) imaging is the most commonly performed cross-sectional diagnostic imaging modality in the practice of medicine. It is low-cost, non-ionizing, portable, and capable of real-time image acquisition and display. US is a rapidly evolving technology with significant challenges and opportunities. Challenges include high inter- and intra-operator variability and limited image quality control. Tremendous opportunities have arisen in the last decade as a result of exponential growth in available computational power coupled with progressive miniaturization of US devices. As US devices become smaller, enhanced computational capability can contribute significantly to decreasing variability through advanced image processing. In this paper, we review leading machine learning (ML) approaches and research directions in US, with an emphasis on recent ML advances. We also present our outlook on future opportunities for ML techniques to further improve clinical workflow and US-based disease diagnosis and characterization.
From the assumption that students learn best when the learning method matches their learning style, it follows that courses correlating learning method with learning style would be more successful for students. Albuquerque Technical Vocational Institute (TVI) in New Mexico has attempted to provide students with more…
Soroush, Masoud; Weinberger, Charles B.
This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…
Many Chinese students are not interested in learning English, especially English vocabulary. This paper focuses on English vocabulary learning, taking the word-learning methods of high school students as an example, and introduces several ways to make vocabulary memorization more effective. The purpose is to help high school students grasp more English word-learning skills.
Saxe, Glenn N; Ma, Sisi; Ren, Jiwen; Aliferis, Constantin
The care of traumatized children would benefit significantly from accurate predictive models for Posttraumatic Stress Disorder (PTSD), using information available around the time of trauma. Machine Learning (ML) computational methods have yielded strong results in recent applications across many diseases and data types, yet they have not been previously applied to childhood PTSD. Since these methods have not been applied to this complex and debilitating disorder, there is a great deal that remains to be learned about their application. The first step is to prove the concept: Can ML methods - as applied in other fields - produce predictive classification models for childhood PTSD? Additionally, we seek to determine if specific variables can be identified - from the aforementioned predictive classification models - with putative causal relations to PTSD. ML predictive classification methods - with causal discovery feature selection - were applied to a data set of 163 children hospitalized with an injury and PTSD was determined three months after hospital discharge. At the time of hospitalization, 105 risk factor variables were collected spanning a range of biopsychosocial domains. Seven percent of subjects had a high level of PTSD symptoms. A predictive classification model was discovered with significant predictive accuracy. A predictive model constructed based on subsets of potentially causally relevant features achieves similar predictivity compared to the best predictive model constructed with all variables. Causal Discovery feature selection methods identified 58 variables of which 10 were identified as most stable. In this first proof-of-concept application of ML methods to predict childhood Posttraumatic Stress we were able to determine both predictive classification models for childhood PTSD and identify several causal variables. This set of techniques has great potential for enhancing the methodological toolkit in the field and future studies should seek to
Oja, Anne, 1970-
Aivar Paalberg, owner of ML Arvutid, raised the company's share capital from 10 million to 24 million with the aim of strengthening its position in the Estonian market and growing at a faster pace than the market as a whole. Diagram: financial indicators.
The cases deal with learner-centred learning in a commercial program and a technical program.
Invited presentation delivered at COMBINE 2016: CellML, SED-ML, and the Physiome Model Repository. David Nickerson, Auckland Bioengineering Institute, University of Auckland, New Zealand. CellML is an XML-based protocol for storing and exchanging computer-based mathematical models in an unambiguous, modular, and reusable manner. In addition to introducing CellML, in this presentation I will provide some of the physiological examples that have helped drive the development and adoption of CellML. I will...
The ml-o genes in barley are important sources in breeding for resistance against the barley powdery mildew fungus (Erysiphe graminis). The resistance mechanism is a rapid formation of a large callose containing cell wall apposition at the site of the pathogen's infection attempt. This reduces the chances of infection to almost nil in all epidermal cells, except in the small subsidiary cells, in which appositions are rarely formed. Small mildew colonies from infections in subsidiary cells may be seen on the otherwise resistant leaf. This is described by the infection type 0/(4). Mildew isolate HL 3 selected by SCHWARZBACH has increased aggressiveness. No ml-o-virulent isolates are known. However, ml-o-resistant varieties when grown extensively in Europe, will introduce field selection for mildew pathotypes with aggressiveness or virulence to ml-o resistance. Studies on increased aggressiveness require new methods. The material comprises two powdery mildew isolates: GE 3 without ml-o aggressiveness and the aggressive HL 3/5; and two near-isogenic barley lines in Carlsberg II: Riso 5678(R) with the recessive mutant resistance gene ml-o5 and Riso 5678(S) with the wild-type gene for susceptibility. Latent period and disease efficiency show no significant differences between the two isolates on the susceptible barley line (S) but the isolates differ from each other on the resistant barley line
Topal, Kenan; Sarıkaya, Özlem; Basturk, Ramazan; Buke, Akile
Objectives: The process of development and evaluation of undergraduate medical education programs should include analysis of learners’ characteristics, needs, and perceptions about learning methods. This study aims to evaluate medical students’ perceptions about problem-based learning methods and to compare these results with their individual learning styles. Materials and Methods: The survey was conducted at Marmara University Medical School, where problem-based learning was implemented in the...
Domínguez-Rodrigo, Manuel; Baquedano, Enrique
All models of evolution of human behaviour depend on the correct identification and interpretation of bone surface modifications (BSM) on archaeofaunal assemblages. Crucial evolutionary features, such as the origin of stone tool use, meat-eating, food-sharing, cooperation and sociality can only be addressed through confident identification and interpretation of BSM, and more specifically, cut marks. Recently, it has been argued that linear marks with the same properties as cut marks can be created by crocodiles, thereby questioning whether secure cut mark identifications can be made in the Early Pleistocene fossil record. Powerful classification methods based on multivariate statistics and machine learning (ML) algorithms have previously successfully discriminated cut marks from most other potentially confounding BSM. However, crocodile-made marks were marginal to or played no role in these comparative analyses. Here, for the first time, we apply state-of-the-art ML methods on crocodile linear BSM and experimental butchery cut marks, showing that the combination of multivariate taphonomy and ML methods provides accurate identification of BSM, including cut and crocodile bite marks. This enables empirically-supported hominin behavioural modelling, provided that these methods are applied to fossil assemblages.
CERN. Geneva; CHEN, Tianqi
Tianqi Chen and Tong He (team crowwork) provided XGBoost (eXtreme Gradient Boosting) to all participants very early in the challenge. It is parallelised software for training boosted decision trees, and it was used effectively by many participants in the challenge. For this, they have won the "HEP meets ML" award, an invitation to CERN, which takes place today.
Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.
In this paper, deep learning methods, which are more advanced than traditional machine learning approaches, are explored with the purpose of accurately predicting aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to
Balan, Peter; Clark, Michele; Restall, Gregory
Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…
Dwi Nur Rachmah
Jigsaw learning, as a cooperative learning method, can, according to the results of some studies, improve academic skills, social competence, behavior in learning, and motivation to learn. However, some other studies report different findings regarding the effect of the jigsaw learning method on self-efficacy. The purpose of this study is to examine the effects of the jigsaw learning method on self-efficacy and motivation to learn in psychology students at the Faculty of Medicine, Universitas Lambung Mangkurat. The method used in the study is the experimental method with a one-group pre-test and post-test design. The results of the measurements before and after the use of the jigsaw learning method were compared using paired-samples t-tests. The results showed a difference in students’ self-efficacy and motivation to learn before and after the treatment; therefore, it can be said that the jigsaw learning method had significant effects on self-efficacy and motivation to learn. The application of the jigsaw learning model in classrooms with large numbers of students is also discussed.
He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben
Inspired by the cooperative transport behaviors of ants, and on the basis of Q-learning, a new learning method, Neighbor-Information-Reference (NIR) learning, is presented in this paper. It is a swarm-based learning method in which the principles of swarm intelligence are strictly complied with. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects the next state, so that it can make the best decision in a computable local neighborhood. In application, different policies of NIR learning are recommended by controlling the parameters according to the time-relativity of concrete tasks. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".
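For context, the standard tabular Q-learning update that NIR learning builds on can be sketched on a toy chain MDP. This is a generic illustration under invented parameters; the neighbor-referencing step that distinguishes NIR is not reproduced here:

```python
import random

def q_learning_chain(n_states=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Plain tabular Q-learning on a chain MDP: the agent starts in state 0
    and receives reward 1 only on reaching the terminal state n_states-1.
    Actions: 0 = step left, 1 = step right (clipped at the chain ends)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: Q[s][act])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the Q-learning update: bootstrap from the best next action
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
```

After training, "right" should dominate near the goal; NIR learning modifies the selection step by additionally consulting the discounted rewards of an individual's i-interval neighbors.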
This thematic volume explores the relationship between the arts and learning in various educational contexts and across cultures, but with a focus on higher education and organizational learning. Arts-based interventions are at the heart of this volume, which addresses how they are conceived, des...
Li, Hongxin; Ding, Mengchun
Reasons for learning management include: (1) it perfects one's knowledge structure; (2) management is the basis of all organizations; (3) any person may be either a manager or a managed person; (4) management is by no means simple knowledge; and (5) learning the theoretical knowledge of management cannot be replaced by the…
Aghababyan, Ani; Martin, Taylor; Janisiewicz, Philip; Close, Kevin
Learning analytics is an emerging discipline and, as such, benefits from new tools and methodological approaches. This work reviews and summarizes our workshop on microgenetic data analysis techniques using R, held at the second annual Learning Analytics Summer Institute in Cambridge, Massachusetts, on 30 June 2014. Specifically, this paper…
Richard, S. M.; Commission for the Management and Application of Geoscience Information (CGI)
CGI Interoperability Working Group activities during 2012 include deployment of services using the GeoSciML-Portrayal schema, addition of new vocabularies to support properties added in version 3.0, improvements to server software for deploying services, introduction of EarthResourceML v.2 for mineral resources, and collaboration with the IUSS on a markup language for soils information. GeoSciML and EarthResourceML have been used as the basis for the INSPIRE Geology and Mineral Resources specifications respectively. GeoSciML-Portrayal is an OGC GML simple-feature application schema for presentation of geologic map unit, contact, and shear displacement structure (fault and ductile shear zone) descriptions in web map services. Use of standard vocabularies for geologic age and lithology enables map services using shared legends to achieve visual harmonization of maps provided by different services. New vocabularies have been added to the collection of CGI vocabularies provided to support interoperable GeoSciML services, and can be accessed through http://resource.geosciml.org. Concept URIs can be dereferenced to obtain SKOS rdf or html representations using the SISSVoc vocabulary service. New releases of the FOSS GeoServer application greatly improve support for complex XML feature schemas like GeoSciML, and the ArcGIS for INSPIRE extension implements similar complex feature support for ArcGIS Server. These improved server implementations greatly facilitate deploying GeoSciML services. EarthResourceML v2 adds features for information related to mining activities. SoilML provides an interchange format for soil material, soil profile, and terrain information. Work is underway to add GeoSciML to the portfolio of Open Geospatial Consortium (OGC) specifications.
Leif, Robert C.
Cytology automation and research will be enhanced by the creation of a common data format. This data format would provide the pathology and research communities with a uniform way for annotating and exchanging images, flow cytometry, and associated data. This specification and/or standard will include descriptions of the acquisition device, staining, the binary representations of the image and list-mode data, the measurements derived from the image and/or the list-mode data, and descriptors for clinical/pathology and research. An international, vendor-supported, non-proprietary specification will allow pathologists, researchers, and companies to develop and use image capture/analysis software, as well as list-mode analysis software, without worrying about incompatibilities between proprietary vendor formats. Presently, efforts to create specifications and/or descriptions of these formats include the Laboratory Digital Imaging Project (LDIP) Data Exchange Specification; extensions to the Digital Imaging and Communications in Medicine (DICOM); Open Microscopy Environment (OME); Flowcyt, an extension to the present Flow Cytometry Standard (FCS); and CytometryML. The feasibility of creating a common data specification for digital microscopy and flow cytometry in a manner consistent with its use for medical devices and interoperability with both hospital information and picture archiving systems has been demonstrated by the creation of the CytometryML schemas. The feasibility of creating a software system for digital microscopy has been demonstrated by the OME. CytometryML consists of schemas that describe instruments and their measurements. These instruments include digital microscopes and flow cytometers. Optical components including the instruments' excitation and emission parts are described. The description of the measurements made by these instruments includes the tagged molecule, data acquisition subsystem, and the format of the list-mode and/or image data. Many
Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner
Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivities of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards that can be used to encode data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/) database. General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
Engel, F.L.; Geerings, M.P.W.
Four different methods of question presentation in interactive computer-aided learning of Dutch-English word pairs are evaluated experimentally. These methods are: 1) the 'open-question method', 2) the 'multiple-choice method', 3) the 'sequential method' and 4) the 'true/false method'. When
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
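The local linear regression (LLR) approximator at the heart of both algorithms can be sketched in a few lines: a prediction comes from a line fitted to the k nearest stored samples. This is a generic sketch with invented one-dimensional data, not the authors' actor-critic code:

```python
def llr_predict(memory, x, k=3):
    """Local linear regression: fit a least-squares line to the k stored
    samples nearest to the query point x, then evaluate it at x."""
    near = sorted(memory, key=lambda p: abs(p[0] - x))[:k]
    n = len(near)
    mx = sum(p[0] for p in near) / n
    my = sum(p[1] for p in near) / n
    denom = sum((p[0] - mx) ** 2 for p in near)
    slope = 0.0 if denom == 0 else (
        sum((p[0] - mx) * (p[1] - my) for p in near) / denom
    )
    return my + slope * (x - mx)

# toy memory of (input, target) samples lying on y = 2x
memory = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
```

In the actor-critic setting, such a memory of observed transitions doubles as the learned process model, which is what enables the efficient policy updates described above.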
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...
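A minimal pure-Python sketch of the ensemble principle, bagging one-feature "decision stumps" and combining them by majority vote, illustrates why aggregating weak learners improves robustness. The data and names are invented for the example:

```python
import random

def bootstrap_stump(data, rng):
    """Train a threshold classifier (decision stump, predict +1 if x > t)
    on a bootstrap resample of the data -- the weak learner in bagging."""
    sample = [rng.choice(data) for _ in data]
    best = None
    # pick the threshold minimising training error on the resample
    for t in sorted({x for x, _ in sample}):
        err = sum(1 for x, y in sample if (x > t) != (y == 1))
        if best is None or err < best[0]:
            best = (err, t)
    return best[1]

def bagged_predict(thresholds, x):
    """Majority vote over the ensemble of stumps."""
    votes = sum(1 if x > t else -1 for t in thresholds)
    return 1 if votes > 0 else -1

rng = random.Random(0)
# noisy 1-D data: label +1 above 0, -1 below, with two flipped points
data = [(-3, -1), (-2, -1), (-1, -1), (-0.5, 1),
        (0.5, -1), (1, 1), (2, 1), (3, 1)]
ensemble = [bootstrap_stump(data, rng) for _ in range(25)]
```

Individual stumps overfit whichever noisy points land in their resample; the vote averages those errors out, which is the robustness gain the text describes.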
Euchner, Fabian; Kästli, Philipp; Heiniger, Lukas; Saul, Joachim; Schorlemmer, Danijel; Clinton, John
QuakeML is a community-backed data model for seismic event parameter description. Its current version 1.2, released in 2013, has become the gold standard for parametric data dissemination at seismological data centers, and has been adopted as an FDSN standard. It is supported by several popular software products and data services, such as FDSN event web services, QuakePy, and SeisComP3. Work on the successor version 2.0 is under way since 2015. The scope of QuakeML has been expanded beyond event parameter description. Thanks to a modular architecture, many thematic packages have been added, which cover peak ground motion, site and station characterization, hydraulic parameters of borehole injection processes, and macroseismics. The first three packages can be considered near final and implementations of program codes and SQL databases are in productive use at various institutions. A public community review process has been initiated in order to turn them into community-approved standards. The most recent addition is a package for single station quake location, which allows a detailed probabilistic description of event parameters recorded at a single station. This package adds some information elements such as angle of incidence, frequency-dependent phase picks, and dispersion relations. The package containing common data types has been extended with a generic type for probability density functions. While on Earth, single station methods are niche applications, they are of prominent interest in planetary seismology, e.g., the NASA InSight mission to Mars. So far, QuakeML is lacking a description of seismic instrumentation (inventory). There are two existing standards of younger age (FDSN StationXML and SeisComP3 Inventory XML). We discuss their respective strengths, differences, and how they could be combined into an inventory package for QuakeML, thus allowing full interoperability with other QuakeML data types. QuakeML is accompanied by QuakePy, a Python package
Burgos, Daniel; Specht, Marcus
Please cite this publication as: Burgos, D., & Specht, M. (2006). Adaptive e-learning methods and IMS Learning Design. In Kinshuk, R. Koper, P. Kommers, P. Kirschner, D. G. Sampson & W. Didderen (Eds.), Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies (pp.
Traditional teaching practice based on the textbook-whiteboard-lecture-homework-test paradigm is not very effective in helping students with diverse academic backgrounds achieve higher-order critical thinking skills such as analysis, synthesis, and evaluation. Consequently, there is a critical need for developing a new pedagogical approach to create a collaborative and interactive learning environment in which students with complementary academic backgrounds and learning skills can work together to enhance their learning outcomes. In this presentation, I will discuss an innovative teaching method ("Team-Based Learning", TBL) which I recently developed at the National University of Singapore to promote active learning among students with diverse learning abilities in the environmental engineering program. I implemented this new educational activity in a graduate course. Student feedback indicates that this pedagogical approach is appealing to most students, and promotes active and interactive learning in class. Data will be presented to show that the innovative teaching method has contributed to improved student learning and achievement.
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
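The notion of combining kernels, which underlies the paper's multilayer multiple-kernel structure, can be illustrated with a convex combination of two standard base kernels. This is a generic sketch; the weights and kernel choices here are illustrative, not the authors' learned ones:

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) base kernel."""
    return math.exp(-gamma * (x - z) ** 2)

def poly_kernel(x, z, degree=2):
    """Inhomogeneous polynomial base kernel."""
    return (x * z + 1.0) ** degree

def combined_kernel(x, z, w=(0.5, 0.5)):
    """Convex combination of base kernels: any nonnegative weighted sum
    of valid kernels is itself a valid kernel, which is the space that
    multiple kernel learning optimises over."""
    return w[0] * rbf_kernel(x, z) + w[1] * poly_kernel(x, z)
```

Multiple kernel learning fits the weights `w` (and, in the multilayer case, stacks such combinations) jointly with the SVM parameters, rather than fixing them as done here.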
Virtual learning is a type of electronic learning system based on the web. It models traditional in-person learning by providing virtual access to classes, tests, homework, feedback, etc. Students and teachers can interact through chat rooms or other virtual environments. Web 2.0 services are usually used for this method. Internet audio-visual tools, multimedia systems, CD-ROMs, videotapes, animation, video conferencing, and interactive phones can all be used to deliver data to the students. E-learning can occur in or out of the classroom. It is time-saving, with lower costs compared to traditional methods. It can be self-paced, it is suitable for distance learning, and it is flexible. It is a great learning style for continuing education, and students can independently solve their problems, but it has its disadvantages too. Therefore, blended learning (a combination of conventional and virtual education) is being used worldwide and has improved the knowledge, skills and confidence of pharmacy students. The aim of this study is to review, discuss and introduce different methods of virtual learning for pharmacy students. Google Scholar, PubMed and Scopus databases were searched for topics related to virtual, electronic and blended learning, and different styles such as computer simulators, virtual practice environment technology, virtual mentor, virtual patient, 3D simulators, etc. are discussed in this article. Our review of different studies in these areas shows that students are highly satisfied with virtual and blended types of learning.
Mohammadjani, Farzad; Tonkaboni, Forouzan
The aim of the present research is to compare the effects of the cooperative learning teaching method and the lecture teaching method on students' learning and satisfaction levels. The research population consisted of all the fourth grade elementary school students of educational district 4 in Shiraz. The statistical population…
This unique text/reference describes in detail the latest advances in unsupervised process monitoring and fault diagnosis with machine learning methods. Abundant case studies throughout the text demonstrate the efficacy of each method in real-world settings. The broad coverage examines such cutting-edge topics as the use of information theory to enhance unsupervised learning in tree-based methods, the extension of kernel methods to multiple kernel learning for feature extraction from data, and the incremental training of multilayer perceptrons to construct deep architectures for enhanced data
Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen
With the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data. Bioinformatics is an interdisciplinary field, encompassing the acquisition, management, analysis, interpretation and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise for enabling computers to assist humans in the analysis of large, complex data sets. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.
Everly, Marcee C
To report the transformation from lecture to more active learning methods in a maternity nursing course and to evaluate whether student perception of improved learning through active-learning methods is supported by improved test scores. The process of transforming a course into an active-learning model of teaching is described. A voluntary mid-semester survey for student acceptance of the new teaching method was conducted. Course examination results, from both a standardized exam and a cumulative final exam, among students who received lecture in the classroom and students who had active learning activities in the classroom were compared. Active learning activities were very acceptable to students. The majority of students reported learning more from having active-learning activities in the classroom rather than lecture-only and this belief was supported by improved test scores. Students who had active learning activities in the classroom scored significantly higher on a standardized assessment test than students who received lecture only. The findings support the use of student reflection to evaluate the effectiveness of active-learning methods and help validate the use of student reflection of improved learning in other research projects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sometimes it is difficult to draw 20 mL of blood from a patient with bad veins. On two occasions, we could collect only about 4 mL of blood, and that only with a great deal of difficulty, and then we carried out the routine labelling procedure. Labelling efficiencies of 98.2% and 95.6% were achieved. The white cell scan was negative in one patient, but positive in the other. In a third patient, labelling efficiency was compared between 5 mL and 20 mL blood volumes separately, and the results were found to be identical: 98.5% and 98.4%, respectively. As we have achieved the usual pattern of white cell scan with as little as 4-5 mL of blood, it appears that a sufficient number of white cells, capable of generating a white cell scan, is present even in 4-5 mL of blood, and so it seems rational to reduce the blood volume from 20 mL to 4 or 5 mL. However, further studies are warranted before adopting this modification. The procedure appears to carry the following advantages: ease of blood collection, handling and re-injection, and less risk to the patient.
Indira I; Latha G; Lakshmi Narayanamma V
Background: The ripeness of the cervix is an important determinant of the success of induction of labour. One of the mechanical methods of cervical ripening is the use of a transcervical Foley catheter. In this study we compared the efficacy in induction of labour of two insufflation volumes of the Foley catheter bulb, 30 mL and 60 mL. Methods: This was a randomized, single-blind study conducted in 100 women, randomly allocated to the 30 mL group (n=50) and the 60 mL group (n=50). Foley’s cath...
Scholz, G.; Dewulf, A.; Pahl-Wostl, C.
Social learning among different stakeholders is often a goal in problem solving contexts such as environmental management. Participatory methods (e.g., group model-building and role playing games) are frequently assumed to stimulate social learning. Yet understanding if and why this assumption is
Introduction to machine learning, covering statistical inference (Bayes, EM, ML/MaxEnt duality), algebraic and spectral methods (PCA, LDA, CCA, clustering), and PAC learning (the formal model, VC dimension, the Double Sampling theorem).
J.G. Bagi; N.K. Hashilkar
Background: Blended learning integrates face-to-face classroom learning with technology-enhanced online material. It provides the convenience, speed and cost-effectiveness of e-learning with the personal touch of traditional learning. Objective: The objective of the present study was to assess the effectiveness of a combination of an e-learning module and traditional teaching (blended learning), as compared to traditional teaching alone, in teaching acid-base homeostasis to Phase I MB...
Neruda, Roman; Kudová, Petra
Vol. 21 (2005), pp. 1131-1142. ISSN 0167-739X. R&D Projects: GA ČR GP201/03/P163; GA ČR GA201/02/0428. Institutional research plan: CEZ:AV0Z10300504. Keywords: radial basis function networks * hybrid supervised learning * genetic algorithms * benchmarking. Subject RIV: BA - General Mathematics. Impact factor: 0.555, year: 2005
Dagliati, Arianna; Marini, Simone; Sacchi, Lucia; Cogni, Giulia; Teliti, Marsida; Tibollo, Valentina; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo
One of the areas where Artificial Intelligence is having more impact is machine learning, which develops algorithms able to learn patterns and decision rules from data. Machine learning algorithms have been embedded into data mining pipelines, which can combine them with classical statistical strategies to extract knowledge from data. Within the EU-funded MOSAIC project, a data mining pipeline has been used to derive a set of predictive models of type 2 diabetes mellitus (T2DM) complications based on electronic health record data of nearly one thousand patients. The pipeline comprises clinical center profiling, predictive model targeting, predictive model construction and model validation. After handling missing data by means of random forests (RF) and applying suitable strategies to deal with class imbalance, we used Logistic Regression with stepwise feature selection to predict the onset of retinopathy, neuropathy, or nephropathy at different time scenarios: at 3, 5, and 7 years from the first visit to the Hospital Center for Diabetes (not from the diagnosis). The considered variables are gender, age, time from diagnosis, body mass index (BMI), glycated hemoglobin (HbA1c), hypertension, and smoking habit. The final models, tailored to each complication, provided an accuracy of up to 0.838. Different variables were selected for each complication and time scenario, leading to specialized models that are easy to translate into clinical practice.
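A minimal sketch of the modelling step described above: impute missing values, then build a logistic regression with forward stepwise feature selection. Mean imputation stands in for the random-forest imputation, the data are synthetic, and the feature roles are placeholders, not the study's variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 4))                    # four toy clinical features
X[rng.random((n, 4)) < 0.1] = np.nan           # inject 10% missingness
score = 1.5 * np.nan_to_num(X[:, 2]) - 1.0 * np.nan_to_num(X[:, 1])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-score))).astype(float)

# Step 1: simple mean imputation (a stand-in for the RF-based imputation).
X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)

def fit_logreg(Xs, y, iters=500, lr=0.1):
    """Gradient-descent logistic regression; returns weights and log-loss."""
    Xb = np.hstack([np.ones((len(y), 1)), Xs])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-Xb @ w)), 1e-9, 1 - 1e-9)
    return w, float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Step 2: forward stepwise selection by training log-loss improvement.
selected, remaining = [], [0, 1, 2, 3]
_, best_loss = fit_logreg(np.zeros((n, 0)), y)
while remaining:
    losses = {j: fit_logreg(X[:, selected + [j]], y)[1] for j in remaining}
    j_best = min(losses, key=losses.get)
    if best_loss - losses[j_best] < 5e-3:      # stop on negligible gain
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best_loss = losses[j_best]

print("selected feature indices:", selected)
```

A production pipeline would select on held-out or criterion-penalised loss (e.g. AIC) rather than training loss; the loop above shows only the mechanism.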
McCabe, Jessica; Monkiewicz, Michelle; Holcomb, John; Pundik, Svetlana; Daly, Janis J
To compare response to upper-limb treatment using robotics plus motor learning (ML) versus functional electrical stimulation (FES) plus ML versus ML alone, according to a measure of complex functional everyday tasks for chronic, severely impaired stroke survivors. Single-blind, randomized trial. Medical center. Enrolled subjects (N=39) were >1 year post single stroke (attrition rate=10%; 35 completed the study). All groups received treatment 5d/wk for 5h/d (60 sessions), with unique treatment as follows: ML alone (n=11) (5h/d partial- and whole-task practice of complex functional tasks), robotics plus ML (n=12) (3.5h/d of ML and 1.5h/d of shoulder/elbow robotics), and FES plus ML (n=12) (3.5h/d of ML and 1.5h/d of FES wrist/hand coordination training). Primary measure: Arm Motor Ability Test (AMAT), with 13 complex functional tasks; secondary measure: upper-limb Fugl-Meyer coordination scale (FM). There was no significant difference found in treatment response across groups (AMAT: P≥.584; FM coordination: P≥.590). All 3 treatment groups demonstrated clinically and statistically significant improvement in response to treatment (AMAT and FM coordination: P≤.009). A group treatment paradigm of 1:3 (therapist/patient) ratio proved feasible for provision of the intensive treatment. There were no adverse effects. Severely impaired stroke survivors with persistent (>1y) upper-extremity dysfunction can make clinically and statistically significant gains in coordination and functional task performance in response to robotics plus ML, FES plus ML, and ML alone in an intensive and long-duration intervention; no group differences were found. Additional studies are warranted to determine the effectiveness of these methods in the clinical setting. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Recent decades have seen rapid advances in automation processes, supported by modern machines and computers. The result is significant increases in system complexity, state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dyna...
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Specht, Marcus; Burgos, Daniel
Please, cite this publication as: Specht, M. & Burgos, D. (2006). Implementing Adaptive Educational Methods with IMS Learning Design. Proceedings of Adaptive Hypermedia. June, Dublin, Ireland. Retrieved June 30th, 2006, from http://dspace.learningnetworks.org
Mwakigonja Amos R
Abstract Background: HIV infection is reported to be associated with some malignant lymphomas (ML), so-called AIDS-related lymphomas (ARL), which have an aggressive behavior and poor prognosis. The ML frequency, pathogenicity, clinical patterns and possible association with AIDS in Tanzania are not well documented, impeding the development of preventive and therapeutic strategies. Methods: Sections of 176 archival formalin-fixed paraffin-embedded biopsies of ML patients at Muhimbili National Hospital (MNH)/Muhimbili University of Health and Allied Sciences (MUHAS), Tanzania, from 1996-2001 were stained with hematoxylin and eosin, and selected cases (70) were stained for expression of pan-leucocytic (CD45), B-cell (CD20), T-cell (CD3), Hodgkin/RS cell (CD30), histiocyte (CD68) and proliferation (Ki-67) antigen markers. Corresponding clinical records were also evaluated. Available sera from 38 ML patients were screened (ELISA) for HIV antibodies. Results: The proportion of ML among all diagnosed tumors at MNH during the 6-year period was 4.2% (176/4200), comprising 77.84% non-Hodgkin lymphoma (NHL), including 19.32% Burkitt's lymphoma (BL), and 22.16% Hodgkin's disease (HD). The frequency of ML tumors increased from 0.42% (1997) to 0.70% (2001), and 23.7% of tested sera from these patients were HIV positive. The mean age for all ML was 30, the age range was 3-91, and the peak age was 1-20 years. The male:female ratio was 1.8:1. Supra-diaphragmatic presentation was commonest, and the histological sub-types were mostly aggressive B-cell lymphomas; however, no clear cases of primary effusion lymphoma (PEL) or primary central nervous system lymphoma (PCNSL) were diagnosed. Conclusion: Malignant lymphomas apparently increased significantly among diagnosed tumors at MNH between 1996 and 2001, predominantly among the young, HIV infected and AIDS patients. The frequent aggressive clinical and histological presentation as well as the dominant B-immunophenotype and the HIV serology indicate a pathogenic association with AIDS. Therefore
ABSTRACT: The active learning method, which requires students to take an active role in the classroom learning process, has been applied in the Department of Chemical Engineering, Faculty of Industrial Technology, Islamic University of Indonesia for the Unit Operations II subject in the even semester of academic year 2015/2016. The purpose of implementing the learning method is to assist students in achieving competencies associated with the Unit Operations II subject and to help in creating...
Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J
Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.
Soil erosion and sediment transport in natural gravel-bed streams are important processes which affect both the morphology and the ecology of the earth's surface. For gravel-bed rivers at near-incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods implemented for short-term prediction of entrainment instances of individual grains exposed in fully developed near-boundary turbulent flows. Results obtained by network architectures of variable complexity, based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity, as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile-particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which synchronously acquire data for the instantaneous flow and the particle response, respectively. The former is used to record the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into training and validation subsets used to perform the corresponding learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events. Such a tool may find application in management and regulation of stream flows downstream of dams for stream
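To make the prediction task concrete, here is a toy stand-in (not the study's ANN/ANFIS models): a one-hidden-layer network trained by batch gradient descent to flag "entrainment events" from a few lagged values of a synthetic velocity series. The series, the event threshold and the network size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
T, lags = 3000, 4
# Smoothed Gaussian noise as a stand-in for a turbulent velocity record.
u = np.convolve(rng.normal(size=T + lags), np.ones(5) / 5, mode="same")
X = np.column_stack([u[i:T + i] for i in range(lags)])   # lagged velocities
y = (X.mean(axis=1) > 0.0).astype(float)                 # event: energetic flow

# One-hidden-layer network with tanh units, trained by full-batch descent.
W1 = rng.normal(0.0, 0.5, (lags, 8)); b1 = np.zeros(8)
w2 = rng.normal(0.0, 0.5, 8); b2 = 0.0
for _ in range(600):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ w2 + b2)))
    g = (p - y) / T                          # gradient of loss w.r.t. logits
    w2 -= 0.5 * H.T @ g; b2 -= 0.5 * g.sum()
    gH = np.outer(g, w2) * (1.0 - H ** 2)    # backprop through tanh
    W1 -= 0.5 * X.T @ gH; b1 -= 0.5 * gH.sum(axis=0)

acc = float(((p > 0.5) == (y == 1.0)).mean())
print(f"event-prediction accuracy = {acc:.2f}")
```

The real studies predict events from measured LDV velocity components and compare such networks against fuzzy-inference hybrids; this sketch shows only the supervised set-up.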
Christensen, Hans Peter; Vigild, Martin Etchells; Thomsen, Erik Vilain
Students’ study strategies when exposed to activating teaching methods are measured, analysed and compared to study strategies in more traditional lecture-based teaching.
Dwiyogo, Wasis D.
The main objectives of the study were to develop and investigate the implementation of blended learning based method for problem-solving. Three experts were involved in the study and all three had stated that the model was ready to be applied in the classroom. The implementation of the blended learning-based design for problem-solving was…
Building relationships in the classroom is an essential part of any teacher's career. Having healthy teacher-to-learner and learner-to-learner relationships is an effective way to help prevent pedagogical failure, social conflict and quarrelsome behavior. Many strategies are available that can be used to achieve good long-lasting relationships in the classroom setting. Successful pedagogical work by teachers in the classroom requires detailed knowledge of learners' relationships. A good understanding of these relationships is necessary, especially in the case of a teenage class. This sensitive period of adolescence demands the attention of all teachers, who should deal with the problems of their learners. Special care should be focused on children who fall outside their classmates' interest (so-called isolated learners, or isolates) in such a class, and on possibilities for integrating them into the class. A natural idea for doing so is to use modern, non-traditional teaching/learning methods, especially methods based on work in small groups involving learners' cooperation. Small-group education (especially problem-based learning, project-based learning, cooperative learning, collaborative learning or inquiry-based learning), as one of these methods, involves a high degree of interaction. The effectiveness of learning groups is determined by the extent to which the interaction enables members to clarify their own understanding, build upon each other's contributions, sift out meanings, and ask and answer questions. The influence of this kind of method (especially cooperative learning, CL) on learners' relationships was the subject of the research described below. Within small-group education, students work with their classmates to solve complex and authentic problems that help develop content knowledge as well as problem-solving, reasoning, communication, and self-assessment skills. The aim of the research was to answer the question: Can the
This research aims to examine the application of the Think Pair Share (TPS) learning method in improving learning motivation and learning achievement in the subject Introduction to Accounting I among Accounting Study Program students of Politeknik Harapan Bersama. Data were collected using the observation method, the test method, and the documentation method. The research instruments were an observation sheet, a questionnaire and test questions. This research used a classroom action research design, which is action-implementation-oriented research aimed at improving quality or solving problems in a group by carefully observing the success rate resulting from the action. The analysis used descriptive qualitative and quantitative methods. The results showed that applying the Think Pair Share (TPS) learning method can improve learning motivation and achievement. Before the implementation of the action, the obtained score was 67%; in the first cycle it increased to 72%, and in the second cycle to 80%. In addition, based on questionnaires distributed to students, the Accounting Learning Motivation score also increased, from 76% in the first cycle to 79%. Furthermore, in the first cycle the students' pre-test and post-test scores increased from 68.86 to 76.71, while in the second cycle they increased from 79.86 to 84.86.
Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin
Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles. The architectures include a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN) and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, while the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
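One detail worth illustrating is the angle-periodicity problem: regressing degrees directly treats -179° and +179° as far apart. A common remedy, sketched below on synthetic data with plain least squares standing in for the deep networks, is to predict the sine and cosine of each angle and recover the angle with atan2. The features and the data-generating process are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_feat = 500, 5
F = rng.normal(size=(n, n_feat))                 # toy per-residue features
w_true = 0.25 * rng.normal(size=n_feat)
angle = F @ w_true + rng.normal(0, 0.05, n)      # "true" torsion angles (radians)

# Regress sin and cos of the angle, then recover it with atan2 so that
# the 360-degree wrap-around is handled correctly.
targets = np.column_stack([np.sin(angle), np.cos(angle)])
Fb = np.hstack([F, np.ones((n, 1))])             # add intercept column
W, *_ = np.linalg.lstsq(Fb, targets, rcond=None)
pred = np.arctan2(Fb @ W[:, 0], Fb @ W[:, 1])

# Circular mean absolute error in degrees (differences wrapped to ±180°).
diff = np.degrees(np.angle(np.exp(1j * (pred - angle))))
mae = float(np.mean(np.abs(diff)))
print(f"circular MAE = {mae:.1f} degrees")
```

The paper's networks replace the least-squares map with deep feed-forward and recurrent models on real protein features; the (sin, cos) target encoding is the transferable idea here.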
Teaching methods in MBA and Lifelong Learning Programmes (LLP) for managers should be topically relevant in terms of content as well as the teaching methods used. In terms of content, an integral part of MBA and Lifelong Learning Programmes for managers should be the development of participants' leadership competencies and their understanding of current leadership concepts. The teaching methods in educational programmes for managers as adult learners should correspond to the strategy of learner-centred teaching, which focuses on the participants' learning process and their active involvement in class. The focus on the participants' learning process also raises the question of whether the programme's participants perceive the teaching methods used as useful and relevant for their development as leaders. The paper presents the results of an analysis of responses to these questions in a sample of 54 Czech participants in the MBA programme and lifelong learning programmes at the University of Economics, Prague. The data was acquired from written or electronically submitted questionnaires and was analysed in relation to the usefulness of the teaching methods for understanding the concepts of leadership, for leadership skills development, and for respondents' personal growth. The results show that the respondents most valued the methods that enabled them to get feedback, activated them throughout the programme and got them involved in discussions with others in class. Implications for managerial education practices are discussed.
Ni Putu Wulan Purnama Sari
Background and Purpose: Caring is the essence of the nursing profession. Stimulation of a caring attitude should start early, and effective teaching methods are needed to foster caring attitudes and improve learning achievement. This study aimed to explain the effect of applying the flipped classroom learning method on the caring attitude and learning achievement of new student nurses at nursing institutions in Surabaya. Method: This is a pre-experimental study using the one-group pretest-posttest and posttest-only designs. The population was all new student nurses at nursing institutions in Surabaya. Inclusion criteria: female, 18-21 years old, majoring in nursing of their own volition with nursing as first choice during the student selection process, and active status in the even semester of the 2015/2016 academic year. The sample size was 67, selected by total sampling. Variables: (1) independent: application of the flipped classroom learning method; (2) dependent: caring attitude, learning achievement. Instruments: teaching plan, assignment descriptions, presence list, assignment assessment rubrics, study materials, and caring attitude questionnaires. Data analysis: paired and one-sample t tests. Ethical clearance was available. Results: Most respondents were 20 years old (44.8%), had graduated from high school in Surabaya (38.8%), and lived with their parents (68.7%) in their own homes (64.2%). All data were normally distributed. The flipped classroom learning method improved caring attitude by 4.13% and proved effective for improving caring attitude (p=0.021) and learning achievement (p=0.000). Conclusion and Recommendation: The flipped classroom was effective for improving the caring attitude and learning achievement of new student nurses. Mixed methods and a larger sample are recommended for further study.
Sulisworo, Dwi; Sutadi, Novitasari
There have been many studies related to the implementation of cooperative learning. However, there are still many problems in schools related to learning outcomes in science lessons, especially in physics. The aim of this study is to observe the application of the science learning cycle (SLC) model to improving scientific literacy for secondary…
Jaime Leonardo Bobadilla Molina
The increasing number of protein three-dimensional (3D) structures determined by X-ray and NMR technologies, as well as structures predicted by computational methods, results in the need for automated methods to provide initial annotations. We have developed a new method for recognizing sites in three-dimensional protein structures. Our method is based on a previously reported algorithm for creating descriptions of protein microenvironments using physical and chemical properties at multiple levels of detail. The recognition method takes three inputs: 1. A set of control sites that share some structural or functional role. 2. A set of control nonsites that lack this role. 3. A single query site. A support vector machine classifier is built using feature vectors in which each component represents a property in a given volume. Validation against an independent test set shows that this recognition approach has high sensitivity and specificity. We also describe the results of scanning four calcium-binding proteins (with the calcium removed) using a three-dimensional grid of probe points at 1.25 angstrom spacing. The system finds the sites in the proteins, identifying points at or near the binding sites. Our results show that property-based descriptions along with support vector machines can be used for recognizing protein sites in unannotated structures.
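The classifier construction can be sketched with a linear SVM trained by the Pegasos sub-gradient method on synthetic "site"/"non-site" feature vectors. The feature generation and dimensions are invented, and a plain linear SVM stands in for whatever solver and kernel the real system used on its microenvironment property descriptors.

```python
import numpy as np

rng = np.random.default_rng(3)
n_feat = 10
sites = rng.normal(0.8, 1.0, (40, n_feat))      # control sites (share the role)
nonsites = rng.normal(-0.8, 1.0, (40, n_feat))  # control non-sites (lack it)
X = np.vstack([sites, nonsites])
y = np.array([1.0] * 40 + [-1.0] * 40)

def pegasos(X, y, lam=0.01, epochs=200):
    """Linear SVM via the Pegasos stochastic sub-gradient method."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1:            # hinge-loss margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w          # regularisation shrink only
    return w

w = pegasos(X, y)
query = rng.normal(0.8, 1.0, n_feat)             # a query vector to classify
print("query classified as site:", bool(w @ query > 0))
acc = float((np.sign(X @ w) == y).mean())
print(f"training accuracy = {acc:.2f}")
```

Scanning a structure, as in the calcium experiment, amounts to evaluating `w @ x` for the feature vector `x` of every grid probe point and flagging high-scoring points.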
Jæger, Pia; Koscielniak-Nielsen, Zbigniew J; Hilsted, Karen Lisa
weakness. METHODS: We performed a paired, blinded, randomized trial including healthy men. All subjects received bilateral ACBs with ropivacaine 0.1%; 10 mL in 1 leg and 30 mL in the other leg. The primary outcome was the difference in number of subjects with quadriceps strength reduced by more than 25...... of the predefined time points or in sensory block. The only statistically significant difference between volumes was found in the 30-Second Chair Stand Test at 2 hours (P = 0.02), but this difference had disappeared at 4 hours (P = 0.06). CONCLUSIONS: Varying the volume of ropivacaine 0.1% used for ACB between 10...
This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.
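As a miniature of Bayesian parameter learning from incomplete data, the sketch below estimates one conditional probability table of a two-node network A -> B under a Beta(1,1) (i.e. Dirichlet) prior, using a crude Monte Carlo imputation loop for the missing observations. The network, its probabilities and the loop are invented for illustration and are far simpler than the simulation methods the publication investigates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
A = (rng.random(n) < 0.3).astype(int)
B = (rng.random(n) < np.where(A == 1, 0.9, 0.2)).astype(int)
B_obs = B.astype(float)
B_obs[rng.random(n) < 0.2] = np.nan            # 20% of B is missing

def posterior_mean(A, B):
    """Posterior mean of P(B=1 | A=a) under a Beta(1,1) prior, a in {0,1}."""
    return [(np.sum(B[A == a]) + 1) / (np.sum(A == a) + 2) for a in (0, 1)]

# Monte Carlo: repeatedly impute missing B from the current estimates,
# re-estimate, and average the draws (a crude Gibbs-style loop).
theta = np.array([0.5, 0.5])
draws = []
miss = np.isnan(B_obs)
for _ in range(200):
    B_fill = B_obs.copy()
    B_fill[miss] = (rng.random(miss.sum()) < theta[A[miss]]).astype(float)
    theta = np.array(posterior_mean(A, B_fill))
    draws.append(theta)
est = np.mean(draws[50:], axis=0)              # discard burn-in draws
print(f"P(B=1|A=0) ~ {est[0]:.2f}, P(B=1|A=1) ~ {est[1]:.2f}")
```

A full treatment would sample theta from its Beta posterior rather than taking the mean, and would also place a prior over network structure; both refinements are beyond this sketch.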
Johansen, Steffen Kjær
T becomes a learning method rather than a teaching method. Besides discussing the pedagogical characteristics of EiT, the study also gives a general introduction to EiT as it was taught at SDU fall 2016 as well as a brief review of the basic theory behind experiential learning. As such this study serves...... courses. Most of the practical courses are group work along the lines of project based learning. EiT is in a way both. It is a practical course in as much as our students get hands-on experience with interdisciplinary team work and innovation processes. EiT is a theoretical course in as much as our...... both as an introduction to e.g. new teachers of EiT but also as a starting point for a clarification of the features that makes EiT an experiential learning endeavor....
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, of which 9 are ML methods. 12 simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
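The experimental design (fit on the first 300 points, forecast the last 10 multi-step ahead, compare method families) can be miniaturized as follows. An AR(1) model fit by least squares stands in for the stochastic family and a k-nearest-neighbour regressor on one lag for the ML family; both are toy stand-ins, and the simulated process is chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(5)
phi = 0.7
x = np.zeros(310)
for t in range(1, 310):                        # simulate an AR(1) series
    x[t] = phi * x[t - 1] + rng.normal()
train, test = x[:300], x[300:]

# Stochastic method: estimate phi by least squares, iterate the recursion.
phi_hat = float(np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1]))
ar_fcst = [train[-1]]
for _ in range(10):
    ar_fcst.append(phi_hat * ar_fcst[-1])
ar_fcst = np.array(ar_fcst[1:])

# ML method: k-nearest neighbours on the lagged value, iterated the same way.
def knn_predict(x0, k=20):
    idx = np.argsort(np.abs(train[:-1] - x0))[:k]
    return train[1:][idx].mean()

knn_fcst = [train[-1]]
for _ in range(10):
    knn_fcst.append(knn_predict(knn_fcst[-1]))
knn_fcst = np.array(knn_fcst[1:])

rmse = lambda f: float(np.sqrt(np.mean((f - test) ** 2)))
print(f"AR(1) RMSE = {rmse(ar_fcst):.2f}, kNN RMSE = {rmse(knn_fcst):.2f}")
```

The study repeats this kind of comparison over 2000 series per experiment and 18 metrics; a single series, as here, says nothing general, which is exactly the paper's point about case studies.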
Business English is integrated with visual-audio-oral English, which focuses on applying English listening and speaking skills in common business occasions and on acquiring business knowledge and improving skills through English. This paper analyzes the Business English Visual-audio-oral Course and the learning situation of higher vocational students (learning objectives, interests, vocabulary, listening and speaking), and focuses on effective methods to guide higher vocational students to learn the Business English Visual-audio-oral Course, master Business English knowledge, and improve communicative competence in Business English.
Röhrig, S; Hempel, D; Stenger, T; Armbruster, W; Seibel, A; Walcher, F; Breitkreutz, R
Current teaching methods in graduate and postgraduate training often include frontal presentations. Especially in ultrasound education, not only knowledge but also sensorimotor and visual skills need to be taught. This requires new learning methods. This study examined which types of teaching methods are preferred by participants in ultrasound training courses before, during and after the course by analyzing a blended learning concept. It also investigated how much time trainees are willing to spend on such activities. A survey was conducted at the end of a certified ultrasound training course. Participants were asked to complete a questionnaire based on a visual analogue scale (VAS) in which three categories were defined: category (1) vote for acceptance with a two-thirds majority (VAS 67-100%), category (2) simple acceptance (VAS 50-67%) and category (3) rejection (VAS below 50%). Participants voted for a blended learning program with interactive elements, short presentations (less than 20 min) incorporating interaction with the audience, hands-on sessions in small groups, an alternation between presentations and hands-on sessions, live demonstrations and quizzes. For post-course learning, interactive and media-assisted approaches were preferred, such as e-learning, films of the presentations and the possibility to stay in contact with instructors in order to discuss the results. Participants also voted for maintaining a logbook for documentation of results. The results of this study indicate the need for interactive learning concepts and blended learning activities. Directors of ultrasound courses may consider these aspects and are encouraged to develop sustainable learning pathways.
What are the competencies for tomorrow's engineering education, and what are the implications of these for the choice of teaching content and learning methods? The paper analyses two trends: the traditional and the techno-science approach. These two trends are based on technological innovation and change processes and their impact on educational content and methods.
Liu, Shuang; Breit, Rhonda
The capacity to conduct research is essential for university graduates to survive and thrive in their future career. However, research methods courses have often been considered by students as "abstract", "uninteresting", and "hard". Thus, motivating students to engage in the process of learning research methods has become a crucial challenge for…
Geary, W.J.; James, A.M. (ed.)
This book presents the analytical uses of radioactive isotopes within the context of radiochemistry as a whole. It is designed for scientists with relatively little background knowledge of the subject. Thus the initial emphasis is on developing the basic concepts of radioactive decay, particularly as they affect the potential usage of radioisotopes. Discussion of the properties of various types of radiation, and of factors such as half-life, is related to practical considerations such as counting and preparation methods, and handling/disposal problems. Practical aspects are then considered in more detail, and the various radioanalytical methods are outlined with particular reference to their applicability. The approach is 'user friendly' and the use of self assessment questions allows the reader to test his/her understanding of individual sections easily. For those who wish to develop their knowledge further, a reading list is provided.
The need for accurate photometric redshifts estimation is a topic that has fundamental importance in Astronomy, due to the necessity of efficiently obtaining redshift information without the need of spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
An important issue for agricultural planning purposes is the accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
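A hedged sketch of this kind of ranking, on synthetic stand-in data: sklearn regressors (a decision tree substitutes for M5-Prime, which has no common Python implementation) are scored with the four metrics computed by hand:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (120, 4))                               # stand-in agronomic features
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.2, 120)  # synthetic "yield"
Xtr, Xte, ytr, yte = X[:90], X[90:], y[:90], y[90:]           # train on "year 1", test on "year 2"

def metrics(y_true, y_pred):
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))                         # root mean square error
    rrse = np.sqrt(np.sum(err ** 2) /                         # root relative square error:
                   np.sum((y_true - y_true.mean()) ** 2))     # relative to predicting the mean
    mae = np.mean(np.abs(err))                                # mean absolute error
    r = np.corrcoef(y_true, y_pred)[0, 1]                     # correlation factor
    return rmse, rrse, mae, r

models = {
    "linear": LinearRegression(),
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "tree": DecisionTreeRegressor(random_state=0),            # stand-in for M5-Prime
}
results = {name: metrics(yte, m.fit(Xtr, ytr).predict(Xte)) for name, m in models.items()}
for name, (rmse_v, rrse, mae, r) in results.items():
    print(f"{name:7s} RMSE={rmse_v:.3f} RRSE={rrse:.1%} MAE={mae:.3f} R={r:.2f}")
```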
Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando
Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
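The batch supervised variant of this framework can be sketched on toy data; the measurement matrix, attack magnitudes, and the RBF-SVM choice below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_meas, n_state = 20, 8
H = rng.normal(size=(n_meas, n_state))  # hypothetical measurement matrix z = Hx + noise

def measurement(attacked):
    z = H @ rng.normal(0, 0.5, n_state) + 0.1 * rng.normal(size=n_meas)
    if attacked:  # sparse attack vector: a few meters are biased
        idx = rng.choice(n_meas, size=3, replace=False)
        z[idx] += rng.normal(5.0, 0.5, size=3)
    return z

# Label measurements as secure (0) or attacked (1) and learn the decision rule.
X = np.array([measurement(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])
clf = SVC(kernel="rbf").fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print("detection accuracy:", acc)
```

Unobservable attacks (those lying in the column space of H) are exactly the case this toy cannot show, which is why the paper analyzes the geometric structure of attack vectors separately.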
We study the problem of fitting probabilistic graphical models to the given data when the structure is not known. More specifically, we focus on learning unknown structure in conditional random fields, especially learning both the structure and parameters of a conditional random field model simultaneously. To do this, we first formulate the learning problem as a convex minimization problem by adding an l_2-regularization to the node parameters and a group l_1-regularization to the edge parameters, and then a gradient-based projection method is proposed to solve it which combines an adaptive stepsize selection strategy with a nonmonotone line search. Extensive simulation experiments are presented to show the performance of our approach in solving unknown structure learning problems.
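The core building block of such a scheme is the proximal operator of the group l_1 penalty (group soft-thresholding), which zeroes out whole edge-parameter groups at once; the vector and grouping below are made up for illustration:

```python
import numpy as np

def prox_group_l1(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 (group soft-thresholding).
    Each group is scaled toward zero and set exactly to zero when its norm <= lam."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * v[g]
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])   # two parameter groups of size 2
groups = [[0, 1], [2, 3]]
shrunk = prox_group_l1(v, groups, lam=1.0)
print(shrunk)  # first group shrunk (norm 5 -> factor 0.8), second group zeroed
```

In a gradient-projection structure learner, this operator is applied to the edge parameters after each gradient step, so edges whose whole group shrinks to zero are pruned from the structure.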
Romero García, Cristian
In a world in which accessible information grows exponentially, the selection of the appropriate information turns out to be an extremely relevant problem. In this context, the idea of Machine Learning (ML), a subfield of Artificial Intelligence, emerged to face problems in data mining, pattern recognition, automatic prediction, among others. Quantum Machine Learning is an interdisciplinary research area combining quantum mechanics with methods of ML, in which quantum properties allow fo...
Sala, Marzio; Hu, Jonathan Joseph (Sandia National Laboratories, Livermore, CA); Tuminaro, Raymond Stephen (Sandia National Laboratories, Livermore, CA)
ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the Users Guide [SHT04], and to the ML web site, http://software.sandia.gov/ml.
Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong
Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
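A simplified sketch of machine-learning species assignment (not the DV-RBF/FJ-RBF methods themselves): overlapping dinucleotide frequencies as alignment-free features, fed to an off-the-shelf RBF-kernel classifier on two synthetic "species":

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

rng = np.random.default_rng(3)
BASES = list("ACGT")
KMERS = {"".join(p): i for i, p in enumerate(product(BASES, repeat=2))}

def kmer_freq(seq):
    """Overlapping dinucleotide frequencies: a fixed-length, alignment-free feature vector."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 1):
        v[KMERS[seq[i:i + 2]]] += 1
    return v / v.sum()

def mutate(seq, rate):
    return "".join(rng.choice(BASES) if rng.random() < rate else c for c in seq)

# Two hypothetical reference barcodes; queries are lightly mutated copies of each.
ref_a = "".join(rng.choice(BASES) for _ in range(300))
ref_b = "".join(rng.choice(BASES) for _ in range(300))
X = np.array([kmer_freq(mutate(r, 0.02)) for r in [ref_a, ref_b] for _ in range(50)])
y = np.repeat([0, 1], 50)  # species labels

clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
print("species assignment accuracy:", acc)
```

Because the features never require aligning the sequences, the same pipeline applies to hard-to-align non-coding ITS barcodes, which is the problem the paper's methods target.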
Leif, Robert C.
Flow Cytometry Standard (FCS) and the Digital Imaging and Communications in Medicine (DICOM) standard are based on extensive, superb domain knowledge. However, they are isolated systems, do not take advantage of data structures, require special programs to read and write the data, and lack the capability to interoperate or work with other standards; FCS also lacks many of the datatypes necessary for clinical laboratory data. The large overlap between imaging and flow cytometry provides strong evidence that both modalities should be covered by the same standard. Method: The XML Schema Definition Language, XSD 1.1, was used to translate FCS and/or DICOM objects. A MIFlowCyt file was tested with published values. Results: Previously, a significant part of an XML standard based upon a combination of FCS and DICOM has been implemented and validated with MIFlowCyt data. Strongly typed translations of FCS keywords have been constructed in XML. These keywords contain links to their DICOM and FCS equivalents.
Coya, Liliam de Barbosa; Perez-Coffie, Jorge
"Mastery Learning" was compared with the "conventional" method of teaching reading skills to Puerto Rican children with specific learning disabilities. The "Mastery Learning" group showed significant gains in the cognitive and affective domains. Results suggested Mastery Learning is a more effective method of teaching…
Lu, Jiamei; Li, Daqi; Stevens, Carla; Ye, Renmin
Using PISA 2009, an international education database, this study compares gifted and talented (GT) students in three groups with normal (non-GT) students by examining student characteristics, reading, schooling, learning methods, and use of strategies for understanding and memorizing. Results indicate that the GT and non-GT gender distributions…
This study examines an alternative method of teaching and learning the concept of diffusion. An improvised U-shaped glass tube called an ionic mobility tube was used to observe and measure the rate of movement of divalent metal ions in an aqueous medium in the absence of an electric current. The study revealed that the ...
Abrahamsen, Trine Julie
Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...
Oxford, Rebecca; Crookall, David
Surveys research on formal and informal second-language learning strategies, covering the effectiveness of research methods involving making lists, interviews and thinking aloud, note-taking, diaries, surveys, and training. Suggestions for future and improved research are presented. (131 references) (CB)
Ivanov, V.V.; Purehvdorzh, B.; Puzynin, I.V.
First- and second-order learning methods for feed-forward multilayer neural networks are studied. Newton-type and quasi-Newton algorithms are considered and compared with the commonly used back-propagation algorithm. It is shown that, although second-order algorithms require enhanced computer facilities, they provide better convergence and simplicity in usage. 13 refs., 2 figs., 2 tabs.
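The comparison can be sketched on a toy problem; the 2-3-1 XOR network, scipy's BFGS (a quasi-Newton method), and the plain gradient-descent baseline below are illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([0, 1, 1, 0], float)  # XOR targets

def unpack(w):  # 2-3-1 network: 6 + 3 + 3 + 1 = 13 parameters
    return w[:6].reshape(2, 3), w[6:9], w[9:12], w[12]

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

loss = lambda w: np.mean((forward(w) - t) ** 2)

rng = np.random.default_rng(4)
w0 = rng.normal(0, 0.5, 13)

# Quasi-Newton (second-order) training: BFGS builds a curvature approximation.
res = minimize(loss, w0, method="BFGS")
print("BFGS loss:", res.fun, "after", res.nit, "iterations")

# First-order baseline: plain gradient descent with a central-difference gradient.
def num_grad(w, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w = w0.copy()
for _ in range(res.nit):  # same iteration budget as BFGS used
    w -= 0.5 * num_grad(w)
print("GD loss:  ", loss(w))
```

With the same iteration budget, the quasi-Newton run typically reaches a much lower loss, mirroring the paper's conclusion that second-order methods buy convergence at the cost of extra computation per step.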
Kurinin, Ivan Nikolaevich
The article describes a method of interactive learning based on educational integrating projects. Some examples of content of such projects for the disciplines related to the study of information and Internet technologies and their application in management are presented.
Jönsson, Lise Høgh
…and people with learning disabilities worked together to develop five new visual and digital methods for interviewing in special education, thereby not only enhancing the students' competences, knowledge and proficiency in innovation and research, but also proposing a new teaching paradigm for university...
Sanan, Majed; Rammal, Mahmoud; Zreik, Khaldoun
Purpose: Recently, classification of Arabic documents is a real problem for juridical centers. In this case, some of the Lebanese official journal documents are classified, and the center has to classify new documents based on these documents. This paper aims to study and explain the useful application of supervised learning method on Arabic texts…
Audio visual education that incorporates devices and materials which involve sight, sound, or both has become a sine qua non in recent times in the teaching and learning process. An automated physical model of mining methods aided with video instructions was designed and constructed by harnessing locally available ...
I documented my strategies for learning sound-symbol correspondences during a Khmer course. I used a mnemonic strategy that I call the keyimage method. In this method, a character evokes an image (the keyimage), which evokes the corresponding sound. For example, the keyimage for the character 2 could be a swan with its head tucked in. This evokes the sound "kaw" that a swan makes, which sounds similar to the Khmer sound corresponding to 2. The method has some similarities to the keyword method. Considering the results of keyword studies, I hypothesize that the keyimage method is more effective than rote learning and that peer-generated keyimages are more effective than researcher- or teacher-generated keyimages, which are more effective than learner-generated ones. In Dr. Andrew Cohen's plenary presentation at the Hawaii TESOL 2007 conference, he mentioned that more case studies are needed on learning strategies (LSs). One reason to study LSs is that what learners do with input to produce output is unclear, and knowing what strategies learners use may help us understand that process (Dörnyei, 2005, p. 170). Hopefully, we can use that knowledge to improve language learning, perhaps by teaching learners to use the strategies that we find. With that in mind, I have examined the LSs that I used in studying Khmer as a foreign language, focusing on learning the syllabic alphabet.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
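The flavor of optimizing binary codes directly, rather than relaxing them to continuous values, can be sketched with a much simpler alternation than DPLM; the sign step below is a closed-form discrete update under an illustrative quantization objective:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 16))  # toy feature matrix
r = 8                           # code length in bits

# Alternate a continuous step (fit projection W by least squares) with a
# discrete step (B = sign(XW), the exact minimizer of ||B - XW||_F^2 over {-1,1}).
W = rng.normal(size=(16, r))
for _ in range(20):
    B = np.sign(X @ W)
    B[B == 0] = 1               # break ties away from zero
    W, *_ = np.linalg.lstsq(X, B, rcond=None)

obj = np.linalg.norm(B - X @ W) ** 2
print("codes:", B.shape, "quantization loss:", round(obj, 2))
```

Keeping B discrete at every iteration avoids the accumulated quantization error of relax-then-round schemes, which is the motivation the abstract gives for DPLM's analytical discrete updates.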
Minowa, Hirotsugu; Gofuku, Akio
Studies of diagnostic systems that use machine learning to reduce plant incidents are advancing, because an accident causes large human, economic and social losses. A known problem is that the classification performance and the generalization performance of a machine diagnostic system are mutually exclusive. However, a multi-agent diagnostic system makes it possible to use diagnostic machines specialized for either performance. We propose a method to select optimized variables to improve classification performance. The method can also be applied to supervised learning machines other than Support Vector Machines. This paper reports our method and the results of an evaluation experiment in which the method was applied to 40% power output data of Monju. (author)
De Ferrari Luna
Background: Manual annotation of enzymatic functions cannot keep up with automatic genome sequencing. In this work we explore the capacity of InterPro sequence signatures to automatically predict enzymatic function. Results: We present EnzML, a multi-label classification method that can also efficiently account for proteins with multiple enzymatic functions: 50,000 in UniProt. EnzML was evaluated using a standard set of 300,747 proteins for which the manually curated Swiss-Prot and KEGG databases have agreeing Enzyme Commission (EC) annotations. EnzML achieved more than 98% subset accuracy (exact match of all correct Enzyme Commission classes of a protein) for the entire dataset and between 87 and 97% subset accuracy in reannotating eight entire proteomes: human, mouse, rat, mouse-ear cress, fruit fly, the S. pombe yeast, the E. coli bacterium and the M. jannaschii archaebacterium. To understand the role played by the dataset size, we compared the cross-evaluation results of smaller datasets, either constructed at random or from specific taxonomic domains such as archaea, bacteria, fungi, invertebrates, plants and vertebrates. The results were confirmed even when the redundancy in the dataset was reduced using UniRef100, UniRef90 or UniRef50 clusters. Conclusions: InterPro signatures are a compact and powerful attribute space for the prediction of enzymatic function. This representation makes multi-label machine learning feasible in reasonable time (30 minutes to train on 300,747 instances with 10,852 attributes and 2,201 class values) using the Mulan Binary Relevance Nearest Neighbours algorithm implementation (BR-kNN).
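A toy analogue of binary-relevance nearest neighbours on binary signature attributes, with subset accuracy as the exact-match metric (the data generator is invented for illustration; the real EnzML uses the Mulan BR-kNN implementation on InterPro signatures):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
n, d, c = 300, 40, 5
protos = rng.integers(0, 2, (4, d))        # 4 prototype binary "signatures"
proto_labels = rng.integers(0, 2, (4, c))  # each prototype's multi-label class set
idx = rng.integers(0, 4, n)
X = protos[idx].astype(float)
flip = rng.random((n, d)) < 0.05           # 5% attribute noise per sample
X = np.abs(X - flip)
Y = proto_labels[idx]                      # multi-label indicator targets

# kNN handles the multi-label indicator matrix directly (binary relevance per label).
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:250], Y[:250])
subset_acc = accuracy_score(Y[250:], knn.predict(X[250:]))  # exact match of all labels
print("subset accuracy:", subset_acc)
```

Note that `accuracy_score` on a multi-label indicator matrix is exactly the subset accuracy the abstract reports: a prediction counts only if every label of a protein is correct.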
Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A
MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.
Patel, Meenal J.; Khalaf, Alexander; Aizenstein, Howard J.
Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presen...
Distance learning refers to the use of technologies based on health care delivered at a distance and covers areas such as electronic health, tele-health (e-health), telematics, telemedicine, tele-education, etc. For the needs of e-health, telemedicine, tele-education and distance learning there are various technologies and communication systems, from standard telephone lines to systems for the transmission of digitalized signals via modem, optical fiber, satellite links, wireless technologies, etc. Tele-education represents health education at a distance, using Information and Communication Technologies (ICT), as well as continuous education of health system beneficiaries and the use of electronic libraries, databases or electronic data with knowledge bases. Distance learning (e-learning) as a part of tele-education has gained popularity in the past decade; however, its use is highly variable among medical schools and appears to be more common in basic medical science courses than in clinical education. Distance learning does not preclude traditional learning processes; frequently it is used in conjunction with in-person classroom or professional training procedures and practices. Tele-education has mostly been used in biomedical education as a blended learning method, which combines tele-education technology with traditional instructor-led training, where, for example, a lecture or demonstration is supplemented by an online tutorial. Distance learning is used for self-education, tests, services and examinations in medicine, i.e. in terms of self-education and individual examination services. The possibility of working in exercise mode with image files and questions is an attractive way of self-education. Automated tracking and reporting of learners' activities lessen the faculty's administrative burden. Moreover, e-learning can be designed to include outcomes assessment to determine whether learning has occurred. This review article evaluates the current
Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L
To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care
Towill, Denis R
The purpose of this article is to look at method study, as devised by the Gilbreths at the beginning of the twentieth century, which found early application in hospital quality assurance and surgical "best practice". It has since become a core activity in all modern methods, as applied to healthcare delivery improvement programmes. The article traces the origin of what is now currently and variously called "business process re-engineering", "business process improvement" and "lean healthcare" etc., by different management gurus back to the century-old pioneering work of Frank Gilbreth. The outcome is a consistent framework involving "width", "length" and "depth" dimensions within which healthcare delivery systems can be analysed, designed and successfully implemented to achieve better and more consistent performance. Healthcare method (saving time plus saving motion) study is best practised as co-joint action learning activity "owned" by all "players" involved in the re-engineering process. However, although process mapping is a key step forward, in itself it is no guarantee of effective re-engineering. It is not even the beginning of the end of the change challenge, although it should be the end of the beginning. What is needed is innovative exploitation of method study within a healthcare organisational learning culture accelerated via the Gilbreth Knowledge Flywheel. It is shown that effective healthcare delivery pipeline improvement is anchored into a team approach involving all "players" in the system especially physicians. A comprehensive process study, constructive dialogue, proper and highly professional re-engineering plus managed implementation are essential components. Experience suggests "learning" is thereby achieved via "natural groups" actively involved in healthcare processes. The article provides a proven method for exploiting Gilbreths' outputs and their many successors in enabling more productive evidence-based healthcare delivery as summarised
Lundsgaard, C.; Dufour, N.; Fallentin, E.
Objective: Methodological constraints weaken previous evidence on intra-articular viscosupplementation and physiological saline distention for osteoarthritis. We conducted a randomized, patient- and observer-blind trial to evaluate these interventions in patients with painful knee osteoarthritis. Methods: We centrally randomized 251 patients with knee osteoarthritis to four weekly intra-articular injections of sodium hyaluronate 2 mL (Hyalgan® 10.3 mg/mL) versus physiological saline 20 mL (distention) versus physiological saline 2 mL (placebo) and followed patients for 26 weeks. Outcome measures included the Knee Injury and Osteoarthritis Outcome Score (KOOS), Osteoarthritis Research Society International (OARSI) criteria, and global assessment of the patient's condition. Inclusion… Results: The mean age of the patients was 69.4 years; 55% were women. The effects of hyaluronate 2 mL, physiological saline 20 mL…
Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences, including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because they explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are obtained, which means that the data distribution may contain more information than the covariance matrix can capture. Thus, many new methods have recently been proposed for using the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables under certain assumptions, even in the presence of unobserved common causes. This feature is in contrast to other approaches. A simulated example is also provided.
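As a toy illustration of the idea behind LiNGAM-style methods (not the full algorithm), the sketch below regresses a pair of variables in both directions and scores each direction by a crude dependence measure between regressor and residual; with non-Gaussian noise, only the true causal direction leaves an independent residual. The squared-correlation dependence score and all variable names are illustrative choices, not from the paper.

```python
import numpy as np

def dependence(z, r):
    """Crude dependence score: correlation between squared values.
    Near zero when z and r are independent. For jointly Gaussian
    uncorrelated variables it is also zero -- which is exactly why
    LiNGAM-style methods need non-Gaussian noise to pick a direction."""
    return abs(np.corrcoef(z ** 2, r ** 2)[0, 1])

def direction_scores(x, y):
    """Regress each way (OLS, zero-mean data); the causal direction
    leaves a residual that is independent of the regressor."""
    b_yx = np.dot(x, y) / np.dot(x, x)   # slope of y ~ x
    b_xy = np.dot(y, x) / np.dot(y, y)   # slope of x ~ y
    return dependence(x, y - b_yx * x), dependence(y, x - b_xy * y)

rng = np.random.default_rng(0)
n = 20000
x = rng.uniform(-1, 1, n)                # non-Gaussian cause
y = x + rng.uniform(-1, 1, n)            # linear effect + non-Gaussian noise

score_xy, score_yx = direction_scores(x, y)
print(score_xy, score_yx)  # the true direction x -> y has the smaller score
```

With Gaussian noise both scores would vanish and the direction would be unidentifiable, which is the covariance-only limitation the abstract describes.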
Akhmetov, Dauren F.; Kotaki, Minoru
In this paper, a so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data-processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time-series model reconstruction and analysis in both linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring wireless communication quality, namely for a Fixed Wireless Access (FWA) system. The procedure was shown to need only low memory and computation resources, especially in the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.
Deslauriers, Louis; Wieman, Carl
We measured mastery and retention of conceptual understanding of quantum mechanics in a modern physics course. This was studied for two equivalent cohorts of students taught with different pedagogical approaches using the Quantum Mechanics Conceptual Survey. We measured the impact of pedagogical approach both on the original conceptual learning and on long-term retention. The cohort of students who had a very highly rated traditional lecturer scored 19% lower than the equivalent cohort that was taught using interactive engagement methods. However, the amount of retention was very high for both cohorts, showing only a few percent decrease in scores when retested 6 and 18 months after completion of the course and with no exposure to the material in the interim period. This high level of retention is in striking contrast to the retention measured for more factual learning from university courses and argues for the value of emphasizing conceptual learning.
Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li
High-accuracy recognition of handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN), a deep learning algorithm that has proved highly successful at training multi-layer network structures, is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi
In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.
Samsudin; Nugraha, Bayu
This study aimed to determine the difference between the playing method and the exploratory learning method with respect to learning outcomes for throwing a ball. In addition, this study also aimed to determine the effect of nutritional status on these two learning methods. This research was conducted at SDN Cipinang Besar Selatan 16 Pagi, East…
Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank
The inner structure of a material is called its microstructure. It stores the genesis of the material and determines all of its physical and chemical properties. While microstructural characterization is widespread and well understood, microstructural classification is mostly done manually by human experts, which introduces uncertainty due to subjectivity. Since a microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, deep learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a deep learning method for microstructural classification, exemplified on certain microstructural constituents of low-carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art accuracy of 48.89%. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective approach to the difficult task of steel quality assessment.
da Costa Tavares, Ofelia Cizela; Suyoto; Pranowo
In the modern world, decision support systems (DSS) are very useful for solving problems; this study discusses the learning process of savings and loan cooperatives in Timor Leste. The observation found that the people of Timor Leste are still in the process of learning to use a DSS for good savings and loan cooperative practice. Based on existing research on credit cooperatives in the Timor Leste community, a mobile application will be built to help the cooperative learning process in East Timorese society. The methods used for decision making are AHP (Analytical Hierarchy Process) and SAW (Simple Additive Weighting), which score each criterion and its weight. The result of this research is a mobile learning cooperative decision support system using the SAW and AHP methods. Originality/value: the two methods, AHP and SAW, are combined in mobile application development to support the decision processes of a savings and credit cooperative in Timor Leste.
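The SAW step described above reduces to a normalize-and-weight computation. The sketch below is a minimal Python version, assuming the common benefit/cost normalization (benefit: x/max, cost: min/x); the applicants, criteria, weights, and values are hypothetical, not from the study.

```python
def saw_rank(matrix, weights, is_benefit):
    """Simple Additive Weighting: normalize each criterion column
    (benefit: x/max, cost: min/x), then take the weighted sum per
    alternative. Higher score = better alternative."""
    n_alt, n_crit = len(matrix), len(weights)
    cols = list(zip(*matrix))            # transpose: one tuple per criterion
    norm = []
    for j in range(n_crit):
        col = cols[j]
        if is_benefit[j]:
            m = max(col)
            norm.append([x / m for x in col])
        else:
            m = min(col)
            norm.append([m / x for x in col])
    return [sum(weights[j] * norm[j][i] for j in range(n_crit))
            for i in range(n_alt)]

# Hypothetical cooperative-loan example: two applicants scored on
# income (benefit criterion) and existing debt (cost criterion),
# with weights 0.6 and 0.4.
scores = saw_rank([[3, 4], [6, 2]], [0.6, 0.4], [True, False])
print(scores)  # [0.5, 1.0] -> the second applicant ranks first
```

In the paper's system, AHP would typically supply the criterion weights that SAW then consumes.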
Giese, Nanna Henriette; Jensen, Hans Peter; Jørgensen, Jørgen Helms
Seven barley lines or varieties, each with a different gene at the Ml-a locus for resistance to Erysiphe graminis, were intercrossed. Progeny testing of the F2s using two different fungal isolates per cross provided evidence that there are two or more loci in the Ml-a region. Apparent recombinants were also screened for recombination between the Hor1 and Hor2 loci, which are situated on either side of the Ml-a locus. The cross between Ricardo and Iso42R (Rupee) yielded one possible recombinant, with Ml-a3 and Ml-a(Rul) in the coupling phase; other recombinants had wild-type genes in the coupling phase. Iso20R, derived from Hordeum spontaneum 'H204' and carrying Ml-a6, had an additional gene in close coupling with Ml-a6, tentatively named Ml-aSp2 or Reglv, causing an intermediate infection type with isolate EmA30. It is suggested that Ml-a(Ar) in Emir and Ml-a(Rul), shown to differ from other Ml…
Olden, Julian D; Lawler, Joshua J; Poff, N LeRoy
Machine learning methods, a family of statistical techniques with origins in the field of artificial intelligence, are recognized as holding great promise for the advancement of understanding and prediction about ecological phenomena. These modeling techniques are flexible enough to handle complex problems with multiple interacting elements and typically outcompete traditional approaches (e.g., generalized linear models), making them ideal for modeling ecological systems. Despite their inherent advantages, a review of the literature reveals only a modest use of these approaches in ecology as compared to other disciplines. One potential explanation for this lack of interest is that machine learning techniques do not fall neatly into the class of statistical modeling approaches with which most ecologists are familiar. In this paper, we provide an introduction to three machine learning approaches that can be broadly used by ecologists: classification and regression trees, artificial neural networks, and evolutionary computation. For each approach, we provide a brief background to the methodology, give examples of its application in ecology, describe model development and implementation, discuss strengths and weaknesses, explore the availability of statistical software, and provide an illustrative example. Although the ecological application of machine learning approaches has increased, there remains considerable skepticism with respect to the role of these techniques in ecology. Our review encourages a greater understanding of machine learning approaches and promotes their future application and utilization, while also providing a basis from which ecologists can make informed decisions about whether to select or avoid these approaches in their future modeling endeavors.
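Of the three families introduced above, classification and regression trees are the simplest to sketch. The minimal scikit-learn example below uses a hypothetical one-variable presence/absence problem; the data, feature name, and threshold are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical ecological example: predict species presence (1) or
# absence (0) from a single environmental gradient.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = (X.ravel() >= 5).astype(int)   # species present above a threshold

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The fitted tree is directly readable -- one reason CART models
# appeal to ecologists used to interpretable statistics.
print(export_text(tree, feature_names=["temperature"]))
print(tree.predict([[2.0], [8.0]]))  # -> [0 1]
```

The printed rules show the single learned split, which is exactly the recursive-partitioning behavior the review describes.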
The study investigated the effect of using cooperative learning method on tenth grade students' learning achievement in biology and their attitude towards the subject in a Higher Secondary School in Bhutan. The study used a mixed method approach. The quantitative component included an experimental design where cooperative learning was the…
Gilkar, Suhail Ahmad; Lone, Shabiruddin; Lone, Riyaz Ahmad
Active learning has received considerable attention over the past several years, often presented or perceived as a radical change from traditional instruction methods. Current research on learning indicates that using a variety of teaching strategies in the classroom increases student participation and learning. The aim was to introduce an active learning methodology, the "jigsaw technique," into undergraduate medical education and to assess the student and faculty response to it. This study was carried out in the Department of Physiology of a medical college in North India. A topic was chosen and taught using one of the active learning methods (ALMs), the jigsaw technique. An instrument (questionnaire) was developed in English through an extensive review of the literature and was properly validated. The students were asked to give their responses on a five-point Likert scale; the feedback was kept anonymous. Faculty also provided feedback on a separate proforma. The data were collected, compiled, and analyzed. Of the 150 first-year MBBS students (batch 2014), 142 participated in this study, along with 14 faculty members of the Physiology Department. The majority of the students (>90%) welcomed the introduction of the ALM and strongly recommended the use of such methods for many more topics in future. All faculty members were of the opinion that many more topics should be taken up using ALMs. This study establishes that both medical students and faculty want a change from the traditional way of passive, teacher-centric learning to more active teaching-learning techniques.
Patient data in clinical research often includes large amounts of structured information, such as neuroimaging data, neuropsychological test results, and demographic variables. Given the various sources of information, we can develop computerized methods that can be a great help to clinicians to discover hidden patterns in the data. The computerized methods often employ data mining and machine learning algorithms, lending themselves as the computer-aided diagnosis (CAD) tool that assists clinicians in making diagnostic decisions. In this chapter, we review state-of-the-art methods used in dementia research, and briefly introduce some recently proposed algorithms subsequently.
McDowell, Jenny; Marriott, Jennifer L.; Calandra, Angela; Duncan, Gregory
Objective To design and evaluate a preregistration course utilizing asynchronous online learning as the primary distance education delivery method. Design Online course components including tutorials, quizzes, and moderated small-group asynchronous case-based discussions were implemented. Online delivery was supplemented with self-directed and face-to-face learning. Assessment Pharmacy graduates who had completed the course in 2004 and 2005 were surveyed. The majority felt they had benefited from all components of the course, and that online delivery provided benefits including increased peer support, shared learning, and immediate feedback on performance. A majority of the first cohort reported that the workload associated with asynchronous online discussions was too great. The course was altered in 2005 to reduce the online component. Participant satisfaction improved, and most felt that the balance of online to face-to-face delivery was appropriate. Conclusion A new pharmacy preregistration course was successfully implemented. Online teaching and learning was well accepted and appeared to deliver benefits over traditional distance education methods once workload issues were addressed. PMID:19777092
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.
PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), …
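The LASSO named above can be sketched on synthetic data. The example below uses scikit-learn to show its characteristic behavior, shrinking irrelevant coefficients exactly to zero, which is why it doubles as a variable-selection step in multivariate models like NTCP. The data, dimensions, and penalty strength are invented for illustration, not the study's cohort.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # 10 candidate predictors
# Only the first two predictors actually matter.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.standard_normal(200)

# The L1 penalty drives small coefficients exactly to zero,
# leaving a sparse, interpretable model.
model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))  # only the first two coefficients survive
```

Stepwise selection, the other method named in the abstract, pursues the same sparsity goal but by greedy inclusion/exclusion rather than a convex penalty.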
Côté, Richard G; Reisinger, Florian; Martens, Lennart
We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based random-access XML indexer and parser, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.
Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor
The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment, including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language, as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and extending the current model to accommodate detector misalignment and high-precision physics.
This study investigates learners' preferences for academic, collaborative and social interaction methods in an e-learning portal. Academic interaction consists of interaction between learners and online learning resources such as online reading, online explanation, online examination and online question answering. Collaborative interaction occurs when learners interact among themselves using online group discussion. Social interaction happens when learners and instructors participate in a session via online text or voice chatting. The study employed a survey methodology in which data were collected through a questionnaire administered to 933 distance education students from the Bachelor of Management, Bachelor of Science, Bachelor of Social Science and Bachelor of Art programmes. The survey responses were tabulated on a 5-point Likert scale and analyzed using the Statistical Package for Social Science (SPSS, Version 12.0) based on frequency and percentage distribution. The results suggest that, of the three types of interaction, most students prefer academic interaction for their learning support in the e-learning portal over collaborative and social interaction. They wish to interact with learning content rather than with people, and prefer to read and learn from the resources rather than sharing knowledge with peers and instructors via collaborative and social interaction.
Wang, Xuefei; Wang, Mingjiang; Zhang, Qiquan
In recent years, with the rapid development of deep learning, it has been widely used in the field of natural language processing. In this paper, I use deep learning to perform Chinese word segmentation on a large-scale corpus, eliminating the need to construct additional manual features. The first step is to process the corpus: word2vec is used to obtain a 50-dimensional embedding for each character. The embeddings are then fed to a bidirectional LSTM, a linear layer is added over the hidden outputs, and a CRF layer on top yields the model implemented in this paper. Experimental results show that the method achieves satisfactory accuracy on the 2014 People's Daily corpus.
Miller, Adam A.
Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few × 10^6 targets; photometric surveys, on the other hand, have detected > 10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g′ …), the scatter from the machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.
Koponen, Jonna; Pyörälä, Eeva; Isotalus, Pekka
Despite numerous studies exploring medical students' attitudes to communication skills learning (CSL), there are apparently no studies comparing different experiential learning methods and their influence on students' attitudes. We compared medical students' attitudes to learning communication skills before and after a communication course in the data as a whole, by gender and when divided into three groups using different methods. Second-year medical students (n = 129) were randomly assigned to three groups. In group A (n = 42) the theatre in education method, in group B (n = 44) simulated patients and in group C (n = 43) role-play were used. The data were gathered before and after the course using Communication Skills Attitude Scale. Students' positive attitudes to learning communication skills (PAS; positive attitude scale) increased significantly and their negative attitudes (NAS; negative attitude scale) decreased significantly between the beginning and end of the course. Female students had more positive attitudes than the male students. There were no significant differences in the three groups in the mean scores for PAS or NAS measured before or after the course. The use of experiential methods and integrating communication skills training with visits to health centres may help medical students to appreciate the importance of CSL.
Mu, Jingyi; Wu, Fang; Zhang, Aihua
In the era of big data, many urgent issues in all walks of life can be solved with big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in architecture are relatively few. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. Based on the predictions, the government and developers can make decisions about whether to develop...
Kůrková, Věra; Sanguineti, M.
Vol. 21, No. 3 (2005), pp. 350-367. ISSN 0885-064X. R&D Projects: GA AV ČR 1ET100300419. Institutional research plan: CEZ:AV0Z10300504. Keywords: supervised learning; generalization; model complexity; kernel methods; minimization of regularized empirical errors; upper bounds on rates of approximate optimization. Subject RIV: BA - General Mathematics. Impact factor: 1.186, year: 2005
New qualitative research methods continue to emerge in response to factors such as renewed interest in mixed methods, better understanding of the importance of a researcher's philosophical stance, and the increased use of technology in data collection and analysis, to name a few. As a result, those facilitating research methods courses must revisit content and instructional strategies in order to prepare well-informed researchers. Approaches range from a paradigm emphasis to a pragmatic one. This descriptive case study of a doctoral seminar for novice qualitative researchers describes the intricacies of the syllabus of a pragmatic approach in a constructivist/social constructionist learning environment. The purpose was to document the delivery and faculty/student interactions and reactions. Noteworthy were the contradictions and frustrations in the delivery as well as in student experiences. In the end, student input led to seminal learning experiences. The confirmation of the effectiveness of a constructivist/social constructionist learning environment is applicable to higher education pedagogy in general.
The classification and recognition of underwater acoustic signals have long been important research topics in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel-frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, and an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent; its generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships obtained a 93.04% recognition rate; compared with traditional Mel-frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved.
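The ELM classification stage described above has a particularly simple structure: a random, untrained hidden layer followed by a closed-form least-squares readout. The NumPy sketch below illustrates it on synthetic two-class data standing in for CNN feature vectors; the dimensions, class layout, and hidden size are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes standing in for CNN feature vectors.
X0 = rng.normal(-2.0, 1.0, (100, 2))
X1 = rng.normal(+2.0, 1.0, (100, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 100)

# ELM step 1: a random hidden layer that is never trained.
W = rng.standard_normal((2, 50))
b = rng.standard_normal(50)
H = np.tanh(X @ W + b)

# ELM step 2: solve the readout weights in closed form
# (least squares against one-hot targets) -- no gradient descent.
T = np.eye(2)[y]
beta, *_ = np.linalg.lstsq(H, T, rcond=None)
pred = (H @ beta).argmax(axis=1)
print((pred == y).mean())  # training accuracy close to 1.0
```

The absence of iterative training is what makes the ELM readout fast and, as the abstract argues, a plausible replacement for the CNN's gradient-trained fully connected layer.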
Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D
Key lifestyle-environ risk factors are operative for depression, but it is unclear how these risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: unsupervised self-organised mapping (SOM) to create GLUMM clusters and a supervised boosted regression algorithm to describe the clusters. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Study (2009-2010). Multivariate logistic regression validated the clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions. These solutions contained one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated the clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Members were more likely to: have problems sleeping; eat unhealthily; have spent ≤2 years in their home; live in an old home; perceive themselves as underweight; be exposed to work fumes; have experienced sex at ≤14 years; and not perform moderate recreational activities. A positive relationship between GLUMM5-1 (OR: 7.50, P…) and depression was found, with significant interactions for those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from multitudinous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data, uncovering a better understanding of the relationships between complex mental health factors.
Filatov, D. V.; Ignatev, K. V.; Deviatkin, A. V.; Serykh, E. V.
This paper focuses on solving a relevant and pressing safety issue on intercity roads. Two approaches were considered for solving the problem of traffic sign recognition; both involve neural networks analyzing images obtained from a camera in real time. The first approach is based on sequential image processing. At the initial stage, with the help of color filters and morphological operations (dilation and erosion), the area containing the traffic sign is located in the image; then the selected and scaled fragment of the image is analyzed using a feedforward neural network to determine the meaning of the found traffic sign. Learning of the neural network in this approach is carried out using a backpropagation method. The second approach involves convolutional neural networks at both stages, i.e. when searching for and selecting the area of the image containing the traffic sign, and when determining its meaning. Learning of the neural network in the second approach is carried out using the intersection-over-union function and a loss function. For the neural networks to learn and the proposed algorithms to be tested, a series of videos from a dash cam were used that were shot under various weather and illumination conditions. As a result, the proposed approaches to traffic sign recognition were analyzed and compared by key indicators such as recognition rate percentage and the complexity of the neural networks' learning process.
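The intersection-over-union criterion used in the second approach can be sketched as follows; axis-aligned boxes in (x1, y1, x2, y2) corner format are assumed, since the paper does not specify a box format.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap extents along each axis (clamped at zero when disjoint).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, two 2x2 boxes overlapping in a 1x1 cell
print(iou((0, 0, 1, 1), (2, 2, 3, 3)))  # 0.0, disjoint boxes
```

During training of a detector, one minus this score (or a smooth variant) is what typically serves as the localization term of the loss the abstract mentions.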
Ponte, Pedro; Melko, Roger G.
Machine learning is capable of discriminating phases of matter, and finding associated phase transitions, directly from large data sets of raw state configurations. In the context of condensed matter physics, most progress in the field of supervised learning has come from employing neural networks as classifiers. Although very powerful, such algorithms suffer from a lack of interpretability, which is usually desired in scientific applications in order to associate learned features with physical phenomena. In this paper, we explore support vector machines (SVMs), which are a class of supervised kernel methods that provide interpretable decision functions. We find that SVMs can learn the mathematical form of physical discriminators, such as order parameters and Hamiltonian constraints, for a set of two-dimensional spin models: the ferromagnetic Ising model, a conserved-order-parameter Ising model, and the Ising gauge theory. The ability of SVMs to provide interpretable classification highlights their potential for automating feature detection in both synthetic and experimental data sets for condensed matter and other many-body systems.
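The idea of reading a physical discriminator off a linear SVM can be sketched for the ferromagnetic case: label random spin configurations by the sign of their magnetization, fit a linear SVM, and check that the learned decision function tracks the magnetization itself. The lattice size, sample count, and use of scikit-learn's LinearSVC are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Random +/-1 spin configurations on 16 sites; label = sign of magnetization.
X = rng.choice([-1.0, 1.0], size=(500, 16))
m = X.mean(axis=1)
X, m = X[m != 0], m[m != 0]          # drop exactly balanced configurations
y = (m > 0).astype(int)

svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# If the SVM has learned the order parameter, its decision function
# is (up to scale and offset) the magnetization itself.
corr = np.corrcoef(svm.decision_function(X), m)[0, 1]
print(svm.score(X, y), round(corr, 3))
```

Near-uniform learned weights, i.e. a decision function almost perfectly correlated with the mean spin, are the kind of interpretable output the abstract contrasts with black-box neural classifiers.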
Kavakiotis, Ioannis; Tsave, Olga; Salifoglou, Athanasios; Maglaveras, Nicos; Vlahavas, Ioannis; Chouvarda, Ioanna
The remarkable advances in biotechnology and health sciences have led to a significant production of data, such as high-throughput genetic data and clinical information generated from large Electronic Health Records (EHRs). To this end, application of machine learning and data mining methods in biosciences is presently, more than ever before, vital and indispensable in efforts to transform intelligently all available information into valuable knowledge. Diabetes mellitus (DM) is defined as a group of metabolic disorders exerting significant pressure on human health worldwide. Extensive research in all aspects of diabetes (diagnosis, etiopathophysiology, therapy, etc.) has led to the generation of huge amounts of data. The aim of the present study is to conduct a systematic review of the applications of machine learning, data mining techniques and tools in the field of diabetes research with respect to a) Prediction and Diagnosis, b) Diabetic Complications, c) Genetic Background and Environment, and d) Health Care and Management, with the first category appearing to be the most popular. A wide range of machine learning algorithms were employed. In general, 85% of those used were characterized by supervised learning approaches and 15% by unsupervised ones, more specifically, association rules. Support vector machines (SVM) emerge as the most successful and widely used algorithm. Concerning the type of data, clinical datasets were mainly used. The applications in the selected articles demonstrate the usefulness of extracting valuable knowledge, leading to new hypotheses targeting deeper understanding and further investigation in DM.
Tax, N.; Bockting, S.; Hiemstra, D.
Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered
Trivette, Carol M.; Dunst, Carl J.; Hamby, Deborah W.; O'Herin, Chainey E.
The effectiveness of four adult learning methods (accelerated learning, coaching, guided design, and just-in-time training) constituted the focus of this research synthesis. Findings reported in "How People Learn" (Bransford et al., 2000) were used to operationally define six adult learning method characteristics, and to code and analyze…
Ryberg, Thomas; Buus, Lillian; Nyvang, Tom
In this chapter, a specific learning design method is introduced and explained, namely the Collaborative E-learning Design method (CoED), which has been developed through various projects in “e-Learning Lab – Centre for User Driven Innovation, Learning and Design” (Nyvang & Georgsen, 2007). We br...
Mullins, Mary H.
Active learning approaches have shown to improve student learning outcomes and improve the experience of students in the classroom. This article compares a Process Oriented Guided Inquiry Learning style approach to a more traditional teaching method in an undergraduate research methods course. Moving from a more traditional learning environment to…
Kiselyov, Oleg; Garrigue, Jacques
This volume collects the extended versions of selected papers originally presented at the two ACM SIGPLAN workshops: ML Family Workshop 2014 and OCaml 2014. Both were affiliated with ICFP 2014 and took place on two consecutive days, on September 4 and 5, 2014 in Gothenburg, Sweden. The ML Family workshop aims to recognize the entire extended family of ML and ML-like languages: languages that are Higher-order, Typed, Inferred, and Strict. It provides the forum to discuss common issues, both pr...
Latisma D, L.; Kurniawan, W.; Seprima, S.; Nirbayani, E. S.; Ellizar, E.; Hardeli, H.
The purpose of this study was to determine which methods work well with Chemistry Triangle-oriented learning media. This quasi-experimental research involved first-grade senior high school students in six schools: two SMAN each in Solok city and Pasaman, and two SMKN in Pariaman. Sampling was done by cluster random sampling. Data were collected by test and analyzed by one-way ANOVA and the Kruskal-Wallis test. The results showed that the high school students in Solok taught by the cooperative method learned better than those taught by the conventional and individual methods, both for students with high initial ability and for those with low ability. Research in the SMK showed that overall, learning outcomes under the conventional method were better than under the cooperative and individual methods. For students with high initial ability, outcomes under the individual method were better than under the cooperative method; for students with low initial ability, there was no difference among the cooperative, individual and conventional methods. Learning in the high schools in Pasaman showed no significant difference in outcomes across the three methods.
Friedenthal, Sanford; Steiner, Rick
This book is the bestselling, authoritative guide to SysML for systems and software engineers, providing a comprehensive and practical resource for modeling systems with SysML. Fully updated to cover the newly released version 1.3, it includes a full description of the modeling language along with a quick reference guide, and shows how an organization or project can transition to model-based systems engineering using SysML, with considerations for processes, methods, tools, and training. Numerous examples help readers understand how SysML can be used in practice, while reference material facilitates studying for the OMG Certified Systems Modeling Professional (OCSMP) Certification Program, designed to test candidates' knowledge of SysML and their ability to use models to represent real-world systems.
Kang, Do Young [College of Medicine, Donga Univ., Busan (Korea, Republic of)]
Cardiac neurotransmission imaging allows in vivo assessment of presynaptic reuptake, neurotransmitter storage and postsynaptic receptors. Among the various neurotransmitter tracers, I-123 MIBG is the most available and relatively well established. Metaiodobenzylguanidine (MIBG) is an analogue of the false neurotransmitter guanethidine. It is taken up into adrenergic neurons by the uptake-1 mechanism, the same as norepinephrine. Tagged with I-123, it can be used to image sympathetic function in various organs, including the heart, with planar or SPECT techniques. I-123 MIBG imaging has the unique advantage of evaluating myocardial neuronal activity where the heart has no significant structural abnormality, or even no functional derangement measurable with other conventional examinations. In patients with cardiomyopathy and heart failure, this imaging is the most sensitive technique for predicting prognosis and the response to beta-blocker or ACE-inhibitor treatment. In diabetic patients, it allows very early detection of autonomic neuropathy. In patients with dangerous arrhythmias such as ventricular tachycardia or fibrillation, MIBG imaging may be the only abnormal result among various examinations. In patients with ischemic heart disease, sympathetic derangement may be used for risk stratification. In heart-transplant patients, sympathetic reinnervation is well evaluated. Adriamycin-induced cardiotoxicity is detected by sympathetic dysfunction earlier than by ventricular dysfunction. Neurodegenerative disorders such as Parkinson's disease or dementia with Lewy bodies also involve cardiac sympathetic dysfunction. Noninvasive assessment of cardiac sympathetic nerve activity with I-123 MIBG imaging may improve understanding of the pathophysiology of cardiac disease and contribute to predicting survival and therapy efficacy.
Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran
To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.
Koevesarki, Peter; Nuncio Quiroz, Adriana Elizabeth; Brock, Ian C. [Physikalisches Institut, Universitaet Bonn, Bonn (Germany)]
High energy physics is a home for a variety of multivariate techniques, mainly due to the fundamentally probabilistic behaviour of nature. These methods generally require training based on some theory in order to discriminate a known signal from a background. Nevertheless, new physics can show itself in ways that no one previously thought about, and in these cases conventional methods give little or no help. A possible way to discriminate between known processes (like vector boson or top-quark production) or to look for new physics is to use unsupervised machine learning to extract the features of the data. A technique was developed, based on the combination of neural networks and the method of principal curves, to find a parametrisation of the non-linear correlations of the data. The feasibility of the method is shown on ATLAS data.
Vladimir S. Kublanov
The paper presents an accuracy analysis of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from degree II-III arterial hypertension. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with a radial basis kernel, decision trees, and the naive Bayes classifier. Moreover, different methods of feature extraction are analyzed: statistical, spectral, wavelet, and multifractal. In all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested search for an uncorrelated feature set achieved better results than a feature set based on principal components.
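Of the feature families compared in the study, the statistical ones are the simplest to state. As a sketch (the study's exact 53-feature set is not listed here), two standard time-domain HRV statistics, SDNN and RMSSD, can be computed from a series of RR intervals:

```python
import math

def hrv_features(rr_ms):
    """Two standard time-domain HRV statistics from RR intervals in ms:
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]   # successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

print(hrv_features([800, 810, 790, 805, 795]))  # (≈7.07, ≈14.36)
```

Feature vectors of this kind, one per subject, are what the compared classifiers (discriminant analysis, k-NN, SVM, etc.) would consume.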
Gene microarray analysis and classification have proved an effective way toward accurate diagnosis of diseases and cancers. However, it has also been shown that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also those generated by two widely used ensemble learning methods, that is, Bagging and AdaBoost.
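RotBoost pairs Rotation Forest's feature rotations with AdaBoost's example reweighting. The AdaBoost half can be sketched in a few lines with decision stumps as weak learners; the PCA rotations applied to feature subsets before each learner — Rotation Forest's contribution — are omitted here for brevity:

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    """AdaBoost with depth-1 threshold stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # example weights
    model = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):               # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] < t, -1, 1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s, pred)
        err, j, t, s, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)            # upweight mistakes
        w /= w.sum()
        model.append((j, t, s, alpha))
    return model

def adaboost_predict(model, X):
    score = sum(a * s * np.where(X[:, j] < t, -1, 1) for j, t, s, a in model)
    return np.sign(score)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1, 1, -1)        # diagonal decision boundary
model = adaboost_stumps(X, y)
print((adaboost_predict(model, X) == y).mean())   # training accuracy
```

In RotBoost, each boosting iteration would additionally draw a rotated feature space before fitting the weak learner, which is what preserves ensemble diversity alongside accuracy.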
Shimizu, J.; Yamada, K.; Kishimoto, M.; Iwasaki, T.; Miyake, Y.
The contrast effects of three different contrast media preparations (iohexol 180 mgI/ml, iohexol 240 mgI/ml and iotrolan 240 mgI/ml) in conventional and CT myelography were compared. Three beagle dogs were used, and the study employed a cross-over method (total of 9) for each contrast medium. The CT myelography results showed that, 5 min after injection of the contrast media preparations, the contrast effect of iohexol (180 mgI/ml), which had low viscosity, was highest in cranial sites, and the contrast effect of high-viscosity iotrolan (240 mgI/ml) was highest in caudal sites. This shows that the diffusion of contrast media preparations in the subarachnoid space is influenced by viscosity. The results of conventional myelography also showed that this diffusion is influenced by viscosity. Since it is important to identify the location of spinal lesions in veterinary practice, a low-viscosity contrast medium preparation with a widespread contrast effect is considered suitable for myelography.
Mohammed Abdallh Otair
Attempting to deliver a monolithic mobile learning system is too inflexible, in view of the heterogeneous mixture of hardware and services available, the desirability of facilitating blended approaches to learning delivery, and the question of how to build learning materials that run on all platforms. This paper proposes a framework for a mobile learning system using an intelligent method (IP-MLI). A fuzzy matching method is used to find a suitable learning material design, providing the best match for each specific platform type and each learner. The main contribution of the proposed method is the use of a software layer to insulate learning materials from device-specific features. Consequently, many versions of learning materials can be designed to work on many platform types.
Malina, Mary A.; Nørreklit, Hanne; Selto, Frank H.
Purpose – The purpose of this paper is first, to discuss the theoretical assumptions, qualities, problems and myopia of the dominating quantitative and qualitative approaches; second, to describe the methodological lessons that the authors learned while conducting a series of longitudinal studies on the use and usefulness of a specialized balanced scorecard; and third, to encourage researchers to actually use multiple methods and sources of data to address the very many accounting phenomena that are not fully understood. Design/methodology/approach – This paper is an opinion piece based on the authors' experience conducting a series of longitudinal mixed method studies. Findings – The authors suggest that in many studies, using a mixed method approach provides the best opportunity for addressing research questions. Originality/value – This paper provides encouragement to those who may wish…
Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd
Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, like Alzheimer disease. The number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible. Instead, they can be predicted by computational methods. The 3D profile is a physicochemical method that has generated the most numerous dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was applied for training machine learning methods. A separate set of sequences from ZipperDB was used as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both obtained an area under the ROC curve of 0.96, accuracy 91%, true positive rate ca. 78%, and true negative rate 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset
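The reported figures (AUC 0.96, accuracy 91%, TPR 78%, TNR 95%) are standard binary-classification metrics. For reference, they can be computed from classifier scores and labels as follows, with the AUC obtained via the Mann-Whitney rank statistic:

```python
def binary_metrics(labels, scores, threshold=0.5):
    """AUC via the Mann-Whitney rank statistic, plus TPR and TNR at a threshold."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Fraction of positive/negative pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    tpr = sum(s >= threshold for s in pos) / len(pos)   # sensitivity
    tnr = sum(s < threshold for s in neg) / len(neg)    # specificity
    return auc, tpr, tnr

print(binary_metrics([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1]))  # (0.75, 0.5, 0.5)
```

Note that the AUC is threshold-free, whereas TPR and TNR depend on the operating point chosen.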
Sheikhaboumasoudi, Rouhollah; Bagheri, Maryam; Hosseini, Sayed Abbas; Ashouri, Elaheh; Elahi, Nasrin
The fundamentals of nursing course is a prerequisite to providing comprehensive nursing care. Despite the development of technology in nursing education, the effectiveness of using e-learning methods in the fundamentals of nursing course in the clinical skills laboratory is unclear for nursing students. The aim of this study was to compare the effect of blended learning (combining e-learning with traditional learning methods) with traditional learning alone on nursing students' scores. A two-group post-test experimental study was administered from February 2014 to February 2015. Two groups of nursing students taking the fundamentals of nursing course in Iran were compared. Sixty nursing students were selected for the control group (traditional learning methods only) and the experimental group (combining e-learning with traditional learning methods) over two consecutive semesters. Both groups participated in an Objective Structured Clinical Examination (OSCE) and were evaluated in the same way using a prepared checklist and a satisfaction questionnaire. Statistical analysis was conducted with SPSS software version 16. The findings reflect that the mean midterm (t = 2.00, p = 0.04) and final (t = 2.50, p = 0.01) scores of the intervention group were significantly higher than those of the control group. The satisfaction of male students in the intervention group was higher than that of female students (t = 2.60, p = 0.01). Based on these findings, this study suggests that combining traditional learning methods with e-learning methods, such as educational websites and interactive online resources, for fundamentals of nursing course instruction can be an effective supplement for improving nursing students' clinical skills.
Anderson, Lisa C; Krichbaum, Kathleen E
Physiology is a requisite course for many professional allied health programs and is a foundational science for learning pathophysiology, health assessment, and pharmacology. Given the demand for online learning in the health sciences, it is important to evaluate the efficacy of online and in-class teaching methods, especially as they are combined to form hybrid courses. The purpose of this study was to compare two hybrid physiology sections in which one section was offered mostly in-class (85% in-class), and the other section was offered mostly online (85% online). The two sections in two years (year 1 and year 2) were compared in terms of knowledge of physiology measured in exam scores and pretest-posttest improvement, and in measures of student satisfaction with teaching. In year 1, there were some differences on individual exam scores between the two sections, but no significant differences in mean exam scores or in pretest-posttest improvements. However, in terms of student satisfaction, the mostly in-class students in year 1 rated the instructor significantly higher than did the mostly online students. Comparisons between in-class and online students in the year 2 cohort yielded data showing that mean exam scores were not statistically different, but pre-post changes were significantly greater in the mostly online section; student satisfaction among mostly online students also improved significantly. Education researchers must investigate effective combinations of in-class and online methods for student learning outcomes, while maintaining the flexibility and convenience that online methods provide. Copyright © 2017 the American Physiological Society.
Aleksandr Vasilyevich Koshkarov
Ensuring food security is a major challenge in many countries. With a growing global population, the issue of improving the efficiency of agriculture has become most relevant. Farmers are looking for new ways to increase yields, and governments of different countries are developing new programs to support agriculture. This contributes to a more active implementation of digital technologies in agriculture, helping farmers to make better decisions, increase yields and take care of the environment. The central point is the collection and analysis of data. In agriculture, data can be collected from different sources and may contain useful patterns that identify potential problems or opportunities. Data should be analyzed using machine learning algorithms to extract useful insights. Such precision farming methods allow the farmer to monitor individual parts of the field, optimize the consumption of water and chemicals, and identify problems quickly. Purpose: to provide an overview of the machine learning algorithms used for data analysis in agriculture. Methodology: an overview of the relevant literature; a survey of farmers. Results: relevant machine learning algorithms were identified for data analysis in agriculture at various levels: soil analysis (soil assessment, soil classification, soil fertility prediction), weather forecasting (simulation of climate change, temperature and precipitation prediction), and analysis of vegetation (weed identification, vegetation classification, plant disease identification, crop forecasting). Practical implications: agriculture, crop production.
Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey
The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand of energy, as well as the construction materials used in buildings, are becoming increasingly problematic for the earth's sustainable future, and thus have led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. This method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimation and prediction obtained by the ELM model are compared with GP (genetic programming) and ANNs (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
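The ELM used above trains only the output layer: hidden weights are drawn at random and fixed, and output weights come from a single least-squares fit. A minimal numpy sketch on a toy regression task (not the paper's EnergyPlus data) looks like:

```python
import numpy as np

def elm_train(X, y, hidden=50, seed=0):
    """Extreme Learning Machine: the hidden layer is random and fixed;
    only the output weights beta are fit, by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear features
    beta = np.linalg.pinv(H) @ y           # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = x1 + 2*x2 from 200 samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + 2.0 * X[:, 1]
model = elm_train(X, y)
print(np.mean(np.abs(elm_predict(model, X) - y)))  # mean absolute error
```

The absence of iterative training is what makes ELM fast compared with backpropagation-trained ANNs or GP, at the cost of sensitivity to the random hidden layer.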
Nowadays there are different evaluation methods focused on assessing the usability of telematic methods. The assessment of third-generation web environments evaluates the effectiveness and usability of an application with regard to user needs. Wireless usability, and specifically usability on mobile phones, concentrates on validating features and tool management using conventional interactive environments. There is no specific, suitable criterion for evaluating the environments and m-learning platforms created, where restricted and sequential representation is a fundamental aspect to be considered. The present paper sets out the importance of conventional usability methods for verifying both the content employed in wireless formats and the possible interfaces, from the conception phases to the validation of platforms with such characteristics. The development of adapted usability inspection could be complemented with remote usability testing techniques, which are currently being carried out in the mobile devices field and which point to the need to apply common criteria in the validation of non-located learning scenarios.
There are many different methods that individuals use to learn languages like reading books or writing essays. Not all methods are equally successful for second language learners but nor do all successful learners of a second language show identical preferences for learning methods. Additionally, at the highest level of language learning various…
Magana, Alejandra J.; Vieira, Camilo; Boutin, Mireille
This paper studies electrical engineering learners' preferences for learning methods with various degrees of activity. Less active learning methods such as homework and peer reviews are investigated, as well as a newly introduced very active (constructive) learning method called "slectures," and some others. The results suggest that…
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin ("assignment tests"). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0-2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to "learn" and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin, given the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
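The likelihood estimators the study recommends assign an individual to the population whose allele frequencies make its multilocus genotype most probable, assuming Hardy-Weinberg proportions and independent loci. A minimal sketch (data structures and allele names are illustrative, not from the paper):

```python
import math

def assign_individual(genotype, pop_freqs):
    """Return the population whose allele frequencies give the genotype the
    highest log-likelihood (Hardy-Weinberg proportions, independent loci).
    genotype: one (allele, allele) pair per locus.
    pop_freqs: {population: [per-locus {allele: frequency} dicts]}."""
    best_pop, best_ll = None, -math.inf
    for pop, loci in pop_freqs.items():
        ll = 0.0
        for (a, b), freqs in zip(genotype, loci):
            pa = freqs.get(a, 1e-6)        # small floor for alleles unseen in a population
            pb = freqs.get(b, 1e-6)
            # Heterozygotes get the factor 2 from Hardy-Weinberg: P = 2*pa*pb.
            ll += math.log(pa * pb * (2 if a != b else 1))
        if ll > best_ll:
            best_pop, best_ll = pop, ll
    return best_pop

pops = {
    "north": [{"A": 0.9, "a": 0.1}, {"B": 0.8, "b": 0.2}],
    "south": [{"A": 0.2, "a": 0.8}, {"B": 0.3, "b": 0.7}],
}
print(assign_individual([("A", "A"), ("B", "b")], pops))  # north
```

Summing log-likelihoods over loci is what makes more loci and more alleles per locus improve assignment accuracy, consistent with the simulation parameters varied in the study.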
In the early days of the LHC the canonical problems of classification and regression were mostly addressed using simple cut-based techniques. Today, ML techniques (some already pioneered in pre-LHC or non-collider experiments) play a fundamental role in the toolbox of any experimentalist. The talk will introduce, through a representative collection of examples, the problems addressed with ML techniques at the LHC. The goal of the talk is to set the stage for a constructive discussion with non-HEP ML practitioners, focusing on the specificities of HEP applications.
Elvesæter, Brian; Carrez, Cyril; Mohagheghi, Parastoo; Berre, Arne-Jørgen; Johnsen, Svein G.; Solberg, Arnor
This chapter presents a model-driven service engineering (MDSE) methodology that uses OMG MDA specifications such as BMM, BPMN and SoaML to identify and specify services within a service-oriented architecture. The methodology takes advantage of business modelling practices and provides a guide to service modelling with SoaML. The presentation is case-driven and illuminated using the telecommunication example. The chapter focuses in particular on the use of the SoaML modelling language as a means for expressing service specifications that are aligned with business models and can be realized in different platform technologies.
Nicolás Fernández Losa
This paper describes a teaching experience using experimental field work as a practical learning method in an Organizational Behaviour course. With this teaching experience we aim to change both the practical training and its evaluation process in order to foster the development of students' transversal skills. To this end, a practice plan built around experimental field work, carried out in collaboration with a business organization and within a work team (as an organic unit of learning), is proposed as an alternative to the traditional method of practical teaching; it brings business reality into the classroom and actively promotes the use of transversal skills. The experience unfolds in three phases. Initially, the students, after forming a working group and defining a field work project, must secure the collaboration of a nearby business organization from which to obtain data on one or more functional areas of organizational behaviour. Subsequently, students carry out the field work through the scheduled visits and prepare a report establishing a diagnosis of the strategy followed by the company in those functional areas, in order to propose and justify alternative actions that improve existing ones. Finally, teachers assess the field work reports and their public presentations using evaluation rubrics, which aim to make the assessment criteria as objective and uniform as possible and serve to guide the students' learning process. The results of implementing this teaching experience, measured through a Likert questionnaire, are very satisfactory for students.
The paper presents the application of a hybrid method (blended learning, linking traditional education with on-line education) to teach selected problems of mathematical statistics. This includes teaching the application of mathematical statistics to evaluate laboratory experimental results. An on-line statistics course was developed to form an integral part of the module ‘methods of statistical evaluation of experimental results’. The course complies with the principles outlined in the Polish National Framework of Qualifications with respect to the scope of knowledge, skills and competencies that students should have acquired at course completion. The paper presents the structure of the course and the educational content provided through multimedia lessons made accessible on the Moodle platform. Following courses which used the traditional method of teaching and courses which used the hybrid method, students' test results were compared and discussed to evaluate the effectiveness of the hybrid method of teaching relative to the traditional method.
We want to discuss the methods of efficient study habits and how they can be used by students to help them improve learning physics. In particular, we deal with the most efficient techniques needed to help students improve their study skills. We focus on topics such as the skills of how to develop long term memory, how to improve concentration power, how to take class notes, how to prepare for and take exams, how to study scientific subjects such as physics. We argue that the students who conscientiously use the methods of efficient study habits achieve higher results than those students who do not; moreover, a student equipped with the proper study skills will spend much less time to learn a subject than a student who has no good study habits. The underlying issue here is not the quantity of time allocated to the study efforts by the students, but the efficiency and quality of actions so that the student can function at peak efficiency. These ideas were developed as part of Project IMPACTSEED (IMproving Physics And Chemistry Teaching in SEcondary Education), an outreach grant funded by the Alabama Commission on Higher Education. This project is motivated by a major pressing local need: A large number of high school physics teachers teach out of field.
Miller, Adam A.
Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few × 10^6 targets; photometric surveys, on the other hand, have detected > 10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approximately 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g′ ≤ 18 mag), with 4500 K ≤ Teff ≤ 7000 K, corresponding to those with the most reliable SSPP estimates, I find that the model predicts [Fe/H] values with a root-mean-squared error (RMSE) of approximately 0.27 dex. The RMSE from this machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.
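The abstract above describes a regression from photometric colors to [Fe/H], evaluated by RMSE on held-out stars. The following is a minimal, hedged sketch of that setup: synthetic colors stand in for SDSS data, and a simple k-nearest-neighbour regressor stands in for the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for SDSS photometry: two colors per star, with [Fe/H]
# depending on them nonlinearly plus scatter. The real SSPP training set and
# the paper's model are not reproduced here.
n = 2000
colors = rng.uniform(-0.5, 2.0, size=(n, 2))
feh = -2.5 + 1.2 * colors[:, 0] - 0.8 * colors[:, 1] ** 2 + rng.normal(0, 0.1, n)

# Hold out the last 500 stars for evaluation.
train, test = slice(0, 1500), slice(1500, None)

def knn_predict(x, X, y, k=15):
    """Average the [Fe/H] of the k nearest training stars in color space."""
    d = np.linalg.norm(X - x, axis=1)
    return y[np.argsort(d)[:k]].mean()

pred = np.array([knn_predict(c, colors[train], feh[train]) for c in colors[test]])
rmse = np.sqrt(np.mean((pred - feh[test]) ** 2))
print(f"RMSE = {rmse:.3f} dex")
```

The same split-train-evaluate pattern applies regardless of which regressor replaces the k-NN stand-in.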
When objects undergo large pose change, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
Photometric measurement is an important way to identify space debris, but present photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, using the statistical information of star images to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used in a least-squares fit to construct the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
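The train/test split of known stars plus a least-squares fit described above can be sketched in a few lines of numpy. This is an illustrative sketch only: the linear magnitude model, star counts and noise level below are assumptions, not the paper's actual calibration terms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration model: catalog magnitude m_cat is a linear function
# of the instrumental magnitude m_inst = -2.5*log10(counts) plus a zero point.
n_stars = 60
true_zp = 25.0
m_cat = rng.uniform(8.0, 14.0, n_stars)            # known star magnitudes
counts = 10 ** (-0.4 * (m_cat - true_zp))
counts *= 1 + rng.normal(0, 0.02, n_stars)         # 2% photometric noise
m_inst = -2.5 * np.log10(counts)

# Divide known stars into training and testing stars, as in the paper.
train, test = np.arange(40), np.arange(40, n_stars)

# Least-squares fit of the measurement model: m_cat ≈ a * m_inst + b
A = np.column_stack([m_inst[train], np.ones(len(train))])
(a, b), *_ = np.linalg.lstsq(A, m_cat[train], rcond=None)

# Measurement accuracy evaluated on the testing stars.
residuals = m_cat[test] - (a * m_inst[test] + b)
accuracy = residuals.std()
print(f"fit: a={a:.3f}, b={b:.2f}, accuracy = {accuracy:.3f} mag")
```

With the assumed 2% count noise, the recovered accuracy lands near the noise floor of a few hundredths of a magnitude.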
This paper considers the detection of spatial-domain least significant bit (LSB) matching steganography in gray images. Natural images hold some inherent properties, such as the histogram, dependence between neighboring pixels, and dependence among pixels that are not adjacent to each other. These properties are likely to be disturbed by LSB matching. Firstly, the histogram becomes smoother after LSB matching. Secondly, the two kinds of dependence are weakened by the message embedding. Accordingly, three features, based respectively on the image histogram, the neighborhood degree histogram and the run-length histogram, are extracted first. Then, a support vector machine is utilized to learn and discriminate the difference in features between cover and stego images. Experimental results prove that the proposed method possesses reliable detection ability and outperforms two previous state-of-the-art methods. Furthermore, conclusions are drawn by analyzing the individual performance of the three features and their fused feature.
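The paper's three histogram features and SVM classifier are not reproduced here, but the weakening of neighboring-pixel dependence that the abstract describes can be demonstrated with a small numpy sketch: a synthetic smooth "cover" image, full-rate LSB matching, and a single adjacency-correlation feature (all assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "natural" gray image: a random walk reshaped into rows, so
# horizontally adjacent pixels are strongly dependent (a stand-in for a cover).
walk = np.cumsum(rng.normal(0, 1, 256 * 256)).reshape(256, 256)
cover = np.clip((walk - walk.min()) / (walk.max() - walk.min()) * 255, 0, 255).round()

# LSB matching at full embedding rate: where a pixel's LSB differs from the
# message bit, add or subtract 1 at random.
bits = rng.integers(0, 2, cover.shape)
delta = rng.choice([-1.0, 1.0], cover.shape)
stego = np.where((cover % 2) != bits, np.clip(cover + delta, 0, 255), cover)

def neighbor_corr(img):
    """Correlation between horizontally adjacent pixels (a dependence feature)."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

c_cover, c_stego = neighbor_corr(cover), neighbor_corr(stego)
print(f"cover: {c_cover:.6f}, stego: {c_stego:.6f}")
```

The embedding noise is independent of the image, so the adjacency correlation of the stego image is slightly but systematically lower; a classifier such as an SVM learns to separate cover and stego images from shifts like this one.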
Wu, Wen; Mammone, Richard J.
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when ‘optimal’ label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.
We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.
ML is an established tool in HEP and there are many examples which demonstrate its importance for the kind of classification and regression problem we have in our field. However, there is also a big potential for future applications in yet untapped areas. I will summarise these opportunities and highlight recent, ongoing and planned studies of novel ML applications in HEP. Certain aspects of the problems we are faced with in HEP are quite unique and represent interesting benchmark problems for the ML community as a whole. Hence, efficient communication and close interaction between the ML and HEP community is expected to lead to promising cross-fertilisation. This talk attempts to serve as a starting point for such a prospective collaboration.
Tan, Meng; Hew, Khe Foon
In this study, we investigated how the use of meaningful gamification affects student learning, engagement, and affective outcomes in a short, 3-day blended learning research methods class using a combination of experimental and qualitative research methods. Twenty-two postgraduates were randomly split into two groups taught by the same…
Qi, Da; Krishna, Ritesh; Jones, Andrew R
The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
WeedML is a proposed standard for formulating models of weed demography, and perhaps complex models in general, that are both transparent and straightforward to re-use as building blocks for new models. The paper describes the design of and thinking behind WeedML, which relies on XML and object-oriented systems development. Proof-of-concept software is provided as open-source C++ code and executables that can be downloaded freely.
The high rate of dropout is a serious problem in E-learning programs, and it has received extensive concern from education administrators and researchers. Predicting the potential dropout students is a workable solution to prevent dropout. Based on an analysis of the related literature, this study selected students' personal characteristics and academic performance as input attributes. Prediction models were developed using Artificial Neural Network (ANN), Decision Tree (DT) and Bayesian Networks (BNs). A large sample of 62375 students was utilized in the model training and testing procedures. The results of each model were presented in a confusion matrix and analyzed by calculating the rates of accuracy, precision, recall, and F-measure. The results suggested that all three machine learning methods were effective for student dropout prediction, with DT presenting a better performance. Finally, some suggestions were made for future research.
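The four evaluation rates named above are all simple functions of the binary confusion matrix. A short sketch, with purely illustrative counts (not the study's 62375-student results):

```python
# Accuracy, precision, recall and F-measure from a binary confusion matrix,
# as used to compare the dropout-prediction models.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # flagged dropouts that really drop out
    recall = tp / (tp + fn)             # real dropouts that were flagged
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Hypothetical confusion matrix: 400 dropouts correctly flagged (TP),
# 100 false alarms (FP), 50 missed dropouts (FN), 950 correct non-dropouts (TN).
acc, prec, rec, f1 = metrics(tp=400, fp=100, fn=50, tn=950)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} F={f1:.3f}")
```

Reporting all four rates matters for dropout data because the classes are usually imbalanced, so accuracy alone can look deceptively high.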
Yang, Kai; Wu, Haifeng; Zeng, Yu
Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology, recent multi-electrode technologies can record the activity of thousands of neuronal spikes simultaneously, which increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity, and introduce a deep learning algorithm, the principal component analysis network (PCANet), for spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of the matrix, we train a PCANet, from which eigenvectors of spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method has lower complexity while achieving the same sorting errors as the conventional methods.
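The core idea of PCA-based spike sorting can be sketched with numpy alone. This is a simplified stand-in, not the paper's method: synthetic two-template spikes replace the public datasets, a single SVD projection replaces the multi-stage PCANet filters, and a nearest-centroid rule replaces the SVM.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic spikes: two hypothetical waveform templates plus noise.
t = np.linspace(0, 1, 48)
templates = np.stack([np.exp(-((t - 0.3) / 0.08) ** 2),
                      0.8 * np.exp(-((t - 0.5) / 0.15) ** 2)])
labels = rng.integers(0, 2, 400)
spikes = templates[labels] + rng.normal(0, 0.05, (400, 48))

# PCA feature extraction via SVD: project each 48-sample spike onto the top
# 3 principal components, reducing the dimensionality before classification.
X = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
features = X @ Vt[:3].T

# Supervised sorting in the reduced space (nearest centroid per unit).
centroids = np.stack([features[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(np.linalg.norm(features[:, None] - centroids, axis=2), axis=1)
accuracy = (pred == labels).mean()
print(f"sorting accuracy: {accuracy:.3f}")
```

The complexity gain comes from classifying 3-dimensional features instead of 48-sample waveforms; the same projection idea underlies the PCANet stage.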
Дмитрий Васильевич Сенашенко
The article examines modern methods of distance learning in the corporate sector. The described methods are classified according to their specific applications, and their differences are reviewed on the basis of their features and uses, taking into account how teaching is organized in higher education; conclusions are drawn about the strengths of each method that can be exploited in distance education. The article then proposes, with the above factors in mind, an innovative method for constructing educational programs. Owing to its resemblance to a pyramid, the proposed technique is named the “pyramid” method. It synthesizes the best features of the distance teaching methods described earlier in the article. A detailed description is given, together with a preliminary analysis of the applicability of the technique to the training process in the Russian Federation. The analysis outlines eight problems of distance education in higher education that, in the authors' view, this method can help to solve.
Kilbrink, Nina; Bjurulf, Veronica; Blomberg, Ingela; Heidkamp, Anja; Hollsten, Ann-Christin
This article describes the process of a learning study conducted in technology education in a Swedish preschool class. The learning study method used in this study is a collaborative method, where researchers and teachers work together as a team concerning teaching and learning about a specific learning object. The object of learning in this study…
A theoretical formulation of a fast learning method based on a pseudoinverse technique is presented. The efficiency and robustness of the method are verified with the help of an Exclusive OR problem and a dynamic system identification of a linear single degree of freedom mass–spring problem. It is observed that, compared with the conventional backpropagation method, the proposed method has a better convergence rate and a higher degree of learning accuracy with a lower equivalent learning coefficient. It is also found that unlike the steepest descent method, the learning capability of which is dependent on the value of the learning coefficient ν, the proposed pseudoinverse based backpropagation algorithm is comparatively robust with respect to its equivalent variable learning coefficient. A combination of the pseudoinverse method and the steepest descent method is proposed for a faster, more accurate learning capability.
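The idea of replacing gradient descent with a direct pseudoinverse solve can be illustrated on the Exclusive OR problem mentioned above. The sketch below is an assumption-laden simplification of the paper's method: a fixed random hidden layer with the output weights obtained in one step from `np.linalg.pinv`, rather than the paper's exact architecture or iteration scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Exclusive OR problem: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Fixed random hidden layer (10 tanh units); only the output weights are
# learned, via a single minimum-norm least-squares (pseudoinverse) solve.
W = rng.normal(0, 2, (2, 10))
b = rng.normal(0, 2, 10)
H = np.tanh(X @ W + b)            # hidden activations, shape (4, 10)

w_out = np.linalg.pinv(H) @ y     # direct solve instead of gradient descent
pred = H @ w_out
print(np.round(pred, 3))
```

Because the 4×10 activation matrix has full row rank for generic random weights, the least-squares solution interpolates the four XOR targets exactly, with no learning-coefficient tuning at all, which is the robustness the abstract contrasts with steepest descent.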
Guarino, Salvatore; Leopardi, Eleonora; Sorrenti, Salvatore; De Antoni, Enrico; Catania, Antonio; Alagaratnam, Swethan
The rapid and dramatic incursion of the Internet and social networks in everyday life has revolutionised the methods of exchanging data. Web 2.0 represents the evolution of the Internet as we know it. Internet users are no longer passive receivers, and actively participate in the delivery of information. Medical education cannot evade this process. Increasingly, students are using tablets and smartphones to instantly retrieve medical information on the web or are exchanging materials on their Facebook pages. Medical educators cannot ignore this continuing revolution, and therefore the traditional academic schedules and didactic schemes should be questioned. Analysing opinions collected from medical students regarding old and new teaching methods and tools has become mandatory, with a view towards renovating the process of medical education. A cross-sectional online survey was created with Google® docs and administered to all students of our medical school. Students were asked to express their opinion on their favourite teaching methods, learning tools, Internet websites and Internet delivery devices. Data analysis was performed using SPSS. The online survey was completed by 368 students. Although textbooks remain a cornerstone for training, students also identified Internet websites, multimedia non-online material, such as the Encyclopaedia on CD-ROM, and other non-online computer resources as being useful. The Internet represented an important aid to support students' learning needs, but textbooks are still their resource of choice. Among the websites noted, Google and Wikipedia significantly surpassed the peer-reviewed medical databases, and access to the Internet was primarily through personal computers in preference to other Internet access devices, such as mobile phones and tablet computers. Increasingly, students are using tablets and smartphones to instantly retrieve medical information. © 2014 John Wiley & Sons Ltd.
Pressley, Michael; And Others
In five experiments, college-age students of differing foreign language-learning abilities were asked to learn Latin word translations to determine the effectiveness of the keyword method of foreign language vocabulary learning. The Latin words were the types for which it has been argued that the keyword method effects would be maximized (the…
Vick, Brianna M; Pollak, Adrianna; Welsh, Cynthia; Liang, Jennifer O
Here we describe projects that used GloFish, brightly colored, fluorescent, transgenic zebrafish, in experiments that enabled students to carry out all steps in the scientific method. In the first project, students in an undergraduate genetics laboratory course successfully tested hypotheses about the relationships between GloFish phenotypes and genotypes using PCR, fluorescence microscopy, and test crosses. In the second and third projects, students doing independent research carried out hypothesis-driven experiments that also developed new GloFish projects for future genetics laboratory students. Brianna Vick, an undergraduate student, identified causes of the different shades of color found in orange GloFish. Adrianna Pollak, as part of a high school science fair project, characterized the fluorescence emission patterns of all of the commercially available colors of GloFish (red, orange, yellow, green, blue, and purple). The genetics laboratory students carrying out the first project found that learning new techniques and applying their knowledge of genetics were valuable. However, assessments of their learning suggest that this project was not challenging to many of the students. Thus, the independent projects will be valuable as bases to widen the scope and range of difficulty of experiments available to future genetics laboratory students.
Nowadays, pervasive computing technologies are paving a promising way for advanced smart health applications. However, a key impediment to wide deployment of these assistive smart devices is the growing privacy and security issue, such as how to protect access to sensitive patient data in the health record. Focusing on this challenge, biometrics are attracting intense attention as a means of effective user identification for confidential health applications. In this paper, we take special interest in two bio-potential-based biometric modalities, electrocardiogram (ECG) and electroencephalogram (EEG), considering that both are unique to individuals and more reliable than token-based (identity card) and knowledge-based (username/password) methods. After extracting effective features in multiple domains from ECG/EEG signals, several advanced machine learning algorithms are introduced to perform the user identification task, including Neural Network, K-nearest Neighbor, Bagging, Random Forest and AdaBoost. Experimental results on two public ECG and EEG datasets show that ECG is a more robust biometric modality than EEG, benefiting from a higher signal-to-noise ratio and more distinguishable morphological patterns. Among the machine learning classifiers, random forest greatly outperforms the others, achieving an identification rate as high as 98%. This study is expected to demonstrate that a properly selected biometric, empowered by an effective machine learner, has great potential to enable confidential biomedicine applications in the era of smart digital health.
The article describes the process of creating and using an e-learning program for the graphical solution of linear programming problems, used in the Economic Mathematical Methods course at the Faculty of Business and Economics, MZLU. The program was created within the FRVŠ 788/2008 grant; it is intended for practising the graphical solution of LP problems and supports better understanding of linear programming. The article describes, on the one hand, how the program works, i.e. how the algorithms were implemented, and on the other hand how the program is used. The program works with integer and rational numbers. The article concludes with basic statistics on students' use of the program in full-time and part-time study: mainly the number of downloads, a comparison with other programs, and students' opinions of the e-learning support.
Resolving location expressions in text to the correct physical location, also known as geocoding or grounding, is complicated by the fact that so many places around the world share the same name. Correct resolution is made even more difficult when there is little context to determine which place is intended, as in a 140-character Twitter message, or when location cues from different sources conflict, as may be the case among different metadata fields of a Twitter message. We used supervised machine learning to weigh the different fields of the Twitter message and the features of a world gazetteer to create a model that will prefer the correct gazetteer candidate to resolve the extracted expression. We evaluated our model using the F1 measure and compared it to similar algorithms. Our method achieved results higher than state-of-the-art competitors.
Vast amounts of data exist in the astronomical data archives, and yet a large number of sources remain unclassified. We developed a multi-wavelength pipeline to classify infrared sources. The pipeline uses supervised machine learning methods to classify objects into the appropriate categories. The program is fed data that is already classified to train it, and is then applied to unknown catalogues. The primary use for such a pipeline is the rapid classification and cataloging of data that would take a much longer time to classify otherwise. While our primary goal is to study young stellar objects (YSOs), the applications extend beyond the scope of this project. We present preliminary results from our analysis and discuss future applications.
In Finland the Regional Fire and Rescue Services (RFRS) are responsible for near-shore oil spill response and shoreline cleanup operations. In addition, they assist in other types of maritime incidents, such as search and rescue operations and fire-fighting on board. These statutory assignments require the RFRS to have the capability to act both on land and at sea. As maritime incidents occur infrequently, little routine has been established. In order to improve their performance in maritime operations, the RFRS are participating in a new oil spill training programme to be launched by South-Eastern Finland University of Applied Sciences. This training programme aims to utilize new educational methods: e-learning and simulator-based training. In addition to fully exploiting the existing navigational bridge simulator, radio communication simulator and crisis management simulator, an entirely new simulator is being developed. This simulator is designed to model the oil recovery process: recovery method, rate and volume in various conditions with different oil types. The new simulator enables the creation of a comprehensive training programme covering tasks from a distress call to the completion of an oil spill response operation. The structure of the training programme, as well as the training objectives, are based on findings from competence and education surveys conducted in spring 2016. In these results, a need for vessel manoeuvring and navigation exercises together with training in actual response measures was emphasized. Additional training in maritime radio communication, GMDSS emergency protocols and collaboration with maritime authorities was also seen as important. This paper describes a new approach to maritime operations training designed for rescue authorities: a way of learning by doing, without mobilising vessels at sea.
Li, Pan; Liu, Qiang; Zhao, Wentao; Wang, Dongxu; Wang, Siqi
In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). However, practical IDSs generally update their decision module by feeding in new data and then retraining learning models periodically. Hence, attacks that compromise the data used for training or testing classifiers significantly challenge the detecting capability of machine learning-based IDSs. Poisoning attack, which is one of the most recognized security threats towards machine learning...
Wojciech M. Czarnecki
Speed, relatively low requirements for computational resources and high effectiveness in evaluating the bioactivity of compounds have caused a rapid growth of interest in applying machine learning methods to virtual screening tasks. However, due to the growth in the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards developing algorithms of high predictive power but also towards simplifying existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’, Extreme Entropy Machine and Extremely Randomized Trees, for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but they also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.
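The computational saving behind "extremely randomized" trees comes from how a split is chosen: a standard tree node searches every candidate cut point for the best one, while an extremely randomized node draws the cut point at random (in Extra-Trees, one random cut per candidate feature, keeping the best of those). A minimal single-node sketch on a toy, linearly separable screening set (illustrative data, not a real molecular descriptor):

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D toy data: "inactive" compounds with descriptor in [0, 1],
# "active" compounds in [2, 3].
x = np.concatenate([rng.uniform(0, 1, 50), rng.uniform(2, 3, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

def stump_acc(threshold):
    """Accuracy of the rule: predict active iff descriptor > threshold."""
    return ((x > threshold) == y.astype(bool)).mean()

# Standard tree node: exhaustively search all midpoints between sorted values.
xs = np.sort(x)
best_thr = max(((a + b) / 2 for a, b in zip(xs[:-1], xs[1:])), key=stump_acc)

# Extremely randomized node: draw the cut point uniformly at random instead.
random_thr = rng.uniform(x.min(), x.max())

print(stump_acc(best_thr), stump_acc(random_thr))
```

A single random split is usually worse than the optimized one, but it costs O(1) instead of O(n) per node; averaging many such trees recovers accuracy while keeping training cheap, which matches the efficiency result the abstract reports.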
In the era of big data, many urgent issues in all walks of life can be tackled with big data techniques. Compared with the Internet, economy, industry, and aerospace fields, applications of big data in architecture are relatively rare. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can decide whether or not to develop real estate in the corresponding regions. In this paper, support vector machine (SVM), least squares support vector machine (LSSVM), and partial least squares (PLS) methods are used to forecast the home values, and these algorithms are compared according to the predicted results. Although the data set exhibits serious nonlinearity, the experimental results show that the SVM and LSSVM methods are superior to PLS in dealing with the problem of nonlinearity. The global optimal solution can be found and the best forecasting effect achieved by SVM because it solves a quadratic programming problem. The different computational efficiencies of the algorithms are also compared according to their computing times.
Karstoft, Karen-Inge; Statnikov, Alexander; Andersen, Søren B; Madsen, Trine; Galatzer-Levy, Isaac R
Pre-deployment identification of soldiers at risk for long-term posttraumatic stress psychopathology after homecoming is important to guide decisions about deployment. Early post-deployment identification can direct early interventions to those in need and thereby prevent the development of chronic psychopathology. Both hold significant public health benefits given the large numbers of deployed soldiers, but neither has so far been achieved. Here, we aim to assess the potential for pre- and early post-deployment prediction of resilience or posttraumatic stress development in soldiers by application of machine learning (ML) methods. ML feature selection and prediction algorithms were applied to a prospective cohort of 561 Danish soldiers deployed to Afghanistan in 2009 to identify unique risk indicators and forecast long-term posttraumatic stress responses. Robust pre- and early post-deployment risk indicators were identified, and included individual PTSD symptoms as well as the total level of PTSD symptoms, previous trauma and treatment, negative emotions, and thought suppression. The predictive performance of these risk indicators combined was assessed by cross-validation. Together, these indicators forecast long-term posttraumatic stress responses with high accuracy (pre-deployment: AUC = 0.84 (95% CI = 0.81-0.87), post-deployment: AUC = 0.88 (95% CI = 0.85-0.91)). This study utilized a previously collected data set and was therefore not designed to exhaust the potential of ML methods. Further, the study relied solely on self-reported measures. Pre-deployment and early post-deployment identification of risk for long-term posttraumatic psychopathology is feasible and could greatly reduce the public health costs of war. Copyright © 2015 Elsevier B.V. All rights reserved.
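The cross-validated risk-indicator evaluation described above can be approximated as follows. The data are synthetic and the logistic-regression classifier is an assumption (the abstract does not name one); the key point illustrated is that feature selection must sit inside the cross-validation pipeline so the AUC is not optimistically biased:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic questionnaire-like data (hypothetical; not the Danish cohort data).
# The class imbalance loosely mimics a minority of high-risk outcomes.
X, y = make_classification(n_samples=561, n_features=40, n_informative=6,
                           weights=[0.85, 0.15], random_state=0)

# Selecting indicators inside the pipeline means each CV fold re-selects
# features on its own training split, avoiding information leakage.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```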
Andrusyszyn, M A; Cragg, C E; Humbert, J
The relationships among multiple distance delivery methods, preferred learning style, content, and achievement were examined for primary care nurse practitioner students. A researcher-designed questionnaire was completed by 86 (71%) participants, while 6 engaged in follow-up interviews. Participants preferred learning by "considering the big picture", "setting their own learning plans", and "focusing on concrete examples". Several positive associations were found: learning on one's own with learning by reading and with setting one's own learning plans; small group with learning through discussion; large group with learning new things through hearing and with having learning plans set by others. The most preferred method was print-based material and the least preferred was audio tape. The methods most suited to particular content included video teleconferencing for counseling, political action, and transcultural issues, and video tape for physical assessment. Convenience, self-direction, and timing of learning were more important than delivery method or learning style. The preferred order of learning was reading, discussing, observing, doing, and reflecting. Recommended considerations when designing distance courses include a mix of delivery methods, specific content, outcomes, learner characteristics, and the state of technology.
Czimber, Kornél; Gálos, Borbála; Mátyás, Csaba; Bidló, András; Gribovszki, Zoltán
Hungarian forests are highly sensitive to the changing climate, especially to the available amount of precipitation. Over the past two decades, drought damage has repeatedly been observed for tree species at the lower xeric limit of their distribution. From year to year, the affected forest stands become more difficult to reforest with the same native species, because these are not able to adapt to the increasing probability of droughts. The climate-related parameter set of the Hungarian forest stand database therefore needs updating. Air humidity, which was formerly used to define the forest climate zones, is no longer measured, and its value based on climate model outputs is highly uncertain. The aim was to develop a novel computerized and objective method to describe the species-specific climate conditions essential for the survival, growth and optimal production of forest ecosystems. The method is expected to project the spatial distribution of species until 2100 on the basis of regional climate model simulations. Until now, Hungarian forest managers have used a carefully edited spreadsheet for reforestation purposes. Applying binding regulations, this spreadsheet prescribes the stand-forming and admixed tree species and their expected growth rate for each forest site type. We present a new machine learning-based method to replace the former spreadsheet. We considered various methods, such as maximum likelihood, Bayesian networks and fuzzy logic. The method calculates distributions and sets up a classification, which can be validated and modified by experts if necessary. Projected climate change conditions make it necessary to include in this system an additional climate zone that does not currently exist in our region, as well as new options for potential tree species. In addition to or instead of the existing parameters, the influence of further limiting parameters (climatic extremes, soil water retention) is also investigated. Results will be
Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho
Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
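The core ELM idea described above, fixed random input-to-hidden weights with output weights solved in closed form in a single step, can be sketched in a few lines of NumPy (a toy 1-D regression, not the paper's CNN pipeline; all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy samples.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# An ELM fixes random input-to-hidden weights and solves the hidden-to-output
# weights by least squares, i.e. a single non-iterative "training" step.
n_hidden = 50
W = rng.standard_normal((1, n_hidden))        # random, never trained
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via pseudoinverse

y_hat = H @ beta
mse = float(np.mean((y_hat - y) ** 2))
```

This closed-form solve is what makes ELM training dramatically faster than backpropagation; the paper's contribution is stacking such layers inside a CNN framework.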
Bibiano, Luis H.; Pastor Collado, Juan Antonio; Mayol Sarroca, Enric
CRM information systems are valuable tools for enterprises, but CRM implementation projects are risky and present a high failure rate. In this paper we regard CRM implementation projects as services that could be greatly improved by addressing them in a methodological way, which can be designed with the help of tools such as SysML. Here we introduce and comment on our first experience in using the SysML language, which is not yet widely known, for modelling the elements involved in CRM implementatio...
Kang, Sung-Kwon; Albright, Thomas A.; Silvestre, Jerome
Extended Hückel calculations were carried out on η1, η2, and η3 complexes of P4 with Rh(PH3)2Cl. The η1-square planar and an η2 complex with C2v symmetry are the most stable. Geometrical optimizations and a detailed account of the bonding in each have been carried out. d10 η1-tetrahedral complexes of P4 are expected to be quite stable. The best candidate for an η3 mode of bonding is the trimer Fe3(CO)9. Alternative complexes at η3 include a d6-ML3 and d4-ML...
Wagner, David A.; Bennett, Matthew B.; Karban, Robert; Rouquette, Nicolas; Jenkins, Steven; Ingham, Michel
State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To effectively apply the State Analysis methodology in this context it became necessary to provide ontological definitions of the concepts and relations in State Analysis and greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper will discuss the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML and approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.
Hommes, J.; Van den Bossche, P.; de Grave, W.; Bos, G.; Schuwirth, L.; Scherpbier, A.
Little is known about how time influences collaborative learning groups in medical education. Therefore, a thorough exploration of the development of learning processes over time was undertaken in an undergraduate PBL curriculum over 18 months. A mixed-methods triangulation design was used. First, the quantitative study measured how various learning…
Jewpanich, Chaiwat; Piriyasurawong, Pallop
This research aims to 1) develop the project-based learning using discussion and lesson-learned methods via social media model (PBL-DLL SoMe Model) for enhancing the problem-solving skills of undergraduate education students, and 2) evaluate the PBL-DLL SoMe Model for enhancing the problem-solving skills of undergraduate education students.…
Laursen, Torben; Susgaard, Søren; Jensen, Flemming Steen
Pharmacol Toxicol. 1994 Jan;74(1):54-7. Absorption kinetics of two highly concentrated preparations of growth hormone: 12 IU/ml compared to 56 IU/ml. Laursen T, Susgaard S, Jensen FS, Jørgensen JO, Christiansen JS. The purpose of this study was to compare the relative bioavailability of two highly concentrated (12 IU/ml versus 56 IU/ml) formulations of biosynthetic human growth hormone administered subcutaneously. After pretreatment with growth hormone for at least four weeks, nine growth hormone deficient patients with a mean age of 26.2 years (range 17-43) were studied two times in a randomized design, the two studies being separated by at least one week. At the start of each study period (7 p.m.), growth hormone was injected subcutaneously in a dosage of 3 IU/m2. The 12 IU/ml preparation of growth hormone was administered on one occasion...
Domeniconi, Giacomo; Masseroli, Marco; Moro, Gianluca; Pinoli, Pietro
Knowledge of gene and protein functions is paramount for the understanding of physiological and pathological biological processes, as well as for the development of new drugs and therapies. Analyses for biomedical knowledge discovery greatly benefit from the availability of gene and protein functional feature descriptions expressed through controlled terminologies and ontologies, i.e., of gene and protein biomedical controlled annotations. In recent years, several databases of such annotations have become available; yet these valuable annotations are incomplete, include errors, and only some of them represent highly reliable, human-curated information. Computational techniques able to reliably predict new gene or protein annotations with an associated likelihood value are thus paramount. Here, we propose a novel cross-organism learning approach to reliably predict new functionalities for the genes of an organism based on the known controlled annotations of the genes of another, evolutionarily related and better-studied, organism. We leverage a new representation of the annotation discovery problem and a random perturbation of the available controlled annotations to allow the application of supervised algorithms to predict unknown gene annotations with good accuracy. Taking advantage of the numerous gene annotations available for a well-studied organism, our cross-organism learning method creates and trains better prediction models, which can then be applied to predict new gene annotations of a target organism. We tested and compared our method with the equivalent single-organism approach on different gene annotation datasets of five evolutionarily related organisms (Homo sapiens, Mus musculus, Bos taurus, Gallus gallus and Dictyostelium discoideum). The results show both the usefulness of the perturbation method of available annotations for better prediction model training and a great improvement of the cross-organism models with respect to the single-organism ones.
Ramana, Jayashree; Gupta, Dinesh
Progression through the cell cycle involves the coordinated activities of a suite of cyclin/cyclin-dependent kinase (CDK) complexes. The activities of the complexes are regulated by CDK inhibitors (CDKIs). Apart from their role as cell cycle regulators, CDKIs are involved in apoptosis, transcriptional regulation, cell fate determination, cell migration and cytoskeletal dynamics. As these proteins perform crucial and diverse functions, they are important drug targets for tumour and stem cell therapeutic interventions. However, CDKIs are represented by proteins with considerable sequence heterogeneity and may fail to be identified by simple similarity search methods. In this work we have evaluated and developed machine learning methods for the identification of CDKIs. We used different compositional features and evolutionary information in the form of PSSMs from CDKIs and non-CDKIs to generate SVM and ANN classifiers. In the first stage, both the ANN and SVM models were evaluated using leave-one-out cross-validation, and in the second stage they were tested on independent data sets. The PSSM-based SVM model emerged as the best classifier in both stages and is publicly available through a user-friendly web interface at http://bioinfo.icgeb.res.in/cdkipred. PMID:20967128
Amphibian species have been considered useful ecological indicators. They are used as indicators of environmental contamination, ecosystem health and habitat quality. Amphibian species are sensitive to changes in the aquatic environment and may therefore form the basis for the classification of water bodies. Water bodies in which there are a large number of amphibian species are especially valuable, even if they are located in urban areas. The automation of the classification process allows for a faster evaluation of the presence of amphibian species in water bodies. Three machine-learning methods (artificial neural networks, decision trees and the k-nearest neighbours algorithm) have been used to classify water bodies in Chorzów, one of 19 cities in the Upper Silesia Agglomeration. In this case, classification is a supervised data mining method consisting of several stages, such as building the model, the testing phase and the prediction. Seven natural and anthropogenic features of water bodies (e.g. the type of water body, aquatic plants, the purpose (destination) of the water body, the position of the water body in relation to any buildings, the condition of the water body, the degree of littering, the shore type and fishing activities) have been taken into account in the classification. The data set used in this study involved information about 71 different water bodies and the 9 amphibian species living in them. The results showed that the best average classification accuracy was obtained with the multilayer perceptron neural network.
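The three-classifier comparison described above can be sketched with scikit-learn on synthetic data of the same small size (`MLPClassifier` stands in for the multilayer perceptron; all hyperparameters are illustrative assumptions, not the study's settings):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 71 water bodies x 7 features (hypothetical data).
X, y = make_classification(n_samples=71, n_features=7, n_informative=5,
                           n_redundant=0, random_state=0)

models = {
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
# Mean 5-fold cross-validation accuracy per method, as in the study's comparison.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

With only 71 samples, cross-validation (rather than a single train/test split) is essential for a fair comparison of the three methods.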
Dao, Fu-Ying; Yang, Hui; Su, Zhen-Dong; Yang, Wuritu; Wu, Yun; Hui, Ding; Chen, Wei; Tang, Hua; Lin, Hao
Conotoxins are disulfide-rich small peptides that target ion channels and neuronal receptors. Conotoxins have been demonstrated as potent pharmaceuticals in the treatment of a series of diseases, such as Alzheimer's disease, Parkinson's disease, and epilepsy. In addition, conotoxins are ideal molecular templates for the development of new drug lead compounds and play important roles in neurobiological research as well. Thus, the accurate identification of conotoxin types will provide key clues for biological research and clinical medicine. Generally, conotoxin types are confirmed when their sequence, structure, and function are experimentally validated. However, it is time-consuming and costly to acquire structure and function information through biochemical experiments. Therefore, it is important to develop computational tools for efficiently and effectively recognizing conotoxin types based on sequence information. In this work, we reviewed the current progress in computational identification of conotoxins in the following aspects: (i) construction of benchmark datasets; (ii) strategies for extracting sequence features; (iii) feature selection techniques; (iv) machine learning methods for classifying conotoxins; (v) the results obtained by these methods and the published tools; and (vi) future perspectives on conotoxin classification. The paper provides the basis for in-depth study of conotoxins and drug therapy research.
Kilbrink, Nina; Bjurulf, Veronica; Blomberg, Ingela; Heidkamp, Anja; Hollsten, Ann-Christin
This article describes the process of a learning study conducted in technology education in a Swedish preschool class. The learning study method used in this study is a collaborative method, where researchers and teachers work together as a team concerning teaching and learning about a specific learning object. The object of learning in this study concerns strong constructions and framed structures. This article describes how this learning study was conducted and discusses reflections made du...
Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.
Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for Digital Microscopy. Using an XML schema based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, by providing one standard for both DICOM data and analytical cytology data, it eliminates the need to create and maintain special purpose interfaces for analytical cytology data thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.
Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio
Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
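The screening workflow described above, training a random forest on experimentally reported compounds and then ranking unlabeled compositions by predicted stability probability, can be sketched as follows (synthetic descriptors; a hypothetical stand-in for the ICSD-derived training set and the 1:1:1 composition space):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in: descriptors of known (labeled) compounds plus
# candidate compositions whose stability is unknown.
X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_known, y_known = X[:200], y[:200]   # "experimentally reported" training set
X_candidates = X[200:]                # unlabeled compositions to screen

clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_known, y_known)

# Rank candidates by predicted probability of belonging to the stable class.
p_stable = clf.predict_proba(X_candidates)[:, 1]
ranking = np.argsort(p_stable)[::-1]  # indices, most likely stable first
top10 = ranking[:10]
```

Thresholding `p_stable` (rather than taking a fixed top-k) is what yields a candidate list like the 481 likely-stable compounds reported in the abstract.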
Mwakigonja, Amos R; Kaaya, Ephata E; Mgaya, Edward M
HIV infection is reported to be associated with some malignant lymphomas (ML), the so-called AIDS-related lymphomas (ARL), which show aggressive behavior and poor prognosis. The ML frequency, pathogenicity, clinical patterns and possible association with AIDS in Tanzania are not well documented, impeding the development of preventive and therapeutic strategies. Sections of 176 archival formalin-fixed, paraffin-embedded biopsies of ML patients at Muhimbili National Hospital (MNH)/Muhimbili University of Health and Allied Sciences (MUHAS), Tanzania, from 1996-2001 were stained with hematoxylin and eosin, and selected (70) cases were stained for expression of pan-leucocytic (CD45), B-cell (CD20), T-cell (CD3), Hodgkin/RS cell (CD30), histiocyte (CD68) and proliferation (Ki-67) antigen markers. Corresponding clinical records were also evaluated. Available sera from 38 ML patients were screened (ELISA) for HIV antibodies. The proportion of ML out of all diagnosed tumors at MNH during the 6-year period was 4.2% (176/4200), comprising 77.84% non-Hodgkin lymphoma (NHL), including 19.32% Burkitt's lymphoma (BL), and 22.16% Hodgkin's disease (HD). The frequency of ML tumors increased from 0.42% (1997) to 0.70% (2001), and 23.7% of the tested sera from these patients were HIV positive. The mean age for all ML was 30, the age range 3-91, and the peak age 1-20 years. The male:female ratio was 1.8:1. Supra-diaphragmatic presentation was commonest, and the histological sub-types were mostly aggressive B-cell lymphomas; however, no clear cases of primary effusion lymphoma (PEL) or primary central nervous system lymphoma (PCNSL) were diagnosed. Malignant lymphomas apparently increased significantly among diagnosed tumors at MNH between 1996 and 2001, predominantly among the young, HIV-infected and AIDS patients. The frequent aggressive clinical and histological presentation as well as the dominant B-immunophenotype and the HIV serology indicate a pathogenic association with AIDS. Therefore, routine HIV screening of all malignant lymphoma
Reynolds, Fiona; Stanistreet, Debbi; Elton, Peter
Background: Several studies in the UK have suggested that women with learning disabilities may be less likely to receive cervical screening tests, and a previous local study had found that GPs considered screening unnecessary for women with learning disabilities. This study set out to ascertain whether women with learning disabilities are more likely to be ceased from a cervical screening programme than women without, and to examine the reasons given for ceasing women with learning disabilities. It was carried out in Bury, Heywood-and-Middleton and Rochdale. Methods: Using retrospective cohort study methods, women with learning disabilities were identified by Read code, and their cervical screening records were compared with the Call-and-Recall records of women without learning disabilities in order to examine their screening histories. Analysis was carried out using case-control methods, 1:2 (women with learning disabilities : women without learning disabilities), calculating odds ratios. Results: 267 women's records were compared with the records of 534 women without learning disabilities. Women with learning disabilities had an odds ratio (OR) of 0.48 (confidence interval (CI) 0.38-0.58; χ²: 72.227; p-value …) compared with women without learning disabilities. Conclusion: The reasons given for ceasing and/or not screening suggest that merely being coded as having a learning disability is not the sole reason for these actions. There are training needs among smear takers regarding appropriate reasons not to screen and providing screening for women with learning disabilities. PMID:18218106
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need to assess deep learning further.
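The evaluation metrics named in this abstract (AUC, F1, Cohen's kappa, Matthews correlation coefficient) are all available in scikit-learn; a small worked example on toy predictions, with values chosen only so the arithmetic is easy to check by hand:

```python
from sklearn.metrics import (cohen_kappa_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

# Hypothetical labels, hard predictions, and ranking scores for 8 examples.
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.2, 0.6, 0.3, 0.8, 0.9, 0.4, 0.7]

auc = roc_auc_score(y_true, y_score)     # 15 of 16 positive/negative pairs ranked correctly -> 0.9375
f1 = f1_score(y_true, y_pred)            # precision = recall = 3/4 -> 0.75
kappa = cohen_kappa_score(y_true, y_pred)  # (0.75 - 0.5) / (1 - 0.5) = 0.5
mcc = matthews_corrcoef(y_true, y_pred)  # (TP*TN - FP*FN) / sqrt(...) = 0.5
```

Note that AUC uses the continuous scores while F1, kappa, and MCC use the thresholded predictions, which is why the study reports them together: they can disagree about which model is best.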
Based on observations of grade XI high school students, daily student test scores were low due to the limited role of students in the learning process. This classroom action research aims to improve learning outcomes and student motivation through the discovery learning method using colloid material. The study uses the approach developed by Lewin, consisting of planning, action, observation, and reflection. Data collection techniques used questionnaires and a final ability test. The research shows that the discovery learning model had a positive influence on students' learning, increasing the students' average score from 74 in the first cycle to 90.3 in the second cycle and increasing student motivation as measured against the basic competence (KD) categories from the first to the second cycle. Thus the results of this study can be used to improve learning outcomes and student motivation.
Background: Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over recent decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck anatomy and physiology to Speech-Language and Hearing pathology undergraduate students. Methods: Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the anatomy questions and the physiology questions. Results: Students who received the game-based method performed better in the post-test assessment only on the anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test on the anatomy and physiology questions. Conclusions: The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.
Tonzar, Claudio; Lotto, Lorella; Job, Remo
In this study we investigated the effects of two learning methods (picture- or word-mediated learning) and of word status (cognates vs. noncognates) on the vocabulary acquisition of two foreign languages: English and German. We examined children from fourth and eighth grades in a school setting. After a learning phase during which L2 words were…
Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román
This study compares an interactive learning manager for graphic engineering to develop spatial vision (ILMAGE_SV) to traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…
Cheng, Xusen; Li, Yuanyuan; Sun, Jianshan; Huang, Jianqing
Collaborative case studies and computer-supported collaborative learning (CSCL) play an important role in the modern education environment. A number of researchers have given significant attention to learning design in order to improve the satisfaction of collaborative learning. Although collaboration engineering (CE) is a mature method widely…
A study of 46 management students compared three methods for learning strategic management: cases, simulation, and action learning through consulting projects. Simulation was superior to action learning on all outcomes and equal or superior to cases on two. Simulation gave students a central role in management and greater control of the learning…
Kofoed, Lise B.; Jørgensen, Frances
This paper discusses how Problem-Based Learning (PBL) methods were used to support a Danish company in its efforts to become more of a 'learning organisation', characterized by sharing of knowledge and experiences. One of the central barriers to organisational learning in this company involved...
de Villiers, M. R.; Becker, Daphne
From the perspective of parallel mixed-methods research, this paper describes interactivity research that employed usability-testing technology to analyse cognitive learning processes; personal learning styles and times; and errors-and-recovery of learners using an interactive e-learning tutorial called "Relations." "Relations"…
Warin, Bruno; Talbi, Omar; Kolski, Christophe; Hoogstoel, Frédéric
This paper presents the "Multi-Role Project" method (MRP), a broadly applicable project-based learning method, and describes its implementation and evaluation in the context of a Science, Technology, Engineering, and Mathematics (STEM) course. The MRP method is designed around a meta-principle that considers the project learning activity…
Xu, Yan; Yang, Jing; Zhong, Shuiming
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neurons emit a spike during their running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes in the calculation of the weight adjustment. The experimental results show that our method markedly improves learning performance compared with the offline learning manner and has a clear advantage in learning accuracy compared with other learning methods. This stronger learning ability gives the method a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
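The online adjustment described in the abstract above can be sketched minimally: at each time step the synaptic weights move in proportion to the spike error on the currently active inputs. This is an illustrative Widrow-Hoff-style rule under an assumed binary spike encoding, not the authors' exact method; the function name, arguments, and learning rate are all hypothetical.

```python
def online_spike_update(weights, inputs, actual, desired, lr=0.1):
    """One online weight adjustment for a spiking neuron (illustrative).

    weights: list of synaptic weights, one per input synapse
    inputs:  0/1 input spikes at the current time step
    actual/desired: 0/1 output spike at the current time step

    The weight change is driven by the spike error and applied as soon
    as a spike (or a missing spike) is observed -- the online manner the
    abstract describes, not the paper's exact update rule.
    """
    error = desired - actual  # +1: missed spike, -1: spurious spike
    return [w + lr * error * x for w, x in zip(weights, inputs)]
```

A missed desired spike strengthens only the synapses that were active at that moment, which is the intuition behind applying the adjustment immediately rather than after a full offline pass.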
darsih darsih darsih
Full Text Available ABSTRACT Assessing the quality of e-learning courses to measure the success of e-learning systems in online learning is essential, and the results can be used to improve education. This study analyzes the quality of e-learning courses on the website www.kulon.undip.ac.id using a questionnaire whose questions are based on the variables of ISO 9126, with responses scored on a Likert scale through a web application. A rule-based reasoning method is used to assess the quality of the e-learning courses. A case study was conducted on four e-learning courses with a sample of 133 respondents as users of the courses. The results show good quality values for each of the e-learning courses tested. In addition, each e-learning course has different strengths depending on particular variables. Keywords: E-Learning, Rule-Base, Questionnaire, Likert, Measuring.
.... In this research project, we have investigated methods and implemented algorithms for efficiently making certain classes of inference in belief networks, and for automatically learning certain...
.... Our research blends methods from several fields-statistics and probability, signal and image processing, mathematical physics, scientific computing, statistical learning theory, and differential...
Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina
Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a need to change traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for Anatomy questions and for Physiology questions. Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both post-test and long-term post-test when considering the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective to improve students' short and long-term knowledge retention.
Bein, Thomas; Weber-Carstens, Steffen; Goldmann, Anton; Müller, Thomas; Staudinger, Thomas; Brederlau, Jörg; Muellenbach, Ralf; Dembinski, Rolf; Graf, Bernhard M.; Wewalka, Marlene; Philipp, Alois; Wernecke, Klaus-Dieter; Lubnow, Matthias; Slutsky, Arthur S.
Background Acute respiratory distress syndrome is characterized by damage to the lung caused by various insults, including ventilation itself, and tidal hyperinflation can lead to ventilator induced lung injury (VILI). We investigated the effects of a low tidal volume (VT) strategy (VT ≈ 3 ml/kg predicted body weight [PBW]) using pumpless extracorporeal lung assist in established ARDS. Methods Seventy-nine patients were enrolled after a 'stabilization period' (24 h with optimized therapy an...
D.Ed. The aim of this thesis is to find out whether there is any relationship between learners' attitudes and learning difficulties in mathematics: to investigate whether learning difficulties in mathematics are associated with learners' gender; to establish the nature of teachers' perceptions of the learning problem areas in the mathematics curriculum; and to find out about teachers' views on the methods of teaching mathematics, resources, the learning of mathematics, and extracurricular activit...
Sim, Kevin; Hart, Emma; Paechter, Ben
We describe a novel hyper-heuristic system that continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; and representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in artificial immune systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3,968 new instances of 1D bin-packing problems as well as on 1,370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.
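For readers unfamiliar with the problem domain above, the heuristics such a hyper-heuristic system generates and combines are packing rules of the following kind; first-fit is the classic baseline for 1D bin packing. This sketch is illustrative of the problem, not of the paper's evolved heuristics or immune-network architecture.

```python
def first_fit(items, capacity):
    """First-fit heuristic for 1D bin packing: place each item in the
    first open bin with enough remaining space, opening a new bin when
    none fits. A simple constituent heuristic of the kind a
    hyper-heuristic system might generate or select among."""
    remaining = []  # remaining capacity per open bin
    packing = []    # items assigned to each bin
    for item in items:
        for i, rem in enumerate(remaining):
            if item <= rem:
                remaining[i] -= item
                packing[i].append(item)
                break
        else:  # no open bin fits: open a new one
            remaining.append(capacity - item)
            packing.append([item])
    return packing
```

On the instance `[4, 3, 2, 5, 1]` with capacity 5 this yields three bins; a hyper-heuristic would learn when rules like this outperform alternatives across a problem space.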
Olden, Peter C
Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix problems of organization structure, design, and process. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.
Rogowsky, Beth A.; Calhoun, Barbara M.; Tallal, Paula
While it is hypothesized that providing instruction based on individuals' preferred learning styles improves learning (i.e., reading for visual learners and listening for auditory learners, also referred to as the "meshing hypothesis"), after a critical review of the literature Pashler, McDaniel, Rohrer, and Bjork (2008) concluded that…
Hansen, Samantha Leigh
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Full Text Available The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and a Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author's previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than the MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
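The MPLE baseline mentioned in the abstract above maximizes a product of single-spin conditional probabilities rather than the intractable joint likelihood. A minimal sketch of that objective for ±1 spins follows; the function name and data layout are hypothetical, and this is the baseline criterion, not the 1-SMCI estimator itself.

```python
import math

def pseudo_log_likelihood(samples, w, b):
    """Log pseudo-likelihood of +/-1 spin samples under a Boltzmann
    machine with symmetric couplings w[i][j] and biases b[i].

    For each spin i, P(x_i | x_rest) = sigmoid(2 * x_i * field_i) where
    field_i = b[i] + sum_j w[i][j] * x[j]. MPLE maximizes the sum of
    these conditional log-probabilities over all spins and samples,
    avoiding the NP-hard partition function of the joint likelihood.
    """
    total = 0.0
    for x in samples:
        for i in range(len(x)):
            field = b[i] + sum(w[i][j] * x[j]
                               for j in range(len(x)) if j != i)
            # log sigmoid(2 * x_i * field) = -log(1 + exp(-2 * x_i * field))
            total += -math.log1p(math.exp(-2.0 * x[i] * field))
    return total
```

With all couplings and biases at zero, every conditional is 1/2, so a single two-spin sample scores -2 log 2, a quick sanity check on the objective.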
Williams van Rooij, Shahron
This study examined the impact of two Problem-Based Learning (PBL) approaches on knowledge transfer, problem-solving self-efficacy, and perceived learning gains among four intact classes of adult learners engaged in a group project in an online undergraduate business research methods course. With two of the classes using a text-only PBL workbook…
Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila
Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In the light of the necessity to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximization approach and variational Bayes inference are proposed. Theoretical derivations of the posterior estimates of model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve a 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.
Hindriks, Koen V.; Tykhonov, Dmytro
In automated negotiation, information gained about an opponent's preference profile by means of learning techniques may significantly improve an agent's negotiation performance. It therefore is useful to gain a better understanding of how various negotiation factors influence the quality of learning. The quality of learning techniques in negotiation is typically assessed indirectly by means of comparing the utility levels of agreed outcomes and other more global negotiation parameters. An evaluation of learning based on such general criteria, however, does not provide any insight into the influence of various aspects of negotiation on the quality of the learned model itself. The quality may depend on such aspects as the domain of negotiation, the structure of the preference profiles, the negotiation strategies used by the parties, and others. To gain a better understanding of the performance of proposed learning techniques in the context of negotiation, and to be able to assess the potential to improve the performance of such techniques, a more systematic assessment method is needed. In this paper we propose such a systematic method to analyse the quality of the information gained about opponent preferences by learning in single-instance negotiations. The method includes measures to assess the quality of a learned preference profile and proposes an experimental setup to analyse the influence of various negotiation aspects on the quality of learning. We apply the method to a Bayesian learning approach for learning an opponent's preference profile and discuss our findings.
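The Bayesian learning approach evaluated in the abstract above reduces, at its core, to updating a belief over candidate opponent-preference profiles after each observed bid. The following is a generic sketch of that update, not the paper's model; the function name and the discrete-hypothesis setup are assumptions for illustration.

```python
def bayes_update(priors, likelihoods):
    """One Bayesian belief update over candidate opponent-preference
    profiles: posterior is proportional to prior times the likelihood
    of the observed bid under each candidate profile, renormalized."""
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Repeating this update over the bids exchanged during a negotiation concentrates probability mass on profiles consistent with the opponent's behaviour, and the quality measures the paper proposes would compare this learned distribution against the opponent's true profile.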
Hwang, Wonil; Sohn, Kwang Young; Cho, Chang Hwan; Kim, Sung Jong
The acceptance methods associated with commercial-grade dedication are the following: 1) special tests and inspections (Method 1); 2) commercial-grade surveys (Method 2); 3) source verification (Method 3); 4) an acceptable item and supplier performance record (Method 4). Special tests and inspections, often referred to as Method 1, are performed by the dedicating entity after the item is received to verify selected critical characteristics. Conducting a commercial-grade survey of a supplier is often referred to as Method 2; supplier audits to verify compliance with a nuclear QA program do not meet the intent of a commercial-grade survey. Source verification, often referred to as Method 3, entails verification of critical characteristics during manufacture and testing of the item being procured. The performance history (good or bad) of the item and supplier is a consideration when determining the use of the other acceptance methods and the rigor with which they are used on a case-by-case basis. Some digital equipment systems have delivery references and operating histories in nuclear power plants, as far as surveyed. However, collecting the supporting data sheets proved difficult, so suppliers usually decide to conduct the CGID based on Method 1 and Method 2 during initial qualification. It is conceived that Method 4 might be a better approach for CGID (Commercial Grade Item Dedication), even if there are some difficulties in assembling the data package justifying CGID from the vendor and operating organization. This paper presents the lessons learned from consulting on Methods 1 and 2 for digital equipment dedication. Considering all the information above, there are a couple of issues to keep in mind in order to perform the CGID with Method 2. In doing a commercial-grade survey based on Method 2, quality personnel as well as technical engineers shall be involved for integral dedication. Other than this, the review of critical
Henßen, Robert; Schleipen, Miriam
OPC UA (OPC Unified Architecture) is a platform-independent standard series (IEC 62541) for communication of industrial automation devices and systems. The OPC Unified Architecture is an advanced communication technology for process control. Certainly the launching costs for the initial information model are quite high. AutomationML (Automation Markup Language) is an upcoming open standard series (IEC 62714) for describing production plants or plant components. The goal of t...
Monk, Jonathan M.; Lloyd, Colton J.; Brunk, Elizabeth
To the Editor: Extracting knowledge from the many types of big data produced by high-throughput methods remains a challenge, even when data are from Escherichia coli, the best characterized bacterial species. Here, we present iML1515, the most complete genome-scale reconstruction of the metabolic...
Yuk Chan, Cecilia Ka
Experiential learning pedagogy is taking a lead in the development of graduate attributes and educational aims as these are of prime importance for society. This paper shows a community service experiential project conducted in China. The project enabled students to serve the affected community in a post-earthquake area by applying their knowledge and skills. This paper documented the students' learning process from their project goals, pre-trip preparations, work progress, obstacles encountered to the final results and reflections. Using the data gathered from a focus group interview approach, the four components of Kolb's learning cycle, the concrete experience, reflection observation, abstract conceptualisation and active experimentation, have been shown to transform and internalise student's learning experience, achieving a variety of learning outcomes. The author will also explore how this community service type of experiential learning in the engineering discipline allowed students to experience deep learning and develop their graduate attributes.
Koo, Ching Lee; Liew, Mei Jing; Mohamad, Mohd Saberi; Salleh, Abdul Hakim Mohamed
Recently, the greatest statistical computational challenge in genetic epidemiology is to identify and characterize the genes that interact with other genes and with environmental factors to affect complex multifactorial diseases. These gene-gene interactions are also denoted as epistasis, a phenomenon that cannot be handled by traditional statistical methods due to the high dimensionality of the data and the occurrence of multiple polymorphisms. Hence, several machine learning methods can solve such problems by identifying susceptibility genes in such common and multifactorial diseases, namely neural networks (NNs), support vector machines (SVMs), and random forests (RFs). This paper gives an overview of these machine learning methods, describing the methodology of each and its application in detecting gene-gene and gene-environment interactions. Lastly, this paper discusses each machine learning method and presents its strengths and weaknesses in detecting gene-gene interactions in complex human disease.
Full Text Available Ensuring that the results of data processing in an experiment are not affected by the presence of outliers is a relevant issue for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set, so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
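One feasible robust error criterion of the kind the abstract above refers to is the Huber loss: quadratic for small residuals, linear for large ones, so a few outliers cannot dominate the training error. This is an illustrative choice of criterion, not necessarily the one the paper adopts.

```python
def huber_loss(residuals, delta=1.0):
    """Huber criterion over a list of residuals: 0.5*r^2 when |r| <= delta,
    delta*(|r| - 0.5*delta) otherwise. The linear tail bounds the
    influence of any single outlier on the total error, giving the
    statistically robust behaviour discussed in the abstract."""
    total = 0.0
    for r in residuals:
        a = abs(r)
        if a <= delta:
            total += 0.5 * r * r          # quadratic regime: ordinary least squares
        else:
            total += delta * (a - 0.5 * delta)  # linear regime: outlier-resistant
    return total
```

A residual of 10 contributes 9.5 to this criterion instead of 50 under squared error, which is what keeps gradient-based (or Kalman-style) updates from being dragged around by contaminated observations.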
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal, Ginsburg, & Schau, 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof, Ceroni, Jeong, & Moghaddam, 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to...
Spidlen, Josef; Leif, Robert C; Moore, Wayne; Roederer, Mario; Brinkman, Ryan R
The lack of software interoperability with respect to gating due to lack of a standardized mechanism for data exchange has traditionally been a bottleneck, preventing reproducibility of flow cytometry (FCM) data analysis and the usage of multiple analytical tools. To facilitate interoperability among FCM data analysis tools, members of the International Society for the Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) have developed an XML-based mechanism to formally describe gates (Gating-ML). Gating-ML, an open specification for encoding gating, data transformations and compensation, has been adopted by the ISAC DSTF as a Candidate Recommendation. Gating-ML can facilitate exchange of gating descriptions the same way that FCS facilitated for exchange of raw FCM data. Its adoption will open new collaborative opportunities as well as possibilities for advanced analyses and methods development. The ISAC DSTF is satisfied that the standard addresses the requirements for a gating exchange standard.
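To make the idea of an XML gate description concrete, the sketch below builds a rectangular gate over two scatter parameters with the standard library. The element and attribute names here are deliberately simplified illustrations, not the exact Gating-ML vocabulary defined by the ISAC specification.

```python
import xml.etree.ElementTree as ET

# Schema-free sketch of a rectangular gate in the spirit of Gating-ML:
# a gate is a named region bounded per dimension, with each dimension
# referring to an FCM parameter. Tag/attribute names are illustrative.
gate = ET.Element("RectangleGate", id="G1")
for name, lo, hi in [("FSC-A", "100", "900"), ("SSC-A", "50", "700")]:
    dim = ET.SubElement(gate, "dimension", min=lo, max=hi)
    ET.SubElement(dim, "parameter", name=name)
xml_text = ET.tostring(gate, encoding="unicode")
```

Because the gate geometry is explicit in the XML rather than locked inside one vendor's workspace file, any compliant analysis tool can reproduce the same gated population, which is the interoperability the standard targets.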
Reng, Lars; Kofoed, Lise; Schoenau-Fog, Henrik
Game Based Learning has proven to have many possibilities for supporting better learning outcomes when using educational or commercial games in the classroom. However, there is also a great potential in using game development as a motivator in other kinds of learning scenarios. This study will focus on cases in which development of games did change the learning environments into production units where students or employees were producing games as part of the learning process. The cases indicate that the motivation as well as the learning curve became very high. The pedagogical theories and methods are based on Problem Based Learning (PBL), but are developed further by combining PBL with a production-oriented/design-based approach. We illustrate the potential of using game production as a learning environment with an investigation of three game productions. We can conclude that using game...
Cheplygina, Veronika; de Bruijne, Marleen; Pluim, Josien P. W.
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods which can learn
National Aeronautics and Space Administration — Sparse machine learning has recently emerged as powerful tool to obtain models of high-dimensional data with high degree of interpretability, at low computational...
Full Text Available This article reports on the use of Wiktionary, an open-source online dictionary, as well as generic wiki pages within a university's e-learning environment, as teaching and learning resources in an Afrikaans sociolinguistics module. In a communal-constructivist manner, students not only learnt but also constructed learning content. From the qualitative research conducted with students it is clear that wikis provide for effective facilitation of a blended learning approach to sociolinguistic research. The use of this medium was positively received; however, some students did prefer handing in assignments in hard copy. The issues of computer literacy and access to the internet were also raised by the respondents. The use of wikis and Wiktionary prompted useful unplanned discussions around the reliability and quality of public wikis. The use of a public wiki such as Wiktionary served as encouragement for students, as they were able to contribute to the promotion of Afrikaans in this way.
This paper reports on the learning designs, teaching methods and activities most commonly employed within the disciplines in six universities in Australia. The study sought to establish if there were significant differences between the disciplines in learning designs, teaching methods and teaching activities in the current Australian context, as…
Najafi, Mohammad; Motaghi, Zohre; Nasrabadi, Hassanali Bakhtiyar; Heshi, Kamal Nosrati
Regarding the importance of enhancement in learner's social skills, especially in learning process, this study tries to introduce one of the group learning programs entitled "debate" as a teaching method in Iran religious universities. It also considers the concept and the history of this method by qualitative and descriptive-analytical…
He, J.; de Rijke, M.
We describe our participation in the Link-the-Wiki track at INEX 2009. We apply machine learning methods to the anchor-to-best-entry-point task and explore the impact of the following aspects of our approaches: features, learning methods as well as the collection used for training the models. We
Tomcho, Thomas J.; Rice, Diana; Foels, Rob; Folmsbee, Leah; Vladescu, Jason; Lissman, Rachel; Matulewicz, Ryan; Bopp, Kara
Research methods and statistics courses constitute a core undergraduate psychology requirement. We analyzed course syllabi and faculty self-reported coverage of both research methods and statistics course learning objectives to assess the concordance with APA's learning objectives (American Psychological Association, 2007). We obtained a sample of…
Keenan, Kevin; Fontaine, Danielle
How undergraduate students learn research methods in geography has been understudied. Existing work has focused on course description from the instructor's perspective. This study, however, uses a grounded theory approach to allow students' voices to shape a new theory of how they themselves say that they learn research methods. Data from two…
Natland, Sidsel; Weissinger, Erika; Graaf, Genevieve; Carnochan, Sarah
The literature on teaching research methods to social work students identifies many challenges, such as dealing with the tensions related to producing research relevant to practice, access to data to teach practice-based research, and limited student interest in learning research methods. This is an exploratory study of the learning experiences of…
Bailey, Regina M.
In an information-saturated world, today's college students desire to be engaged both in and out of their college classrooms. This mixed-methods study sought to explore how replacing traditional teaching methods with engaged learning activities affects millennial college student attitudes and perceptions about learning. The sub-questions…
Lukman, Rebeka; Krajnc, Majda
This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…
The purpose of this study was to investigate the preferred method of learning about heart disease by adult learners. This research study also investigated if there was a statistically significant difference between race/ethnicity, age, and gender of adult learners and their preferred method of learning preventative heart disease care. This…
Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke
Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of peoples' experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning with great detail, accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and in which way learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.
Chan, Aileen Wai-Kiu; Chair, Sek-Ying; Sit, Janet Wing-Hung; Wong, Eliza Mi-Ling; Lee, Diana Tze-Fun; Fung, Olivia Wai-Man
Case-based learning (CBL) is an effective educational method for improving the learning and clinical reasoning skills of students. Advances in e-learning technology have supported the development of the Web-based CBL approach to teaching as an alternative or supplement to the traditional classroom approach. This study aims to examine the CBL experience of Hong Kong students using both traditional classroom and Web-based approaches in undergraduate nursing education. This experience is examined in terms of the perceived self-learning ability, clinical reasoning ability, and satisfaction in learning of these students. A mixture of quantitative and qualitative approaches was adopted. All Year-3 undergraduate nursing students were recruited. CBL was conducted using the traditional classroom approach in Semester 1, and the Web-based approach was conducted in Semester 2. Student evaluations were collected at the end of each semester using a self-report questionnaire. In-depth, focus-group interviews were conducted at the end of Semester 2. One hundred twenty-two students returned their questionnaires. No difference between the face-to-face and Web-based approaches was found in terms of self-learning ability (p = .947), clinical reasoning ability (p = .721), and satisfaction (p = .083). Focus group interview findings complemented survey findings and revealed five themes that reflected the CBL learning experience of Hong Kong students. These themes were (a) the structure of CBL, (b) the learning environment of Web-based CBL, (c) critical thinking and problem solving, (d) cultural influence on CBL learning experience, and (e) student-centered and teacher-centered learning. The Web-based CBL approach was comparable but not superior to the traditional classroom CBL approach. The Web-based CBL experience of these students sheds light on the impact of Chinese culture on student learning behavior and preferences.
Noorafshan, Ali; Hoseini, Leila; Amini, Mitra; Dehghani, Mohammad-Reza; Kojuri, Javad; Bazrafkan, Leila
Learning by lecture is a passive experience. Many innovative techniques have been presented to stimulate students to assume a more active attitude toward learning. In this study, simultaneous sketch drawing, as an interactive learning technique, was applied to teach anatomy to medical students. We reconstructed a fun interactive model of teaching anatomy as simultaneous anatomic sketching. To test the model's instructional effectiveness, we conducted a quasi-experimental study; the students were then asked to write about their learning experiences in their portfolios, and their views were evaluated by a questionnaire. The portfolio evaluation revealed that students believed this method leads to deep learning and a better understanding of anatomical subjects. Evaluation of the students' views on this teaching approach showed that more than 80% of the students agreed or completely agreed with the statement that learning anatomy concepts is easier and the class is less boring with this method. More than 60% of the students agreed or completely agreed with sketching anatomical figures simultaneously with the professor. They also found that sketching makes anatomy more attractive and reduces the time needed to learn it. Similar numbers of students agreed or completely agreed that the method helped them learn anatomical concepts in the anatomy laboratory. More than 80% of the students found simultaneous sketching to be a good method for learning anatomy overall. Sketch drawing, as an interactive learning technique, is attractive for students learning anatomy.
Xu Chengjian, E-mail: email@example.com [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)
Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
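The sparsity that makes LASSO models interpretable comes from its core operation, the soft-thresholding operator, which shrinks weak coefficients exactly to zero rather than merely including or excluding variables as stepwise selection does. A minimal Python sketch (the function name and toy coefficient values are illustrative, not taken from the study):

```python
def soft_threshold(x, lam):
    """Soft-thresholding operator used in LASSO coordinate descent.

    Shrinks x toward zero by lam and sets it exactly to zero when
    |x| <= lam -- this is what produces sparse, interpretable models.
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Hypothetical least-squares coefficients for standardized predictors:
# LASSO keeps the strong ones (shrunken) and drops the weak ones.
ols_coefficients = [2.5, -0.3, 0.8, 0.05]
lasso_coefficients = [soft_threshold(b, 0.4) for b in ols_coefficients]
print(lasso_coefficients)  # weak predictors are zeroed out
```

In contrast, stepwise selection makes hard in/out decisions based on greedy search, which is one reason its predictive performance can lag under repeated cross-validation.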
Wang, Wei; Yang, Yongxiao; Yin, Jianxin; Gong, Xinqi
Different types of protein-protein interactions produce different protein-protein interface patterns, and different machine learning methods are suited to different types of data. Is it then the case that different machine learning methods prefer different interface patterns for prediction? Here, four different machine learning methods were employed to predict protein-protein interface residue pairs across different interface patterns. The performance of the methods differs across protein types, which suggests that different machine learning methods tend to predict different protein-protein interface patterns. We used ANOVA and variable selection to support this result. Our proposed method, which takes advantage of the different single methods, also achieved good prediction results compared with the single methods. Beyond the prediction of protein-protein interactions, this idea can be extended to other research areas such as protein structure prediction and design.
The purpose of this research was to determine whether there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem-solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22), who participated in a PBL-based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20), who participated in the traditional lecture teaching methodology. Both courses were taught by experienced professors with doctoral-level qualifications. The results indicated statistically significant differences between the traditional approach (i.e., lower physics posttest scores and smaller differences between pre- and posttest scores) and the collaborative approach (i.e., higher physics posttest scores and larger differences between pre- and posttest scores) to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age), there were statistically significant (p = .04) differences in which female average academic improvement was much higher than male average academic improvement (~63%) in the control group, which may indicate that traditional teaching methods
Fu, Yuchen; Liu, Quan; Ling, Xionghong; Cui, Zhiming
Reinforcement learning (RL) is a kind of interactive learning method. Its main characteristics are "trial and error" and "related reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality," in which the state space grows exponentially with the number of features, and the resulting low convergence speed. The method can greatly reduce the state space and choose actions purposefully and efficiently, so as to optimize the reward function and enhance convergence speed. Applied to online learning in the game of Tetris, the experimental results show that the convergence speed of the algorithm is evidently enhanced by the new method, which combines a hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also solved to a certain extent by the hierarchical method. Performance with different parameters is compared and analyzed as well.
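The subreward idea amounts to adding a shaping term for a subtask to the environment reward inside an otherwise standard value update. A tabular Q-learning sketch (the function, the two-action space, and all numbers are our illustration, not the authors' code):

```python
def q_update(q, state, action, reward, subreward, next_state,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning step where the environment reward is
    augmented with an action subreward (shaping for a low-level subtask)."""
    # Greedy bootstrap over a hypothetical two-action space {0, 1}.
    best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
    target = (reward + subreward) + gamma * best_next
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (target - q.get(key, 0.0))
    return q[key]

q = {}
# Even when the main reward is zero at this step, a small subreward
# for a useful low-level action nudges its value upward immediately.
value = q_update(q, state=0, action=1, reward=0.0, subreward=0.2,
                 next_state=1)
print(value)  # prints 0.1 = alpha * subreward on an empty table
```

The hierarchical part of the method then lets subtasks learn over much smaller state spaces; the update above is the piece the subrewards touch.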
Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.
Event simulation in high-energy physics is divided into several stages, and various programs exist for these stages. In this article we are interested in interfacing different Monte Carlo event generators via data files, in particular Matrix Element (ME) generators and Showering and Hadronization (SH) generators. There is a widely accepted format for data files for such interfaces, the Les Houches Event Format (LHEF). Although the information kept in an LHEF file is enough for the proper working of SH generators, it is insufficient for understanding how the events in the LHEF file have been prepared and which physical model has been applied. In this paper we propose an extension of the format for keeping additional information available in generators. We propose to add a new information block, marked up with XML tags, to the LHEF file. This block describes the events in the file in more detail; in particular, it stores information about the physical model, kinematical cuts, the generator, etc. This helps to make LHEF files self-documenting. Certainly, HepML can be applied in a more general context, not only in LHEF files. Solution method: In order to overcome the drawbacks of the original LHEF accord, we propose to add a new information block of HepML tags. HepML is an XML-based markup language. We designed several XML Schemas for all tags in the language; any HepML document should follow the rules of the Schemas. The language is equipped with a library for operating on HepML tags and documents. This C++ library, called libhepml, consists of classes for HepML objects, which represent a HepML document in computer memory, parsing classes, serialization classes, and some auxiliary classes. Restrictions: The software is adapted for solving the problems described in the article. There are no additional restrictions. Running time: Tests have been done on a computer with an Intel(R) Core(TM)2 Solo, 1.4 GHz.
Parsing of a HepML file: 6 ms (size of the HepML files is 12.5 Kb) Writing of a HepML block to file: 14 ms (file size 12.5 Kb) Merging of
Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan
The scenario of rapidly growing geodata catalogues requires tools focused on facilitating users' choice of products. Having quality fields populated in metadata allows users to rank and then select the best fit-for-purpose products. In this direction, we have developed QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the highest level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode the quality of datasets in the corresponding metadata schemas. The benefits of encoded quality, for data producers, are improved product discovery and better communication of product characteristics. Data users, particularly decision-makers, would find quality and uncertainty measures with which to make the best decisions and perform dataset intercomparison. It also allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, QualityML is a profile of the ISO geospatial metadata standards providing a set of rules, structured in 6 levels, for precisely documenting quality indicator parameters. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, it uses statistical expressions from the UncertML dictionary (http://www.uncertml.org) encoding. However, it also extends UncertML to provide lists of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements, giving a global error of 0.5 °C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative
Blockchain is a distributed database that maintains a dynamic list of data records, hardened to prevent tampering and revision. It is the framework for cryptocurrencies like Bitcoin. A Blockchain learning tool would provide a secure and verifiable learning transaction ledger. Its decentralised nature would ensure a learner-, rather than institution-centred, record of achievements that would be difficult to tamper with, enabling parties, such as employers or learning institutions, to revi...
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal et al., 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof et al., 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology. PMID:27014147
Jannicke Madeleine Baalsrud Hauge
The challenge of delivering personalized learning experiences is often increased by the size of classrooms and online learning communities. Serious Games (SGs) are increasingly recognized for their potential to improve education. However, issues related to their development and their level of effectiveness can be seriously affected when they are brought too rapidly into growing online learning communities. Deeper insight into how students are playing is needed to deliver a comprehensive and intelligent learning framework that facilitates better understanding of learners' knowledge, effective assessment of their progress, and continuous evaluation and optimization of the environments in which they learn. This paper discusses the current state of the art and aims to explore the potential of games and learning analytics for scaffolding and supporting the teaching and learning experience. The conceptual model (ecosystem and architecture) discussed in this paper aims to highlight the key considerations that may advance the current state of learning analytics, adaptive learning, and SGs, by leveraging SGs as a suitable medium for gathering data and performing adaptations.
Some properties of the soybean yield components in the M1 generation were investigated. Soybean seeds of the Taichung variety were irradiated with gamma rays from the cobalt source at the Gamma Atomic Energy Research Centre, Yogyakarta. Seven doses of 2, 4, 6, 8, 10, 12, and 14 krad were used. Subsequent growth of the irradiated seed was carried out in the field, and the age of flowering, seedling height, number of pods, number of fertile and infertile pods, and percentage of sterility were recorded. A significant effect on the above variables was observed for gamma radiation doses between 8 and 12 krad. (author)
We organize a data science competition on the Kaggle platform to stimulate both the ML and HEP communities to renew core tracking algorithms in preparation for the next generation of particle detectors at the LHC. In a nutshell: one event has 100,000 3D points; how do we associate the points with 10,000 unknown, approximately helicoidal trajectories while avoiding combinatorial explosion, in just a few seconds? But we do give you 100,000 events to train on. We ran ttbar plus 200 minimum-bias events through ACTS, a simplified (yet accurate) simulation of a generic LHC silicon detector, and wrote out ...
In the ML-1 circuit of the 'Juan Vigon' research centre in Madrid, sodium corrosion tests are being carried out on the austenitic steels DIN 1.4970 (X10NiCrMoTiB1515) and DIN 1.4301 (X5CrNi189) at temperatures between 500 and 700 °C. The exposure time of the samples amounts to 6,000 h so far. Every 1,000 h, the samples were weighed in order to measure corrosion and deposition effects. After 3,000 and 6,000 h, selected samples were destructively inspected. The results are given. (GSC)
Vallila-Rohter, Sofia; Kiran, Swathi
Purpose The purpose of the current study was to explore non-linguistic learning ability in patients with aphasia, examining the impact of stimulus typicality and feedback on success with learning. Method Eighteen patients with aphasia and eight healthy controls participated in this study. All participants completed four computerized, non-linguistic category-learning tasks. We probed learning ability under two methods of instruction: feedback-based (FB) and paired-associate (PA). We also examined the impact of task complexity on learning ability, comparing two stimulus conditions: typical (Typ) and atypical (Atyp). Performance was compared between groups and across conditions. Results Results demonstrated that healthy controls were able to successfully learn categories under all conditions. For our patients with aphasia, two patterns of performance arose. One subgroup of patients was able to maintain learning across task manipulations and conditions. The other subgroup of patients demonstrated a sensitivity to task complexity, learning successfully only in the typical training conditions. Conclusions Results support the hypothesis that impairments of general learning are present in aphasia. Some patients demonstrated the ability to extract category information under complex training conditions, while others learned only under conditions that were simplified and emphasized salient category features. Overall, the typical training condition facilitated learning for all participants. Findings have implications for therapy, which are discussed. PMID:23695914
Moradabadi, Behnaz; Meybodi, Mohammad Reza
Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches predict hidden links from a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected pair, sorts the pairs by their similarity metrics, and labels the pairs with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the network changes over time, using deterministic graphs to model and analyze a social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
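A learning automaton of this kind maintains a probability over its actions and nudges it after each environment response. A minimal sketch of the classic linear reward-penalty (L_R-P) update in Python (the function name, the two-action interpretation, and the learning rates are our illustrative assumptions, not the paper's exact scheme):

```python
def lrp_update(probs, chosen, rewarded, a=0.1, b=0.1):
    """Linear reward-penalty (L_R-P) update for a learning automaton.

    probs: current action probabilities; chosen: index of the action
    taken; rewarded: whether the environment rewarded that action.
    """
    n = len(probs)
    new = probs[:]
    if rewarded:
        for i in range(n):
            if i == chosen:
                new[i] = probs[i] + a * (1.0 - probs[i])  # reinforce
            else:
                new[i] = (1.0 - a) * probs[i]
    else:
        for i in range(n):
            if i == chosen:
                new[i] = (1.0 - b) * probs[i]  # penalize
            else:
                new[i] = b / (n - 1) + (1.0 - b) * probs[i]
    return new

# Two actions: "link exists" (0) vs. "link absent" (1).  Rewarding
# action 0 shifts probability mass toward predicting the link.
p = lrp_update([0.5, 0.5], chosen=0, rewarded=True)
print(p)  # action 0 is now more probable; probabilities still sum to 1
```

Chaining one such automaton per stage 1 through T - 1, as the paper describes, lets the ensemble track how a link's occurrence pattern evolves over time.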
Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard
Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
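The offset-linking idea, metadata pointing into one flat binary file so that any single spectrum can be read without parsing the rest, can be sketched in a few lines of Python. The toy format below is a stand-in for illustration only, not the actual imzML schema:

```python
import struct
from io import BytesIO

def write_spectra(spectra):
    """Pack each spectrum as little-endian float64 values into one
    binary blob and record (offset, length) per spectrum -- the kind
    of index an imzML-style XML metadata file carries alongside the
    binary file."""
    blob, index = BytesIO(), []
    for values in spectra:
        index.append((blob.tell(), len(values)))
        blob.write(struct.pack(f"<{len(values)}d", *values))
    return blob.getvalue(), index

def read_spectrum(blob, offset, length):
    """Seek directly to one spectrum without touching the others."""
    return list(struct.unpack_from(f"<{length}d", blob, offset))

blob, index = write_spectra([[100.5, 250.25], [99.0, 180.0, 310.5]])
offset, length = index[1]
print(read_spectrum(blob, offset, length))  # prints [99.0, 180.0, 310.5]
```

Because each spectrum is addressed by its byte offset, a viewer can pull out the spectra for one image region on demand, which is what makes the large datasets described above manageable.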
Zaslavsky, I.; Valentine, D.; Maidment, D.; Tarboton, D. G.; Whiteaker, T.; Hooper, R.; Kirschtel, D.; Rodriguez, M.
available for spatial and semantics-based queries. The main component of interoperability across hydrologic data repositories in CUAHSI HIS is mapping different repository schemas and semantics to a shared community information model for observations made at stationary points. This information model has been implemented as both a relational schema (ODM) and an XML schema (WaterML). Its main design drivers have been data storage and data interchange needs of hydrology researchers, a series of community reviews of the ODM, and the practices of hydrologic data modeling and presentation adopted by federal agencies as observed in agency online data access applications, such as NWISWeb and USEPA STORET. The goal of the first version of WaterML was to encode the semantics of hydrologic observations discovery and retrieval and implement water data services in a way that is generic across different data providers. In particular, this implied maintaining a single common representation for the key constructs returned to web service calls, related to observations, features of interest, observation procedures, observation series, etc. Another WaterML design consideration was to create (in version 1 of CUAHSI HIS in particular) a fairly rigid, compact, and simple XML schema which was easy to generate and parse, thus creating the least barrier for adoption by hydrologists. Each of the three main request methods in the water data web services - GetSiteInfo, GetVariableInfo, and GetValues - has a corresponding response element in WaterML: SiteResponse, VariableResponse, and TimeSeriesResponse. The strictness and compactness of the first version of WaterML supported its community adoption. Over the last two years, several ODM and WaterML implementations for various platforms have emerged, and several Water Data Services client applications have been created by outside groups in both industry and academia. 
In a significant development, the WaterML specification has been adopted by federal
Wilson, Penne L.
This study was conducted as part of the five year evaluation of the Star Schools grant awarded to Oklahoma State University for the development on online teacher professional development in the Hypothesis Based Learning (HbL) method of science instruction. Participants in this research were five teachers who had completed the online workshop, submitted a lesson plan, and who allowed this researcher and other members of the University of New Mexico Evaluation Team into their classrooms to observe and to determine if the learning of the method from the online HbL workshop had translated into practice. These teachers worked in inner city, suburban, metropolitan, and rural communities in the U.S. Southwest. This study was conducted to determine if teachers learned the HbL method from the online HbL workshop, to examine the relationship of satisfaction to learning, and to determine the elements of the online workshop that led to teacher learning. To measure learning of HbL, three different assessment instruments were used: embedded assessments within the online HbL workshop that gave teachers a scenario and asked them to generate questions to facilitate the HbL process; the analysis of a lesson plan provided by teachers using a science concept that they wished to incorporate in their curriculum using an HbL lesson template provided at the HbL website; and, observations of teachers facilitating the HbL process conducted at three different times during the year that they began the HbL online workshop. To determine if teachers were satisfied with the learning environment, the online HbL workshop, and the product, HbL Method for Teaching Science, and to determine if teachers could identify the elements of the online workshop that led to learning, interviews with the participants were conducted. The research findings were presented in two parts: Part I is an analysis of data provided by the assessment instruments and a content analysis of the transcripts of the teacher
In order to support the provision of broadband wireless communication services over limited and expensive frequency bandwidth, we have to develop a bandwidth-efficient system. Therefore, in this paper we propose a closed-loop MIMO (Multiple-Input Multiple-Output) system using an ML (Maximum Likelihood) detector to optimize capacity and increase system performance. What is especially exciting about the benefits offered by MIMO is that high capacity and performance can be attained without additional frequency-spectrum resources. The central idea of this concept is that transformation matrices can allocate transmitted signal power to suit the channel; furthermore, the product of these matrices forms parallel singular channels. Because the inter-channel correlation is zero, we can design an ML detector to increase system performance. Finally, computer simulations validate that at 0 dB SNR our system can reach an optimal capacity up to 1 bps/Hz and an SER up to 0.2 better than open-loop MIMO.
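The ML detector at the heart of such a receiver simply searches all candidate transmit-symbol vectors for the one minimizing the Euclidean distance to the received vector. A brute-force sketch for a small system, with a hypothetical 2x2 real-valued channel and BPSK symbols rather than the paper's actual simulation setup:

```python
from itertools import product

def ml_detect(H, y, constellation=(-1.0, 1.0)):
    """Brute-force maximum-likelihood detection: return the symbol
    vector s minimizing ||y - H s||^2 over the full constellation."""
    n_tx = len(H[0])

    def distance(s):
        total = 0.0
        for row, yi in zip(H, y):
            est = sum(h * si for h, si in zip(row, s))  # (H s)_i
            total += (yi - est) ** 2
        return total

    return min(product(constellation, repeat=n_tx), key=distance)

# Hypothetical 2x2 real channel; noiseless reception of s = (1, -1).
H = [[0.9, 0.2],
     [0.1, 1.1]]
s = (1.0, -1.0)
y = [sum(h * si for h, si in zip(row, s)) for row in H]  # y = H s
print(ml_detect(H, y))  # recovers (1.0, -1.0)
```

The exhaustive search is exponential in the number of transmit antennas, which is exactly why decomposing the channel into parallel singular channels, as the closed-loop scheme above does, makes ML detection cheap.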
Malathion, a well-known organophosphate pesticide, has been used in agriculture over the last two decades for controlling pests of economically important crops. In the present study, a single bacterium, ML-1, was isolated by a soil-enrichment technique and identified as Bacillus licheniformis on the basis of the 16S rRNA technique. The bacterium was grown in carbon-free minimal salt medium (MSM) and was found to be very efficient in utilizing malathion as the sole source of carbon. Biodegradation experiments were performed in MSM without a carbon source to determine malathion degradation by the selected strain, and the residues of malathion were determined quantitatively using HPLC techniques. Bacillus licheniformis showed very promising results and efficiently consumed malathion as the sole carbon source via malathion carboxylesterase (MCE); about 78% of the malathion was degraded within 5 days. Carboxylesterase activity was determined using crude extract with malathion as substrate, and the residues were determined by HPLC. It was found that the MCE hydrolyzed 87% of the malathion within 96 h of incubation. Characterization of crude MCE revealed that the enzyme is robust toward organic solvents, as it was found to be stable in various concentrations of ethanol and acetonitrile. Similarly, it can work over a wide pH and temperature range. The results of this study highlight the potential of Bacillus licheniformis strain ML-1 as a biodegrader that can be used for the bioremediation of malathion-contaminated soil.
Zeng, Irene Sui Lan; Lumley, Thomas
Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inference and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and Bayesian approaches when there is prior knowledge to be integrated, are also included in the commentary. For completeness, a table of currently available software and packages for omics, drawn from 23 publications, is summarized in the appendix.
Digital Preservation and Deep Infrastructure; Dublin Core Metadata Initiative Progress Report and Workplan for 2002; Video Gaming, Education and Digital Learning Technologies: Relevance and Opportunities; Digital Collections of Real World Objects; The MusArt Music-Retrieval System: An Overview; eML: Taking Mississippi Libraries into the 21st Century.
Granger, Stewart; Dekkers, Makx; Weibel, Stuart L.; Kirriemuir, John; Lensch, Hendrik P. A.; Goesele, Michael; Seidel, Hans-Peter; Birmingham, William; Pardo, Bryan; Meek, Colin; Shifrin, Jonah; Goodvin, Renee; Lippy, Brooke
One opinion piece and five articles in this issue discuss: digital preservation infrastructure; accomplishments and changes in the Dublin Core Metadata Initiative in 2001 and plans for 2002; video gaming and how it relates to digital libraries and learning technologies; overview of a music retrieval system; and the online version of the…
Llorens, Ariadna; Berbegal-Mirabent, Jasmina; Llinàs-Audet, Xavier
Engineering education is facing new challenges to effectively provide the appropriate skills to future engineering professionals according to market demands. This study proposes a model based on active learning methods, which is expected to facilitate the acquisition of the professional skills most highly valued in the information and communications technology (ICT) market. The theoretical foundations of the study are based on the specific literature on active learning methodologies. The Delphi method is used to establish the fit between learning methods and generic skills required by the ICT sector. An innovative proposition is therefore presented that groups the required skills in relation to the teaching method that best develops them. The qualitative research suggests that a combination of project-based learning and the learning contract is sufficient to ensure a satisfactory skills level for this profile of engineers.
Garwood, Janet K
The current longitudinal, descriptive, and correlational study explored which traditional teaching strategies can engage Millennial students and adequately prepare them for the ultimate test of nursing competence: the National Council Licensure Examination. The study comprised a convenience sample of 40 baccalaureate nursing students enrolled in a psychiatric nursing course. The students were exposed to a variety of traditional (e.g., PowerPoint®-guided lectures) and nontraditional (e.g., concept maps, group activities) teaching and learning strategies, and rated their effectiveness. The students' scores on the final examination demonstrated that student learning outcomes met or exceeded national benchmarks. Copyright 2015, SLACK Incorporated.
This paper discusses a research project carried out with 82 final- and third-year undergraduate students learning Research Methods prior to undertaking an undergraduate thesis during the academic years 2010 and 2011. The research had two separate, linked objectives: (a) to develop a Research Methods module that embraces an activity-based approach to learning in a group environment, and (b) to improve engagement by all students. The Research Methods module was previously taught through a traditional lecture-based format. Anecdotally, it was felt that student engagement was poor and learning was limited. It was believed that successful completion of the development of this module would equip students with a deeply learned battery of research skills to take into their further academic and professional careers. Student learning was achieved through completion of a series of activities based on different research methods. In order to encourage student engagement, a wide variety of activities were used, including workshops, brainstorming, mind-mapping, presentations, written submissions, peer critiquing, lectures/seminars, 'speed dating' with more senior students, and self-reflection. Student engagement was measured through a survey based on the U.S. National Survey of Student Engagement (2000). A questionnaire was devised to establish whether, and to what degree, students were engaged in the material that they were learning, while they were learning it. The results of the questionnaire were very encouraging, with between 63% and 96% of students answering positively to a range of questions concerning engagement. The two objectives set were satisfactorily met: the module was successfully developed and continues to be delivered, based upon this new and significant level of student engagement.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
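The generalised cross-correlation pipeline described above can be sketched in a few lines of numpy. This is a simplified illustration: a PHAT-style whitening weight stands in for the paper's regularised ML prefilter, and the function name and signal lengths are arbitrary:

```python
import numpy as np

def gcc_delay(x, y, fs, eps=1e-12):
    """Estimate the delay of y relative to x via generalised cross-correlation.

    Uses PHAT-style spectral whitening as a stand-in for the paper's
    regularised ML prefilter; a positive result means y lags x.
    """
    n = len(x) + len(y)                 # zero-pad so correlation does not wrap around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    S = np.conj(X) * Y                  # cross-spectral density estimate
    cc = np.fft.irfft(S / (np.abs(S) + eps), n)  # whitened cross-correlation
    l = int(np.argmax(np.abs(cc)))
    lag = l if l < len(x) else l - n    # map circular index to signed lag
    return lag / fs
```

Given the delay estimate and the propagation speed in each pipe segment, the leak position then follows from the formula the paper provides for heterogeneous pipes.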
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
Ariana, Armin; Amin, Moein; Pakneshan, Sahar; Dolan-Evans, Elliot; Lam, Alfred K
Dental students require a basic ability to explain and apply general principles of pathology to systemic, dental, and oral pathology. Although there have been recent advances in electronic and online resources, the academic effectiveness of using self-directed e-learning tools in pathology courses for dental students is unclear. The aim of this study was to determine if blended learning combining e-learning with traditional learning methods of lectures and tutorials would improve students' scores and satisfaction over those who experienced traditional learning alone. Two consecutive cohorts of Bachelor of Dentistry and Oral Health students taking the general pathology course at Griffith University in Australia were compared. The control cohort experienced traditional methods only, while members of the study cohort were also offered self-directed learning materials including online resources and online microscopy classes. Final assessments for the course were used to compare the differences in effectiveness of the intervention, and students' satisfaction with the teaching format was evaluated using questionnaires. On the final course assessments, students in the study cohort had significantly higher scores than students in the control cohort (p…). E-learning tools such as virtual microscopy and interactive online resources for delivering pathology instruction can be an effective supplement for developing dental students' competence, confidence, and satisfaction.
Fernando, Sithara Y. J. N.; Marikar, Faiz M. M. T.
Teaching involves the transmission of knowledge, and evidence for the superiority of guided transmission is explained in the context of our current knowledge, but teaching is also much more than that. In this study we examined General Sir John Kotelawala Defence University's cadet and civilian students' responses to constructivist learning theory and participatory…
The demands in higher education are on the rise. Charged with teaching more content, increased class sizes and engaging students, educators face numerous challenges. In design education, educators are often torn between the teaching of technology and the teaching of theory. Learning the formal concepts of hierarchy, contrast and space provide the…
Vidnerová, Petra; Neruda, Roman
Submitted 25 January 2018. ISSN 1530-437X. R&D Projects: GA ČR GA15-18108S. Other grants: GA MŠk (CZ) LM2015042. Institutional support: RVO:67985807. Keywords: machine learning; sensors; air pollution; deep neural networks; regularization networks. Subject RIV: IN - Informatics, Computer Science. Impact factor: 2.512 (2016).
Moorthy, N. S.Hari Narayana; Kumar, Surendra; Poongavanam, Vasanthanathan
Accurate assessment of the carcinogenicity of chemicals has become a serious challenge for health assessment authorities around the globe, owing not only to the increased cost of experiments but also to the ethical issues of using animal models. In this study, we provide machine learning...
Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao
In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank. Learning to rank is a family of state-of-the-art information retrieval techniques that has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of abundant terminologies by constructing ranking models that focus not only on retrieving the most relevant documents but also on diversifying the search results to increase the completeness of the resulting list for a given query. For model training, we propose two novel document labeling strategies and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
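The pairwise flavour of learning to rank used in frameworks like this can be sketched with numpy. This is a RankNet-style logistic update on relevant/irrelevant document pairs, a simplified illustration of the general technique rather than the paper's exact models; in the framework above, each feature would typically be the score of one traditional retrieval model:

```python
import numpy as np

def train_pairwise_ranker(X_rel, X_irr, lr=0.1, epochs=200):
    """Learn w so that w @ x scores relevant documents above irrelevant ones
    (pairwise logistic loss, RankNet-style gradient update)."""
    w = np.zeros(X_rel.shape[1])
    for _ in range(epochs):
        for xr in X_rel:
            for xi in X_irr:
                diff = xr - xi
                p = 1.0 / (1.0 + np.exp(-(w @ diff)))  # P(rel ranked above irr)
                w += lr * (1.0 - p) * diff             # ascend the log-likelihood
    return w

def rank(w, X):
    """Return document indices sorted best-first by the learned score."""
    return np.argsort(-(X @ w))
```

A diversification step, as in the proposed framework, would then re-rank this list to cover different aspects of the query rather than returning near-duplicates.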
This paper describes an alternative approachto the teaching of concepts related to theEnglish curriculum, namely literature, writing summaries and grammar. It combines ashift in the theory of school learning development by a combination with a psychologicaltheory of development. The research was conducted over the ...
Currin-Percival, Mary; Johnson, Martin
We investigate differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project using a random-digit-dial telephone survey. Using a quasi-experimental design and data obtained from pretests and posttests in two public…
Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to empirically determine the signatures of this mechanism in solar image data and use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and a Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine learning (e.g., Ahmed et al., 2013; Nishizuka et al., 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.
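Flare-forecasting classifiers in this literature (e.g., Bobra and Couvidat, 2015) are typically scored with the True Skill Statistic rather than raw accuracy, because flares are rare and accuracy rewards always predicting "no flare". A minimal sketch of the metric:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate - false alarm rate, from a 2x2 contingency table.

    1 is a perfect forecast, 0 is no skill, and the score is
    insensitive to the flare / no-flare class imbalance.
    """
    return tp / (tp + fn) - fp / (fp + tn)
```

For example, a forecaster that catches 80 of 100 flares while issuing 10 false alarms over 100 quiet periods scores TSS = 0.8 - 0.1 = 0.7.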
海老澤, 賢史; Ebisawa, Satoshi
The educational methods with a Learning Management System (LMS) are described, as applied to two specialized courses for engineering education and to research guidance for graduation work at Niigata Institute of Technology. According to the analysis of LMS usage for graduation work, the LMS has had the effect that learning time outside class hours is secured and the convenience of students in learning is enhanced. In the specialized courses, the rate of utilization of LMS has depen...
Background and purpose: This paper analyzes the interest of potential users in learning in the field of currency trading or foreign exchange (forex, FX). The purpose of our article is (a) to present currency trading, (b) to present different options, methods and learning approaches to education in forex, and (c) to present the research results revealing the interest of potential users in learning in the field of currency trading.
Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong
Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
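The stacking idea behind the multi-model ensemble above can be illustrated in miniature with numpy: base learners each see only part of the signal, and a meta-learner is trained on their output probabilities. The data are a synthetic stand-in for RNA-seq expression, the base learners are simple logistic regressions rather than the paper's five classifiers, and the meta-learner here is logistic rather than a deep network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain gradient-ascent logistic regression with a bias term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        w += lr * Xb.T @ (y - sigmoid(Xb @ w)) / len(y)
    return w

def proba(w, X):
    return sigmoid(np.hstack([X, np.ones((len(X), 1))]) @ w)

rng = np.random.default_rng(42)
# synthetic stand-in for expression data: the label depends on two features jointly
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# base learners: each sees only one feature (stand-ins for diverse classifiers)
bases = [train_logreg(X[:, [j]], y) for j in range(2)]
# the meta-learner stacks the base learners' output probabilities
meta_X = np.column_stack([proba(w, X[:, [j]]) for j, w in enumerate(bases)])
meta = train_logreg(meta_X, y)
ensemble_acc = float(np.mean((proba(meta, meta_X) > 0.5) == (y > 0.5)))
```

Neither base learner alone can recover the joint decision boundary, but the meta-learner combining their outputs can, which is the advantage the ensemble strategy exploits.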
With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods for defining models of churn prediction and to apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
The paper explores geophysical methods of well surveying, as well as their role in the development of Kazakhstan's uranium deposit mining efforts. An analysis of the existing methods for solving the problem of interpreting geophysical data using machine learning in petroleum geophysics is made. The requirements and possible applications of machine learning methods in regard to the uranium deposits of Kazakhstan are formulated in the paper.
Bendinskaitė I. Perspectives on applying traditional and innovative teaching and learning methods in nurses' continuing education, master's thesis / supervisor Assoc. Prof. O. Riklikienė; Department of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015. – p. 92. The purpose of this study was to investigate the perspective of traditional and innovative teaching and learning methods in nurses' continuing education. Material and methods. In a period fro...
Celebi, Ismet; Dragoset, Robert A; Olsen, Karen J; Schaefer, Reinhold; Kramer, Gary W
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific meta-data" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML-a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.
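The idea of keeping units and metadata attached to the analytical data can be sketched with Python's standard library. The element and attribute names below are simplified illustrations of the approach, not the actual AnIML or UnitsML schemas:

```python
import xml.etree.ElementTree as ET

# Illustrative only: simplified element names, not the real AnIML/UnitsML schemas.
step = ET.Element("ExperimentStep", {"technique": "UV-Vis"})
series = ET.SubElement(step, "Series", {"name": "absorbance"})
# the unit travels with the data instead of living in a separate notebook
ET.SubElement(series, "Unit", {"label": "mAU"})
for value in ("0.12", "0.45", "0.78"):
    ET.SubElement(series, "Value").text = value

xml_text = ET.tostring(step, encoding="unicode")
```

Because the tags describe what each value is and what unit it carries, a spectrum serialized this way remains interpretable even when separated from the original instrument software.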
Davis, Eric J.; Pauls, Steve; Dick, Jonathan
Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…
Esteban-Sánchez, Natalia; Pizarro, Celeste; Velázquez-Iturbide, J. Ángel
An evaluation of the educational effectiveness of a didactic method for the active learning of greedy algorithms is presented. The didactic method sets students structured-inquiry challenges to be addressed with a specific experimental method, supported by the interactive system GreedEx. This didactic method has been refined over several years of…
Gillespie, Suzanne M; Olsan, Tobie; Liebel, Dianne; Cai, Xueya; Stewart, Reginald; Katz, Paul R; Karuza, Jurgis
To describe the development of a nursing home (NH) quality improvement learning collaborative (QILC) that provides Lean Six Sigma (LSS) training and infrastructure support for quality assurance performance improvement change efforts. Case report. Twenty-seven NHs located in the Greater Rochester, NY area. The learning collaborative approach in which interprofessional teams from different NHs work together to improve common clinical and organizational processes by sharing experiences and evidence-based practices to achieve measurable changes in resident outcomes and system efficiencies. NH participation, curriculum design, LSS projects. Over 6 years, 27 NHs from urban and rural settings joined the QILC as organizational members and sponsored 47 interprofessional teams to learn LSS techniques and tools, and to implement quality improvement projects. NHs, in both urban and rural settings, can benefit from participation in QILCs and are able to learn and apply LSS tools in their team-based quality improvement efforts. Published by Elsevier Inc.
The purpose of this research was to determine the effect of applying WhatsApp Messenger within the Group Investigation (GI) method on learning achievement. The method used was experimental research with a control-group pretest-posttest design. The sampling procedure used the purposive sampling technique, yielding 17 students as a control group and 17 students as an experimental group; the sample consisted of students in the Electrical Engineering Education Study Program. The experimental group used the GI method integrated with WhatsApp Messenger, while the control group used the lecture method without social media integration. Data were collected through observation, documentation, interviews, questionnaires, and tests. A t-test was used to compare the control and experimental groups' learning outcomes at an alpha level of 0.05. The results showed differences between the experimental group and the control group: the learning outcomes of the experimental group were higher than those of the control group. The learning was designed with the stages of starting, grouping, planning, presenting, organizing, investigating, evaluating, and ending. Integrating WhatsApp with the group investigation method fostered positive communication between students and lecturer; discussion proceeded well, students' knowledge could surface within the group, and information spread evenly and quickly.
Wehner, Frank; Lorz, Alexander
This paper presents the use of an XML grammar for two complementary projects--CHAMELEON (Cooperative Hypermedia Adaptive MultimEdia Learning Objects) and EIT (Enabling Informal Teamwork). Areas of applications are modular courseware documents and the collaborative authoring process of didactical units. A number of requirements for a suitable…
Taufik, Nurshahira Alwani Mohd; Maat, Siti Mistima
Mathematics education is one of the branches to be mastered by students to help them compete with the upcoming challenges that are very challenging. As such, all parties should work together to help increase student achievement in Mathematics education in line with the Malaysian Education Blueprint (MEB) 2010-2025. Teaching methods play a very important role in attracting and fostering student understanding and interested in learning Mathematics. Therefore, this study was conducted to identify the perceptions of teachers in carrying out cooperative methods in the teaching and learning of mathematics. Participants of this study involving 4 teachers who teach Mathematics in primary schools around the state of Negeri Sembilan. Interviews are used as a method for gathering data. The findings indicate that cooperative methods help increasing interest and understanding in the teaching and learning of mathematics. In conclusion, the teaching methods affect the interest and understanding of students in the learning of Mathematics in the classroom.
Kidziński, Łukasz; Mohanty, Sharada Prasanna; Ong, Carmichael; Huang, Zhewei; Zhou, Shuchang; Pechenko, Anton; Stelmaszczyk, Adam; Jarosik, Piotr; Pavlov, Mikhail; Kolesnikov, Sergey; Plis, Sergey; Chen, Zhibo; Zhang, Zhizheng; Chen, Jiale; Shi, Jun
In the NIPS 2017 Learning to Run challenge, participants were tasked with building a controller for a musculoskeletal model to make it run as fast as possible through an obstacle course. Top participants were invited to describe their algorithms. In this work, we present eight solutions that used deep reinforcement learning approaches, based on algorithms such as Deep Deterministic Policy Gradient, Proximal Policy Optimization, and Trust Region Policy Optimization. Many solutions use similar ...
Blended learning is a teaching technique that utilizes face-to-face teaching and online or technology-based practice in which the learner has the ability to exert some level of control over the pace, place, path, or time of learning. Schools that employ this method of teaching often demonstrate larger gains than traditional face-to-face programs…
In this study, the effect of the learning together technique, which is one of the cooperative learning methods, on the development of the listening comprehension and listening skills of secondary school eighth grade students was investigated. Regarding the purpose of the research, experimental and control groups consisting of 75 students from…
Discusses methods for analyzing case studies of failures of technological systems. Describes two distance learning courses that compare standard models of failure and success with the actuality of given scenarios. Provides teaching and learning materials and information sources for application to aspects of design, manufacture, inspection, use,…
Estébanez, Raquel Pérez
In the way of continuous improvement in teaching methods this paper explores the effects of Cooperative Learning (CL) against Traditional Learning (TL) in academic performance of students in higher education in two groups of the first course of Computer Science Degree at the university. The empirical study was conducted through an analysis of…
Reimann, Peter; Markauskaite, Lina; Bannert, Maria
This paper discusses the fundamental question of how data-intensive e-research methods could contribute to the development of learning theories. Using methodological developments in research on self-regulated learning as an example, it argues that current applications of data-driven analytical techniques, such as educational data mining and its…
Suprasegmental features are of paramount importance in spoken English. Yet, these pronunciation features are marginalised in EFL/ESL teaching-learning. This article reported a study that was aimed at improving the students' mastery of English suprasegmental features through the use of reflective learning method. The study adopted Kemmis and…
Vargas-Vargas, Manuel; Mondejar-Jimenez, Jose; Santamaria, Maria-Letica Meseguer; Alfaro-Navarro, Jose-Luis; Fernandez-Aviles, Gema
This document sets out a novel teaching methodology as used in subjects with statistical content, traditionally regarded by students as "difficult". In a virtual learning environment, instructional techniques little used in mathematical courses were employed, such as the Jigsaw cooperative learning method, which had to be adapted to the…
Martin, John F.
This mixed methods study exploring student outcomes of service learning experiences is inter-disciplinary, near the intersection of higher education research, moral development, and nursing. The specific problem examined in this study is that service learning among university students is utilized by educators, but largely without a full…
Toland, John; Boyle, Christopher
This study involves the use of methods derived from cognitive behavioral therapy (CBT) to change the attributions for success and failure of school children with regard to learning. Children with learning difficulties and/or motivational and self-esteem difficulties (n = 29) were identified by their schools. The children then took part in twelve…
Hunt, Emily M.; Lockwood-Cooke, Pamela; Kelley, Judy
Problem-Based Learning (PBL) is a problem-centered teaching method with exciting potential in engineering education for motivating and enhancing student learning. Implementation of PBL in engineering education has the potential to bridge the gap between theory and practice. Two common problems are encountered when attempting to integrate PBL into…
Background: Several studies in the UK have suggested that women with learning disabilities may be less likely to receive cervical screening tests, and a previous local study had found that GPs considered screening unnecessary for women with learning disabilities. This study set out to ascertain whether women with learning disabilities are more likely to be ceased from a cervical screening programme than women without, and to examine the reasons given for ceasing women with learning disabilities. It was carried out in Bury, Heywood-and-Middleton and Rochdale. Methods: Using retrospective cohort study methods, women with learning disabilities were identified by Read code, and their cervical screening records were compared with the Call-and-Recall records of women without learning disabilities in order to examine their screening histories. Analysis was carried out using case-control methods at a 1:2 ratio (women with learning disabilities to women without learning disabilities), calculating odds ratios. Results: 267 women's records were compared with the records of 534 women without learning disabilities. Women with learning disabilities had an odds ratio (OR) of 0.48 (confidence interval (CI) 0.38 to 0.58; χ²: 72.227; p…; χ²: 24.236; p…; χ²: 286.341; p…). Conclusion: The reasons given for ceasing and/or not screening suggest that merely being coded as having a learning disability is not the sole reason for these actions. There are training needs among smear takers regarding appropriate reasons not to screen and providing screening for women with learning disabilities.
Branney, Jonathan; Priego-Hernández, Jacqueline
It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education due to the promotion of individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. A mixed methods observational study was used. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results from which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain the 76% exhibiting a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let down their team, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice. Exam scores on the question related to Team-based Learning-taught material were comparable to those
Ivana Đurđević Babić
Academic motivation is closely related to academic performance. For educators, it is equally important to detect early students with a lack of academic motivation as it is to detect those with a high level of academic motivation. In endeavouring to develop a classification model for predicting student academic motivation based on their behaviour in learning management system (LMS) courses, this paper intends to establish links between the predicted student academic motivation and their behaviour in the LMS course. Students from all years at the Faculty of Education in Osijek participated in this research. Three machine learning classifiers (neural networks, decision trees, and support vector machines) were used. To establish whether a significant difference in the performance of models exists, a t-test of the difference in proportions was used. Although all classifiers were successful, the neural network model was shown to be the most successful in detecting student academic motivation based on behaviour in the LMS course.
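The significance test used above, a test on the difference of two proportions, can be sketched in a few lines. The accuracy figures below are hypothetical placeholders, not numbers from the study:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference of two proportions,
    e.g. two classifiers' accuracies on n1 and n2 test cases."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# hypothetical: neural net 85/100 correct vs. decision tree 75/100 correct
z = two_proportion_z(0.85, 100, 0.75, 100)
```

A |z| above roughly 1.96 would indicate a difference significant at the 5% level under the usual normal approximation.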
Kamra, Ashish; Ber, Elisa
Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
How teaching and learning take place in classrooms can easily be seen from the way classrooms are set up: students' desks and chairs are arranged in rows while teachers' desks are up front. Yet, why must teachers be the ones who lecture; why can't it be students? Would it be better or worse if teachers were the receivers and the students were the…
Policy flows are not quantifiable and calculating processes but part of the uneven movement of ideas and experiences that involves power and personalities. Processes of learning and policy circulation have thus proven difficult to study especially as the exchanges taking place between actors and localities rarely lead directly to uptake. This paper outlines a conceptual and methodological framework for conducting policy mobilities research by attending to the plethora of ordinary practices – ...
Sultan, A. Z.; Hamzah, N.; Rusdi, M.
A simulation-based concept attainment method was implemented to increase students' interest in the subject Engineering Mechanics in the second semester of academic year 2016/2017 in the Manufacturing Engineering Program, Department of Mechanical Engineering, PNUP. The result of implementing this learning method shows an increase in the students' interest in the lecture material, which is summarized in the form of interactive simulation CDs and teaching materials in the form of printed books and electronic books. From the implementation of this simulation-based concept attainment method, it is noted that student participation in presentations and discussions, as well as submission of individual assignments, increased significantly. With this learning method the average student participation reached 89%, whereas before its application it reached an average of only 76%. Under the previous learning method, exam achievement of an A grade was under 5% and of a D grade above 8%. After implementation of the new learning method (the simulation-based concept attainment method), A-grade achievement reached more than 30% and D-grade fell below 1%.
Jeong, Yong Sun; Kim, Jin Sun
A blended learning program can be a useful learning strategy to improve the quality of fever and fever management education for pediatric nurses. This study compared the effects of a blended and a face-to-face learning program on pediatric nurses' childhood fever management, using the theory of planned behavior. A nonequivalent control group pretest-posttest design was used. A fever management education program using blended learning (combining face-to-face and online learning components) was offered to 30 pediatric nurses, and 29 pediatric nurses received face-to-face education. Learning outcomes did not significantly differ between the two groups. However, learners' satisfaction was higher for the blended learning program than for the face-to-face learning program. The blended learning pediatric fever management program was as effective as a traditional face-to-face learning program. Therefore, a blended pediatric fever management learning program could be a useful and flexible learning method for pediatric nurses.
Roberts, Fiona; Cooper, Kay
The objective of this review is to identify if high fidelity simulated learning methods are effective in enhancing clinical/practical skills compared to usual, low fidelity simulated learning methods in pre-registration physiotherapy education.
This paper discusses the application of computational linguistics in a machine learning (ML) system for the processing of garden path sentences. ML is closely related to artificial intelligence and linguistic cognition. The rapid and efficient processing of complex structures is an effective method to test the system. By means of parsing garden path sentences, we draw the conclusion that the integration of theoretical and statistical methods is helpful for the development of the ML system.
Hommes, J; Van den Bossche, P; de Grave, W; Bos, G; Schuwirth, L; Scherpbier, A
Little is known about how time influences collaborative learning groups in medical education. Therefore a thorough exploration of the development of learning processes over time was undertaken in an undergraduate PBL curriculum over 18 months. A mixed-methods triangulation design was used. First, a quantitative study measured how various learning processes developed within and over three periods in the first 1.5 study years of an undergraduate curriculum. Next, a qualitative study using semi-structured individual interviews focused on the detailed development of group processes driving collaborative learning during one period in seven tutorial groups. The hierarchical multilevel analyses of the quantitative data showed that a varying combination of group processes developed within and over the three observed periods. The qualitative study illustrated development in psychological safety, interdependence, potency, group learning behaviour, and social and task cohesion. Two new processes emerged: 'transactive memory' and 'convergence in mental models'. The results indicate that groups are dynamic social systems with numerous contextual influences. Future research should thus include time as an important influence on collaborative learning. Practical implications are discussed.
Dr. Ismail Ipek
The purpose of this paper is to provide basic dimensions for rapid training development in e-learning courses in education and business. Principally, it starts with defining task analysis and how to select tasks for analysis and task analysis methods for instructional design. To do this, first, learning and instructional technologies as visions of the future were discussed. Second, the importance of task analysis methods in rapid e-learning was considered, with learning technologies as asynchronous and synchronous e-learning development. Finally, rapid instructional design concepts and e-learning design strategies were defined and clarified with examples, that is, all steps for effective task analysis and rapid training development techniques based on learning and instructional design approaches were discussed, such as m-learning and other delivery systems. As a result, the concept of task analysis, rapid e-learning development strategies and the essentials of online course design were discussed, alongside learner interface design features for learners and designers.
Garcia Penna, Caridad M.; Montes de Oca Porto, Yanet; Salomon Izquierdo, Suslebys
Paracetamol is an effective analgesic and antipyretic drug of the non-steroidal anti-inflammatory drug group. Paracetamol oral drops are indicated for use in infant population aged up to 5 years to relieve fever, headache, toothache and symptomatic relief of common cold. To validate two analytical methods for the quality control and the stability study and to study the stability of 100 mg/ml Paracetamol oral drops made in Cuba
Chami, Muhammad; Seemüller, Holger; Voos, Holger
The engineering discipline of mechatronics is one of the main innovation leaders in industry today. With the need for an optimal synergetic integration of the involved disciplines, the engineering process of mechatronic systems is faced with increasing complexity and the interdisciplinary nature of these systems. New methods and techniques have to be developed to deal with these challenges. This document presents an approach for a SysML-based integration framework that s...
Howe, Tsu-Hsin; Sheu, Ching-Fan; Hinojosa, Jim
Cooperative learning provides an important vehicle for active learning, as knowledge is socially constructed through interaction with others. This study investigated the effect of cooperative learning on occupational therapy (OT) theory knowledge attainment in professional-level OT students in a classroom environment. Using a pre- and post-test group design, 24 first-year, entry-level OT students participated while taking a theory course in their second semester of the program. Cooperative learning methods were implemented via in-class group assignments. The students were asked to complete two questionnaires regarding their attitudes toward group environments and their perception toward group learning before and after the semester. MANCOVA was used to examine changes in attitudes and perceived learning among groups. Students' summary sheets for each in-class assignment and course evaluations were collected for content analysis. Results indicated significant changes in students' attitude toward working in small groups regardless of their prior group experience.
The «PBL working environment» is a virtual environment developed in the framework of the SCENE project (profeSsional development for an effeCtive PBL approach: a practical experiENce through ICT-enabled lEarning solution), co-funded by the European Lifelong Learning Programme. The «PBL working environment» is devoted to preparing headmasters and teachers of secondary and vocational schools to use Problem-Based Learning (PBL) pedagogy effectively. It is a student-centered pedagogy where learners are «actively» engaged in real-world problems to solve or challenges to meet. Students develop problem-solving, self-directed learning and team skills. The «PBL working environment» is a virtual tool including three main elements: an e-learning platform, a virtual facilitator and a PBL repository. Teachers, trainers and headmasters/school managers learn the PBL pedagogy by attending an on-line course (e-learning platform) delivered through the «inductive method». It allows learners to experience the PBL approach by practicing it stage by stage, and then to turn practice into theory by abstracting their experience to build a theoretical understanding. Since generating the proper scenario is the most critical aspect of PBL, after benefiting from the on-line course users can draw on a further support: the Virtual Facilitator. It provides tips and hints on how to correctly design a problem scenario, and asks questions to collect data on the user's specific needs. The Virtual Facilitator is able to provide one or more suitable examples which match the teacher's or trainer's need as closely as possible. Finally, users can share problem scenarios and projects of different subjects of study and with different characteristics, uploaded to and downloaded from the PBL repository.
Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi
Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods: Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods. TotalBoost performed slightly worse than the other methods. The running times differed between methods: the Extreme Learning Machine was always the fastest algorithm, while, as the sample size increases, the Radial Basis Function requires a long imputation time. The tested methods can be an alternative for imputation of un-typed SNPs at low missing rates of data. However, it is recommended that other machine learning methods also be considered for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
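As a toy illustration of one of the eight methods compared, a k-nearest-neighbors imputer can fill in a missing SNP by majority vote over the most similar individuals. The 0/1/2 allele-count coding and the tiny dataset below are illustrative assumptions, not the paper's data:

```python
def knn_impute(genotypes, target_idx, snp_idx, k=3):
    """Impute a missing genotype (0/1/2 allele-count coding, None = un-typed)
    by majority vote of the k individuals most similar at the observed SNPs.
    Assumes at least one of the k neighbours is typed at snp_idx."""
    target = genotypes[target_idx]

    def dist(other):
        # Manhattan distance over SNPs observed in both individuals,
        # excluding the SNP being imputed
        return sum(abs(a - b)
                   for j, (a, b) in enumerate(zip(target, other))
                   if j != snp_idx and a is not None and b is not None)

    neighbours = sorted((i for i in range(len(genotypes)) if i != target_idx),
                        key=lambda i: dist(genotypes[i]))[:k]
    votes = [genotypes[i][snp_idx] for i in neighbours
             if genotypes[i][snp_idx] is not None]
    return max(set(votes), key=votes.count)

# hypothetical mini-dataset: the last individual is un-typed at SNP index 2
trios = [
    [0, 1, 2, 0],
    [0, 1, 2, 0],
    [0, 1, 2, 0],
    [2, 0, 0, 2],
    [0, 1, None, 0],
]
imputed = knn_impute(trios, target_idx=4, snp_idx=2)
```

Here the three nearest individuals all carry genotype 2 at the missing SNP, so the vote imputes 2.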
Stirling, Bridget V
Learning style preference impacts how well groups of students respond to their curricula. Faculty have many choices in the methods for delivering nursing content, as well as for assessing students. The purpose was to develop knowledge about how faculty delivered curricular content, and then to consider these findings in the context of the students' learning style preferences. Following an in-service on teaching and learning styles, faculty completed surveys on their methods of teaching and the proportion of time teaching using each learning style (visual, aural, read/write and kinesthetic). This study took place at the College of Nursing of a large all-female university in Saudi Arabia. Twenty-four female nursing faculty volunteered to participate in the project. A cross-sectional design was used. Faculty reported teaching mostly using methods that were kinesthetic and visual, although lecture was also popular (aural). Students preferred kinesthetic and aural learning methods. Read/write was the least preferred by students and the least used method of teaching by faculty. Faculty used visual methods about one third of the time, although they were not preferred by the students. Students' preferred learning style (kinesthetic) was the method most used by faculty. Copyright © 2017 Elsevier Ltd. All rights reserved.
Breckenridge, Jonathan T.; Johnson, Stephen B.
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper describes the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). The two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required by the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement GFT in it. Proposed method of solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
David J. Lary
Learning incorporates a broad range of complex procedures. Machine learning (ML) is a subdivision of artificial intelligence based on the biological learning process. The ML approach deals with the design of algorithms to learn from machine-readable data. ML covers main domains such as data mining, difficult-to-program applications, and software applications. It is a collection of a variety of algorithms (e.g. neural networks, support vector machines, self-organizing maps, decision trees, random forests, case-based reasoning, genetic programming, etc.) that can provide multivariate, nonlinear, nonparametric regression or classification. The modeling capabilities of ML-based methods have resulted in their extensive applications in science and engineering. Herein, the role of ML as an effective approach for solving problems in geosciences and remote sensing will be highlighted. The unique features of some of the ML techniques will be outlined, with specific attention to the genetic programming paradigm. Furthermore, nonparametric regression and classification illustrative examples are presented to demonstrate the efficiency of ML for tackling geosciences and remote sensing problems.
Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P
Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence to prove the use of the precise timing of spikes for information coding. However, the exact learning mechanism in which the neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach and ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe.
House, Lisa; Sterns, James A.
This document contains the PowerPoint presentation given by the authors at the 2002 WCC-72 meetings, regarding what agricultural economics Ph.D students are learning about agribusiness research methods and subject areas.
Ayscough, P. B.; And Others
Discusses the application of computer-assisted learning methods to the interpretation of infrared, nuclear magnetic resonance, and mass spectra; and outlines extensions into the area of integrated spectroscopy. (Author/CMV)
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can reduce the dimension and abstract the information of the reviews quickly. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We conducted several experiments on five sentiment classification datasets, and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.
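The sample-selection step at the heart of active learning can be sketched generically. This is a common margin-of-uncertainty heuristic, not necessarily the paper's exact criterion:

```python
def select_uncertain(probs, n):
    """Active-learning query selection: return indices of the n samples
    whose predicted positive-class probability is closest to 0.5,
    i.e. those the classifier is least certain about."""
    return sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))[:n]

# hypothetical classifier outputs for four unlabeled reviews;
# the two most ambiguous ones are chosen for labeling
picked = select_uncertain([0.95, 0.51, 0.10, 0.48], 2)
```

The labels obtained for the selected samples are then added to the training set, and the model is retrained, iterating until the labeling budget is exhausted.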
Araya, S. N.; Ghezzehei, T. A.
Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and pedotransfer functions (PTFs) are often used to estimate it instead. Despite much progress over the years, generic PTFs that estimate hydraulic conductivity generally perform poorly. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils (the USKSAT and Florida Soil Characterization databases). We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, percent organic carbon, coefficient of uniformity and values derived from water retention characteristics. Model performance was consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine learning based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
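A minimal gradient-boosting sketch with depth-one trees (decision stumps) conveys the idea behind the best-performing model class: each round fits a stump to the current residuals and adds it with a shrinkage factor. The data here are synthetic and the implementation is deliberately simplified:

```python
def fit_stump(X, y):
    """Find the single-feature threshold split minimizing squared error;
    return a predictor emitting the mean of each side."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((yi - lm) ** 2 for yi in left)
                   + sum((yi - rm) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, f, t, lm, rm)
    _, f, t, lm, rm = best
    return lambda row: lm if row[f] <= t else rm

def boost(X, y, rounds=30, lr=0.5):
    """Gradient boosting for squared loss: repeatedly fit stumps to residuals."""
    base = sum(y) / len(y)
    stumps, pred = [], [base] * len(y)
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(X, resid)
        stumps.append(s)
        pred = [pi + lr * s(row) for pi, row in zip(pred, X)]
    return lambda row: base + lr * sum(s(row) for s in stumps)

# synthetic one-feature example (e.g. think of the feature as log D10)
model = boost([[1], [2], [3], [4]], [1.0, 1.0, 3.0, 3.0])
```

Real implementations add regularization, subsampling, and deeper trees, and derive feature importances from how often and how profitably each feature is chosen for splits.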
Stanton, H. E.
Discusses the Lozanov Method of teaching foreign languages developed by Lozanov in Bulgaria. This method (also known as Suggestopedia) uses various techniques such as physical relaxation exercises, mental concentration, classical music, and ego-enhancing suggestions. (CFM)
Rankin, Jean; Brown, Val
Traditional ways of teaching in Higher Education are enhanced with adult-based approaches to learning within the curriculum. Adult-based learning enables students to take ownership of their own learning, working independently using a holistic approach. Introducing creative activities prompts students to think in alternative ways to the traditional learning models. The study aimed to explore student midwives' perceptions of a creative teaching method as a learning strategy. A qualitative design was used, adopting a phenomenological approach to gain the lived experience of students within this learning culture. Purposive sampling was used to recruit student midwives (n=30). Individual interviews were conducted using semi-structured interviews with open-ended questions to gain subjective information. Data were transcribed and analyzed into useful and meaningful themes and emerging themes using Colaizzi's framework for analyzing qualitative data in a logical and systematic way. Over 500 meaningful statements were identified from the transcripts. Three key themes strongly emerged from the transcriptions: 'meaningful learning', 'inspired to learn and achieve', and 'being connected'. A deep meaningful learning experience was found to be authentic in the context of theory and practice. Students were inspired to learn and achieve and positively highlighted the safe learning environment. The abilities of the facilitators were viewed positively in supporting student learning. This approach strengthened the relationships and social engagement with others in the peer group and with the facilitators. On a less positive note, tensions and conflict were noted in group work, along with indirect negative comments about the approach from the teaching team. Incorporating creative teaching activities is a positive addition to the healthcare curriculum. Creativity is clearly an asset to the range of contemporary learning strategies. In doing so, higher education will continue to keep
Hasanpour-Dehkordi, Ali; Solati, Kamal
Communication skills training, responsibility, respect, and self-awareness are important indexes of changing learning behaviours in modern approaches. The aim of this study was to investigate the efficacy of three learning approaches, collaborative, context-based learning (CBL), and traditional, on the learning, attitude, and behaviour of undergraduate nursing students. This study was a clinical trial with a pretest-posttest control group design. The participants were senior nursing students. The samples were randomly assigned to three groups: CBL, collaborative, and traditional. To gather data, a standard questionnaire of students' behaviour and attitude was administered prior to and after the intervention. Also, the rate of learning was investigated by a researcher-developed questionnaire prior to and after the intervention in the three groups. In the CBL and collaborative training groups, the mean score of behaviour and attitude increased after the intervention, but no significant association was obtained between the mean scores of behaviour and attitude prior to and after the intervention in the traditional group. However, the mean learning score increased significantly in the CBL, collaborative, and traditional groups after the study in comparison to before the study. Both the CBL and collaborative approaches were useful in terms of increased respect, self-awareness, self-evaluation, communication skills and responsibility, as well as increased motivation and learning scores, in comparison to the traditional method.
Madson, Laura; Trafimow, David; Gray, Tara; Gutowitz, Michael
What makes some faculty members more likely to use interactive engagement methods than others? We use the theory of reasoned action to predict faculty members' use of interactive engagement methods. Results indicate that faculty members' beliefs about the personal positive consequences of using these methods (e.g., "Using interactive…
Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco
Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep learning neural network and support vector machine algorithms were applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed a much higher accuracy and better precision in recognizing circular gas bubbles.
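For intuition about the geometric baselines mentioned above, a minimal algebraic (Kåsa) least-squares circle fit recovers a bubble's center and radius from boundary points. This is an illustrative sketch with made-up points, not the authors' pipeline:

```python
def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_circle(pts):
    """Kåsa algebraic circle fit: solve x^2 + y^2 + Dx + Ey + F = 0
    in the least-squares sense via the 3x3 normal equations (Cramer's rule).
    Assumes at least three non-collinear points."""
    A = [[x, y, 1.0] for x, y in pts]
    b = [-(x * x + y * y) for x, y in pts]
    M = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(3)]
         for r in range(3)]
    v = [sum(A[i][r] * b[i] for i in range(len(A))) for r in range(3)]
    d = det3(M)
    z = []
    for c in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][c] = v[r]
        z.append(det3(Mc) / d)
    D, E, F = z
    cx, cy = -D / 2, -E / 2
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5

# four hypothetical edge points lying on a bubble of center (2, 1), radius 5
cx, cy, r = fit_circle([(7, 1), (-3, 1), (2, 6), (2, -4)])
```

In a real readout system such a fit would be applied to edge pixels extracted from the detector image, with the ML classifiers deciding which candidate circles are genuine bubbles.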
A. S. Potapov
The subject of this research is deep learning methods, in which automatic construction of feature transforms takes place in pattern recognition tasks. Multilayer autoencoders were taken as the considered type of deep learning network. Autoencoders perform a nonlinear feature transform with logistic regression as an upper classification layer. In order to verify the hypothesis that the recognition rate can be improved by global optimization of the parameters of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, a new method has been designed and implemented. The method applies simulated annealing for tuning the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a decrease of the recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method, which is based on local optimization. Thus, the overfitting effect does not appear, and the possibility of improving learning in deep networks by global optimization methods (in terms of increasing recognition probability) is confirmed. The research results can be applied to improving the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
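The global-optimization step, simulated annealing over connection weights, can be illustrated on a toy objective. The cooling schedule and perturbation scale below are illustrative assumptions, not the paper's settings:

```python
import math
import random

def anneal(loss, w0, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over a weight vector: perturb the weights,
    accept worse moves with probability exp(-delta / T), cool T geometrically,
    and keep track of the best vector seen."""
    rng = random.Random(seed)
    cur, cur_loss = list(w0), loss(w0)
    best, best_loss = list(cur), cur_loss
    for s in range(steps):
        T = t0 * 0.995 ** s
        cand = [wi + rng.gauss(0, 0.1) for wi in cur]
        cl = loss(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cl < cur_loss or rng.random() < math.exp(-(cl - cur_loss) / max(T, 1e-9)):
            cur, cur_loss = cand, cl
        if cur_loss < best_loss:
            best, best_loss = list(cur), cur_loss
    return best, best_loss

# toy stand-in for a network's training loss, minimized at w = (3, -1)
def quad(w):
    return (w[0] - 3) ** 2 + (w[1] + 1) ** 2

best_w, best_loss = anneal(quad, [0.0, 0.0])
```

In the paper's setting the loss would be the network's classification error, evaluated with the regression layer retrained at each candidate weight configuration.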
A new method for learning to read technical literature in a foreign language is being developed and tested at the Language Centre of the University of Essex, Colchester, England. The method is called the "Three Question Experimental Method (3QX)," and it has been used in three courses for teaching scientific Russian to physicists. The three…
…sometimes interact in ways that inhibit collaborative learning. Successful programs use a variety of methods to foster student engagement and success in online interactive activities. We looked to the case studies for…
Rudick, C. Kyle; Golsan, Kathryn B.; Freitag, Jennifer
Course: Mixed-Method Communication Research Methods. Objective: The purpose of this semester-long activity is to provide students with opportunities to cultivate mixed-method communication research skills through a social justice-informed service-learning format. Completing this course, students will be able to: recognize the unique strengths of…
Wrinkle, Cheryl Schaefer; Manivannan, Mani K.
The K-W-L method of teaching is a simple method that actively engages students in their own learning. It has been used with kindergarten and elementary grades to teach other subjects. The authors have successfully used it to teach physics at the college level. In their introductory physics labs, the K-W-L method helped students think about what…
van der Loo, Janneke; Krahmer, Emiel; van Amelsvoort, Marije
In this paper we present preliminary results on a study on the effect of instructional method (observational learning and learning by doing) and reflection (yes or no) on academic text quality and self-efficacy beliefs. 56 undergraduate students were assigned to either an observational learning or learning-by-doing condition, with or without…
Chan, Cecilia Ka Yuk
Experiential learning pedagogy is taking a lead in the development of graduate attributes and educational aims as these are of prime importance for society. This paper shows a community service experiential project conducted in China. The project enabled students to serve the affected community in a post-earthquake area by applying their knowledge…
Hussain, Sayed Yusoff bin Syed; Hoe, Tan Wee; Idris, Muhammad Zaffwan bin
Digital game-based learning (DGBL) has been regarded as a sound learning strategy for raising pupils' willingness and interest in many disciplines. Normally, video and digital games are used in the teaching and learning of mathematics. Based on the literature, digital games have proven their capability in motivating pupils and are likely to contribute to effective learning of mathematics. Hence this research aims to construct a DGBL application for the teaching of mathematics to Year 1 pupils. A quasi-experimental study was then carried out in a school located in Gua Musang, Kelantan, involving 39 pupils. Specifically, this article tests the effectiveness of the use of DGBL in teaching the topic Addition of Less than 100 on pupils' achievement. This research employed a quasi-experimental pre- and post-test non-equivalent control group design. The data were analysed using a nonparametric test, namely the Mann-Whitney U. The research findings show that the use of DGBL could increase pupils' achievement in the topic of Addition of Less than 100. In practice, this research indicates that DGBL can be utilized as an alternative reference strategy for mathematics teachers.
Kupczynski, Lori; Mundy, Marie-Anne; Ruiz, Alberto
The purpose of this study was to examine the effects of the Community of Inquiry framework through an in-depth examination of learning comprised of teaching, social and cognitive presence in traditional versus cooperative online teaching at a community college. A total of 21 students participated in this study, with approximately 45% having taken…
Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game-theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' responses and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting the attackers' reaction, with the goal of computing an optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, by engaging experimental subjects with red-teaming experience in trying to actively circumvent an intrusion detection system, and by learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.
Wu, Lin; Wang, Yang; Pan, Shirui
It is now well established that sparse representation models work effectively for many visual recognition tasks, and they have pushed forward the success of dictionary learning in this area. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (data objects from distinct classes that share strong visual similarities) makes it challenging to learn effective recognition models. A large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are often difficult to obtain, making it hard to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper we propose a weakly supervised dictionary learning method that automatically learns a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and a set of sub-dictionaries is then jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition demonstrate the effectiveness of our approach.
In MIMO communication systems, maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close-to-ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and on complexity in terms of search space are also studied. Our results demonstrate that, with a proper choice of storage size, the proposed method visits 40% fewer nodes than a sphere decoding algorithm at a signal-to-noise ratio (SNR) of 20 dB, and an order of magnitude fewer at 0 dB SNR.
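The core idea of pruned tree-search ML detection can be sketched in a few lines. The snippet below is an illustration, not the paper's exact hybrid algorithm: it assumes the channel matrix has already been reduced to lower-triangular form (as a QR decomposition would provide in practice), so the partial residual cost at each tree level is a valid lower bound on the full cost and branches can be pruned against the best complete path found so far. All function names and the toy 2x2 example are invented for illustration.

```python
from itertools import product

def ml_brute_force(L, y, alphabet):
    """Exhaustive ML detection: minimize ||y - L s||^2 over all symbol vectors."""
    n = len(y)
    best, best_cost = None, float("inf")
    for s in product(alphabet, repeat=n):
        cost = 0.0
        for i in range(n):
            r = y[i] - sum(L[i][j] * s[j] for j in range(n))
            cost += r * r
        if cost < best_cost:
            best, best_cost = list(s), cost
    return best, best_cost

def ml_tree_search(L, y, alphabet):
    """Depth-first tree search with best-so-far pruning.

    Assumes L is lower-triangular, so the residual of row `level`
    depends only on symbols 0..level and the accumulated partial cost
    is a lower bound on any completion of the current path."""
    n = len(y)
    best = {"s": None, "cost": float("inf")}

    def expand(level, s, cost):
        if cost >= best["cost"]:
            return                      # prune: cannot beat best complete path
        if level == n:
            best["s"], best["cost"] = list(s), cost
            return
        for sym in alphabet:
            cand = s + [sym]
            r = y[level] - sum(L[level][j] * cand[j] for j in range(level + 1))
            expand(level + 1, cand, cost + r * r)

    expand(0, [], 0.0)
    return best["s"], best["cost"]
```

With a lower-triangular channel and a small alphabet, the pruned search returns the same minimizer as the exhaustive search while skipping branches whose partial cost already exceeds the incumbent.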
Cheryl J. Davis
It is common in college courses to test students on the required readings for that course. With the rise of online education, students are often required to provide evidence of having read the material. However, there is little empirical research identifying the best written means of assessing that students have read the materials. This study experimentally compared the effect of assigned reading summaries versus study questions on student test performance. The results revealed that study questions produced higher quiz scores and better preparation for the quiz, based on student feedback. Limitations of the study included a small sample size and extraneous activities that may have affected general knowledge of a topic. The results suggest that study questions focusing students on critical information in the required readings improve student learning.
The best scientists and engineers regularly combine creative and critical skill sets. As faculty, we are responsible to provide future scientists and engineers with those skills sets. EGR 390: Engineering Measurements at Murray State University is structured to actively engage students in the processes that develop and enhance those skills. Students learn through a mix of traditional lecture and homework, active discussion of open-ended questions, small group activities, structured laboratory exercises, oral and written communications exercises, student chosen team projects, and peer evaluations. Examples of each of these activities, the skill set addressed by each activity, outcomes from and effectiveness of each activity and recommendations for future directions in the EGR 390 course as designed will be presented.
Maizels, Max; Mickelson, Jennie; Yerkes, Elizabeth; Maizels, Evelyn; Stork, Rachel; Young, Christine; Corcoran, Julia; Holl, Jane; Kaplan, William E
Changes in health care are stimulating residency training programs to develop new methods for teaching surgical skills. We developed Computer-Enhanced Visual Learning (CEVL) as an innovative Internet-based learning and assessment tool. The CEVL method uses the educational procedures of deliberate practice and performance to teach and learn surgery in a stylized manner. CEVL is a learning and assessment tool that can provide students and educators with quantitative feedback on learning a specific surgical procedure. The method involves examining quantitative data on improvement in surgical skills. Herein, we qualitatively describe the method and show how program directors (PDs) may implement this technique in their residencies. CEVL allows an operation to be broken down into teachable components. The process relies on feedback and remediation to improve performance, with a focus on learning that is applicable to the next case being performed. CEVL has been shown to be effective for teaching pediatric orchiopexy and is being adapted to additional adult and pediatric procedures and to office examination skills. The CEVL method is available to other residency training programs.
McDermott, Hilary J.; Dovey, Terence M.
Research methods courses aim to equip students with the knowledge and skills required for research yet seldom include practical aspects of assessment. This reflective practitioner report describes and evaluates an innovative approach to teaching and assessing advanced qualitative research methods to final-year psychology undergraduate students. An…
The research methods unit of survey psychology classes introduces important concepts of scientific reasoning and fluency, making it an ideal course in which to deliver enhanced curricula. To increase interest and engagement, the author developed an expanded research methods and statistics module to give students the opportunity to explore…
Fan, Yu; Guo, Huiming
A fast multi-class SVM (Support Vector Machine) learning method based on a binary tree is presented, addressing the low learning efficiency of SVMs when processing large-scale multi-class samples. The paper adopts a bottom-up method to build the binary tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the corresponding samples at each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers of the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
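The sample-reduction idea above, clustering each class and training only on the extracted central points, can be sketched in miniature. This is a hedged illustration, not the authors' implementation: it substitutes a nearest-prototype rule for the SVM sub-classifiers, uses a deterministic-initialization k-means, and all names and the toy data are invented.

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(points):
    """Componentwise mean of a non-empty list of points."""
    d = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(d))

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means with deterministic init (first k points)."""
    centers = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[j].append(p)
        centers = [mean(g) if g else centers[j] for j, g in enumerate(groups)]
    return centers

def reduce_samples(data, k=2):
    """Cluster each class and keep only the central points as prototypes.

    data: dict mapping label -> list of points. Returns (center, label) pairs,
    a much smaller training set than the original samples."""
    protos = []
    for label, pts in data.items():
        for c in kmeans(pts, min(k, len(pts))):
            protos.append((c, label))
    return protos

def classify(protos, p):
    """Nearest-prototype rule standing in for the learned sub-classifiers."""
    return min(protos, key=lambda cl: dist2(p, cl[0]))[1]
```

Eight training points per class collapse to two prototypes per class, while well-separated test points are still labeled correctly; the abstract's method applies the same reduction before SVM training.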
Liu, Yuzhe; Gopalakrishnan, Vanathi
Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
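Two of the four imputation methods the study compares, mean imputation and k-nearest-neighbors imputation, are simple enough to sketch directly. The snippet below is a generic illustration (function names, the `None`-as-missing convention, and the kNN distance over shared observed columns are assumptions, not the paper's code):

```python
def column_means(rows):
    """Mean of each column, ignoring missing (None) entries."""
    ncol = len(rows[0])
    means = []
    for j in range(ncol):
        vals = [r[j] for r in rows if r[j] is not None]
        means.append(sum(vals) / len(vals))
    return means

def mean_impute(rows):
    """Replace every missing value with its column mean."""
    means = column_means(rows)
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

def knn_impute(rows, k=2):
    """Fill each missing value with the mean of that column over the k
    nearest fully observed rows (distance on the row's observed columns)."""
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        donors = [d for d in rows if None not in d]
        filled = list(r)
        for j, v in enumerate(r):
            if v is None:
                def dist(d):
                    return sum((r[i] - d[i]) ** 2
                               for i in range(len(r)) if r[i] is not None)
                near = sorted(donors, key=dist)[:k]
                filled[j] = sum(d[j] for d in near) / len(near)
        out.append(filled)
    return out
```

On a row whose observed features exactly match one complete row, kNN with k=1 copies that donor's value, whereas mean imputation ignores similarity and uses the column average; that difference is exactly why the paper compares several imputers.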
Bonney, Kevin M
Following years of widespread use in business and medical education, the case study teaching method is becoming an increasingly common teaching strategy in science education. However, the current body of research provides limited evidence that the use of published case studies effectively promotes the fulfillment of specific learning objectives integral to many biology courses. This study tested the hypothesis that case studies are more effective than classroom discussions and textbook reading at promoting learning of key biological concepts, development of written and oral communication skills, and comprehension of the relevance of biological concepts to everyday life. This study also tested the hypothesis that case studies produced by the instructor of a course are more effective at promoting learning than those produced by unaffiliated instructors. Additionally, performance on quantitative learning assessments and student perceptions of learning gains were analyzed to determine whether reported perceptions of learning gains accurately reflect academic performance. The results reported here suggest that case studies, regardless of the source, are significantly more effective than other methods of content delivery at increasing performance on examination questions related to chemical bonds, osmosis and diffusion, mitosis and meiosis, and DNA structure and replication. This finding was positively correlated to increased student perceptions of learning gains associated with oral and written communication skills and the ability to recognize connections between biological concepts and other aspects of life. Based on these findings, case studies should be considered as a preferred method for teaching about a variety of concepts in science courses.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent-task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
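Plain kernel ridge regression is the building block that the Kronecker variants above instantiate: fit by solving the regularized linear system (K + λI)α = y, predict by a kernel-weighted sum. The Kronecker-structured methods solve the same kind of system with a pairwise (product) kernel. The sketch below shows the scalar case; the Gaussian kernel, the tiny hand-rolled linear solver, the toy data, and all function names are illustrative assumptions, not the paper's code.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, kernel, lam):
    """Closed-form KRR: alpha = (K + lam*I)^{-1} y, minimizing squared loss."""
    K = [[kernel(a, b) for b in xs] for a in xs]
    for i in range(len(xs)):
        K[i][i] += lam
    return solve(K, ys)

def krr_predict(xs, alpha, kernel, x):
    """f(x) = sum_i alpha_i k(x_i, x)."""
    return sum(a * kernel(xi, x) for a, xi in zip(alpha, xs))
```

With a small regularizer the fitted function nearly interpolates the training targets, reflecting the implicit squared-loss minimization the abstract attributes to all these kernel methods.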
Ustvarjanje produktivnega geografskega učnega okolja z vidika učnih stilov, oblik in metod = Creating the productive geographical learning environment from the point of view of learning-styles and learning-methods
Experiences gained in space indirectly influence the educational process and the learning environment. The most productive learning environment is therefore one founded on experiential learning. In this research, experience took the leading place in forming didactic approaches to teaching geography and in defining learning styles and methods, with the aim of creating a representative geographical learning environment.
Sadeghi, Roya; Sedaghat, Mohammad Mehdi; Sha Ahmadi, Faramarz
Introduction: Blended learning, a new approach in educational planning, is defined as applying more than one method, strategy, technique or medium in education. Today, due to the development of Internet network infrastructure and the access most students have to it, the Internet can be utilized along with traditional and conventional training methods. The aim of this study was to compare students' learning and satisfaction under a combination of lecture and e-learning versus conventional lecture methods. Methods: This quasi-experimental study was conducted among sophomore students of the Public Health School, Tehran University of Medical Science, in 2012-2013. Four classes of the school were randomly selected and divided into two groups. Education in two classes (45 students) was in the form of the lecture method, and in the other two classes (48 students) it was a blended method combining e-learning and lectures. The students' knowledge about tuberculosis in the two groups was collected and measured using pre- and post-tests, by sending self-reported electronic questionnaires to the students' email addresses through Google Document software. At the end of the educational program, students' satisfaction and comments about the two methods were also collected by questionnaire. Statistical tests such as descriptive methods, paired t-test, independent t-test and ANOVA were run in SPSS 14, and p≤0.05 was considered a significant difference. Results: The mean scores of the lecture and blended groups were 13.18±1.37 and 13.35±1.36, respectively; the difference between the pre-test scores of the two groups was not statistically significant (p=0.535). Knowledge scores increased in both groups after training, and the mean and standard deviation of knowledge scores of the lecture and blended groups were 16.51±0.69 and 16.18±1.06, respectively. The difference between the post-test scores of the two groups was not statistically significant (p=0.112).
Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu
This study proposes a robust cooperative control method combining reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. Robust control, on the other hand, guarantees stability and robustness, but requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning with a minimal amount of computation for controlling continuous-valued actions, and traditional robust control, namely H∞ control. The proposed method was compared with the conventional control method (the actor-critic alone) through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.
Liang, Faming; Carrol, Raymond J
This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local-trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms which address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. The book covers the multicanonical algorithm, dynamic weighting, dynamically weight…
Tian, Yuling; Zhang, Hongxian
For the purposes of information retrieval, users must find highly relevant documents within a system (often quite a large one, comprising many individual documents) based on an input query. Ranking the documents according to their relevance to user needs is a challenging endeavor and a hot research topic: several rank-learning methods based on machine learning techniques already exist which can generate ranking functions automatically. This paper proposes a parallel B-cell algorithm, RankBCA, for rank learning, which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform them in accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions.
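The clonal selection mechanism behind immune-inspired optimizers can be sketched generically. The loop below is a CLONALG-style illustration, not RankBCA itself (which searches over ranking functions and runs in parallel): clone the fittest antibodies, hypermutate the clones (better-ranked parents mutate less), and keep the fittest survivors. The toy one-dimensional fitness and all parameter names are assumptions.

```python
import random

def clonal_selection(fitness, seed=0, pop_size=8, clones=4, gens=60, step=1.0):
    """Generic clonal selection loop maximizing `fitness` over a real scalar.

    Each generation: rank the population, clone the top half, perturb each
    clone with Gaussian noise whose scale grows with the parent's rank
    (worse parents explore more), then keep the fittest pop_size antibodies."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        offspring = []
        for rank, ab in enumerate(pop[: pop_size // 2]):
            scale = step * (rank + 1) / pop_size   # lower rank -> smaller mutations
            for _ in range(clones):
                offspring.append(ab + rng.gauss(0.0, scale))
        # elitist selection: parents compete with their mutated clones
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Maximize a simple concave fitness with optimum at x = 3.
best = clonal_selection(lambda x: -(x - 3.0) ** 2)
```

Elitism guarantees the best antibody never degrades, so the population affinity is monotonically non-decreasing, the property that drives the convergence-rate comparison in the abstract.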
Liang, Ru-Ze; Xie, Wei; Li, Weizhi; Wang, Hongqi; Wang, Jim Jing-Yan; Taylor, Lisa
In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of the two domains to one common space and learn a classifier in this common space. We then adapt the common classifier to the two domains by adding two adaptive functions to it, respectively. In the common space, the source domain data points are weighted and matched to the target domain in terms of distributions. The weighting terms of the source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated on several benchmark cross-domain data sets, and it outperforms existing state-of-the-art transfer learning methods.
Mingjie Tan; Peiji Shao
The high rate of dropout is a serious problem in e-learning programs, and it has therefore received extensive attention from education administrators and researchers. Predicting potential dropout students is a workable way to prevent dropout. Based on an analysis of the related literature, this study selected students' personal characteristics and academic performance as input attributes. Prediction models were developed using Artificial Neural Network (ANN), Decision Tree (DT) and Bayesian Ne...
Mohamed, Aly A; Berg, Wendie A; Peng, Hong; Luo, Yahong; Jankowitz, Rachel C; Wu, Shandong
Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging and Reporting Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., "scattered density" and "heterogeneously dense". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, aiming at providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in current clinical workflow. In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The truths of the density categories were based on standard clinical assessment made by board-certified breast imaging radiologists. Effects of direct training from scratch solely using digital mammogram images and transfer learning of a pretrained model on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. In order to measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier. The AUC was 0.9421 when the CNN-model was trained from scratch on our own mammogram images, and the accuracy increased gradually along with an increased size of training samples
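The AUC figure reported above (0.9421) is the area under the ROC curve, which is equivalent to the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney U formulation). A minimal sketch of that computation, with invented function names and labels:

```python
def auc(scores, labels):
    """AUC as P(score_positive > score_negative) over all pos/neg pairs;
    ties count as half a win (Mann-Whitney U / AUC equivalence)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfectly separating scorer yields 1.0 and a scorer whose positives beat negatives in half the pairs yields 0.5; this O(P·N) pairwise form is fine for small evaluations, while rank-based implementations are used at scale.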
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
The US shale exploration and production (E&P) industry has grown since 2007 due to the development of new techniques such as hydraulic fracturing and horizontal drilling. As a result, the share of shale gas in US natural gas production is almost 50%, and the share of tight oil in US crude oil production is almost 52%. Even though oil and gas prices decreased sharply in 2014, production of shale gas and tight oil increased between 2014 and 2015. We show that many players in the US shale E&P industry succeeded in decreasing their production costs to maintain their business activity and production. However, most companies in the US petroleum E&P industry incurred losses in 2015 and 2016. Furthermore, crude oil and natural gas prices could not rebound to their 2015 levels. Therefore, many companies in the US petroleum E&P industry need to increase their productivity to overcome the low-commodity-price situation. Hence, to test the change in their productivity and analyze their ability to survive in the petroleum industry, this study calculates the learning rate using the annual report data of US shale E&P players from 2008 to 2016. The result is a long-term learning rate of 1.87% and a short-term learning rate of 3.16%, indicating a change in the technological development trend.
Learning Prototypical Cases: OFF-BROADWAY, MCI and RMHC-* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the… at the Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI and RMHC-* made the following notable…
Gurpinar, Erol; Alimoglu, Mustafa Kemal; Mamakli, Sumer; Aktekin, Mehmet
The curriculum of our medical school has a hybrid structure including both traditional training (lectures) and problem-based learning (PBL) applications. The purpose of this study was to determine the learning styles of our medical students and to investigate the relation of learning styles to satisfaction with different instruction methods and to academic achievement in them. This study was carried out with the participation of 170 first-year medical students (a participation rate of 91.4%). The researchers prepared sociodemographic and satisfaction questionnaires to determine the characteristics of the participants and their satisfaction levels with traditional training and PBL. The Kolb learning styles inventory was used to explore the learning styles of the study group. The participants completed all forms at the end of the first year of medical education. Indicators of academic achievement were the scores of five theoretical block exams and five PBL exams performed throughout the 2008-2009 academic year. The majority of the participants fell into the "diverging" (n = 84, 47.7%) and "assimilating" (n = 73, 41.5%) groups; the numbers of students in the "converging" and "accommodating" groups were 11 (6.3%) and 8 (4.5%), respectively. In all learning style groups, PBL satisfaction scores were significantly higher than those for traditional training. Exam scores for PBL and traditional training did not differ among the four learning styles. In logistic regression analysis, learning style (assimilating) predicted student satisfaction with traditional training and success in theoretical block exams; nothing predicted PBL satisfaction or success. This is the first study among medical students to evaluate the relation of learning style to student satisfaction and academic achievement. More research with larger groups is needed to generalize our results. Some learning styles may relate to satisfaction with, and achievement in, some instruction methods.
Edwards, Nathan J
The PepArML meta-search peptide identification platform for tandem mass spectra provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; Tandem with native, k-score and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores—reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies two to three times more spectra than individual search engines at 10% FDR.
Floor-fractured craters are impact craters that have undergone post-impact deformation. They are characterized by shallow floors with a plate-like or convex appearance, wide floor moats, and radial, concentric, and polygonal floor fractures. While the origin of these deformations has long been debated, it is now generally accepted that they result from the emplacement of shallow magmatic intrusions below the crater floor. These craters thus constitute an efficient tool for probing the importance of intrusive magmatism from the lunar surface. The most recent catalog of lunar floor-fractured craters references about 200 of them, mainly located around the lunar maria. Herein, we will discuss the possibility of using machine learning algorithms to detect new floor-fractured craters on the Moon among the 60,000 craters referenced in the most recent catalogs. In particular, we will use the gravity field provided by the Gravity Recovery and Interior Laboratory (GRAIL) mission and the topographic dataset obtained by the Lunar Orbiter Laser Altimeter (LOLA) instrument to design a set of representative features for each crater. We will then discuss the design of a binary supervised classifier, based on these features, to discriminate between the presence or absence of a crater-centered intrusion below a specific crater. First predictions from different classifiers will be presented in terms of their accuracy and uncertainty.
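The abstract leaves the classifier unspecified. As an illustrative sketch, a plain gradient-descent logistic regression over hypothetical per-crater features (say, a gravity-anomaly term and a floor-depth term; the feature names and the optimizer are assumptions, not the authors' design) could look like this:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression. X is a list of
    per-crater feature vectors (e.g. gravity anomaly, floor depth),
    y contains 0/1 labels (intrusion absent/present)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Binary decision at the usual 0.5 probability threshold."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0
```

Any other binary supervised classifier (random forest, SVM, ...) slots into the same fit/predict interface.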
Zhang, Wen; Zhu, Xiaopeng; Fu, Yu; Tsuji, Junko; Weng, Zhiping
Alternative splicing, the process that removes introns and joins exons in gene transcripts, is critical to gene expression, and splicing branchpoints are key indicators for it. Wet experiments have identified a great number of human splicing branchpoints, but many branchpoints are still unknown. To guide wet experiments, we develop computational methods to predict human splicing branchpoints. Considering that an intron may have multiple branchpoints, we formulate branchpoint prediction as a multi-label learning problem and attempt to predict branchpoint sites from intron sequences. First, we investigate a variety of intron sequence-derived features, such as the sparse profile, dinucleotide profile, position weight matrix profile, Markov motif profile, and polypyrimidine tract profile. Second, we consider several multi-label learning methods: partial least squares regression, canonical correlation analysis, and regularized canonical correlation analysis, and use them as the basic classification engines. Third, we propose two ensemble learning schemes that integrate different features and different classifiers for branchpoint prediction: a genetic algorithm-based weighted-average ensemble method and a logistic regression-based ensemble method. In computational experiments, the two ensemble learning methods outperform benchmark branchpoint prediction methods and produce high-accuracy results on the benchmark dataset.
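The weighted-average ensemble can be sketched in miniature: given per-position branchpoint probabilities from several base classifiers, the combiner averages them with tuned weights and thresholds the result. The weights and threshold below are fixed by hand for illustration; in the paper the weights would be evolved by the genetic algorithm.

```python
def weighted_average_ensemble(prob_lists, weights):
    """Combine per-position branchpoint probabilities from several
    base classifiers into one weighted-average score per position."""
    total = sum(weights)
    n = len(prob_lists[0])
    return [sum(w * probs[i] for w, probs in zip(weights, prob_lists)) / total
            for i in range(n)]

def call_branchpoints(combined, threshold=0.5):
    """Multi-label decision: every intron position whose combined
    score exceeds the threshold is predicted as a branchpoint."""
    return [i for i, p in enumerate(combined) if p > threshold]
```

A genetic algorithm would search over the weight vector to maximize validation accuracy; the combiner itself stays exactly this simple.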
Ching Lee Koo
Recently, the greatest statistical and computational challenge in genetic epidemiology has been to identify and characterize the genes that interact with other genes and with environmental factors to influence complex multifactorial diseases. These gene-gene interactions are also denoted as epistasis, a phenomenon that cannot be handled by traditional statistical methods due to the high dimensionality of the data and the occurrence of multiple polymorphisms. Hence, several machine learning methods, namely neural networks (NNs), support vector machines (SVMs), and random forests (RFs), have been used to identify such susceptibility genes in common multifactorial diseases. This paper gives an overview of these machine learning methods, describing the methodology of each and its application in detecting gene-gene and gene-environment interactions. Lastly, the paper discusses each machine learning method and presents its strengths and weaknesses in detecting gene-gene interactions in complex human disease.
Latipa Sari, Herlina; Suranti Mrs., Dewi; Natalia Zulita, Leni
The teaching and learning process at SMK Negeri 2 Bengkulu Tengah has applied an e-learning system for teachers and students. The e-learning was based on the classification of normative, productive, and adaptive subjects. SMK Negeri 2 Bengkulu Tengah consisted of 394 students and 60 teachers with 16 subjects. The e-learning database records were used in this research to observe students' activity patterns in attending class. The K-Means algorithm was used to classify students' learning activities in the e-learning system, yielding clusters of student activity and of improvement in students' ability. The implementation of the K-Means clustering method for the electronic learning model at SMK Negeri 2 Bengkulu Tengah observed 10 student activities, namely participation in the classroom, submit assignment, view assignment, add discussion, view discussion, add comment, download course materials, view article, view test, and submit test. In the e-learning model, testing was conducted on 10 students, yielding 2 clusters of membership data (C1 and C2). Cluster 1, with a membership percentage of 70%, consisted of 6 members: 1112438 Anggi Julian, 1112439 Anis Maulita, 1112441 Ardi Febriansyah, 1112452 Berlian Sinurat, 1112460 Dewi Anugrah Anwar, and 1112467 Eka Tri Oktavia Sari. Cluster 2, with a membership percentage of 30%, consisted of 4 members: 1112463 Dosita Afriyani, 1112471 Erda Novita, 1112474 Eskardi, and 1112477 Fachrur Rozi.
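A minimal K-Means implementation over per-student activity-count vectors illustrates the clustering step; the toy feature vectors and the random seed in the test are illustrative, not taken from the study.

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal K-Means. `points` are per-student activity-count
    vectors (e.g. assignments submitted, discussions added, ...).
    Returns a cluster label for each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # update step: each centre moves to the mean of its members
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(d) / len(members)
                                   for d in zip(*members))
    return labels
```

With k=2 this reproduces the shape of the study's result: each student falls into one of two activity clusters (C1 or C2).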
Purpose: Dental extraction is a routine part of clinical dental practice. For this reason, understanding how students develop extraction knowledge and skills is important. Problem Statement and Objectives: To date, there is no accredited statement about the most effective method for teaching exodontia to dental students. Students have different abilities and preferences regarding how they learn and process information; this is defined as learning style. In this study, the effectiveness of active learning in the teaching of preclinical oral surgery was examined. The personality type of the groups involved in this study was determined, and the possible effect of personality type on learning style was investigated. Method: This study was undertaken over five years, from 2011 to 2015. The sample consisted of 115 students and eight staff members. Questionnaires were submitted by 68 students and all eight staff members involved. Three measures were used in the study: the Index of Learning Styles (Felder and Soloman, 1991), the Myers-Briggs Type Indicator (MBTI), and the styles of learning typology (Grasha and Hruska-Riechmann). Results and Discussion: Findings indicated that demonstration and minimal clinical exposure give students personal validation. Frequent feedback on their work is strongly indicated to build the cognitive, psychomotor, and interpersonal skills needed from preclinical oral surgery courses. Conclusion: Small-group cooperative active learning in the form of demonstration and minimal clinical exposure, with frequent feedback and personal validation of students' work, is strongly indicated to build the skills needed for preclinical oral surgery courses. PMID:28357004
Finnegan, Alex; Song, Jun S
New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
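The sampling idea can be sketched with a Metropolis scheme: mutate the anchor sequence one base at a time and accept mutations only when the learned function stays near its value on the anchor. The GC-content scorer used in the test stands in for a trained network, and the acceptance rule and temperature are illustrative assumptions, not the authors' exact algorithm.

```python
import math
import random

BASES = "ACGT"

def sample_maxent(seq, f, beta=50.0, steps=2000, seed=1):
    """Metropolis sketch of the maximum-entropy idea: propose
    single-base mutations of the anchor sequence and accept them
    with probability exp(-beta * increase in |f(x) - f(anchor)|),
    so that sampled sequences keep the learned function f near
    its value on the anchor. `f` stands in for the network."""
    rng = random.Random(seed)
    target = f(seq)
    cur, cur_pen = list(seq), 0.0
    for _ in range(steps):
        i = rng.randrange(len(cur))
        old = cur[i]
        cur[i] = rng.choice(BASES)
        pen = abs(f("".join(cur)) - target)
        # accept downhill moves always, uphill moves rarely
        if pen <= cur_pen or rng.random() < math.exp(-beta * (pen - cur_pen)):
            cur_pen = pen
        else:
            cur[i] = old  # reject: restore the previous base
    return "".join(cur)
```

Positions that rarely change across many such samples are the ones the learned function constrains, i.e. the network-learned features.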
Madenda, Sarifuddin; Tommy, F. R.
New developments in information technology and telecommunications play an important role in the fast and accurate exchange of information ranging from text, sound, and graphics to video. These technologies appear to be very effective for distance learning, virtual universities, and e-learning. This paper presents an e-learning programming method and its implementation based on multimedia and the Web. An example case study concerns the human organs, where organ functions are presented as text and sound and their activities as graphics and video.
Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour on the same dataset in order to compare their efficiency in the sense of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess computing sensitivity and specificity of each model. Results: The artificial neural network model based on multilayer perceptron yielded a higher classification rate than the models produced by other methods. The pairwise t-test showed a statistical significance between the artificial neural network and the k-nearest neighbour model, while the difference among other methods was not statistically significant. Conclusions: Tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
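The 10-fold cross-validation protocol can be sketched generically for any fit/predict pair; the trivial majority-class model in the test below is a stand-in for the four classifiers compared in the paper, and the contiguous fold split is an illustrative simplification (real studies usually shuffle or stratify).

```python
def k_fold_indices(n, k=10):
    """Split range(n) into k contiguous folds of near-equal size."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, k, fit, predict_fn):
    """Return per-fold accuracy: train on k-1 folds, test on the
    held-out fold, for every fold in turn."""
    accs = []
    for fold in k_fold_indices(len(X), k):
        held_out = set(fold)
        Xtr = [x for i, x in enumerate(X) if i not in held_out]
        ytr = [t for i, t in enumerate(y) if i not in held_out]
        model = fit(Xtr, ytr)
        hits = sum(predict_fn(model, X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return accs
```

The per-fold accuracies feed directly into the pairwise t-tests the paper uses to compare methods.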
Blumberg, Maurice H.; Randall, Richard L.
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires considerable human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have many advantages for image recognition. By means of a large number of remote sensing image samples and deep neural network models, objects of interest such as buildings can be rapidly deciphered. Whether in terms of efficiency or accuracy, the deep learning method is superior. This paper describes research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
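Variable splitting introduces auxiliary variables so that each ADMM subproblem has a simple closed-form update; for an l1-sparse coefficient variable, that update is elementwise soft-thresholding. A minimal sketch of just that one step (the full TGV-DL solver involves several more coupled updates):

```python
def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the
    l1 norm: shrink each coefficient toward zero by lam, zeroing
    the small ones. Splitting-based solvers apply this to the
    sparse dictionary-coefficient variable at every iteration."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1)
            for v in x]
```

Large coefficients survive (shrunk by lam) while small ones are set exactly to zero, which is what makes the dictionary representation sparse.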
The present paper discusses attentional focus in motor learning and performance from the point of view of mindful movement practices, taking as a starting point the Feldenkrais method. It is argued that earlier criticism of the Feldenkrais method (and thereby implicitly of mindful movement practices more generally) because of allegedly inappropriate attentional focus turns out to be unfounded in light of recent developments in the study of motor learning and performance. Conversely, the examples of the Feldenkrais method and Ki-Aikido are used to illustrate how both Western and Eastern (martial arts derived) mindful movement practices might benefit sports psychology. © The Author(s) 2016.
Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna
... machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak-calling methods with respect to two distinct criteria. We first utilize established machine learning methods ...
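Of the four detectors, local maxima search is the simplest to sketch on a 1-D signal; the neighbourhood width and height cutoff below are illustrative parameters, not those used in the evaluation.

```python
def local_maxima_peaks(signal, min_height=0.0, window=1):
    """Local-maxima search: index i is called a peak if its value
    reaches min_height and is strictly greater than every
    neighbour within `window` positions."""
    peaks = []
    for i in range(len(signal)):
        if signal[i] < min_height:
            continue
        lo, hi = max(0, i - window), min(len(signal), i + window + 1)
        if all(signal[i] > signal[j] for j in range(lo, hi) if j != i):
            peaks.append(i)
    return peaks
```

Real IMS data is two-dimensional (retention time x drift time), so production detectors run the same idea over a neighbourhood in both axes.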
Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha
Identification of drug targets and drug-target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in predicting drug targets and drug-target interactions. Here, we describe the details of developing prediction models using supervised learning techniques for predicting human drug targets and their interactions.
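Supervised drug-target predictors need fixed-length feature vectors for protein sequences before any classifier can be trained. The chapter's exact feature set is not given here; amino-acid composition is one widely used choice and is easy to sketch.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """Amino-acid composition: the fraction of each of the 20
    standard residues in the sequence, giving a fixed-length
    20-dimensional feature vector for any protein."""
    seq = seq.upper()
    n = len(seq) or 1
    return [seq.count(a) / n for a in AMINO_ACIDS]
```

Vectors like this, optionally concatenated with drug descriptors, are what a supervised model consumes to predict target status or interaction.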
Bobra, M. G.; Ilonidis, S.
Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections (CMEs). Usually, solar active regions that produce large flares will also produce a CME, but this is not always true. Despite advances in numerical modeling, it is still unclear which circumstances will produce a CME. Therefore, it is worthwhile to empirically determine which features distinguish flares associated with CMEs from flares that are not. At this time, no extensive study has used physically meaningful features of active regions to distinguish between these two populations. As such, we attempt to do so by using features derived from (1) photospheric vector magnetic field data taken by the Solar Dynamics Observatory’s Helioseismic and Magnetic Imager instrument and (2) X-ray flux data from the Geostationary Operational Environmental Satellite’s X-ray Flux instrument. We build a catalog of active regions that either produced both a flare and a CME (the positive class) or simply a flare (the negative class). We then use machine-learning algorithms to (1) determine which features distinguish these two populations, and (2) forecast whether an active region that produces an M- or X-class flare will also produce a CME. We compute the True Skill Statistic, a forecast verification metric, and find that it is a relatively high value of ∼0.8 ± 0.2. We conclude that a combination of six parameters, which are all intensive in nature, will capture most of the relevant information contained in the photospheric magnetic field.
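The True Skill Statistic reported above is computed directly from the confusion matrix of the CME forecasts:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """True Skill Statistic (Hanssen-Kuipers discriminant):
    TSS = hit rate - false-alarm rate
        = TP/(TP+FN) - FP/(FP+TN).
    Ranges from -1 to 1; 0 means no skill, 1 is a perfect forecast."""
    return tp / (tp + fn) - fp / (fp + tn)
```

Unlike raw accuracy, TSS is insensitive to the class imbalance between flare-plus-CME and flare-only active regions, which is why it is the standard verification metric in this literature.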
In this paper, we develop an automated method for the detection of tubercle bacilli in clinical specimens, principally sputum. This investigation is the first attempt to automatically identify TB bacilli in sputum using image processing and learning vector quantization (LVQ) techniques. The evaluation of LVQ on a tuberculosis dataset shows an average accuracy of 91.33%.
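The training rule at the heart of an LVQ classifier is a simple prototype update. This sketch shows one LVQ1 step; the learning rate and labels are illustrative, and the image-derived feature vectors are assumed to have been extracted beforehand.

```python
def lvq1_update(prototype, proto_label, sample, sample_label, lr=0.1):
    """One LVQ1 step: the winning prototype moves toward the
    training sample when their class labels match, and away
    from it when they do not."""
    sign = 1.0 if proto_label == sample_label else -1.0
    return [p + sign * lr * (s - p) for p, s in zip(prototype, sample)]
```

Repeating this update over labelled feature vectors (bacillus vs. non-bacillus regions) pulls each class prototype toward its own examples; at test time, a region is classified by its nearest prototype.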
The purpose of this empirical study was to determine the extent to which three different objective analytical methods--sequence analysis, surface cohesion analysis, and lexical cohesion analysis--can most accurately identify specific characteristics of online interaction. Statistically significant differences were found in all points of…
Stone, Anna; Meade, Claire; Watling, Rosamond
Feedback from students on a Level 1 Research Methods and Statistics module, studied as a core part of a BSc Psychology programme, highlighted demand for additional tutorials to help them to understand basic concepts. Students in their final year of study commonly request work experience to enhance their employability. All students on the Level 1…
GOANTA Adrian Mihai
The paper presents some of the author's endeavors in creating video courses on subjects involving technical graphics for students of the Faculty of Engineering in Braila. It also describes the steps taken to complete the method and to obtain feedback on the rate at which students access these types of courses.
The paper presents results of research on new effective teaching methods in physics and science. The findings indicate that pre-service teachers need to be educated in approaches stressing the importance of students' own activity and in the competences needed to create an interdisciplinary project. Project-based physics teaching and learning…
The curriculum of 2013 has been introduced in the schools appointed as its implementers. For the English subject, this curriculum demands that students improve their skills. To achieve this, one of the suggested methods is discovery learning, since this method is considered appropriate for increasing the students' ability, especially to fulfill minimum…
The prime objective of this research was to investigate whether the Montessori method of learning helped kindergarten pupils improve their mathematical proficiency, critical thinking and problem-solving skills, besides training them to be responsible learners. Quantitative, qualitative, and observational methods were employed in the investigation.…
The purpose of this study was to investigate the effects of inquiry-based learning method on students' academic achievement in sciences lesson. A total of 40 fifth grade students from two different classes were involved in the study. They were selected through purposive sampling method. The group which was assigned as experimental group was…