Sandoval, Michelle; Patterson, Dianne; Dai, Huanping; Vance, Christopher J; Plante, Elena
The neural basis of statistical learning as it occurs over time was explored with stimuli drawn from a natural language (Russian nouns). The input reflected the "rules" for marking categories of gendered nouns, without making participants explicitly aware of the nature of what they were to learn. Participants were scanned while listening to a series of gender-marked nouns during four sequential scans, and were tested for their learning immediately after each scan. Although participants were not told the nature of the learning task, they exhibited learning after their initial exposure to the stimuli. Independent component analysis of the brain data revealed five task-related sub-networks. Unlike prior statistical learning studies of word segmentation, this morphological learning task robustly activated the inferior frontal gyrus during the learning period. This region was represented in multiple independent components, suggesting it functions as a network hub for this type of learning. Moreover, the results suggest that subnetworks activated by statistical learning are driven by the nature of the input, rather than reflecting a general statistical learning system.
Vapnik, Vladimir; Vashist, Akshay
In the Afterword to the second edition of the book "Estimation of Dependences Based on Empirical Data" by V. Vapnik, an advanced learning paradigm called Learning Using Hidden Information (LUHI) was introduced. This Afterword also suggested an extension of the SVM method (the so-called SVM(gamma)+ method) to implement algorithms which address the LUHI paradigm (Vapnik, 1982-2006, Sections 2.4.2 and 2.5.3 of the Afterword). See also (Vapnik, Vashist, & Pavlovitch, 2008, 2009) for further development of the algorithms. In contrast to the existing machine learning paradigm, where a teacher does not play an important role, the advanced learning paradigm considers some elements of human teaching. In the new paradigm, along with examples, a teacher can provide students with hidden information that exists in explanations, comments, comparisons, and so on. This paper discusses details of the new paradigm and corresponding algorithms, introduces some new algorithms, considers several specific forms of privileged information, demonstrates the superiority of the new learning paradigm over the classical learning paradigm when solving practical problems, and discusses general questions related to the new ideas.
Healy, M J
Until recently, the dominant philosophy of science was that due to Karl Popper, with its doctrine that the proper task of science was the formulation of hypotheses followed by attempts at refuting them. In spite of the close analogy with significance testing, these ideas do not fit well with the practice of medical statistics. The same can be said of the later philosophy of Thomas Kuhn, who maintains that science proceeds by way of revolutionary upheavals separated by periods of relatively pedestrian research which are governed by what Kuhn refers to as paradigms. Though there have been paradigm shifts in the history of statistics, a degree of continuity can also be discerned. A current paradigm shift is embodied in the spread of Bayesian ideas. It may be that a future paradigm will emphasise the pragmatic approach to statistics that is associated with the name of Daniel Schwartz.
Rohrmeier, Martin A; Cross, Ian
Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
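The alternation of judgment and incremental learning that the authors model can be illustrated with a bigram version of the n-gram approach. This is a hypothetical minimal sketch, not the authors' implementation: the model scores each test string against its current bigram counts, emits a grammaticality judgment, and then updates its counts with that very string, so exposure during an "untrained" test phase shifts later judgments.

```python
from collections import defaultdict

class OnlineBigramModel:
    """Toy bigram model that keeps learning from every string it judges."""

    def __init__(self):
        self.bigrams = defaultdict(int)   # counts of adjacent symbol pairs
        self.unigrams = defaultdict(int)  # counts of left-context symbols

    def score(self, seq):
        # Mean add-one-smoothed bigram probability across the sequence.
        probs = [(self.bigrams[(a, b)] + 1) / (self.unigrams[a] + 2)
                 for a, b in zip(seq, seq[1:])]
        return sum(probs) / len(probs)

    def learn(self, seq):
        for a, b in zip(seq, seq[1:]):
            self.bigrams[(a, b)] += 1
            self.unigrams[a] += 1

    def judge_then_learn(self, seq, threshold=0.5):
        # Forced binary grammaticality judgment, then learning of the
        # same stimulus -- the evaluation framework described above.
        grammatical = self.score(seq) >= threshold
        self.learn(seq)
        return grammatical

model = OnlineBigramModel()
# Repeated exposure to "AB"-type strings during testing...
for _ in range(5):
    model.judge_then_learn("ABAB")
# ...raises the score of chunk-sharing strings relative to novel ones.
assert model.score("ABAB") > model.score("BACA")
```

The key point of the sketch is that `judge_then_learn` updates counts even for items presented only at test, which is how grammatical chunks in the test set can drive above-chance performance in nominally untrained groups.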
Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL. © 2017 Cognitive Science Society, Inc.
Duus, Henrik Johannsen
The e-learning area is highly varied in terms of products, attitudes, and opinions, and also contains a good deal of 'noise' and myth-making, which is reflected in both the academic-scientific and the journalistic-public debate on the field. This variation in products as well as in expressed opinions is systematized and ordered into four ideal-typical paradigms. It is shown how each of these four paradigms has its own distinct characteristics and limits to its development. This has decisive strategic importance for the development of e-learning by businesses and educational institutions, since choosing the wrong paradigm will hamper that development.
Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis
The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
Protopopescu, V.; Rao, N.S.V.
We address three problems in machine learning, namely: (i) function learning, (ii) regression estimation, and (iii) sensor fusion, in the Probably and Approximately Correct (PAC) framework. We show that, under certain conditions, one can reduce these problems to regression estimation. The latter is usually tackled with artificial neural networks (ANNs) that satisfy the PAC criteria but have high computational complexity. We propose several computationally efficient PAC alternatives to ANNs for regression estimation, thereby also providing efficient PAC solutions to the function learning and sensor fusion problems. The approach is based on cross-fertilizing concepts and methods from statistical estimation, nonlinear algorithms, and the theory of computational complexity, and is designed as part of a new, coherent paradigm for machine learning.
Wang, Sue-Jane; Hung, H M James; O'Neill, Robert
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances of adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in those studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals, and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design.
Barr, Robert B.; Tagg, John
Two alternative paradigms for undergraduate education are compared; one holds teaching as its purpose, the other learning. The natures of the two paradigms are examined on the following dimensions: mission and purposes, criteria for success, teaching and learning structures, underlying learning theory, concepts of productivity and methods of…
Rogoff, Barbara; Mejía-Arauz, Rebeca; Correa-Chávez, Maricela
We discuss Learning by Observing and Pitching In (LOPI) as a cultural paradigm that provides an interesting alternative to Assembly-Line Instruction for supporting children's learning. Although LOPI may occur in all communities, it appears to be especially prevalent in many Indigenous and Indigenous-heritage communities of the Americas. We explain key features of this paradigm, previewing the chapters of this volume, which examine LOPI as it occurs in the lives of families and communities. In this introductory chapter, we focus especially on one feature of the paradigm that plays an important role in its uptake and maintenance in families, institutions, and communities-the nature of assessment. We consider the power of the dominant paradigm and the challenges in making paradigm shifts. © 2015 Elsevier Inc. All rights reserved.
Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy
Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from a continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a function of the statistical complexity of the condition and exposure. Third, our novel approach to measuring online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.
Ana B. Gil
The paradigm of Learning Object provides educators and learners with the ability to access an extensive number of learning resources. To do so, this paradigm provides different technologies and tools, such as federated search platforms and storage repositories, in order to obtain information ubiquitously and on demand. However, the vast amount and variety of educational content, which is distributed among several repositories, and the existence of various incompatible standards, technologies and interoperability layers among repositories, constitute a real problem for the expansion of this paradigm. This study presents an agent-based architecture that uses the advantages provided by Cloud Computing platforms to deal with the open issues in the Learning Object paradigm.
Tang, Ling; Yu, Lean; Wang, Shuai; Li, Jianping; Wang, Shouyang
Highlights: (1) A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. (2) The hybrid ensemble method is useful for predicting time series with high volatility. (3) The ensemble method can be used for both one-step and multi-step ahead forecasting. Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of “decomposition and ensemble”. This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose the original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of the original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as the final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool to predict complex time series with high volatility and irregularity.
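The "decomposition and ensemble" pipeline described here can be sketched in plain Python. EEMD and LSSVR themselves require dedicated libraries (e.g. an EMD package and a kernel regression implementation), so as labeled stand-ins this sketch splits the series into a smooth trend and a residual via a moving average, forecasts each component with a simple linear extrapolation, and aggregates the component forecasts; the structure, not the specific decomposer or regressor, is the point.

```python
def moving_average(x, w):
    """Trailing moving average used as a stand-in decomposition step."""
    return [sum(x[max(0, i - w + 1):i + 1]) / len(x[max(0, i - w + 1):i + 1])
            for i in range(len(x))]

def decompose(x, w=3):
    """Split the series into a smooth 'trend' component and a residual,
    mimicking EEMD's split into intrinsic mode functions (IMFs):
    the components sum back to the original series."""
    trend = moving_average(x, w)
    residual = [xi - ti for xi, ti in zip(x, trend)]
    return [residual, trend]

def forecast_component(comp):
    """One-step-ahead forecast per component; linear extrapolation
    stands in for the LSSVR regressor of the original paradigm."""
    if len(comp) < 2:
        return comp[-1]
    return comp[-1] + (comp[-1] - comp[-2])

def ensemble_forecast(x):
    """Decomposition and ensemble: forecast each component
    independently, then aggregate into one final prediction."""
    components = decompose(x)
    return sum(forecast_component(c) for c in components)

series = [10.0, 11.0, 12.5, 13.0, 14.5, 15.0]  # hypothetical consumption data
print(round(ensemble_forecast(series), 2))
```

In the paper's full version the aggregation step is itself a learned LSSVR rather than the plain sum used here, which lets the ensemble weight components by their predictability.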
Reza Karimi, Pacific University School of Pharmacy, Hillsboro, OR, USA. Background: Problem-based learning (PBL) has made a major shift in support of student learning for many medical school curricula around the world. Since curricular development of PBL in the early 1970s and its growth in the 1980s and 1990s, there have been growing numbers of publications providing positive and negative data in regard to the curricular effectiveness of PBL. The purpose of this study was to explore supportive data for the four core objectives of PBL and to identify an interface between the objectives of PBL and a learner-centered paradigm. Methods: The four core PBL objectives, ie, structuring of knowledge and clinical context, clinical reasoning, self-directed learning, and intrinsic motivation, were used to search MEDLINE, the Education Resources Information Center, the Educator’s Reference Complete, and PsycINFO from January 1969 to January 2011. The literature search was facilitated and narrowed if the published study included the following terms: “problem-based learning”, “medical education”, “traditional curriculum”, and one of the above four PBL objectives. Results: Through a comprehensive search analysis, one can find supportive data for the effectiveness of a PBL curriculum in achieving the four core objectives of PBL. A further analysis of these four objectives suggests that there is an interface between PBL objectives and criteria from a learner-centered paradigm. In addition, this review indicates that promotion of teamwork among students is another interface that exists between PBL and a learner-centered paradigm. Conclusion: The desire of medical schools to enhance student learning and a need to provide an environment where students construct knowledge rather than receive knowledge have encouraged many medical schools to move into a learner-centered paradigm. Implementation of a PBL curriculum can be used as a prevailing starting point to develop not only a learner-centered paradigm, but also to facilitate a smooth curricular transition from a teacher-centered paradigm to a learner-centered one.
Jain, Lakhmi; Howlett, Robert
This book presents fundamental topics and algorithms that form the core of machine learning (ML) research, as well as emerging paradigms in intelligent system design. The multidisciplinary nature of machine learning makes it a very fascinating and popular area for research. The book is aimed at students, practitioners and researchers and captures the diversity and richness of the field of machine learning and intelligent systems. Several chapters are devoted to computational learning models such as granular computing, rough sets and fuzzy sets. An account of applications of well-known learning methods in biometrics, computational stylistics, multi-agent systems, and spam classification, including an extremely well-written survey on Bayesian networks, sheds light on the strengths and weaknesses of the methods. Practical studies yield insight into challenging problems such as learning from incomplete and imbalanced data, pattern recognition of stochastic episodic events and on-line mining of non-stationary ...
Saffran, Jenny R.; Kirkham, Natasha Z.
Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812
Duus, Henrik Johannsen
The e-learning area is characterized by a magnitude of different products, systems and approaches. The variation can also be observed in differences in the views and notions of e-learning among business people, researchers and journalists. This article attempts to disentangle the area by using economic and sociological theories, the theories of marketing management and strategy, as well as practical experience gained by the author while working with leading-edge suppliers of e-learning. On this basis, a distinction between knowledge creation e-learning and knowledge transfer e-learning is drawn. The selection of which paradigm to use in the development of an e-learning strategy may prove crucial for success. Implications for the development of an e-learning strategy in businesses and learning institutions are outlined.
Seyed Mehran Kazemi
The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have mostly been developed and studied in isolation for many years, with few works attempting to understand the relationships among them or to combine them. In this article, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.
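The role of relation normalization can be illustrated with a toy computation of a PRA-style random-walk feature. This is a hedged sketch, not the authors' code; the graph, entity names, and relation names are hypothetical. Row-normalizing each relation turns its edge weights into transition probabilities, and a PRA feature for a path type is then the probability that a walk following that relation sequence reaches a given target.

```python
from collections import defaultdict

def normalize(relation):
    """Row-normalize a relation so the outgoing edge weights from each
    entity sum to 1, yielding a random-walk transition distribution."""
    out = defaultdict(float)
    for (src, _dst), w in relation.items():
        out[src] += w
    return {(s, d): w / out[s] for (s, d), w in relation.items()}

def path_probability(start, path, relations):
    """PRA-style feature: the probability that a random walk from
    `start`, following the relation sequence `path`, ends at each
    entity."""
    dist = {start: 1.0}
    for rel_name in path:
        rel = normalize(relations[rel_name])
        nxt = defaultdict(float)
        for (s, d), p in rel.items():
            if s in dist:
                nxt[d] += dist[s] * p
        dist = dict(nxt)
    return dist

# Hypothetical toy graph: who authored which paper, and paper topics.
relations = {
    "authored": {("ann", "p1"): 1.0, ("ann", "p2"): 1.0},
    "topic":    {("p1", "ml"): 1.0, ("p2", "ml"): 1.0, ("p2", "nlp"): 1.0},
}
dist = path_probability("ann", ["authored", "topic"], relations)
# Walk: ann -> p1 or p2 (0.5 each); p1 -> ml; p2 -> ml or nlp (0.5 each),
# so "ml" accumulates 0.5 + 0.25 = 0.75 and "nlp" 0.25.
print(dist)
```

The paper's result says that RLR rules applied to relations normalized in this way can reproduce exactly such path probabilities as a special case, while unnormalized relations give RLR strictly more expressive power.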
Ettlinger, Marc; Wong, Patrick C. M.
Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success. PMID:27391085
Antoniou, Mark; Ettlinger, Marc; Wong, Patrick C M
Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success.
Through ongoing rapid-fire changes in the nature of communications, the social, professional, and political landscapes of our time are rapidly transforming. But the prevailing paradigm in most schools and school systems is a relic of the industrial revolution. Schools and school systems must adopt a new paradigm for learning if they are to remain…
The capacity for assessing the degree of uncertainty in the environment relies on estimating statistics of temporally unfolding inputs. This, in turn, allows calibration of predictive and bottom-up processing, and signalling changes in temporally unfolding environmental features. In the last decade, several studies have examined how the brain codes for and responds to input uncertainty. Initial neurobiological experiments implicated frontoparietal and hippocampal systems, based largely on paradigms that manipulated distributional features of visual stimuli. However, later work in the auditory domain pointed to different systems, whose activation profiles have interesting implications for computational and neurobiological models of statistical learning (SL). This review begins by briefly recapping the historical development of ideas pertaining to the sensitivity to uncertainty in temporally unfolding inputs. It then discusses several issues at the interface of studies of uncertainty and SL. Following, it presents several current treatments of the neurobiology of uncertainty and reviews recent findings that point to principles that serve as important constraints on future neurobiological theories of uncertainty, and relatedly, SL. This review suggests it may be useful to establish closer links between neurobiological research on uncertainty and SL, considering particularly mechanisms sensitive to local and global structure in inputs, the degree of input uncertainty, the complexity of the system generating the input, learning mechanisms that operate on different temporal scales and the use of learnt information for online prediction.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
The human visual system can acquire the statistical structures in temporal sequences of object feature changes, such as changes in shape, color, and their combination. Here we investigate whether statistical learning for spatial position and shape changes operates separately or not. It is known that the visual system processes these two types of information separately; spatial information is processed in the parietal cortex, whereas object shapes and colors are detected in the temporal pathway, and, after that, we perceive the bound information from the two streams. We examined whether statistical learning operates before or after binding of the shape and the spatial information by using the “re-paired triplet” paradigm proposed by Turk-Browne, Isola, Scholl, and Treat (2008). The result showed that observers acquired combined sequences of shape and position changes, but no statistical information about the individual sequences was obtained. This finding suggests that visual statistical learning works after binding of the temporal sequences of shapes and spatial structures and would operate in the higher-order visual system; this is consistent with recent ERP (Abla & Okanoya, 2009) and fMRI (Turk-Browne, Scholl, Chun, & Johnson, 2009) studies.
Incorporation of student self-assessment (SSA) in engineering education offers opportunities to support and encourage learner-led-learning. This paper presents an innovative assessment paradigm that integrates formative, summative, and SSA to enhance student learning. The assessment innovation was implemented in a senior-level civil engineering…
A paradigm is presented for student learning outcome assessment in information systems education. Successful deployment of the paradigm is illustrated using the author's home institution. The paradigm is consistent with both the scholarship of teaching and learning and the scholarship of assessment. It is concluded that the deployment of the…
Learning is the long process of transforming information and experience into knowledge, skills, attitudes and behaviors. To bridge the wide gap between the demand for increasing higher education and comparatively limited resources, more and more educational institutes are looking into instructional technology. Use of online resources not only reduces the cost of education but also meets the needs of society. Intelligent e-learning has become one of the important channels to reach out to students across geographic boundaries. Besides this, the characteristics of e-learning have complicated the process of education, and have brought challenges to both instructors and students. This paper focuses on the discussion of different disciplines of intelligent e-learning, such as scaffolding-based e-learning, personalized e-learning, confidence-based e-learning, intelligent tutoring systems, etc., to illuminate the educational paradigm shift in intelligent e-learning systems.
Teaching and learning paradigms have attracted increased attention especially in the last decade. Immense developments of different ICT technologies and services have paved the way for alternative but effective approaches in educational processes. Many concepts of the agent technology, such as intelligence, autonomy, and cooperation, have had a direct positive impact on many of the requests imposed on modern e-learning systems and educational processes. This book presents the state-of-the-art of e-learning and tutoring systems, and discusses their capabilities and benefits that stem from integrating software agents. We hope that the presented work will be of great use to our colleagues and researchers interested in e-learning and agent technology.
Habron, Geoffrey; Goralnik, Lissy; Thorp, Laurie
Purpose: Michigan State University developed an undergraduate, academic specialization in sustainability based on the learning paradigm. The purpose of this paper is to share initial findings on assessment of systems thinking competency. Design/methodology/approach: The 15-week course served 14 mostly third and fourth-year students. Assessment of…
Schmidt, James R; Augustinova, Maria; De Houwer, Jan
In the typical color-word contingency learning paradigm, participants respond to the print color of words where each word is presented most often in one color. Learning is indicated by faster and more accurate responses when a word is presented in its usual color, relative to another color. To eliminate the possibility that this effect is driven exclusively by the familiarity of item-specific word-color pairings, we examine whether contingency learning effects can be observed also when colors are related to categories of words rather than to individual words. To this end, the reported experiments used three categories of words (animals, verbs, and professions) that were each predictive of one color. Importantly, each individual word was presented only once, thus eliminating individual color-word contingencies. Nevertheless, for the first time, a category-based contingency effect was observed, with faster and more accurate responses when a category item was presented in the color in which most of the other items of that category were presented. This finding helps to constrain episodic learning models and sets the stage for new research on category-based contingency learning.
Thiessen, Erik D
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
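The transitional-probability statistic mentioned above is easy to make concrete. Below is a minimal, illustrative Python sketch (not from any of the cited papers; the toy syllable stream and function name are invented for illustration) that estimates forward transitional probabilities from a syllable stream, the quantity commonly invoked in word-segmentation accounts:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities P(next | current):
    TP(x -> y) = count(xy) / count(x as a non-final syllable)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Toy stream built from two "words" (bi-da-ku, pa-do-ti): within-word
# transitions are perfectly predictive, across-word transitions are not.
stream = ["bi", "da", "ku", "pa", "do", "ti", "bi", "da", "ku",
          "bi", "da", "ku", "pa", "do", "ti"]
tp = transitional_probabilities(stream)
print(tp[("bi", "da")])  # within-word TP: 1.0
print(tp[("ku", "pa")])  # across a word boundary: lower
```

A segmentation heuristic would then posit word boundaries at transitions whose TP dips below that of their neighbours.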
Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally ... facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
Much research in the past two decades has documented infants’ and adults' ability to extract statistical regularities from auditory input. Importantly, recent research has extended these findings to the visual domain, demonstrating learners' sensitivity to statistical patterns within visual arrays and sequences of shapes. In this review we discuss both auditory and visual statistical learning to elucidate both the generality of and constraints on statistical learning. The review first outlines the major findings of the statistical learning literature with infants, followed by discussion of statistical learning across domains, modalities, and development. The second part of this review considers constraints on statistical learning. The discussion focuses on two categories of constraint: constraints on the types of input over which statistical learning operates and constraints based on the state of the learner. The review concludes with a discussion of possible mechanisms underlying statistical learning.
C. V. Subbulakshmi
Medical data classification is a prime data mining problem that has been discussed for a decade and has attracted several researchers around the world. Most classifiers are designed to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on the machine learning paradigm. This paradigm integrates the successful exploration mechanism called self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is evaluated on five benchmark datasets from the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.
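To make the ELM component concrete, here is a minimal NumPy sketch of an extreme learning machine (random, fixed hidden layer; output weights solved in closed form by least squares). This illustrates the general idea only, not the paper's PSO-tuned implementation: the toy data and all names are invented here, and the PSO parameter search is omitted.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Extreme Learning Machine: random hidden layer weights stay fixed;
    only the output weights are fitted, via least squares (no backprop)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label is the sign of x1 * x2 (XOR-like,
# not linearly separable), so a nonlinear hidden layer is needed.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.sign(X[:, 0] * X[:, 1])
W, b, beta = elm_fit(X, y, n_hidden=50, seed=1)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
print(acc)  # training accuracy, typically well above chance
```

In the paper's hybrid, PSO would search over such hyperparameters (e.g. the number of hidden neurons) rather than fixing them by hand.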
Batterink, Laura J; Paller, Ken A
The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
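The entrainment measure described above can be illustrated with a toy simulation. The sketch below uses a synthetic "EEG" signal and a simple power-ratio index; this is an invented stand-in, not the authors' actual analysis pipeline. It quantifies spectral power at the word rate relative to mean spectral power, and a signal containing a word-rate component scores higher than one without:

```python
import numpy as np

fs = 100.0                    # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated signal
syll_f, word_f = 3.3, 1.1     # syllable rate and trisyllabic-word rate (Hz)

def entrainment_index(signal, freq, fs):
    """Power at a target frequency relative to mean spectral power:
    a simple stand-in for word-rate vs. syllable-rate entrainment."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    k = np.argmin(np.abs(freqs - freq))  # bin nearest the target frequency
    return spec[k] / spec.mean()

rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
# Random stream: syllable-rate tracking only, plus noise.
random_stream = np.sin(2 * np.pi * syll_f * t) + noise
# Structured stream: the same, plus an emergent word-rate component.
structured = random_stream + 0.8 * np.sin(2 * np.pi * word_f * t)

print(entrainment_index(structured, word_f, fs) >
      entrainment_index(random_stream, word_f, fs))  # True
```

Tracking this index over successive exposure blocks would mirror the paper's finding that word-rate entrainment grows as learning progresses.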
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and
In the light of technology-driven social change that creates new challenges for universities, this paper considers the potential of mobile learning as a subset of e-learning to effect a paradigm shift in higher education. Universities face exponential growth in demand for higher education, significant decreases in government funding for education, a changing understanding of the nature of knowledge, changing student demographics and expectations, and global competition. At the same time un...
Blanchard, Emmanuel G.; Zanciu, Alin-Nicolae; Mahmoud, Haydar
This paper presents a computer-supported approach for providing ‘enhanced’ discovery learning in informal settings like museums. It is grounded on a combination of gesture-based interactions and artwork-embedded AIED paradigms, and is implemented through a distributed architecture.
Gibson, David; Broadley, Tania; Downie, Jill; Wallet, Peter
The UNESCO Institute for Statistics (UIS) has been measuring ICT in education since 2009, but with such rapid change in technology and its use in education, it is important now to revise the collection mechanisms to focus on how technology is being used to enhance learning and teaching. Sustainable development goal (SDG) 4, for example, moves…
Freundlieb, Nils; Ridder, Volker; Dobel, Christian; Enriquez-Geppert, Stefanie; Baumgaertner, Annette; Zwitserlood, Pienie; Gerloff, Christian; Hummel, Friedhelm C; Liuzzi, Gianpiero
Despite a growing number of studies, the neurophysiology of adult vocabulary acquisition is still poorly understood. One reason is that paradigms that can easily be combined with neuroscientific methods are rare. Here, we tested the efficiency of two paradigms for vocabulary (re-)acquisition, and compared the learning of novel words for actions and objects. Cortical networks involved in adult native-language word processing are widespread, with differences postulated between words for objects and actions. Words and what they stand for are supposed to be grounded in perceptual and sensorimotor brain circuits depending on their meaning. If there are specific brain representations for different word categories, we hypothesized behavioural differences in the learning of action-related and object-related words. Paradigm A, with the learning of novel words for body-related actions spread out over a number of days, revealed fast learning of these new action words, and stable retention up to 4 weeks after training. The single-session Paradigm B employed objects and actions. Performance during acquisition did not differ between action-related and object-related words (time*word category: p = 0.01), but the translation rate was clearly better for object-related (79%) than for action-related words (53%, p = 0.002). Both paradigms yielded robust associative learning of novel action-related words, as previously demonstrated for object-related words. Translation success differed for action- and object-related words, which may indicate different neural mechanisms. The paradigms tested here are well suited to investigate such differences with neuroscientific means. Given the stable retention and minimal requirements for conscious effort, these learning paradigms are promising for vocabulary re-learning in brain-lesioned people. In combination with neuroimaging, neuro-stimulation or pharmacological intervention, they may well advance the understanding of language learning
Tóth, Brigitta; Janacsek, Karolina; Takács, Ádám; Kóbor, Andrea; Zavecz, Zsófia; Nemeth, Dezso
Statistical learning is a fundamental mechanism of the brain, which extracts and represents regularities of our environment. Statistical learning is crucial in predictive processing, and in the acquisition of perceptual, motor, cognitive, and social skills. Although previous studies have revealed competitive neurocognitive processes underlying statistical learning, the neural communication of the related brain regions (functional connectivity, FC) has not yet been investigated. The present study aimed to fill this gap by investigating FC networks that promote statistical learning in humans. Young adults (N=28) performed a statistical learning task while 128-channel EEG was acquired. The task involved probabilistic sequences, which enabled the measurement of incidental/implicit learning of conditional probabilities. Phase synchronization in seven frequency bands was used to quantify FC between cortical regions during the first, second, and third periods of the learning task, respectively. Here we show that statistical learning is negatively correlated with FC of the anterior brain regions in slow (theta) and fast (beta) oscillations. These negative correlations increased as the learning progressed. Our findings provide evidence that dynamic antagonist brain networks serve as a hallmark of statistical learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Prior knowledge, in the form of a mental schema or framework, is viewed to facilitate the learning of new information in a range of experimental and everyday scenarios. Despite rising interest in the cognitive and neural mechanisms underlying schema-driven facilitation of new learning, few paradigms have been developed to examine this issue in…
Thomas, Cyril; Didierjean, André; Maquestiaux, François; Goujon, Annabelle
Since the seminal study by Chun and Jiang (Cognitive Psychology, 36, 28-71, 1998), a large body of research based on the contextual-cueing paradigm has shown that the cognitive system is capable of extracting statistical contingencies from visual environments. Most of these studies have focused on how individuals learn regularities found within an intratrial temporal window: A context predicts the target position within a given trial. However, Ono, Jiang, and Kawahara (Journal of Experimental Psychology, 31, 703-712, 2005) provided evidence of an intertrial implicit-learning effect when a distractor configuration in preceding trials N - 1 predicted the target location in trials N. The aim of the present study was to gain further insight into this effect by examining whether it occurs when predictive relationships are impeded by interfering task-relevant noise (Experiments 2 and 3) or by a long delay (Experiments 1, 4, and 5). Our results replicated the intertrial contextual-cueing effect, which occurred in the condition of temporally close contingencies. However, there was no evidence of integration across long-range spatiotemporal contingencies, suggesting a temporal limitation of statistical learning.
Absorption, distribution, metabolism and excretion (ADME)-related failure of drug candidates is a major issue for the pharmaceutical industry today. Prediction of ADME by in silico tools has now become an inevitable paradigm to reduce cost and enhance efficiency in pharmaceutical research. Recently, machine learning as well as nonlinear statistical tools has been widely applied to predict routine ADME end points. To achieve accurate and reliable predictions, it would be a prerequisite to understand the concepts, mechanisms and limitations of these tools. Here, we have devised a small synthetic nonlinear data set to help understand the mechanism of machine learning by 2D-visualisation. We applied six new machine learning methods to four different data sets. The methods include Naive Bayes classifier, classification and regression tree, random forest, Gaussian process, support vector machine and k nearest neighbour. The results demonstrated that ensemble learning and kernel machine displayed greater accuracy of prediction than classical methods irrespective of the data set size. The importance of interaction with the engineering field is also addressed. The results described here provide insights into the mechanism of machine learning, which will enable appropriate usage in the future.
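As a concrete illustration of why the nonlinear learners named above can outperform simpler rules on nonlinear data, here is a small self-contained sketch (toy data and a plain k-nearest-neighbour classifier written from scratch; not the study's datasets or tooling) comparing k-NN with a nearest-centroid rule on a ring-shaped classification problem:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbour classifier: Euclidean distance, majority vote."""
    preds = []
    for x in X_test:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(1 if y_train[idx].sum() > k / 2 else 0)
    return np.array(preds)

# Toy nonlinear data: inner disc (class 0) surrounded by an outer ring (class 1).
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)])
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)

# Nearest-centroid (a linear decision rule) fails here, because both class
# centroids sit near the origin; k-NN adapts to the local structure.
centroids = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
lin = np.array([np.argmin(np.linalg.norm(centroids - x, axis=1)) for x in X])
knn = knn_predict(X, y, X, k=5)
print((lin == y).mean(), (knn == y).mean())  # linear rule near chance, k-NN near 1.0
```

The same intuition, visualised on a 2D synthetic set, is what the authors use to explain the mechanics of the kernel and ensemble methods.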
Inglis, W L; Olmstead, M C; Robbins, T W
The role of the pedunculopontine tegmental nucleus (PPTg) in stimulus-reward learning was assessed by testing the effects of PPTg lesions on performance in visual autoshaping and conditioned reinforcement (CRf) paradigms. Rats with PPTg lesions were unable to learn an association between a conditioned stimulus (CS) and a primary reward in either paradigm. In the autoshaping experiment, PPTg-lesioned rats approached the CS+ and CS- with equal frequency, and the latencies to respond to the two stimuli did not differ. PPTg lesions also disrupted discriminated approaches to an appetitive CS in the CRf paradigm and completely abolished the acquisition of responding with CRf. These data are discussed in the context of a possible cognitive function of the PPTg, particularly in terms of lesion-induced disruptions of attentional processes that are mediated by the thalamus.
Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: email@example.com
The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.
Monroy, Claire; Meyer, Marlene; Gerson, Sarah; Hunnius, Sabine
Sensitivity to the regularities and structure contained within sequential, goal-directed actions is an important building block for generating expectations about the actions we observe. Until now, research on statistical learning for actions has solely focused on individual action sequences, but many actions in daily life involve multiple actors in various interaction contexts. The current study is the first to investigate the role of statistical learning in tracking regularities between actions performed by different actors, and whether the social context characterizing their interaction influences learning. That is, are observers more likely to track regularities across actors if they are perceived as acting jointly as opposed to in parallel? We tested adults and toddlers to explore whether social context guides statistical learning and, if so, whether it does so from early in development. In a between-subjects eye-tracking experiment, participants were primed with a social context cue between two actors who either shared a goal of playing together ('Joint' condition) or stated the intention to act alone ('Parallel' condition). In subsequent videos, the actors performed sequential actions in which, for certain action pairs, the first actor's action reliably predicted the second actor's action. We analyzed predictive eye movements to upcoming actions as a measure of learning, and found that both adults and toddlers learned the statistical regularities across actors when their actions caused an effect. Further, adults with high statistical learning performance were sensitive to social context: those who observed actors with a shared goal were more likely to correctly predict upcoming actions. In contrast, there was no effect of social context in the toddler group, regardless of learning performance. These findings shed light on how adults and toddlers perceive statistical regularities across actors depending on the nature of the observed social situation and the
The aim of this article is to study how policy learning has led to new understandings of ways to support renewable energies, based on experience in the wind power sector. Drawing on analysis of the literature and informed by field-work in the wind power sector in Denmark, France and the UK, it explores the extent to which policy learning over the medium term has brought us closer to models that integrate economic, environmental and societal desiderata into renewables policy in a manner congruent with the sustainable development aspirations espoused by the European Union and its constituent states. It contributes to policy theory development by arguing in favour of a new policy paradigm that reaches beyond measures to increase production capacity per se to embrace both the institutional dynamics of innovation processes and the fostering of societal engagement in implementation processes
Berk, Richard A
This textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. A continued emphasis on the implications for practice runs through the text. Among the statistical learning procedures examined are bagging, random forests, boosting, support vector machines and neural networks. Response variables may be quantitative or categorical. As in the first edition, a unifying theme is supervised learning that can be trea...
This thesis studies the performance of statistical learning methods in high energy and astrophysics, where they have become a standard tool in physics analysis. They are used to perform complex classification or regression by intelligent pattern recognition. This kind of artificial intelligence is achieved by the principle "learning from examples": the examples describe the relationship between detector events and their classification. The application of statistical learning methods is motivated either by the lack of knowledge about this relationship or by tight time restrictions. In the first case, learning from examples is the only possibility, since no theory is available which would allow building an algorithm in the classical way. In the second case, a classical algorithm exists but is too slow to cope with the time restrictions. It is therefore replaced by a pattern recognition machine which implements a fast statistical learning method. But even in applications where some kind of classical algorithm had done a good job, statistical learning methods have convinced by their remarkable performance. This thesis gives an introduction to statistical learning methods and how they are applied correctly in physics analysis. Their flexibility and high performance will be discussed by showing intriguing results from high energy and astrophysics. These include the development of highly efficient triggers, powerful purification of event samples and exact reconstruction of hidden event parameters. The presented studies also show typical problems in the application of statistical learning methods. They should be only second choice in all cases where an algorithm based on prior knowledge exists. Some examples in physics analyses are found where these methods are not used in the right way, leading either to wrong predictions or bad performance. Physicists also often hesitate to profit from these methods because they fear that statistical learning methods cannot be controlled in a
Orlov A. I.
The article is devoted to the methods of analysis of statistical and expert data in problems of economics and management that are discussed in the framework of scientific specialization "Mathematical methods of economy", including organizational-economic and economic-mathematical modeling, econometrics and statistics, as well as economic aspects of decision theory, systems analysis, cybernetics, operations research. The main provisions of the new paradigm of this scientific and practical fiel...
Atwood, Jan R.; Dinham, Sarah M.
Metatheoretical analysis of Ausubel's Theory of Meaningful Verbal Learning and Gagne's Theory of Instruction using the Dickoff and James paradigm produced two instructional systems for basic statistics. The systems were tested with a pretest-posttest control group design utilizing students enrolled in an introductory-level graduate statistics…
Quan, Hui; Chen, Xun; Zhang, Ji; Zhao, Peng-Liang
The paradigm for new drug development has changed dramatically over the last decade. Even though new technology increases efficiency in many aspects, partially due to much more stringent regulatory requirements, it now actually takes longer and costs more to develop a new drug. To deal with this challenge, some initiatives have been taken by the pharmaceutical industry. These initiatives include exploring emerging markets, conducting global trials and building research and development centers in emerging markets to curb spending. A particular current trend is that major pharmaceutical companies offshore part of their biostatistical support to China. In this paper, we first discuss the skill set for trial statisticians in the new era. We then elaborate on some of the approaches for acquiring statistical talent and capacity in general, particularly in emerging markets. We also make some recommendations on the use of the PDUFA strategy and on collaborations among industry, health authorities and academia from an emerging-market statistical perspective. © 2013.
Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
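The cross-situational mechanism described above can be sketched in a few lines: a learner who simply tallies word-object co-occurrences across individually ambiguous trials can recover the correct lexicon. A minimal numpy sketch; the pseudowords, objects, and trial structure below are invented for illustration and are not the authors' stimuli:

```python
import numpy as np

# Hypothetical mini-corpus (invented pseudowords/objects): each trial pairs
# two spoken words with two visible objects, so no single trial reveals
# which word goes with which object.
trials = [
    (["bosa", "gasser"], ["dog", "cup"]),
    (["bosa", "manu"], ["dog", "shoe"]),
    (["gasser", "manu"], ["cup", "shoe"]),
    (["bosa", "gasser"], ["cup", "dog"]),
]

words = sorted({w for ws, _ in trials for w in ws})
objects = sorted({o for _, objs in trials for o in objs})

# Tally word-object co-occurrences across trials: the intended pairings
# co-occur on every trial, spurious pairings only on some.
counts = np.zeros((len(words), len(objects)))
for ws, objs in trials:
    for w in ws:
        for o in objs:
            counts[words.index(w), objects.index(o)] += 1

# A learner that picks each word's most frequent co-occurring object
# recovers the intended lexicon despite within-trial ambiguity.
lexicon = {w: objects[int(np.argmax(counts[i]))] for i, w in enumerate(words)}
```

The same count matrix could support the category-level regularities tested in the experiments, since shared syllables or features accumulate counts across category members.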
von Luxburg, Ulrike; Schoelkopf, Bernhard
Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms. In this article we attempt to give a gentle, non-technical overview of the key ideas and insights of statistical learning theory. We target a broad audience, not necessarily machine learning researchers. This paper can serve as a starting point for people who want to get an overview of the field before diving into technical details.
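One of the key insights such an overview covers is that low empirical risk alone guarantees nothing without capacity control: a high-capacity learner can fit pure-noise labels perfectly yet generalize at chance. A minimal illustration using a 1-nearest-neighbour memorizer (all data and names here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs carry no information about the labels: labels are coin flips.
X_train = rng.normal(size=(50, 5))
y_train = rng.integers(0, 2, size=50)
X_test = rng.normal(size=(200, 5))
y_test = rng.integers(0, 2, size=200)

def one_nn_predict(X):
    # 1-nearest-neighbour classifier: pure memorization of the training set.
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return y_train[np.argmin(d, axis=1)]

# Empirical risk is zero (every training point is its own nearest
# neighbour), yet test accuracy hovers at chance: training error alone
# cannot certify generalization when capacity is unbounded.
train_acc = (one_nn_predict(X_train) == y_train).mean()
test_acc = (one_nn_predict(X_test) == y_test).mean()
```

Bounding the gap between these two quantities in terms of a capacity measure (e.g. the VC dimension) is precisely what statistical learning theory formalizes.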
Cohn, David A.; Ghahramani, Zoubin; Jordan, Michael I.
Active Learning with Statistical Models. MIT Department of Brain and Cognitive Sciences, A.I. Memo No. 1522, C.B.C.L. Paper No. 110, January 1995. Supported by NSF grants ASC-9217041 and CDA-9309300. Keywords: AI, MIT, artificial intelligence, active learning, queries, locally weighted regression, LOESS, mixtures of Gaussians.
Providing a broad but in-depth introduction to neural networks and machine learning in a statistical framework, this book offers a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered, with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...
Yu, Lean; Wang, Shouyang; Lai, Kin Keung
In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series is first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model is used to model each of the extracted IMFs, so that the tendencies of these IMFs can be accurately predicted. Finally, the prediction results of all IMFs are combined with an adaptive linear neural network (ALNN) to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. The empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm.
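The decompose-model-recombine pipeline described above can be sketched schematically. Note the stand-ins: an exponential moving average replaces true EMD sifting, a least-squares AR(3) forecaster replaces each per-IMF feed-forward network, and a plain sum replaces the adaptive linear combination network, so this is a shape-of-the-algorithm sketch on synthetic data, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "price" series: slow trend plus faster oscillation plus noise.
t = np.arange(300)
series = 0.05 * t + 2 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 0.2, t.size)

def decompose(x, alpha=0.1):
    # Stand-in for EMD: split into a slow exponentially-smoothed trend and
    # a fast residual (real EMD extracts several IMFs by sifting).
    slow = np.empty_like(x)
    slow[0] = x[0]
    for i in range(1, len(x)):
        slow[i] = alpha * x[i] + (1 - alpha) * slow[i - 1]
    return [x - slow, slow]

def fit_ar_forecast(component, lags=3):
    # Stand-in for the per-IMF neural network: a least-squares AR(lags)
    # one-step-ahead forecaster trained on the component itself.
    n = len(component)
    X = np.column_stack([component[i:n - lags + i] for i in range(lags)])
    y = component[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return component[-lags:] @ coef  # one-step-ahead forecast

# Ensemble output: forecast each component separately, then recombine
# (the paper learns the combination with an ALNN; here we simply sum).
components = decompose(series)
forecast = sum(fit_ar_forecast(c) for c in components)
```

The decomposition is exact by construction (the components sum back to the original series), mirroring the completeness property of EMD that makes the recombination step well-defined.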
Potter, Christine E; Wang, Tianlin; Saffran, Jenny R
Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.
With recent advances in health systems, the amount of health data is expanding rapidly in various formats. This data originates from many new sources including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.
Blended learning combines face-to-face class-based and online teaching and learning delivery in order to increase flexibility in how, when, and where students study and learn. The development, integration, and promotion of blended learning in frameworks of curriculum design can optimize the opportunities afforded by information and communication technologies and, concomitantly, accommodate a broad range of student learning styles. This study critically reviews the potential benefits of blended learning as a progressive educative paradigm for the teaching of biomedical science and evaluates the opportunities that blended learning offers for the delivery of accessible, flexible and sustainable teaching and learning experiences. A central tenet of biomedical science education at the tertiary level is the development of comprehensive hands-on practical competencies and technical skills (many of which require laboratory-based learning environments), and it is advanced that a blended learning model, which combines face-to-face synchronous teaching and learning activities with asynchronous online teaching and learning activities, effectively creates an authentic, enriching, and student-centred learning environment for biomedical science. Lastly, a blended learning design for introductory biochemistry is described as an effective example of integrating face-to-face and online teaching, learning and assessment activities within the teaching domain of biomedical science. DOI: 10.18870/hlrc.v3i4.169
Thordis Marisa Neger
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning, and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, the amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
Richtsmeier, Peter T; Goffman, Lisa
What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.
Dunn, Peter K.; Carey, Michael D.; Richardson, Alice M.; McDonald, Christine
Learning statistics requires learning the language of statistics. Statistics draws upon words from general English, mathematical English, discipline-specific English and words used primarily in statistics. This leads to many linguistic challenges in teaching statistics and the way in which the language is used in statistics creates an extra layer…
Lauri Juhani Kurkela
The implementation of social media in learning, teaching and cooperation is an innovation process which has implications at many levels in networking universities. eLearning developers and educational designers need to be aware of social-media-related technological prospects to be able to determine how to benefit from new possibilities. They also need to be aware of related pedagogical possibilities, competences and attitudes among students, teachers and tutors. Soft Systems Methodology (SSM) has been applied to investigate the problem area more deeply. One can see three development challenges at each level: (1) paradigms and paradigm shifts, (2) teaching and learning competences and related culture, and (3) infrastructure- and technology-related services and innovations. The Virtual Campus for Digital Students (ViCaDiS) project is used to concretise some features of the systemic approach of SSM. As a result of the SSM analysis, one can find a useful framework for starting to analyse development challenges in the context of one university or universities working together.
Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze
Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.
Lelli, Maria Barbara
On the strength of the literature analysis and the Emilia-Romagna Region experience, we suggest a reflection on workplace-based learning that goes beyond the analysis of the effectiveness of specific didactic methodologies and aspects related to Continuing Medical Education. The issue of health education and training is viewed from a wider perspective that integrates the three learning dimensions (formal, non-formal and informal). In such a perspective, workplace-based learning becomes an essential paradigm for reshaping the explicit knowledge conveyed in formal contexts and for emphasizing the informal contexts where innovation is generated.
Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: firstname.lastname@example.org; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.
Haley, M. Ryan
This paper describes a flexible paradigm for creating an electronic "Core Concepts Plus" textbook (CCP-text) for a course in Introductory Business and Economic Statistics (IBES). In general terms, "core concepts" constitute the intersection of IBES course material taught by all IBES professors at the author's university. The…
Hayashibe, Mitsuhiro; Shimoda, Shingo
A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which incurs an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, cost-function-based optimization has commonly been used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence for the existence of a physiological process similar to such a mathematical optimization in our central nervous system. In this study, we aim to find a more primitive computational mechanism with a modular configuration to realize adaptability and optimality without prior knowledge of system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. The paradigm is applied to a vertical cyclic reaching task which involves complex interaction torques. We evaluated whether the proposed paradigm can learn how to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The case of feedback control is largely affected by the interaction torques. In contrast, with tacit learning the trajectory is corrected over time toward optimal solutions. Energy-efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution of the feedforward controller is augmented and that of the feedback controller is significantly minimized, down to 12% for no load at hand and 16% for a 0.5 kg load condition. The proposed paradigm could provide an optimization process in redundant systems with a dynamic-model-free and cost-function-free approach.
Ma, Ning; Yu, Angela J
Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.
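The across-trial learning component can be caricatured with a conjugate Beta-Bernoulli tracker in place of the paper's Bayesian hidden Markov model (which additionally lets the stop frequency drift over time). The sketch below, with invented parameters and a simplified linear RT rule, reproduces the qualitative prediction that expected P(stop) tracks experienced stop frequency and lengthens go RTs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified stand-in for the paper's model: trial types are drawn i.i.d.
# from a fixed stop frequency, and the subject tracks P(stop) with a
# conjugate Beta-Bernoulli posterior.
true_p_stop = 0.25
is_stop = rng.random(500) < true_p_stop

a, b = 1.0, 1.0  # Beta(1, 1) prior over P(stop)
expected, rts = [], []
for stop in is_stop:
    p_hat = a / (a + b)  # posterior mean before observing this trial
    expected.append(p_hat)
    # Qualitative prediction tested in the paper: go RTs lengthen when
    # the subject expects a stop signal (invented linear rule + noise).
    rts.append(300 + 200 * p_hat + rng.normal(0, 5))
    a, b = a + stop, b + (1 - stop)  # conjugate posterior update

# The tracker converges toward the true stop frequency, and trial-to-trial
# RT fluctuations covary with the expectation, as in the behavioral data.
```

The full model replaces the fixed Bernoulli rate with a hidden Markov chain over P(stop) and adds an optimal stochastic controller for the within-trial decision; this sketch only captures the expectation-to-RT link.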
Prior knowledge, in the form of a mental schema or framework, is viewed to facilitate the learning of new information in a range of experimental and everyday scenarios. Despite rising interest in the cognitive and neural mechanisms underlying schema-driven facilitation of new learning, few paradigms have been developed to examine this issue in humans. Here we develop a multiphase experimental scenario aimed at characterizing schema-based effects in the context of a paradigm that has been very widely used across species, the transitive inference task. We show that an associative schema, comprised of prior knowledge of the rank positions of familiar items in the hierarchy, has a marked effect on transitivity performance and the development of relational knowledge of the hierarchy that cannot be accounted for by more general changes in task strategy. Further, we show that participants are capable of deploying prior knowledge to successful effect under surprising conditions (i.e., when corrective feedback is totally absent), but only when the associative schema is robust. Finally, our results provide insights into the cognitive mechanisms underlying such schema-driven effects, and suggest that new hierarchy learning in the transitive inference task can occur through a contextual transfer mechanism that exploits the structure of associative experiences. PMID:23782509
Complex machine learning tools, such as deep neural networks and gradient boosting algorithms, are increasingly being used to construct powerful discriminative features for High Energy Physics analyses. These methods are typically trained with simulated or auxiliary data samples by optimising some classification or regression surrogate objective. The learned feature representations are then used to build a sample-based statistical model to perform inference (e.g. interval estimation or hypothesis testing) over a set of parameters of interest. However, the effectiveness of this approach can be reduced by the presence of known uncertainties that cause differences between training and experimental data, included in the statistical model via nuisance parameters. This work presents an end-to-end algorithm, which leverages existing deep learning technologies but directly aims to produce inference-optimal sample-summary statistics. By including the statistical model and a differentiable approximation of ...
In light of technology-driven social change that creates new challenges for universities, this paper considers the potential of mobile learning, as a subset of e-learning, to effect a paradigm shift in higher education. Universities face exponential growth in demand for higher education, significant decreases in government funding for education, a change in the understanding of the nature of knowledge, changing student demographics and expectations, and global competition. At the same time, untethered mobile telephony is connecting large numbers of potential learners to communications networks. A review of some empirical literature on the current status of mobile learning, which explores alternatives to help universities fulfil their core functions of storing, processing, and disseminating knowledge that can be applied to real-life problems, is followed by an examination of the strengths and weaknesses of increased connectivity to mobile communications networks in supporting constructivist, self-directed, quality interactive learning for increasingly mobile learners. This paper also examines whether mobile learning can align the developing technology with changing student expectations, and the implications of such an alignment for teaching and institutional strategies. Technologies considered include mobile computing and technology, wireless laptops, hand-held PDAs, and mobile telephony.
Anderson, Sarah J; Hecker, Kent G; Krigolson, Olave E; Jamniczky, Heather A
In anatomy education, a key hurdle to engaging in higher-level discussion in the classroom is recognizing and understanding the extensive terminology used to identify and describe anatomical structures. Given the time-limited classroom environment, seeking methods to impart this foundational knowledge to students in an efficient manner is essential. Just-in-Time Teaching (JiTT) methods incorporate pre-class exercises (typically online) meant to establish foundational knowledge in novice learners so subsequent instructor-led sessions can focus on deeper, more complex concepts. Determining how best to design and assess pre-class exercises requires a detailed examination of learning and retention in an applied educational context. Here we used electroencephalography (EEG) as a quantitative dependent variable to track learning and examine the efficacy of JiTT activities to teach anatomy. Specifically, we examined changes in the amplitude of the N250 and reward positivity event-related brain potential (ERP) components alongside behavioral performance as novice students participated in a series of computerized reinforcement-based learning modules designed to teach neuroanatomical structures. We found that as students learned to identify anatomical structures, the amplitude of the N250 increased and the reward positivity amplitude decreased in response to positive feedback. On both retention and transfer exercises, when learners successfully remembered and translated their knowledge to novel images, the amplitude of the reward positivity remained decreased compared to early learning. Our findings suggest that ERPs can be used as a tool to track learning, retention, and transfer of knowledge, and that employing a reinforcement learning paradigm is an effective educational approach for developing anatomical expertise.
This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...
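Cross-validation, one of the machine learning concepts the book illustrates with Python, can be demonstrated with numpy alone: hold out each fold in turn, fit on the rest, and compare held-out error across model complexities. A small sketch; the data, degrees, and fold scheme are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Data from a quadratic law with noise; model complexity (polynomial
# degree) is selected by 5-fold cross-validation.
x = rng.uniform(-1, 1, 60)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.1, x.size)

def cv_mse(degree, k=5):
    # Average held-out mean squared error over k folds.
    folds = np.array_split(np.arange(x.size), k)
    errs = []
    for f in folds:
        train = np.setdiff1d(np.arange(x.size), f)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[f]) - y[f]) ** 2))
    return float(np.mean(errs))

# Degree 1 underfits the quadratic badly; cross-validation flags it,
# while degrees that can represent the true law score far lower.
scores = {d: cv_mse(d) for d in (1, 2, 8)}
```

The same loop visualizes the bias/variance trade-off the book discusses: held-out error combines the bias of too-simple models with the variance of too-flexible ones.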
Ensuring that the results of data processing in an experiment are not affected by the presence of outliers is a relevant issue for statistical control and learning studies. Learning schemes should therefore be tested for their capacity to handle outliers in the observed training set, so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state-space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
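One feasible robust error criterion of the kind discussed above is a Huber-style clipped-gradient loss: unlike squared error, whose gradient grows linearly with the residual, a clipped gradient bounds each outlier's pull on the fit. A minimal numpy sketch with invented data, where plain gradient descent on a linear model stands in for the paper's Kalman-filter and stochastic-approximation machinery:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear data with a few gross outliers injected into the training set.
x = np.linspace(0, 1, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, x.size)
y[[5, 17, 30]] += 8.0  # outliers

X = np.column_stack([x, np.ones_like(x)])  # [slope, intercept] design

def fit(loss_grad, steps=2000, lr=0.05):
    # Gradient descent on mean loss; loss_grad maps residuals to the
    # per-sample derivative of the loss w.r.t. the prediction.
    w = np.zeros(2)
    for _ in range(steps):
        r = X @ w - y
        w -= lr * X.T @ loss_grad(r) / len(y)
    return w

# Squared error: gradient proportional to the residual, so the three
# outliers drag the fit. Clipped (Huber-style): bounded influence.
sq = fit(lambda r: r)
huber = fit(lambda r: np.clip(r, -0.5, 0.5))
```

With the true parameters being slope 2 and intercept 1, the clipped-gradient fit stays close to them while the squared-error fit is pulled noticeably toward the outliers, which is the bias-reduction property robust criteria are designed to deliver.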
Jiang, Yuhong V; Swallow, Khena M
Statistical learning (learning environmental regularities to guide behavior) likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Anderson, Sarah J.; Hecker, Kent G.; Krigolson, Olave E.; Jamniczky, Heather A.
In anatomy education, a key hurdle to engaging in higher-level discussion in the classroom is recognizing and understanding the extensive terminology used to identify and describe anatomical structures. Given the time-limited classroom environment, seeking methods to impart this foundational knowledge to students in an efficient manner is essential. Just-in-Time Teaching (JiTT) methods incorporate pre-class exercises (typically online) meant to establish foundational knowledge in novice learners so subsequent instructor-led sessions can focus on deeper, more complex concepts. Determining how best to design and assess pre-class exercises requires a detailed examination of learning and retention in an applied educational context. Here we used electroencephalography (EEG) as a quantitative dependent variable to track learning and examine the efficacy of JiTT activities to teach anatomy. Specifically, we examined changes in the amplitude of the N250 and reward positivity event-related brain potential (ERP) components alongside behavioral performance as novice students participated in a series of computerized reinforcement-based learning modules to teach neuroanatomical structures. We found that as students learned to identify anatomical structures, the amplitude of the N250 increased and reward positivity amplitude decreased in response to positive feedback. On both retention and transfer exercises, when learners successfully remembered and translated their knowledge to novel images, the amplitude of the reward positivity remained decreased compared to early learning. Our findings suggest ERPs can be used as a tool to track learning, retention, and transfer of knowledge and that employing the reinforcement learning paradigm is an effective educational approach for developing anatomical expertise. PMID:29467638
This paper outlines a problem we have found in our own practice when developing new researchers at post-graduate level. When students begin research training and practice, they are often confused between different levels of thinking when faced with methods, methodologies and research paradigms. We argue that this confusion arises from the way research methods are taught, embedded and embodied in educational systems. We set out new ways of thinking about levels of research in the field of learning technology. We argue for a problem-driven, pragmatic approach to research and consider the range of methods that can be applied as diverse lenses to particular research problems. The problem of developing a coherent approach to research and research methods is not confined to research in learning technology because it is arguably a problem for all educational research, and one that also affects an even wider range of disciplinary and interdisciplinary subject areas. For the purposes of this paper we will discuss the problem in relation to research in learning technologies and make a distinction between developmental and basic research that we think is particularly relevant in this field. The paradigms of research adopted have real consequences for the ways research problems are conceived and articulated, and the ways in which research is conducted. This has become an even more pressing concern in the challenging funding climate that researchers now face. We argue that there is not a simple one-to-one relationship between levels, and most particularly that there usually is not a direct association of particular methods with either a philosophical outlook or paradigm of research. We conclude by recommending a pluralist approach to thinking about research problems and we illustrate this with the suggestion that we should encourage researchers to think in terms of counterpositives. If the researcher suggests one way of doing research in an
Lany, Jill; Shoaib, Amber; Thompson, Abbie; Estes, Katharine Graf
Infants are adept at learning statistical regularities in artificial language materials, suggesting that the ability to learn statistical structure may support language development. Indeed, infants who perform better on statistical learning tasks tend to be more advanced in parental reports of infants' language skills. Work with adults suggests…
Francisco Javier Tapia Moreno
Mobile learning (m-learning) allows a person to study using a mobile computing device anywhere and anytime. In this work we report the development of learning objects for teaching introductory statistics using cellular phones.
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to
Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T
(1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) To identify systematically 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist.
METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique.
METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search.
METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken.
METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers, such as study design, study size, number of operators and the statistical method used.
METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis; generic techniques.
METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second
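One of the simpler techniques for single-operator series of the kind described above is fitting a parametric learning curve to consecutive case times. The sketch below is a hedged illustration, not the review's own analysis: the data are synthetic and the parameter values (120 minutes initial time, learning rate 0.3) are invented. It fits the classic power-law form t(n) = a·n^(-b), where b > 0 indicates learning.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_curve(n, a, b):
    """Power-law learning curve: time for the n-th case is a * n**(-b)."""
    return a * n ** (-b)

# Synthetic series of 50 consecutive operation times with multiplicative noise,
# standing in for a single surgeon's case series.
rng = np.random.default_rng(1)
cases = np.arange(1, 51, dtype=float)
times = power_curve(cases, 120.0, 0.3) * rng.lognormal(sigma=0.1, size=cases.size)

# Nonlinear least squares recovers the curve parameters from the series.
(a_hat, b_hat), _ = curve_fit(power_curve, cases, times, p0=(100.0, 0.2))
print(f"fitted initial time a = {a_hat:.1f} min, learning rate b = {b_hat:.2f}")
```

A plateau model or a more flexible hierarchical model would be needed for multiple-operator (complex) series, which is exactly the gap the review's more advanced techniques address.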
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
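The maximizing-versus-matching distinction above can be made concrete with a small simulation. In this hedged sketch the three-symbol transition table is invented for illustration, not taken from the study's actual Markov sequences: a "maximizer" always predicts the most probable successor of the current symbol, while a "matcher" samples predictions in proportion to the sequence statistics.

```python
import random

random.seed(0)

# Hypothetical first-order (context-based) statistics: the probability of
# the next symbol depends on the current symbol.
transitions = {
    "A": {"B": 0.8, "C": 0.2},
    "B": {"C": 0.8, "A": 0.2},
    "C": {"A": 0.8, "B": 0.2},
}

def sample_next(current):
    """Draw the next symbol according to the transition probabilities."""
    r, acc_p = random.random(), 0.0
    for sym, p in transitions[current].items():
        acc_p += p
        if r < acc_p:
            return sym
    return sym  # guard against floating-point round-off

# Generate a long sequence from the Markov process.
seq = ["A"]
for _ in range(5000):
    seq.append(sample_next(seq[-1]))

def maximize(current):  # always predict the most probable successor
    return max(transitions[current], key=transitions[current].get)

def match(current):     # predict by sampling in proportion to the statistics
    return sample_next(current)

acc = {}
for name, strategy in (("maximizing", maximize), ("matching", match)):
    hits = sum(strategy(a) == b for a, b in zip(seq, seq[1:]))
    acc[name] = hits / (len(seq) - 1)
    print(f"{name}: prediction accuracy = {acc[name]:.2f}")
```

With these probabilities, maximizing converges on roughly 80% accuracy while matching settles near 68% (0.8² + 0.2²), which is why maximizing is the better decision strategy even though matching reproduces the exact sequence statistics.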
Meyer, Meredith; Baldwin, Dare
Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults-namely, those more successful at identifying actions that had been seen more frequently than comparison sequences-were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.
Learners segment potential lexical units from syllable streams when statistically variable transitional probabilities between adjacent syllables are the only cues to word boundaries. Here we examine the nature of the representations that result from statistical learning by assessing learners’ ability to generalize across acoustically different stimuli. In three experiments, we investigate limitations on the outcome of statistical learning by considering two possibilities: that the products of statistical segmentation processes are abstract and generalizable representations, or, alternatively, that products of statistical learning are stimulus-bound and restricted to perceptually similar instances. In Experiment 1, learners segmented units from statistically predictable streams, and recognized these units when they were acoustically transformed by temporal reversals. In Experiment 2, learners were able to segment units from temporally reversed syllable streams, but were only able to generalize in conditions of mild acoustic transformation. In Experiment 3, learners were able to recognize statistically segmented units after a voice change but were unable to do so when the novel voice was mildly distorted. Together these results suggest that representations that result from statistical learning can be abstracted to some degree, but not in all listening conditions.
Hiedemann, Bridget; Jones, Stacey M.
We compare the effectiveness of academic service learning to that of case studies in an undergraduate introductory business statistics course. Students in six sections of the course were assigned either an academic service learning project (ASL) or business case studies (CS). We examine two learning outcomes: students' performance on the final…
Bigbee, Allison J.; Crown, Eric D.; Ferguson, Adam R.; Roy, Roland R.; Tillakaratne, Niranjala J.K.; Grau, James W.; Edgerton, V. Reggie
The effect of two chronic motor training paradigms on the ability of the lumbar spinal cord to perform an acute instrumental learning task was examined in neonatally (postnatal day 5; P5) spinal cord transected (i.e., spinal) rats. At ∼P30, rats began either unipedal hindlimb stand training (Stand-Tr; 20-25 min/day, 5 days/wk), or bipedal hindlimb step training (Step-Tr; 20 min/day; 5 days/wk) for 7 wks. Non-trained spinal rats (Non-Tr) served as controls. After 7 wks all groups were tested on the flexor-biased instrumental learning paradigm. We hypothesized that 1) Step-Tr rats would exhibit an increased capacity to learn the flexor-biased task relative to Non-Tr subjects, as locomotion involves repetitive training of the tibialis anterior (TA), the ankle flexor whose activation is important for successful instrumental learning, and 2) Stand-Tr rats would exhibit a deficit in acute motor learning, as unipedal training activates the ipsilateral ankle extensors, but not flexors. Results showed no differences in acute learning potential between Non-Tr and Step-Tr rats, while the Stand-Tr group showed a reduced capacity to learn the acute task. Further investigation of the Stand-Tr group showed that, while both the ipsilateral and contralateral hindlimbs were significantly impaired in their acute learning potential, the contralateral, untrained hindlimbs exhibited significantly greater learning deficits. These results suggest that different types of chronic peripheral input may have a significant impact on the ability to learn a novel motor task, and demonstrate the potential for experience-dependent plasticity in the spinal cord in the absence of supraspinal connectivity. PMID:17434606
Schrider, Daniel R.; Kern, Andrew D.
As population genomic datasets grow in size, researchers are faced with the daunting task of making sense of a flood of information. To keep pace with this explosion of data, computational methodologies for population genetic inference are rapidly being developed to best utilize genomic sequence data. In this review we discuss a new paradigm that has emerged in computational population genomics: that of supervised machine learning (ML). We review the fundamentals of ML, discuss recent applications of supervised ML to population genetics that outperform competing methods, and describe promising future directions in this area. Ultimately, we argue that supervised ML is an important and underutilized tool that has considerable potential for the world of evolutionary genomics. PMID:29331490
Lany, Jill; Gómez, Rebecca L
Probabilistically-cued co-occurrence relationships between word categories are common in natural languages but difficult to acquire. For example, in English, determiner-noun and auxiliary-verb dependencies both involve co-occurrence relationships, but determiner-noun relationships are more reliably marked by correlated distributional and phonological cues, and appear to be learned more readily. We tested whether experience with co-occurrence relationships that are more reliable promotes learning those that are less reliable using an artificial language paradigm. Prior experience with deterministically-cued contingencies did not promote learning of less reliably-cued structure, nor did prior experience with relationships instantiated in the same vocabulary. In contrast, prior experience with probabilistically-cued co-occurrence relationships instantiated in different vocabulary did enhance learning. Thus, experience with co-occurrence relationships sharing underlying structure but not vocabulary may be an important factor in learning grammatical patterns. Furthermore, experience with probabilistically-cued co-occurrence relationships, despite their difficultly for naïve learners, lays an important foundation for learning novel probabilistic structure.
Thiessen, Erik D; Kronstein, Alexandra T; Hufnagle, Daniel G
The term statistical learning in infancy research originally referred to sensitivity to transitional probabilities. Subsequent research has demonstrated that statistical learning contributes to infant development in a wide array of domains. The range of statistical learning phenomena necessitates a broader view of the processes underlying statistical learning. Learners are sensitive to a much wider range of statistical information than the conditional relations indexed by transitional probabilities, including distributional and cue-based statistics. We propose a novel framework that unifies learning about all of these kinds of statistical structure. From our perspective, learning about conditional relations outputs discrete representations (such as words). Integration across these discrete representations yields sensitivity to cues and distributional information. To achieve sensitivity to all of these kinds of statistical structure, our framework combines processes that extract segments of the input with processes that compare across these extracted items. In this framework, the items extracted from the input serve as exemplars in long-term memory. The similarity structure of those exemplars in long-term memory leads to the discovery of cues and categorical structure, which guides subsequent extraction. The extraction and integration framework provides a way to explain sensitivity to both conditional statistical structure (such as transitional probabilities) and distributional statistical structure (such as item frequency and variability), and also a framework for thinking about how these different aspects of statistical learning influence each other. 2013 APA, all rights reserved
Thiessen, Erik D
All theories of language development suggest that learning is constrained. However, theories differ on whether these constraints arise from language-specific processes or have domain-general origins such as the characteristics of human perception and information processing. The current experiments explored constraints on statistical learning of patterns, such as the phonotactic patterns of an infant's native language. Infants in these experiments were presented with a visual analog of a phonotactic learning task used by J. R. Saffran and E. D. Thiessen (2003). Saffran and Thiessen found that infants' phonotactic learning was constrained such that some patterns were learned more easily than other patterns. The current results indicate that infants' learning of visual patterns shows the same constraints as infants' learning of phonotactic patterns. This is consistent with theories suggesting that constraints arise from domain-general sources and, as such, should operate over many kinds of stimuli in addition to linguistic stimuli. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc.
Following the learned helplessness paradigm, I assessed in this study the effects of global and specific attributions for failure on the generalization of performance deficits in a dissimilar situation. Helplessness training consisted of experience with noncontingent failures on four cognitive discrimination problems attributed to either global or specific causes. Experiment 1 found that performance in a dissimilar situation was impaired following exposure to globally attributed failure. Experiment 2 examined the behavioral effects of the interaction between stable and global attributions of failure. Exposure to unsolvable problems resulted in reduced performance in a dissimilar situation only when failure was attributed to global and stable causes. Finally, Experiment 3 found that learned helplessness deficits were a product of the interaction of global and internal attribution. Performance deficits following unsolvable problems were recorded when failure was attributed to global and internal causes. Results were discussed in terms of the reformulated learned helplessness model.
Carstensen, Martin B.; Matthijs, Matthias
Despite the profound impact of Peter Hall’s approach to policy paradigms and social learning, there is a burgeoning consensus that transposing a rudimentary ‘Kuhnian’ understanding of paradigms into the context of public policy making leads to a notion of punctuated equilibrium style shifts as the only … in the study of policy paradigms. To demonstrate the general applicability of our framework, the paper examines the evolution of British macroeconomic policy making since 1990. We show that various Prime Ministers and their Chancellors were able to reinterpret and redefine the dominant neoliberal understanding…
Guikema, Seth D.
Probabilistic risk analysis has historically been developed for situations in which measured data about the overall reliability of a system are limited and expert knowledge is the best source of information available. There continue to be a number of important problem areas characterized by a lack of hard data. However, in other important problem areas the emergence of information technology has transformed the situation from one characterized by little data to one characterized by data overabundance. Natural disaster risk assessments for events impacting large-scale, critical infrastructure systems such as electric power distribution systems, transportation systems, water supply systems, and natural gas supply systems are important examples of problems characterized by data overabundance. There are often substantial amounts of information collected and archived about the behavior of these systems over time. Yet it can be difficult to effectively utilize these large data sets for risk assessment. Using this information for estimating the probability or consequences of system failure requires a different approach and analysis paradigm than risk analysis for data-poor systems does. Statistical learning theory, a diverse set of methods designed to draw inferences from large, complex data sets, can provide a basis for risk analysis for data-rich systems. This paper provides an overview of statistical learning theory methods and discusses their potential for greater use in risk analysis
Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax and the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by…
McGrath, April L.
This study examined the experiences of and challenges faced by students when completing a statistics course. As part of the requirement for this course, students completed a learning check-in, which consisted of an individual meeting with the instructor to discuss questions and the completion of a learning reflection and study plan. Forty…
Macher, Daniel; Paechter, Manuela; Papousek, Ilona; Ruggeri, Kai
The present study investigated the relationship between statistics anxiety, individual characteristics (e.g., trait anxiety and learning strategies), and academic performance. Students enrolled in a statistics course in psychology (N = 147) filled in a questionnaire on statistics anxiety, trait anxiety, interest in statistics, mathematical…
Impairments in statistical learning might be a common deficit among individuals with Specific Language Impairment (SLI) and Autism Spectrum Disorder (ASD). Using meta-analysis, we examined statistical learning in SLI (14 studies, 15 comparisons) and ASD (13 studies, 20 comparisons) to evaluate this hypothesis. Effect sizes were examined as a function of diagnosis across multiple statistical learning tasks (Serial Reaction Time, Contextual Cueing, Artificial Grammar Learning, Speech Stream, Observational Learning, Probabilistic Classification). Individuals with SLI showed deficits in statistical learning relative to age-matched controls, g = .47, 95% CI [.28, .66], p < .001. In contrast, statistical learning was intact in individuals with ASD relative to controls, g = –.13, 95% CI [–.34, .08], p = .22. Effect sizes did not vary as a function of task modality or participant age. Our findings inform debates about overlapping social-communicative difficulties in children with SLI and ASD by suggesting distinct underlying mechanisms. In line with the procedural deficit hypothesis (Ullman & Pierpont, 2005), impaired statistical learning may account for phonological and syntactic difficulties associated with SLI. In contrast, impaired statistical learning fails to account for the social-pragmatic difficulties associated with ASD.
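Pooled effect sizes with confidence intervals like those reported above come from standard inverse-variance weighting. A minimal fixed-effect version is sketched below; the per-study values of Hedges' g and their variances are invented for illustration, not the actual SLI/ASD data.

```python
import math

# Hypothetical effect sizes (Hedges' g) and sampling variances from k = 4 studies.
gs = [0.51, 0.38, 0.62, 0.40]
vs = [0.04, 0.03, 0.05, 0.02]

# Inverse-variance weights: more precise studies count more.
w = [1 / v for v in vs]
g_pooled = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)

# Standard error and 95% confidence interval of the pooled estimate.
se = math.sqrt(1 / sum(w))
lo, hi = g_pooled - 1.96 * se, g_pooled + 1.96 * se
print(f"pooled g = {g_pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A random-effects model (as typically used when effect sizes vary across tasks and ages, per the moderator analyses mentioned above) would additionally estimate between-study heterogeneity and add it to each variance before weighting.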
Anne McClure Walk
Recent studies have demonstrated participants’ ability to learn cross-modal associations during statistical learning tasks. However, these studies are all similar in that the cross-modal associations to be learned occur simultaneously, rather than sequentially. In addition, the majority of these studies focused on learning across sensory modalities but not across perceptual categories. To test both cross-modal and cross-categorical learning of sequential dependencies, we used an artificial grammar learning task consisting of a serial stream of auditory and/or visual stimuli containing both within- and cross-domain dependencies. Experiment 1 examined within-modal and cross-modal learning across two sensory modalities (audition and vision). Experiment 2 investigated within-categorical and cross-categorical learning across two perceptual categories within the same sensory modality (e.g., shape and color; tones and non-words). Our results indicated that individuals demonstrated learning of the within-modal and within-categorical but not the cross-modal or cross-categorical dependencies. These results stand in contrast to previous demonstrations of cross-modal statistical learning, and highlight the presence of modality constraints that limit the effectiveness of learning in a multimodal environment.
Smith, Linda B; Jayaraman, Swapnaa; Clerkin, Elizabeth; Yu, Chen
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stevens, David J; Arciuli, Joanne; Anderson, David I
This study examined the effect of a prior bout of exercise on implicit cognition. Specifically, we examined whether a prior bout of moderate intensity exercise affected performance on a statistical learning task in healthy adults. A total of 42 participants were allocated to one of three conditions-a control group, a group that exercised for 15 min prior to the statistical learning task, and a group that exercised for 30 min prior to the statistical learning task. The participants in the exercise groups cycled at 60% of their respective VO2 max. Each group demonstrated significant statistical learning, with similar levels of learning among the three groups. Contrary to previous research that has shown that a prior bout of exercise can affect performance on explicit cognitive tasks, the results of the current study suggest that the physiological stress induced by moderate-intensity exercise does not affect implicit cognition as measured by statistical learning. Copyright © 2015 Cognitive Science Society, Inc.
Northrup, Christian Glenn
This study investigated the use of writing in a statistics classroom to learn if writing provided a rich description of problem-solving processes of students as they solved problems. Through analysis of 329 written samples provided by students, it was determined that writing provided a rich description of problem-solving processes and enabled…
Stephen Wee Hun Lim
Research methods and statistics are an indispensable subject in the undergraduate psychology curriculum, but there are challenges associated with teaching it, such as making learning durable. Here we hypothesized that retrieval-based learning promotes long-term retention of statistical knowledge in psychology. Participants either studied the educational material in four consecutive periods, or studied it just once and practised retrieving the information in the subsequent three periods, and then took a final test through which their learning was assessed. Whereas repeated studying yielded better test performance when the final test was administered immediately, retrieval practice yielded better performance when the test was administered a week later. The data suggest that retrieval practice produced better long-term retention of statistical knowledge in psychology than repeated studying did.
Pelucchi, Bruna; Hay, Jessica F; Saffran, Jenny R
Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants' ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
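The transitional-probability cue at the heart of this line of work can be sketched in a few lines. The syllables and "words" below are invented for illustration; they are not the Italian stimuli used in these experiments.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate TP(B | A) = count(A followed by B) / count(A) over a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two invented "words", pa-bi-ku and ti-bu-do:
stream = ["pa", "bi", "ku", "ti", "bu", "do",
          "pa", "bi", "ku",
          "pa", "bi", "ku", "ti", "bu", "do"]
tps = transitional_probabilities(stream)
# Within-word transitions (pa->bi, bi->ku) have TP 1.0, while transitions
# spanning a word boundary (ku->ti, ku->pa) have lower TP; dips in TP are
# the segmentation cue infants are thought to exploit.
```

In natural speech, as the abstract notes, these statistics are far noisier than in artificial languages, which is exactly what makes the infants' success informative.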
The paper aims to identify a solution to the dilemma that currently exists within the paradigm of strategic learning: the dilemma of whether a strategy should be seen as an action dependent on the specific knowledge of an educational actor--strategic knowledge--or whether it is dependent on a planned instructional context--strategic context. This…
Watkin, T.L.H.; Rau, A.; Biehl, M.
A summary is presented of the statistical mechanical theory of learning a rule with a neural network, a rapidly advancing area which is closely related to other inverse problems frequently encountered by physicists. By emphasizing the relationship between neural networks and strongly interacting physical systems, such as spin glasses, the authors show how learning theory has provided a workshop in which to develop new, exact analytical techniques.
Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8.5- to 10.5-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Alexis N Bosseler
Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final), indicating statistical word learning, and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0-100 ms component but elicited different ERP effects in both the polarity and topographical distribution at 200-400 ms and 450-650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns.
Taylor, Jonathan; Tibshirani, Robert J
We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
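The simplest broadly valid remedy in this area is data splitting: select on one half of the data, assess on the other. The sketch below contrasts naive cherry-picking with a split-sample assessment on pure noise. It is a toy baseline under invented data, not the exact post-selection tests the authors develop for the lasso and forward stepwise regression.

```python
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n, p = 100, 20
# Pure-noise data: no feature is truly associated with y.
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
y = [random.gauss(0, 1) for _ in range(n)]

# Cherry-picking: choose the feature most correlated with y on the full data,
# then report that same correlation. Selection biases it away from zero.
naive_idx = max(range(p), key=lambda j: abs(pearson(X[j], y)))
naive_corr = pearson(X[naive_idx], y)

# Data splitting: select on the first half, assess on the untouched second
# half, so the reported correlation is not inflated by the search.
half = n // 2
sel_idx = max(range(p), key=lambda j: abs(pearson(X[j][:half], y[:half])))
honest_corr = pearson(X[sel_idx][half:], y[half:])
```

Splitting spends half the data on selection; the selective-inference machinery the abstract refers to aims to avoid that sacrifice while keeping the inference valid.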
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to disassemble the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
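Stripped of the specific components (the FEEMD/VMD decompositions and the CS-optimized ELM), the error-correction pattern reduces to: fit a base forecaster, model its residuals, and add the predicted error back. The sketch below uses deliberately simple stand-ins, a moving average for the forecaster and a mean-residual correction for the error model, not the paper's methods.

```python
def moving_average_forecast(series, window=3):
    """Base forecaster: predict each point as the mean of the preceding window."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

def error_corrected_forecast(series, window=3):
    """Two-stage forecast: base model plus a model of the base model's errors."""
    base = moving_average_forecast(series, window)
    actual = series[window:]
    residuals = [a - f for a, f in zip(actual, base)]
    # Error-correction stage: the mean residual stands in here for the
    # paper's decomposed-and-learned error model.
    bias = sum(residuals) / len(residuals)
    return [f + bias for f in base]

# On a steadily rising series the moving average systematically lags behind;
# the error model absorbs that bias.
readings = list(range(1, 11))
corrected = error_corrected_forecast(readings)
```

The design point is the same as in the paper: whatever structure the first model misses ends up in its residuals, so a second learner trained on those residuals can recover part of it.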
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
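The pixel-clustering step can be illustrated with plain (hard) K-means on 2-D points. The paper's algorithm is a fuzzy variant with growing and prediction of centroids; this deterministic toy, with invented points, omits all of that.

```python
def kmeans(points, k, iters=20):
    """Plain (hard) K-means on 2-D points; deterministic initialization
    with the first k points, for the sake of a reproducible sketch."""
    centroids = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                + (p[1] - centroids[c][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Recompute each centroid as its cluster mean (keep it if empty).
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids

# Two tight blobs of "foreground pixels": each recovered centroid tracks one blob,
# which is the property the tracker relies on to associate clusters with objects.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
cents = kmeans(pts, 2)
```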
Wiig, E H
We are beginning a decade, during which many traditional paradigms in education, special education, and speech-language pathology will undergo change. Among paradigms considered promising for speech-language pathology in the schools are collaborative language intervention and strategy training for language and communication. This presentation introduces management models for developing a collaborative language intervention process, among them the Deming Management Method for Total Quality (TQ) (Deming 1986). Implementation models for language assessment and IEP planning and multicultural issues are also introduced (Damico and Nye 1990; Secord and Wiig in press). While attention to processes involved in developing and implementing collaborative language intervention is paramount, content should not be neglected. To this end, strategy training for language and communication is introduced as a viable paradigm. Macro- and micro-level process models for strategy training are featured and general issues are discussed (Ellis, Deshler, and Schumaker 1989; Swanson 1989; Wiig 1989).
Zeng, Irene Sui Lan; Lumley, Thomas
Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inferences and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and the use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages for omics, drawn from 23 publications, is summarized in the appendix.
Hall, Michelle G; Mattingley, Jason B; Dux, Paul E
The brain exploits redundancies in the environment to efficiently represent the complexity of the visual world. One example of this is ensemble processing, which provides a statistical summary of elements within a set (e.g., mean size). Another is statistical learning, which involves the encoding of stable spatial or temporal relationships between objects. It has been suggested that ensemble processing over arrays of oriented lines disrupts statistical learning of structure within the arrays (Zhao, Ngo, McKendrick, & Turk-Browne, 2011). Here we asked whether ensemble processing and statistical learning are mutually incompatible, or whether this disruption might occur because ensemble processing encourages participants to process the stimulus arrays in a way that impedes statistical learning. In Experiment 1, we replicated Zhao and colleagues' finding that ensemble processing disrupts statistical learning. In Experiments 2 and 3, we found that statistical learning was unimpaired by ensemble processing when task demands necessitated (a) focal attention to individual items within the stimulus arrays and (b) the retention of individual items in working memory. Together, these results are consistent with an account suggesting that ensemble processing and statistical learning can operate over the same stimuli given appropriate stimulus processing demands during exposure to regularities. (c) 2015 APA, all rights reserved.
Background Statistical learning is a candidate for one of the basic prerequisites underlying the expeditious acquisition of spoken language. Infants from 8 months of age exhibit this form of learning to segment fluent speech into distinct words. To test the statistical learning skills at birth, we recorded event-related brain responses of sleeping neonates while they were listening to a stream of syllables containing statistical cues to word boundaries. Results We found evidence that sleeping neonates are able to automatically extract statistical properties of the speech input and thus detect the word boundaries in a continuous stream of syllables containing no morphological cues. Syllable-specific event-related brain responses found in two separate studies demonstrated that the neonatal brain treated the syllables differently according to their position within pseudowords. Conclusion These results demonstrate that neonates can efficiently learn transitional probabilities or frequencies of co-occurrence between different syllables, enabling them to detect word boundaries and in this way isolate single words out of fluent natural speech. The ability to adopt statistical structures from speech may play a fundamental role as one of the earliest prerequisites of language acquisition.
Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.
Background Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics. PMID:26859832
Studer-Eichenberger, Esther; Studer-Eichenberger, Felix; Koenig, Thomas
The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairments (HIs), and elucidated possible perceptual mediators of speech production. Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI exhibited reduced amplitudes in the N2 and early parts of the late discriminative negativity components specifically, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production, as the lone difference found in speech production was a mild delay in regulating speech intensity. In addition to previously reported deficits of sound-feature discriminations, the present study results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.
Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation
Main styles, or paradigms of programming – imperative, functional, logic, and object-oriented – are shortly described and compared, and corresponding programming techniques are outlined. Programming languages are classified in accordance with the main style and techniques supported. It is argued that profound education in computer science should include learning base programming techniques of all main programming paradigms.
Principe, Jose C
This book presents the first cohesive treatment of Information Theoretic Learning (ITL) algorithms to adapt linear or nonlinear learning machines both in supervised or unsupervised paradigms. ITL is a framework where the conventional concepts of second order statistics (covariance, L2 distances, correlation functions) are substituted by scalars and functions with information theoretic underpinnings, respectively entropy, mutual information and correntropy. ITL quantifies the stochastic structure of the data beyond second order statistics for improved performance without using full-blown Bayesi
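Correntropy, one of the ITL quantities named above, has a simple sample estimator: the mean of a Gaussian kernel applied to the pairwise differences between two signals. A minimal sketch, with the kernel width sigma left as a free parameter:

```python
import math

def correntropy(x, y, sigma=1.0):
    """Sample estimator of correntropy: the mean Gaussian kernel value of
    the pairwise differences x_i - y_i (sigma is the kernel width)."""
    return sum(
        math.exp(-(a - b) ** 2 / (2 * sigma ** 2)) for a, b in zip(x, y)
    ) / len(x)

# Identical signals attain the kernel maximum of 1.0, while large outlier
# differences are smoothly saturated instead of growing quadratically,
# which is what distinguishes the measure from second-order statistics
# such as mean squared error.
```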
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to detect implicitly statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Peters, Pam; Smith, Adam; Middledorp, Jenny; Karpin, Anne; Sin, Samantha; Kilgore, Alan
This paper describes a terminological approach to the teaching and learning of fundamental concepts in foundation tertiary units in Statistics and Accounting, using an online dictionary-style resource (TermFinder) with customised "termbanks" for each discipline. Designed for independent learning, the termbanks support inquiring students…
Darmawan, M.; Hidayah, N. Y.
Currently, there has been a paradigm shift in learning models in higher education, from the Teacher Centered Learning (TCL) model to Student Centered Learning (SCL). It is generally assumed that the SCL model is better than the TCL model. The 2nd Industrial Statistics course in the Department of Industrial Engineering, Pancasila University, belongs to the Basic Engineering group. So far, the applied learning model has largely followed the TCL model, and field data show that the learning outcomes are less than satisfactory. Over three consecutive even semesters (2013/2014, 2014/2015, and 2015/2016), the grade averages were 56.0, 61.1, and 60.5. In the even semester of 2016/2017, Classroom Action Research (CAR) was conducted for this course through the implementation of the SCL model with Problem Based Learning (PBL) methods. The hypothesis proposed is that the SCL-PBL model will improve the final grade of the course. The results show that the average grade of the course increased to 73.27. This value was then tested using ANOVA, and the test results concluded that the average grade was significantly different from the average grades of the previous three semesters.
Rock, Adam J; Coventry, William L; Morgan, Methuen I; Loi, Natasha M
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal et al., 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof et al., 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology.
Adam John Rock
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal, Ginsburg, & Schau, 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof, Ceroni, Jeong, & Moghaddam, 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology.
Natasa M Milic
Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.
Suanda, Sumarga H; Mugwanya, Nassali; Namy, Laura L
Recent empirical work has highlighted the potential role of cross-situational statistical word learning in children's early vocabulary development. In the current study, we tested 5- to 7-year-old children's cross-situational learning by presenting children with a series of ambiguous naming events containing multiple words and multiple referents. Children rapidly learned word-to-object mappings by attending to the co-occurrence regularities across these ambiguous naming events. The current study begins to address the mechanisms underlying children's learning by demonstrating that the diversity of learning contexts affects performance. The implications of the current findings for the role of cross-situational word learning at different points in development are discussed along with the methodological implications of employing school-aged children to test hypotheses regarding the mechanisms supporting early word learning. Copyright © 2014 Elsevier Inc. All rights reserved.
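The co-occurrence bookkeeping behind cross-situational word learning can be sketched directly. The words and referents below are invented for illustration; the point is that each individual scene is ambiguous, yet the aggregated statistics disambiguate.

```python
from collections import defaultdict

def cross_situational_learner(scenes):
    """Accumulate word-referent co-occurrence counts across ambiguous
    naming events; each scene pairs a list of words with a list of referents."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in scenes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    # Guess: map each word to the referent it co-occurred with most often.
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Three ambiguous scenes; no single scene reveals any mapping, but across
# scenes each word pairs with its true referent more often than with foils.
scenes = [
    (["dax", "wug"], ["ball", "cup"]),
    (["dax", "fep"], ["ball", "shoe"]),
    (["wug", "fep"], ["cup", "shoe"]),
]
lexicon = cross_situational_learner(scenes)
```

The abstract's manipulation of learning-context diversity corresponds here to how the scenes are composed: the more varied the foil referents across scenes, the faster the true pairings dominate the counts.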
Alt, Mary; Meyers, Christina; Oglivie, Trianna; Nicholas, Katrina; Arizmendi, Genesis
To explore the efficacy of a word learning intervention for late-talking toddlers that is based on principles of cross-situational statistical learning. Four late-talking toddlers were individually provided with 7-10 weeks of bi-weekly word learning intervention that incorporated principles of cross-situational statistical learning. Treatment was input-based, meaning that, aside from initial probes, children were not asked to produce any language during the sessions. Pre-intervention data included parent-reported measures of productive vocabulary and language samples. Data collected during intervention included production on probes, spontaneous production during treatment, and parent report of words used spontaneously at home. Data were analyzed for number of target words learned relative to control words, effect sizes, and pre-post treatment vocabulary measures. All children learned more target words than control words and, on average, showed a large treatment effect size. Children made pre-post vocabulary gains, increasing their percentile scores on the MCDI, and demonstrated a rate of word learning that was faster than rates found in the literature. Cross-situational statistically based word learning intervention has the potential to improve vocabulary learning in late-talking toddlers. Limitations on interpretation are also discussed. Readers will describe what cross-situational learning is and how it might apply to treatment. They will identify how including lexical and contextual variability in a word learning intervention for toddlers affected treatment outcomes. They will also recognize evidence of improved rate of vocabulary learning following treatment. Copyright © 2014 Elsevier Inc. All rights reserved.
Schmalz, Xenia; Altoè, Gianmarco; Mulatti, Claudio
The existing literature on developmental dyslexia (hereafter: dyslexia) often focuses on isolating cognitive skills which differ across dyslexic and control participants. Among potential correlates, previous research has studied group differences between dyslexic and control participants in performance on statistical learning tasks. A statistical…
Milic, Natasa M; Trajkovic, Goran Z; Bukumiric, Zoran M; Cirkovic, Andja; Nikolic, Ivan M; Milin, Jelena S; Milic, Nikola V; Savic, Marko D; Corac, Aleksandar M; Marinkovic, Jelena M; Stanisavljevic, Dejana M
Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality. These results support the use of blended learning in teaching statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.
Ishikawa, Tetsuo; Mogi, Ken
Once people perceive what is in a hidden figure such as Dallenbach's cow or the Dalmatian, they seldom seem to return to the previous state in which they were ignorant of the answer. This special type of learning can be accomplished in a short time, with the effect of learning lasting for a long time (visual one-shot learning). Although it is an intriguing cognitive phenomenon, the lack of control over the difficulty of the presented stimuli has been a problem in this research. Here we propose a novel paradigm to create new hidden figures systematically by using a morphing technique. Through gradual changes from a blurred and binarized two-tone image to a blurred grayscale image of the original photograph, including objects in a natural scene, spontaneous one-shot learning can occur at a certain stage of morphing, when a sufficient amount of information is restored to the degraded image. A negative correlation between confidence levels and reaction times is observed, giving support to the fluency theory of one-shot learning. The correlation between confidence ratings and correct recognition rates indicates that participants had accurate introspective ability (metacognition). The learning effect could be tested later by verifying whether or not the target object was recognized more quickly on second exposure. The present method opens a way for the systematic production of "good" hidden figures, which can be used to demystify the nature of visual one-shot learning.
First, Michael B
Work is currently under way on the Diagnostic and Statistical Manual of Mental Disorders (DSM), Fifth Edition, due to be published by the American Psychiatric Association in 2013. Dissatisfaction with the current categorical descriptive approach has led to aspirations for a paradigm shift for DSM-5. A historical review of past revisions of the DSM was performed. Efforts undertaken before the start of the DSM-5 development process to conduct a state-of-the science review and set a research agenda were examined to determine if results supported a paradigm shift for DSM-5. Proposals to supplement DSM-5 categorical diagnosis with dimensional assessments are reviewed and critiqued. DSM revisions have alternated between paradigm shifts (the first edition of the DSM in 1952 and DSM-III in 1980) and incremental improvements (DSM-II in 1968, DSM-III-R in 1987, and DSM-IV in 1994). The results of the review of the DSM-5 research planning initiatives suggest that despite the scientific advances that have occurred since the descriptive approach was first introduced in 1980, the field lacks a sufficiently deep understanding of mental disorders to justify abandoning the descriptive approach in favour of a more etiologically based alternative. Proposals to add severity and cross-cutting dimensions throughout DSM-5 are neither paradigm shifting, given that simpler versions of such dimensions are already a component of DSM-IV, nor likely to be used by busy clinicians without evidence that they improve clinical outcomes. Despite initial aspirations that DSM would undergo a paradigm shift with this revision, DSM-5 will continue to adopt a descriptive categorical approach, albeit with a greatly expanded dimensional component.
Kabadayi, Can; Bobrowicz, Katarzyna; Osvath, Mathias
In this paper, we review one of the oldest paradigms used in animal cognition: the detour paradigm. The paradigm presents the subject with a situation where a direct route to the goal is blocked and a detour must be made to reach it. Often being an ecologically valid and a versatile tool, the detour paradigm has been used to study diverse cognitive skills like insight, social learning, inhibitory control and route planning. Due to the relative ease of administrating detour tasks, the paradigm has lately been used in large-scale comparative studies in order to investigate the evolution of inhibitory control. Here we review the detour paradigm and some of its cognitive requirements, we identify various ecological and contextual factors that might affect detour performance, we also discuss developmental and neurological underpinnings of detour behaviors, and we suggest some methodological approaches to make species comparisons more robust.
Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward
We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a
Seidenberg, Mark S.; MacDonald, Maryellen C.
This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…
Song, Yanjie; Kong, Siu-Cheung
The study aims at investigating university students' acceptance of a statistics learning platform to support the learning of statistics in a blended learning context. Three kinds of digital resources, which are simulations, online videos, and online quizzes, were provided on the platform. Premised on the technology acceptance model, we adopted a…
Jeste, Shafali S.; Kirkham, Natasha; Senturk, Damla; Hasenstab, Kyle; Sugar, Catherine; Kupelian, Chloe; Baker, Elizabeth; Sanders, Andrew J.; Shimizu, Christina; Norona, Amanda; Paparella, Tanya; Freeman, Stephanny F. N.; Johnson, Scott P.
Statistical learning is characterized by detection of regularities in one's environment without an awareness or intention to learn, and it may play a critical role in language and social behavior. Accordingly, in this study we investigated the electrophysiological correlates of visual statistical learning in young children with autism…
Charles, Abigail Sheena
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college level classes at two universities were surveyed. One university was located in the northeastern US and the other in the West Indies. There was a sample size of 42 students and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most used textbooks in genetics teaching, as well as genetics syllabi used by instructors do not help the issue. It was found that the text books, often times, either did not give effective explanations for students, or completely left out certain topics. The omission of certain statistical/mathematical oriented topics was seen to be also true with the genetics syllabi reviewed for this study. Nonetheless
Franco, Ana; Gaillard, Vinciane; Cleeremans, Axel; Destrebecqz, Arnaud
Statistical learning can be used to extract the words from continuous speech. Gómez, Bion, and Mehler (Language and Cognitive Processes, 26, 212-223, 2011) proposed an online measure of statistical learning: They superimposed auditory clicks on a continuous artificial speech stream made up of a random succession of trisyllabic nonwords. Participants were instructed to detect these clicks, which could be located either within or between words. The results showed that, over the length of exposure, reaction times (RTs) increased more for within-word than for between-word clicks. This result has been accounted for by means of statistical learning of the between-word boundaries. However, even though statistical learning occurs without an intention to learn, it nevertheless requires attentional resources. Therefore, this process could be affected by a concurrent task such as click detection. In the present study, we evaluated the extent to which the click detection task indeed reflects successful statistical learning. Our results suggest that the emergence of RT differences between within- and between-word click detection is neither systematic nor related to the successful segmentation of the artificial language. Therefore, instead of being an online measure of learning, the click detection task seems to interfere with the extraction of statistical regularities.
McDonough, Kim; Trofimovich, Pavel
This study investigated whether second language (L2) speakers' morphosyntactic pattern learning was predicted by their statistical learning and working memory abilities. Across three experiments, Thai English as a Foreign Language (EFL) university students (N = 140) were exposed to either the transitive construction in Esperanto (e.g., "tauro…
Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen
Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.
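The co-occurrence bookkeeping that drives cross-situational learning in studies like this can be sketched in a few lines: tally every word-object pairing across ambiguous trials, then map each word to its most frequent companion. The nonsense words and referents below are hypothetical stand-ins, not the study's actual stimuli.

```python
from collections import Counter
from itertools import product

def cross_situational_learn(trials):
    """Tally word-object co-occurrences across ambiguous trials,
    then map each word to its most frequently co-occurring object."""
    counts = Counter()
    for words, objects in trials:
        for w, o in product(words, objects):
            counts[(w, o)] += 1
    vocabulary = {w for words, _ in trials for w in words}
    mapping = {}
    for w in vocabulary:
        # choose the object that co-occurred with w most often
        mapping[w] = max(
            (o for (ww, o) in counts if ww == w),
            key=lambda o: counts[(w, o)],
        )
    return mapping

# Each trial presents two words with two referents; no single trial
# disambiguates, but the consistent pairings recur across trials.
trials = [
    (["bosa", "gasser"], ["dog", "cup"]),
    (["bosa", "manu"],   ["ball", "dog"]),
    (["gasser", "manu"], ["cup", "ball"]),
]
mapping = cross_situational_learn(trials)
print(mapping)
```

No individual trial above identifies any referent, yet the aggregated statistics do, which is the core logic the adult and infant paradigms share.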
Poepsel, Timothy J; Weiss, Daniel J
Statistical learning is a fundamental component of language acquisition, yet to date, relatively few studies have examined whether these abilities differ in bilinguals. In the present study, we examine this issue by comparing English monolinguals with Chinese-English and English-Spanish bilinguals in a cross-situational statistical learning (CSSL) task. In Experiment 1, we assessed the ability of both monolinguals and bilinguals on a basic CSSL task that contained only one-to-one mappings. In Experiment 2, learners were asked to form both one-to-one and two-to-one mappings, and were tested at three points during familiarization. Overall, monolinguals and bilinguals did not differ in their learning of one-to-one mappings. However, bilinguals more quickly acquired two-to-one mappings, while also exhibiting greater proficiency than monolinguals. We conclude that the fundamental SL mechanism may not be affected by language experience, in accord with previous studies. However, when the input contains greater variability, bilinguals may be more prone to detecting the presence of multiple structures. Copyright © 2016 Elsevier B.V. All rights reserved.
Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education at various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.
Aslin, Richard N
How do infants learn so rapidly and with little apparent effort? In 1996, Saffran, Aslin, and Newport reported that 8-month-old human infants could learn the underlying temporal structure of a stream of speech syllables after only 2 min of passive listening. This demonstration of what was called statistical learning, involving no instruction, reinforcement, or feedback, led to dozens of confirmations of this powerful mechanism of implicit learning in a variety of modalities, domains, and species. These findings reveal that infants are not nearly as dependent on explicit forms of instruction as we might have assumed from studies of learning in which children or adults are taught facts such as math or problem solving skills. Instead, at least in some domains, infants soak up the information around them by mere exposure. Learning and development in these domains thus appear to occur automatically and with little active involvement by an instructor (parent or teacher). The details of this statistical learning mechanism are discussed, including how exposure to specific types of information can, under some circumstances, generalize to never-before-observed information, thereby enabling transfer of learning. WIREs Cogn Sci 2017, 8:e1373. doi: 10.1002/wcs.1373 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
Bebermeier, Sarah; Nussbeck, Fridtjof W.; Ontrup, Greta
Lecturers teaching statistics are faced with several challenges supporting students' learning in appropriate ways. A variety of methods and tools exist to facilitate students' learning on statistics courses. The online questionnaires presented in this report are a new, slightly different computer-based tool: the central aim was to support students…
Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego
Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.
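The "statistical feature set" step described above can be illustrated with a minimal time-domain sketch; the abstract's full pipeline also spans frequency and time-frequency domains and feeds the features to a GDBM, none of which is reproduced here. The particular feature choices below are common in vibration analysis but are our assumption, not the paper's exact list.

```python
import math
import statistics

def statistical_features(signal):
    """Compute a small time-domain statistical feature set of the kind
    typically fed to feature-learning models for fault diagnosis."""
    n = len(signal)
    mean = statistics.fmean(signal)
    std = statistics.pstdev(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    kurtosis = sum((x - mean) ** 4 for x in signal) / (n * std ** 4)
    return {
        "mean": mean,
        "std": std,
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,  # peakiness relative to overall energy
        "kurtosis": kurtosis,        # sensitive to impulsive bearing faults
    }

# Sanity check: a pure sine sampled over a full period has
# RMS = peak / sqrt(2), i.e. a crest factor of sqrt(2).
sine = [math.sin(2 * math.pi * k / 64) for k in range(64)]
feats = statistical_features(sine)
print(round(feats["crest_factor"], 3))
```

Impulsive faults raise the crest factor and kurtosis well above these smooth-signal baselines, which is what makes such summaries informative inputs for a classifier.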
Isabella Norén Creutz
The objective of this paper is to explore the discourses of learning that are actualized in workplace e-learning. It aims to understand how learning is defined in research within this field. The empirical material consists of academic research articles on e-learning in the workplace, published from 2000 to 2013. The findings are presented as four metaphors highlighting four overlapping time periods with different truth regimes: Celebration, Questioning, Reflection and Dissolution. It is found that learning as a phenomenon tends to be marginalized in relation to the digital technology used. Based on this, we discuss a proposal for a more critical and problematized approach to e-learning, and a deeper understanding of the challenges and opportunities for employees and organizations to acquire knowledge in the digital age.
Pearce, Marcus T
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
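The pairing of statistical learning with probabilistic prediction described above can be made concrete with a toy bigram model: it learns event-to-event statistics from exposure and then scores how unexpected each continuation is (information content, in bits). IDyOM itself uses richer variable-order models; this sketch, with hypothetical pitch labels, only illustrates the core idea.

```python
import math
from collections import defaultdict

class BigramPredictor:
    """Learn bigram statistics from exposure; quantify surprise."""

    def __init__(self, alphabet_size):
        self.alphabet_size = alphabet_size
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def surprisal(self, prev, nxt):
        """Information content -log2 P(nxt | prev), add-one smoothed."""
        total = sum(self.counts[prev].values())
        p = (self.counts[prev][nxt] + 1) / (total + self.alphabet_size)
        return -math.log2(p)

# After exposure to a repeating pitch pattern, the habitual
# continuation carries less information than a novel one.
model = BigramPredictor(alphabet_size=3)
model.learn(list("CDECDECDE"))
print(round(model.surprisal("C", "D"), 3))  # expected continuation
print(round(model.surprisal("C", "E"), 3))  # unexpected continuation
```

Low surprisal corresponds to fulfilled expectation; it is this learned expectancy signal that the reviewed work links to emotion, memory, and segmentation in music listening.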
Karpiak, Christie P.
Undergraduate psychology majors (N = 51) at a mid-sized private university took a statistics examination on the first day of the research methods course, a course for which a grade of "C" or higher in statistics is a prerequisite. Students who had taken a problem-based learning (PBL) section of the statistics course (n = 15) were compared to those…
Teaching a statistics course for undergraduate computer science students can be very challenging: As statistics teachers we are usually faced with problems ranging from a complete disinterest in the subject to lack of basic knowledge in mathematics and anxiety for failing the exam, since statistics has the reputation of having high failure rates. In our case, we additionally struggle with difficulties in the timing of the lectures as well as often occurring absence of the students due to spare-time jobs or a long traveling time to the university. This paper reveals how these issues can be addressed by the introduction of a blended learning module in statistics. In the following, we describe an e-learning development process used to implement time- and location-independent learning in statistics. The study focuses on a six-step-approach for developing the blended learning module. In addition, the teaching framework for the blended module is presented, including suggestions for increasing the interest in learning the course. Furthermore, the first experimental in-class usage, including evaluation of the students’ expectations, has been completed and the outcome is discussed.
Slone, Lauren Krogh; Johnson, Scott P
Past research suggests that infants have powerful statistical learning abilities; however, studies of infants' visual statistical learning offer differing accounts of the developmental trajectory of and constraints on this learning. To elucidate this issue, the current study tested the hypothesis that young infants' segmentation of visual sequences depends on redundant statistical cues to segmentation. A sample of 20 2-month-olds and 20 5-month-olds observed a continuous sequence of looming shapes in which unit boundaries were defined by both transitional probability and co-occurrence frequency. Following habituation, only 5-month-olds showed evidence of statistically segmenting the sequence, looking longer to a statistically improbable shape pair than to a probable pair. These results reaffirm the power of statistical learning in infants as young as 5 months but also suggest considerable development of statistical segmentation ability between 2 and 5 months of age. Moreover, the results do not support the idea that infants' ability to segment visual sequences based on transitional probabilities and/or co-occurrence frequencies is functional at the onset of visual experience, as has been suggested previously. Rather, this type of statistical segmentation appears to be constrained by the developmental state of the learner. Factors contributing to the development of statistical segmentation ability during early infancy, including memory and attention, are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
Hoyle, David C
Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N, p → ∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained and we find marked variation in the location of the retarded learning transition dependent on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulation
Statistical learning is widely used in business analytics to discover structure or exploit patterns from historical data, and build models that capture relationships between an outcome of interest and a set of variables. Optimal learning, on the other hand, solves the operational side of the problem by iterating between decision making and data acquisition/learning. All too often the two problems go hand-in-hand, exhibiting a feedback loop between statistics and optimization. We apply this statistical/optimal learning concept to a fundraising marketing campaign problem arising in many non-profit organizations. Many such organizations use direct-mail marketing to cultivate one-time donors and convert them into recurring contributors. Cultivated donors generate much more revenue than new donors, but also lapse with time, making it important to steadily draw in new cultivations. The direct-mail budget is limited, but better-designed mailings can improve success rates without increasing costs. We first apply statistical learning to analyze the effectiveness of several design approaches used in practice, based on a massive dataset covering 8.6 million direct-mail communications with donors to the American Red Cross during 2009-2011. We find evidence that mailed appeals are more effective when they emphasize disaster preparedness and training efforts over post-disaster cleanup. Including small cards that affirm donors' identity as Red Cross supporters is an effective strategy, while including gift items such as address labels is not. Finally, very recent acquisitions are more likely to respond to appeals that ask them to contribute an amount similar to their most recent donation, but this approach has an adverse effect on donors with a longer history. We show via simulation that a simple design strategy based on these insights has potential to improve success rates from 5.4% to 8.1%. Given these findings, when a new scenario arises, however, new data need to
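The feedback loop between statistics and optimization described above can be sketched as an epsilon-greedy bandit: each mailing round exploits the design with the best estimated response rate while occasionally exploring alternatives, and every observed outcome updates the estimates. The design names and the 8.1% vs. 5.4% response rates come from the abstract; everything else (the epsilon-greedy rule, the simulation setup) is an illustrative assumption, not the paper's algorithm.

```python
import random

def choose_design(estimates, epsilon, rng):
    """Optimal-learning step: mostly exploit the design with the best
    estimated response rate, but explore others with probability epsilon."""
    if rng.random() < epsilon:
        return rng.choice(list(estimates))
    return max(estimates, key=estimates.get)

def update(estimates, counts, design, responded):
    """Statistical-learning step: fold the observed outcome into the
    running response-rate estimate for the chosen design."""
    counts[design] += 1
    estimates[design] += (responded - estimates[design]) / counts[design]

# Hypothetical simulation: two mailing designs whose true response
# rates mirror the abstract's 8.1% vs. 5.4% figures.
true_rates = {"preparedness": 0.081, "cleanup": 0.054}
rng = random.Random(0)
estimates = {d: 0.0 for d in true_rates}
counts = {d: 0 for d in true_rates}
for _ in range(20000):
    d = choose_design(estimates, epsilon=0.1, rng=rng)
    update(estimates, counts, d, int(rng.random() < true_rates[d]))
print(max(estimates, key=estimates.get))
```

After enough rounds the loop both identifies the stronger design and concentrates the limited mailing budget on it, which is the practical payoff of coupling the two learning problems.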
Mareschal, Denis; French, Robert M
Even newborn infants are able to extract structure from a stream of sensory inputs, yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-nothing in nature. As chunks are learnt, their component parts become more and more tightly bound together. TRACX2 successfully models the data from five experiments in the infant visual statistical learning literature, including tasks involving forward and backward transitional probabilities, low-salience embedded chunk items, part-sequences and illusory items. The model also captures performance differences across ages through the tuning of a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Batterink, Laura J
The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.
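The statistic underlying this kind of segmentation is the transitional probability between adjacent syllables, which is high within words and dips at word boundaries. A minimal sketch with made-up nonsense words (the syllables below are illustrative, not the study's actual stimuli):

```python
from collections import Counter

# Hypothetical stream built from two nonsense words, "pabiku" and "tudaro"
words = ["pa-bi-ku", "tu-da-ro", "pa-bi-ku", "pa-bi-ku", "tu-da-ro"]
stream = [s for w in words for s in w.split("-")]

pairs = Counter(zip(stream, stream[1:]))   # adjacent syllable bigram counts
firsts = Counter(stream[:-1])              # how often each syllable starts a bigram

def tp(a, b):
    """Transitional probability P(b | a) = count(a, b) / count(a)."""
    return pairs[(a, b)] / firsts[a]

print(tp("pa", "bi"))   # within-word transition: 1.0
print(tp("ku", "tu"))   # across a word boundary: lower
```

A learner tracking these probabilities can posit word boundaries wherever the transitional probability drops, which is the cue the reaction-time task above exploits.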
Haebig, Eileen; Saffran, Jenny R; Ellis Weismer, Susan
Word learning is an important component of language development that influences child outcomes across multiple domains. Despite the importance of word knowledge, word-learning mechanisms are poorly understood in children with specific language impairment (SLI) and children with autism spectrum disorder (ASD). This study examined underlying mechanisms of word learning, specifically, statistical learning and fast-mapping, in school-aged children with typical and atypical development. Statistical learning was assessed through a word segmentation task and fast-mapping was examined in an object-label association task. We also examined children's ability to map meaning onto newly segmented words in a third task that combined exposure to an artificial language and a fast-mapping task. Children with SLI had poorer performance on the word segmentation and fast-mapping tasks relative to the typically developing and ASD groups, who did not differ from one another. However, when children with SLI were exposed to an artificial language with phonemes used in the subsequent fast-mapping task, they successfully learned more words than in the isolated fast-mapping task. There was some evidence that word segmentation abilities are associated with word learning in school-aged children with typical development and ASD, but not SLI. Follow-up analyses also examined performance in children with ASD who did and did not have a language impairment. Children with ASD with language impairment evidenced intact statistical learning abilities, but subtle weaknesses in fast-mapping abilities. As the Procedural Deficit Hypothesis (PDH) predicts, children with SLI have impairments in statistical learning. However, children with SLI also have impairments in fast-mapping. Nonetheless, they are able to take advantage of additional phonological exposure to boost subsequent word-learning performance. In contrast to the PDH, children with ASD appear to have intact statistical learning, regardless of
The statistical regularities of a sequence of visual shapes can be learned incidentally. Arciuli et al. (2014) recently argued that intentional instructions only improve learning at slow presentation rates, as they favor the use of explicit strategies. The aim of the present study was (1) to test this assumption directly by investigating how instructions (incidental vs. intentional) and presentation rate (fast vs. slow) affect the acquisition of knowledge and (2) to examine how these factors influence the conscious vs. unconscious nature of the knowledge acquired. To this aim, we exposed participants to four triplets of shapes, presented sequentially in a pseudo-random order, and assessed their degree of learning in a subsequent completion task that integrated confidence judgments. Supporting Arciuli et al.'s claim, participant performance only benefited from intentional instructions at slow presentation rates. Moreover, informing participants beforehand about the existence of statistical regularities increased their explicit knowledge of the sequences, an effect that was not modulated by presentation speed. These results support the view that, although visual statistical learning can take place incidentally and, to some extent, outside conscious awareness, factors such as presentation rate and prior knowledge can boost learning of these regularities, presumably by favoring the acquisition of explicit knowledge.
Schwab, Jessica F; Schuler, Kathryn D; Stillman, Chelsea M; Newport, Elissa L; Howard, James H; Howard, Darlene V
Language learners must place unfamiliar words into categories, often with few explicit indicators of when and how a word can be used grammatically. Reeder, Newport, and Aslin (2013) showed that college students can learn grammatical form classes from an artificial language by relying solely on distributional information (i.e., contextual cues in the input). Here, two experiments revealed that healthy older adults also show such statistical learning, though they are poorer than young adults at distinguishing grammatical from ungrammatical strings. This finding expands knowledge of which aspects of learning vary with aging, with potential implications for second language learning in late adulthood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Tomcho, Thomas J.; Rice, Diana; Foels, Rob; Folmsbee, Leah; Vladescu, Jason; Lissman, Rachel; Matulewicz, Ryan; Bopp, Kara
Research methods and statistics courses constitute a core undergraduate psychology requirement. We analyzed course syllabi and faculty self-reported coverage of both research methods and statistics course learning objectives to assess the concordance with APA's learning objectives (American Psychological Association, 2007). We obtained a sample of…
Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián
Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good-quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, over the course of a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation, and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well-designed one-semester course should be enough for their basic requirements.
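Several of the tests listed above are one-liners in standard statistical software; a brief sketch using Python's scipy.stats on synthetic data (the measurements are invented for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical measurements from three field sites
a = rng.normal(10, 2, 30)
b = rng.normal(14, 2, 30)
c = rng.normal(10, 2, 30)

f, p_anova = stats.f_oneway(a, b, c)            # one-way ANOVA across the sites
t, p_t = stats.ttest_ind(a, b)                  # Student's t test, two sites
u, p_u = stats.mannwhitneyu(a, b)               # Mann-Whitney U (nonparametric)
rho, p_rho = stats.spearmanr(a, np.arange(30))  # Spearman rank correlation
print(p_anova, p_t, p_u, rho)
```

The point echoed by the abstract: selecting the right procedure and interpreting its output biologically is the hard part; running the test itself is trivial.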
A brief overview is given of the theoretical underpinnings of the Membrane Paradigm for black-hole physics. Those underpinnings are then used to elucidate the Paradigm's view that the laws of black-hole thermodynamics (including the statistical origin of black-hole entropy) are just a special case of the laws of thermodynamics for an ordinary, rotating, thermal reservoir.
Smilkstein, G; Aspy, C B; Quiggins, P A
Conjugal violence has been described as having multiple etiologies. The variables are so numerous that intervention and research protocols are difficult to effect. This paper proposes a paradigm that establishes conjugal conflict and violence as separate entities. According to the paradigm, conjugal conflict is viewed as "an inevitable part of human association," whereas conjugal violence is determined to be a learned behavioral tactic that is employed as a coping strategy when an individual's conflict threshold potential is exceeded. Evidence will be offered that violence is learned from family of origin and from observing what is common or accepted practice in the community. Use of this paradigm would give primacy to community education programs that advance the concept of conflict resolution through rational discourse.
Otsuka, Sachio; Saiki, Jun
Prior studies have shown that visual statistical learning (VSL) enhances familiarity (a type of memory) for sequences. How do statistical regularities influence the processing of each triplet element, and of inserted distractors that disrupt the regularity? Given that VSL increases attention to triplets and inhibits unattended items, we predicted that VSL would promote memory for each triplet constituent and degrade memory for inserted stimuli. Across the first two experiments, we found that objects from structured sequences were more likely to be remembered than objects from random sequences, and that letters (Experiment 1) or objects (Experiment 2) inserted into structured sequences were less likely to be remembered than those inserted into random sequences. In the subsequent two experiments, we examined an alternative account of our results, whereby the difference in memory for inserted items between structured and random conditions is due to individuation of items within random sequences. Our findings replicated even when control letters (Experiment 3A) or objects (Experiment 3B) were presented before or after, rather than inserted into, random sequences. Our findings suggest that statistical learning enhances memory for each item in a regular set and impairs memory for items that disrupt the regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
Ruffman, Ted; Taumoepeau, Mele; Perkins, Chris
Many authors have argued that infants understand goals, intentions, and beliefs. We posit that infants' success on such tasks might instead reveal an understanding of behaviour, that infants' proficient statistical learning abilities might enable such insights, and that maternal talk scaffolds children's learning about the social world as well. We…
In this paper, we present a statistical-mechanical analysis of deep learning. We elucidate some of its essential components: pre-training by unsupervised learning and fine-tuning by supervised learning. We formulate the extraction of features from the training data as a margin criterion in a high-dimensional feature-vector space. The self-organized classifier is then supplied with small amounts of labelled data, as in deep learning. Although we employ a simple single-layer perceptron model, rather than directly analyzing a multi-layer neural network, we find a nontrivial phase transition, dependent on the number of unlabelled data, in the generalization error of the resultant classifier. In this sense, we evaluate the efficacy of the unsupervised-learning component of deep learning. The analysis is performed by the replica method, a sophisticated tool of statistical mechanics. We validate our result in the manner of deep learning, using a simple iterative algorithm to learn the weight vector on the basis of belief propagation.
Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah
Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding the pedagogical characteristics of learning within an e-learning system is 'value added', as it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM significantly promoted students' problem-solving performance at the end of each phase. In addition, the combined differences in students' test scores were statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirm the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students.
Emberson, Lauren L; Rubinstein, Dani Y
The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1-dog1, bird2-dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1-dog_picture1, bird_picture2-dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual
Modifying patients' expectations by exposing them to expectation-violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism of therapeutic success for a variety of mental disorders. However, clinical observations suggest that patients often maintain their expectations even when their experiences contradict them. It remains unclear which information-processing mechanisms lead to modification or persistence of patients' expectations. Insight into this processing could be provided by neuroimaging studies investigating the prediction error (PE), i.e., neuronal reactions to unexpected stimuli. Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs ("passive" paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE ("active" paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation-violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting the neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE with that of paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula, and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Given the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, thereby leading to an expectation change. While further research is needed
McLoughlin, M. Padraig M. M.
The author of this paper submits the thesis that learning requires doing; only through inquiry is learning achieved, and hence this paper proposes a programme of use of a modified Moore method in a Probability and Mathematical Statistics (PAMS) course sequence to teach students PAMS. Furthermore, the author of this paper opines that set theory…
Oleksandr M. Korniiets
The article deals with the application of Web 2.0 social services to the creation of a personal learning environment for the professional-orientation work of a social educator. Effective professional-orientation work requires feedback within the personal learning environment, and this feedback can be organized through statistical monitoring. A typical solution for organizing a personal learning environment with built-in statistical surveys and statistical data processing is considered in the article. The possibilities of statistical data collection and processing services are investigated using the example of Google Analytics.
Heathcock, Jill C; Bhat, Anjana N; Lobo, Michele A; Galloway, James C
Infants born preterm differ from infants born full-term in their spontaneous kicking, as well as in their learning and memory abilities in the mobile paradigm. In the mobile paradigm, a supine infant's ankle is tethered to a mobile so that leg kicks cause a proportional amount of mobile movement. The purpose of this study was to investigate the relative kicking frequency of the tethered (right) and nontethered (left) legs in these 2 groups of infants. Ten infants born full-term and 10 infants born preterm participated in the study. The relative kicking frequencies of the tethered and nontethered legs were analyzed during the learning, short-term memory, and long-term memory periods of the mobile paradigm. Infants born full-term showed an increase in the relative kicking frequency of the tethered leg during the learning period and the short-term memory period, but not for the long-term memory period. Infants born preterm did not show a change in kicking pattern for the learning or memory periods, and consistently kicked both legs in relatively equal amounts. Infants born full-term adapted their baseline kicking frequencies in a task-specific manner to move the mobile, and then retained this adaptation for the short-term memory period. In contrast, infants born preterm showed no adaptation, suggesting a lack of purposeful leg control. This lack of control may reflect a general decrease in the ability of infants born preterm to use their limb movements to interact with their environment. As such, the mobile paradigm may be clinically useful in the early assessment of, and intervention with, infants born preterm and at risk for future impairment.
Dwyer, Dominic B; Falkai, Peter; Koutsouleris, Nikolaos
Machine learning approaches for clinical psychology and psychiatry explicitly focus on learning statistical functions from multidimensional data sets to make generalizable predictions about individuals. The goal of this review is to provide an accessible understanding of why this approach is important for future practice given its potential to augment decisions associated with the diagnosis, prognosis, and treatment of people suffering from mental illness using clinical and biological data. To this end, the limitations of current statistical paradigms in mental health research are critiqued, and an introduction is provided to critical machine learning methods used in clinical studies. A selective literature review is then presented aiming to reinforce the usefulness of machine learning methods and provide evidence of their potential. In the context of promising initial results, the current limitations of machine learning approaches are addressed, and considerations for future clinical translation are outlined.
The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness, and of a limitation of neural networks, for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
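The two statistical properties named above are easy to check on any token stream. A generic sketch on synthetic Zipf-distributed tokens (a stand-in corpus, not the paper's LSTM output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Draw 50,000 tokens from a Zipf(2.0) distribution as a stand-in corpus
tokens = rng.zipf(2.0, 50_000)

# Zipf's law: frequency ~ rank^(-s), i.e. a straight line in log-log space
_, counts = np.unique(tokens, return_counts=True)
freqs = np.sort(counts)[::-1]                      # frequencies in rank order
ranks = np.arange(1, len(freqs) + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(slope)                                       # negative log-log slope

# Heaps' law: vocabulary size grows sublinearly with corpus length
vocab_half = len(set(tokens[:25_000].tolist()))
vocab_full = len(set(tokens.tolist()))
print(vocab_half, vocab_full)
```

Doubling the corpus length far less than doubles the vocabulary, and the rank-frequency plot is close to a power law; these are the diagnostics the paper applies to LSTM-generated text.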
Hauschild, A.C.; Baumbach, Jan; Baumbach, J.
sophisticated statistical learning techniques for VOC-based feature selection and supervised classification into patient groups. We analyzed breath data from 84 volunteers, each of them suffering either from chronic obstructive pulmonary disease (COPD) or from both COPD and bronchial carcinoma (COPD + BC), as well as from 35 healthy volunteers comprising a control group (CG). We standardized and integrated several statistical learning methods to provide a broad overview of their potential for distinguishing the patient groups. We found that there is strong potential for separating MCC/IMS chromatograms of healthy … patients from healthy controls. We conclude that these statistical learning methods have a generally high accuracy when applied to well-structured, medical MCC/IMS data.
Peretz, Isabelle; Saffran, Jenny; Schön, Daniele; Gosselin, Nathalie
The acquisition of both speech and music uses general principles: learners extract statistical regularities present in the environment. Yet individuals who suffer from congenital amusia (commonly called tone-deafness) have experienced lifelong difficulties in acquiring basic musical skills, while their language abilities appear essentially intact. One possible account for this dissociation between music and speech is that amusics lack normal experience with music. If given appropriate exposure, amusics might be able to acquire basic musical abilities. To test this possibility, a group of 11 adults with congenital amusia, and their matched controls, were exposed to a continuous stream of syllables or tones for 21 minutes. Their task was to try to identify three-syllable nonsense words or three-tone motifs having an identical statistical structure. The results of five experiments show that amusics can learn novel words as easily as controls, whereas they systematically fail on musical materials. Thus, inappropriate musical exposure cannot fully account for the musical disorder. Implications of the results for the domain specificity of statistical learning are discussed. © 2012 New York Academy of Sciences.
This paper is a presentation made in support of the statistics profession. The field can claim a major impact on most major fields of study, yet it is not perceived as an important or critical field of study. Nor is it a growth field: witness the almost level number of faculty and new PhDs produced over the past twenty years. The author argues that the profession must do a better job of selling itself to the students it educates: awaken them to the impact of statistics on their lives and their business worlds, so that they see beyond the formulae to the application of these principles.
Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.
Hyperparameterization of statistical models, i.e., automated model scoring and selection via evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and the statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize the statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of soil moisture forecast statistical models for North America.
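Elm's own interface is not shown in this abstract; as a stand-in, the automated model scoring and selection it describes can be sketched with scikit-learn's grid search (the data and estimator below are synthetic placeholders, not NLDAS soil-moisture data or Elm code):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a soil-moisture regression problem
X, y = make_regression(n_samples=300, n_features=8, noise=0.5, random_state=0)

# Candidate model structures/parameters; each combination is scored by cross-validation
grid = {"n_estimators": [25, 50], "max_depth": [3, None]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

A multiobjective optimizer such as NSGA-2 generalizes this idea by searching over preprocessing and model structure against several fitness criteria at once, rather than a single cross-validation score.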
Conversano, Claudio; Vichi, Maurizio
This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...
Ouyang, Long; Boroditsky, Lera; Frank, Michael C
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.
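The distributional-similarity result cited in this abstract (e.g., "postman" and "mailman" sharing contexts like "deliver," "truck," "package") can be sketched with a toy co-occurrence model; the corpus below is invented purely for illustration:

```python
import math
from collections import Counter

# Toy corpus: "postman" and "mailman" share contexts; "apple" does not
sentences = [
    "the postman will deliver the package by truck",
    "the mailman will deliver the package by truck",
    "the postman drives a truck to deliver mail",
    "the mailman drives a truck to deliver mail",
    "i ate a red apple for lunch",
]

def context_vector(word, window=2):
    """Count words occurring within +/- window positions of `word`."""
    vec = Counter()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == word:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        vec[toks[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

print(cosine(context_vector("postman"), context_vector("mailman")))  # high
print(cosine(context_vector("postman"), context_vector("apple")))    # low
```

Words with quantitatively similar context vectors come out as semantically similar, which is exactly the purely statistical knowledge the abstract says computational models exploit.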
Jiang, Yuhong V; Sha, Li Z; Remington, Roger W
This study documented the relative strength of task goals, visual statistical learning, and monetary reward in guiding spatial attention. Using a difficult T-among-L search task, we cued spatial attention to one visual quadrant by (i) instructing people to prioritize it (goal-driven attention), (ii) placing the target frequently there (location probability learning), or (iii) associating that quadrant with greater monetary gain (reward-based attention). Results showed that successful goal-driven attention exerted the strongest influence on search RT. Incidental location probability learning yielded a smaller though still robust effect. Incidental reward learning produced negligible guidance for spatial attention. The 95 % confidence intervals of the three effects were largely nonoverlapping. To understand these results, we simulated the role of location repetition priming in probability cuing and reward learning. Repetition priming underestimated the strength of location probability cuing, suggesting that probability cuing involved long-term statistical learning of how to shift attention. Repetition priming provided a reasonable account for the negligible effect of reward on spatial attention. We propose a multiple-systems view of spatial attention that includes task goals, search habit, and priming as primary drivers of top-down attention.
Over the past decade, it has been clear that even very young infants are sensitive to the statistical structure of language input presented to them, and use the distributional regularities to induce simple grammars. But can such statistically-driven learning also explain the acquisition of more complex grammar, particularly when the grammar includes recursion? Recent claims (e.g., Hauser, Chomsky, and Fitch, 2002) have suggested that the answer is no, and that at least recursion must be an innate capacity of the human language acquisition device. In this talk evidence will be presented that indicates that, in fact, statistically-driven learning (embodied in recurrent neural networks) can indeed enable the learning of complex grammatical patterns, including those that involve recursion. When the results are generalized to idealized machines, it is found that the networks are at least equivalent to Push Down Automata. Perhaps more interestingly, with limited and finite resources (such as are presumed to exist in the human brain) these systems demonstrate patterns of performance that resemble those in humans.
Hypotheses generally conform to paradigms which change over time, usually tardily, after they have become increasingly difficult to sustain under the impact of non-conforming evidence and alternative hypotheses, but more importantly when they are no longer comfortably ensconced in the surrounding social-economic-political-cultural milieu. It is asserted that this milieu is the most important factor in shaping scientific theorizing. Some examples are cited: the rejection of the evidence that the world orbits the sun (suspected by Pythagoras) in favor of centuries-long firm adherence to the Ptolemaic geocentric system; the early acceptance of Natural Selection, in spite of its tautological essence and only conjectural supporting evidence, because it justified contemporaneous social-political ideologies as typified by, e.g., Spencer and Malthus. Economic, social, and cultural factors are cited as providing the ground, i.e., ideational substrate, for what is termed the Discreteness-Chance Paradigm (DCP), which has increasingly dominated physics, biology, and medicine for over a century and which invokes small, discrete packets of energy/matter (quanta, genes, microorganisms, aberrant cells) functioning within an environment of statistical, not determined, causality. There is speculation on a possible paradigmatic shift from the DCP, which has fostered the proliferation, parallel with ("splitting") taxonomy, of alleged individual disease entities, their diagnoses, and, when available, their specific remedies, something particularly prominent in, e.g., psychiatry's Diagnostic and Statistical Manual, a codified compendium of alleged mental and behavioral disorders, but evident in any textbook of diagnosis and treatment of physical ailments. This presumed paradigm shift may be reflected in Western medicine, presently increasingly empirical and atomized, towards a growing acceptance of a more generalized, subject-oriented approach to health and disease, a non
This paper identifies the need for a deliberate approach to theory building in the context of researching cognitive and learning style differences in human performance. A case for paradigm shift and a focus upon research epistemology is presented, building upon a recent critique of style research. A proposal for creating paradigm shift is made,…
Wang, Pan; Feng, Shuai; Fan, Zhun
This paper addresses some issues on the weighted linear integration of modular neural networks (MNN: a paradigm of hybrid multi-learning machines). First, from the general meaning of variable weights and variable elements synthesis, three basic kinds of integrated models are discussed… a general form, while the corresponding computational algorithms are described briefly. The authors present a new training algorithm for sub-networks named "Expert in one thing and good at many" (EOGM). In this algorithm, every sub-network is trained on a primary dataset with some of its near neighbors… as the accessorial datasets. Simulated results with a kind of dynamic integration method show the effectiveness of these algorithms, where the performance of the algorithm with EOGM is better than that of the algorithm with a common training method.
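A minimal sketch of the weighted-linear-integration idea: each sub-network produces an output and the ensemble combines them with linear weights. The sub-networks are stubbed as simple functions, and the weights (here standing in for validation-based weights) are invented for illustration; this is not the EOGM algorithm itself.

```python
def integrate(outputs, weights):
    """Weighted linear integration of sub-network outputs."""
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, outputs)) / total

# Three stub "sub-networks": each estimates 2*x with a different bias,
# standing in for differently trained experts.
subnets = [lambda x: 2.0 * x + 0.3, lambda x: 2.0 * x - 0.1, lambda x: 2.0 * x + 0.05]

# Hypothetical per-expert weights used for the integration.
weights = [0.6, 0.9, 0.8]

x = 1.5
outputs = [f(x) for f in subnets]
combined = integrate(outputs, weights)
print(round(combined, 3))  # weighted estimate near the true value 3.0
```

The combined estimate lies between the individual experts' outputs, pulled toward the more heavily weighted ones, which is the essence of the integration schemes the paper discusses.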
Wu, Yazhou; Zhang, Ling; Liu, Ling; Zhang, Yanqi; Liu, Xiaoyu; Yi, Dong
It is clear that the teaching of medical statistics needs to be improved, yet areas for priority are unclear as medical students' learning and application of statistics at different levels is not well known. Our goal is to assess the attitudes of medical students toward the learning and application of medical statistics, and discover their…
The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has
Vargas-Vargas, Manuel; Mondejar-Jimenez, Jose; Santamaria, Maria-Letica Meseguer; Alfaro-Navarro, Jose-Luis; Fernandez-Aviles, Gema
This document sets out a novel teaching methodology as used in subjects with statistical content, traditionally regarded by students as "difficult". In a virtual learning environment, instructional techniques little used in mathematical courses were employed, such as the Jigsaw cooperative learning method, which had to be adapted to the…
The purpose of this study is to define teacher views about the difficulties in learning and teaching middle school statistics subjects. To serve this aim, a number of interviews were conducted with 10 middle school maths teachers in 2011-2012 school year in the province of Trabzon. Of the qualitative descriptive research methods, the…
It is well established that mood influences many cognitive processes, such as learning and executive functions. Although statistical learning is assumed to be part of our daily life, as mood is, the influence of mood on statistical learning has never been investigated before. In the present study, a sad vs. neutral mood was induced in participants through the listening of stories while they were exposed to a stream of visual shapes made up of the repeated presentation of four triplets, namely sequences of three shapes presented in a fixed order. Given that the inter-stimulus interval was held constant within and between triplets, the only cues available for triplet segmentation were the transitional probabilities between shapes. Direct and indirect measures of learning taken either immediately or 20 minutes after the exposure/mood induction phase revealed that participants learned the statistical regularities between shapes. Interestingly, although participants from the sad and neutral groups performed similarly in these tasks, subjective measures (confidence judgments taken after each trial) revealed that participants who experienced the sad mood induction showed increased conscious access to their statistical knowledge. These effects were not modulated by the time delay between the exposure/mood induction and the test phases. These results are discussed within the scope of the robustness principle and the influence of negative affects on processing style.
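The segmentation cue described above can be sketched directly: the transitional probability TP(B|A) = count(A followed by B) / count(A). In a stream built from fixed triplets, within-triplet transitions have TP = 1 while transitions across triplet boundaries are lower. The shape labels and stream below are illustrative.

```python
import random
from collections import Counter

random.seed(0)
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

# Build a stream of 200 randomly chosen triplets (repeats allowed).
stream = [s for _ in range(200) for s in random.choice(triplets)]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(b | a) estimated from the stream."""
    return pair_counts[(a, b)] / first_counts[a]

print(tp("A", "B"))        # within-triplet transition: perfectly predictable
print(tp("C", "D") < 1.0)  # True: boundary transition is unpredictable
```

The within-triplet TP comes out at exactly 1.0 while boundary TPs hover around chance, which is the dip learners are assumed to exploit when segmenting the stream.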
Samsa, Gregory P.; LeBlanc, Thomas W.; Zaas, Aimee; Howie, Lynn; Abernethy, Amy P.
The core pedagogic problem considered here is how to effectively teach statistics to physicians who are engaged in a "learning health system" (LHS). This is a special case of a broader issue--namely, how to effectively teach statistics to academic physicians for whom research--and thus statistics--is a requirement for professional…
Salverda, Anne Pier
Lieberman, Borovsky, Hatrak, and Mayberry (2015) used a modified version of the visual-world paradigm to examine the real-time processing of signs in American Sign Language. They examined the activation of phonological and semantic competitors in native signers and late-learning signers and concluded that their results provide evidence that the…
Monroy, Claire D; Gerson, Sarah A; Hunnius, Sabine
Humans are sensitive to the statistical regularities in action sequences carried out by others. In the present eyetracking study, we investigated whether this sensitivity can support the prediction of upcoming actions when observing unfamiliar action sequences. In two between-subjects conditions, we examined whether observers would be more sensitive to statistical regularities in sequences performed by a human agent versus self-propelled 'ghost' events. Secondly, we investigated whether regularities are learned better when they are associated with contingent effects. Both implicit and explicit measures of learning were compared between agent and ghost conditions. Implicit learning was measured via predictive eye movements to upcoming actions or events, and explicit learning was measured via both uninstructed reproduction of the action sequences and verbal reports of the regularities. The findings revealed that participants, regardless of condition, readily learned the regularities and made correct predictive eye movements to upcoming events during online observation. However, different patterns of explicit-learning outcomes emerged following observation: Participants were most likely to re-create the sequence regularities and to verbally report them when they had observed an actor create a contingent effect. These results suggest that the shift from implicit predictions to explicit knowledge of what has been learned is facilitated when observers perceive another agent's actions and when these actions cause effects. These findings are discussed with respect to the potential role of the motor system in modulating how statistical regularities are learned and used to modify behavior.
The article explores paradigms for approaching course content to be studied in the classroom. These paradigms, or global views about what is of interest or importance and ways of knowing, relate to key questions in gerontology, such as what is the relevant domain/content to be studied, what is the central level of analysis or action, what are…
The paper presents the application of a hybrid method (blended learning: linking traditional education with on-line education) to teach selected problems of mathematical statistics. This includes teaching the application of mathematical statistics to evaluate laboratory experimental results. An on-line statistics course was developed to form an integral part of the module 'methods of statistical evaluation of experimental results'. The course complies with the principles outlined in the Polish National Framework of Qualifications with respect to the scope of knowledge, skills and competencies that students should have acquired at course completion. The paper presents the structure of the course and the educational content provided through multimedia lessons made accessible on the Moodle platform. Following courses which used the traditional method of teaching and courses which used the hybrid method, students' test results were compared and discussed to evaluate the effectiveness of the hybrid method of teaching relative to the traditional method.
The purpose of this paper is to introduce a new paradigm able to conceptualize content and process aspects of Operations Strategy. Based on a critical reading of the literature, two opposing paradigms of Operations Strategy are identified and described. The first focuses on content issues… of Operations Strategy and relies on a normative orientation, and the second focuses on process issues of Operations Strategy and relies on a descriptive orientation. To compare and evaluate the two paradigms, the results of a longitudinal case study of Operations Strategy formulation and implementation… in practice are shown. These results promote the need for a new or third paradigm to integrate and balance the two former paradigms. The new paradigm is labeled a moderate constructivist paradigm using the metaphor of chaos and seems suitable for conceptualizing Operations Strategy as it is in practice…
Chen, Liwen; Chen, Tung-Liang; Chen, Nian-Shing
Statistics has been recognised as one of the most anxiety-provoking subjects to learn in the higher education context. Educators have continuously endeavoured to find ways to integrate digital technologies and innovative pedagogies in the classroom to eliminate the fear of statistics. The purpose of this study is to systematically identify…
Gómez, Rebecca L
Statistical structure abounds in language. Human infants show a striking capacity for using statistical learning (SL) to extract regularities in their linguistic environments, a process thought to bootstrap their knowledge of language. Critically, studies of SL test infants in the minutes immediately following familiarization, but long-term retention unfolds over hours and days, with almost no work investigating retention of SL. This creates a critical gap in the literature given that we know little about how single or multiple SL experiences translate into permanent knowledge. Furthermore, different memory systems with vastly different encoding and retention profiles emerge at different points in development, with the underlying memory system dictating the fidelity of the memory trace hours later. I describe the scant literature on retention of SL, the learning and retention properties of memory systems as they apply to SL, and the development of these memory systems. I propose that different memory systems support retention of SL in infant and adult learners, suggesting an explanation for the slow pace of natural language acquisition in infancy. I discuss the implications of developing memory systems for SL and suggest that we exercise caution in extrapolating from adult to infant properties of SL. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Richter, Sophie Helene; Sartorius, Alexander; Gass, Peter; Vollmayr, Barbara
Learned helplessness has excellent validity as an animal model for depression, but problems in reproducibility limit its use and the high degree of stress involved in the paradigm raises ethical concerns. We therefore aimed to identify which and how many trials of the learned helplessness paradigm are necessary to distinguish between helpless and non-helpless rats. A trial-by-trial reanalysis of tests from 163 rats with congenital learned helplessness or congenital non-learned helplessness and comparison of 82 rats exposed to inescapable shock with 38 shock-controls revealed that neither the first test trials, when rats showed unspecific hyperlocomotion, nor trials of the last third of the test, when almost all animals responded quickly to the stressor, contributed to sensitivity and specificity of the test. Considering only trials 3-10 improved the classification of helpless and non-helpless rats. The refined analysis allows abbreviation of the test for learned helplessness from 15 trials to 10 trials thereby reducing pain and stress of the experimental animals without losing statistical power.
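The trial-window refinement described above amounts to recomputing sensitivity and specificity from classification counts for different subsets of trials. A minimal sketch of the bookkeeping, with made-up counts (not the paper's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical classification of helpless (positive) vs. non-helpless rats
# when scoring all 15 trials vs. only trials 3-10.
all_trials = sens_spec(tp=70, fn=12, tn=60, fp=21)
trials_3_to_10 = sens_spec(tp=76, fn=6, tn=72, fp=9)

print(all_trials)
print(trials_3_to_10)  # both sensitivity and specificity improve
```

With counts like these, dropping the uninformative early and late trials raises both measures, which is the pattern that justified shortening the test from 15 to 10 trials.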
Kostrubiec, Viviane; Dumas, Guillaume; Zanone, Pier-Giorgio; Kelso, J A Scott
The Virtual Teacher paradigm, a version of the Human Dynamic Clamp (HDC), is introduced into studies of learning patterns of inter-personal coordination. Combining mathematical modeling and experimentation, we investigate how the HDC may be used as a Virtual Teacher (VT) to help humans co-produce and internalize new inter-personal coordination pattern(s). Human learners produced rhythmic finger movements whilst observing a computer-driven avatar, animated by dynamic equations stemming from the well-established Haken-Kelso-Bunz (1985) and Schöner-Kelso (1988) models of coordination. We demonstrate that the VT is successful in shifting the pattern co-produced by the VT-human system toward any value (Experiment 1) and that the VT can help humans learn unstable relative phasing patterns (Experiment 2). Using transfer entropy, we find that information flow from one partner to the other increases when VT-human coordination loses stability. This suggests that variable joint performance may actually facilitate interaction, and in the long run learning. VT appears to be a promising tool for exploring basic learning processes involved in social interaction, unraveling the dynamics of information flow between interacting partners, and providing possible rehabilitation opportunities.
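The avatar dynamics referenced above stem from the HKB model of relative phase, dφ/dt = Δω − a·sin(φ) − 2b·sin(2φ). A minimal sketch (Euler integration, illustrative parameter values, not the full VT coupling) shows relaxation toward the stable in-phase pattern:

```python
from math import sin

def hkb_step(phi, dw=0.0, a=1.0, b=1.0, dt=0.01):
    """One Euler step of the HKB relative-phase equation."""
    return phi + dt * (dw - a * sin(phi) - 2 * b * sin(2 * phi))

phi = 1.0  # initial relative phase (radians)
for _ in range(5000):
    phi = hkb_step(phi)

print(abs(phi) < 0.01)  # True: converged to the in-phase attractor (phi = 0)
```

With these parameters the in-phase pattern (φ = 0) is an attractor, so any nearby starting phase relaxes to it; learning an unstable phasing pattern, as in Experiment 2, means overcoming exactly this pull.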
Sproesser, Ute; Engel, Joachim; Kuntze, Sebastian
Supporting motivational variables such as self-concept or interest is an important goal of schooling as they relate to learning and achievement. In this study, we investigated whether specific interest and self-concept related to the domains of statistics and mathematics can be fostered through a four-lesson intervention focusing on statistics.…
Mutihac, R.; Mutihac, R.C.
A broad range of approaches has been proposed and applied for the complex and rather difficult task of object recognition, which involves the determination of object characteristics and the classification of objects into one of many a priori object types. Our paper briefly reviews the three main paradigms in pattern recognition, namely Bayesian statistics, neural networks, and expert systems.
Much has been written on the learning needs of dyslexic and dyscalculic students in primary and early secondary education. However, it is not clear that the necessary disability support staff and specialist literature are available to ensure that these needs are being adequately met within the context of learning statistics and general quantitative skills in the self-directed learning environments encountered in higher education. This commentary draws attention to dyslexia and dyscalculia as two potentially unrecognized conditions among undergraduate medical students and in turn, highlights key developments from recent literature in the diagnosis of these conditions. With a view to assisting medical educators meet the needs of dyscalculic learners and the more varied needs of dyslexic learners, a comprehensive list of suggestions is provided as to how learning resources can be designed from the outset to be more inclusive. A hitherto neglected area for future research is also identified through a call for a thorough investigation of the meaning of statistical literacy within the context of the undergraduate medical curriculum.
Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T
Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
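The skill scores quoted above can be illustrated with the standard mean-squared-error skill score, S = (1 − MSE_model / MSE_reference) × 100%, where the reference is the simple baseline (here, homogeneous-atmosphere propagation). The exact definition the authors used may differ slightly, and the numbers below are made up.

```python
def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, ref_pred, obs):
    """Percent skill relative to a reference model (100% = perfect, 0% = no better)."""
    return 100.0 * (1.0 - mse(model_pred, obs) / mse(ref_pred, obs))

obs = [10.0, 14.0, 9.0, 16.0, 12.0]    # "observed" transmission loss (dB)
ref = [12.0, 12.0, 12.0, 12.0, 12.0]   # homogeneous-atmosphere baseline
good = [10.2, 13.8, 9.1, 15.7, 12.1]   # a close-fitting statistical model
print(round(skill_score(good, ref, obs), 1))  # 99.4
```

A model that tracks the observations closely scores near 100%, while a model worse than the baseline goes negative, as with the -7.1% Harmonoise result above.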
Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe
Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses developed rapidly, reaching significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is underpinned by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are supported by distinct neural sources.
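Frequency tagging works because a stream structured into triplets injects energy at one-third of the tone presentation rate. A minimal sketch (pure-Python DFT, synthetic response amplitudes) shows the triplet-rate peak emerging only when responses differ across triplet positions; the signal model is an assumption for illustration, not the MEG analysis itself.

```python
import cmath

def power_at(signal, cycles):
    """DFT power of `signal` at `cycles` cycles per record."""
    n = len(signal)
    coef = sum(x * cmath.exp(-2j * cmath.pi * cycles * k / n)
               for k, x in enumerate(signal))
    return abs(coef) ** 2 / n

# Simulated per-tone response amplitudes for 90 tones = 30 triplets.
# Structured stream: triplet-initial tones evoke a larger response.
structured = [1.0 if k % 3 == 0 else 0.4 for k in range(90)]
random_like = [0.6] * 90  # no position-dependent modulation

# One cycle per triplet = 30 cycles across the 90-tone record.
print(power_at(structured, 30) > power_at(random_like, 30))  # True
```

Only the structured signal carries power at the triplet rate; the flat signal has essentially none, mirroring the tritone-frequency response that grows as listeners learn the triplets.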
Morphological analysis is a prerequisite for many natural language processing tasks. For inflectionally rich languages such as Croatian, morphological analysis typically relies on a morphological lexicon, which lists the lemmas and their paradigms. However, a real-life morphological analyzer must also be able to handle out-of-vocabulary words properly. We address the task of predicting the correct inflectional paradigm of unknown Croatian words. We frame this as a supervised machine learning problem: we train a classifier to predict whether a candidate lemma-paradigm pair is correct based on a number of string- and corpus-based features. The candidate lemma-paradigm pairs are generated using a handcrafted morphology grammar. Our aim is to examine the machine learning aspect of the problem: we test a comprehensive set of features and evaluate the classification accuracy using different feature subsets. We show that satisfactory classification accuracy (92%) can be achieved with SVM using a combination of string- and corpus-based features. On a per-word basis, the F1-score is 53% and accuracy is 70%, which outperforms a frequency-based baseline by a wide margin. We discuss a number of possible directions for future research.
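The candidate-generation step and the frequency-based baseline mentioned above can be sketched: generate every lemma-paradigm pair whose suffix matches the unknown word form, then pick the candidate whose paradigm is most frequent in a corpus. The suffix table and counts below are invented for illustration, not real Croatian morphology.

```python
# Invented suffix table: paradigm -> suffix stripped to obtain the lemma.
paradigms = {"P-a": "a", "P-om": "om", "P-e": "e"}

# Hypothetical corpus counts of how often each paradigm occurs.
paradigm_freq = {"P-a": 500, "P-om": 120, "P-e": 80}

def candidates(word):
    """All (lemma, paradigm) pairs whose suffix matches the word form."""
    return [(word[: -len(suf)], p) for p, suf in paradigms.items() if word.endswith(suf)]

def frequency_baseline(word):
    """Pick the candidate with the most frequent paradigm."""
    cands = candidates(word)
    return max(cands, key=lambda c: paradigm_freq[c[1]]) if cands else None

print(candidates("glavom"))          # [('glav', 'P-om')]
print(frequency_baseline("glavom"))  # ('glav', 'P-om')
```

The paper's classifier replaces the frequency lookup with an SVM scoring each candidate on string- and corpus-based features, which is what lifts accuracy well above this baseline.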
Zhang, Yi; Melko, Roger G.; Kim, Eun-Ah
After decades of progress and effort, obtaining a phase diagram for a strongly correlated topological system still remains a challenge. Although in principle one could turn to Wilson loops and long-range entanglement, evaluating these nonlocal observables at many points in phase space can be prohibitively costly. With growing excitement over topological quantum computation comes the need for an efficient approach for obtaining topological phase diagrams. Here we turn to machine learning using quantum loop topography (QLT), a notion we have recently introduced. Specifically, we propose a construction of QLT that is sensitive to quasiparticle statistics. We then use mutual statistics between the spinons and visons to detect a Z2 quantum spin liquid in a multiparameter phase space. We successfully obtain the quantum phase boundary between the topological and trivial phases using a simple feed-forward neural network. Furthermore, we demonstrate advantages of our approach for the evaluation of phase diagrams relating to speed and storage. Such statistics-based machine learning of topological phases opens new efficient routes to studying topological phase diagrams in strongly correlated systems.
Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.
Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear
Pierrehumbert, Janet B
In learning to perceive and produce speech, children master complex language-specific patterns. Daunting language-specific variation is found both in the segmental domain and in the domain of prosody and intonation. This article reviews the challenges posed by results in phonetic typology and sociolinguistics for the theory of language acquisition. It argues that categories are initiated bottom-up from statistical modes in use of the phonetic space, and sketches how exemplar theory can be used to model the updating of categories once they are initiated. It also argues that bottom-up initiation of categories is successful thanks to the perception-production loop operating in the speech community. The behavior of this loop means that the superficial statistical properties of speech available to the infant indirectly reflect the contrastiveness and discriminability of categories in the adult grammar. The article also argues that the developing system is refined using internal feedback from type statistics over the lexicon, once the lexicon is well-developed. The application of type statistics to a system initiated with surface statistics does not cause a fundamental reorganization of the system. Instead, it exploits confluences across levels of representation which characterize human language and make bootstrapping possible.
Purwins, Hendrik; Marchini, Marco; Marxer, Richard
of sounds into phonetic/instrument categories and learning of instrument event sequences is performed jointly using a Hierarchical Dirichlet Process Hidden Markov Model. Whereas machines often learn by processing a large data base and subsequently updating parameters of the algorithm, humans learn… and their respective transition counts. We propose to use online learning for the co-evolution of both CI user and machine in (re-)learning musical language. Marco Marchini and Hendrik Purwins. Unsupervised analysis and generation of audio percussion sequences. In International Symposium on Computer Music Modeling… categories) as well as the temporal context horizon (e.g. storing up to 2-note sequences or up to 10-note sequences) is adaptable. The framework is based on two cognitively plausible principles: unsupervised learning and statistical learning. Opposed to supervised learning in primary school children…
Pusic, Martin V.; Boutis, Kathy; Pecaric, Martin R.; Savenkov, Oleksander; Beckstead, Jason W.; Jaber, Mohamad Y.
Learning curves are a useful way of representing the rate of learning over time. Features include an index of baseline performance (y-intercept), the efficiency of learning over time (slope parameter) and the maximal theoretical performance achievable (upper asymptote). Each of these parameters can be statistically modelled on an individual and…
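The three learning-curve parameters can be made concrete with a common exponential form, y(t) = A − (A − y0)·e^(−kt): y0 is the y-intercept (baseline performance), k governs the efficiency of learning, and A is the upper asymptote. A minimal sketch with illustrative parameter values, not a fitted model:

```python
from math import exp

def learning_curve(t, y0=0.55, A=0.95, k=0.12):
    """Exponential learning curve: baseline y0, asymptote A, learning rate k."""
    return A - (A - y0) * exp(-k * t)

print(round(learning_curve(0), 2))     # baseline performance (y-intercept)
print(round(learning_curve(1000), 2))  # approaches the upper asymptote
```

Fitting y0, k, and A per learner, as the abstract describes, turns each parameter into an interpretable index: where a trainee starts, how fast they improve, and where their performance plateaus.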
The evolution of Computer Assisted Language Learning (CALL) is the subject of many studies; here we address issues linked to new approaches in the cognitive sciences, in particular the distributed cognition paradigm and the latest developments in Computer Supported Collaborative Learning (CSCL). After a review of the discussions carried out in these fields, we point out their repercussions for CALL, focusing on current literature in English, at present more abundant than the French-language literature. Some of the questions we try to elucidate are the following: how do the various trends in CALL respond to the discussions about the non-dissociability of social and cultural (in particular technological) properties in learning and cognition? What do the current changes in CALL tell us about the relevance of this approach? What would its potential be for theoretical and methodological discussions in CALL?
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal, Ginsburg, & Schau, 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof, Ceroni, Jeong, & Moghaddam, 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to...
.... Our research blends methods from several fields-statistics and probability, signal and image processing, mathematical physics, scientific computing, statistical learning theory, and differential...
James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan
In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems such as Cogent, E-Prime, and Presentation, along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part mainly focuses on various fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result.
This study aimed to investigate whether interindividual differences in autonomic inhibitory control predict safety learning and fear extinction in an interoceptive fear conditioning paradigm. Data from a previously reported study (N = 40) were extended (N = 17) and re-analyzed to test whether healthy participants' resting heart rate variability (HRV) - a proxy of cardiac vagal tone - predicts learning performance. The conditioned stimulus (CS) was a slight sensation of breathlessness induced by a flow resistor; the unconditioned stimulus (US) was an aversive short-lasting suffocation experience induced by a complete occlusion of the breathing circuitry. During acquisition, the paired group received 6 paired CS-US presentations; the control group received 6 explicitly unpaired CS-US presentations. In the extinction phase, both groups were exposed to 6 CS-only presentations. Measures included startle blink EMG, skin conductance responses (SCR) and US-expectancy ratings. Resting HRV significantly predicted the startle blink EMG learning curves both during acquisition and extinction. In the unpaired group, higher levels of HRV at rest predicted safety learning to the CS during acquisition. In the paired group, higher levels of HRV were associated with better extinction. Our findings suggest that the strength or integrity of prefrontal inhibitory mechanisms involved in safety- and extinction learning can be indexed by HRV at rest.
Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József
Individuals with autism spectrum disorder (ASD) are often characterized as having deficits in social engagement and language, but spared visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning of these high-covariation shape-pairs was superior in adults with ASD relative to age-matched controls, while performance in children with ASD was no different from controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115
Fauziah, D.; Mardiyana; Saputro, D. R. S.
Mathematics authentic assessment is a form of meaningful measurement of student learning outcomes in the spheres of attitude, skill and knowledge in mathematics. The construction of attitude, skill and knowledge is achieved through the fulfilment of tasks which involve the active and creative role of the students. One type of authentic assessment is the student mini project, which proceeds from planning through data collecting, organizing, processing, analysing and presenting the data. The purpose of this research is to study the process of using authentic assessment in statistics learning as conducted by teachers, and to discuss specifically the use of mini projects to improve students' learning in the schools of Surakarta. This research is an action research, where the data were collected through the assessment rubrics of student mini projects. The analysis shows that the average rubric score for student mini projects is 82, with 96% classical completeness. This study shows that the application of authentic assessment can improve students' mathematics learning outcomes. Findings showed that teachers and students participate actively during the teaching and learning process, both inside and outside of the school. Student mini projects also provide opportunities to interact with other people in a real context while collecting information and giving presentations to the community. Additionally, students are able to achieve more in the process of statistics learning using authentic assessment.
Iniesta, Raquel; Stahl, Daniel Richard; McGuffin, Peter
Psychiatric research has entered the age of ‘Big Data’. Datasets now routinely involve thousands of heterogeneous variables, including clinical, neuroimaging, genomic, proteomic, transcriptomic and other ‘omic’ measures. The analysis of these datasets is challenging, especially when the number of measurements exceeds the number of individuals, and may be further complicated by missing data for some subjects and variables that are highly correlated. Statistical learning-based models are a n...
Alhawiti, Mohammed M.; Abdelhamid, Yasser
With the advent of web based learning and content management tools, e-learning has become a matured learning paradigm, and changed the trend of instructional design from instructor centric learning paradigm to learner centric approach, and evolved from "one instructional design for many learners" to "one design for one learner"…
Stewart, James H.; Atkin, Julia A.
Three research paradigms, those of Ausubel, Gagné and Piaget, have received a great deal of attention in the literature of science education. In this article a fourth paradigm is presented - an information processing psychology paradigm. The article is composed of two sections. The first section describes a model of memory developed by information processing psychologists. The second section describes how such a model could be used to guide science education research on learning and problem solving. Received: 19 October 1981
Orlov A. I.
On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods are identified. We also discuss five topical areas in which modern applied statistics is developing, i.e. five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data.
Malzahn, Doerthe; Opper, Manfred
Using a variational technique, we generalize the statistical physics approach of learning from random examples to make it applicable to real data. We demonstrate the validity and relevance of our method by computing approximate estimators for generalization errors that are based on training data alone.
This book develops two key machine learning principles: the semi-supervised paradigm and learning with interdependent data. It reveals new applications, primarily web related, that transgress the classical machine learning framework through learning with interdependent data. The book traces how the semi-supervised paradigm and the learning to rank paradigm emerged from new web applications, leading to a massive production of heterogeneous textual data. It explains how semi-supervised learning techniques are widely used, but only allow a limited analysis of the information content and thus d
Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse
Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
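The core CWoLa idea, that a classifier trained only to tell two class mixtures apart also ends up separating the underlying classes, can be illustrated with a minimal sketch. The Gaussian "signal"/"background" toy data, the signal fractions, and the histogram-ratio classifier below are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, frac_signal):
    """Mixture of toy 'signal' N(1, 1) and 'background' N(-1, 1)."""
    n_sig = int(n * frac_signal)
    x = np.concatenate([rng.normal(1.0, 1.0, n_sig),
                        rng.normal(-1.0, 1.0, n - n_sig)])
    y = np.concatenate([np.ones(n_sig), np.zeros(n - n_sig)])  # true labels, NOT used for training
    return x, y

# Two mixed samples with different (and, to the classifier, unknown) signal fractions
x1, y1 = sample(5000, 0.8)
x2, y2 = sample(5000, 0.2)

# CWoLa-style classifier: score each x by the estimated density ratio
# p(x | mixture 1) / p(x | mixture 2), using histograms as crude density estimates.
bins = np.linspace(-5, 5, 41)
h1, _ = np.histogram(x1, bins=bins, density=True)
h2, _ = np.histogram(x2, bins=bins, density=True)
ratio = (h1 + 1e-6) / (h2 + 1e-6)

def score(x):
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(ratio) - 1)
    return ratio[idx]

# Although trained only on mixture membership, the score separates the true classes:
x_all = np.concatenate([x1, x2])
y_all = np.concatenate([y1, y2])
sig_mean = score(x_all[y_all == 1]).mean()
bkg_mean = score(x_all[y_all == 0]).mean()
print(sig_mean > bkg_mean)  # signal receives systematically higher scores than background
```

The mixture-vs-mixture density ratio is monotone in the signal-vs-background likelihood ratio whenever the two mixtures have different signal fractions, which is why the mixture-trained score also ranks true signal above true background.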
Rodriguez-Sickert, Carlos; Cosmelli, Diego; Claro, Francisco; Fuentes, Miguel Angel
We develop here a multi-agent model of the creation of knowledge (scientific progress or technological evolution) within a community of researchers devoted to such endeavors. In the proposed model, agents learn in a physical-technological landscape, and weight is attached to both individual search and social influence. We find that the combination of these two forces together with random experimentation can account for both i) marginal change, that is, periods of normal science or refinements on the performance of a given technology (and in which the community stays in the neighborhood of the current paradigm); and ii) radical change, which takes the form of scientific paradigm shifts (or discontinuities in the structure of performance of a technology) that is observed as a swift migration of the knowledge community towards the new and superior paradigm. The efficiency of the search process is heavily dependent on the weight that agents posit on social influence. The occurrence of a paradigm shift becomes more likely when each member of the community attaches a small but positive weight to the experience of his/her peers. For this parameter region, nevertheless, a conservative force is exerted by the representatives of the current paradigm. However, social influence is not strong enough to seriously hamper individual discovery, and can act so as to empower successful individual pioneers who have conquered the new and superior paradigm.
Minsu Song; Jonghyun Kim
Motor imagery (MI) has been widely used in neurorehabilitation and brain-computer interfaces. The size of event-related desynchronization (ERD) is a key parameter for successful motor imagery rehabilitation and BCI adaptation. Many studies have used visual guidance for enhancement/amplification of motor imagery ERD amplitude, but their enhancements were not significant. We propose a novel ERD-enhancing paradigm using body-ownership illusion, also known as the rubber hand illusion (RHI). The system consists of a motorized, moving rubber hand which can simulate wrist extension. The amplifying effects of the proposed RHI paradigm were evaluated by comparing the ERD sizes of the proposed paradigm with those of motor imagery and actual motor execution paradigms. The comparison shows that the improvement of ERD size due to the proposed paradigm was statistically significant compared with the other paradigms.
Chiesi, Francesca; Primi, Caterina; Bilgin, Ayse Aysin; Lopez, Maria Virginia; del Carmen Fabrizio, Maria; Gozlu, Sitki; Tuan, Nguyen Minh
The aim of the current study was to provide evidence that an abbreviated version of the Approaches and Study Skills Inventory for Students (ASSIST) was invariant across different languages and educational contexts in measuring university students' learning approaches to statistics. Data were collected on samples of university students attending…
Neumann, David L.; Neumann, Michelle M.; Hood, Michelle
The discipline of statistics seems well suited to the integration of technology in a lecture as a means to enhance student learning and engagement. Technology can be used to simulate statistical concepts, create interactive learning exercises, and illustrate real world applications of statistics. The present study aimed to better understand the…
This book is published by Idea Group Publishing. It has three sections consisting of sixteen chapters, and in addition an author bibliography and an index. 28 authors, including the editors, have contributed to the book. The first section focuses on strategies and paradigms. The second section is about course development, instruction and quality issues.
An, Sang Ha; Chang, Soon Heung; Heo, Gyun Young; Seo, Ho Joon; Kim, Su Young
As systems become more complex and more critical in our daily lives, the need for maintenance based on reliable monitoring and diagnosis has become more apparent. However, in reality, the general opinion has been that 'maintenance is a necessary evil' or 'nothing can be done to improve maintenance costs'. Perhaps these were true statements twenty years ago, when many of the diagnostic technologies were not fully developed. The development of microprocessor- or computer-based instrumentation that can be used to monitor the operating condition of plant equipment, machinery and systems has provided the means to manage the maintenance operation. It has provided the means to reduce or eliminate unnecessary repairs, prevent catastrophic machine failures and reduce the negative impact of the maintenance operation on the profitability of manufacturing and production plants. Condition-based maintenance (CBM) techniques help determine the condition of in-service equipment in order to predict when maintenance should be performed. Most statistical learning techniques are only valid as long as the physics of a system does not change. If any significant change, such as the replacement of a component or equipment, occurs in the system, the statistical learning model should be re-trained or re-developed to adapt to the new system. In this research, the authors propose a statistical learning framework applicable to various CBM settings, and describe the concept of an adaptive retraining technique to support the execution of the framework, so that the monitoring system does not need to be re-developed or re-trained even when there are significant changes in the system or component.
Joseph George Mallia
Intercultural communication has led to a greater need for a lingua franca such as English to be used internationally in both interpersonal and transactional domains of life among culturally-diverse societies. Despite the cultural diversity in which English is taught, a 'one size fits all' strategy, essentially based on communicative language teaching (CLT) and universally available textbooks, seems to be the main, if not only, contemporary teaching paradigm that is actively proposed, particularly in non-Western environments. This often goes against the 'culture of teaching' present in these very same communities, where the cultural expectations, facilities or logistics may not favour the successful use of CLT. Furthermore, many non-Western communities may not necessarily identify with the 'culture in teaching', where the language being taught is embedded in textbook cultural scenarios which may not be meaningful, helpful or relevant. Rather than CLT, studies in English native and non-native countries are generating a body of evidence showing that students with the strongest academic outcomes have teachers who use effective instructional practices such as explicit teaching. For example, while many non-Western countries are strongly encouraged to use CLT, paradoxically, English native speaker countries such as Australia have adopted explicit teaching even at the national school curriculum level. This paper outlines the main characteristics of explicit teaching and why non-Western learning communities should take a more pro-active role in establishing culturally-appropriate English courses based on the explicit teaching paradigm.
Bougarel, Laure; Guitton, Jérôme; Zimmer, Luc; Vaugeois, Jean-Marie; El Yacoubi, Malika
H/Rouen mice (displaying a helpless phenotype in the tail suspension test), which exhibit features of depressive disorders, and NH/Rouen mice (displaying a non-helpless phenotype) were previously created through behavioural screening and selective breeding. Learned helplessness (LH), in which footshock stress induces a coping deficit, models some aspects of depression in rodents, but so far fewer LH studies have been performed in mice than in rats. The aim was to study H/Rouen and NH/Rouen mice in the LH paradigm. When CD1 mice were submitted to footshock with various training durations and shock intensities, the most suitable parameters to induce a behavioural deficit were 0.3 mA and four training sessions. A significantly longer latency to escape shocks was found in male H/Rouen mice compared to male NH/Rouen mice. On the other hand, once shocked, NH/Rouen mice showed more severe coping deficits than H/Rouen mice. In addition, a sub-chronic treatment with fluoxetine lacked efficacy in NH/Rouen mice, whereas it improved performance in H/Rouen mice. We also found that a shock reminder at day 8, subsequent to inescapable shocks, maintained helplessness for 20 days. Finally, female H/Rouen mice responded to chronic fluoxetine administration after 10 days of treatment, while a 20-day treatment was necessary to improve the behavioural deficit in male H/Rouen mice. The H/Rouen and NH/Rouen lines displayed different despair-related behaviour in the LH paradigm. Fluoxetine had beneficial effects after sub-chronic or chronic, but not acute, treatment of H/Rouen mice, thus providing a pharmacological validation of the protocols.
Alberto VALENTÍN CENTENO
The teaching of the statistics course in Applied Psychology was based on different teaching models that incorporate active teaching methodologies. In this experience, approaches that prioritize the use of ICT were combined with others where evaluation becomes an element of learning. This involved the use of virtual platforms that support teaching, facilitate learning, and combine face-to-face and online activities. The design of the components of the course is inspired by the dimensions proposed in Carless's (2003) model, which uses evaluation as a learning element. The development of this experience has shown that the didactic proposal was positively received by students. Students recognized that they had to learn and deeply understand the basic concepts of the subject, so that they could teach and assess their peers.
Omigie, Diana; Stewart, Lauren
Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music's statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities - specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However, analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities, and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities, but may arise from other factors.
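The lower-order transitional probabilities at issue in such statistical learning studies are simple to compute from a sequence. The sketch below, with a hypothetical tone stream built from a recurring triplet, shows how within-"word" transitions acquire higher probability than across-"word" transitions; the stream and symbols are illustrative, not the study's stimuli:

```python
from collections import Counter

def transition_probs(seq):
    """Estimate first-order transitional probabilities P(next | current) from a sequence."""
    pair_counts = Counter(zip(seq, seq[1:]))   # counts of adjacent pairs
    first_counts = Counter(seq[:-1])           # counts of each element as a pair's first member
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical exposure stream built from the recurring "word" A-B-C:
stream = list("ABCABCABCDABCABC")
tp = transition_probs(stream)

print(tp[("A", "B")])  # within-word transition: 1.0 (A is always followed by B)
print(tp[("C", "A")])  # across-word transition: lower (0.75 in this toy stream)
```

Learners are held to track exactly this asymmetry: element pairs inside a recurring unit have high transitional probability, while pairs spanning a unit boundary have lower probability.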
Shi Yonghong; Liao Shu; Shen Dinggang
Purpose: In adaptive radiation therapy of prostate cancer, fast and accurate registration between the planning image and treatment images of the patient is of essential importance. With the authors' recently developed deformable surface model, prostate boundaries in each treatment image can be rapidly segmented and their correspondences (or relative deformations) to the prostate boundaries in the planning image are also established automatically. However, the dense correspondences in the nonboundary regions, which are especially important for transforming the treatment plan designed in the planning image space to each treatment image space, remain unresolved. This paper presents a novel approach to learning the statistical correlation between deformations of prostate boundary and nonboundary regions, for rapidly estimating deformations of the nonboundary regions when given the deformations of the prostate boundary in a new treatment image. Methods: The main contributions of the proposed method lie in the following aspects. First, the statistical deformation correlation will be learned from both the current patient and other training patients, and further updated adaptively during the radiotherapy. Specifically, in the initial treatment stage, when the number of treatment images collected from the current patient is small, the statistical deformation correlation is mainly learned from other training patients. As more treatment images are collected from the current patient, the patient-specific information plays a more important role in learning a patient-specific statistical deformation correlation that effectively reflects prostate deformation of the current patient during the treatment. Eventually, only the patient-specific statistical deformation correlation is used to estimate dense correspondences once a sufficient number of treatment images have been acquired from the current patient. Second, the statistical deformation correlation will be learned by using a
Mechanisms of incorporating machine learning paradigms in design optimization have been investigated in the current research. The primary focus of the work is on machine learning algorithms which use computational models that are analogous to the hypothesized principles of natural or biological learning. Examples from structural and aerodynamic optimization have been used to demonstrate the potential of the proposed schemes. The first strategy examined in the current work seeks to improve the convergence of optimization problems by pruning the search space of weak variables. Such variables are identified by learning from a database of existing designs using neural networks. By using clustering techniques, different sets of weak variables are identified in different regions of the design space. Parameter sensitivity information obtained in the process of identifying weak variables provides accurate heuristics for formulating design rules. The impact of this methodology on obtaining converged designs has been investigated for a turbine design problem. Optimization results from a three-stage power turbine and an aircraft engine turbine are presented in this thesis. The second scheme is an evolutionary design optimization technique which gets progressively 'smarter' during the optimization process by learning from computed domain knowledge. This technique employs adaptive learning mechanisms (classifiers) which recognize the influence of the design variables on the problem solution and then generalize them to dynamically create or change design rules during optimization. This technique, when applied to a constrained optimization problem, shows progressive improvement in convergence of search, as successive generations of rules evolve by learning from the environment. To investigate this methodology, a truss optimization problem is solved with an objective of minimizing the truss weight subject to stress constraints in the truss members. A distinct convergent trend is
The article analyses research already carried out to determine the correlation between the physical environment of schools and educational paradigms. While selecting materials for the analysis, attention was focused on studies conducted in the USA and European countries. Based on these studies, we attempt to identify methodological attitudes towards the coherence of education and spatial structures. Homogeneity and conformity between an educational character and a physical learning environment become especially important during changes of educational conceptions. The issue of how educational paradigms affect the architecture of school buildings has not yet been analysed in Lithuania; therefore the results of this research could actualize the theme of the correlation between educational paradigms and the architecture of school buildings, and form initial guidelines for the development of a modern physical learning environment.
Widakdo, W. A.
Seeing the importance of the role of mathematics in everyday life, mastery of the subject areas of mathematics is a must. Representation ability is one of the fundamental abilities used in mathematics to make connections between abstract ideas and logical thinking in understanding mathematics. The researcher observed a lack of mathematical representation ability and sought an alternative solution through project-based learning. This research uses a literature study of books and journal articles to examine the importance of mathematical representation ability in mathematics learning and how project-based learning can increase this ability on the topic of statistics. The indicators for mathematical representation ability in this research are classified as visual representation (picture, diagram, graph, or table); symbolic representation (mathematical statements, mathematical notation, numerical/algebraic symbols); and verbal representation (written text). This article explains why project-based learning is able to influence students' mathematical representation, using theories from cognitive psychology, and also shows an example of project-based learning that can be used in teaching statistics, one of the mathematics topics that is very useful for analyzing data.
Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett
The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than tones in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
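The key asymmetry here, that a physically identical outlier is more surprising under a narrow than a wide distribution, follows directly from the Gaussian negative log-likelihood. A minimal sketch (the outlier value and the standard deviations are illustrative, not the study's stimulus parameters):

```python
from math import log, pi

def surprise(x, mu, sigma):
    """Negative log-likelihood of x under a Gaussian N(mu, sigma^2): the information-theoretic surprise."""
    return 0.5 * log(2 * pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)

# The same physical outlier (x = 3) relative to a distribution centred at 0:
narrow = surprise(3.0, 0.0, 0.5)  # tone frequencies drawn from a narrow distribution
wide = surprise(3.0, 0.0, 2.0)    # tone frequencies drawn from a wide distribution

print(narrow > wide)  # the identical outlier is far more surprising under the narrow distribution
```

The quadratic term (x - mu)^2 / (2 sigma^2) dominates for outliers, so shrinking sigma inflates the surprise of the same physical stimulus, matching the larger prediction error responses observed for outliers in the narrow distribution.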
Fairfield-Sonn, James W.; Kolluri, Bharat; Rogers, Annette; Singamsetti, Rao
This paper examines several ways in which teaching effectiveness and student learning in an undergraduate Business Statistics course can be enhanced. First, we review some key concepts in Business Statistics that are often challenging to teach and show how using real data sets assist students in developing deeper understanding of the concepts.…
Çöltekin, Çağrı; Nerbonne, John; Lenci, Alessandro; Padró, Muntsa; Poibeau, Thierry; Villavicencio, Aline
This paper presents an unsupervised and incremental model of learning segmentation that combines multiple cues whose use by children and adults was attested by experimental studies. The cues we exploit in this study are predictability statistics, phonotactics, lexical stress and partial lexical
Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G.
Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS25), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants (n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning.
Fernández Macedo, Georgina Valeria; Cladouchos, María Laura; Sifonios, Laura; Cassanelli, Pablo Martín; Wikinski, Silvia
Stress is a common antecedent reported by people suffering major depression. In these patients, extrahypothalamic brain areas, like the hippocampus and basolateral amygdala (BLA), have been found to be affected. The BLA synthesizes CRF, a mediator of the stress response, and projects to hippocampus. The main hippocampal target for this peptide is the CRF subtype 1 receptor (CRF1). Evidence points to a relationship between dysregulation of CRF/CRF1 extrahypothalamic signaling and depression. Because selective serotonin reuptake inhibitors (SSRIs) are the first-line pharmacological treatment for depression, we investigated the effect of chronic treatment with the SSRI fluoxetine on long-term changes in CRF/CRF1 signaling in animals showing a depressive-like behavior. Male Wistar rats were exposed to the learned helplessness paradigm (LH). After evaluation of behavioral impairment, the animals were treated with fluoxetine (10 mg/kg i.p.) or saline for 21 days. We measured BLA CRF expression with RT-PCR and CRF1 expression in CA3 and the dentate gyrus of the hippocampus with in situ hybridization. We also studied the activation of one of CRF1's major intracellular signaling targets, the extracellular signal-related kinases 1 and 2 (ERK1/2) in CA3. In saline-treated LH animals, CRF expression in the BLA increased, while hippocampal CRF1 expression and ERK1/2 activation decreased. Treatment with fluoxetine reversed the changes in CRF and CRF1 expressions, but not in ERK1/2 activation. In animals exposed to the learned helplessness paradigm, there are long-term changes in CRF and CRF1 expression that are restored with a behaviorally effective antidepressant treatment.
Jocham, Gerhard; Brodersen, Kay H.; Constantinescu, Alexandra O.; Kahn, Martin C.; Ianni, Angela M.; Walton, Mark E.; Rushworth, Matthew F.S.; Behrens, Timothy E.J.
When an organism receives a reward, it is crucial to know which of many candidate actions caused this reward. However, recent work suggests that learning is possible even when this most fundamental assumption is not met. We used novel reward-guided learning paradigms in two fMRI studies to show that humans deploy separable learning mechanisms that operate in parallel. While behavior was dominated by precise contingent learning, it also revealed hallmarks of noncontingent learning strategies. These learning mechanisms were separable behaviorally and neurally. Lateral orbitofrontal cortex supported contingent learning and reflected contingencies between outcomes and their causal choices. Amygdala responses around reward times related to statistical patterns of learning. Time-based heuristic mechanisms were related to activity in sensorimotor corticostriatal circuitry. Our data point to the existence of several learning mechanisms in the human brain, of which only one relies on applying known rules about the causal structure of the task.
Yousef, Darwish Abdulrahman
Purpose: Although there are many studies addressing the learning styles of business students as well as students of other disciplines, there are few studies which address the learning style preferences of statistics students. The purpose of this study is to explore the learning style preferences of statistics students at a United Arab Emirates…
Learning, defined both as the cognitive process of acquiring knowledge and as the knowledge received through instruction, depends on the environment. Our world, analyzed as an educational environment, has evolved from a natural, native stage to a phase based on science and technology. The educational system, developed as a public service and including formal, non-formal and informal education, built its foundations on the textbook, and at present teacher preparation is based on the same technique. This article is designed as a conceptual analysis of learning in a scientific environment, in order to synthesize the interdependencies between the cognitive process of acquiring knowledge and the methods applied in knowledge conversion.
different from one another. They have different prior knowledge and different learning styles so it is a challenging task to teach them all in the same way. Furthermore the world of statistics has become so huge that it is impossible to cover everything. The structure imposed by the Bologna agreement gives...... can design the course – or a part of the course – so that it fits their individual learning style and their prior knowledge. Some prefer to look at examples first and afterwards look at which theories it is based on. Others want to do it the opposite way. Some wants to work with the problem themselves...
Rosinda Martins Oliveira
The Rey Auditory Verbal Learning paradigm is used worldwide in clinical and research settings. There is consensus about its psychometric robustness and that its various scores provide relevant information about different aspects of memory and learning. However, there are only a few studies in Brazil employing this paradigm, and none of them with children. This paper describes the performance of 119 Brazilian children in a version of Rey's paradigm. The correlations between scores showed the internal consistency of this version. Also, the pattern of results observed was very similar to that observed in foreign studies with adults and children. There was a correlation between age in months and recall scores, showing that age affects the rate of learning. These results were discussed on the basis of information processing theory.
Marlene A. Smith
We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.
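The kind of automated, data-driven message such a system might send can be sketched with a toy logistic model; the coefficients, inputs and threshold below are hypothetical placeholders, not a fitted model:

```python
import math

def completion_probability(pct_assignments_done, avg_quiz_score):
    """Toy logistic model of course-completion probability.
    Coefficients are illustrative placeholders, not fitted values."""
    z = -4.0 + 3.0 * pct_assignments_done + 4.0 * avg_quiz_score
    return 1.0 / (1.0 + math.exp(-z))

def message(prob, alert_threshold=0.5):
    """Turn a predicted probability into a student-facing message."""
    pct = round(100 * prob)
    if prob < alert_threshold:
        return f"Estimated completion probability: {pct}%. Consider contacting your instructor."
    return f"Estimated completion probability: {pct}%. Keep up the good work!"

print(message(completion_probability(0.9, 0.85)))  # on-track student
print(message(completion_probability(0.2, 0.40)))  # underperforming student
```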
Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.
For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…
Altmann, Gerry T M
Statistical approaches to emergent knowledge have tended to focus on the process by which experience of individual episodes accumulates into generalizable experience across episodes. However, there is a seemingly opposite, but equally critical, process that such experience affords: the process by which, from a space of types (e.g. onions, a semantic class that develops through exposure to individual episodes involving individual onions), we can perceive or create, on-the-fly, a specific token (a specific onion, perhaps one that is chopped) in the absence of any prior perceptual experience with that specific token. This article reviews a selection of statistical learning studies that lead to the speculation that this process (the generation, on the basis of semantic memory, of a novel episodic representation) is itself an instance of a statistical, in fact associative, process. The article concludes that the same processes that enable statistical abstraction across individual episodes to form semantic memories also enable the generation, from those semantic memories, of representations that correspond to individual tokens, and of novel episodic facts about those tokens. Statistical learning is a window onto these deeper processes that underpin cognition. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.
Koutsopoulos, Kostis C.; Kotsanis, Yannis C.
This paper presents the basic concept of the EU Network School on Cloud: Namely, that present conditions require a new teaching and learning paradigm based on the integrated dimension of education, when considering the use of cloud computing. In other words, it is suggested that there is a need for an integrated approach which is simultaneously…
Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas
Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools for communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual
Li, Junhua; Zhang, Liqing
Brain-computer interfaces (BCIs) allow people to use brain activity to communicate directly with the external world or to control external devices without the participation of any peripheral nerves and muscles. Motor imagery is one of the most popular modes in the field of brain-computer interface research. Although motor imagery BCI has some advantages compared with other modes of BCI, such as asynchronous operation, it requires training sessions before use. The performance of the trained BCI system depends on the quality of the training samples and the subject's engagement. In order to improve the training effect and decrease training time, we proposed a new paradigm in which subjects participate in training more actively than in the traditional paradigm. In the traditional paradigm, a cue (indicating what kind of motor imagery should be imagined during the current trial) is given to the subject at the beginning of or during a trial, and this cue is also used as the label for the trial. It is usually assumed that trial labels are accurate in the traditional paradigm, although subjects may not have performed the required or correct kind of motor imagery, and trials may thus be mislabeled. Those mislabeled trials then interfere with model training. In our proposed paradigm, the subject is required to reconfirm the label and can correct it when necessary. This active training paradigm may generate better training samples with fewer inconsistent labels because it overcomes mistakes that arise when the subject's motor imagery does not match the given cues. The experiments confirm that our proposed paradigm achieves better performance; the improvement is significant according to statistical analysis.
Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen
Photometric measurement is an important way to identify space debris, but existing photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate measurement noise. First, the known stars in the star image are divided into training stars and testing stars. Then, the training stars are used to fit the parameters of the photometric measurement model by least squares, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
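The least-squares step can be sketched as fitting a zero-point model relating instrumental flux to catalogue magnitude, mag ≈ zp − 2.5·log10(flux), on the training stars, then checking residuals on the held-out testing stars. The star data below are invented for illustration, and the single-parameter model is a simplification of whatever model the paper actually fits:

```python
import math

def fit_zero_point(fluxes, catalog_mags):
    """Least-squares estimate of the zero point zp in
    mag = zp - 2.5*log10(flux), from known (training) stars.
    For a constant offset, the LS solution is the mean residual."""
    residuals = [m + 2.5 * math.log10(f) for f, m in zip(fluxes, catalog_mags)]
    return sum(residuals) / len(residuals)

def predict_mag(flux, zp):
    return zp - 2.5 * math.log10(flux)

# Invented training stars, consistent with a true zero point of 25.0:
train_flux = [1000.0, 2500.0, 400.0]
train_mag = [predict_mag(f, 25.0) for f in train_flux]

zp = fit_zero_point(train_flux, train_mag)

# "Measurement accuracy" on a testing star: |predicted - catalogue| magnitude.
test_err = abs(predict_mag(1600.0, zp) - predict_mag(1600.0, 25.0))
assert test_err < 1e-9  # noise-free toy data, so the fit is exact
```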
Pavlik, John V.
Emerging technologies are fueling a third paradigm of education. Digital, networked and mobile media are enabling a disruptive transformation of the teaching and learning process. This paradigm challenges traditional assumptions that have long characterized educational institutions and processes, including basic notions of space, time, content,…
Antovich, Dylan M.; Graf Estes, Katharine
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable-level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14-month-olds'…
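Syllable-level transitional probabilities of the kind these segmentation studies invoke are simple conditional frequencies, P(s2 | s1) = count(s1 s2) / count(s1); word-internal transitions tend to have higher probability than transitions spanning a word boundary. A minimal sketch over an invented syllable stream:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) estimated from adjacent-pair counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unigram_counts = Counter(syllables[:-1])
    return {(a, b): c / unigram_counts[a] for (a, b), c in pair_counts.items()}

# Invented stream: the "words" pa-bi and ku-do in varying order.
stream = "pa bi ku do pa bi pa bi ku do ku do pa bi".split()
tp = transitional_probabilities(stream)

# Word-internal transition is deterministic; cross-boundary ones are not.
assert tp[("pa", "bi")] == 1.0
assert tp[("bi", "ku")] < 1.0
```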
Makridakis, Spyros; Spiliotis, Evangelos; Assimakopoulos, Vassilios
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.
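Accuracy comparisons of this kind typically rely on scale-free error measures such as sMAPE, averaged over the post-sample horizon. A minimal sketch of the metric (the series and forecasts below are invented, not M3 data):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    terms = [
        200.0 * abs(a - f) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast)
    ]
    return sum(terms) / len(terms)

# Invented post-sample values and two competing sets of forecasts:
actual = [100.0, 110.0, 120.0]
forecast_a = [102.0, 108.0, 123.0]  # hypothetical method A (closer)
forecast_b = [90.0, 125.0, 105.0]   # hypothetical method B (farther)

assert smape(actual, forecast_a) < smape(actual, forecast_b)
```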
Swingler, Maxine V.; Morrow, Lorna I.
Statistics and research methods are embedded in the university curricula for psychology, STEM, and more widely. Statistical skills are also associated with the development of psychological literacy and graduate attributes. Yet there is concern about students’ mathematical and statistical skills in their transition from school to HE. A major challenge facing the teaching and learning of statistics in HE is the high levels of statistics anxiety and low levels of statistics self-efficacy experie...
Baldassi, Carlo; Gerace, Federica; Saglietti, Luca; Zecchina, Riccardo
We present a brief introduction to statistical mechanics approaches for the study of inverse problems in data science. We then provide concrete new results on inferring couplings from sampled configurations in systems characterized by an extensive number of stable attractors in the low temperature regime. We also show how these results are connected to the problem of learning with realistic weak signals in computational neuroscience. Our techniques and algorithms rely on advanced mean-field methods developed in the context of disordered systems.
Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.
Malzahn, Dorthe; Opper, Manfred
We employ the replica method of statistical physics to study the average-case performance of learning systems. The new feature of our theory is that general distributions of data can be treated, which enables applications to real data. For a class of Bayesian prediction models which are based on Gaussian processes, we discuss bootstrap estimates for learning curves.
This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level. Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...
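Among the modern topics listed, bootstrapping is easy to illustrate: resample the data with replacement and use the spread of the resampled statistic as an uncertainty estimate. A minimal percentile-interval sketch (the data are invented):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.0, 4.8, 5.6, 4.4, 5.2, 4.9, 5.3]
lo, hi = bootstrap_ci(sample)
assert lo < statistics.mean(sample) < hi
```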
This book introduces a paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming; hence knowledge-innovation-based learning is needed. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though the methods change, the paradigm remains the same: the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference for machine learning researchers and professionals as ...
Ross, David A.; Olson, Ingrid R.; Marks, Lawrence E.; Gore, John C.
The ability to identify and reproduce sounds of specific frequencies is remarkable and uncommon. The etiology and defining characteristics of this skill, absolute pitch (AP), have been very controversial. One theory suggests that AP requires a specific type of early musical training and that the ability to encode and remember tones depends on these learned musical associations. An alternate theory argues that AP may be strongly dependent on hereditary factors and relatively independent of musical experience. To date, it has been difficult to test these hypotheses because all previous paradigms for identifying AP have required subjects to employ knowledge of musical nomenclature. As such, these tests are insensitive to the possibility of discovering AP in either nonmusicians or musicians of non-Western training. Based on previous literature in pitch memory, a paradigm is presented that is intended to distinguish between AP possessors and nonpossessors independent of the subjects' musical experience. The efficacy of this method is then tested with 20 classically defined AP possessors and 22 nonpossessors. Data from these groups strongly support the validity of the paradigm. The use of a nonmusical paradigm to identify AP may facilitate research into many aspects of this phenomenon.
DeHart, Mary; Ham, Jim
The purpose of this article is to share the stories of an Introductory Statistics service-learning project in which students from both New Jersey and Michigan design and conduct phone surveys that lead to publication in local newspapers; to discuss the pedagogical benefits and challenges of the project; and to provide information for those who…
The purpose of this study was to determine if course format significantly impacted student learning and course completion rates in an introductory statistics course taught at Harford Community College. In addition to the traditional lecture format, the College offers an online, and a hybrid (blend of traditional and online) version of this class.…
Ringsted, C; Østergaard, D; Scherpbier, A
Assessment of clinical competence is facing a paradigm shift in more than one sense. The shift relates to test content, which increasingly covers a broader spectrum of competences than mere medical expertise, and to test methods, with an increasing focus on testing performance in realistic settings. Also there is a shift in the concept of assessment in that instruction and assessment are no longer seen as being separate in time and purpose, but as integral parts of the learning process. The nature of the new paradigm for assessment is well described, but the challenge to programme directors is to specify the evaluation situations and develop appropriate methods. This paper describes the intrinsic rational validation process in outlining an assessment programme for first-year anaesthesiology residency training according to the new paradigm. The applicability to other residency programmes and higher...
Fernandes, Tania; Kolinsky, Regine; Ventura, Paulo
This study combined artificial language learning (ALL) with conventional experimental techniques to test whether statistical speech segmentation outputs are integrated into adult listeners' mental lexicon. Lexicalization was assessed through inhibitory effects of novel neighbors (created by the parsing process) on auditory lexical decisions to…
Vaughn, Brandon K.
This study considers the effectiveness of a "balanced amalgamated" approach to teaching graduate level introductory statistics. Although some research stresses replacing traditional lectures with more active learning methods, the approach of this study is to combine effective lecturing with active learning and team projects. The results of this…
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.
PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator
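One of the learning methods named in the abstract above is the least absolute shrinkage and selection operator (LASSO). As an illustrative sketch only (synthetic data, not the authors' NTCP pipeline), LASSO can be fitted by coordinate descent with soft-thresholding, which drives uninformative coefficients to exactly zero:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5/n * ||y - Xw||^2 + lam * ||w||_1 by coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r / n
            # soft-thresholding: small partial correlations are zeroed out
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0])  # only two informative predictors
y = X @ true_w + 0.1 * rng.standard_normal(100)
w = lasso_cd(X, y, lam=0.2)
print(np.round(w, 2))  # uninformative coefficients are shrunk to (near) zero
```

This sparsity is what distinguishes LASSO from stepwise selection: variables are dropped continuously via the penalty rather than by discrete inclusion tests.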
Software developed on the basis of artificial neural networks (ANN) is now being actively implemented in construction companies to support decision-making in the organization and management of construction processes. ANN learning is the main stage of its development. A key question for supervised learning is how many training examples are needed to approximate the true relationship between network inputs and output with the desired accuracy. The design of the ANN architecture is also related to the learning problem known as the "curse of dimensionality". This problem is important for the study of construction process management because of the difficulty of obtaining training data from construction sites. In previous studies the authors designed a 4-layer feedforward ANN with a unit model of 12-5-4-1 to approximate estimation and prediction of the roofing process. This paper presents the statistical learning side of the created ANN with a simple error-minimization algorithm. The sample size for efficient training and the confidence interval of network outputs are defined. In conclusion, the authors predict successful ANN learning in a large construction business company within a short space of time.
Watson, William R.; Watson, Sunnie Lee; Reigeluth, Charles M.
Educational reform efforts have failed to create widespread improvement. The authors argue that rather than trying to improve the existing system of education, a new learner-centered paradigm is needed that supports individualized learning. Such a significantly different system of education will require the systemic application of technology to…
Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali
Approximately 66-80% of graduate students experience statistics anxiety, and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model treating academic procrastination (AP) and learning strategies (LS) as predictor variables and statistics anxiety (SA) as the explained variable. A questionnaire survey was used for data collection, and 246 female college students participated in this study. To examine the mutually independent relations between the procrastination, learning strategies and statistics anxiety variables, a canonical correlation analysis was computed. Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for tests and preparing term papers) helped predict changes in statistics anxiety with respect to fearful behavior, attitude towards math and class, and performance, but not anxiety. These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction.
This article discusses major theoretical debates and paradigms from the last decades in general education and their specific influences in mathematics education contexts. Behaviourism, cognitive science, constructivism, situated cognition, critical theory, place-based learning, postmodernism and poststructuralism and their significant aspects in…
This paper focuses on key issues in the choice of the basic language of communication of marketing as a practical and academic field. Principally, marketing managers prefer a descriptive way of expression, but they should make much more use of the advantages of the language of numbers. By doing so, they will advance the decision-making process - and the communication with finance and top management. In this regard, models offered by the academic community could be helpful. This especially pertains to those positive or normative verbal approaches and models in which mathematical and statistical solutions have been embedded, as well as to those which emphasize financial criteria in decision-making. Concerning the process of creation and verification of scientific knowledge, the choice between the languages of words and numbers is part of a much wider dimension, because it is inseparable from the decision on basic research orientation. The quantitative paradigm is more appropriate for hypothesis testing, while the qualitative paradigm makes a greater contribution to hypothesis generation. The competition factor could become the key driver of changes by which the existing "parallel worlds" of the main paradigms would be integrated, for the sake of disciplinary knowledge advancement.
Koparan, Timur; Güven, Bülent
The aim of this study is to determine the effect of a project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before the application and once after the application. All raw scores were converted into linear measures using the Winsteps 3.72 Rasch modelling program, and t-tests and an ANCOVA analysis were carried out with the linear measures. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the application are shown through the obtained person-item maps.
Wang, Benchi; Theeuwes, Jan
Recently, Wang and Theeuwes (Journal of Experimental Psychology: Human Perception and Performance, 44(1), 13-17, 2018a) demonstrated the role of lingering selection biases in an additional singleton search task in which the distractor singleton appeared much more often in one location than in all other locations. For this location, there was less capture and selection efficiency was reduced. It was argued that statistical learning induces plasticity within the spatial priority map such that particular locations that are highly likely to contain a distractor are suppressed relative to all other locations. The current study replicated these findings regarding statistical learning (Experiment 1) and investigated whether similar effects can be obtained by cueing the distractor location in a top-down way on a trial-by-trial basis. The results show that top-down cueing of the distractor location with long (1,500 ms; Experiment 2) and short stimulus-onset asynchronies (SOAs) (600 ms; Experiment 3) does not result in suppression: neither the amount of capture nor the efficiency of selection was affected by the cue. If anything, we found an attentional benefit (instead of suppression) for the short SOA. We argue that through statistical learning, weights within the attentional priority map are changed such that one location containing a salient distractor is suppressed relative to all other locations. Our cueing experiments show that this effect cannot be accomplished by active, top-down suppression. Consequences for recent theories of distractor suppression are discussed.
Abrahamson, Dor; Wilensky, Uri
We introduce a design-based research framework, "learning axes and bridging tools," and demonstrate its application in the preparation and study of an implementation of a middle-school experimental computer-based unit on probability and statistics, "ProbLab" (Probability Laboratory, Abrahamson and Wilensky 2002 [Abrahamson, D., & Wilensky, U.…
Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...
Nielson, Kristy A; Correro, Anthony N
The Deese-Roediger-McDermott (DRM) paradigm examines false memory by introducing words associated with a non-presented 'critical lure' as memoranda, which typically causes the lures to be remembered as frequently as studied words. Our prior work has shown enhanced veridical memory and reduced misinformation effects when arousal is induced after learning (i.e., during memory consolidation). These effects have not been examined in the DRM task, or with signal detection analysis, which can elucidate the mechanisms underlying memory alterations. Thus, 130 subjects studied and then immediately recalled six DRM lists, one after another, and then watched a 3-min arousing (n=61) or neutral (n=69) video. Recognition tested 70 min later showed that arousal induced after learning led to better delayed discrimination of studied words from (a) critical lures, and (b) other non-presented 'weak associates.' Furthermore, arousal reduced liberal response bias (i.e., the tendency toward accepting dubious information) for studied words relative to all foils, including critical lures and 'weak associates.' Thus, arousal induced after learning effectively increased the distinction between signal and noise by enhancing access to verbatim information and reducing endorsement of dubious information. These findings provide important insights into the cognitive mechanisms by which arousal modulates early memory consolidation processes. Copyright © 2017 Elsevier Inc. All rights reserved.
Luo, Li; Cheng, Xiaohua; Wang, Shiyuan; Zhang, Junxue; Zhu, Wenbo; Yang, Jiaying; Liu, Pei
Blended learning that combines a modular object-oriented dynamic learning environment (Moodle) with face-to-face teaching was applied to a medical statistics course to improve learning outcomes and evaluate the impact factors of students' knowledge, attitudes and practices (KAP) relating to e-learning. The same real-name questionnaire was administered before and after the intervention. The summed scores of every part (knowledge, attitude and practice) were calculated using the entropy method. A mixed linear model was fitted using the SAS PROC MIXED procedure to analyse the impact factors of KAP. Educational reform, self-perceived character, registered permanent residence and hours spent online per day were significant impact factors of e-learning knowledge. Introversion and middle type respondents' average scores were higher than those of extroversion type respondents. Regarding e-learning attitudes, educational reform, community number, Internet age and hours spent online per day had a significant impact. Specifically, participants whose Internet age was no greater than 6 years scored 7.00 points lower than those whose Internet age was greater than 10 years. Regarding e-learning behaviour, educational reform and parents' literacy had a significant impact, as the average score increased 10.05 points (P …) … e-learning KAP. Additionally, this type of blended course can be implemented in many other curriculums.
Høeg, Mette Obling
Findings in the fields of neuroscience and cognitive science suggest that our mind and consciousness are inherently embodied. According to this perspective, everything we can learn and know is related to our embodied interactions with the world. Findings in these scientific fields also suggest that most… of our thoughts and thinking processes are unconscious, and that abstract concepts, to a wide extent, are metaphorical. These ideas belong to the embodied cognition paradigm in cognitive science and lead us to the philosophical paradigm of experientialism. According to this paradigm, human beings learn…
Chen, Chih-Ming; Chang, Chia-Cheng
Many studies have identified web-based cooperative learning as an increasingly popular educational paradigm with potential to increase learner satisfaction and interactions. However, peer-to-peer interaction often encounters barriers owing to a failure to exploit useful social interaction information in web-based cooperative learning environments.…
Liao, Ying; Lin, Wen-He
In the era when digitalization is pursued, numbers are the major medium of information performance and statistics is the primary instrument to interpret and analyze numerical information. For this reason, the cultivation of fundamental statistical literacy should be a key in the learning area of mathematics at the stage of compulsory education.…
Koparan, Timur; Güven, Bülent
This study investigates the effect of the project based learning approach on 8th grade students' attitude towards statistics. With this aim, an attitude scale towards statistics was developed. Quasi-experimental research model was used in this study. Following this model in the control group the traditional method was applied to teach statistics…
Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato
In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response could be a marker for the statistical learning process of pitch sequence, in which each tone was ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in the N1m responses based on the assumption that both language and music share domain generality. By using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-ordered transitional rules were embedded according to a Markov stochastic model by controlling fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, the N1m responses to the tones that appeared with higher transitional probability were significantly decreased compared with the responses to the tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, the amplitude difference was even retained within the last one-third of the sequence after the spectral shifts. However, in the language-like sequence without pitch change, no significant difference could be detected. The pitch change may facilitate the statistical learning in language and music. Statistically acquired knowledge may be appropriated to process altered auditory sequences with spectral shifts. The relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans. Copyright © 2014 Elsevier Inc. All rights reserved.
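The sequences above are ordered by Markovian transitional probabilities. A minimal sketch (hypothetical three-tone transition matrix, not the study's stimuli) of how such probabilities can be estimated from a tone sequence, with expectancy expressed as surprisal:

```python
import numpy as np

# Hypothetical first-order Markov chain over three tones: from each tone,
# one successor is frequent (p = 0.8) and the other two are rare (p = 0.1).
trans = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])
rng = np.random.default_rng(1)
seq = [0]
for _ in range(5000):
    seq.append(int(rng.choice(3, p=trans[seq[-1]])))

# A statistical learner can estimate the transitional probabilities from
# bigram counts over the sequence...
counts = np.zeros((3, 3))
for a, b in zip(seq, seq[1:]):
    counts[a, b] += 1
est = counts / counts.sum(axis=1, keepdims=True)

# ...and express expectancy as surprisal (-log2 p): tones with higher
# transitional probability carry lower surprisal, mirroring the reduced
# N1m amplitude reported for high-probability tones.
surprisal = -np.log2(est)
print(np.round(est, 2))
```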
Sabrina Oktoria Sihombing
A paradigm influences what we see and conceive about certain facts. A paradigm can also influence what we accept as truth. Yet the debate over which paradigm and methodology best suits marketing and consumer behavior has been going on since the 1980s. Many researchers have criticized the domination of the logical empiricism paradigm and offered alternative paradigms for understanding marketing and consumer behavior. This article discusses several paradigms and methodologies that are part of the qualitative paradigm, and compares them with the positivism paradigm. This article also points to the importance of reconciliation between the qualitative and quantitative paradigms in order to improve marketing and consumer behavior studies.
Qiu, Zhaoyang; Allison, Brendan Z; Jin, Jing; Zhang, Yu; Wang, Xingyu; Li, Wei; Cichocki, Andrzej
Motor imagery (MI) is a mental representation of motor behavior. MI-based brain computer interfaces (BCIs) can provide communication for the physically impaired. The performance of an MI-based BCI mainly depends on the subject's ability to self-modulate electroencephalogram signals. Proper training can help naive subjects learn to modulate brain activity proficiently. However, training subjects typically involves abstract motor tasks and is time-consuming. To improve the performance of naive subjects during motor imagery, a novel paradigm was presented that would guide naive subjects to modulate brain activity effectively. In this new paradigm, pictures of the left or right hand were used as cues for subjects to finish the motor imagery task. Fourteen healthy subjects (11 male, aged 22-25 years, mean 23.6±1.16) participated in this study. The task was to imagine writing a Chinese character. Specifically, subjects could imagine hand movements corresponding to the sequence of writing strokes in the Chinese character. This paradigm was meant to find an effective and familiar action for most Chinese people, to provide them with a specific, extensively practiced task and help them modulate brain activity. Results showed that the writing task paradigm yielded significantly better performance than the traditional arrow paradigm (p …), and the new paradigm was easier. The proposed new motor imagery paradigm could guide subjects to help them modulate brain activity effectively. Results showed that there were significant improvements using the new paradigm, both in classification accuracy and usability.
Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
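The Minority Game mentioned above is one of the simplest of these multi-agent competition models. A minimal simulation sketch (assumed parameter values; standard virtual scoring of strategies) in which agents on the less-crowded side win each round:

```python
import numpy as np

rng = np.random.default_rng(3)
N, S, M, T = 101, 2, 3, 2000          # agents, strategies each, memory, rounds
# Each strategy is a lookup table from the last M outcomes (packed into an
# integer) to a choice in {0, 1}; agents play their best-scoring strategy.
strategies = rng.integers(0, 2, size=(N, S, 2 ** M))
scores = np.zeros((N, S))
history = 0
attendance = []
for _ in range(T):
    best = scores.argmax(axis=1)
    choices = strategies[np.arange(N), best, history]
    a = int(choices.sum())            # how many agents picked side 1
    minority = int(a < N / 2)         # the less-crowded side wins (N is odd)
    # Virtual scoring: every strategy that predicted the minority earns a point.
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | minority) & (2 ** M - 1)
    attendance.append(a)
attendance = np.array(attendance)
print(attendance.mean(), attendance.std())  # fluctuations around N/2
```

The standard deviation of attendance measures how well the heterogeneous agents coordinate; its dependence on memory M and population N is one of the collective phenomena the statistical-mechanics analysis explains.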
Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo
Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organising maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
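Of the imputation methods compared above, mean imputation and k-nearest-neighbour imputation are easy to sketch. The toy example below (synthetic two-column data, not the GEICAM data set) shows why KNN imputation can outperform the column mean when features are correlated:

```python
import numpy as np

def mean_impute(X):
    """Replace each NaN with its column mean."""
    X = X.copy()
    mu = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = mu[cols]
    return X

def knn_impute(X, k=3):
    """Fill each missing entry with the mean of that feature over the k
    rows nearest in Euclidean distance on jointly observed features."""
    X_out = X.copy()
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        dists = []
        for j in range(X.shape[0]):
            if j == i or np.isnan(X[j][miss]).any():
                continue  # skip self and rows missing the same features
            obs = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if obs.any():
                dists.append((np.linalg.norm(X[i][obs] - X[j][obs]), j))
        neighbours = [j for _, j in sorted(dists)[:k]]
        X_out[i, miss] = X[neighbours][:, miss].mean(axis=0)
    return X_out

# Two strongly correlated features; delete one value and impute it.
x0 = np.linspace(0.0, 1.0, 20)
X = np.column_stack([x0, 2.0 * x0])
X[0, 1] = np.nan                      # true value is 0.0
m = mean_impute(X)
k = knn_impute(X)
print(m[0, 1], k[0, 1])  # KNN lands far closer to the true value
```

Because the deleted entry belongs to an extreme row, the column mean is badly biased, while the nearest rows carry nearly the right value; this is the intuition behind KNN's advantage in the study above.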
Phelps, Amy L.; Dostilio, Lina
The present study addresses the efficacy of using service-learning methods to meet the GAISE guidelines (http://www.amstat.org/education/gaise/GAISECollege.htm) in a second business statistics course and further explores potential advantages of assigning a service-learning (SL) project as compared to the traditional statistics project assignment.…
Stevens, David J.; Arciuli, Joanne; Anderson, David I.
This study examined the effect of a prior bout of exercise on implicit cognition. Specifically, we examined whether a prior bout of moderate intensity exercise affected performance on a statistical learning task in healthy adults. A total of 42 participants were allocated to one of three conditions--a control group, a group that exercised for…
Self-regulated learning has become an important construct in education research in the last few years. Self-regulated learning, in its simple form, is the learner's ability to monitor and control the learning process. There is increasing research in the literature on how to help students become more self-regulated learners. However, advances in information technology have led to paradigm changes in the design and development of educational content. The concept of learning object instructional technology has emerged as a result of this shift in educational technology paradigms. This paper presents the results of a study that investigated the potential educational effectiveness of a pedagogical framework based on self-regulated learning theories to support the design of learning object systems to help computer science students. A prototype learning object system was developed based on contemporary research on self-regulated learning. The system was educationally evaluated in a quasi-experimental study over two semesters in a core programming languages concepts course. The evaluation revealed that a learning object system that takes into consideration contemporary research on self-regulated learning can be an effective learning environment to support computer science education.
The political and technological circumstances of the past two decades have culminated in opposing epistemic paradigms of college readiness, where millennial students' conceptual understanding of "learning" is both narrowed to meet the demands of school systems bound to accountability and amplified by a rapidly evolving digital world. The…
Buchs, Céline; Gilles, Ingrid; Antonietti, Jean-Philippe; Butera, Fabrizio
Despite the potential benefits of cooperative learning at university, its implementation is challenging. Here, we propose a theory-based 90-min intervention with 185 first-year psychology students in the challenging domain of statistics, consisting of an exercise phase and an individual learning post-test. We compared three conditions that…
Gaggioli, Andrea; Raspelli, Simona; Grassi, Alessandra; Pallavicini, Federica; Cipresso, Pietro; Wiederhold, Brenda K; Riva, Giuseppe
In this paper we introduce a new ubiquitous computing paradigm for behavioral health care: "Interreality". Interreality integrates assessment and treatment within a hybrid environment, that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) may improve existing psychological treatment. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Daniela, a 40 years old teacher, with a mother affected by Alzheimer's disease.
Külzow, Nadine; Kerti, Lucia; Witte, Veronica A; Kopp, Ute; Breitenstein, Caterina; Flöel, Agnes
Object-location memory is critical in everyday life and known to deteriorate early in the course of neurodegenerative disease. We adapted the previously established learning paradigm "LOCATO" for use in healthy older adults and patients with mild cognitive impairment (MCI). Pictures of real-life buildings were associated with positions on a two-dimensional street map by repetitions of "correct" object-location pairings over the course of five training blocks, followed by a recall task. Correct/incorrect associations were indicated by button presses. The original two 45-item sets were reduced to 15-item sets and tested in healthy older adults and MCI for learning curve, recall, and re-test effects. The two 15-item versions showed comparable learning curves and recall scores within each group. While learning curves increased linearly in both groups, MCI patients performed significantly worse on learning and recall compared to healthy controls. Re-testing after 6 months showed small practice effects only. LOCATO is a simple standardized task that overcomes several limitations of previously employed visuospatial tasks by using real-life stimuli, minimizing verbal encoding, avoiding fine motor responses, combining explicit and implicit statistical learning, and allowing assessment of the learning curve in addition to recall. Results show that the shortened version of LOCATO meets the requirements for a robust and ecologically meaningful assessment of object-location memory in older adults with and without MCI. It can now be used to systematically assess acquisition of object-location memory and its modulation through adjuvant therapies like pharmacological or non-invasive brain stimulation. Copyright © 2014 Elsevier B.V. All rights reserved.
Wu, Jianning; Wu, Bin
The accurate identification of gait asymmetry is very beneficial to the assessment of at-risk gait in clinical applications. This paper investigated the application of a classification method based on a statistical learning algorithm to quantify gait symmetry, based on the assumption that the degree of intrinsic change in the dynamical system of gait is associated with different statistical distributions between gait variables from the left and right sides of the lower limbs; that is, the discrimination of...
Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing. Advances in PSD rely on improvements to the implemented algorithm and can be achieved by using conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
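The recommended ROC-plus-GRR report can be illustrated on toy data. The sketch below (hypothetical Gaussian PSD score distributions, not real detector data) sweeps a decision threshold, traces the ROC, and reads off the neutron acceptance at a 99.9% gamma rejection rate:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical PSD scores: gamma events cluster low, neutron events high.
gamma_scores = rng.normal(0.10, 0.03, 20000)
neutron_scores = rng.normal(0.25, 0.05, 2000)

# Sweep the decision threshold; at each point record the gamma rejection
# rate (fraction of gammas below threshold) and the neutron acceptance
# (fraction of neutrons at or above it). Together these trace the ROC.
thresholds = np.linspace(0.0, 0.5, 501)
grr = np.array([(gamma_scores < t).mean() for t in thresholds])
acceptance = np.array([(neutron_scores >= t).mean() for t in thresholds])

# Operating point at a gamma rejection rate relevant for applications:
# grr is nondecreasing in the threshold, so searchsorted finds the first
# threshold achieving GRR >= 99.9%.
t_star = thresholds[np.searchsorted(grr, 0.999)]
print(f"threshold={t_star:.3f}, "
      f"neutron acceptance={(neutron_scores >= t_star).mean():.3f}")
```

Reporting the full (grr, acceptance) curve rather than a single accuracy number is what lets two PSD algorithms be compared at the same application-relevant GRR.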
Perkovic, Sonja; Orquin, Jacob Lund
Ecological rationality results from matching decision strategies to appropriate environmental structures, but how does the matching happen? We propose that people learn the statistical structure of the environment through observation and use this learned structure to guide ecologically rational behavior. We tested this hypothesis in the context of organic foods. In Study 1, we found that products from healthful food categories are more likely to be organic than products from nonhealthful food categories. In Study 2, we found that consumers' perceptions of the healthfulness and prevalence of organic products in many food categories are accurate. Finally, in Study 3, we found that people perceive organic products as more healthful than nonorganic products when the statistical structure justifies this inference. Our findings suggest that people believe organic foods are more healthful than nonorganic foods and use an organic-food cue to guide their behavior because organic foods are, on average, 30% more healthful.
Hartel, Pieter H.; Hertzberger, L.O.
Recent issues of the bulletin of the ACM SIGCSE have been scrutinised to find evidence that the use of laboratory sessions and different programming paradigms improves the learning of difficult concepts and techniques, such as recursion and problem-solving. Many authors in the surveyed literature believe
Watanabe, Kei; Funahashi, Shintaro
The dual-task paradigm is a procedure in which subjects are asked to perform two behavioral tasks concurrently, each of which involves a distinct goal with a unique stimulus-response association. Due to the heavy demand on subjects' cognitive abilities, human studies using this paradigm have provided detailed insights regarding how the components of cognitive systems are functionally organized and implemented. Although dual-task paradigms are widely used in human studies, they are seldom used in nonhuman animal studies. We propose a novel dual-task paradigm for monkeys that requires the simultaneous performance of two cognitively demanding component tasks, each of which uses an independent effector for behavioral responses (hand and eyes). We provide a detailed description of an optimal training protocol for this paradigm, which has been lacking in the existing literature. An analysis of behavioral performance showed that the proposed dual-task paradigm (1) was quickly learned by monkeys (less than 40 sessions) with step-by-step training protocols, (2) produced specific behavioral effects, known as dual-task interference in human studies, and (3) achieved rigid and independent control of the effectors for behavioral responses throughout the trial. The proposed dual-task paradigm has a scalable task structure, in that each of the two component tasks can be easily replaced by other tasks, while preserving the overall structure of the paradigm. This paradigm should be useful for investigating executive control that underlies dual-task performance at both the behavioral and neuronal levels. Copyright © 2015 Elsevier B.V. All rights reserved.
New challenges require new approaches, and many of the suggested solutions are in conflict with how we were taught in our formative years. Team work in high school and college was mostly found in sports-related activities. Collaboration in a classroom was not encouraged and could be viewed as cheating. We did not learn to share knowledge in a group nor to solve problems as a group, yet we are told that those are the very skills we need to survive in today's workplace. The first step to success is to look at how we were taught to learn and make the shift to learning in a different manner.
Hartog, R.J.M.; van der Schaaf, H.; Kassahun, A.
Most well-known Learning Management Systems (LMS) are based on a paradigm of learning objects to be uploaded into the system. Most formulations of this paradigm implicitly assume that the learning objects are self-contained learning objects, such as Flash objects or Java applets, or presentational learning objects, such as slide presentations. These are typically client-side objects. However, a demand for learning support that activates the student can often be satisfied better with a server app...
Guergachi, Aziz A.
The paradigms of OLAP, multidimensional modeling and data mining first emerged in the areas of market analysis and finance to address various needs of people working in these areas. Does this mean that they are useful and applicable in these areas only? Or can they also be applied in other, more traditional areas of science and engineering? What characterizes the systems for which these paradigms are suitable? What are the goals of these paradigms? How do they relate to the traditional body of knowledge that has been developed throughout the centuries in the areas of mathematics, statistics, systems science and engineering? Where, how and to what extent can we leverage the conventional wisdom that has been accumulated in the aforementioned disciplines to develop a foundational basis for the above paradigms? The goal of this paper is to address these questions at the foundational level. We argue that the paradigms of OLAP, multidimensional modeling and data mining can also be applied successfully to complex engineering systems, such as membrane-based water/wastewater treatment plants. We develop a mathematically based axiomatic definition of the concepts of 'dimension,' 'dimension level,' 'dimension hierarchy' and 'measure' using set theory and equivalence relations.
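The set-theoretic view of dimensions described above can be made concrete: under an equivalence relation, each member of a coarser dimension level is one equivalence class of the level below. A minimal sketch (plain Python; the date strings and the 'day to month' roll-up are invented for illustration, not the paper's formalism):

```python
from collections import defaultdict

def dimension_level(values, key):
    """Partition a set of dimension members into equivalence classes.

    Two members are equivalent iff key() maps them to the same class
    label, so each class becomes one member of the coarser level.
    """
    classes = defaultdict(set)
    for v in values:
        classes[key(v)].add(v)
    return dict(classes)

# Hypothetical example: a 'day' level rolled up to a 'month' level.
days = {"2024-01-05", "2024-01-20", "2024-02-11"}
by_month = dimension_level(days, key=lambda d: d[:7])
# {'2024-01': {'2024-01-05', '2024-01-20'}, '2024-02': {'2024-02-11'}}
```

The `key` function plays the role of the equivalence relation: it induces the partition, and iterating the construction yields a dimension hierarchy.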
Jonee Kulman Brigham
This article explores, in four main sections, the idea of designing and applying human-environment paradigms. First, Caring Ecology criteria for human-environment paradigms are proposed that combine the principles of caring in Partnership Studies with compatible ecological conceptions of humans’ dependent and integrated relationship within Earth systems. Next, these criteria are used to evaluate the strengths and weaknesses of five environmental paradigms, which sets the stage for the following section critiquing the current “Anthropocene” paradigm and proposing a counter-paradigm: the “Apprenticene.” Paradigms suggest roles and actions, and “Apprenticene Practices” are proposed, calling for humans to see our dependence on Earth systems, heal our story as we accept past failures, and learn by apprenticing ourselves to the Earth system. Finally, these Apprenticene Practices are illustrated in an example of a creative practice called Earth Systems Journey that engages youth with an integrated experience of their human-natural environment. The paper concludes with reflections on how Partnership Studies and ecological principles can work together to support a thriving future for humans and the rest of nature.
Rickard, K A; Gallahue, D L; Gruen, G E; Tridle, M; Bewley, N; Steele, K
An alternative paradigm for nutrition and fitness education centers on understanding and developing skill in implementing a play approach to learning about healthful eating and promoting active play in the context of the child, the family, and the school. The play approach is defined as a process for learning that is intrinsically motivated, enjoyable, freely chosen, nonliteral, safe, and actively engaged in by young learners. Making choices, assuming responsibility for one's decisions and actions, and having fun are inherent components of the play approach to learning. In this approach, internal cognitive transactions and intrinsic motivation are the primary forces that ultimately determine healthful choices and life habits. Theoretical models of children's learning--the dynamic systems theory and the cognitive-developmental theory of Jean Piaget--provide a theoretical basis for nutrition and fitness education in the 21st century. The ultimate goal is to develop partnerships of children, families, and schools in ways that promote the well-being of children and translate into healthful life habits. The play approach is an ongoing process of learning that is applicable to learners of all ages.
Koparan, Timur; Güven, Bülent
This study examines the effect of project-based learning on 8th grade students' statistical literacy levels. A performance test was developed for this aim. A quasi-experimental research model was used. In this context, statistics was taught with the traditional method in the control group and using project-based…
We propose that neglect includes a disorder of representational updating. Representational updating refers to our ability to build mental models and adapt those models to changing experience. This updating ability depends on the processes of priming, working memory, and statistical learning. These processes in turn interact with our capabilities for sustained attention and precise temporal processing. We review evidence showing that all these non-spatial abilities are impaired in neglect, and we discuss how recognition of such deficits can lead to novel approaches for rehabilitating neglect.
Khamparia, Aditya; Pandey, Babita
E-learning and online education have made great improvements in the recent past, shifting the teaching paradigm from conventional classroom learning to dynamic web-based learning. As a result, dynamic learning material is delivered to learners, instead of static content, according to their skills, needs and preferences. In this…
Basili, Victor R.
The Software Engineering Laboratory uses a paradigm for improving the software process and product, called the quality improvement paradigm. This paradigm has evolved over the past 18 years, along with our software development processes and products. Since 1976, when we first began the SEL, we have learned a great deal about improving the software process and product, making a great many mistakes along the way. The quality improvement paradigm, as it is currently defined, can be broken up into six steps: characterize the current project and its environment with respect to the appropriate models and metrics; set the quantifiable goals for successful project performance and improvement; choose the appropriate process model and supporting methods and tools for this project; execute the processes, construct the products, and collect, validate, and analyze the data to provide real-time feedback for corrective action; analyze the data to evaluate the current practices, determine problems, record findings, and make recommendations for future project improvements; and package the experience gained in the form of updated and refined models and other forms of structured knowledge gained from this and prior projects, and save it in an experience base to be reused on future projects.
Natalia Loaiza Velásquez
Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good-quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, over one year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student’s T Test, Linear Regression, Pearson’s Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon’s Diversity Index, Tukey’s Test, Cluster Analysis, Spearman’s Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well-designed one-semester course should be enough for their basic requirements. Rev. Biol. Trop. 59(3): 983-992. Epub 2011 September 01.
Lee, Sihyoung; Yang, Seungji; Ro, Yong Man; Kim, Hyoung Joong
As the Internet and multimedia technology advance, digital multimedia content is also becoming abundant in the learning area. In order to facilitate access to digital knowledge and to meet the need for lifelong learning, e-learning can be a helpful alternative to conventional learning paradigms. E-learning is a unifying term for online, web-based and technology-delivered learning. Mobile learning (m-learning) is defined as e-learning through mobile devices using wireless transmission. In a survey, more than half of the respondents remarked that re-consumption was one of the convenient features of e-learning. However, it is not easy to find a user's preferred segment in a full version of lengthy e-learning content. Especially in m-learning, a content-summarization method is strongly required because mobile devices are limited by low processing power and battery capacity. In this paper, we propose a new user preference model for re-consumption, used to construct personalized summaries. The user preference for re-consumption is modeled from user actions with a statistical model. Based on this model of personalized user actions, our method discriminates preferred parts of the entire content. Experimental results demonstrated successful personalized summarization.
Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.
In order to perceive the world, we need more than just raw sensory input: a subliminal paradigm of thought is required to interpret raw sensory data and, thereby, create the objects and events we perceive around ourselves. As such, the world we see reflects our own unexamined, culture-bound assumptions and expectations, which explains why every generation in history has believed that it more or less understood the world. Today, we perceive a world of objects and events outside and independent of mind, which merely reflects our current paradigm of thought. Anomalies that contradict this paradigm have been accumulated by physicists over the past couple of decades, which will eventually force our culture to move to a new paradigm. Under this new paradigm, a form of universal mind will be viewed as nature’s sole fundamental entity. In this paper, I offer a sketch of what the new paradigm may look like.
Age-related cognitive ability is highly variable, ranging from unimpaired to severely impaired. The Morris water maze (a reliable tool for assessing memory) has been used to distinguish aged rodents that are superior learners from those that are learning impaired. This task, however, is not practical for pre- and post-pharmacological treatment, as the memory of the task is long lasting. In contrast, the object location memory task, also a spatial learning paradigm, results in a less robust memory that decays quickly. We demonstrate for the first time how these two paradigms can be used together to assess hippocampal cognitive impairments before and after pharmacological or genetic manipulations in rodents. Rats were first segregated into superior-learning and learning-impaired groups using the object location memory task, and their performance was correlated with future outcomes on this task and on the Morris water maze. This method provides a tool to evaluate the effect of treatments on cognitive impairment associated with aging and neurodegenerative disorders.
Partha Sindu I Gede
The purpose of this study was to determine the effect of instructional media based on a lecture video and slide synchronization system on the statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities. Students can conduct learning activities more efficiently and conducively because synchronized lecture video and slides assist them in the learning process. The population of this research was all sixth-semester students majoring in Informatics Engineering Education. The sample was the students of classes VI B and VI D of the academic year 2016/2017. The type of research used was a quasi-experiment, with a posttest-only nonequivalent control group design. The results showed a significant effect of applying learning media based on the lecture video and slide synchronization system on statistics learning results in the PTI department.
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in
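As a hedged illustration of the statistical fill-in idea, the sketch below implements single-level Gaussian process regression from scratch (NumPy only; the squared-exponential kernel, length scale, and sine-wave "simulation data" are assumptions for illustration, not the paper's multi-level, multi-fidelity setup) and uses it to estimate one missing spatial sample:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # squared-exponential kernel between two 1-D sample-location vectors
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    # standard GP posterior mean and pointwise standard deviation
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# 'simulation' samples with a gap at x = 2.0, filled in by the GP
x = np.array([0.0, 0.5, 1.0, 1.5, 2.5, 3.0])
y = np.sin(x)
mu, sd = gp_predict(x, y, np.array([2.0]))
```

The posterior standard deviation is what makes the technique attractive for fault resilience: it flags how much the filled-in value can be trusted before it is handed back to the failed processor.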
A concept map (CM) is a hierarchically arranged, graphic representation of the relationships among concepts. Concept mapping (CMING) is the process of constructing a CM. This paper examines whether a CMING strategy can be useful in helping students to improve their learning performance in a business and economics statistics course. A single…
Lampropoulos, Aristomenis S
This timely book presents applications in recommender systems, which make recommendations using machine learning algorithms trained on examples of content the user likes or dislikes. Recommender systems built on the assumption that both positive and negative examples are available do not perform well when negative examples are rare. It is exactly this problem that the authors address in the monograph at hand. Specifically, the book's approach is based on one-class classification methodologies that have been appearing in recent machine learning research. The blending of recommender systems and one-class classification provides a new, very fertile field for research, innovation and development, with potential applications in “big data” as well as “sparse data” problems. The book will be useful to researchers, practitioners and graduate students dealing with problems of extensive and complex data. It is intended for both the expert/researcher in the fields of Pattern Recognition, Machine Learning and ...
Bott, Lewis; Hoffman, Aaron B.; Murphy, Gregory L.
Many theories of category learning assume that learning is driven by a need to minimize classification error. When there is no classification error, therefore, learning of individual features should be negligible. We tested this hypothesis by conducting three category learning experiments adapted from an associative learning blocking paradigm. Contrary to an error-driven account of learning, participants learned a wide range of information when they learned about categories, and blocking effe...
Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts, however, show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we have conducted the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory, and argue that our paradigm allows this to be done simply and efficiently.
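The two psychometric measures described above are simple to compute from a session's data. A minimal sketch (plain Python standard library; the latency values are hypothetical, assuming a short latency of about 3 s):

```python
import statistics

def timing_measures(switch_latencies):
    """Summarize one session's switch latencies (seconds).

    The median of the switch-latency distribution indexes timing
    accuracy; the interquartile interval indexes timing precision.
    """
    lat = sorted(switch_latencies)
    q1, med, q3 = statistics.quantiles(lat, n=4)  # quartile cut points
    return med, q3 - q1

# hypothetical switch latencies from one mouse
med, iqi = timing_measures([2.8, 3.1, 3.3, 3.6, 3.9, 4.2, 4.8])
```

A narrower interquartile interval at the same median would indicate a more precise, equally accurate timer, which is exactly the dissociation the screen is designed to detect.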
Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities.
Somers, M J
Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The first was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The second was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.
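Why a multilayer perceptron can capture nonlinear relationships that logistic regression misses can be shown with a toy example. The sketch below is not the study's model: it is a two-hidden-unit perceptron with hand-set (not trained) weights computing XOR, a nonlinear pattern that no single logistic unit can represent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_predict(x1, x2):
    """Tiny MLP with fixed weights computing XOR.

    Logistic regression is a single sigmoid unit and therefore can
    only draw one linear boundary; the hidden layer composes two such
    boundaries (roughly OR and AND) into the nonlinear XOR response.
    """
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)   # ~ OR of the inputs
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)   # ~ AND of the inputs
    y = sigmoid(20 * h1 - 20 * h2 - 10)    # OR and-not AND => XOR
    return round(y)
```

In the turnover setting, an analogous composition lets the network model, say, employees who leave when satisfaction is low *or* tenure is short but not both, a pattern a single linear boundary cannot express.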
Saji, Genn; Timofeev, Boris
The study of the effects behind the degradation of components and materials is becoming increasingly important for the safe operation of aged plants, especially when it comes to life extension. Since the Russian nuclear community began to examine life-extension issues nearly fifteen years ago, there is much to learn from these pioneering Russian studies, a portion of which were performed under the TACIS (Technical Assistance for Commonwealth of Independent States) international collaboration program with EU countries. At the Ninth International Conference, recent data were introduced regarding the ageing effects on the mechanical properties of various kinds of steel and the welding joints of Russian NPP components. The full title of the conference was Material Issues in Design, Manufacturing and Operation of Nuclear Power Plants Equipment. The meeting was organized by the Central Research Institute of Structural Materials (CRISM) 'Prometey' in cooperation with the IAEA and JRC-EU. In reviewing the recent data presented at the Ninth Conference, the authors think that the paradigms of structural integrity issues in aged plants are now reasonably well established in (1) fracture mechanics and irradiation hardening of reactor vessels and core internals and (2) thermal ageing and annealing effects. Yet even when considering these well-established paradigms, the current direction of study does not adequately address the effects of a corrosive environment. The first author believes that the current low-cycle-fatigue approach is far from able to predict and prevent environmentally assisted cracks. This fundamental flaw stems from design codes, which do not incorporate basic knowledge of corrosion mechanisms. Our focus in researching aged plants should be re-directed toward environmentally assisted cracking, typically the film rupture-slip dissolution mechanism of crack propagation under the effect of long cell action on local cells, as discussed by the first author in
Katharine Graf Estes
The acoustic variation in language presents learners with a substantial challenge. To learn by tracking statistical regularities in speech, infants must recognize words across tokens that differ based on characteristics such as the speaker’s voice, affect, or the sentence context. Previous statistical learning studies have not investigated how these types of surface form variation affect learning. The present experiments used tasks tailored to two distinct developmental levels to investigate the robustness of statistical learning to variation. Experiment 1 examined statistical word segmentation in 11-month-olds and found that infants can recognize statistically segmented words across a change in the speaker’s voice from segmentation to testing. The direction of infants’ preferences suggests that recognizing words across a voice change is more difficult than recognizing them in a consistent voice. Experiment 2 tested whether 17-month-olds can generalize the output of statistical learning across variation to support word learning. The infants were successful in their generalization; they associated referents with statistically defined words despite a change in voice from segmentation to label learning. Infants’ learning patterns also indicate that they formed representations of across-word syllable sequences during segmentation. Thus, low probability sequences can act as object labels in some conditions. The findings of these experiments suggest that the units that emerge during statistical learning are not perceptually constrained, but rather are robust to naturalistic acoustic variation.
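The segmentation mechanism assumed in such studies is usually described as tracking transitional probabilities between adjacent syllables: high within a word, low across a word boundary. A minimal sketch of that computation (the syllable stream and "words" below are invented for illustration, not the experiments' stimuli):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) over a syllable stream.

    Statistical learners are thought to track these probabilities to
    find word boundaries: transitions within words are frequent and
    reliable, transitions across boundaries are not.
    """
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

# hypothetical stream built from two "words": pa-bi-ku and ti-bu-do
stream = "pa bi ku ti bu do pa bi ku pa bi ku ti bu do".split()
tp = transitional_probabilities(stream)
# within-word TP('pa','bi') is 1.0; boundary TP('ku','ti') is lower
```

The dip in transitional probability at 'ku'→'ti' is the statistical cue to a word boundary, even though nothing in the acoustics marks it.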
The rise of oil prices, the difficulties in market liberalization, and the poor results of competition have convinced many that a new energy paradigm is necessary. Taking the original definition of a scientific paradigm, it does not seem that a practical solution can be found outside the present paradigm of energy policy, made up of privatisation, liberalisation and competition.
Lancaster, Gillian; Francis, Brian; Allen, Ruth
The Lancaster Postgraduate Statistics Centre (PSC) encompasses all aspects of postgraduate teaching and learning within the Mathematics and Statistics department. It is the only UK HEFCE-funded Centre for Excellence in Teaching and Learning that specialises uniquely in postgraduate statistics, and it rewards the research and teaching excellence of the Statistics Group. The award-winning, purpose-built PSC building opened in February 2008 and features many modern, state-of-the-art facilities. Our ...
On-line learning is a process facilitated through the use of the Internet and the World Wide Web. It has the potential to stimulate learning within a social constructivist paradigm, given the wide range of applications available on the Internet and the web. The social constructivist paradigm is associated with creative ...
Ladstein, Jarle; Evensmoen, Hallvard R; Håberg, Asta K; Kristoffersen, Anders; Goa, Pål E
To compare 2D and 3D echo-planar imaging (EPI) in a higher-cognitive-level fMRI paradigm. In particular, to study the link between the presence of task-correlated physiological fluctuations and motion and the fMRI contrast estimates from either 2D EPI or 3D EPI datasets, with and without adding nuisance regressors to the model. A signal model in the presence of partly task-correlated fluctuations is derived, and predictions for contrast estimates with and without nuisance regressors are made. Thirty-one healthy volunteers were scanned using 2D EPI and 3D EPI during a virtual environmental learning paradigm. In a subgroup of 7 subjects, heart rate and respiration were logged, and their correlation with the paradigm was evaluated. FMRI analysis was performed using models with and without nuisance regressors. Differences in the mean contrast estimates were investigated by analysis of variance using Subject, Sequence, Day, and Run as factors. The distributions of group-level contrast estimates were compared. Partially task-correlated fluctuations in respiration, heart rate and motion were observed. Statistically significant differences were found in the mean contrast estimates between 2D EPI and 3D EPI when using a model without nuisance regressors. The inclusion of nuisance regressors for cardiorespiratory effects and motion reduced the difference to a statistically non-significant level. Furthermore, the contrast estimate values shifted more when including nuisance regressors for 3D EPI compared to 2D EPI. The results are consistent with 3D EPI having a higher sensitivity to fluctuations compared to 2D EPI. In the presence of partially task-correlated physiological fluctuations or motion, proper correction is necessary to obtain expectation-correct contrast estimates when using 3D EPI. As such task-correlated physiological fluctuations or motion are difficult to avoid in paradigms exploring higher cognitive functions, 2D EPI seems to be the preferred choice for higher
Observed associations between events can be validated by statistical information about reliability or by the testament of communicative sources. We tested whether toddlers learn from their own observation of efficiency, assessed by statistical information on the reliability of interventions, or from communicatively presented demonstration, when these two potential types of evidence of the validity of interventions on a novel artifact are contrasted with each other. Eighteen-month-old infants observed two adults, one operating the artifact by a method that was more efficient (2/3 probability of success) than that of the other (1/3 probability of success). Compared to the Baseline condition, in which communicative signals were not employed, infants tended to choose the less reliable method to operate the artifact when this method was demonstrated in a communicative manner in the Experimental condition. This finding demonstrates that, in certain circumstances, communicative sanctioning of reliability may override statistical evidence for young learners. Such a bias can serve fast and efficient transmission of knowledge between generations.
Wu, Jianning; Wu, Bin
The accurate identification of gait asymmetry is very beneficial to the assessment of at-risk gait in clinical applications. This paper investigated the application of a classification method based on a statistical learning algorithm to quantify gait symmetry, under the assumption that the degree of intrinsic change in the dynamical system of gait is associated with different statistical distributions of the gait variables from the left and right lower limbs; that is, small differences in similarity between the lower limbs are recognized as differences between their probability distributions. The kinetic gait data of 60 participants were recorded using a strain gauge force platform during normal walking. The classification method is designed around an advanced statistical learning algorithm, the support vector machine for binary classification, and is adopted to quantitatively evaluate gait symmetry. The experimental results showed that the proposed method could capture more of the intrinsic dynamic information hidden in gait variables and recognize right-left gait patterns with superior generalization performance. Moreover, the proposed technique could identify small but significant differences between the lower limbs where the traditional symmetry index method for gait could not. The proposed algorithm could become an effective tool for the early identification of gait asymmetry in the elderly in clinical diagnosis.
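The paper's classifier is a support vector machine; as a hedged stand-in, the sketch below uses a perceptron (a simpler linear classifier, explicitly not the authors' method) to show the same binary set-up: label each subject from left-right gait-difference features as symmetric (+1) or asymmetric (-1). The feature names and values are invented for illustration:

```python
def train_perceptron(X, y, epochs=100):
    """Train a linear binary classifier with the perceptron rule.

    A simplified stand-in for the SVM used in the paper: both learn
    a linear decision boundary between the two gait classes.
    """
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:                 # misclassified: update
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                mistakes += 1
        if mistakes == 0:                       # converged (separable data)
            break
    return w, b

# Hypothetical features per subject: [step-time diff, peak-force diff]
X = [[0.02, 0.05], [0.03, 0.04], [0.25, 0.30], [0.28, 0.35]]
y = [1, 1, -1, -1]                              # +1 symmetric, -1 asymmetric
w, b = train_perceptron(X, y)
predict = lambda xi: 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1
```

An SVM would additionally maximize the margin of this boundary, which is where the superior generalization reported in the paper comes from.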
We obtain the relaxation time for the shear viscous stress for various geometries using the recently proposed 'membrane paradigm' formula. We consider generic Schwarzschild-anti-de Sitter (SAdS) black holes, the generic Dp-brane, the Klebanov-Tseytlin (KT) geometry, and the N=2* theory. The formula is a 'shear mode' result and is not fully trustworthy, but it may help reveal some generic behaviors of the relaxation time. For example, a simple formula summarizes all known results for SAdS, and a single expression summarizes the results for the Dp-brane and the KT geometry.
Lu, Kai; Vicario, David S
Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
Noor, Ahmed K.
The accelerating pace of computing technology development shows no signs of abating. Computing power is likely to reach 100 Tflop/s by 2004 and 1 Pflop/s (10^15 flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.
Yu, Shimeng; Gao, Bin; Fang, Zheng; Yu, Hongyu; Kang, Jinfeng; Wong, H-S Philip
Hardware implementation of neuromorphic computing is attractive as a computing paradigm beyond conventional digital computing. In this work, we show that the SET (off-to-on) transition of metal oxide resistive switching memory becomes probabilistic under a weak programming condition. The switching variability of the binary synaptic device implements a stochastic learning rule. Such stochastic SET transitions were statistically measured and modeled for a simulation of a winner-take-all network for competitive learning. The simulation illustrates that with such stochastic learning, the orientation classification function of input patterns can be effectively realized. The system performance metrics were compared between the conventional approach using analog synapses and the approach in this work that employs binary synapses utilizing stochastic learning. The feasibility of using binary synapses in neuromorphic computing may relax the constraint of engineering continuous multilevel intermediate states and widens the choice of materials for synaptic device design.
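A minimal simulation of the stochastic-learning idea might look as follows. Only the probabilistic SET transition of a binary synapse comes from the work above; the two toy "orientation" patterns, the winner-take-all wiring, the tie-breaking, and the stochastic depression step are assumptions added for illustration:

```python
import random

random.seed(1)

# Two binary "orientation" patterns on a hypothetical 8-pixel input.
patterns = {
    "horizontal": [1, 1, 1, 1, 0, 0, 0, 0],
    "vertical":   [0, 0, 0, 0, 1, 1, 1, 1],
}

P_SET = 0.3    # probability of an off-to-on (SET) transition under a
               # weak programming pulse (the statistically modeled event)
P_RESET = 0.3  # assumed probability of depressing unused ON synapses

# Two output neurons with binary synaptic weights, initialised ON.
weights = [[1] * 8 for _ in range(2)]

def winner(x):
    # Winner-take-all: the neuron with the largest overlap fires;
    # ties are broken at random.
    overlaps = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    best = max(overlaps)
    return random.choice([i for i, o in enumerate(overlaps) if o == best])

for _ in range(200):
    name, x = random.choice(list(patterns.items()))
    k = winner(x)
    for i in range(8):
        if x[i] == 1 and weights[k][i] == 0 and random.random() < P_SET:
            weights[k][i] = 1   # stochastic potentiation (SET)
        if x[i] == 0 and weights[k][i] == 1 and random.random() < P_RESET:
            weights[k][i] = 0   # stochastic depression

# After competitive learning, each pattern should recruit its own neuron.
assigned = {name: winner(x) for name, x in patterns.items()}
print(assigned)
```

The binary weights never need intermediate conductance levels; the randomness of the SET event itself carries the learning.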
Reza Karimi, RPh, PhD
Purpose: An Integrative Student Learning (ISL) activity was developed with the intent to enhance the dynamic of student teamwork and enhance student learning by fostering critical-thinking skills, self-directed learning skills, and active learning. Case Study: The ISL activity consists of three portions: teambuilding, teamwork, and a facilitator-driven “closing the loop” feedback discussion. For teambuilding, a set of clue sheets or manufacturer’s drug containers were distributed among student pairs, who applied their pharmaceutical knowledge to identify two more student pairs with similar clues or drugs, thus building a team of six. For teamwork, each team completed online exams, composed of integrated pharmaceutical science questions with clinical correlates, using only selected online library resources. For the feedback discussion, facilitators evaluated student impressions, opened a discussion about the ISL activity, and provided feedback to teams’ impressions and questions. This study describes three different ISL activities developed and implemented over three days with first year pharmacy students. Facilitators’ interactions with students and three surveys indicated a majority of students preferred ISL over traditional team activities and over 90% agreed ISL activities promoted active learning, critical thinking, self-directed learning, teamwork, and student confidence in online library searches. Conclusions: The ISL activity has proven to be an effective learning activity that promotes teamwork and integration of the didactic pharmaceutical sciences to enhance student learning of didactic materials and confidence in searching online library resources. All of this can be accomplished in a short amount of class time with a very reasonable amount of preparation.
El Naqa, Issam; Bradley, Jeffrey D; Deasy, Joseph O; Lindsay, Patricia E; Hope, Andrew J
Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to unseen data. In this work, several types of linear and nonlinear kernels for generating interaction terms and approximating the treatment-response function are evaluated. Institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used as examples. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of the esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data, in contrast with the other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model variables.
Frank, T.D.; Blau, Julia J.C.; Turvey, M.T.
Adaptation and re-adaptation processes are studied in terms of dynamic attractors that evolve and devolve. In doing so, a theoretical account is given for the fundamental observation that adaptation and re-adaptation processes do not exhibit one-trial learning. Moreover, the emergence of the latent aftereffect in the extended prism paradigm is addressed.
Forster, Malcolm R
Statisticians and philosophers of science have many common interests but restricted communication with each other. This volume aims to remedy these shortcomings. It provides state-of-the-art research in the area of philosophy of statistics by encouraging numerous experts to communicate with one another without feeling "restricted" by their disciplines or thinking "piecemeal" in their treatment of issues. A second goal of this book is to present work in the field without bias toward any particular statistical paradigm. Broadly speaking, the essays in this Handbook are concerned with problems of induction, statistics and probability. For centuries, foundational problems like induction have been among philosophers' favorite topics; recently, however, non-philosophers have increasingly taken a keen interest in these issues. This volume accordingly contains papers by both philosophers and non-philosophers, including scholars from nine academic disciplines.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Blended learning combines face-to-face class based and online teaching and learning delivery in order to increase flexibility in how, when, and where students study and learn. The development, integration, and promotion of blended learning in frameworks of curriculum design can optimize the opportunities afforded by information and communication technologies and, concomitantly, accommodate a broad range of student learning styles. This study critically reviews the potential benefits of blende...
An overview is given of certain aspects of fundamental statistical theories as applied to strongly magnetized plasmas. Emphasis is given to the gyrokinetic formalism, the historical development of realizable Markovian closures, and recent results in the statistical theory of turbulent generation of long-wavelength flows that generalize and provide further physical insight to classic calculations of eddy viscosity. A Hamiltonian formulation of turbulent flow generation is described and argued to be very useful.
Nguyen, ThuyUyen H.; Charity, Ian; Robson, Andrew
This study investigates students' perceptions of computer-based learning environments, their attitude towards business statistics, and their academic achievement in higher education. Guided by learning environments concepts and attitudinal theory, a theoretical model was proposed with two instruments, one for measuring the learning environment and…
Lee, Lung-Sheng; Lai, Chun-Chin
The paradigm of human resource development has shifted to workplace learning and performance. A workplace can be an organization, an office, a kitchen, a shop, a farm, a website, even a home. Workplace learning is a dynamic process of solving workplace problems through learning. An identification of global trends in workplace learning can help us to…
Zuhurudeen, Fathima Manaar; Huang, Yi Ting
Empirical evidence for statistical learning comes from artificial language tasks, but it is unclear how these effects scale up outside of the lab. The current study turns to a real-world test case of statistical learning where native English speakers encounter the syntactic regularities of Arabic through memorization of the Qur'an. This unique input provides extended exposure to the complexity of a natural language, with minimal semantic cues. Memorizers were asked to distinguish unfamiliar nouns and verbs based on their co-occurrence with familiar pronouns in an Arabic language sample. Their performance was compared to that of classroom learners who had explicit knowledge of pronoun meanings and grammatical functions. Grammatical judgments were more accurate in memorizers compared to non-memorizers. No effects of classroom experience were found. These results demonstrate that real-world exposure to the statistical properties of a natural language facilitates the acquisition of grammatical categories. Copyright © 2015 Elsevier B.V. All rights reserved.
Montag, Jessica L.; Jones, Michael N.; Smith, Linda B.
Young children learn language from the speech they hear. Previous work suggests that the statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children’s pict...
Gilman, Jessica H
...; however, a strong tie to traditional teaching paradigms still remains. In order to effectively train officers to utilize the fundamental knowledge taught in Modules A and B, the reliance on traditional teaching methodology must be destroyed and replaced by a dynamic critical thinking approach.
Wilson, Stephen M; Yen, Melodie; Eriksson, Dana K
Research on neuroplasticity in recovery from aphasia depends on the ability to identify language areas of the brain in individuals with aphasia. However, tasks commonly used to engage language processing in people with aphasia, such as narrative comprehension and picture naming, are limited in terms of reliability (test-retest reproducibility) and validity (identification of language regions, and not other regions). On the other hand, paradigms such as semantic decision that are effective in identifying language regions in people without aphasia can be prohibitively challenging for people with aphasia. This paper describes a new semantic matching paradigm that uses an adaptive staircase procedure to present individuals with stimuli that are challenging yet within their competence, so that language processing can be fully engaged in people with and without language impairments. The feasibility, reliability and validity of the adaptive semantic matching paradigm were investigated in sixteen individuals with chronic post-stroke aphasia and fourteen neurologically normal participants, in comparison to narrative comprehension and picture naming paradigms. All participants succeeded in learning and performing the semantic paradigm. Test-retest reproducibility of the semantic paradigm in people with aphasia was good (Dice coefficient = 0.66), and was superior to the other two paradigms. The semantic paradigm revealed known features of typical language organization (lateralization; frontal and temporal regions) more consistently in neurologically normal individuals than the other two paradigms, constituting evidence for validity. In sum, the adaptive semantic matching paradigm is a feasible, reliable and valid method for mapping language regions in people with aphasia. © 2018 Wiley Periodicals, Inc.
Samara, Anna; Caravolas, Markéta
Potential implicit orthographic learning deficits were investigated in adults with dyslexia. An artificial grammar learning paradigm served to assess dyslexic and typical readers' ability to exploit information about chunk frequency, letter-position patterns, and specific string similarity, all of which have analogous constructs in real…
With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth’s environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. An easy and stable weight strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be improved greatly with our proposed method.
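The FISTA iteration at the heart of such a solver can be sketched on a toy l1-regularized least-squares problem. The matrix, observations, and regularization weight below are made up for illustration; the paper's improved weighting strategy and compensation model are not reproduced:

```python
import math

# Toy problem standing in for the adjustment solve:
# minimise 0.5 * ||A x - b||^2 + lam * ||x||_1
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [1.0, 0.2, 1.2]
lam = 0.1
L = 3.0          # Lipschitz constant: largest eigenvalue of A^T A
step = 1.0 / L

def residual(x):
    return [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(3)]

def grad(x):
    # Gradient of the smooth part: A^T (A x - b)
    r = residual(x)
    return [sum(A[i][j] * r[i] for i in range(3)) for j in range(2)]

def soft_threshold(v, t):
    # Proximal operator of the l1 term.
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def objective(x):
    r = residual(x)
    return 0.5 * sum(ri * ri for ri in r) + lam * sum(abs(xi) for xi in x)

# FISTA: proximal gradient step plus Nesterov momentum.
x = [0.0, 0.0]
y, t = x[:], 1.0
for _ in range(100):
    g = grad(y)
    x_new = soft_threshold([y[j] - step * g[j] for j in range(2)], step * lam)
    t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = [x_new[j] + (t - 1.0) / t_new * (x_new[j] - x[j]) for j in range(2)]
    x, t = x_new, t_new

print([round(v, 3) for v in x])
```

The momentum sequence gives FISTA its O(1/k^2) convergence rate over plain proximal gradient descent, which is why it suits large normal-equation systems.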
Hülya Turgut Yıldız
• A. Eyüce on Learning from Istanbul. Representing different regions, the papers offer an exposition of philosophies and discourses, cases and experiments, and programs and approaches as voices that call for integrating ‘people-environments’ paradigm into teaching practices in an effective and efficient manner.
Qin, H; Dubnau, J
Individuals who experience traumatic events may develop persistent posttraumatic stress disorder. Patients with this disorder are commonly treated with exposure therapy, which has had limited long-term success. In experimental neurobiology, fear extinction is a model for exposure therapy. In this behavioral paradigm, animals are repeatedly exposed in a safe environment to the fearful stimulus, which leads to greatly reduced fear. Studying animal models of extinction has already led to better therapeutic strategies and the development of new candidate drugs. A lack of a powerful genetic model of extinction, however, has limited progress in identifying underlying molecular and genetic factors. In this study, we established a robust behavioral paradigm to study the short-term effect (acquisition) of extinction in Drosophila melanogaster. We focused on the extinction of olfactory aversive 1-day memory with a task that has been the main workhorse for the genetics of memory in flies. Using this paradigm, we show that extinction can inhibit each of two genetically distinct forms of consolidated memory. We then used a series of single-gene mutants with known impact on associative learning to examine the effects on extinction. We find that extinction is intact in each of these mutants, suggesting that extinction learning relies on different molecular mechanisms than does Pavlovian learning.
Broeck, C. van den
The problem of how one can learn from examples is illustrated for the case of a student perceptron trained by the Hebb rule on examples generated by a teacher perceptron. Two basic quantities are calculated: the training error and the generalization error. The obtained results are found to be typical. Other training rules are discussed. For the case of an Ising student with an Ising teacher, the existence of a first-order phase transition is shown. Special effects such as dilution, queries, rejection, etc. are discussed and some results for multilayer networks are reviewed. In particular, the properties of a self-similar committee machine are derived. Finally, we discuss the statistics of generalization, with a review of the Hoeffding inequality, the Dvoretzky-Kiefer-Wolfowitz theorem and the Vapnik-Chervonenkis theorem.
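The Hebb-rule teacher-student setup can be simulated directly. The input dimension, sample sizes, and Gaussian inputs below are illustrative choices; the qualitative point, that the student's generalization error falls as the number of teacher-labeled examples grows, is the one the abstract describes:

```python
import math
import random

random.seed(2)
N = 50  # input dimension (illustrative)

def rand_unit():
    v = [random.gauss(0, 1) for _ in range(N)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

teacher = rand_unit()

def label(x):
    # The teacher perceptron labels each input by the sign of w* . x
    return 1 if sum(t * xi for t, xi in zip(teacher, x)) >= 0 else -1

def hebb_student(P):
    # Hebb rule: the student weight vector is simply the sum of y * x
    # over the P training examples.
    J = [0.0] * N
    for _ in range(P):
        x = [random.gauss(0, 1) for _ in range(N)]
        y = label(x)
        for i in range(N):
            J[i] += y * x[i]
    return J

def generalization_error(J, trials=2000):
    # Fraction of fresh inputs on which student and teacher disagree.
    wrong = 0
    for _ in range(trials):
        x = [random.gauss(0, 1) for _ in range(N)]
        s = 1 if sum(j * xi for j, xi in zip(J, x)) >= 0 else -1
        if s != label(x):
            wrong += 1
    return wrong / trials

# More examples -> larger teacher-student overlap -> smaller error.
errors = {P: generalization_error(hebb_student(P)) for P in (10, 100, 1000)}
print(errors)
```

In the theory, the error is arccos(R)/pi where R is the overlap between student and teacher vectors; the simulation recovers the same monotone decay without that formula.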
Koparan, Timur; Güven, Bülent
The aim of this study is to determine the effect of a project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35…
Zou, Zhengxia; Shi, Zhenwei
We propose a new paradigm for target detection in high resolution aerial remote sensing images under small target priors. Previous remote sensing target detection methods frame the detection as learning of a detection model + inference of class-label and bounding-box coordinates. Instead, we formulate it from a Bayesian view: at inference stage, the detection model is adaptively updated to maximize its posterior, which is determined by both training and observation. We call this paradigm "random access memories (RAM)." In this paradigm, "memories" can be interpreted as any model distribution learned from the training data, and "random access" means accessing memories and randomly adjusting the model at detection phase to obtain better adaptivity to any unseen distribution of test data. By leveraging some of the latest detection techniques, e.g., deep Convolutional Neural Networks and multi-scale anchors, experimental results on a public remote sensing target detection data set show our method outperforms several other state-of-the-art methods. We also introduce a new data set, "LEarning, VIsion and Remote sensing laboratory (LEVIR)", which is one order of magnitude larger than other data sets in this field. LEVIR consists of a large set of Google Earth images, with over 22 k images and 10 k independently labeled targets. RAM gives a noticeable upgrade in accuracy (a mean average precision improvement of 1% ~ 4%) over our baseline detectors with acceptable computational overhead.
The PAD Class (Presentation-Assimilation-Discussion) is a new paradigm for classroom teaching combining the strengths of lecture and discussion. With half of the class time allocated to the teacher's presentation and the other half to students' discussion, an assimilation stage is inserted between presentation and discussion for independent and individualized learning. Since its first success in 2014, the PAD method has gained national popularity in China and been successfully put into practice by thousands of college teachers in nearly all subjects, e.g., science, engineering, medical sciences, social sciences, humanities and arts. This paper analyzed the psychological and pedagogical rationales underlying the PAD Class to explicate its effectiveness in enhancing active learning.
Christensen, Torben Spanget
analysis of subject didactics by Sigmund Ongstad. The two positions offer fundamentally different insights into didactics. Nielsen’s position establishes didactics as a knowledge domain and Ongstad’s position points to the dynamics of subject didactics by analyzing communication as a basic aspect. Krogh...... this article. A possible utilitarian didactical paradigm, already indicated by Krogh as a historical paradigm prominent in our time, is also discussed. It is suggested that reflection could be seen as a normative response to the utilitarian paradigm, and not as a paradigm in its own right. It is concluded...
Hilal Seda YILDIZ AYBEK
With the rapid development of Internet technologies, various paradigms of learning can be adapted to e-learning environments. One of these paradigms, Computer-Supported Collaborative Learning (CSCL), can be presented to learners through web-based systems such as an LMS while incorporating peer-to-peer (P2P) learning, measurement, and evaluation strategies. In this book, titled Intelligent Data Analysis for e-Learning: Enhancing Security and Trustworthiness in Online Learning Systems, various strategies and applications are presented to ensure trustworthiness in e-learning environments, especially where the CSCL paradigm is adopted. A comprehensive literature review on student security, privacy, and trustworthiness is presented in a very detailed and comprehensive way. This prepares readers conceptually for the detailed applications in the later parts of the book and the case studies at the Universitat Oberta de Catalunya. In addition to the applications that are presented in detail, approaches and techniques such as learning analytics, educational data mining, distributed computing, and massive data processing are shared through detailed applications showing how they can be adapted to the measurement and evaluation applications offered in online learning environments in the context of trustworthiness.
Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Traditionally, power distribution networks are either not observable or only partially observable. This complicates the development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.
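A sketch of the structure-learning idea, under the commonly used assumption that in a radial grid the variance of voltage differences grows with operational distance, so a minimum spanning tree over those variances recovers the layout. The toy voltage model and the six-node grid below are hypothetical, not the authors' algorithm:

```python
import random

random.seed(4)

# A known radial (tree) grid, child -> parent; node 0 is the substation.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def sample_voltages():
    # Toy model: each node's voltage is its parent's voltage plus an
    # independent drop, so adjacent nodes have the smallest difference
    # variance (variances add along the path between any two nodes).
    v = {0: random.gauss(1.0, 0.01)}
    for child in sorted(parent):
        v[child] = v[parent[child]] + random.gauss(-0.01, 0.005)
    return v

samples = [sample_voltages() for _ in range(2000)]

def var_diff(i, j):
    d = [s[i] - s[j] for s in samples]
    m = sum(d) / len(d)
    return sum((x - m) ** 2 for x in d) / len(d)

# Kruskal's algorithm: the minimum spanning tree over the variances of
# nodal voltage differences recovers the operational radial structure.
edges = sorted((var_diff(i, j), i, j)
               for i in range(6) for j in range(i + 1, 6))
comp = list(range(6))

def find(a):
    while comp[a] != a:
        a = comp[a]
    return a

tree = set()
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        comp[ri] = rj
        tree.add((i, j))

print(sorted(tree))
```

Because each line contributes an independent voltage drop in this model, the edge variances form an additive tree metric, and the spanning tree built from the smallest variances coincides with the true wiring.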
The main objective of this paper is to present a unified view of Winnicott’s contribution to psychoanalysis. Part I (Sections 1-4) starts off by recalling that, according to some important commentators, Winnicott introduced a change in paradigms in psychoanalysis. In order to show that this change can be viewed as an overall “switch in paradigms”, in the sense given by T. S. Kuhn, this paper presents an account of Kuhn’s view of science and offers a reconstruction of Freud’s Oedipal, Triangular or “Toddler-in-the-Mother’s-Bed” Paradigm. Part II (Sections 5-13) shows that as early as the 1920s Winnicott encountered insurmountable anomalies in the Oedipal paradigm and, for that reason, started what can be called revolutionary research for a new framework for psychoanalysis. This research led Winnicott, especially during the last period of his life, to produce an alternative dual or “Baby-on-the-Mother’s-Lap” Paradigm. This new paradigm is described in some detail, especially the paradigmatic dual mother-baby relation and Winnicott’s dominant theory of maturation. Final remarks are made regarding Winnicott’s heritage and the future of psychoanalysis.
Olani, A.; Hoekstra, R.; Harskamp, E.; van der Werf, G.
Introduction: The study investigated the degree to which students' statistical reasoning abilities, statistics self-efficacy, and perceived value of statistics improved during a reform based introductory statistics course. The study also examined whether the changes in these learning outcomes differed with respect to the students' mathematical…
Hansen, Niels Christian; Loui, Psyche; Vuust, Peter
Statistical learning underlies the generation of expectations with different degrees of uncertainty. In music, uncertainty applies to expectations for pitches in a melody. This uncertainty can be quantified by Shannon entropy from distributions of expectedness ratings for multiple continuations o...
Lee, Junghwan; Zo, Hangjung; Lee, Hwansoo
The innovation of online technologies and the rapid diffusion of smart devices are changing the workplace learning environment. Smart learning, as an emerging learning paradigm, enables employees' learning to take place anywhere and anytime. Workplace learning studies, however, have focused on the traditional e-learning environment, and they have failed…
Fehér, Olga; Ljubičić, Iva; Suzuki, Kenta; Okanoya, Kazuo; Tchernichovski, Ofer
At the onset of vocal development, both songbirds and humans produce variable vocal babbling with broadly distributed acoustic features. Over development, these vocalizations differentiate into the well-defined, categorical signals that characterize adult vocal behaviour. A broadly distributed signal is ideal for vocal exploration, that is, for matching vocal production to the statistics of the sensory input. The developmental transition to categorical signals is a gradual process during which the vocal output becomes differentiated and stable. But does it require categorical input? We trained juvenile zebra finches with playbacks of their own developing song, produced just a few moments earlier, updated continuously over development. Although the vocalizations of these self-tutored (ST) birds were initially broadly distributed, birds quickly developed categorical signals, as fast as birds that were trained with a categorical, adult song template. By contrast, siblings of those birds that received no training (isolates) developed phonological categories much more slowly and never reached the same level of category differentiation as their ST brothers. Therefore, instead of simply mirroring the statistical properties of their sensory input, songbirds actively transform it into distinct categories. We suggest that the early self-generation of phonological categories facilitates the establishment of vocal culture by making the song easier to transmit at the micro level, while promoting stability of shared vocabulary at the group level over generations. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Authors.
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout ...
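The two summary statistics described above (the median of the switch-latency distribution for timing accuracy, the interquartile interval for timing precision) can be sketched in a few lines; the latency values below are invented for illustration:

```python
import statistics

def timing_measures(switch_latencies):
    """Summarize a switch-latency distribution: the median estimates
    timing accuracy; the interquartile interval estimates precision."""
    q1, median, q3 = statistics.quantiles(sorted(switch_latencies), n=4)
    return median, q3 - q1

# Toy switch latencies (seconds), not data from the study.
latencies = [2.1, 2.4, 2.5, 2.6, 2.8, 3.0, 3.3]
median, iqr = timing_measures(latencies)
```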
transform the existing methods of learning using wireless and mobile technologies at the … The proposed system, Wi-Learn, enfolds a set of mobile collaborative applications …
Sandberg, J.; Maris, M.; Arnedillo Sánchez, I.; Isaías, P.
A review of mobile learning research shows that studies take various research approaches and apply a varied number of research methods, ranging from primarily quantitative and experimental to purely qualitative and descriptive. This paper presents a classification framework to position mobile
Lu, H.; Rojas, R.R.; Beckers, T.; Yuille, A.; Love, B.C.; McRae, K.; Sloutsky, V.M.
Recent experiments (Beckers, De Houwer, Pineño, & Miller, 2005;Beckers, Miller, De Houwer, & Urushihara, 2006) have shown that pretraining with unrelated cues can dramatically influence the performance of humans in a causal learning paradigm and rats in a standard Pavlovian conditioning paradigm.
The New Environmental or Ecological Paradigm (NEP) is widely acknowledged as a reliable multiple-item scale to capture environmental attitudes or beliefs. It has been used in statistical analyses for almost 30 years, primarily by psychologists, but also by political scientists, sociologists and geographers. The scale's theoretical foundation is,…
Learning Prototypical Cases. OFF-BROADWAY, MCI and RMHC-* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the … at the Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI and RMHC-* made the following notable …
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
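The partial-feedback, delayed-consequence setting described above can be illustrated with a minimal tabular Q-learning sketch; the chain environment and all parameters are invented for illustration, not taken from this text:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a small chain: actions move left or right,
    and only the rightmost state pays a reward, so the learner receives
    partial feedback whose consequences unfold over time."""
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action choice, with random tie-breaking
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
# After training, "right" should dominate in every non-terminal state.
```

Note that the reward appears only at the goal: credit must propagate backward through the value estimates, which is exactly the "time plays a special role" aspect the abstract emphasizes.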
Gordon, Goren; Dorfman, Nimrod; Ahissar, Ehud
Rats move their whiskers to acquire information about their environment. It has been observed that they palpate novel objects and objects they are required to localize in space. We analyze whisker-based object localization using two complementary paradigms, namely, active learning and intrinsic-reward reinforcement learning. Active learning algorithms select the next training samples according to the hypothesized solution in order to better discriminate between correct and incorrect labels. Intrinsic-reward reinforcement learning uses prediction errors as the reward to an actor-critic design, such that behavior converges to the one that optimizes the learning process. We show that in the context of object localization, the two paradigms result in palpation whisking as their respective optimal solution. These results suggest that rats may employ principles of active learning and/or intrinsic reward in tactile exploration and can guide future research to seek the underlying neuronal mechanisms that implement them. Furthermore, these paradigms are easily transferable to biomimetic whisker-based artificial sensors and can improve the active exploration of their environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
The courses Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip students of Mathematics Education with descriptive and inferential statistics. Understanding of descriptive and inferential statistics is important for students in the Mathematics Education Department, especially those whose final projects involve quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their quantitative data, and to establish relationships between the independent and dependent variables defined in their research. In fact, when students carried out final projects involving quantitative research, it was not rare to find them making mistakes in the steps of drawing conclusions and in choosing the hypothesis-testing procedure, and as a result they reached incorrect conclusions. This is a very serious mistake for those doing quantitative research. The implementation of reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses yielded the following: 1. Twenty-two students passed the course and one student did not. 2. The most common grade was A, achieved by 18 students. 3. According to all students, the course developed their critical stance and built mutual care through the learning process. 4. All students agreed that through the learning process they underwent in the course, they could build a caring attitude toward each other.
Nielson, Perpetua Lynne; Bean, Nathan William; Larsen, Ross Allen Andrew
We examine the impact of a flipped classroom model of learning on student performance and satisfaction in a large undergraduate introductory statistics class. Two professors each taught a lecture-section and a flipped-class section. Using MANCOVA, a linear combination of final exam scores, average quiz scores, and course ratings was compared for…
Osowiec, Darlene A
Within the hypnosis field, there is a disparity between clinical and research worldviews. Clinical practitioners work with patients who are dealing with serious, often unique, real-world problems-lived experience. Researchers adhere to objective measurements, standardization, data, and statistics. Although there is overlap, an ongoing divergence can be counterproductive to the hypnosis field and to the larger professional and social contexts. The purpose of this article is: (1) to examine some of the major assumptions, the history, and the philosophy that undergird the definition of science, which was constructed in the mid-17th century; (2) to discover how science is a product of prevailing social forces and is undergoing a paradigm shift; and (3) to understand the more encompassing, holistic paradigm with implications for the hypnosis field.
Curran-Everett, Douglas; Williams, Calvin L.
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…
We perform a review of Web Mining techniques and we describe a Bootstrap Statistics methodology applied to pattern model classifier optimization and verification for Supervised Learning for Tour-Guide Robot knowledge repository management. It is virtually impossible to thoroughly test Web page classifiers and many other Internet applications with pure empirical data, due to the need for human intervention to generate training sets and test sets. We propose using the computer-based Bootstrap paradigm to design a test environment where they are checked with better reliability
A study of Irish multinational companies identified antecedents to organizational learning: nature of global business, anthropomorphism, dissatisfaction with traditional paradigms, customer-responsive culture, and intellectual capital. The path to the learning organization builds on these antecedents in an environment of innovation focused on…
Schmand, B.; Kop, W. J.; Kuipers, T.; Bosveld, J.
Implicit verbal learning of psychotic patients (n = 59) and non-psychotic control patients (n = 20) was studied using stem completion and association tasks in lexical and semantic priming paradigms. Performance on these tasks was contrasted with explicit memory on Rey's verbal learning test.
Soft computing (SC) consists of several computing paradigms, including neural networks, fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as genetic algorithms. The integration of those constituent methodologies forms the core of SC. In addition, the synergy allows SC to incorporate human knowledge effectively, deal with imprecision and uncertainty, and learn to adapt to unknown or changing environments for better performance. Together with other modern technologies, SC and its applications exert unprecedented influence on intelligent systems that mimic hum
Bryderup, Inge; Larson, Anne; Trentel, Marlene Quisgaard
For years, increased use of ICT in education and training has been part of Danish education policy, and the number of computers in schools and the actual use of ICT have grown. At the same time, school leaders' and teachers' pedagogical paradigm in primary and lower secondary schools seems to be changing from a lifelong learning paradigm (focussed on student-centred, active, and autonomous learning) to a more traditional paradigm (focussed on curriculum-centred teaching and instruction). The aim of this paper is to describe this development in relation to the way ICT is used, as well as to changes in educational policy. Beck and Beck-Gernsheim's (2002) theory about ‘institutionalized individualization' as characteristic of the reflexive society serves as a theoretical framework for better understanding the observed changes.
Hammack, Phillip L.
Through the application of life course theory to the study of sexual orientation, this paper specifies a new paradigm for research on human sexual orientation that seeks to reconcile divisions among biological, social science, and humanistic paradigms. Recognizing the historical, social, and cultural relativity of human development, this paradigm…
Greer, Janet Agnes
The American education system must utilize collaboration to meet the challenges and demands our culture poses for schools. Deeply rooted processes and structures favor teaching and learning in isolation and hinder the shift to a more collaborative paradigm. Professional learning communities (PLCs) support continuous teacher learning, improved efficacy, and program implementation. The PLC provides the framework for the development and enhancement of teacher collaboration and teacher collaborat...
Escudero, Paola; Mulak, Karen E; Fu, Charlene S L; Singh, Leher
To succeed at cross-situational word learning, learners must infer word-object mappings by attending to the statistical co-occurrences of novel objects and labels across multiple encounters. While past studies have investigated this as a learning mechanism for infants and monolingual adults, bilinguals' cross-situational word learning abilities have yet to be tested. Here, we compared monolinguals' and bilinguals' performance on a cross-situational word learning paradigm that featured phonologically distinct word pairs (e.g., BON-DEET) and phonologically similar word pairs that varied by a single consonant or vowel segment (e.g., BON-TON, DEET-DIT, respectively). Both groups learned the novel word-referent mappings, providing evidence that cross-situational word learning is a learning strategy also available to bilingual adults. Furthermore, bilinguals were overall more accurate than monolinguals. This supports the idea that bilingualism fosters a wide range of cognitive advantages that may benefit implicit word learning. Additionally, response patterns to the different trial types revealed a relative difficulty for vowel minimal pairs compared to consonant minimal pairs, replicating the pattern found in monolinguals by Escudero et al. (2016) in a different English accent. Specifically, all participants failed to learn vowel contrasts differentiated by vowel height. We discuss evidence for this bilingual advantage as a language-specific or general advantage.
Merino, S.; Martinez, J.; Gutierrez, G.; Galan, J. L.; Rodriguez, P.; Munoz, M. L.; Gonzalez, J. M.; Cordero, P.; Padilla, Y.; Mora, A.; Merida, E.; Rodriguez, F.
For many years, university teaching was based mainly on lectures, but critics point out that lecturing is mainly a one-way method of communication that does not involve significant audience participation. Nowadays e-learning has become a distance learning paradigm using information technology as the Internet, intranets, emails and multimedia…
Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann
Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
Smoking has been proven to negatively affect health in a multitude of ways. As of 2009, smoking has been considered the leading cause of preventable morbidity and mortality in the United States, continuing to plague the country’s overall health. This study aims to investigate the viability and effectiveness of some machine learning algorithms for predicting the smoking status of patients based on their blood tests and vital readings results. The analysis of this study is divided into two parts: In part 1, we use One-way ANOVA analysis with SAS tool to show the statistically significant difference in blood test readings between smokers and non-smokers. The results show that the difference in INR, which measures the effectiveness of anticoagulants, was significant in favor of non-smokers which further confirms the health risks associated with smoking. In part 2, we use five machine learning algorithms: Naïve Bayes, MLP, Logistic regression classifier, J48 and Decision Table to predict the smoking status of patients. To compare the effectiveness of these algorithms we use: Precision, Recall, F-measure and Accuracy measures. The results show that the Logistic algorithm outperformed the four other algorithms with Precision, Recall, F-Measure, and Accuracy of 83%, 83.4%, 83.2%, 83.44%, respectively.
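The four comparison measures used above can be computed directly from a binary confusion matrix. A minimal sketch; the labels below are toy values, not the study's data:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall, F-measure, and accuracy for binary labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    correct = sum(1 for t, p in pairs if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    accuracy = correct / len(pairs)
    return precision, recall, f_measure, accuracy

# Toy smoking-status labels (1 = smoker).
p, r, f, a = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```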
Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro
Jafar Asgari Arani
This study uses depth interviews as a research instrument to study users' perceptions and acceptance of the potential use of mobile phones in a prospective design for learning Medical English, a mandatory course in medicine. Almost all the respondents (93% across graduates and students) were unanimous about the need to learn English through M-learning. When respondents were asked to suggest ideas on how medical English could be taught through mobiles (unaided question), 49% suggested SMS and 27% suggested SMS and live calls. The survey indicates that there is a unanimous demand to learn English amongst students and graduates of medicine in a new e-education setting called M-learning. Constraints imposed by one’s occupation and available resources expose the limitations of traditional learning and open up a huge opportunity for M-English learning. Irrespective of differences, potential learners accepted the credibility of M-learning and displayed willingness to be active users of an M-learning module. Diversity of responses on potential frequency of usage for SMSs, preferences regarding listening to IVR, speaking to live callers, traditional classroom learning, M-learning and testing options establishes at a primary level that the means of ‘engagement’, ‘presence’ and ‘flexibility’ can differ significantly ‘between’ and ‘within’ different educational groups.
Research on learning science in informal settings and the formal (sometimes experimental) study of learning in classrooms or psychological laboratories tend to be separate domains, even drawing on different theories and methods. These differences make it difficult to compare knowing and learning observed in one paradigm/context with those observed…
Ren, Zhipeng; Dong, Daoyi; Li, Huaxiong; Chen, Chunlin
In this paper, a new training paradigm is proposed for deep reinforcement learning using self-paced prioritized curriculum learning with coverage penalty. The proposed deep curriculum reinforcement learning (DCRL) takes the most advantage of experience replay by adaptively selecting appropriate transitions from replay memory based on the complexity of each transition. The criteria of complexity in DCRL consist of self-paced priority as well as coverage penalty. The self-paced priority reflects the relationship between the temporal-difference error and the difficulty of the current curriculum for sample efficiency. The coverage penalty is taken into account for sample diversity. With comparison to deep Q network (DQN) and prioritized experience replay (PER) methods, the DCRL algorithm is evaluated on Atari 2600 games, and the experimental results show that DCRL outperforms DQN and PER on most of these games. More results further show that the proposed curriculum training paradigm of DCRL is also applicable and effective for other memory-based deep reinforcement learning approaches, such as double DQN and dueling network. All the experimental results demonstrate that DCRL can achieve improved training efficiency and robustness for deep reinforcement learning.
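The selection idea above (replay priority rising with temporal-difference error, damped by a coverage penalty for transitions that have already been replayed often) can be sketched as follows. This is a hand-rolled stand-in formula for illustration only, not the paper's exact DCRL rule:

```python
import random

def dcrl_style_sample(td_errors, replay_counts, batch_size=2, penalty=0.5):
    """Sample transition indices with probability increasing in |TD error|
    and decreasing in how often a transition was already replayed
    (a stand-in for a coverage penalty; not the paper's formulation)."""
    scores = [abs(e) / (1.0 + penalty * c)
              for e, c in zip(td_errors, replay_counts)]
    total = sum(scores)
    weights = [s / total for s in scores]
    return random.choices(range(len(td_errors)), weights=weights, k=batch_size)

# Transition 2 has a sizable TD error but has been replayed 10 times,
# so the penalty shifts probability mass toward fresher transitions.
idx = dcrl_style_sample(td_errors=[0.9, 0.1, 0.5], replay_counts=[0, 0, 10])
```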
Asher, Derrik E.; Craig, Alexis B.; Zaldivar, Andrew; Brewer, Alyssa A.; Krichmar, Jeffrey L.
Serotonin (5-HT) is a neuromodulator that has been attributed to cost assessment and harm aversion. In this review, we look at the role 5-HT plays in making decisions when subjects are faced with potential harmful or costly outcomes. We review approaches for examining the serotonergic system in decision-making. We introduce our group’s paradigm used to investigate how 5-HT affects decision-making. In particular, our paradigm combines techniques from computational neuroscience, socioeconomic game theory, human–robot interaction, and Bayesian statistics. We will highlight key findings from our previous studies utilizing this paradigm, which helped expand our understanding of 5-HT’s effect on decision-making in relation to cost assessment. Lastly, we propose a cyclic multidisciplinary approach that may aid in addressing the complexity of exploring 5-HT and decision-making by iteratively updating our assumptions and models of the serotonergic system through exhaustive experimentation. PMID:24319413
Siadat, M. Vali; Musial, Paul M.; Sagher, Yoram
This study reports the effects of an integrated instructional program (the Keystone Method) on the students' performance in mathematics and reading, and tracks students' persistence and retention. The subject of the study was a large group of students in remedial mathematics classes at the college, willing to learn but lacking basic educational…
Mario Miguel Ojeda Ramírez
Currently some teachers implement different methods in order to promote education linked to reality, to provide more effective training and meaningful learning. Active methods aim to increase motivation and create scenarios in which student participation is central to achieving more meaningful learning. This paper reports on the implementation of a process of educational innovation in the course Topics of Multivariate Statistics, offered in the degree in Statistical Sciences and Techniques at the Universidad Veracruzana (Mexico). The strategies used, including data collection activities, the design and development of projects, and individual and group presentations, are described. The information and communication technologies (ICT) used are EMINUS, the distributed education platform of the Universidad Veracruzana, and file management with Dropbox, plus communication via WhatsApp. The R software was used for statistical analysis and for making presentations in academic forums. To explore students' perceptions, in-depth interviews were conducted and indicators for evaluating student satisfaction were defined; the results show positive evidence, concluding that students were satisfied with the way the course was designed and implemented. They also stated that they feel able to apply what they have learned, and that using these strategies made them feel prepared for their professional life. Finally, some suggestions for improving the course in future editions are included.
Yang, Peng; Kajiwara, Riki; Tonoki, Ayako; Itoh, Motoyuki
We designed an automated device to study active avoidance learning abilities of zebrafish. Open source tools were used for the device control, statistical computing, and graphic outputs of data. Using the system, we developed active avoidance tests to examine the effects of trial spacing and aging on learning. Seven-month-old fish showed stronger avoidance behavior as measured by color preference index with discrete spaced training as compared to successive spaced training. Fifteen-month-old fish showed a similar trend, but with reduced cognitive abilities compared with 7-month-old fish. Further, in 7-month-old fish, an increase in learning ability during trials was observed with discrete, but not successive, spaced training. In contrast, 15-month-old fish did not show an increase in learning ability during trials. Therefore, these data suggest that discrete spacing is more effective for learning than successive spacing with the zebrafish active avoidance paradigm, and that the time course analysis of active avoidance using discrete spaced training is useful to detect age-related learning impairment. Copyright © 2017 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.
He, Wu; Cernusca, Dan; Abdous, M'hammed
The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…
Zala, Sarah M.; Määttänen, Ilmari
The zebrafish ( Danio rerio) is increasingly becoming an important model species for studies on the genetic and neural mechanisms controlling behaviour and cognition. Here, we utilized a conditioned place preference (CPP) paradigm to study social learning in zebrafish. We tested whether social interactions with conditioned demonstrators enhance the ability of focal naïve individuals to learn an associative foraging task. We found that the presence of conditioned demonstrators improved focal fish foraging behaviour through the process of social transmission, whereas the presence of inexperienced demonstrators interfered with the learning of the control focal fish. Our results indicate that zebrafish use social learning for finding food and that this CPP paradigm is an efficient assay to study social learning and memory in zebrafish.
Daugbjerg, Carsten; Farsund, Arild Aurvåg; Langhelle, Oluf
This paper argues that a policy regime based on a paradigm mix may be resilient when challenged by changing power balances and new agendas. Controversies between the actors can be contained within the paradigm mix as it enables them to legitimize different ideational positions. Rather than engaging … context changed. The paradigm mix proved sufficiently flexible to accommodate food security concerns and at the same time continue to take steps toward further liberalization. Indeed, the main players have not challenged the paradigm mix.
HIV/AIDS Basic Statistics … Interested in learning more about CDC's HIV statistics? Terms, Definitions, and Calculations Used in CDC HIV …
Gibbons, Jean Dickinson
Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012 … one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011. This excellently presente
Dalla, Christina; Shors, Tracey J
Males and females learn and remember differently at different times in their lives. These differences occur in most species, from invertebrates to humans. We review here sex differences as they occur in laboratory rodent species. We focus on classical and operant conditioning paradigms, including classical eyeblink conditioning, fear-conditioning, active avoidance and conditioned taste aversion. Sex differences have been reported during acquisition, retention and extinction in most of these paradigms. In general, females perform better than males in the classical eyeblink conditioning, in fear-potentiated startle and in most operant conditioning tasks, such as the active avoidance test. However, in the classical fear-conditioning paradigm, in certain lever-pressing paradigms and in the conditioned taste aversion, males outperform females or are more resistant to extinction. Most sex differences in conditioning are dependent on organizational effects of gonadal hormones during early development of the brain, in addition to modulation by activational effects during puberty and adulthood. Critically, sex differences in performance account for some of the reported effects on learning and these are discussed throughout the review. Because so many mental disorders are more prevalent in one sex than the other, it is important to consider sex differences in learning when applying animal models of learning for these disorders. Finally, we discuss how sex differences in learning continue to alter the brain throughout the lifespan. Thus, sex differences in learning are not only mediated by sex differences in the brain, but also contribute to them.
Díaz, Zuleyka; Segovia, María Jesús; Fernández, José
Prediction of insurance companies insolvency has arisen as an important problem in the field of financial research. Most methods applied in the past to tackle this issue are traditional statistical techniques which use financial ratios as explicative variables. However, these variables often do not satisfy statistical assumptions, which complicates the application of the mentioned methods. In this paper, a comparative study of the performance of two non-parametric machine learning techniques ...
Bond, Marjorie E.; Perkins, Susan N.; Ramirez, Caroline
Although statistics education research has focused on students' learning and conceptual understanding of statistics, researchers have only recently begun investigating students' perceptions of statistics. The term perception describes the overlap between cognitive and non-cognitive factors. In this mixed-methods study, undergraduate students…
Nolan, Bernard T.; Fienen, Michael N.; Lorenz, David L.
We used a statistical learning framework to evaluate the ability of three machine-learning methods to predict nitrate concentration in shallow groundwater of the Central Valley, California: boosted regression trees (BRT), artificial neural networks (ANN), and Bayesian networks (BN). Machine learning methods can learn complex patterns in the data but because of overfitting may not generalize well to new data. The statistical learning framework involves cross-validation (CV) training and testing data and a separate hold-out data set for model evaluation, with the goal of optimizing predictive performance by controlling for model overfit. The order of prediction performance according to both CV testing R2 and that for the hold-out data set was BRT > BN > ANN. For each method we identified two models based on CV testing results: that with maximum testing R2 and a version with R2 within one standard error of the maximum (the 1SE model). The former yielded CV training R2 values of 0.94–1.0. Cross-validation testing R2 values indicate predictive performance, and these were 0.22–0.39 for the maximum R2 models and 0.19–0.36 for the 1SE models. Evaluation with hold-out data suggested that the 1SE BRT and ANN models predicted better for an independent data set compared with the maximum R2 versions, which is relevant to extrapolation by mapping. Scatterplots of predicted vs. observed hold-out data obtained for final models helped identify prediction bias, which was fairly pronounced for ANN and BN. Lastly, the models were compared with multiple linear regression (MLR) and a previous random forest regression (RFR) model. Whereas BRT results were comparable to RFR, MLR had low hold-out R2 (0.07) and explained less than half the variation in the training data. Spatial patterns of predictions by the final, 1SE BRT model agreed reasonably well with previously observed patterns of nitrate occurrence in groundwater of the Central Valley.
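The "1SE model" selection rule described above can be sketched in a few lines: among models ordered from simplest to most complex, pick the simplest one whose mean cross-validation score is within one standard error of the best. The fold scores and model names below are invented for illustration:

```python
import statistics

def one_se_model(cv_scores_by_model):
    """Given CV test scores per model (dict ordered simplest-first),
    pick the simplest model whose mean score is within one standard
    error of the best mean score (the '1SE' rule to curb overfit)."""
    means = {m: statistics.mean(s) for m, s in cv_scores_by_model.items()}
    ses = {m: statistics.stdev(s) / len(s) ** 0.5
           for m, s in cv_scores_by_model.items()}
    best = max(means, key=means.get)
    threshold = means[best] - ses[best]
    for model in cv_scores_by_model:  # insertion order: simplest first
        if means[model] >= threshold:
            return model
    return best

# Toy 5-fold CV test R^2 values for three tree sizes (not the study's data).
scores = {
    "small_tree":  [0.30, 0.28, 0.33, 0.31, 0.29],
    "medium_tree": [0.35, 0.33, 0.38, 0.36, 0.34],
    "large_tree":  [0.36, 0.30, 0.40, 0.38, 0.33],
}
choice = one_se_model(scores)
```

Here the largest tree has the best mean score, but its one-standard-error band covers the medium tree, so the simpler model is preferred.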
Browaeys, M.-J.; Wahyudi, S.
E-learning should be approached via a new paradigm, one where instruction and information are involved in a recursive process, an approach which counters the concept of linearity. New ways of thinking about how people learn and new technologies favour the emergence of principles of e-learning that
Jacobs, George M.; Shan, Tan Hui
The present paper begins by situating learner autonomy and collaborative learning as part of a larger paradigm shift towards student-centred learning. Next are brief discussions of learner autonomy and how learner autonomy links with collaborative learning. In the main part of the paper, four central principles of collaborative learning are…
The credibility of short-term undergraduate research as a paradigm for effective learning within Medicine has been recognized. With a view to strengthening this paradigm and enhancing research-teaching linkages, this study explores whether particular types of research supervisor are predisposed to providing supportive learning environments.…
Schlichting, Margaret L; Guarino, Katharine F; Schapiro, Anna C; Turk-Browne, Nicholas B; Preston, Alison R
Despite the importance of learning and remembering across the lifespan, little is known about how the episodic memory system develops to support the extraction of associative structure from the environment. Here, we relate individual differences in volumes along the hippocampal long axis to performance on statistical learning and associative inference tasks-both of which require encoding associations that span multiple episodes-in a developmental sample ranging from ages 6 to 30 years. Relating age to volume, we found dissociable patterns across the hippocampal long axis, with opposite nonlinear volume changes in the head and body. These structural differences were paralleled by performance gains across the age range on both tasks, suggesting improvements in the cross-episode binding ability from childhood to adulthood. Controlling for age, we also found that smaller hippocampal heads were associated with superior behavioral performance on both tasks, consistent with this region's hypothesized role in forming generalized codes spanning events. Collectively, these results highlight the importance of examining hippocampal development as a function of position along the hippocampal axis and suggest that the hippocampal head is particularly important in encoding associative structure across development.
Nielsen, Tine; Kreiner, Svend
Motivated by experience with students' psychological barriers to learning statistics, we modified and extended the Statistical Anxiety Rating Scale (STARS) to develop a contemporary Danish measure of attitudes and relationship to statistics for use with higher education students...... with evidence of DIF in all cases: one TCA item functioned differentially relative to age, one WS item functioned differentially relative to statistics course (first or second), and two IA items functioned differentially relative to statistics course and academic discipline (sociology, public health...
Yoshioka, Nobuyuki; Akagi, Yutaka; Katsura, Hosho
We apply an artificial neural network in a supervised manner to map out the quantum phase diagram of disordered topological superconductors in class DIII. Given disorder that keeps the discrete symmetries of the ensemble as a whole, translational symmetry, which is broken in each quasiparticle distribution individually, is recovered statistically by taking an ensemble average. Using this, we classify the phases with an artificial neural network that learned the quasiparticle distribution in the clean limit, and show that the result is fully consistent with calculations by the transfer matrix method or the noncommutative geometry approach. If all three phases, namely the Z2, trivial, and thermal metal phases, appear in the clean limit, the machine can classify them with high confidence over the entire phase diagram. If only the former two phases are present, we find that the machine remains confused in a certain region, leading us to conclude that an unknown phase has been detected, which is eventually identified as the thermal metal phase.
Harrison, Christopher J.; Konings, Karen D.; Schuwirth, Lambert W. T.; Wass, Valerie; van der Vleuten, Cees P. M.
BACKGROUND: Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematical. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of asses...
Abdullah, Rusli; Samah, Bahaman Abu; Bolong, Jusang; D'Silva, Jeffrey Lawrence; Shaffril, Hayrol Azril Mohamed
Today, teaching and learning (T&L) using technology as a tool is becoming more important, especially in the field of statistics as part of the subject matter in the higher education environment. Even though there are many types of statistical learning tool (SLT) technology that can be used to support and enhance the T&L environment, there is no common, standard knowledge-management portal to provide guidance, especially on the infrastructure requirements of SLT, for the community of users (CoU) such as educators, students, and other parties interested in adopting this technology for their T&L. There is therefore a need for a common, standard infrastructure requirement for a knowledge portal that helps the CoU manage statistical knowledge: acquiring, storing, disseminating, and applying it for their specific purposes. Furthermore, having this infrastructure-requirement model of a knowledge portal for SLT as guidance for promoting best practice among the CoU can also enhance the quality and productivity of their work towards excellence in the application of statistical knowledge in the education environment.
Dumas, Guillaume; de Guzman, Gonzalo C; Tognoli, Emmanuelle; Kelso, J A Scott
Social neuroscience has called for new experimental paradigms aimed toward real-time interactions. A distinctive feature of interactions is mutual information exchange: one member of a pair changes in response to the other while simultaneously producing actions that alter the other. Combining mathematical and neurophysiological methods, we introduce a paradigm called the human dynamic clamp (HDC), to directly manipulate the interaction or coupling between a human and a surrogate constructed to behave like a human. Inspired by the dynamic clamp used so productively in cellular neuroscience, the HDC allows a person to interact in real time with a virtual partner itself driven by well-established models of coordination dynamics. People coordinate hand movements with the visually observed movements of a virtual hand, the parameters of which depend on input from the subject's own movements. We demonstrate that HDC can be extended to cover a broad repertoire of human behavior, including rhythmic and discrete movements, adaptation to changes of pacing, and behavioral skill learning as specified by a virtual "teacher." We propose HDC as a general paradigm, best implemented when empirically verified theoretical or mathematical models have been developed in a particular scientific field. The HDC paradigm is powerful because it provides an opportunity to explore parameter ranges and perturbations that are not easily accessible in ordinary human interactions. The HDC not only makes it possible to test the veracity of theoretical models, it also illuminates features that are not always apparent in real-time human social interactions and the brain correlates thereof.
Shimoda, Shingo; Kimura, Hidenori
The remarkable capability of living organisms to adapt to unknown environments is due to learning mechanisms that are totally different from the current artificial machine-learning paradigm. Computational media composed of identical elements that have simple activity rules play a major role in biological control, such as the activities of neurons in brains and the molecular interactions in intracellular control. As a result of integrating the individual activities of the computational media, new behavioral patterns emerge to adapt to changing environments. We previously implemented this feature of biological control in a form of machine learning and succeeded in realizing bipedal walking without a robot model or trajectory planning. Despite the success of bipedal walking, it remained a puzzle why the individual activities of the computational media could achieve the global behavior. In this paper, we answer this question by taking a statistical approach that connects the individual activities of computational media to global network behaviors. We show that the individual activities can generate optimized behaviors from a particular global viewpoint, i.e., autonomous rhythm generation and learning of balanced postures, without using global performance indices.
Neumann, David L.; Hood, Michelle
A wiki was used as part of a blended learning approach to promote collaborative learning among students in a first year university statistics class. One group of students analysed a data set and communicated the results by jointly writing a practice report using a wiki. A second group analysed the same data but communicated the results in a…
Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori
Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing
Moon, Hanna; Lee, Chan
Purpose: This paper aims to deepen the understanding of strategic learning through the lens of environmental jolts. Design/methodology/approach: Strategic learning is explained from the three paradigms of organizational learning. Findings: Organizational learning provides a firm foundation to develop and elaborate the concept of strategic learning…
Puviani, Luca; Rama, Sidita
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction errors computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation.
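The prediction-error component that such models build on can be illustrated with the classic Rescorla-Wagner delta rule; this toy sketch shows only that component, not the paper's full model (which also incorporates statistical inference), and all names here are hypothetical.

```python
def rescorla_wagner(us, lr=0.1, v0=0.0):
    """Track expected unconditioned-stimulus (US) value via prediction errors.
    `us` is a sequence of US magnitudes (e.g. 1 = present, 0 = absent);
    returns the associative strength after each trial."""
    v, out = v0, []
    for r in us:
        v += lr * (r - v)   # delta rule: update proportional to prediction error
        out.append(v)
    return out

# acquisition (20 reinforced trials) followed by extinction (20 unreinforced)
strengths = rescorla_wagner([1] * 20 + [0] * 20)
```

Under this rule the response always extinguishes when reinforcement stops — precisely the behavior that, the abstract notes, fails to match evaluative conditioning and trauma-resistant responses, motivating the richer model.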
Bertels, Julie; San Anton, Estibaliz; Gebuis, Titia; Destrebecqz, Arnaud
Extracting the statistical regularities present in the environment is a central learning mechanism in infancy. For instance, infants are able to learn the associations between simultaneously or successively presented visual objects (Fiser & Aslin; Kirkham, Slemmer & Johnson). The present study extends these results by investigating whether infants can learn the association between a target location and the context in which it is presented. With this aim, we used a visual associative learning procedure inspired by the contextual cuing paradigm, with infants from 8 to 12 months of age. In two experiments, in which we varied the complexity of the stimuli, we first habituated infants to several scenes where the location of a target (a cartoon character) was consistently associated with a context, namely a specific configuration of geometrical shapes. Second, we examined whether infants learned the covariation between the target location and the context by measuring looking times at scenes that either respected or violated the association. In both experiments, results showed that infants learned the target-context associations, as they looked longer at the familiar scenes than at the novel ones. In particular, infants selected clusters of co-occurring contextual shapes and learned the covariation between the target location and this subset. These results support the existence of a powerful and versatile statistical learning mechanism that may influence the orientation of infants' visual attention toward areas of interest in their environment during early developmental stages. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=9Hm1unyLBn0. © 2016 John Wiley & Sons Ltd.
Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K
Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced mammography quality standards act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as
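The final step the abstract describes — estimating percentage density (PD) as the ratio of dense area to breast area from the probability map (PMD) — reduces to a few lines of numpy. This is a hedged sketch of that step only, not the authors' pipeline; the 0.5 threshold and all names are assumptions for illustration.

```python
import numpy as np

def percent_density(pmd, breast_mask, threshold=0.5):
    """Percentage density from a probability map of dense tissue.
    pmd:         array of per-pixel dense-tissue probabilities in [0, 1]
    breast_mask: boolean array marking the segmented breast region
    Pixels inside the breast with probability >= threshold count as dense."""
    dense = (pmd >= threshold) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# toy 2x2 "mammogram": two of four breast pixels are likely dense
pmd = np.array([[0.9, 0.1],
                [0.6, 0.2]])
mask = np.ones((2, 2), dtype=bool)
pd = percent_density(pmd, mask)   # -> 50.0
```

Restricting both numerator and denominator to the breast mask matters: background pixels would otherwise dilute the ratio.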
Brown, Julie A.; Beaser, Richard S.; Neighbours, James; Shuman, Jill
Ongoing continuing medical education is an essential component of life-long learning and can have a positive influence on patient outcomes. However, some evidence suggests that continuing medical education has not fulfilled its potential as a performance improvement (PI) tool, in part due to a paradigm of CME that has focused on the quantity of…
Sproule, J. Michael
Examines the inability of the propaganda paradigm to maintain its position as the standard framework for rhetorical inquiry in the sociointellectual atmosphere that surrounded World War II and the Cold War. Discusses the epistemological and ideological roots of the shift from propaganda to statistical/experimental communication research. (JD)
Lumpkin, Angela; Achen, Rebecca M.; Dodd, Regan K.
A paradigm shift from lecture-based courses to interactive classes punctuated with engaging, student-centered learning activities has begun to characterize the work of some teachers in higher education. Convinced through the literature of the values of using active learning strategies, we assessed through an action research project in five college…
Benitez, Viridiana L.; Yurovsky, Daniel; Smith, Linda B.
Three experiments investigated competition between word-object pairings in a cross-situational word-learning paradigm. Adults were presented with One-Word pairings, where a single word labeled a single object, and Two-Word pairings, where two words labeled a single object. In addition to measuring learning of these two pairing types, we measured competition between words that refer to the same object. When the word-object co-occurrences were presented intermixed in training (Experiment 1), we found evidence for direct competition between words that label the same referent. Separating the two words for an object in time eliminated any evidence for this competition (Experiment 2). Experiment 3 demonstrated that adding a linguistic cue to the second label for a referent led to different competition effects between adults who self-reported different language learning histories, suggesting both distinctiveness and language learning history affect competition. Finally, in all experiments, competition effects were unrelated to participants’ explicit judgments of learning, suggesting that competition reflects the operating characteristics of implicit learning processes. Together, these results demonstrate that the role of competition between overlapping associations in statistical word-referent learning depends on time, the distinctiveness of word-object pairings, and language learning history. PMID:27087742
de Diana, I.P.F.; van der Heiden, G.
Attention has been drawn to the concepts of Electronic Books and Electronic Study Books. Several publications have discussed some main ideas (paradigms) for both concepts. For the Electronic Study Book as a learning environment, it is essential to consider individual modes of learning, usually
Ennis, Catherine D.
Dynamical systems theory can increase our understanding of the constantly evolving learning process. Current research using experimental and interpretive paradigms focuses on describing the attractors and constraints stabilizing the educational process. Dynamical systems theory focuses attention on critical junctures in the learning process as…
Broekens, Douwe Joost
In this thesis we have studied the influence of emotion on learning. We have used computational modelling techniques to do so, more specifically, the reinforcement learning paradigm. Emotion is modelled as artificial affect, a measure that denotes the positiveness versus negativeness of a situation
The Statistics course (Statistika Pendidikan for other study programs) is a compulsory course at IAIN Walisongo. In the English Education program (Tadris Bahasa Inggris), teaching the Statistics course has so far faced several obstacles, including students' difficulties with conceptual understanding and computational skills, and a lack of time to practice. In conjunction with IAIN Walisongo's collaboration with DBE 2 USAID on distance learning, an online class was created for the statistics course that allows students to discuss, carry out statistical tests computationally, grasp statistical concepts through videos and reading materials, and interact with the lecturer when they encounter problems. The study then asked whether e-learning instruction affected students' learning outcomes. The results show no significant effect, although more students earned grades of A, B+, and B with e-learning than with conventional instruction.
Ardiel, Evan L.; Rankin, Catharine H.
This article reviews the literature on learning and memory in the soil-dwelling nematode "Caenorhabditis elegans." Paradigms include nonassociative learning, associative learning, and imprinting, as worms have been shown to habituate to mechanical and chemical stimuli, as well as learn the smells, tastes, temperatures, and oxygen levels that…
Helmer, Alexander; de Visser, C.C.; van Kampen, E.
Reinforcement learning is a paradigm for learning decision-making tasks from interaction with the environment. Function approximators solve a part of the curse of dimensionality when learning in high-dimensional state and/or action spaces. It can be a time-consuming process to learn a good policy in
Talisayon, Vivien Millan
This study is an empirical investigation of Ausubel's paradigm of meaningful learning, applied specifically to the learning of high school physics students. In the first phase of the study path analysis and multiple regression techniques were used to describe the Ausubelian learning variables: available relevant ideas in learner's cognitive…
We propose a paradigm for applying machine learning to the various databases that have emerged in the study of the string landscape. In particular, we establish neural networks as both classifiers and predictors and train them with a host of available data ranging from Calabi–Yau manifolds and vector bundles to quiver representations for gauge theories, using a novel framework that recasts geometrical and physical data as pixelated images. We find that even a relatively simple neural network can learn many significant quantities to astounding accuracy in a matter of minutes and can also predict hitherto unencountered results, rendering the paradigm a valuable tool in physics as well as pure mathematics.
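The core move — flattening pixelated data into feature vectors and training a classifier on them — can be shown with a deliberately tiny numpy-only "network" (a single logistic unit trained by gradient descent). This is a toy illustration of the pipeline shape only, on synthetic images rather than Calabi–Yau data; every name here is hypothetical.

```python
import numpy as np

def train_pixel_classifier(imgs, labels, lr=0.5, epochs=300, seed=0):
    """Logistic regression on flattened pixel images.
    imgs: (n, h, w) arrays with values in [0, 1]; labels: (n,) in {0, 1}."""
    rng = np.random.default_rng(seed)
    X = imgs.reshape(len(imgs), -1)               # recast image -> feature vector
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid activation
        g = p - labels                            # gradient of the log-loss
        w -= lr * X.T @ g / len(X)
        b -= lr * g.mean()
    return w, b

def predict(w, b, imgs):
    X = imgs.reshape(len(imgs), -1)
    return (X @ w + b > 0).astype(int)
```

Even this single unit separates classes that differ in a spatial pattern (e.g. a bright upper half), which is the intuition behind letting image-style networks read geometric data.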
Zieffler, Andrew; Garfield, Joan; Alt, Shirley; Dupuis, Danielle; Holleque, Kristine; Chang, Beng
Since the first studies on the teaching and learning of statistics appeared in the research literature, the scholarship in this area has grown dramatically. Given the diversity of disciplines, methodology, and orientation of the studies that may be classified as "statistics education research," summarizing and critiquing this body of work for…
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
Kalaian, Sema A.; Kasim, Rafa M.
This meta-analytic study focused on the quantitative integration and synthesis of the accumulated pedagogical research in undergraduate statistics education literature. These accumulated research studies compared the academic achievement of students who had been instructed using one of the various forms of small-group learning methods to those who…
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension
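The sample-path statistics invoked above rest on the classical Freidlin-Wentzell estimate; in a standard form (not necessarily the thesis's exact notation), for a diffusion dX_t = b(X_t) dt + \sqrt{\varepsilon}\, \sigma(X_t) dW_t the probability of tracking a smooth nominal path \phi is governed by the action (rate) functional:

```latex
I_T(\phi) \;=\; \frac{1}{2}\int_0^T
  \bigl(\dot\phi_t - b(\phi_t)\bigr)^{\!\top}
  \bigl(\sigma\sigma^{\top}\bigr)^{-1}(\phi_t)\,
  \bigl(\dot\phi_t - b(\phi_t)\bigr)\,dt,
\qquad
\mathbb{P}\bigl(X \approx \phi\bigr) \;\asymp\; \exp\!\bigl(-I_T(\phi)/\varepsilon\bigr).
```

Minimizing deviations of sample paths from a nominal path thus amounts to shaping this asymptotic probability, which is what turns the large-deviations rate function into a control objective in the framework described.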
Neville, Mary Grace
Experiential learning theory, conversational learning, and seminar practices combine to shape an educational experience that is grounded in principles of appreciative inquiry. The seminar, taught to undergraduate business majors, seeks to encourage students to explore their underlying assumptions about business in society. Because postindustrial…
1. Sensitization and classical odor conditioning of the proboscis extension reflex were functionally analyzed by repeated intracellular recordings from a single identified neuron (PE1-neuron) in the central bee brain. This neuron belongs to the class of "extrinsic cells" arising from the pedunculus of the mushroom bodies and has extensive arborizations in the median and lateral protocerebrum. The recordings were performed on isolated bee heads. 2. Two different series of physiological experiments were carried out with the use of a similar temporal succession of stimuli as in previous behavioral experiments. In the first series, one group of animals was used for a single conditioning trial [conditioned stimulus (CS), carnation; unconditioned stimulus (US), sucrose solution to the antennae and proboscis), a second group was used for sensitization (sensitizing stimulus, sucrose solution to the antennae and/or proboscis), and the third group served as control (no sucrose stimulation). In the second series, a differential conditioning paradigm (paired odor CS+, carnation; unpaired odor CS-, orange blossom) was applied to test the associative nature of the conditioning effect. 3. The PE1-neuron showed a characteristic burstlike odor response before the training procedures. The treatments resulted in different spike-frequency modulations of this response, which were specific for the nonassociative and associative stimulus paradigms applied. During differential conditioning, there are dynamic up and down modulations of spike frequencies and of the DC potentials underlying the responses to the CS+. Overall, only transient changes in the minute range were observed. 4. The results of the sensitization procedures suggest two qualitatively different US pathways. The comparison between sensitization and one-trial conditioning shows differential effects of nonassociative and associative stimulus paradigms on the response behavior of the PE1-neuron. The results of the differential
Madison, Guy; Ullén, Fredrik
Human behavior is guided by evolutionarily shaped brain mechanisms that make statistical predictions based on limited information. Such mechanisms are important for facilitating interpersonal relationships, avoiding dangers, and seizing opportunities in social interaction. We thus suggest that it is essential for analyses of prejudice and prejudice reduction to take the predictive accuracy and adaptivity of the studied prejudices into account.
Weaver, Kathryn; Olson, Joanne K
The aims of this paper are to add clarity to the discussion about paradigms for nursing research and to consider integrative strategies for the development of nursing knowledge. Paradigms are sets of beliefs and practices, shared by communities of researchers, which regulate inquiry within disciplines. The various paradigms are characterized by ontological, epistemological and methodological differences in their approaches to conceptualizing and conducting research, and in their contribution towards disciplinary knowledge construction. Researchers may consider these differences so vast that one paradigm is incommensurable with another. Alternatively, researchers may ignore these differences and either unknowingly combine paradigms inappropriately or neglect to conduct needed research. To accomplish the task of developing nursing knowledge for use in practice, there is a need for a critical, integrated understanding of the paradigms used for nursing inquiry. We describe the evolution and influence of positivist, postpositivist, interpretive and critical theory research paradigms. Using integrative review, we compare and contrast the paradigms in terms of their philosophical underpinnings and scientific contribution. A pragmatic approach to theory development through synthesis of cumulative knowledge relevant to nursing practice is suggested. This requires that inquiry start with assessment of existing knowledge from disparate studies to identify key substantive content and gaps. Knowledge development in under-researched areas could be accomplished through integrative strategies that preserve theoretical integrity and strengthen research approaches associated with various philosophical perspectives. These strategies may include parallel studies within the same substantive domain using different paradigms; theoretical triangulation to combine findings from paradigmatically diverse studies; integrative reviews; and mixed method studies. Nurse scholars are urged to
Michelle T Tong
Memories are dynamic physical phenomena with psychometric forms as well as characteristic timescales. Most of our understanding of the cellular mechanisms underlying the neurophysiology of memory, however, derives from one-trial learning paradigms that, while powerful, do not fully embody the gradual, representational, and statistical aspects of cumulative learning. The early olfactory system -- particularly the olfactory bulb -- comprises a reasonably well-understood and experimentally accessible neuronal network with intrinsic plasticity that underlies both one-trial (adult aversive, neonatal) and cumulative (adult appetitive) odor learning. These olfactory circuits employ many of the same molecular and structural mechanisms of memory as, for example, hippocampal circuits following inhibitory avoidance conditioning, but the temporal sequences of post-conditioning molecular events are likely to differ owing to the need to incorporate new information from ongoing learning events into the evolving memory trace. Moreover, the shapes of acquired odor representations, and their gradual transformation over the course of cumulative learning, also can be directly measured, adding an additional representational dimension to the traditional metrics of memory strength and persistence. In this review, we describe some established molecular and structural mechanisms of memory with a focus on the timecourses of post-conditioning molecular processes. We describe the properties of odor learning intrinsic to the olfactory bulb and review the utility of the olfactory system of adult rodents as a memory system in which to study the cellular mechanisms of cumulative learning.
Mukhamedov, Alfred M.
In this paper a dynamic paradigm of turbulence is proposed. The basic idea consists in a novel definition of chaotic structure, given with the help of a Pfaff system of PDEs associated with the turbulent dynamics. A methodological analysis of the new and the former paradigms is presented.
Gong, Xianmin; Xiao, Hongrui; Wang, Dahua
False recognition results from the interplay of multiple cognitive processes, including verbatim memory, gist memory, phantom recollection, and response bias. In the current study, we modified the simplified Conjoint Recognition (CR) paradigm to investigate the way in which the valence of emotional stimuli affects the cognitive process and behavioral outcome of false recognition. In Study 1, we examined the applicability of the modification to the simplified CR paradigm and model. Twenty-six undergraduate students (13 females, aged 21.00 ± 2.30 years) learned and recognized both the large and small categories of photo objects. The applicability of the paradigm and model was confirmed by a fair goodness-of-fit of the model to the observational data and by their competence in detecting the memory differences between the large- and small-category conditions. In Study 2, we recruited another sample of 29 undergraduate students (14 females, aged 22.60 ± 2.74 years) to learn and recognize the categories of photo objects that were emotionally provocative. The results showed that negative valence increased false recognition, particularly the rate of false "remember" responses, by facilitating phantom recollection; positive valence did not significantly influence false recognition, though it enhanced gist processing.
Harper, R; Bauer, R; Kannarkat, J
This article discusses the theory and operations of Gestalt Therapy from the viewpoint of learning theory. General comparative issues are elaborated, as well as the concepts of introjection, retroflection, confluence, and projection. Principles and techniques of Gestalt Therapy are discussed in terms of the learning theory paradigm. Practical implications of the various Gestalt techniques are presented.
Local learning processes are a vital part of any dynamic assimilation of transferred technology. The paper raises the question of the interaction between the training paradigms that transnational corporations introduce in their subsidiaries in Malaysia and the specific basis for learning of Malaysian labour. Experiences from Malaysian industry indicate that local learning processes are shaped, among other things, by the concept of knowledge in a particular training programme, labour market structures, and learning cultures.
Elliott, William; Choi, Eunhee; Friedline, Terri
This article presents results from an evaluation of an online statistics lab as part of a foundations research methods course for master's-level social work students. The article discusses factors that contribute to an environment in social work that fosters attitudes of reluctance toward learning and teaching statistics in research methods…
Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi
Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans routinely integrate, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is proposed as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to localize the sound source in the video effectively, in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
Lifelong learning, life-wide learning, continuing education, vocational education, professional education of adults, formal education, informal education, permanent education, etc. – the author of the present article seeks the relationships between these widely used terms and traces the history of their introduction into modern educational theory and practice.
The WIMP paradigm is the glue that joins together much of the high energy and cosmic frontiers. It postulates that most of the matter in the Universe is made of weakly-interacting massive particles, with implications for a broad range of experiments and observations. I will review the WIMP paradigm's underlying motivations, its current status in view of rapid experimental progress on several fronts, and recent theoretical variations on the WIMP paradigm theme.
RDF Data Cube (QB) has boosted the publication of Linked Statistical Data (LSD) on the Web, making them linkable to other related datasets and concepts following the Linked Data paradigm. In this demo we present LSD Dimensions, a web based application that monitors the usage of dimensions and codes
Jamali, D.; Khoury, G.; Sahyoun, H.
Purpose: To track changes in management paradigms from the bureaucratic to the post-bureaucratic to the learning organization model, highlighting core differentiating features of each paradigm as well as necessary ingredients for successful evolution. Design/methodology/approach: The article takes the form of a literature review and critical…
Houghton, Catherine; Hunter, Andrew; Meskell, Pauline
To explore the use of paradigms as ontological and philosophical guides for conducting PhD research. A paradigm can help to bridge the aims of a study and the methods to achieve them. However, choosing a paradigm can be challenging for doctoral researchers: there can be ambiguity about which paradigm is suitable for a particular research question and there is a lack of guidance on how to shape the research process for a chosen paradigm. The authors discuss three paradigms used in PhD nursing research: post-positivism, interpretivism and pragmatism. They compare each paradigm in relation to its ontology, epistemology and methodology, and present three examples of PhD nursing research studies to illustrate how research can be conducted using these paradigms in the context of the research aims and methods. The commonalities and differences between the paradigms and their uses are highlighted. Creativity and flexibility are important when deciding on a paradigm. However, consistency and transparency are also needed to ensure the quality and rigour necessary for conducting nursing research. When choosing a suitable paradigm, the researcher should ensure that the ontology, epistemology and methodology of the paradigm are manifest in the methods and research strategies employed.
Green, Jennifer L.; Blankenship, Erin E.
We developed an introductory statistics course for pre-service elementary teachers. In this paper, we describe the goals and structure of the course, as well as the assessments we implemented. Additionally, we use example course work to demonstrate pre-service teachers' progress both in learning statistics and as novice teachers. Overall, the…
We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates ...
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning under different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Since a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined.
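As a hedged, self-contained sketch of the kind of Bayesian decision-making approach the abstract describes (the counts below are purely hypothetical and are not the Aspirin trial's actual data), one can compare two trial arms via Beta-Binomial posteriors and report a decision-relevant probability rather than a hypothesis test:

```python
import random

def beta_posterior_mean(events, trials, a=1.0, b=1.0):
    """Mean of the Beta(a + events, b + trials - events) posterior
    obtained by updating a Beta(a, b) prior with binomial data."""
    return (a + events) / (a + b + trials)

# Hypothetical two-arm counts (illustrative only): events per 1000 participants.
p_treat = beta_posterior_mean(10, 1000)    # treatment arm: 10/1000
p_control = beta_posterior_mean(18, 1000)  # control arm:   18/1000

# Decision-relevant quantity: the posterior probability that the treatment
# event rate is lower than the control rate, estimated by Monte Carlo
# sampling from the two Beta posteriors.
random.seed(0)
draws = 20000
wins = sum(random.betavariate(1 + 10, 1 + 990) <
           random.betavariate(1 + 18, 1 + 982)
           for _ in range(draws))
print(p_treat < p_control)  # posterior point estimates favor the treatment arm
print(wins / draws)         # P(treatment rate < control rate | data)
```

Unlike a p-value, the final quantity answers the question a decision-maker actually asks, and the prior parameters `a`, `b` make the role of prior belief explicit.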
Chau, Lily S; Galvez, Roberto
It is widely accepted that the amygdala plays a critical role in acquisition and consolidation of fear-related memories. Some of the more widely employed behavioral paradigms that have assisted in solidifying the amygdala's role in fear-related memories are associative learning paradigms. With most associative learning tasks, a neutral conditioned stimulus (CS) is paired with a salient unconditioned stimulus (US) that elicits an unconditioned response (UR). After multiple CS-US pairings, the subject learns that the CS predicts the onset or delivery of the US, and thus elicits a learned conditioned response (CR). Most fear-related associative paradigms have suggested that an aspect of the fear association is stored in the amygdala; however, some fear-motivated associative paradigms suggest that the amygdala is not a site of storage, but rather facilitates consolidation in other brain regions. Based upon various learning theories, one of the most likely sites for storage of long-term memories is the neocortex. In support of these theories, findings from our laboratory, and others, have demonstrated that trace-conditioning, an associative paradigm where there is a separation in time between the CS and US, induces learning-specific neocortical plasticity. The following review will discuss the amygdala's involvement, either as a site of storage or facilitating storage in other brain regions such as the neocortex, in fear- and non-fear-motivated associative paradigms. In this review, we will discuss recent findings suggesting a broader role for the amygdala in increasing the saliency of behaviorally relevant information, thus facilitating acquisition for all forms of memory, both fear- and non-fear-related. This proposed promiscuous role of the amygdala in facilitating acquisition for all memories further suggests a potential role of the amygdala in general learning disabilities.
Engstroem, Maria; Landtblom, Anne-Marie; Ragnehed, Mattias; Lundberg, Peter; Karlsson, Marie; Crone, Marie; Antepohl, Wolfram
Background: In fMRI examinations, it is very important to select appropriate paradigms assessing the brain function of interest. In addition, the patients' ability to perform the required cognitive tasks during fMRI must be taken into account. Purpose: To evaluate two language paradigms, word generation and sentence reading for their usefulness in examinations of aphasic patients and to make suggestions for improvements of clinical fMRI. Material and Methods: Five patients with aphasia after stroke or trauma sequelae were examined by fMRI. The patients' language ability was screened by neurolinguistic tests and elementary pre-fMRI language tests. Results: The sentence-reading paradigm succeeded to elicit adequate language-related activation in perilesional areas whereas the word generation paradigm failed. These findings were consistent with results on the behavioral tests in that all patients showed very poor performance in phonemic fluency, but scored well above mean at a reading comprehension task. Conclusion: The sentence-reading paradigm is appropriate to assess language function in this patient group, while the word-generation paradigm seems to be inadequate. In addition, it is crucial to use elementary pre-fMRI language tests to guide the fMRI paradigm decision.
Budé, Luc; van de Wiel, Margaretha W J; Imbos, Tjaart; Berger, Martijn P F
Education is aimed at students reaching conceptual understanding of the subject matter, because this leads to better performance and application of knowledge. Conceptual understanding depends on coherent and error-free knowledge structures. The construction of such knowledge structures can only be accomplished through active learning and when new knowledge can be integrated into prior knowledge. The intervention in this study was directed at both the activation of students as well as the integration of knowledge. Undergraduate university students from an introductory statistics course, in an authentic problem-based learning (PBL) environment, were randomly assigned to conditions and measurement time points. In the PBL tutorial meetings, half of the tutors guided the discussions of the students in a traditional way. The other half guided the discussions more actively by asking directive and activating questions. To gauge conceptual understanding, the students answered open-ended questions asking them to explain and relate important statistical concepts. Results of the quantitative analysis show that providing directive tutor guidance improved understanding. Qualitative data of students' misconceptions seem to support this finding. Long-term retention of the subject matter seemed to be inadequate.
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
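A minimal sketch (not from the article) of the core claim, assuming log-normal data whose standard deviation scales in rough proportion to the mean: on the raw scale the two groups' spreads differ markedly, while after a log transformation they become comparable.

```python
import math
import random

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Log-normal samples: the standard deviation grows with the mean,
# violating the homogeneity-of-variance assumption on the raw scale.
random.seed(1)
low = [math.exp(random.gauss(1.0, 0.4)) for _ in range(500)]
high = [math.exp(random.gauss(3.0, 0.4)) for _ in range(500)]

ratio_raw = sd(high) / sd(low)                # far from 1 on the raw scale
ratio_log = (sd([math.log(x) for x in high]) /
             sd([math.log(x) for x in low]))  # near 1 after log transform
print(ratio_raw, ratio_log)
```

Checking such a ratio before and after transforming is one informal counterpart of the Box-Cox and bootstrap diagnostics the article discusses.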
Scalice, D.; Davis, H. B.; Leach, D.; Chambers, N.
The Next Generation Science Standards (NGSS) introduce a Framework for teaching and learning with three interconnected "dimensions:" Disciplinary Core Ideas (DCI's), Cross-cutting Concepts (CCC's), and Science and Engineering Practices (SEP's). This "3D" Framework outlines progressions of learning from K-12 based on the DCI's, detailing which parts of a concept should be taught at each grade band. We used these discipline-based progressions to synthesize interdisciplinary progressions for core concepts in astrobiology, such as the origins of life, what makes a world habitable, biosignatures, and searching for life on other worlds. The final product is an organizing tool for lesson plans, learning media, and other educational materials in astrobiology, as well as a fundamental resource in astrobiology education that serves both educators and scientists as they plan and carry out their programs for learners.
Huang, Lan; Du, Youfu; Chen, Gongyang
Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
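A hedged toy sketch of the generic two-step idea: a baseline segmenter proposes an initial split, then a second step transforms it using domain knowledge. Here both steps are illustrative stand-ins (greedy longest-match and a rule-based merge over a toy alphabet); the paper's actual second step is a learned CRF model, not a lexicon rule.

```python
def longest_match(text, lexicon):
    """Baseline step: greedy longest-match segmentation against a general
    lexicon; unknown characters fall back to single-character tokens."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

def refine(tokens, domain_lexicon):
    """Second step: merge adjacent baseline tokens that together form a
    known domain term (rule-based stand-in for the learned transformation)."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] + tokens[i + 1] in domain_lexicon:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

general = {"ab", "cd"}   # toy general vocabulary
domain = {"abcd"}        # toy domain term spanning two baseline tokens
print(refine(longest_match("abcdef", general), domain))  # ['abcd', 'e', 'f']
```

The point of the framework is exactly this separation: the baseline handles general language, and only the transformation step needs domain-specific training data.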
Processes like the globalization consistency and learning about society are screened by diffuse concepts such as those taking the last steps of the industrial civilization and hierarchically ordered world through hegemony. This is why the meaning of globalization is given by deviant trends, like globalism, and the knowledge society is taken for the tools promoted by itself, such as the internet. This does not imply only approximations of meaning but rather the vanity of change, preserving the status quo represented by the pre-global world or the adversity principle. Historicism of paradigm cannot be avoided. Evolvement towards something else, announced by globalization is implacable, and the new ordinating principle, the one of competition, opens the opportunity horizon to global world.
Zvi H. Perry
Background. We changed the biostatistics curriculum for our medical students and created a course entitled “Multivariate analysis of statistical data, using the SPSS package.” Purposes. The aim of this course was to develop students’ skills in computerized data analysis, as well as to enhance their ability to read and interpret statistical data analysis in the literature. Methods. In the current study we have shown, using course-specific evaluation questionnaires, that a computer-based course for biostatistics and advanced data analysis is feasible and efficient. Results. Its efficacy is both subjective (our subjects felt better prepared to do their theses, as well as to read articles with advanced statistical data analysis) and objective (their knowledge of how and when to apply statistical procedures seemed to improve). Conclusions. We showed that a formal evaluative process for such a course is possible and that it enhances the learning experience for both the students and their teachers.
Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.
Effects of stimulus movement on learning, transfer, matching, and short-term memory performance were assessed with 2 monkeys using a video-task paradigm in which the animals responded to computer-generated images by manipulating a joystick. Performance on tests of learning set, transfer index, matching to sample, and delayed matching to sample in the video-task paradigm was comparable to that obtained in previous investigations using the Wisconsin General Testing Apparatus. Additionally, learning, transfer, and matching were reliably and significantly better when the stimuli or discriminanda moved than when the stimuli were stationary. External manipulations such as stimulus movement may increase attention to the demands of a task, which in turn should increase the efficiency of learning. These findings have implications for the investigation of learning in other populations, as well as for the application of the video-task paradigm to comparative study.
Berger, Uwe; Stöbel-Richter, Yve
Statistical methods occupy a prominent place in psychologists' educational programs. Because these contents are known to be difficult to understand and hard to learn, students fear them. Those who do not aspire to a research career at a university quickly forget the drilled content. Furthermore, because at first glance it does not apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only as a means of commanding respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics treats numbers, while psychotherapy treats subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. To this end, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presumes a high degree of statistical education. To ignore statistics means to ignore research and, ultimately, to expose one's own professional work to arbitrariness.