WorldWideScience

Sample records for big picture learning

  1. Computational Literacy and "The Big Picture" Concerning Computers in Mathematics Education

    Science.gov (United States)

    diSessa, Andrea A.

    2018-01-01

    This article develops some ideas concerning the "big picture" of how using computers might fundamentally change learning, with an emphasis on mathematics (and, more generally, STEM education). I develop the big-picture model of "computation as a new literacy" in some detail and with concrete examples of sixth grade students…

  2. Case-based learning facilitates critical thinking in undergraduate nutrition education: students describe the big picture.

    Science.gov (United States)

    Harman, Tara; Bertrand, Brenda; Greer, Annette; Pettus, Arianna; Jennings, Jill; Wall-Bassett, Elizabeth; Babatunde, Oyinlola Toyin

    2015-03-01

    The vision of dietetics professions is based on interdependent education, credentialing, and practice. Case-based learning is a method of problem-based learning that is designed to heighten higher-order thinking. Case-based learning can assist students to connect education and specialized practice while developing professional skills for entry-level practice in nutrition and dietetics. This study examined student perspectives of their learning after immersion into case-based learning in nutrition courses. The theoretical frameworks of phenomenology and Bloom's Taxonomy of Educational Objectives triangulated the design of this qualitative study. Data were drawn from 426 written responses and three focus group discussions among 85 students from three upper-level undergraduate nutrition courses. Coding served to deconstruct the essence of respondent meaning given to case-based learning as a learning method. The analysis of the coding was the constructive stage that led to configuration of themes and theoretical practice pathways about student learning. Four leading themes emerged. Story or Scenario represents the ways that students described case-based learning, changes in student thought processes to accommodate case-based learning are illustrated in Method of Learning, higher cognitive learning that was achieved from case-based learning is represented in Problem Solving, and Future Practice details how students explained perceived professional competency gains from case-based learning. The skills that students acquired are consistent with those identified as essential to professional practice. In addition, the common concept of Big Picture was iterated throughout the themes and demonstrated that case-based learning prepares students for multifaceted problems that they are likely to encounter in professional practice. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  3. Java EE 7 the big picture

    CERN Document Server

    Coward, Danny

    2015-01-01

    Java EE 7: The Big Picture uniquely explores the entire Java EE 7 platform in an all-encompassing style while examining each tier of the platform in enough detail so that you can select the right technologies for specific project needs. In this authoritative guide, Java expert Danny Coward walks you through the code, applications, and frameworks that power the platform. Take full advantage of the robust capabilities of Java EE 7, increase your productivity, and meet enterprise demands with help from this Oracle Press resource.

  4. Small wormholes change our picture of the big bang

    CERN Multimedia

    1990-01-01

    Matt Visser has studied tiny wormholes, which may be produced on a subatomic scale by quantum fluctuations in the energy of the vacuum. He believes these quantum wormholes could change our picture of the origin of the Universe in the big bang (1/2 p)

  5. Biophotonics: the big picture

    Science.gov (United States)

    Marcu, Laura; Boppart, Stephen A.; Hutchinson, Mark R.; Popp, Jürgen; Wilson, Brian C.

    2018-02-01

    The 5th International Conference on Biophotonics (ICOB) held April 30 to May 1, 2017, in Fremantle, Western Australia, brought together opinion leaders to discuss future directions for the field and opportunities to consider. The first session of the conference, "How to Set a Big Picture Biophotonics Agenda," was focused on setting the stage for developing a vision and strategies for translation and impact on society of biophotonic technologies. The invited speakers, panelists, and attendees engaged in discussions that focused on opportunities and promising applications for biophotonic techniques, challenges when working at the confluence of the physical and biological sciences, driving factors for advances of biophotonic technologies, and educational opportunities. We share a summary of the presentations and discussions. Three main themes from the conference are presented in this position paper that capture the current status, opportunities, challenges, and future directions of biophotonics research and key areas of applications: (1) biophotonics at the nano- to microscale level; (2) biophotonics at meso- to macroscale level; and (3) biophotonics and the clinical translation conundrum.

  6. Stepping back to see the big picture: when obstacles elicit global processing

    NARCIS (Netherlands)

    Marguc, J.; Förster, J.; van Kleef, G.A.

    2011-01-01

    Can obstacles prompt people to look at the "big picture" and open up their minds? Do the cognitive effects of obstacles extend beyond the tasks with which they interfere? These questions were addressed in 6 studies involving both physical and nonphysical obstacles and different measures of global

  7. "Big Science: the LHC in Pictures" in the Globe

    CERN Multimedia

    2008-01-01

    An exhibition of spectacular photographs of the LHC and its experiments is about to open in the Globe. The LHC and its four experiments are not only huge in size but also uniquely beautiful, as the exhibition "Big Science: the LHC in Pictures" in the Globe of Science and Innovation will show. The exhibition features around thirty spectacular photographs measuring 4.5 metres high and 2.5 metres wide. These giant pictures reflecting the immense scale of the LHC and the mysteries of the Universe it is designed to uncover fill the Globe with shape and colour. The exhibition, which will open on 4 March, is divided into six different themes: CERN, the LHC and the four experiments ATLAS, LHCb, CMS and ALICE. Facts about all these subjects will be available at information points and in an explanatory booklet accompanying the exhibition (which visitors will be able to buy if they wish to take it home with them). Globe of Science and Innovatio...

  8. Machine learning for Big Data analytics in plants.

    Science.gov (United States)

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Big data technologies in e-learning

    Directory of Open Access Journals (Sweden)

    Gyulara A. Mamedova

    2017-01-01

    Recently, e-learning around the world has been developing rapidly, and the main problem is to provide students with quality educational information on time. This task cannot be solved without analyzing the large flow of information entering the information environment of e-learning from participants in the educational process – students, lecturers, administration, etc. In this environment there are a large number of different types of data, both structured and unstructured, whose processing is difficult to implement with traditional statistical methods. The aim of the study is to show that the development and implementation of successful e-learning systems requires new technologies for storing and processing large data streams. Storing big data requires a large amount of disk space. It is shown that an efficient solution to this problem is clustered NAS (Network Attached Storage) technology, which allows the information of educational institutions to be stored on NAS servers and shared over the Internet. To process and personalize the Big Data in the e-learning environment, it is proposed to use technologies such as MapReduce, Hadoop and NoSQL. The article gives examples of the use of these technologies in the cloud environment. In e-learning, these technologies make it possible to achieve flexibility, scalability, availability, quality of service, security, confidentiality and ease of use of educational information. Another important problem of e-learning is the identification of new, sometimes hidden, interconnections in Big Data, i.e. new knowledge (data mining), which can be used to improve the educational process and its management. To classify electronic educational resources, identify groups of students with similar psychological, behavioral and intellectual characteristics, and develop individualized educational programs, it is proposed to use Big Data analysis methods. The article shows that…
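
    For illustration only (not part of the record): the abstract refers to MapReduce-style processing of e-learning activity data on Hadoop. The sketch below shows the same map/reduce idea in plain Python on a tiny invented activity log; the names activity_log, map_phase and reduce_phase are hypothetical, and a production system would run the equivalent job on Hadoop or a similar framework.

```python
# Minimal MapReduce-style sketch: count activity records per student.
# All data and names here are illustrative, not from the cited article.
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

activity_log = [
    {"student": "s01", "action": "view_lecture"},
    {"student": "s02", "action": "submit_quiz"},
    {"student": "s01", "action": "post_forum"},
]

def map_phase(records: Iterable[dict]) -> Iterator[Tuple[str, int]]:
    # Emit (student, 1) for every activity record, as a Hadoop mapper would.
    for record in records:
        yield record["student"], 1

def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> dict:
    # Sum the counts per key, as a reducer would.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

print(reduce_phase(map_phase(activity_log)))  # {'s01': 2, 's02': 1}
```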

  10. Learning from picture books: Infants’ use of naming information

    Directory of Open Access Journals (Sweden)

    Melanie Khu

    2014-02-01

    The present study investigated whether naming would facilitate infants’ transfer of information from picture books to the real world. Eighteen- and 21-month-olds learned a novel label for a novel object depicted in a picture book. Infants then saw a second picture book in which an adult demonstrated how to elicit the object’s nonobvious property. Accompanying narration described the pictures using the object’s newly learnt label. Infants were subsequently tested with the real-world object depicted in the book, as well as a different-colour exemplar. Infants’ performance on the test trials was compared with that of infants in a no label condition. When presented with the exact object depicted in the picture book, 21-month-olds were significantly more likely to elicit the object’s nonobvious property than were 18-month-olds. Learning the object’s label before learning about the object’s hidden property did not improve 18-month-olds’ performance. At 21-months, the number of infants in the label condition who attempted to elicit the real-world object’s nonobvious property was greater than would be predicted by chance, but the number of infants in the no label condition was not. Neither age group nor label condition predicted test performance for the different-colour exemplar. The findings are discussed in relation to infants’ learning and transfer from picture books.

  11. Learning from picture books: Infants' use of naming information.

    Science.gov (United States)

    Khu, Melanie; Graham, Susan A; Ganea, Patricia A

    2014-01-01

    The present study investigated whether naming would facilitate infants' transfer of information from picture books to the real world. Eighteen- and 21-month-olds learned a novel label for a novel object depicted in a picture book. Infants then saw a second picture book in which an adult demonstrated how to elicit the object's non-obvious property. Accompanying narration described the pictures using the object's newly learnt label. Infants were subsequently tested with the real-world object depicted in the book, as well as a different-color exemplar. Infants' performance on the test trials was compared with that of infants in a no label condition. When presented with the exact object depicted in the picture book, 21-month-olds were significantly more likely to attempt to elicit the object's non-obvious property than were 18-month-olds. Learning the object's label before learning about the object's hidden property did not improve 18-month-olds' performance. At 21-months, the number of infants in the label condition who attempted to elicit the real-world object's non-obvious property was greater than would be predicted by chance, but the number of infants in the no label condition was not. Neither age group nor label condition predicted test performance for the different-color exemplar. The findings are discussed in relation to infants' learning and transfer from picture books.

  12. MLBCD: a machine learning tool for big clinical data.

    Science.gov (United States)

    Luo, Gang

    2015-01-01

    Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
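
    For illustration only (not MLBCD's actual method, which the record does not specify): the abstract identifies hyper-parameter selection as one of the barriers the tool aims to remove. The hedged sketch below shows automated hyper-parameter selection with scikit-learn's GridSearchCV on synthetic data; the data set and parameter grid are assumptions made for the example.

```python
# Illustration of automated hyper-parameter selection (not MLBCD itself).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a clinical data set already in relational-table form.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Candidate hyper-parameter values; choosing these by hand is the difficulty
# the abstract refers to.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```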

  13. Learning from picture books: Infants’ use of naming information

    Science.gov (United States)

    Khu, Melanie; Graham, Susan A.; Ganea, Patricia A.

    2014-01-01

    The present study investigated whether naming would facilitate infants’ transfer of information from picture books to the real world. Eighteen- and 21-month-olds learned a novel label for a novel object depicted in a picture book. Infants then saw a second picture book in which an adult demonstrated how to elicit the object’s non-obvious property. Accompanying narration described the pictures using the object’s newly learnt label. Infants were subsequently tested with the real-world object depicted in the book, as well as a different-color exemplar. Infants’ performance on the test trials was compared with that of infants in a no label condition. When presented with the exact object depicted in the picture book, 21-month-olds were significantly more likely to attempt to elicit the object’s non-obvious property than were 18-month-olds. Learning the object’s label before learning about the object’s hidden property did not improve 18-month-olds’ performance. At 21-months, the number of infants in the label condition who attempted to elicit the real-world object’s non-obvious property was greater than would be predicted by chance, but the number of infants in the no label condition was not. Neither age group nor label condition predicted test performance for the different-color exemplar. The findings are discussed in relation to infants’ learning and transfer from picture books. PMID:24611058

  14. Taking a lot of Pictures of Real Things and Making them into a Single Picture you can Move on a Computer

    Science.gov (United States)

    Linneman, C.; Hults, C.

    2017-12-01

    This summer I spent my time in the largest state of all the states, with the people who take care of the most important parks, owned by all of us. My job was to take a lot of pictures of real things, small and large, and to make them into one piece on a computer, into pictures that can be moved and turned and can be easily shared across the world at any time. My job had three different classes: very small, pretty big, and very big. For the small things: Using a table that turns, I took many still pictures of old animals turned into rocks as well as things thrown away by people who are now dead. The pieces of rock and old things are important and exciting, but they can break quite easily, so only a few people are allowed to touch them. With the pictures you can move, many more people can learn about, "touch", and see them, but they use a computer instead of their hands. For a pretty big block of ice moving down a long area of land, I took many pictures of the end of it, while at the same time knowing just where I was on the face of the world. Using a computer, I again put all the pictures together into one picture that could be turned and moved. One person with a computer could look at any part of the piece of ice without having to actually visit it. Finally, for the very big things, I was part of a team that would fly slowly over the areas we were interested in, taking pictures about every half of a second. After taking tens of hundreds of pictures, the computer joined all the pictures together into a single picture that showed each and every little up and down of the land that we had flown over, getting very few wrong. This way of making pictures you can move doesn't take as much money as other means, and it can be used on things of very different areas, from something as small as a finger to something as large as a huge field of ice moving slowly over time. The people who care for the parks that we all own don't have as much money as some, and in the biggest state…

  15. A Model for Learning Over Time: The Big Picture

    Science.gov (United States)

    Amato, Herbert K.; Konin, Jeff G.; Brader, Holly

    2002-01-01

    Objective: To present a method of describing the concept of “learning over time” with respect to its implementation into an athletic training education program curriculum. Background: The formal process of learning over time has recently been introduced as a required way for athletic training educational competencies and clinical proficiencies to be delivered and mastered. Learning over time incorporates the documented cognitive, psychomotor, and affective skills associated with the acquisition, progression, and reflection of information. This method of academic preparation represents a move away from a quantitative-based learning module toward a proficiency-based mastery of learning. Little research or documentation can be found demonstrating either the specificity of this concept or suggestions for its application. Description: We present a model for learning over time that encompasses multiple indicators for assessment in a successive format. Based on a continuum approach, cognitive, psychomotor, and affective characteristics are assessed at different levels in classroom and clinical environments. Clinical proficiencies are a common set of entry-level skills that need to be integrated into the athletic training educational domains. Objective documentation is presented, including the skill breakdown of a task and a matrix to identify a timeline of competency and proficiency delivery. Clinical Advantages: The advantages of learning over time pertain to the integration of cognitive knowledge into clinical skill acquisition. Given the fact that learning over time has been implemented as a required concept for athletic training education programs, this model may serve to assist those program faculty who have not yet developed, or are in the process of developing, a method of administering this approach to learning. PMID:12937551

  16. Less is More: How manipulative features affect children's learning from picture books.

    Science.gov (United States)

    Tare, Medha; Chiong, Cynthia; Ganea, Patricia; Deloache, Judy

    2010-09-01

    Picture books are ubiquitous in young children's lives and are assumed to support children's acquisition of information about the world. Given their importance, relatively little research has directly examined children's learning from picture books. We report two studies examining children's acquisition of labels and facts from picture books that vary on two dimensions: iconicity of the pictures and presence of manipulative features (or "pop-ups"). In Study 1, 20-month-old children generalized novel labels less well when taught from a book with manipulative features than from standard picture books without such elements. In Study 2, 30- and 36-month-old children learned fewer facts when taught from a manipulative picture book with drawings than from a standard picture book with realistic images and no manipulative features. The results of the two studies indicate that children's learning from picture books is facilitated by realistic illustrations, but impeded by manipulative features.

  17. BIG DATA AND E-LEARNING: THE IMPACT ON THE FUTURE OF LEARNING INDUSTRY

    Directory of Open Access Journals (Sweden)

    Valentin PAU

    2015-11-01

    Nowadays, one of the most interesting aspects of e-Learning is that it is continuously evolving, and big data architecture is an important component on which e-Learning communities have increasingly focused. In this paper we analyse the technological benefits of the big data concept and its impact on the future of e-Learning, and we also discuss the critical aspects regarding the integrity of the data.

  18. MACHINE LEARNING TECHNIQUES USED IN BIG DATA

    Directory of Open Access Journals (Sweden)

    STEFANIA LOREDANA NITA

    2016-07-01

    The classical tools used in data analysis are not enough to benefit from all the advantages of big data. The amount of information is too large for a complete investigation, and possible connections and relations between data could be missed, because it is difficult or even impossible to verify every assumption about the information. Machine learning is a good solution for finding concealed correlations or relationships between data, because it runs at scale and works very well with large data sets. The more data we have, the more useful a machine learning algorithm becomes, because it “learns” from the existing data and applies the discovered rules to new entries. In this paper, we present some machine learning algorithms and techniques used in big data.
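
    For illustration only (not from the cited paper): the abstract's core idea is that an algorithm "learns" rules from existing data and applies them to new entries. A minimal sketch of that train-then-predict pattern, using a standard scikit-learn classifier on the built-in Iris data set; the choice of data and model is an assumption made for the example.

```python
# Minimal sketch of learning rules from existing data and applying them
# to new entries; data set and model are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_old, y_old)                # "learn" from the existing data
predictions = model.predict(X_new)     # apply the learned rules to new entries
print((predictions == y_new).mean())   # fraction of new entries predicted correctly
```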

  19. Picture-Word Differences in Discrimination Learning: II. Effects of Conceptual Categories.

    Science.gov (United States)

    Bourne, Lyle E., Jr.; And Others

    A well established finding in the discrimination learning literature is that pictures are learned more rapidly than their associated verbal labels. It was hypothesized in this study that the usual superiority of pictures over words in a discrimination list containing same-instance repetitions would disappear in a discrimination list containing…

  20. Big Data Analysis for Personalized Health Activities: Machine Learning Processing for Automatic Keyword Extraction Approach

    Directory of Open Access Journals (Sweden)

    Jun-Ho Huh

    2018-04-01

    The obese population is increasing rapidly due to changes in lifestyle and diet habits. Obesity can cause various complications and is becoming a social disease. Nonetheless, many obese patients are unaware of the medical treatments that are right for them. Although a variety of online and offline obesity management services have been introduced, they are still not enough to attract the attention of users and are not much help in solving the problem. Obesity healthcare and personalized health activities are therefore important factors. Since obesity is related to lifestyle habits, eating habits, and interests, I concluded that big data analysis of these factors could address the problem. Therefore, I collected big data by applying machine learning and crawling methods to the unstructured citizen health data in Korea and to the search data of Naver, a Korean portal company, and Google, for keyword analysis for personalized health activities. The big data were visualized using text mining and word clouds. This study collected and analyzed data concerning interests related to obesity, changes of interest in obesity, and treatment articles. The analysis showed a wide range of seasonal factors according to spring, summer, fall, and winter. It also visualized and completed the process of extracting keywords appropriate for the treatment of abdominal obesity and lower body obesity. The keyword big data analysis technique for personalized health activities proposed in this paper is based on an individual’s interests, level of interest, and body type. A user interface (UI) that visualizes the big data is compatible with Android and Apple iOS, so users can see the data on the app screen. Many graphs and pictures can be seen via the menu, and significant data values are visualized through machine learning. Therefore, I expect that big data analysis using various keywords specific to a person will result in measures for personalized…
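
    For illustration only (not the paper's actual crawling pipeline, which works on Korean portal search data): the record describes extracting treatment-related keywords from text. The sketch below shows a generic TF-IDF keyword-extraction step with scikit-learn; the three English documents are invented stand-ins.

```python
# Generic keyword-extraction sketch (TF-IDF); the documents are made-up
# stand-ins, not the crawled Naver/Google data used in the cited study.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "abdominal obesity diet exercise seasonal weight",
    "lower body obesity exercise treatment interest",
    "weight loss diet spring summer interest",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(documents)

# Print the three highest-weighted keywords for each document.
terms = np.array(vectorizer.get_feature_names_out())
for row in tfidf.toarray():
    top = terms[np.argsort(row)[::-1][:3]]
    print(list(top))
```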

  1. Less is More: How manipulative features affect children’s learning from picture books

    Science.gov (United States)

    Tare, Medha; Chiong, Cynthia; Ganea, Patricia; DeLoache, Judy

    2010-01-01

    Picture books are ubiquitous in young children’s lives and are assumed to support children’s acquisition of information about the world. Given their importance, relatively little research has directly examined children’s learning from picture books. We report two studies examining children’s acquisition of labels and facts from picture books that vary on two dimensions: iconicity of the pictures and presence of manipulative features (or “pop-ups”). In Study 1, 20-month-old children generalized novel labels less well when taught from a book with manipulative features than from standard picture books without such elements. In Study 2, 30- and 36-month-old children learned fewer facts when taught from a manipulative picture book with drawings than from a standard picture book with realistic images and no manipulative features. The results of the two studies indicate that children’s learning from picture books is facilitated by realistic illustrations, but impeded by manipulative features. PMID:20948970

  2. Learning Analytics: The next frontier for computer assisted language learning in big data age

    Directory of Open Access Journals (Sweden)

    Yu Qinglan

    2015-01-01

    Learning analytics (LA) has been applied to various learning environments, though it is quite new in the field of computer assisted language learning (CALL). This article examines the application of learning analytics in the coming big data age. It starts with an introduction to learning analytics and its application in other fields, followed by a retrospective review of the historical interaction between learning and media in CALL, and an analysis of why people turn to learning analytics to increase the efficiency of foreign language education. As previous research has shown, new technology, including big data mining and analysis, will inevitably enhance the learning of foreign languages. Potential changes that learning analytics would bring to Chinese foreign language education and research are also presented in the article.

  3. A Big Data Platform for Storing, Accessing, Mining and Learning Geospatial Data

    Science.gov (United States)

    Yang, C. P.; Bambacus, M.; Duffy, D.; Little, M. M.

    2017-12-01

    Big Data is becoming a norm in geoscience domains. A platform that can efficiently manage, access, analyze, mine, and learn from big data to produce new information and knowledge is desired. This paper introduces our latest effort on developing such a platform, based on our past years' experience with cloud and high performance computing, analyzing big data, comparing big data containers, and mining big geospatial data for new information. The platform includes four layers: a) the bottom layer is a computing infrastructure with proper network, computer, and storage systems; b) the 2nd layer is a cloud computing layer based on virtualization to provide on-demand computing services for upper layers; c) the 3rd layer consists of big data containers that are customized for dealing with different types of data and functionalities; d) the 4th layer is a big data presentation layer that supports the efficient management, access, analysis, mining and learning of big geospatial data.

  4. Learning basic life support (BLS) with tablet PCs in reciprocal learning at school: are videos superior to pictures? A randomized controlled trial.

    Science.gov (United States)

    Iserbyt, Peter; Charlier, Nathalie; Mols, Liesbet

    2014-06-01

    It is often assumed that animations (i.e., videos) will lead to higher learning compared to static media (i.e., pictures) because they provide a more realistic demonstration of the learning task. To investigate whether learning basic life support (BLS) and cardiopulmonary resuscitation (CPR) from video produces higher learning outcomes compared to pictures in reciprocal learning. A randomized controlled trial. A total of 128 students (mean age: 17 years) constituting eight intact classes from a secondary school learned BLS in reciprocal roles of doer and helper with tablet PCs. Student pairs in each class were randomized over a Picture and a Video group. In the Picture group, students learned BLS by means of pictures combined with written instructions. In the Video group, BLS was learned through videos with on-screen instructions. Informational equivalence was assured since instructions in both groups comprised exactly the same words. BLS assessment occurred unannounced, three weeks following the intervention. Analysis of variance demonstrated no significant differences in chest compression depths between the Picture group (M=42 mm, 95% CI=40-45) and the Video group (M=39 mm, 95% CI=36-42). In the Picture group, significantly higher percentages of chest compressions with correct hand placement were achieved (M=67%, CI=58-77) compared to the Video group (M=53%, CI=43-63), P=.03, ηp² = .03. No other significant differences were found. Results do not support the assumption that videos are superior to pictures for learning BLS and CPR in reciprocal learning. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, aspects of the general theory of relativity such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  6. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  7. Big Data and Learning Analytics in Blended Learning Environments: Benefits and Concerns

    Directory of Open Access Journals (Sweden)

    Anthony G. Picciano

    2014-09-01

    The purpose of this article is to examine big data and learning analytics in blended learning environments. It will examine the nature of these concepts, provide basic definitions, and identify the benefits and concerns that apply to their development and implementation. This article draws on concepts associated with data-driven decision making, which evolved in the 1980s and 1990s, and takes a sober look at big data and analytics. It does not present them as panaceas for all of the issues and decisions faced by higher education administrators, but sees them as part of solutions, although not without significant investments of time and money to achieve worthwhile benefits.

  8. Picture Books Stimulate the Learning of Mathematics

    Science.gov (United States)

    van den Heuvel-Panhuizen, Marja; van den Boogaard, Sylvia; Doig, Brian

    2009-01-01

    In this article we describe our experiences using picture books to provide young children (five- to six-year-olds) with a learning environment where they can explore and extend preliminary notions of mathematics-related concepts, without being taught these concepts explicitly. We gained these experiences in the PICO-ma project, which aimed to…

  9. Hot big bang or slow freeze?

    Energy Technology Data Exchange (ETDEWEB)

    Wetterich, C.

    2014-09-07

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  10. Hot big bang or slow freeze?

    International Nuclear Information System (INIS)

    Wetterich, C.

    2014-01-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe

  11. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  12. Event-related potentials and recognition memory for pictures and words: the effects of intentional and incidental learning.

    Science.gov (United States)

    Noldy, N E; Stelmack, R M; Campbell, K B

    1990-07-01

    Event-related potentials were recorded under conditions of intentional or incidental learning of pictures and words, and during the subsequent recognition memory test for these stimuli. Intentionally learned pictures were remembered better than incidentally learned pictures and intentionally learned words, which, in turn, were remembered better than incidentally learned words. In comparison to pictures that were ignored, the pictures that were attended were characterized by greater positive amplitude frontally at 250 ms and centro-parietally at 350 ms and by greater negativity at 450 ms at parietal and occipital sites. There were no effects of attention on the waveforms elicited by words. These results support the view that processing becomes automatic for words, whereas the processing of pictures involves additional effort or allocation of attentional resources. The N450 amplitude was greater for words than for pictures during both acquisition (intentional items) and recognition phases (hit and correct rejection categories for intentional items, hit category for incidental items). Because pictures are better remembered than words, the greater late positive wave (600 ms) elicited by the pictures than the words during the acquisition phase is also consistent with the association between P300 and better memory that has been reported.

  13. When Do Pictures Help Learning from Expository Text? Multimedia and Modality Effects in Primary Schools

    Science.gov (United States)

    Herrlinger, Simone; Höffler, Tim N.; Opfermann, Maria; Leutner, Detlev

    2017-06-01

    Adding pictures to a text is very common in today's education and might be especially beneficial for elementary school children, whose abilities to read and understand pure text have not yet been fully developed. Our study examined whether adding pictures supports learning of a biology text in fourth grade and whether the text modality (spoken or written) plays a role. Results indicate that overall, pictures enhanced learning but that the text should be spoken rather than written. These results are in line with instructional design principles derived from common multimedia learning theories. In addition, for elementary school children, it might be advisable to read texts out to the children. Reading by themselves and looking at pictures might overload children's cognitive capacities and especially their visual channel. In this case, text and pictures would not be integrated into one coherent mental model, and effective learning would not take place.

  14. To What Extent Can the Big Five and Learning Styles Predict Academic Achievement

    Science.gov (United States)

    Köseoglu, Yaman

    2016-01-01

    Personality traits and learning styles play defining roles in shaping academic achievement. 202 university students completed the Big Five personality traits questionnaire and the Inventory of Learning Processes Scale and self-reported their grade point averages. Conscientiousness and agreeableness, two of the Big Five personality traits, related…

  15. Are pictures good for learning new vocabulary in a foreign language? Only if you think they are not.

    Science.gov (United States)

    Carpenter, Shana K; Olson, Kellie M

    2012-01-01

    The current study explored whether new words in a foreign language are learned better from pictures than from native language translations. In both between-subjects and within-subject designs, Swahili words were not learned better from pictures than from English translations (Experiments 1-3). Judgments of learning revealed that participants exhibited greater overconfidence in their ability to recall a Swahili word from a picture than from a translation (Experiments 2-3), and Swahili words were also considered easier to process when paired with pictures rather than translations (Experiment 4). When this overconfidence bias was eliminated through retrieval practice (Experiment 2) and instructions warning participants to not be overconfident (Experiment 3), Swahili words were learned better from pictures than from translations. It appears, therefore, that pictures can facilitate learning of foreign language vocabulary--as long as participants are not too overconfident in the power of a picture to help them learn a new word.

  16. Big data for space situation awareness

    Science.gov (United States)

    Blasch, Erik; Pugh, Mark; Sheaff, Carolyn; Raquepas, Joe; Rocci, Peter

    2017-05-01

    Recent advances in big data (BD) have focused research on the volume, velocity, veracity, and variety of data. These developments enable new opportunities in information management, visualization, machine learning, and information fusion that have potential implications for space situational awareness (SSA). In this paper, we explore some of these BD trends as applicable to SSA towards enhancing the space operating picture. The BD developments could increase measures of performance and measures of effectiveness for future management of the space environment. The global SSA influences include resident space object (RSO) tracking and characterization, cyber protection, remote sensing, and information management. Local satellite awareness can benefit from space weather, health monitoring, and spectrum management for space situation understanding. One area of big data of importance to SSA is value - getting the correct data/information at the right time, which corresponds to SSA visualization for the operator. A SSA big data example is presented supporting disaster relief for space situation awareness, assessment, and understanding.

  17. The role of picture books in young children’s mathematics learning

    NARCIS (Netherlands)

    van den Heuvel-Panhuizen, M.H.A.M; Elia, I.

    2013-01-01

    In this chapter we address the role of picture books in kindergartners’ learning of mathematics. The chapter is based on various studies we carried out on this topic from different perspectives. All studies sought to provide insight into the power of picture books to contribute to the development of

  18. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  19. Think big: learning contexts, algorithms and data science

    Directory of Open Access Journals (Sweden)

    Baldassarre Michele

    2016-12-01

    Due to the increasing growth of available data in recent years, all areas of research and the management of institutions and organisations, specifically schools and universities, feel the need to give meaning to this availability of data. This article, after a brief reference to the definition of big data, focuses attention and reflection on their types in order to extend their characterisation. One key to making the use of Big Data feasible in operational contexts is to provide a theoretical basis to which to refer. The Data, Information, Knowledge and Wisdom (DIKW) model correlates these four aspects, culminating in Data Science, which in many ways could revolutionise the established pattern of scientific investigation. Learning Analytics applications on online learning platforms can be tools for evaluating the quality of teaching. And that is where some problems arise: it becomes necessary to handle the available data with care. Finally, a criterion for deciding whether it makes sense to think of an analysis based on Big Data can be to consider its interpretability and relevance in relation to both institutional and personal processes.

  20. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    Science.gov (United States)

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  1. Special Issue: Every picture tells a story: Pupil representations of learning the violin

    Directory of Open Access Journals (Sweden)

    Andrea Creech

    2006-06-01

    The influence on learning outcomes of interpersonal interaction amongst teachers, pupils and parents is the subject of an inquiry that took this researcher on a voyage from the qualitative to the quantitative side of the “methodological divide”, and back again. This paper presents findings from the first phase of the research, which included a quantitative survey examining how the learning and teaching experience of violin pupils was influenced by the interpersonal dynamics of responsiveness and control within pupil-parent and pupil-teacher relationships. As part of the survey, pupils were asked to draw pictures of their violin lessons. It was thought that the pictures might reveal pupils’ perceptions of their experience of learning an instrument and that the pictures would add depth to the quantitative scales that measured interpersonal mechanisms and pupil outcomes. The pictures were subjected to content analysis and coded accordingly. These codes were matched with the pupil artists’ scores for control and responsiveness, as well as with their scores for outcomes that included enjoyment of music, personal satisfaction, self-esteem, self-efficacy, friendship, motivation and musical attainment. Analysis of variance was computed in order to test the null hypotheses that (a) pupil-teacher-parent interpersonal behaviour (control and responsiveness) was not represented in the pictures and (b) pupil outcomes were not reflected in the depictions of violin lessons. This paper presents the results of this analysis, thus addressing the question of whether the pictures could be accepted as telling a credible and coherent story about pupils’ perceptions of learning the violin.

  2. Strategies in Reading Comprehension: Individual Differences in Learning from Pictures and Words (A Footnote). Technical Report No. 300.

    Science.gov (United States)

    Levin, Joel R.; Guttmann, Joseph

    In a recent experiment it was discovered that although many children learn uniformly well (or poorly) from pictures and words, others learn appreciably better from pictures. The present study rules out an alternative explanation of those data--which had been produced on a single learning task containing both pictures and words--by obtaining…

  3. The Relationship between the Big-Five Model of Personality and Self-Regulated Learning Strategies

    Science.gov (United States)

    Bidjerano, Temi; Dai, David Yun

    2007-01-01

    The study examined the relationship between the big-five model of personality and the use of self-regulated learning strategies. Measures of self-regulated learning strategies and big-five personality traits were administered to a sample of undergraduate students. Results from canonical correlation analysis indicated an overlap between the…

  4. English made easy volume one a new ESL approach learning English through pictures

    CERN Document Server

    Crichton, Jonathan

    2015-01-01

    This is a fun and user-friendly way to learn English. English Made Easy is a breakthrough in English language learning—imaginatively exploiting how pictures and text can work together to create understanding and help learners learn more productively. It gives beginner English learners easy access to the vocabulary, grammar and functions of English as it is actually used in a comprehensive range of social situations. Self-guided students and classroom learners alike will be delighted by the way they are helped to progress easily from one unit to the next, using a combination of pictures and text.

  5. Technology and Pedagogy: Using Big Data to Enhance Student Learning

    Science.gov (United States)

    Brinton, Christopher Greg

    2016-01-01

    The "big data revolution" has penetrated many fields, from network monitoring to online retail. Education and learning are quickly becoming part of it, too, because today, course delivery platforms can collect unprecedented amounts of behavioral data about students as they interact with learning content online. This data includes, for…

  6. Anthropomorphism in Decorative Pictures: Benefit or Harm for Learning?

    Science.gov (United States)

    Schneider, Sascha; Nebel, Steve; Beege, Maik; Rey, Günter Daniel

    2018-01-01

    When people attribute human characteristics to nonhuman objects they are amenable to anthropomorphism. For example, human faces or the insertion of personalized labels are found to trigger anthropomorphism. Two studies examine the effects of these features when included in decorative pictures in multimedia learning materials. In a first…

  7. A Survey on Domain-Specific Languages for Machine Learning in Big Data

    OpenAIRE

    Portugal, Ivens; Alencar, Paulo; Cowan, Donald

    2016-01-01

    The amount of data generated in the modern society is increasing rapidly. New problems and novel approaches of data capture, storage, analysis and visualization are responsible for the emergence of the Big Data research field. Machine Learning algorithms can be used in Big Data to make better and more accurate inferences. However, because of the challenges Big Data imposes, these algorithms need to be adapted and optimized to specific applications. One important decision made by software engi...

  8. Are Pictures Good for Learning New Vocabulary in a Foreign Language? Only If You Think They Are Not

    Science.gov (United States)

    Carpenter, Shana K.; Olson, Kellie M.

    2012-01-01

    The current study explored whether new words in a foreign language are learned better from pictures than from native language translations. In both between-subjects and within-subject designs, Swahili words were not learned better from pictures than from English translations (Experiments 1-3). Judgments of learning revealed that participants…

  9. Learning big data with Amazon Elastic MapReduce

    CERN Document Server

    Singh, Amarkant

    2014-01-01

    This book is aimed at developers and system administrators who want to learn about Big Data analysis using Amazon Elastic MapReduce. Basic Java programming knowledge is required. You should be comfortable with using command-line tools. Prior knowledge of AWS, API, and CLI tools is not assumed. Also, no exposure to Hadoop and MapReduce is expected.

  10. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    … modern astronomy requires big data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing with … The authors highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications…

  11. Social Learning and Optimal Advertising in the Motion Picture Industry

    OpenAIRE

    Ohio University; Department of Economics; Hailey Hayeon Joo

    2009-01-01

    Social learning is thought to be a key determinant of the demand for movies. This can be a double-edged sword for motion picture distributors, because when a movie is good, social learning can enhance the effectiveness of movie advertising, but when a movie is bad, it can mitigate this effectiveness. This paper develops an equilibrium model of consumers' movie-going choices and movie distributors' advertising decisions. First, we develop a structural model for studios' optimal advertising str...

  12. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  13. The Words Children Hear: Picture Books and the Statistics for Language Learning.

    Science.gov (United States)

    Montag, Jessica L; Jones, Michael N; Smith, Linda B

    2015-09-01

    Young children learn language from the speech they hear. Previous work suggests that greater statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children's picture books and compared word type and token counts in that sample and a matched sample of child-directed speech. Overall, the picture books contained more unique word types than the child-directed speech. Further, individual picture books generally contained more unique word types than length-matched, child-directed conversations. The text of picture books may be an important source of vocabulary for young children, and these findings suggest a mechanism that underlies the language benefits associated with reading to children. © The Author(s) 2015.
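
    For illustration only (the two tiny "corpora" below are invented stand-ins, not the authors' 100-book corpus or CHILDES conversations): the study compares word type counts (unique words) with token counts (all words). A minimal sketch of that computation:

```python
# Minimal sketch of the type/token counts the study compares; the texts are
# invented examples, not the corpora used in the cited article.
import re

def types_and_tokens(text: str):
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)), len(tokens)

picture_book = "the brave little fox crept past the sleeping badger and the owl"
conversation = "look at the dog look the dog is big big dog"

for name, text in [("picture book", picture_book), ("child-directed speech", conversation)]:
    n_types, n_tokens = types_and_tokens(text)
    print(f"{name}: {n_types} types / {n_tokens} tokens")
```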

  14. Where is the bigger picture in the teaching and learning of mathematics?

    Directory of Open Access Journals (Sweden)

    Satsope Maoto

    2016-11-01

    This article presents an interpretive analysis of three different mathematics teaching cases to establish where the bigger picture should lie in the teaching and learning of mathematics. We use pre-existing data collected through pre-observation and post-observation interviews and passive classroom observation undertaken by the third author in two different Grade 11 classes taught by two different teachers at one high school. Another set of data was collected through participant observation of the second author’s Year 2 University class. We analyse the presence or absence of the bigger picture, especially, in the teachers’ questioning strategies and their approach to content, guided by Tall’s framework of three worlds of mathematics, namely the ‘conceptual-embodied’ world, the ‘proceptual-symbolic’ world and the ‘axiomatic-formal’ world. Within this broad framework we acknowledge Pirie and Kieren’s notion of folding back towards the attainment of an axiomatic-formal world. We argue that the teaching and learning of mathematics should remain anchored in the bigger picture and, in that way, mathematics is meaningful, accessible, expandable and transferable.

  15. New Data, Old Tensions: Big Data, Personalized Learning, and the Challenges of Progressive Education

    Science.gov (United States)

    Dishon, Gideon

    2017-01-01

    Personalized learning has become the most notable application of big data in primary and secondary schools in the United States. The combination of big data and adaptive technological platforms is heralded as a revolution that could transform education, overcoming the outdated classroom model, and realizing the progressive vision of…

  16. Machine Learning for Knowledge Extraction from PHR Big Data.

    Science.gov (United States)

    Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George

    2014-01-01

    Cloud computing, Internet of Things (IoT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients in home care). The patient data stored in such PHR systems constitute big data whose analysis, with the use of appropriate machine learning algorithms, is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction, to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises data preparation, model generation and data analysis modules and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
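    The engine described above chains data preparation, model generation and data analysis over PHR data. As a rough, hypothetical sketch of that three-stage flow (not the authors' Hadoop-based engine; all field names, values and the model choice are invented for illustration), one might write:

```python
# Illustrative sketch only: a minimal "prepare -> train -> analyze" pipeline over
# simulated PHR-style records. Field names and the model choice are hypothetical,
# not the engine described in the record above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# --- data preparation: simulate structured PHR features (vitals, device data)
n = 5000
X = np.column_stack([
    rng.normal(120, 15, n),    # systolic blood pressure
    rng.normal(80, 10, n),     # diastolic blood pressure
    rng.normal(7000, 2500, n)  # daily step count from a connected device
])
# hypothetical outcome: elevated risk when blood pressure is high and activity is low
risk = 0.03 * (X[:, 0] - 120) + 0.02 * (X[:, 1] - 80) - 0.0003 * (X[:, 2] - 7000)
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

# --- model generation
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# --- data analysis: evaluate and inspect which features drive the prediction
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")
print("feature importances:", model.feature_importances_)
```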

  17. Ripe for Change: Garden-Based Learning in Schools. Harvard Education Letter Impact Series

    Science.gov (United States)

    Hirschi, Jane S.

    2015-01-01

    "Ripe for Change: Garden-Based Learning in Schools" takes a big-picture view of the school garden movement and the state of garden-based learning in public K--8 education. The book frames the garden movement for educators and shows how school gardens have the potential to be a significant resource for teaching and learning. In this…

  18. Effects of Picture Labeling on Science Text Processing and Learning: Evidence from Eye Movements

    Science.gov (United States)

    Mason, Lucia; Pluchino, Patrik; Tornatora, Maria Caterina

    2013-01-01

    This study investigated the effects of reading a science text illustrated by either a labeled or unlabeled picture. Both the online process of reading the text and the offline conceptual learning from the text were examined. Eye-tracking methodology was used to trace text and picture processing through indexes of first- and second-pass reading or…

  19. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  20. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multi channels in learning activities promise extended benefits from traditional based learning-centred to a collaborative based learning-centred that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and semantic web into OLR offer a broader spectrum of…

  1. Machine learning of big data in gaining insight into successful treatment of hypertension.

    Science.gov (United States)

    Koren, Gideon; Nordon, Galia; Radinsky, Kira; Shalev, Varda

    2018-06-01

    Despite effective medications, rates of uncontrolled hypertension remain high. Treatment protocols are largely based on randomized trials and meta-analyses of these studies. The objective of this study was to test the utility of machine learning of big data in gaining insight into the treatment of hypertension. We applied machine learning techniques, such as decision trees and neural networks, to identify determinants that contribute to the success of hypertension drug treatment in a large set of patients. We also identified concomitant drugs not considered to have antihypertensive activity which may contribute to lowering blood pressure (BP). Higher initial BP predicts lower success rates. Among the medication options and their combinations, treatment with beta blockers appears to be more commonly effective, which is not reflected in contemporary guidelines. Among the numerous concomitant drugs taken by hypertensive patients, proton pump inhibitors (PPIs) and HMG-CoA reductase inhibitors (statins) significantly improved the success rate of hypertension treatment. In conclusion, machine learning of big data is a novel method to identify effective antihypertensive therapy and to repurpose medications already on the market for new indications. Our results related to beta blockers, stemming from machine learning of a large and diverse set of big data, in contrast to the much narrower criteria for randomized clinical trials (RCTs), should be corroborated and affirmed by other methods, as they hold potential promise for an old class of drugs which may be presently underutilized. These previously unrecognized effects of PPIs and statins have very recently been identified as effective in lowering BP in preliminary clinical observations, lending credibility to our big data results.
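    As a hedged illustration of the kind of analysis described above (not the study's actual pipeline; the variables, effect sizes and data below are simulated), a decision tree can be fit to patient records to surface candidate determinants of treatment success:

```python
# Minimal sketch, not the study's pipeline: fit a decision tree on simulated patient
# records to see which factors the tree picks as determinants of BP-treatment success.
# All variables and effect sizes are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 10000
initial_bp = rng.normal(160, 15, n)      # initial systolic BP
beta_blocker = rng.integers(0, 2, n)     # 1 = treated with a beta blocker
statin = rng.integers(0, 2, n)           # 1 = also taking a statin
ppi = rng.integers(0, 2, n)              # 1 = also taking a PPI

# hypothetical success model: higher initial BP lowers success; the drugs raise it
p = 1 / (1 + np.exp(0.05 * (initial_bp - 160)
                    - 0.6 * beta_blocker - 0.3 * statin - 0.2 * ppi))
success = rng.random(n) < p

X = np.column_stack([initial_bp, beta_blocker, statin, ppi])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, success)
print(export_text(tree, feature_names=["initial_bp", "beta_blocker", "statin", "ppi"]))
```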

  2. Get the picture? The effects of iconicity on toddlers' reenactment from picture books.

    Science.gov (United States)

    Simcock, Gabrielle; DeLoache, Judy

    2006-11-01

    What do toddlers learn from everyday picture-book reading interactions? To date, there has been scant research exploring this question. In this study, the authors adapted a standard imitation procedure to examine 18- to 30-month-olds' ability to learn how to reenact a novel action sequence from a picture book. The results provide evidence that toddlers can imitate specific target actions on novel real-world objects on the basis of a picture-book interaction. Children's imitative performance after the reading interaction varied both as a function of age and the level of iconicity of the pictures in the book. These findings are discussed in terms of children's emerging symbolic capacity and the flexibility of the cognitive representation.

  3. Sources of Evidence-of-Learning: Learning and Assessment in the Era of Big Data

    Science.gov (United States)

    Cope, Bill; Kalantzis, Mary

    2015-01-01

    This article sets out to explore a shift in the sources of evidence-of-learning in the era of networked computing. One of the key features of recent developments has been popularly characterized as "big data". We begin by examining, in general terms, the frame of reference of contemporary debates on machine intelligence and the role of…

  4. Stepping back to see the big picture: when obstacles elicit global processing.

    Science.gov (United States)

    Marguc, Janina; Förster, Jens; Van Kleef, Gerben A

    2011-11-01

    Can obstacles prompt people to look at the "big picture" and open up their minds? Do the cognitive effects of obstacles extend beyond the tasks with which they interfere? These questions were addressed in 6 studies involving both physical and nonphysical obstacles and different measures of global versus local processing styles. Perceptual scope increased after participants solved anagrams in the presence, rather than the absence, of an auditory obstacle (random words played in the background; Study 1), particularly among individuals low in volatility (i.e., those who are inclined to stay engaged and finish what they do; Study 4). It also increased immediately after participants encountered a physical obstacle while navigating a maze (Study 3A) and when compared with doing nothing (Study 3B). Conceptual scope increased after participants solved anagrams while hearing random numbers framed as an "obstacle to overcome" rather than a "distraction to ignore" (Study 2) and after participants navigated a maze with a physical obstacle, compared with a maze without a physical obstacle, but only when trait (Study 5) or state (Study 6) volatility was low. Results suggest that obstacles trigger an "if obstacle, then start global processing" response, primarily when people are inclined to stay engaged and finish ongoing activities. Implications for dealing with life's obstacles and related research are discussed.

  5. Big Society? Disabled people with the label of learning disabilities and the queer(y)ing of civil society

    OpenAIRE

    Goodley, Dan; Runswick-Cole, Katherine

    2014-01-01

    This paper explores the shifting landscape of civil society alongside the emergence of ‘Big Society’ in the UK. We do so as we begin a research project Big Society? Disabled people with learning disabilities and Civil Society [Economic and Social Research Council (ES/K004883/1)]; we consider what ‘Big Society’ might mean for the lives of disabled people labelled with learning disabilities (LDs). In the paper, we explore the ways in which the disabled body/mind might be thought of as a locus o...

  6. APPLICATION OF BIG DATA IN EDUCATION DATA MINING AND LEARNING ANALYTICS – A LITERATURE REVIEW

    Directory of Open Access Journals (Sweden)

    Katrina Sin

    2015-07-01

    Full Text Available The usage of learning management systems in education has been increasing in the last few years. Students have started using mobile phones, primarily smartphones that have become part of their daily life, to access online content. Students' online activities generate enormous amounts of unused data that are wasted, as traditional learning analytics are not capable of processing them. This has resulted in the penetration of Big Data technologies and tools into education, to process the large amount of data involved. This study looks into the recent applications of Big Data technologies in education and presents a review of the literature available on Educational Data Mining and Learning Analytics.

  7. Rule based systems for big data a machine learning approach

    CERN Document Server

    Liu, Han; Cocea, Mihaela

    2016-01-01

    The ideas introduced in this book explore the relationships among rule based systems, machine learning and big data. Rule based systems are seen as a special type of expert systems, which can be built by using expert knowledge or learning from real data. The book focuses on the development and evaluation of rule based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule based systems.

  8. Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

    Science.gov (United States)

    Macfadyen, Leah P.; Dawson, Shane; Pardo, Abelardo; Gaševic, Dragan

    2014-01-01

    In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real-time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self-regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the…

  9. Enhancing Health Risk Prediction with Deep Learning on Big Data and Revised Fusion Node Paradigm

    Directory of Open Access Journals (Sweden)

    Hongye Zhong

    2017-01-01

    Full Text Available With recent advances in health systems, the amount of health data is expanding rapidly in various formats. This data originates from many new sources including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction with the revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.
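    The framework described above fuses deep-learning predictions with other information sources. A very rough, hypothetical sketch of such fusion (not the paper's revised fusion node paradigm; the data split and weights are invented) is to combine the probability outputs of models trained on different health-data sources:

```python
# Hypothetical illustration of fusing predictions from models trained on different
# health-data sources; this is not the paper's revised fusion node framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=12, random_state=0)
X_rec, X_dev = X[:, :6], X[:, 6:]   # pretend: clinical records vs. wearable-device features
Xr_tr, Xr_te, Xd_tr, Xd_te, y_tr, y_te = train_test_split(X_rec, X_dev, y, random_state=0)

m_rec = LogisticRegression(max_iter=1000).fit(Xr_tr, y_tr)
m_dev = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(Xd_tr, y_tr)

# simple probabilistic fusion: weight each source's risk estimate equally
p = 0.5 * m_rec.predict_proba(Xr_te)[:, 1] + 0.5 * m_dev.predict_proba(Xd_te)[:, 1]
print("fused accuracy:", np.mean((p > 0.5) == y_te))
```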

  10. Big Fish, Little Fish: Teaching and Learning in the Middle Years

    Science.gov (United States)

    Groundwater-Smith, Susan, Ed.; Mockler, Nicole, Ed.

    2015-01-01

    "Big Fish, Little Fish: Teaching and Learning in the Middle Years" provides pre-service and early career teachers with a pathway to understanding the needs of students as they make the important transition from primary to secondary schooling. The book explores contemporary challenges for teaching and learning in the middle years, with a…

  11. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care
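    The manual iteration that Auto-ML is meant to replace is essentially a search over algorithms and hyper-parameter values. A generic, hedged illustration of such a search (using scikit-learn on synthetic data; this is not the Auto-ML software itself) looks like this:

```python
# Illustration of the kind of search Auto-ML automates: trying many hyper-parameter
# combinations instead of hand-tuning them. Generic scikit-learn randomized search
# on synthetic data, not the Auto-ML software described in the record above.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 6),
        "learning_rate": [0.01, 0.05, 0.1],
    },
    n_iter=20,        # each iteration stands in for one of the "manual iterations"
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```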

  12. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  13. Big Data, Deep Learning and Tianhe-2 at Sun Yat-Sen University, Guangzhou

    Science.gov (United States)

    Yuen, D. A.; Dzwinel, W.; Liu, J.; Zhang, K.

    2014-12-01

    In this decade the big data revolution has permeated many fields, ranging from financial transactions to medical surveys and scientific endeavors, because of the big opportunities people see ahead. What to do with all this data remains an intriguing question. This is where computer scientists, together with applied mathematicians, have made significant inroads in developing deep learning techniques for unraveling new relationships among the different variables by means of correlation analysis and data-assimilation methods. Deep learning and big data taken together pose a grand challenge for high-performance computing, demanding both ultrafast speed and large memory. The Tianhe-2, recently installed at Sun Yat-Sen University in Guangzhou, is well positioned to take up this challenge because it is currently the world's fastest computer at 34 petaflops. Each compute node of Tianhe-2 has two Intel Xeon E5-2600 CPUs and three Xeon Phi accelerators. Tianhe-2 has a very large, fast RAM of 88 gigabytes on each node, and the system has a total memory of 1,375 terabytes. All of these technical features will allow very high-dimensional (more than 10) problems in deep learning to be explored carefully on the Tianhe-2. Problems in seismology which can be addressed include three-dimensional seismic wave simulations of the whole Earth at a resolution of a few kilometers and the recognition of new phases in seismic waveforms from assemblages of large data sets.

  14. Tone of voice guides word learning in informative referential contexts.

    Science.gov (United States)

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  15. On the inside of a marble from quantum mechanics to the Big Bang

    CERN Document Server

    Bascom, Gavin

    2017-01-01

    Keeping in mind that we can only see the universe from the comfort of our home galaxy, Bascom begins his text by meticulously laying the necessary groundwork to understand the Big Bang’s mathematics without using any equations. He then paints a freeze-frame picture of our universe as if we had taken a three-dimensional picture with a giant camera. Within this picture, he traces forces beginning with the smallest (a single atom) to the biggest (the cosmos), keeping in mind that in this frozen moment everything further away from the observer spatially is also further away from the observer in time; that is, older. Soon a very real and very vivid image of the Big Bang appears (especially in things that are loud or hot), echoing down through time and into our everyday lives, reflected in every atom during every measurement. Then, slowly but deliberately, Bascom unfreezes this picture, ratcheting each moment from one to the next, showing us how and why quantum particles are constantly in contact with the Big Ban...

  16. Big(ger) Data as Better Data in Open Distance Learning

    Directory of Open Access Journals (Sweden)

    Paul Prinsloo

    2015-02-01

    Full Text Available In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously considered in realising this potential. The University of South Africa (Unisa) is one of the mega ODL institutions in the world with more than 360,000 students and a range of courses and programmes. Unisa already has access to a staggering amount of student data, hosted in disparate sources, and governed by different processes. As the university moves to mainstreaming online learning, the amount of and need for analyses of data are increasing, raising important questions regarding our assumptions, understanding, data sources, systems and processes. This article presents a descriptive case study of the current state of student data at Unisa, and explores the impact of existing data sources and analytic approaches. From the analysis it is clear that in order for big(ger) data to be better data, a number of issues need to be addressed. The article concludes by presenting a number of theses that should form the basis for the imperative to optimise the harvesting, analysis and use of student data.

  17. Cosmic relics from the big bang

    International Nuclear Information System (INIS)

    Hall, L.J.

    1988-12-01

    A brief introduction to the big bang picture of the early universe is given. Dark matter is discussed; particularly its implications for elementary particle physics. A classification scheme for dark matter relics is given. 21 refs., 11 figs., 1 tab

  18. Cosmic relics from the big bang

    Energy Technology Data Exchange (ETDEWEB)

    Hall, L.J.

    1988-12-01

    A brief introduction to the big bang picture of the early universe is given. Dark matter is discussed; particularly its implications for elementary particle physics. A classification scheme for dark matter relics is given. 21 refs., 11 figs., 1 tab.

  19. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the Internet of Things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  20. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
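    The review above distinguishes supervised from unsupervised machine learning and points to packages in R; as a purely illustrative sketch of that distinction (in Python rather than R, on an invented expression-like matrix):

```python
# The review discusses R packages; here is an equivalent toy illustration in Python of
# the supervised vs. unsupervised distinction on an invented gene-expression-like matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# 100 samples x 50 "genes"; two hidden groups with a shifted expression profile
labels = rng.integers(0, 2, 100)
expr = rng.normal(0, 1, (100, 50)) + labels[:, None] * np.r_[np.ones(5), np.zeros(45)]

# unsupervised: discover structure without using the labels
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)

# supervised: learn to predict the known labels
acc = cross_val_score(LogisticRegression(max_iter=1000), expr, labels, cv=5).mean()
print("cluster sizes:", np.bincount(clusters), "classification accuracy:", round(acc, 2))
```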

  1. Teacher Candidates Implementing Universal Design for Learning: Enhancing Picture Books with QR Codes

    Science.gov (United States)

    Grande, Marya; Pontrello, Camille

    2016-01-01

    The purpose of this study was to investigate if teacher candidates could gain knowledge of the principles of Universal Design for Learning by enhancing traditional picture books with Quick Response (QR) codes and to determine if the process of making these enhancements would impact teacher candidates' comfort levels with using technology on both…

  2. Sense Things in the Big Deep Water Bring the Big Deep Water to Computers so People can understand the Deep Water all the Time without getting wet

    Science.gov (United States)

    Pelz, M.; Heesemann, M.; Scherwath, M.; Owens, D.; Hoeberechts, M.; Moran, K.

    2015-12-01

    Senses help us learn stuff about the world. We put sense things in, over, and under the water to help people understand water, ice, rocks, life and changes over time out there in the big water. Sense things are like our eyes and ears. We can use them to look up and down, right and left all of the time. We can also use them on top of or near the water to see wind and waves. As the water gets deep, we can use our sense things to see many a layer of different water that make up the big water. On the big water we watch ice grow and then go away again. We think our sense things will help us know if this is different from normal, because it could be bad for people soon if it is not normal. Our sense things let us hear big water animals talking low (but sometimes high). We can also see animals that live at the bottom of the big water and we take lots of pictures of them. Lots of the animals we see are soft and small or hard and small, but sometimes the really big ones are seen too. We also use our sense things on the bottom and sometimes feel the ground shaking. Sometimes, we get little pockets of bad smelling air going up, too. In other areas of the bottom, we feel hot hot water coming out of the rock making new rocks and we watch some animals even make houses and food out of the hot hot water that turns to rock as it cools. To take care of the sense things we use and control water cars and smaller water cars that can dive deep in the water away from the bigger water car. We like to put new things in the water and take things out of the water that need to be fixed at least once a year. Sense things are very cool because you can use the sense things with your computer too. We share everything for free on our computers, which your computer talks to and gets pictures and sounds for you. Sharing the facts from the sense things is the best part about having the sense things because we can get many new ideas about understanding the big water from anyone with a computer!

  3. Clinical and CT scan pictures of cerebral cysticercosis

    Energy Technology Data Exchange (ETDEWEB)

    Singounas, E.G.; Krassanakis, K.; Karvounis, P.C. (Evangelismos Hospital, Athens (Greece))

    1982-01-01

    The clinical presentations and CT scan pictures of four patients harbouring big cysticercus cysts are described. The value of CT scanning in detecting these cysts is emphasized, as is the fact that these cysts can behave as space-occupying lesions, which must be differentiated from other cystic formations.

  4. Efforts to Improve Students' Learning Motivation in History Lessons through the Cooperative Picture and Picture Model in Class XI of SMA N 1 Kelam Permai, Sintang District

    Directory of Open Access Journals (Sweden)

    Susi Susanti

    2015-09-01

    Full Text Available The research question was: how can students' motivation in learning history be improved through the cooperative model of the Picture and Picture type in class XI of SMA N 1 Kelam Permai, Sintang District? This study used classroom action research conducted through two cycles, with each cycle comprising the stages of planning, action, observation, and reflection. The subjects were the 27 students of class XI IPS 3 at SMA N 1 Kelam Permai, Sintang District, in the 2014/2015 academic year, together with one history teacher. Data were obtained through classroom observation and documentation of the actions taken, so that increases or decreases after the classroom actions in each cycle could be noted. The results are: (1) before the cooperative Picture and Picture model was used in history learning in class XI of SMA N 1 Kelam Permai, Sintang District, students' motivation was 57.0%, categorized as adequate, as seen from the pre-action observations carried out by the researchers; students' motivation varied greatly and was mostly rather low, since students still tended to be passive, were busy with their own activities, and were poorly motivated to learn; (2) the application of the cooperative Picture and Picture model in history learning in class XI IPS 3 of SMA N 1 Kelam Permai, Sintang District, was implemented optimally and effectively, as can be seen from students being more active, interested, and enthusiastic in following the history lessons, because the cooperative Picture and Picture model is a learning model that uses images which are paired or sorted into a logical order; (3) there was an increase in students' motivation in learning history in class XI of SMA N 1 Kelam Permai. This is evident from the average value of student

  5. English made easy, v.1 a new ESL approach learning English through pictures

    CERN Document Server

    Crichton, Jonathan

    2015-01-01

    This is a fun and user-friendly way to learn English. English Made Easy is a breakthrough in English language learning, imaginatively exploiting how pictures and text can work together to create understanding and help learners learn more productively. It gives beginner English learners easy access to the vocabulary, grammar and functions of English as it is actually used in a comprehensive range of social situations. Self-guided students and classroom learners alike will be delighted by the way they are helped to progress easily from one unit to the next, using a combina

  6. Teachers' Beliefs, Instructional Behaviors, and Students' Engagement in Learning from Texts with Instructional Pictures

    Science.gov (United States)

    Schroeder, Sascha; Richter, Tobias; McElvany, Nele; Hachfeld, Axinja; Baumert, Jurgen; Schnotz, Wolfgang; Horz, Holger; Ullrich, Mark

    2011-01-01

    This study investigated the relations between teachers' pedagogical beliefs and students' self-reported engagement in learning from texts with instructional pictures. Participants were the biology, geography, and German teachers of 46 classes (Grades 5-8) and their students. Teachers' instructional behaviors and students' engagement in learning…

  7. Reflections on the Ready to Learn Initiative 2010 to 2015: How a Federal Program in Partnership with Public Media Supported Young Children's Equitable Learning during a Time of Great Change

    Science.gov (United States)

    Pasnik, Shelley; Llorente, Carlin; Hupert, Naomi; Moorthy, Savitha

    2016-01-01

    "Reflections on the Ready to Learn Initiative, 2010 to 2015," draws upon interviews with 26 prominent children's media researchers, producers, and thought leaders and a review of scholarly articles and reports to provide a big picture view of the status and future directions of children's media. In this illuminating report, EDC and SRI…

  8. Pictures of the month

    CERN Multimedia

    Claudia Marcelloni de Oliveira

    Starting with this issue, we will publish special pictures illustrating the ongoing construction and commissioning efforts. If you wish to have a professional photographer immortalize your detector before it disappears in the heart of ATLAS or for a special event, don't hesitate to contact Claudia Marcelloni de Oliveira (16-3687) from the CERN photo service. Members of the pixel team preparing to insert the outermost layer (the outer of the three barrel pixel layers) into the Global Support Frame for the Pixel Detector in SR1. Ongoing work on the first Big Wheel on the C side. Exploded view of the side-C Big Wheel and the barrel cryostat. The TRT Barrel services (HV, LV, cooling liquid, active gas, flushing gas) are now completely connected and tested. Hats off to Kirill Egorov, Mike Reilly, Ben Legeyt and Godwin Mayers who managed to fit everything within the small clearance margin!

  9. A Framework for Identifying and Analyzing Major Issues in Implementing Big Data and Data Analytics in E-Learning: Introduction to Special Issue on Big Data and Data Analytics

    Science.gov (United States)

    Corbeil, Maria Elena; Corbeil, Joseph Rene; Khan, Badrul H.

    2017-01-01

    Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educational institutions to gain new insights into how people learn (Kumar, 2013). E-learning has become an important part of education, and this form of learning is especially suited to the use of big data and data analysis,…

  10. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  11. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study is to determine students' interest in learning using the big book medium. A big book is an enlarged version of an ordinary book; it contains simple words and images that match the content of the sentences and their spelling. From this, the researchers can gauge students' interest and the development of their knowledge, and it also trains the researchers to remain creative in developing learning media for students.

  12. Learning from static versus animated pictures of embodied knowledge : A pilot study on reconstructing a ballet choreography as concept map

    NARCIS (Netherlands)

    Fürstenau, B.; Kuhtz, M.; Simon-Hatala, B.; Kneppers, L.; Cañas, A.; Reiska, P.; Novak, J.

    2016-01-01

    In a research study we investigated whether static or animated pictures better support learning of abstract pedagogical content about action-oriented learning. For that purpose, we conducted a study with two experimental groups. One group received a narration about learning theory and a supporting

  13. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  14. The words children hear: Picture books and the statistics for language learning

    OpenAIRE

    Montag, Jessica L.; Jones, Michael N.; Smith, Linda B.

    2015-01-01

    Young children learn language from the speech they hear. Previous work suggests that the statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children’s pict...

  15. Big Data in the Service of Educator Learning: What Should Be Done with Collected Online Professional Learning Information?

    Science.gov (United States)

    O'Brian, Mary M.

    2016-01-01

    The concern over big data and ramifications of its use permeates many, if not all, aspects of life in the 21st century. With the advent of online learning, another area of concern, one that directly impacts the world of education, has been added: the use of data within online professional development settings. In this article, we examine the type…

  16. The Role of Working Memory in Multimedia Instruction: Is Working Memory Working during Learning from Text and Pictures?

    Science.gov (United States)

    Schuler, Anne; Scheiter, Katharina; van Genuchten, Erlijn

    2011-01-01

    A lot of research has focused on the beneficial effects of using multimedia, that is, text and pictures, for learning. Theories of multimedia learning are based on Baddeley's working memory model (Baddeley 1999). Despite this theoretical foundation, there is only little research that aims at empirically testing whether and more importantly how…

  17. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  18. A Big Data and Learning Analytics Approach to Process-Level Feedback in Cognitive Simulations.

    Science.gov (United States)

    Pecaric, Martin; Boutis, Kathy; Beckstead, Jason; Pusic, Martin

    2017-02-01

    Collecting and analyzing large amounts of process data for the purposes of education can be considered a big data/learning analytics (BD/LA) approach to improving learning. However, in the education of health care professionals, the application of BD/LA is limited to date. The authors discuss the potential advantages of the BD/LA approach for the process of learning via cognitive simulations. Using the lens of a cognitive model of radiograph interpretation with four phases (orientation, searching/scanning, feature detection, and decision making), they reanalyzed process data from a cognitive simulation of pediatric ankle radiography where 46 practitioners from three expertise levels classified 234 cases online. To illustrate the big data component, they highlight the data available in a digital environment (time-stamped, click-level process data). Learning analytics were illustrated using algorithmic computer-enabled approaches to process-level feedback.For each phase, the authors were able to identify examples of potentially useful BD/LA measures. For orientation, the trackable behavior of re-reviewing the clinical history was associated with increased diagnostic accuracy. For searching/scanning, evidence of skipping views was associated with an increased false-negative rate. For feature detection, heat maps overlaid on the radiograph can provide a metacognitive visualization of common novice errors. For decision making, the measured influence of sequence effects can reflect susceptibility to bias, whereas computer-generated path maps can provide insights into learners' diagnostic strategies.In conclusion, the augmented collection and dynamic analysis of learning process data within a cognitive simulation can improve feedback and prompt more precise reflection on a novice clinician's skill development.
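    The raw material for this BD/LA approach is time-stamped, click-level process data. A toy, hypothetical example of such a log and one per-learner summary (the event names, timings and the "skipped view" flag below are invented for illustration, not the authors' data) might look like:

```python
# A toy example of time-stamped, click-level process data of the kind the record
# describes, and one learning-analytics style summary per learner. All event names
# and values are hypothetical.
import pandas as pd

log = pd.DataFrame({
    "learner": ["a", "a", "a", "a", "b", "b", "b"],
    "event":   ["open_history", "view_ap", "view_lateral", "decide",
                "open_history", "view_ap", "decide"],
    "t_sec":   [0, 4, 9, 15, 0, 3, 7],
})

summary = (
    log.groupby("learner")
       .agg(total_time=("t_sec", "max"),
            n_views=("event", lambda e: e.str.startswith("view").sum()))
)
# flag learners who decided without viewing both images (cf. skipped views above)
summary["skipped_view"] = summary["n_views"] < 2
print(summary)
```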

  19. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  20. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and progress education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education as well as critically explore the perils of applying a data driven approach to education. Despite the claimed value of the...

  1. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration; Klimentov, Alexei; Korchuganova, Tatiana

    2017-01-01

    BigPanDA monitoring is a web-based application which processes and represents the states of Production and Distributed Analysis (PanDA) system objects. Analyzing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a particular event failure or to observe the bigger picture of the system, such as tracking the performance of a computation nucleus and its satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core monitoring component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this wor...

  2. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration

    2017-01-01

    BigPanDA monitoring is a web-based application that processes and represents the states of Production and Distributed Analysis (PanDA) system objects. Analysing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a particular event failure or to observe the bigger picture of the system, such as tracking the performance of a computation nucleus and its satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core monitoring component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this work...

  3. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  4. Big Data as a Driver for Clinical Decision Support Systems: A Learning Health Systems Perspective

    Directory of Open Access Journals (Sweden)

    Arianna Dagliati

    2018-05-01

    Full Text Available Big data technologies are nowadays providing health care with powerful instruments to gather and analyze large volumes of heterogeneous data collected for different purposes, including clinical care, administration, and research. This makes it possible to design IT infrastructures that favor the implementation of the so-called “Learning Healthcare System Cycle,” where healthcare practice and research are part of a unique and synergic process. In this paper we highlight how “Big Data enabled” integrated data collections may support clinical decision-making together with biomedical research. Two effective implementations are reported, concerning decision support in Diabetes and in Inherited Arrhythmogenic Diseases.

  5. East coast gas - the big picture

    International Nuclear Information System (INIS)

    Drummond, K.J.

    1998-01-01

    The North American conventional gas resource base was reviewed and an explanation of how Canada's East coast fits into the overall picture was given. At 1996 year end, the total conventional ultimate natural gas resource base for North America was estimated to be 2,695 trillion cubic feet. The most important supply areas are Canada and the United States. Mexico and Alaska are expected to play only a minor role in the overall North American supply. Approximately half of the conventional gas estimated to exist in North America remains to be discovered. Only 78 per cent from the half that has been discovered has been produced, and only 22 per cent of it is remaining as reserves. Of the undiscovered natural gas resource, 38 per cent is in the frontier regions of Alaska and Canada. The growing importance of the East coast of North America as markets for natural gas was reviewed. The distribution of ultimate conventional marketable gas resources for Canada was described. The potential of the Western Canadian Sedimentary Basin (WCSB) and selected frontier areas were assessed. The report showed an undiscovered conventional marketable gas estimate of 122 trillion cubic feet for the WCSB and 107 trillion cubic feet for the Frontier areas. The two most significant areas of discovery in eastern Canada were considered to be the Hibernia oil field on the Grand Banks and the Venture gas field of the Scotian Shelf. 2 tabs., 7 figs

  6. Frequent Itemset Mining of Big Data for Social Media

    OpenAIRE

    Roshani Pardeshi; Prof. Madhu Nashipudimath

    2016-01-01

    Big data is a term for massive data sets with a large, varied and complex structure that are difficult to store, analyze and visualize for further processing or results. Big data includes data from email, documents, pictures, audio and video files, and other sources that do not fit into a relational database. This unstructured data brings enormous challenges to big data. The process of researching massive amounts of data to reveal hidden patterns and secret correlations is named as big ...
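    As a minimal, dependency-free illustration of the frequent itemset mining named in this record (the social-media "posts" and the support threshold below are invented), one can simply count how often item combinations co-occur:

```python
# Minimal, dependency-free illustration of frequent itemset mining: count itemsets
# that appear in enough "posts". The data and support threshold are invented.
from collections import Counter
from itertools import combinations

posts = [
    {"big data", "hadoop", "spark"},
    {"big data", "spark"},
    {"big data", "hadoop"},
    {"privacy", "big data"},
]
min_support = 2  # an itemset is "frequent" if it occurs in at least 2 posts

counts = Counter()
for post in posts:
    for size in (1, 2):
        for itemset in combinations(sorted(post), size):
            counts[itemset] += 1

frequent = {items: c for items, c in counts.items() if c >= min_support}
print(frequent)
```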

  7. Getting The Picture: Our Changing Climate- A new learning tool for climate science

    Science.gov (United States)

    Yager, K.; Balog, J. D.

    2014-12-01

    Earth Vision Trust (EVT), founded by James Balog, photographer and scientist, has developed a free, online, multimedia climate science education tool for students and educators. Getting The Picture (GTP) creates a new learning experience, drawing upon powerful archives of Extreme Ice Survey's unique photographs and time-lapse videos of changing glaciers around the world. GTP combines the latest in climate science through interactive tools that make the basic scientific tenets of climate science accessible and easy to understand. The aim is to use a multidisciplinary approach to encourage critical thinking about the way our planet is changing due to anthropogenic activities, and to inspire students to find their own voice regarding our changing climate. The essence of this resource is storytelling through the use of inspiring images, field expedition notes and dynamic multimedia tools. EVT presents climate education in a new light, illustrating the complex interaction between humans and nature through their Art + Science approach. The overarching goal is to educate and empower young people to take personal action. GTP is aligned with national educational and science standards (NGSS, CCSS, Climate Literacy) so it may be used in conventional classrooms as well as education centers, museum kiosks or anywhere with Internet access. Getting The Picture extends far beyond traditional learning to provide an engaging experience for students, educators and all those who wish to explore the latest in climate science.

  8. Lessons learned from a whole hospital PACS installation. Picture Archiving and Communication System.

    Science.gov (United States)

    Pilling, J R

    2002-09-01

    The Norfolk and Norwich University Hospital has incorporated a fully filmless Picture Archiving and Communication System (PACS) as part of a new hospital provision using PFI funding. The PACS project has been very successful and has met with unanimous acclaim from radiologists and clinicians. A project of this size cannot be achieved without learning some lessons from mistakes and recognising areas where attention to detail resulted in a successful implementation. This paper considers the successes and problems encountered in a large PACS installation.

  9. Young children's learning and transfer of biological information from picture books to real animals.

    Science.gov (United States)

    Ganea, Patricia A; Ma, Lili; Deloache, Judy S

    2011-01-01

    Preschool children (N = 104) read a book that described and illustrated color camouflage in animals (frogs and lizards). Children were then asked to indicate and explain which of 2 novel animals would be more likely to fall prey to a predatory bird. In Experiment 1, 3- and 4-year-olds were tested with pictures depicting animals in camouflage and noncamouflage settings; in Experiment 2, 4-year-olds were tested with real animals. The results show that by 4 years of age, children can learn new biological facts from a picture book. Of particular importance, transfer from books to real animals was found. These findings point to the importance that early book exposure can play in framing and increasing children's knowledge about the world. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.

  10. The Picture Superiority Effect and Biological Education.

    Science.gov (United States)

    Reid, D. J.

    1984-01-01

    Discusses learning behaviors where the "picture superiority effect" (PSE) seems to be most effective in biology education. Also considers research methodology and suggests a new research model which allows a more direct examination of the strategies learners use when matching up picture and text in efforts to "understand"…

  11. Development of Pupils Picture Aesthetic Competences on the Basis of IT-didactic Designs of Digital Picture Production

    DEFF Research Database (Denmark)

    Rasmussen, Helle

    Proposal information: The topic for this presentation is an ongoing investigation of the connection between the learning outcome of digital picture production and IT-didactic designs, and it refers to a Ph.D. project in progress. The research method refers to Design-Based Research, since the project is based on a design-theoretical view of learning (Cobb et al. 2003; Van den Akker 2006; Collins 2004). Learning is here to be understood as “a sign producing activity in a specific situation within an institutional framing”, which makes...... References: “... Education” (English title), The Danish University of Education. Cobb, P. et al. (2003): “Design Experiments in Educational Research”, in Educational Researcher, vol. 32, no. 1. Collins, Allan et al. (2004): “Design Research: Theoretical and Methodological Issues”, in Journal of the Learning Sciences, vol. ......

  12. Fragmented pictures revisited: long-term changes in repetition priming, relation to skill learning, and the role of cognitive resources.

    Science.gov (United States)

    Kennedy, Kristen M; Rodrigue, Karen M; Raz, Naftali

    2007-01-01

    Whereas age-related declines in declarative memory have been demonstrated in multiple cross-sectional and longitudinal studies, the effect of age on non-declarative manifestations of memory, such as repetition priming and perceptual skill learning, is less clear. The common assumption, based on cross-sectional studies, is that these processes are only mildly (if at all) affected by age. Our aim was to investigate long-term changes in repetition priming and age-related differences in the identification of fragmented pictures in a 5-year longitudinal design. Healthy adults (age 28-82 years) viewed drawings of objects presented in descending order of fragmentation. The identification threshold (IT) was the highest fragmentation level at which the object was correctly named. After a short interval, old pictures were presented again along with a set of similar but novel pictures. Five years later the participants repeated the experiment. At baseline and 5-year follow-up alike, one repeated exposure improved IT for old (priming) and new (skill acquisition) pictures. However, long-term retention of priming gains was observed only in young adults. Working memory explained a significant proportion of variance in within-occasion priming, long-term priming, and skill learning. Contrary to cross-sectional results, this longitudinal study suggests perceptual repetition priming is not an age-invariant phenomenon and advanced age and reduced availability of cognitive resources may contribute to its decline. Copyright 2007 S. Karger AG, Basel.

  13. Application of Picture and Picture Cooperative Learning to Improve Early Childhood Cognitive Development in a Playgroup

    Directory of Open Access Journals (Sweden)

    Rosmaryn Tutupary

    2017-07-01

    Full Text Available The world of children is a world of play, and learning happens with or while playing, involving all of the child's senses. Early childhood education (PAUD) teachers and parents need to attend to the aspects of personality involved in children's development, including the cognitive, moral, intellectual, motor and social-emotional aspects. These five aspects can influence a child's thinking, and this depends strongly on the abilities of each individual. Children therefore need good and appropriate stimulation to optimize these aspects of their development. This study aims to determine how the application of cooperative learning of the Picture and Picture type improves early childhood cognitive development in KB Mawar FKIP Unpatti Ambon. The research is classroom action research; the subjects were 10 students aged 4-5 years at KB Mawar FKIP Unpatti Ambon. Data were collected through observation and interviews. The procedure was carried out in two cycles, cycle I and cycle II. To determine the results of student learning using the learning strategy with song on the cognitive aspect, an evaluation in the form of observation of the cognitive aspect was conducted after learning was completed in each cycle, at the end of the second meeting. The results showed that in cycle I there were still students who did not meet the indicator criteria set by the tutor, which can be regarded as a weakness in the implementation of the cycle I action, whereas in cycle II students were very actively listening to the teacher's explanation. "Very active" here means that students could follow the teacher's explanation well, so that assigned tasks could be completed well. It can thus be concluded that using cooperative learning of the Picture and Picture type can develop early childhood cognition in KB Mawar FKIP Unpatti Ambon.

  14. [Understanding the symbolic values of Japanese onomatopoeia: comparison of Japanese and Chinese speakers].

    Science.gov (United States)

    Haryu, Etsuko; Zhao, Lihua

    2007-10-01

    Do non-native speakers of the Japanese language understand the symbolic values of Japanese onomatopoeia matching a voiced/unvoiced consonant with a big/small sound made by a big/small object? In three experiments, participants who were native speakers of Japanese, Japanese-learning Chinese, or Chinese without knowledge of the Japanese language were shown two pictures. One picture was of a small object making a small sound, such as a small vase being broken, and the other was of a big object making a big sound, such as a big vase being broken. Participants were presented with two novel onomatopoetic words with voicing contrasts, e.g., /dachan/ vs. /tachan/, and were told that each word corresponded to one of the two pictures. They were then asked to match the words to the corresponding pictures. Chinese without knowledge of Japanese performed only at chance level, whereas Japanese and Japanese-learning Chinese successfully matched a voiced/unvoiced consonant with a big/small object respectively. The results suggest that the key to understanding the symbolic values of voicing contrasts in Japanese onomatopoeia is some basic knowledge that is intrinsic to the Japanese language.

  15. Using Multiple Big Datasets and Machine Learning to Produce a New Global Particulate Dataset: A Technology Challenge Case Study

    Science.gov (United States)

    Lary, D. J.

    2013-12-01

    A BigData case study is described where multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined using machine learning on a distributed cluster using an automated workflow. The global particulate dataset is relevant to global public health studies and would not be possible to produce without the use of the multiple big datasets, in-situ data and machine learning. To greatly reduce the development time and enhance the functionality, a high-level language capable of parallel processing has been used (Matlab). Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes and a precise process time scheduling capability.

  16. Application of machine learning methods in big data analytics at management of contracts in the construction industry

    Directory of Open Access Journals (Sweden)

    Valpeters Marina

    2018-01-01

    Full Text Available The number of experts who realize the importance of big data continues to increase in various fields of the economy, and experts are beginning to use big data more frequently to solve their specific problems. One likely big data task in the construction industry is determining the probability of contract execution at the stage when the contract is concluded. The contract holder cannot guarantee execution of the contract, which creates considerable risk for the customer. This article is devoted to the applicability of machine learning methods to the task of determining the probability of successful contract execution. The authors seek to reveal the factors that influence the possibility of contract default and then to define corrective actions for the customer. In the analysis, the authors used linear and non-linear algorithms, feature extraction, feature transformation and feature selection. The results include prognostic models with predictive power based on machine learning algorithms such as logistic regression, decision trees and random forests. The authors validated the models on available historical data. The developed models have the potential for practical use in construction organizations when making new contracts.
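
    The record above describes the modelling task only in general terms, so the following is a minimal illustrative sketch rather than the authors' pipeline: it fits two of the model families named in the abstract (logistic regression and a random forest) to synthetic contract data. The feature names and the synthetic label are assumptions made for the example.

```python
# Hypothetical sketch: predicting the probability that a construction contract
# will be executed successfully. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),   # hypothetical: winning bid as a fraction of the cost estimate
    rng.integers(0, 20, n),     # hypothetical: contractor's number of prior contracts
    rng.uniform(1, 36, n),      # hypothetical: contract duration in months
])
# Synthetic label (1 = executed successfully): default is more likely for heavy
# discounts, short track records and long durations.
logits = 4.0 * (X[:, 0] - 0.75) + 0.15 * X[:, 1] - 0.03 * X[:, 2]
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # estimated probability of successful execution
    print(type(model).__name__, "AUC =", round(roc_auc_score(y_te, proba), 3))
```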

  17. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon). In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area
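
    One of the four questions the abstract raises, how an ML program can be distributed over a cluster, can be illustrated with a toy, single-machine simulation of synchronous data-parallel training. The worker count, model and averaging scheme below are illustrative assumptions, not the authors' system.

```python
# Toy simulation (an assumption, not the authors' framework) of synchronous
# data-parallel training: each "worker" computes a gradient on its data shard,
# gradients are averaged (the communication step), and all workers apply the
# same update to the shared parameters.
import numpy as np

rng = np.random.default_rng(1)
d, n, n_workers = 5, 4000, 4
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

shards = np.array_split(np.arange(n), n_workers)  # each worker holds one shard
w = np.zeros(d)                                   # globally shared parameters
lr = 0.1
for step in range(200):
    grads = []
    for idx in shards:                            # run in parallel on a real cluster
        Xi, yi = X[idx], y[idx]
        grads.append(2 * Xi.T @ (Xi @ w - yi) / len(idx))  # least-squares gradient
    w -= lr * np.mean(grads, axis=0)              # all-reduce: average, then update
print("parameter error:", np.linalg.norm(w - w_true))
```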

  18. Selected pictures of the month

    CERN Multimedia

    Claudia Marcelloni de Oliveira

    View of a single MDT Big Wheel (on side A in UX15 cavern) taken during its last movement immediately after being assembled and just before being connected to the neighbouring TGC1 wheel. Assembly work on the Cathode Strip Chambers on Small Wheel C in building 190. Connecting the services for the Cathode Strip Chambers. The installation of the optical fibers for the readout of the Cathode Strip Chambers on Small Wheel C by the Irvine group. Best from our archives: View of the End Cap Calorimeter and TGC big wheel from the Cryostat side A of ATLAS cavern taken on 22 May 2007. The picture above was taken from the platform in the middle, between the Cryostat and the End-Cap. Muriel hopes you all had a great vacation. She herself had a wonderful time sailing in Galicia (North Western Spain). She can be seen here wearing the traditional dress offered to her by "Los Amigos de las Dornas" (Friends of the Dornas - traditional sailing boats used for fishing) - when she became ...

  19. The Value of Being a Conscientious Learner: Examining the Effects of the Big Five Personality Traits on Self-Reported Learning from Training

    Science.gov (United States)

    Woods, Stephen A.; Patterson, Fiona C.; Koczwara, Anna; Sofat, Juilitta A.

    2016-01-01

    Purpose: The aim of this paper is to examine the impact of personality traits of the Big Five model on training outcomes to help explain variation in training effectiveness. Design/Methodology/Approach: Associations of the Big Five with self-reported learning following training were tested in a pre- and post-design in a field sample of junior…

  20. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  1. Big Data's Call to Philosophers of Education

    Science.gov (United States)

    Blanken-Webb, Jane

    2017-01-01

    This paper investigates the intersection of big data and philosophy of education by considering big data's potential for addressing learning via a holistic process of coming-to-know. Learning, in this sense, cannot be reduced to the difference between a pre- and post-test, for example, as it is constituted at least as much by qualities of…

  2. A Conceptual Paper on the Application of the Picture Word Inductive Model Using Bruner's Constructivist View of Learning and the Cognitive Load Theory

    Science.gov (United States)

    Jiang, Xuan; Perkins, Kyle

    2013-01-01

    Bruner's constructs of learning, specifically the structure of learning, spiral curriculum, and discovery learning, in conjunction with the Cognitive Load Theory, are used to evaluate the Picture Word Inductive Model (PWIM), an inquiry-oriented inductive language arts strategy designed to teach K-6 children phonics and spelling. The PWIM reflects…

  3. How Big is Earth?

    Science.gov (United States)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big Is Earth? expands on the Eratosthenes Project through an online learning environment provided by the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss and brainstorm solutions in a discussion forum. There is an ongoing database of student measurements and another database that collects data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
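
    A short worked example of the calculation the abstract describes, with illustrative numbers rather than the project's measurements: the difference between the shadow angles measured at two sites at the same moment, together with the north-south distance between the sites, gives the circumference.

```python
# Worked example (illustrative numbers, not the project's data) of Eratosthenes'
# method: the Sun's rays are parallel, so the difference in shadow angle at two
# sites equals the angle they subtend at Earth's centre, and
#   circumference = 360 / angle_difference_deg * distance_between_sites.
import math

shadow_angle_a = 0.0    # degrees: Sun overhead at site A (no shadow at local noon)
shadow_angle_b = 7.2    # degrees: shadow angle measured at site B at the same moment
distance_km = 800.0     # assumed north-south distance between the two sites

angle_diff = abs(shadow_angle_b - shadow_angle_a)
circumference = 360.0 / angle_diff * distance_km
print(f"estimated circumference: {circumference:.0f} km")  # ~40000 km

# The shadow angle itself comes from the stick (gnomon) and shadow lengths:
# angle = atan(shadow_length / stick_length).
print(math.degrees(math.atan(0.1263 / 1.0)))  # ~7.2 degrees for a 1 m stick
```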

  4. What Is Seen and What Is Listened: an Experience on the Visual Learning of Music Through the Artistic Picture

    Directory of Open Access Journals (Sweden)

    Carmen M. Zavala Arnal

    2017-09-01

    Full Text Available This paper shows the results of an experiment consisting of a sequence of didactic activities carried out with first-year Musical Language students in Professional Conservatory Education, using the artistic picture as the main tool for historical and musical contextualization and to support musical audition and interpretation. On this occasion, the central panel of the altarpiece of the Coronation of the Virgin from the parochial church of Retascón (Zaragoza), made in the first third of the fifteenth century by the Master of Retascón, which includes singing angels with music scrolls, is the medium through which the different learning activities are developed. In addition, an unpublished iconographic-musical description of the selected work is provided. With the aim of reaching specific learning objectives related to medieval and modal music, the quantitative method is used alongside the particular methodologies of artistic and musical education. Its results confirm the usefulness of the artistic picture in Musical Language learning.

  5. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI Google Inc.

  6. WE-H-BRB-00: Big Data in Radiation Oncology

    International Nuclear Information System (INIS)

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI Google Inc.

  7. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  8. m-Health 2.0: New perspectives on mobile health, Machine Learning and Big Data Analytics.

    Science.gov (United States)

    Istepanian, Robert S H; Al-Anzi, Turki

    2018-06-08

    Mobile health (m-Health) has been repeatedly called the biggest technological breakthrough of our modern times. Similarly, the concept of big data in the context of healthcare is considered one of the transformative drivers for intelligent healthcare delivery systems. In recent years, big data has become increasingly synonymous with mobile health; however, key challenges of 'big data and mobile health' remain largely untackled. This is becoming particularly important with the continued deluge of structured and unstructured data sets generated on a daily basis by the proliferation of mobile health applications within different healthcare systems and products globally. The aim of this paper is twofold. First, we present the relevant big data issues from the mobile health (m-Health) perspective. In particular, we discuss these issues from the technological areas and building blocks (communications, sensors and computing) of mobile health and the newly defined (m-Health 2.0) concept. The second objective is to present the relevant rapprochement issues of big m-Health data analytics with m-Health. Further, we also present the current and future roles of machine and deep learning within the current smartphone-centric m-Health model. The critical balance between these two important areas will depend on how different stakeholders, from patients, clinicians and healthcare providers to medical and m-Health market businesses and regulators, perceive these developments. These new perspectives are essential for a better understanding of the fine balance between new insights into how intelligent and connected future mobile health systems will be and the inherent risks and clinical complexities associated with the big data sets and analytical tools used in these systems. These topics will be subject to extensive work and investigation in the foreseeable future in the areas of data analytics, computational and artificial intelligence methods applied for mobile health

  9. How Big Data, Comparative Effectiveness Research, and Rapid-Learning Health-Care Systems Can Transform Patient Care in Radiation Oncology.

    Science.gov (United States)

    Sanders, Jason C; Showalter, Timothy N

    2018-01-01

    Big data and comparative effectiveness research methodologies can be applied within the framework of a rapid-learning health-care system (RLHCS) to accelerate discovery and to help turn the dream of fully personalized medicine into a reality. We synthesize recent advances in genomics with trends in big data to provide a forward-looking perspective on the potential of new advances to usher in an era of personalized radiation therapy, with emphases on the power of RLHCS to accelerate discovery and the future of individualized radiation treatment planning.

  10. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data also imply specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  11. Training and Maintenance of a Picture-Based Communication Response in Older Adults with Dementia

    Science.gov (United States)

    Trahan, Maranda A.; Donaldson, Jeanne M.; McNabney, Matthew K.; Kahng, SungWoo

    2014-01-01

    We examined whether adults with dementia could learn to emit a picture-based communication response and if this skill would maintain over time. Three women with moderate to severe dementia were taught to exchange a picture card for a highly preferred activity. All participants quickly learned to exchange the picture card and maintained this…

  12. BIG-DATA and the Challenges for Statistical Inference and Economics Teaching and Learning

    Directory of Open Access Journals (Sweden)

    J.L. Peñaloza Figueroa

    2017-04-01

    Full Text Available The increasing automation of data collection, in either structured or unstructured formats, as well as the development of reading, concatenation and comparison algorithms and the growing analytical skills that characterize the era of Big Data, cannot be considered only a technological achievement; they are also an organizational, methodological and analytical challenge for knowledge, which is necessary to generate opportunities and added value. In fact, exploiting the potential of Big Data touches all fields of community activity; and given its ability to extract behaviour patterns, we are interested in the challenges for the field of teaching and learning, particularly in the field of statistical inference and economic theory. Big Data can improve the understanding of concepts, models and techniques used in both statistical inference and economic theory, and it can also generate reliable and robust short- and long-term predictions. These facts have led to a demand for analytical capabilities, which in turn encourages teachers and students to demand access to the massive information produced by individuals, companies and public and private organizations in their transactions and inter-relationships. Mass data (Big Data) is changing the way people access, understand and organize knowledge, which in turn is causing a shift in the approach to statistics and economics teaching, considering them as a real way of thinking rather than just operational and technical disciplines. Hence, the question is how teachers can use automated collection and analytical skills to their advantage when teaching statistics and economics, and whether this will lead to a change in what is taught and how it is taught.

  13. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  14. Big Data, Analytics, Learning and Education - a critical look

    DEFF Research Database (Denmark)

    Ryberg, Thomas

    2016-01-01

    Concepts such as Big Data and Learning Analytics have begun to circulate within the world of education, and there are high hopes and expectations about the positive changes they will bring to the education sector. At the same time, it appears that the understanding of what Big Data and Learning Analytics...... are and can do is vaguer and hazier than these high hopes and expectations suggest. In this article I therefore take a critical look at our conceptions of educational technologies in general, and of Big Data and Analytics in particular....

  15. Proceedings of the Second All-USGS Modeling Conference, February 11-14, 2008: Painting the Big Picture

    Science.gov (United States)

    Brady, Shailaja R.

    2009-01-01

    The Second USGS Modeling Conference was held February 11-14, 2008, in Orange Beach, Ala. Participants at the conference came from all U.S. Geological Survey (USGS) regions and represented all four science disciplines - Biology, Geography, Geology, and Water. Representatives from other Department of the Interior (DOI) agencies and partners from the academic community also participated. The conference, which was focused on 'painting the big picture', emphasized the following themes: Integrated Landscape Monitoring, Global Climate Change, Ecosystem Modeling, and Hazards and Risks. The conference centered on providing a forum for modelers to meet, exchange information on current approaches, identify specific opportunities to share existing models and develop more linked and integrated models to address complex science questions, and increase collaboration across disciplines and with other organizations. Abstracts for the 31 oral presentations and more than 60 posters presented at the conference are included here. The conference also featured a field trip to review scientific modeling issues along the Gulf of Mexico. The field trip included visits to Mississippi Sandhill Crane National Wildlife Refuge, Grand Bay National Estuarine Research Reserve, the 5 Rivers Delta Resource Center, and Bon Secour National Wildlife Refuge. On behalf of all the participants of the Second All-USGS Modeling Conference, the conference organizing committee expresses its sincere appreciation for the support of field trip organizers and leaders, including the managers from the various Reserves and Refuges. The organizing committee for the conference included Jenifer Bracewell, Sally Brady, Jacoby Carter, Thomas Casadevall, Linda Gundersen, Tom Gunther, Heather Henkel, Lauren Hay, Pat Jellison, K. Bruce Jones, Kenneth Odom, and Mark Wildhaber.

  16. The universe before the Big Bang cosmology and string theory

    CERN Document Server

    Gasperini, Maurizio

    2008-01-01

    Terms such as "expanding Universe", "big bang", and "initial singularity" are nowadays part of our common language. The idea that the Universe we observe today originated from an enormous explosion (big bang) is now well known and widely accepted, at all levels, in modern popular culture. But what happens to the Universe before the big bang? And would it make any sense at all to ask such a question? In fact, recent progress in theoretical physics, and in particular in String Theory, suggests answers to the above questions, providing us with mathematical tools able in principle to reconstruct the history of the Universe even for times before the big bang. In the emerging cosmological scenario the Universe, at the epoch of the big bang, instead of being a "newborn baby" was actually a rather "aged" creature in the middle of its possibly infinitely enduring evolution. The aim of this book is to convey this picture in non-technical language accessible also to non-specialists. The author, himself a leading cosm...

  17. The research of approaches of applying the results of big data analysis in higher education

    Science.gov (United States)

    Kochetkov, O. T.; Prokhorov, I. V.

    2017-01-01

    This article briefly discusses approaches to the use of Big Data in the educational process of higher educational institutions. It gives a brief description of the nature of big data and its distribution in the education industry, and offers new ways to use Big Data as part of the educational process. The article also describes a method for analysing relevant search queries using Yandex.Wordstat (for laboratory work on data processing) and Google Trends (for an up-to-date picture of interest and preferences in a higher education institution).

  18. "Beyond the Big Bang: a new view of cosmology"

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    and parameters? Can one conceive of a completion of the scenario which resolves the big bang singularity and explains the dark energy now coming to dominate? Are we forced to resort to anthropic explanations? In this talk, I will develop an alternate picture, in which the big bang singularity is resolved and in which the value of the dark energy might be fixed by physical processes. The key is a resolution of the singularity. Using a combination of arguments, involving M theory and holography as well as analytic continuation in time within the low energy effective theory, I argue that there is a unique way to match cosmic evolution across the big bang singularity. The latter is no longer the beginning of time but is instead the gateway to an eternal, cyclical universe. If time permits, I shall describe new work c...

  19. Big(ger) Data as Better Data in Open Distance Learning

    Science.gov (United States)

    Prinsloo, Paul; Archer, Elizabeth; Barnes, Glen; Chetty, Yuraisha; van Zyl, Dion

    2015-01-01

    In the context of the hype, promise and perils of Big Data and the currently dominant paradigm of data-driven decision-making, it is important to critically engage with the potential of Big Data for higher education. We do not question the potential of Big Data, but we do raise a number of issues, and present a number of theses to be seriously…

  20. Semantic Web, Reusable Learning Objects, Personal Learning Networks in Health: Key Pieces for Digital Health Literacy.

    Science.gov (United States)

    Konstantinidis, Stathis Th; Wharrad, Heather; Windle, Richard; Bamidis, Panagiotis D

    2017-01-01

    The knowledge existing in the World Wide Web is expanding exponentially, while continuous advancements in the health sciences contribute to the creation of new knowledge. Many efforts try to identify how social connectivity can support patients' empowerment, while other studies look at the identification and quality of online materials. However, emphasis has not been put on the big picture of connecting the existing resources with patients' "new habits" of learning through their own Personal Learning Networks. In this paper we propose a framework for empowering patients' digital health literacy, adjusted to patients' current needs, by utilizing the contemporary way of learning through Personal Learning Networks, existing high-quality learning resources and semantic technologies for interconnecting knowledge pieces. The framework is based on the concept of knowledge maps for health as defined in this paper. Digital health literacy definitely needs further enhancement, and the use of the proposed concept might lead to useful tools that enable the use of understandable, trusted health resources tailored to each person's needs.

  1. The Big Money Question: Action Research Projects Give District a Clear Picture of Professional Learning's Impact

    Science.gov (United States)

    Dill-Varga, Barbara

    2015-01-01

    How do districts know if the resources they have allocated to support professional learning in their school district are actually improving the quality of teaching and impacting student performance? In an increasingly challenging financial environment, this is important to know. In this article, a Chicago-area district facing a budget deficit…

  2. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  3. The Role of Book Features in Young Children's Transfer of Information from Picture Books to Real-World Contexts.

    Science.gov (United States)

    Strouse, Gabrielle A; Nyhout, Angela; Ganea, Patricia A

    2018-01-01

    Picture books are an important source of new language, concepts, and lessons for young children. A large body of research has documented the nature of parent-child interactions during shared book reading. A new body of research has begun to investigate the features of picture books that support children's learning and transfer of that information to the real world. In this paper, we discuss how children's symbolic development, analogical reasoning, and reasoning about fantasy may constrain their ability to take away content information from picture books. We then review the nascent body of findings that has focused on the impact of picture book features on children's learning and transfer of words and letters, science concepts, problem solutions, and morals from picture books. In each domain of learning we discuss how children's development may interact with book features to impact their learning. We conclude that children's ability to learn and transfer content from picture books can be disrupted by some book features and research should directly examine the interaction between children's developing abilities and book characteristics on children's learning.

  4. The Role of Book Features in Young Children's Transfer of Information from Picture Books to Real-World Contexts

    Science.gov (United States)

    Strouse, Gabrielle A.; Nyhout, Angela; Ganea, Patricia A.

    2018-01-01

    Picture books are an important source of new language, concepts, and lessons for young children. A large body of research has documented the nature of parent-child interactions during shared book reading. A new body of research has begun to investigate the features of picture books that support children's learning and transfer of that information to the real world. In this paper, we discuss how children's symbolic development, analogical reasoning, and reasoning about fantasy may constrain their ability to take away content information from picture books. We then review the nascent body of findings that has focused on the impact of picture book features on children's learning and transfer of words and letters, science concepts, problem solutions, and morals from picture books. In each domain of learning we discuss how children's development may interact with book features to impact their learning. We conclude that children's ability to learn and transfer content from picture books can be disrupted by some book features and research should directly examine the interaction between children's developing abilities and book characteristics on children's learning. PMID:29467690

  5. The Role of Book Features in Young Children's Transfer of Information from Picture Books to Real-World Contexts

    Directory of Open Access Journals (Sweden)

    Gabrielle A. Strouse

    2018-02-01

    Full Text Available Picture books are an important source of new language, concepts, and lessons for young children. A large body of research has documented the nature of parent-child interactions during shared book reading. A new body of research has begun to investigate the features of picture books that support children's learning and transfer of that information to the real world. In this paper, we discuss how children's symbolic development, analogical reasoning, and reasoning about fantasy may constrain their ability to take away content information from picture books. We then review the nascent body of findings that has focused on the impact of picture book features on children's learning and transfer of words and letters, science concepts, problem solutions, and morals from picture books. In each domain of learning we discuss how children's development may interact with book features to impact their learning. We conclude that children's ability to learn and transfer content from picture books can be disrupted by some book features and research should directly examine the interaction between children's developing abilities and book characteristics on children's learning.

  6. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Fields, Brian D.; Olive, Keith A.

    2006-01-01

    We present an overview of the standard model of big bang nucleosynthesis (BBN), which describes the production of the light elements in the early universe. The theoretical prediction for the abundances of D, ³He, ⁴He, and ⁷Li is discussed. We emphasize the role of key nuclear reactions and the methods by which experimental cross section uncertainties are propagated into uncertainties in the predicted abundances. The observational determination of the light nuclides is also discussed. Particular attention is given to the comparison between the predicted and observed abundances, which yields a measurement of the cosmic baryon content. The spectrum of anisotropies in the cosmic microwave background (CMB) now independently measures the baryon density to high precision; we show how the CMB data test BBN, and find that the CMB and the D and ⁴He observations paint a consistent picture. This concordance stands as a major success of the hot big bang. On the other hand, ⁷Li remains discrepant with the CMB-preferred baryon density; possible explanations are reviewed. Finally, moving beyond the standard model, primordial nucleosynthesis constraints on early universe and particle physics are also briefly discussed

  7. Employing Picture Description to Assess the Students' Descriptive Paragraph Writing

    Directory of Open Access Journals (Sweden)

    Ida Ayu Mega Cahyani

    2018-03-01

    Full Text Available Writing is considered an important skill in the learning process that students need to master. However, in teaching and learning at schools and universities, the assessment of writing skill is often not the focus of the learning process and is administered inappropriately. In the present study, the researcher assessed the descriptive paragraph writing ability of students through picture description, employing an ex post facto research design. The study was intended to answer the research problem concerning the extent of students' achievement in descriptive paragraph writing when assessed through picture description. The sample consisted of 40 students determined by means of a random sampling technique with a lottery system. The data were collected by administering picture description as the research instrument. The obtained data were analyzed using a norm-referenced measure of five standard values. The results of the data analysis showed that 67.50% of the samples were successful in writing a descriptive paragraph, while 32.50% were unsuccessful when assessed by administering the picture description test.

  8. A Framework for Learning about Big Data with Mobile Technologies for Democratic Participation: Possibilities, Limitations, and Unanticipated Obstacles

    Science.gov (United States)

    Philip, Thomas M.; Schuler-Brown, Sarah; Way, Winmar

    2013-01-01

    As Big Data becomes increasingly important in policy-making, research, marketing, and commercial applications, we argue that literacy in this domain is critical for engaged democratic participation and that peer-generated data from mobile technologies offer rich possibilities for students to learn about this new genre of data. Through the lens of…

  9. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  10. Analysed potential of big data and supervised machine learning techniques in effectively forecasting travel times from fused data

    Directory of Open Access Journals (Sweden)

    Ivana Šemanjski

    2015-12-01

    Full Text Available Travel time forecasting is an interesting topic for many ITS services. The increased availability of data collection sensors increases the availability of predictor variables, but also highlights the heavy processing demands associated with this big data availability. In this paper we analyse the potential of big data and supervised machine learning techniques for effectively forecasting travel times. For this purpose we used fused data from three data sources (Global Positioning System vehicle tracks, road network infrastructure data and meteorological data) and four machine learning techniques (k-nearest neighbours, support vector machines, boosting trees and random forest). To evaluate the forecasting results we compared them across different road classes in terms of absolute values, measured in minutes, and the mean squared percentage error. For road classes with high average speeds and long road segments, the machine learning techniques forecasted travel times with small relative error, while for road classes with small average speeds and segment lengths this was a more demanding task. All three data sources proved to have a high impact on travel time forecast accuracy, and the best results (taking into account all road classes) were achieved with the k-nearest neighbours and random forest techniques.
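
    As a rough illustration of the comparison the abstract describes, the sketch below trains the four named model families on synthetic "fused" features and scores them with the mean squared percentage error; the features, data and hyperparameters are assumptions for the example, not the study's dataset.

```python
# Illustrative comparison of k-NN, SVM, boosting and random forest regressors on
# a synthetic travel time task, scored with mean squared percentage error (MSPE).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 2000
segment_km = rng.uniform(0.2, 10, n)           # assumed: from road network data
free_flow_kmh = rng.choice([50, 80, 120], n)   # assumed: nominal speed per road class
rain_mm = rng.exponential(1.0, n)              # assumed: from meteorological data
peak_hour = rng.integers(0, 2, n)              # assumed: from GPS track timestamps
speed = free_flow_kmh * (1 - 0.25 * peak_hour - 0.02 * np.minimum(rain_mm, 5))
travel_min = segment_km / speed * 60 * rng.lognormal(0, 0.1, n)

X = np.column_stack([segment_km, free_flow_kmh, rain_mm, peak_hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, travel_min, test_size=0.3, random_state=0)

def mspe(y_true, y_pred):
    return np.mean(((y_true - y_pred) / y_true) ** 2)

models = {
    "k-NN": KNeighborsRegressor(n_neighbors=10),
    "SVM": SVR(C=10.0),
    "boosting": GradientBoostingRegressor(random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MSPE =", round(mspe(y_te, model.predict(X_te)), 4))
```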

  11. Learning disability subtypes and the role of attention during the naming of pictures and words: an event-related potential analysis.

    Science.gov (United States)

    Greenham, Stephanie L; Stelmack, Robert M; van der Vlugt, Harry

    2003-01-01

    The role of attention in the processing of pictures and words was investigated for a group of normally achieving children and for groups of learning disability sub-types that were defined by deficient performance on tests of reading and spelling (Group RS) and of arithmetic (Group A). An event-related potential (ERP) recording paradigm was employed in which the children were required to attend to and name either pictures or words that were presented individually or in superimposed picture-word arrays that varied in degree of semantic relation. For Group RS, the ERP waves to words, both presented individually or attended in the superimposed array, exhibited reduced N450 amplitude relative to controls, whereas their ERP waves to pictures were normal. This suggests that the word-naming deficiency for Group RS is not a selective attention deficit but rather a specific linguistic deficit that develops at a later stage of processing. In contrast to Group RS and controls, Group A did not exhibit reliable early frontal negative waves (N280) to the super-imposed pictures and words, an effect that may reflect a selective attention deficit for these children that develops at an early stage of visuo-spatial processing. These early processing differences were also evident in smaller amplitude N450 waves for Group A when naming either pictures or words in the superimposed arrays.

  12. The Role of Pictures in Learning Biology: Part 1, Perception and Observation.

    Science.gov (United States)

    Reid, David

    1990-01-01

    The concept of a "picture superiority effect" is discussed. Examined are a number of perceptual considerations that need to be given to picture construction. Parameters which appear to attract the learner's attention to a picture are considered. (CW)

  13. A High-Order CFS Algorithm for Clustering Big Data

    Directory of Open Access Journals (Sweden)

    Fanyu Bu

    2016-01-01

    Full Text Available With the development of the Internet of Everything, such as the Internet of Things, the Internet of People, and the Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective at clustering the heterogeneous data that is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify the proposed algorithm on different datasets by comparison with two other clustering schemes, HOPCM and CFS. The results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.
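
    A minimal sketch of the base clustering step, under the assumption that CFS here refers to clustering by fast search and find of density peaks; the adaptive dropout feature learning and tensor-distance parts of HOCFS are not reproduced.

```python
# Sketch of density-peaks ("CFS") clustering: cluster centres are points that are
# both locally dense and far from any denser point; everything else inherits the
# label of its nearest denser neighbour.
import numpy as np

def cfs_cluster(X, dc=1.0, n_clusters=3):
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    rho = (dist < dc).sum(axis=1) - 1                              # local density (cutoff kernel)
    order = np.argsort(-rho, kind="stable")                        # points by decreasing density
    delta = np.zeros(n)                   # distance to the nearest point of higher density
    parent = np.zeros(n, dtype=int)       # that nearest denser point
    for rank, i in enumerate(order):
        if rank == 0:                     # global density peak: use its largest distance
            delta[i], parent[i] = dist[i].max(), i
            continue
        denser = order[:rank]
        parent[i] = denser[np.argmin(dist[i, denser])]
        delta[i] = dist[i, parent[i]]
    centers = np.argsort(rho * delta)[-n_clusters:]  # both dense and far from denser points
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                       # assign remaining points in decreasing density
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels

# Tiny usage example on three well-separated synthetic blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (4, 0), (0, 4))])
print(np.bincount(cfs_cluster(X, dc=1.0, n_clusters=3)))  # roughly 50 points per cluster
```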

  14. Initial conditions and the structure of the singularity in pre-big-bang cosmology

    NARCIS (Netherlands)

    Feinstein, A.; Kunze, K.E.; Vazquez-Mozo, M.A.

    2000-01-01

    We propose a picture, within the pre-big-bang approach, in which the universe emerges from a bath of plane gravitational and dilatonic waves. The waves interact gravitationally breaking the exact plane symmetry and lead generically to gravitational collapse resulting in a singularity with the

  15. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data

    Science.gov (United States)

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880

  16. A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.

    Science.gov (United States)

    Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming

    2018-01-01

    The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.
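
    The sampling idea in the two records above can be illustrated with a hedged sketch: learn the dictionary from a subsample of voxel time series, then sparse-code all voxels against it. Plain random sampling and scikit-learn's MiniBatchDictionaryLearning stand in for the paper's structurally guided sampling and its specific decomposition; the synthetic data and sizes are assumptions.

```python
# Illustrative data-reduction sketch: dictionary learning on a voxel sample,
# sparse coding of all voxels against the learned temporal atoms.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_components = 200, 10000, 25

# Synthetic "fMRI" data: each voxel time series is a sparse mix of latent temporal patterns.
latent = rng.normal(size=(n_timepoints, n_components))
mixing = rng.normal(size=(n_components, n_voxels)) * (rng.random((n_components, n_voxels)) < 0.1)
signals = latent @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Data reduction: learn the temporal dictionary from a sample of voxels only.
sampled = rng.choice(n_voxels, size=1000, replace=False)
dico = MiniBatchDictionaryLearning(n_components=n_components, alpha=1.0, random_state=0)
dico.fit(signals[:, sampled].T)             # rows = sampled voxel time series

# Sparse-code every voxel against the dictionary learned from the sample,
# giving whole-brain loadings at a fraction of the training cost.
codes = dico.transform(signals.T)
print(dico.components_.shape, codes.shape)  # (25, 200) temporal atoms, (10000, 25) loadings
```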

  17. Seeing the "Big" Picture: Big Data Methods for Exploring Relationships Between Usage, Language, and Outcome in Internet Intervention Data.

    Science.gov (United States)

    Carpenter, Jordan; Crutchley, Patrick; Zilca, Ran D; Schwartz, H Andrew; Smith, Laura K; Cobb, Angela M; Parks, Acacia C

    2016-08-31

    Assessing the efficacy of Internet interventions that are already in the market introduces both challenges and opportunities. While vast, often unprecedented amounts of data may be available (hundreds of thousands, and sometimes millions of participants with high dimensions of assessed variables), the data are observational in nature, are partly unstructured (eg, free text, images, sensor data), do not include a natural control group to be used for comparison, and typically exhibit high attrition rates. New approaches are therefore needed to use these existing data and derive new insights that can augment traditional smaller-group randomized controlled trials. Our objective was to demonstrate how emerging big data approaches can help explore questions about the effectiveness and process of an Internet well-being intervention. We drew data from the user base of a well-being website and app called Happify. To explore effectiveness, multilevel models focusing on within-person variation explored whether greater usage predicted higher well-being in a sample of 152,747 users. In addition, to explore the underlying processes that accompany improvement, we analyzed language for 10,818 users who had a sufficient volume of free-text response and timespan of platform usage. A topic model constructed from this free text provided language-based correlates of individual user improvement in outcome measures, providing insights into the beneficial underlying processes experienced by users. On a measure of positive emotion, the average user improved 1.38 points per week (SE 0.01, t(122,455)=113.60, P<.001). Several language topics had a significant effect on change in well-being over time, illustrating which topics may be more beneficial than others when engaging with the interventions. In particular, topics that are related to addressing negative thoughts and feelings were correlated with improvement over time. Using observational analyses on naturalistic big data, we can explore the relationship between usage and well-being among
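
    A hedged sketch of the kind of within-person multilevel model the abstract mentions, using a random intercept per user; the column names, synthetic data and model specification are assumptions for illustration, not the Happify analysis.

```python
# Hypothetical multilevel model: does well-being rise in weeks when a user is
# more active on the platform? Data and column names are made up for the example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_users, n_weeks = 300, 8
df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), n_weeks),
    "week": np.tile(np.arange(n_weeks), n_users),
    "activities": rng.poisson(3, n_users * n_weeks),   # weekly platform usage
})
user_baseline = rng.normal(50, 8, n_users)[df["user"]]
df["well_being"] = (user_baseline + 1.4 * df["week"]
                    + 0.8 * df["activities"] + rng.normal(0, 3, len(df)))

# Random intercept per user separates within-person change from between-person differences.
model = smf.mixedlm("well_being ~ week + activities", df, groups=df["user"]).fit()
print(model.summary())   # fixed effects: improvement per week and per activity
```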

  18. Moving Another Big Desk.

    Science.gov (United States)

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  19. Exploring the Use of Color Photographs in Chinese Picture Composition Writings: An Action Research in Singapore Schools

    Science.gov (United States)

    Qiyan, Wang; Kian Chye, Lim; Huay Lit Woo

    2006-01-01

    Writing picture compositions is part of the requirements for mother tongue language learning in Singapore primary schools. For Chinese as a mother tongue, the prevailing materials used for learning picture composition are confined to black-and-white drawn pictures only. This has caused some problems: (1) not many good and suitable…

  20. Big Bang, Blowup, and Modular Curves: Algebraic Geometry in Cosmology

    Science.gov (United States)

    Manin, Yuri I.; Marcolli, Matilde

    2014-07-01

    We introduce some algebraic geometric models in cosmology related to the "boundaries" of space-time: Big Bang, Mixmaster Universe, Penrose's crossovers between aeons. We suggest modelling the kinematics of the Big Bang using the algebraic geometric (or analytic) blow up of a point x. This creates a boundary which consists of the projective space of tangent directions to x and possibly of the light cone of x. We argue that time on the boundary undergoes the Wick rotation and becomes purely imaginary. The Mixmaster (Bianchi IX) model of the early history of the universe is neatly explained in this picture by postulating that the reverse Wick rotation follows a hyperbolic geodesic connecting the imaginary time axis to the real one. Penrose's idea to see the Big Bang as a sign of crossover from "the end of the previous aeon" of the expanding and cooling Universe to the "beginning of the next aeon" is interpreted as an identification of a natural boundary of Minkowski space at infinity with the Big Bang boundary.

  1. A glossary for big data in population and public health: discussion and commentary on terminology and research methods.

    Science.gov (United States)

    Fuller, Daniel; Buote, Richard; Stanley, Kevin

    2017-11-01

    The volume and velocity of data are growing rapidly and big data analytics are being applied to these data in many fields. Population and public health researchers may be unfamiliar with the terminology and statistical methods used in big data. This creates a barrier to the application of big data analytics. The purpose of this glossary is to define terms used in big data and big data analytics and to contextualise these terms. We define the five Vs of big data and provide definitions and distinctions for data mining, machine learning and deep learning, among other terms. We provide key distinctions between big data and statistical analysis methods applied to big data. We contextualise the glossary by providing examples where big data analysis methods have been applied to population and public health research problems and provide brief guidance on how to learn big data analysis methods. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Big Data: You Are Adding to . . . and Using It

    Science.gov (United States)

    Makela, Carole J.

    2016-01-01

    "Big data" prompts a whole lexicon of terms--data flow; analytics; data mining; data science; smart you name it (cars, houses, cities, wearables, etc.); algorithms; learning analytics; predictive analytics; data aggregation; data dashboards; digital tracks; and big data brokers. New terms are being coined frequently. Are we paying…

  3. A misleading feeling of happiness: metamemory for positive emotional and neutral pictures.

    Science.gov (United States)

    Hourihan, Kathleen L; Bursey, Elliott

    2017-01-01

    Emotional information is often remembered better than neutral information, but the emotional benefit for positive information is less consistently observed than the benefit for negative information. The current study examined whether positive emotional pictures are recognised better than neutral pictures, and further examined whether participants can predict how emotion affects picture recognition. In two experiments, participants studied a mixed list of positive and neutral pictures, and made immediate judgements of learning (JOLs). JOLs for positive pictures were consistently higher than for neutral pictures. However, recognition performance displayed an inconsistent pattern. In Experiment 1, neutral pictures were more discriminable than positive pictures, but Experiment 2 found no difference in recognition based on emotional content. Despite participants' beliefs, positive emotional content does not appear to consistently benefit picture memory.

  4. Picture reality decision, semantic categories and gender. A new set of pictures, with norms and an experimental study.

    Science.gov (United States)

    Barbarotto, Riccardo; Laiacona, Marcella; Macchi, Valeria; Capitani, Erminio

    2002-01-01

    We present a new corpus of 80 pictures of unreal objects, useful for a controlled assessment of object reality decision. The new pictures were assembled from parts of the Snodgrass and Vanderwart [J. Exp. Psychol., Hum. Learning Memory 6; 1980: 174] set and were devised for the purpose of contrasting natural categories (animals, fruits and vegetables), artefacts (tools, vehicles and furniture), body parts and musical instruments. We examined 140 normal subjects in a free-choice and a multiple-choice object decision task, assembled with 80 pictures of real objects and above 80 new pictures of unreal objects in order to obtain a difficulty index for each picture. We found that the tasks were more difficult with pictures representing natural entities than with pictures of artefacts. We found a gender by category interaction, with a female superiority with some natural categories (fruits and vegetables, but not animals), and a male advantage with artefacts. On this basis, the difficulty index we calculated for each picture is separately reported for males and females. We discuss the possible origin of the gender effect, which has been found with the same categories in other tasks and has a counterpart in the different familiarity of the stimuli for males and females. In particular, we contrast explanations based on socially determined gender differences with accounts based on evolutionary pressures. We further comment on the relationship between data from normal subjects and the domain-specific account of semantic category dissociations observed in brain-damaged patients.

  5. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing necessity within practitioners to understand better the phenomenon, starting from a clear stated definition. This book aims to be a starting reading for executives who want (and need) to keep the pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general - such as ethics, privacy, and ownership – while others concern more specific business situations (e.g., initial public offering, growth st...

  6. Learning and memory for sequences of pictures, words, and spatial locations: an exploration of serial position effects.

    Science.gov (United States)

    Bonk, William J; Healy, Alice F

    2010-01-01

    A serial reproduction of order with distractors task was developed to make it possible to observe successive snapshots of the learning process at each serial position. The new task was used to explore the effect of several variables on serial memory performance: stimulus content (words, blanks, and pictures), presentation condition (spatial information vs. none), semantically categorized item clustering (grouped vs. ungrouped), and number of distractors relative to targets (none, equal, double). These encoding and retrieval variables, along with learning attempt number, affected both overall performance levels and the shape of the serial position function, although a large and extensive primacy advantage and a small 1-item recency advantage were found in each case. These results were explained well by a version of the scale-independent memory, perception, and learning model that accounted for improved performance by increasing the value of only a single parameter that reflects reduced interference from distant items.

  7. Machine Learning for Big Data: A Study to Understand Limits at Scale

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Del-Castillo-Negrete, Carlos Emilio [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-21

    This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth it? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning, (ii) the richness of the feature space dictates performance - both accuracy and training time, (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the effort required for feature computation and training, and (iv) accuracy of the learning algorithms…
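
    The scaling questions posed above can be probed with a simple learning-curve experiment. The sketch below is illustrative only, assuming a synthetic data set and a logistic regression classifier rather than the algorithms and data used in the report; it records accuracy and training time as the training sample grows, which is where the reported saturation effect would show up.

      # Illustrative scaling experiment: accuracy and training time vs. sample size.
      import time
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=50_000, n_features=50,
                                 n_informative=10, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      for n in (500, 2_000, 8_000, 32_000):
          clf = LogisticRegression(max_iter=1000)
          start = time.perf_counter()
          clf.fit(X_train[:n], y_train[:n])          # train on a growing subsample
          elapsed = time.perf_counter() - start
          acc = accuracy_score(y_test, clf.predict(X_test))
          print(f"n={n:>6}  accuracy={acc:.3f}  train_time={elapsed:.2f}s")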

  8. The lack of a big picture in tuberculosis: the clinical point of view, the problems of experimental modeling and immunomodulation. The factors we should consider when designing novel treatment strategies.

    Science.gov (United States)

    Vilaplana, Cristina; Cardona, Pere-Joan

    2014-01-01

    This short review explores the large gap between clinical issues and basic science, and suggests why tuberculosis research should focus on redirecting the immune system and not only on eradicating the Mycobacterium tuberculosis bacillus. Throughout the manuscript, several concepts involved in human tuberculosis are explored in order to understand the big picture, including infection and disease dynamics, animal modeling, liquefaction, inflammation and immunomodulation. Scientists should take all these factors into account in order to answer questions with clinical relevance. Moreover, the concept that a strong inflammatory response is required to develop cavitary tuberculosis disease opens a new field for developing new therapeutic and prophylactic tools in which destruction of the bacilli may not necessarily be the final goal.

  9. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  10. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  11. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  12. Visualising Cultures: The "European Picture Book Collection" Moves "Down Under"

    Science.gov (United States)

    Cotton, Penni; Daly, Nicola

    2015-01-01

    The potential for picture books in national collections to act as mirrors reflecting the reader's cultural identity, is widely accepted. This paper shows that the books in a New Zealand Picture Book Collection can also become windows into unfamiliar worlds for non-New Zealand readers, giving them the opportunity to learn more about a context in…

  13. Cognitive computing and big data analytics

    CERN Document Server

    Hurwitz, Judith; Bowles, Adrian

    2015-01-01

    MASTER THE ABILITY TO APPLY BIG DATA ANALYTICS TO MASSIVE AMOUNTS OF STRUCTURED AND UNSTRUCTURED DATA Cognitive computing is a technique that allows humans and computers to collaborate in order to gain insights and knowledge from data by uncovering patterns and anomalies. This comprehensive guide explains the underlying technologies, such as artificial intelligence, machine learning, natural language processing, and big data analytics. It then demonstrates how you can use these technologies to transform your organization. You will explore how different vendors and different industries are a

  14. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  15. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis, as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent Techniques and Environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  16. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    Science.gov (United States)

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system: 3D GAIT, followed by how the studies in the field of gait biomechanics fit the quantities in the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
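
    The modular pipeline described above (input features, dimensionality reduction, then a learning algorithm) can be sketched as a scikit-learn pipeline. The example below uses synthetic stand-ins for gait feature vectors and class labels; with random labels the cross-validated accuracy will hover around chance, and real 3D GAIT data would replace the placeholders.

      # Illustrative pipeline: scaling -> dimensionality reduction -> classifier.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 120))       # placeholder: 300 trials x 120 gait variables
      y = rng.integers(0, 2, size=300)      # placeholder labels (e.g. injured vs. uninjured)

      pipeline = Pipeline([
          ("scale", StandardScaler()),      # normalise each gait variable
          ("reduce", PCA(n_components=10)), # feature extraction / dimensionality reduction
          ("classify", SVC(kernel="rbf")),  # learning algorithm
      ])
      scores = cross_val_score(pipeline, X, y, cv=5)
      print("cross-validated accuracy:", scores.mean())   # ~chance with random labels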

  17. Personalizing Medicine Through Hybrid Imaging and Medical Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Laszlo Papp

    2018-06-01

    Full Text Available Medical imaging has evolved from a pure visualization tool to representing a primary source of analytic approaches toward in vivo disease characterization. Hybrid imaging is an integral part of this approach, as it provides complementary visual and quantitative information in the form of morphological and functional insights into the living body. As such, non-invasive imaging modalities no longer provide images only, but data, as stated recently by pioneers in the field. Today, such information, together with other, non-imaging medical data creates highly heterogeneous data sets that underpin the concept of medical big data. While the exponential growth of medical big data challenges their processing, they inherently contain information that benefits a patient-centric personalized healthcare. Novel machine learning approaches combined with high-performance distributed cloud computing technologies help explore medical big data. Such exploration and subsequent generation of knowledge require a profound understanding of the technical challenges. These challenges increase in complexity when employing hybrid, aka dual- or even multi-modality image data as input to big data repositories. This paper provides a general insight into medical big data analysis in light of the use of hybrid imaging information. First, hybrid imaging is introduced (see further contributions to this special Research Topic, also in the context of medical big data, then the technological background of machine learning as well as state-of-the-art distributed cloud computing technologies are presented, followed by the discussion of data preservation and data sharing trends. Joint data exploration endeavors in the context of in vivo radiomics and hybrid imaging will be presented. Standardization challenges of imaging protocol, delineation, feature engineering, and machine learning evaluation will be detailed. Last, the paper will provide an outlook into the future role of hybrid

  18. Big Data and Machine Learning in Plastic Surgery: A New Frontier in Surgical Innovation.

    Science.gov (United States)

    Kanevsky, Jonathan; Corban, Jason; Gaster, Richard; Kanevsky, Ari; Lin, Samuel; Gilardino, Mirko

    2016-05-01

    Medical decision-making is increasingly based on quantifiable data. From the moment patients come into contact with the health care system, their entire medical history is recorded electronically. Whether a patient is in the operating room or on the hospital ward, technological advancement has facilitated the expedient and reliable measurement of clinically relevant health metrics, all in an effort to guide care and ensure the best possible clinical outcomes. However, as the volume and complexity of biomedical data grow, it becomes challenging to effectively process "big data" using conventional techniques. Physicians and scientists must be prepared to look beyond classic methods of data processing to extract clinically relevant information. The purpose of this article is to introduce the modern plastic surgeon to machine learning and computational interpretation of large data sets. What is machine learning? Machine learning, a subfield of artificial intelligence, can address clinically relevant problems in several domains of plastic surgery, including burn surgery; microsurgery; and craniofacial, peripheral nerve, and aesthetic surgery. This article provides a brief introduction to current research and suggests future projects that will allow plastic surgeons to explore this new frontier of surgical science.

  19. String Theory and Pre-big bang Cosmology

    CERN Document Server

    Gasperini, M.

    In string theory, the traditional picture of a Universe that emerges from the inflation of a very small and highly curved space-time patch is a possibility, not a necessity: quite different initial conditions are possible, and not necessarily unlikely. In particular, the duality symmetries of string theory suggest scenarios in which the Universe starts inflating from an initial state characterized by very small curvature and interactions. Such a state, being gravitationally unstable, will evolve towards higher curvature and coupling, until string-size effects and loop corrections make the Universe "bounce" into a standard, decreasing-curvature regime. In such a context, the hot big bang of conventional cosmology is replaced by a "hot big bounce" in which the bouncing and heating mechanisms originate from the quantum production of particles in the high-curvature, large-coupling pre-bounce phase. Here we briefly summarize the main features of this inflationary scenario, proposed a quarter century ago. In its si...

  20. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  1. Implications of Big Data for cell biology

    OpenAIRE

    Dolinski, Kara; Troyanskaya, Olga G.

    2015-01-01

    “Big Data” has surpassed “systems biology” and “omics” as the hottest buzzword in the biological sciences, but is there any substance behind the hype? Certainly, we have learned about various aspects of cell and molecular biology from the many individual high-throughput data sets that have been published in the past 15–20 years. These data, although useful as individual data sets, can provide much more knowledge when interrogated with Big Data approaches, such as applying integrative methods ...

  2. Statistical Challenges in Modeling Big Brain Signals

    KAUST Repository

    Yu, Zhaoxia; Pluta, Dustin; Shen, Tong; Chen, Chuansheng; Xue, Gui; Ombao, Hernando

    2017-01-01

    Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.

  3. A Big Bang Lab

    Science.gov (United States)

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that can not be set up in lab, either because they are too big, too far away, or happened in a very distant past. The authors of "How Far are the Stars?" show how the…

  4. Application and Exploration of Big Data Mining in Clinical Medicine.

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
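
    As a concrete illustration of one technique listed above, the sketch below fits a shallow decision tree for a hypothetical disease-risk assessment task on synthetic tabular data and prints its rules; it is not drawn from the reviewed studies.

      # Illustrative decision tree for a synthetic risk-assessment task.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier, export_text

      X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0)
      tree.fit(X_train, y_train)
      print("test accuracy:", tree.score(X_test, y_test))
      print(export_text(tree))   # human-readable rules, useful for clinical review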

  5. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  6. The Structural Consequences of Big Data-Driven Education.

    Science.gov (United States)

    Zeide, Elana

    2017-06-01

    Educators and commenters who evaluate big data-driven learning environments focus on specific questions: whether automated education platforms improve learning outcomes, invade student privacy, and promote equality. This article puts aside separate unresolved-and perhaps unresolvable-issues regarding the concrete effects of specific technologies. It instead examines how big data-driven tools alter the structure of schools' pedagogical decision-making, and, in doing so, change fundamental aspects of America's education enterprise. Technological mediation and data-driven decision-making have a particularly significant impact in learning environments because the education process primarily consists of dynamic information exchange. In this overview, I highlight three significant structural shifts that accompany school reliance on data-driven instructional platforms that perform core school functions: teaching, assessment, and credentialing. First, virtual learning environments create information technology infrastructures featuring constant data collection, continuous algorithmic assessment, and possibly infinite record retention. This undermines the traditional intellectual privacy and safety of classrooms. Second, these systems displace pedagogical decision-making from educators serving public interests to private, often for-profit, technology providers. They constrain teachers' academic autonomy, obscure student evaluation, and reduce parents' and students' ability to participate or challenge education decision-making. Third, big data-driven tools define what "counts" as education by mapping the concepts, creating the content, determining the metrics, and setting desired learning outcomes of instruction. These shifts cede important decision-making to private entities without public scrutiny or pedagogical examination. In contrast to the public and heated debates that accompany textbook choices, schools often adopt education technologies ad hoc. Given education

  7. Big data of tree species distributions

    DEFF Research Database (Denmark)

    Serra-Diaz, Josep M.; Enquist, Brian J.; Maitner, Brian

    2018-01-01

    Background: Trees play crucial roles in the biosphere and societies worldwide, with a total of 60,065 tree species currently identified. Increasingly, a large amount of data on tree species occurrences is being generated worldwide: from inventories to pressed plants. While many of these data are currently available in big databases, several challenges hamper their use, notably geolocation problems and taxonomic uncertainty. Further, we lack a complete picture of the data coverage and quality assessment for open/public databases of tree occurrences. Methods: We combined data from five major… …and data aggregation, especially from national forest inventory programs, to improve the current publicly available data.

  8. Big Data and Nursing: Implications for the Future.

    Science.gov (United States)

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

    Big data is becoming increasingly prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the required skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  9. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is increasingly growing, and traditional methods cannot manage it efficiently. In biomedical computation, the continuous challenges are: management, analysis, and storage of the biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques. It also allows a quick access to data using the NoSQL database. Thus, big data technologies include new frameworks to process medical data in a manner similar to biomedical images. It becomes very important to develop methods and/or architectures based on big data technologies, for a complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work methods in the field of big data software, using Hadoop and Spark frameworks. These provide an optimal and efficient architecture for biomedical image analysis. This paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithm for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier, and are adaptable in all of the steps from conception. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big
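
    A minimal sketch of the Spark-based branch of such a workflow is shown below: training a classifier on pre-extracted image feature vectors with pyspark.ml. The Parquet path and the column names (f0, f1, f2, label) are hypothetical placeholders for features produced by an upstream image-processing step, not part of the paper's architecture.

      # Illustrative Spark step: distributed training on pre-extracted image features.
      # The Parquet path and the column names (f0, f1, f2, label) are hypothetical.
      from pyspark.sql import SparkSession
      from pyspark.ml.classification import LogisticRegression
      from pyspark.ml.feature import VectorAssembler

      spark = SparkSession.builder.appName("biomedical-image-classification").getOrCreate()

      # Each row: one image, numeric features extracted upstream, plus a binary label.
      df = spark.read.parquet("hdfs:///biomed/image_features.parquet")

      assembler = VectorAssembler(inputCols=["f0", "f1", "f2"], outputCol="features")
      train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

      model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
      print("test AUC:", model.evaluate(test).areaUnderROC)   # binary classification summary

      spark.stop()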

  10. Supporting Learning from Illustrated Texts: Conceptualizing and Evaluating a Learning Strategy

    Science.gov (United States)

    Schlag, Sabine; Ploetzner, Rolf

    2011-01-01

    Texts and pictures are often combined in order to improve learning. Many students, however, have difficulty to appropriately process text-picture combinations. We have thus conceptualized a learning strategy which supports learning from illustrated texts. By inducing the processes of information selection, organization, integration, and…

  11. Readiness of Adults to Learn Using E-Learning, M-Learning and T-Learning Technologies

    Science.gov (United States)

    Vilkonis, Rytis; Bakanoviene, Tatjana; Turskiene, Sigita

    2013-01-01

    The article presents results of the empirical research revealing readiness of adults to participate in the lifelong learning process using e-learning, m-learning and t-learning technologies. The research has been carried out in the framework of the international project eBig3 aiming at development a new distance learning platform blending virtual…

  12. Toward a Learning Health-care System - Knowledge Delivery at the Point of Care Empowered by Big Data and NLP.

    Science.gov (United States)

    Kaggal, Vinod C; Elayavilli, Ravikumar Komandur; Mehrabi, Saeed; Pankratz, Joshua J; Sohn, Sunghwan; Wang, Yanshan; Li, Dingcheng; Rastegar, Majid Mojarad; Murphy, Sean P; Ross, Jason L; Chaudhry, Rajeev; Buntrock, James D; Liu, Hongfang

    2016-01-01

    The concept of optimizing health care by understanding and generating knowledge from previous evidence, ie, the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and real time for knowledge delivery. Additionally, significant clinical information is embedded in the free text, making natural language processing (NLP) an essential component in implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also has real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects including the MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the advantages of big data over two other environments. Big data infrastructure significantly outperformed other infrastructure in terms of computing speed, demonstrating its value in making the LHS a possibility in the near future.
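
    The role of NLP in such an infrastructure can be hinted at with a toy example. The sketch below (not the Mayo implementation) maps free-text phrases to normalized concepts with a small hand-made dictionary; a production learning health-care system would use a full clinical NLP pipeline instead.

      # Toy concept extraction: map free-text phrases to normalized clinical concepts.
      import re

      CONCEPTS = {                          # hypothetical concept dictionary
          "diabetes": "diabetes mellitus",
          "hypertension": "hypertension",
          "shortness of breath": "dyspnea",
      }

      def extract_concepts(note: str) -> set:
          """Return the normalized concepts mentioned in a clinical note."""
          found = set()
          for phrase, concept in CONCEPTS.items():
              if re.search(rf"\b{re.escape(phrase)}\b", note, flags=re.IGNORECASE):
                  found.add(concept)
          return found

      note = "Pt with long-standing diabetes, reports shortness of breath on exertion."
      print(extract_concepts(note))         # {'diabetes mellitus', 'dyspnea'}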

  13. The "Picture and Picture" Learning Method for Writing Fictional Novel Texts in the Textbook "Bahasa Indonesia: Ekspresi Diri dan Akademik" for SMA/MA/SMK/MAK Grade XII, Semester 2, 2013 Curriculum

    Directory of Open Access Journals (Sweden)

    Jamilatus Sa'adah

    2017-04-01

    Full Text Available This study expects teachers to have learning methods that better support classroom learning, resulting in productive and active learning. The Indonesian-language textbook "Bahasa Indonesia: Ekspresi Diri dan Akademik" for SMA/MA/SMK/MAK grade XII, second semester of the 2013 curriculum, published by the Ministry of Education and Culture, covers learning to write fictional novel texts. This study uses the Picture and Picture method, a learning model in which pictures are paired or sorted into a logical sequence; the learning it supports is characterized as Active, Innovative, Creative, and Fun. The model relies on pictures as the medium of learning, and these pictures become a major factor in the learning process. In teaching the writing of fictional novel texts, the author examines the Picture and Picture learning method.

  14. The New Possibilities from "Big Data" to Overlooked Associations Between Diabetes, Biochemical Parameters, Glucose Control, and Osteoporosis.

    Science.gov (United States)

    Kruse, Christian

    2018-06-01

    To review current practices and technologies within the scope of "Big Data" that can further our understanding of diabetes mellitus and osteoporosis from large volumes of data. "Big Data" techniques involving supervised machine learning, unsupervised machine learning, and deep learning image analysis are presented with examples of current literature. Supervised machine learning can allow us to better predict diabetes-induced osteoporosis and understand relative predictor importance of diabetes-affected bone tissue. Unsupervised machine learning can allow us to understand patterns in data between diabetic pathophysiology and altered bone metabolism. Image analysis using deep learning can allow us to be less dependent on surrogate predictors and use large volumes of images to classify diabetes-induced osteoporosis and predict future outcomes directly from images. "Big Data" techniques herald new possibilities to understand diabetes-induced osteoporosis and ascertain our current ability to classify, understand, and predict this condition.
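
    The supervised-learning use case above, predicting an osteoporosis outcome and reading off relative predictor importance, can be sketched as follows. The feature names and the synthetic data are hypothetical placeholders, not results from the reviewed literature.

      # Illustrative predictor-importance analysis on synthetic clinical features.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(7)
      features = ["age", "hba1c", "bmi", "bmd_tscore", "diabetes_duration"]
      X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
      y = rng.integers(0, 2, size=500)      # placeholder: 1 = osteoporosis, 0 = none

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      for name, importance in sorted(zip(features, clf.feature_importances_),
                                     key=lambda item: -item[1]):
          print(f"{name:>18}: {importance:.3f}")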

  15. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have their own tribune on the topic. Perspectives and debates are flourishing, yet there is no consensus definition of big data. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine; yet these techniques are far from new, the most ancient being more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields such as artificial intelligence are often underestimated, if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  16. Statistical Challenges in Modeling Big Brain Signals

    KAUST Repository

    Yu, Zhaoxia

    2017-11-01

    Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.

  17. Translating Big Data into Smart Data for Veterinary Epidemiology.

    Science.gov (United States)

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to create "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  18. Big data based fraud risk management at Alibaba

    OpenAIRE

    Chen, Jidong; Tao, Ye; Wang, Haoran; Chen, Tao

    2015-01-01

    With the development of mobile internet and finance, fraud risk comes in all shapes and sizes. This paper introduces fraud risk management at Alibaba under big data. Alibaba has built a fraud risk monitoring and management system based on real-time big data processing and intelligent risk models. It captures fraud signals directly from huge amounts of data on user behaviors and the network, analyzes them in real time using machine learning, and accurately predicts bad users and transactions.

  19. Big data analytics for early detection of breast cancer based on machine learning

    Science.gov (United States)

    Ivanova, Desislava

    2017-12-01

    This paper presents the concept of, and modern advances in, personalized medicine that rely on technology, and reviews the existing tools for early detection of breast cancer. The breast cancer types and their distribution worldwide are discussed. Time is spent explaining the importance of identifying normality and specifying the main classes in breast cancer, benign or malignant. The main purpose of the paper is to propose a conceptual model for early detection of breast cancer based on machine learning for processing and analysis of medical big data and further knowledge discovery for personalized treatment. The proposed conceptual model is realized using a Naive Bayes classifier. The software is written in the Python programming language, and the Wisconsin breast cancer database is used for the experiments. Finally, the experimental results are presented and discussed.
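
    A sketch along the lines described in the abstract is shown below: a Gaussian Naive Bayes classifier trained on the Wisconsin breast cancer data, here taken from scikit-learn's bundled copy. The paper's own preprocessing and implementation may differ, so this is an illustration of the approach rather than a reproduction.

      # Gaussian Naive Bayes on the Wisconsin breast cancer data (scikit-learn copy).
      from sklearn.datasets import load_breast_cancer
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      X, y = load_breast_cancer(return_X_y=True)     # labels: 0 = malignant, 1 = benign
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, random_state=0, stratify=y)

      model = GaussianNB().fit(X_train, y_train)
      print(classification_report(y_test, model.predict(X_test),
                                  target_names=["malignant", "benign"]))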

  20. Application and Exploration of Big Data Mining in Clinical Medicine

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literatures published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378

  1. Big data analysis new algorithms for a new society

    CERN Document Server

    Stefanowski, Jerzy

    2016-01-01

    This edited volume is devoted to Big Data Analysis from a Machine Learning standpoint as presented by some of the most eminent researchers in this area. It demonstrates that Big Data Analysis opens up new research problems which were either never considered before, or were only considered within a limited range. In addition to providing methodological discussions on the principles of mining Big Data and the difference between traditional statistical data analysis and newer computing frameworks, this book presents recently developed algorithms affecting such areas as business, financial forecasting, human mobility, the Internet of Things, information networks, bioinformatics, medical systems and life science. It explores, through a number of specific examples, how the study of Big Data Analysis has evolved and how it has started and will most likely continue to affect society. While the benefits brought upon by Big Data Analysis are underlined, the book also discusses some of the warnings that have been issued...

  2. Big data analytics to improve cardiovascular care: promise and challenges.

    Science.gov (United States)

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  3. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  4. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.
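
    One of the methods named above, concept drift machine learning, can be illustrated with a small streaming sketch: an incremental classifier is updated batch by batch so that it can track a data stream whose decision boundary slowly changes. The simulated stream and the choice of classifier below are illustrative assumptions, not tools prescribed by the article.

      # Illustrative drift handling: an incremental classifier updated batch by batch.
      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(3)
      clf = SGDClassifier()
      classes = np.array([0, 1])

      for batch in range(10):
          drift = batch / 10.0                     # the decision boundary slowly shifts
          X = rng.normal(size=(200, 5))
          y = (X[:, 0] + drift * X[:, 1] > 0).astype(int)
          if batch > 0:
              print(f"batch {batch}: accuracy on new data = {clf.score(X, y):.2f}")
          clf.partial_fit(X, y, classes=classes)   # update rather than retrain from scratch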

  5. Nurse practitioner preferences for distance education methods related to learning style, course content, and achievement.

    Science.gov (United States)

    Andrusyszyn, M A; Cragg, C E; Humbert, J

    2001-04-01

    The relationships among multiple distance delivery methods, preferred learning style, content, and achievement were sought for primary care nurse practitioner students. A researcher-designed questionnaire was completed by 86 (71%) participants, while 6 engaged in follow-up interviews. The results of the study included: participants preferred learning by "considering the big picture"; "setting own learning plans"; and "focusing on concrete examples." Several positive associations were found: learning on own with learning by reading, and setting own learning plans; small group with learning through discussion; large group with learning new things through hearing and with having learning plans set by others. The most preferred method was print-based material and the least preferred method was audio tape. The most suited method for content included video teleconferencing for counseling, political action, and transcultural issues; and video tape for physical assessment. Convenience, self-direction, and timing of learning were more important than delivery method or learning style. Preferred order of learning was reading, discussing, observing, doing, and reflecting. Recommended considerations when designing distance courses include a mix of delivery methods, specific content, outcomes, learner characteristics, and state of technology.

  6. Recreating the Big Bang to learn more about the universe

    CERN Multimedia

    2005-01-01

    A multi-nation effort at the Geneva-based CERN laboratory to recreate conditions existing just after the Big Bang could give vital clues to the creation of the universe and help overcome prejudices against this widely held scientific theory, an eminent science writer said in Kolkata on Tuesday.

  7. Toward a Learning Health-care System – Knowledge Delivery at the Point of Care Empowered by Big Data and NLP

    Science.gov (United States)

    Kaggal, Vinod C.; Elayavilli, Ravikumar Komandur; Mehrabi, Saeed; Pankratz, Joshua J.; Sohn, Sunghwan; Wang, Yanshan; Li, Dingcheng; Rastegar, Majid Mojarad; Murphy, Sean P.; Ross, Jason L.; Chaudhry, Rajeev; Buntrock, James D.; Liu, Hongfang

    2016-01-01

    The concept of optimizing health care by understanding and generating knowledge from previous evidence, i.e., the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating the LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and in real time for knowledge delivery. Additionally, significant clinical information is embedded in the free text, making natural language processing (NLP) an essential component in implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also has real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects including the MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the advantages of the big data infrastructure with those of two other environments. The big data infrastructure significantly outperformed the other infrastructures in terms of computing speed, demonstrating its value in making the LHS a possibility in the near future. PMID:27385912

  8. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  9. BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Machine learning and data mining advance predictive big data analysis in precision animal agriculture.

    Science.gov (United States)

    Morota, Gota; Ventura, Ricardo V; Silva, Fabyano F; Koyama, Masanori; Fernando, Samodha C

    2018-04-14

    Precision animal agriculture is poised to rise to prominence in the livestock enterprise in the domains of management, production, welfare, sustainability, health surveillance, and environmental footprint. Considerable progress has been made in the use of tools to routinely monitor and collect information from animals and farms in a less laborious manner than before. These efforts have enabled the animal sciences to embark on information technology-driven discoveries to improve animal agriculture. However, the growing amount and complexity of data generated by fully automated, high-throughput data recording or phenotyping platforms, including digital images, sensor and sound data, unmanned systems, and information obtained from real-time noninvasive computer vision, pose challenges to the successful implementation of precision animal agriculture. The emerging fields of machine learning and data mining are expected to be instrumental in helping meet the daunting challenges facing global agriculture. Yet, their impact and potential in "big data" analysis have not been adequately appreciated in the animal science community, where this recognition has remained only fragmentary. To address such knowledge gaps, this article outlines a framework for machine learning and data mining and offers a glimpse into how they can be applied to solve pressing problems in animal sciences.

  10. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034
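
    The "concept drift" algorithms mentioned above are those that keep adapting as the statistical properties of incoming data change, for instance when clinical coding practices or patient populations shift over time. The snippet below is only a rough illustration of that idea, written here with scikit-learn's incremental SGDClassifier; the class name, window size, and error threshold are invented for the example and are not taken from the record.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      class DriftAwareClassifier:
          """Incremental classifier that resets itself when its recent error
          rate rises, a very simple reaction to suspected concept drift."""

          def __init__(self, window=200, error_threshold=0.35):
              self.model = SGDClassifier()
              self.window = window                  # how many recent predictions to track
              self.error_threshold = error_threshold
              self.recent_errors = []
              self.is_fitted = False

          def update(self, X, y, classes=(0, 1)):
              X, y = np.asarray(X), np.asarray(y)
              if self.is_fitted:
                  errors = (self.model.predict(X) != y).tolist()
                  self.recent_errors = (self.recent_errors + errors)[-self.window:]
                  if np.mean(self.recent_errors) > self.error_threshold:
                      # Suspected drift: discard the stale model and its error history.
                      self.model = SGDClassifier()
                      self.recent_errors = []
                      self.is_fitted = False
              # Learn (or re-learn) from the newest mini-batch of streaming data.
              self.model.partial_fit(X, y, classes=np.asarray(classes))
              self.is_fitted = True

    More elaborate drift detectors exist, but the essential pattern is the same: monitor performance on recent data and adapt the model rather than assume the training distribution is fixed.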

  11. Military Simulation Big Data: Background, State of the Art, and Challenges

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Big data technology has undergone rapid development and attained great success in the business field. Military simulation (MS) is another application domain producing massive datasets created by high-resolution models and large-scale simulations. It is used to study complicated problems such as weapon systems acquisition, combat analysis, and military training. This paper first reviewed several large-scale military simulations producing big data (MS big data) for a variety of usages and summarized the main characteristics of result data. Then we looked at the technical details involving the generation, collection, processing, and analysis of MS big data. Two frameworks were also surveyed to trace the development of the underlying software platform. Finally, we identified some key challenges and proposed a framework as a basis for future work. This framework considered both the simulation and big data management at the same time based on layered and service-oriented architectures. The objective of this review is to help interested researchers learn the key points of MS big data and provide references for tackling the big data problem and performing further research.

  12. WE-H-BRB-02: Where Do We Stand in the Applications of Big Data in Radiation Oncology?

    International Nuclear Information System (INIS)

    Xing, L.

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI Google Inc.

  13. WE-H-BRB-02: Where Do We Stand in the Applications of Big Data in Radiation Oncology?

    Energy Technology Data Exchange (ETDEWEB)

    Xing, L. [Stanford University School of Medicine (United States)

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI Google Inc.

  14. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. However, more recently, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review the current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  15. [Big data approaches in psychiatry: examples in depression research].

    Science.gov (United States)

    Bzdok, D; Karrer, T M; Habel, U; Schneider, F

    2017-11-29

    The exploration and therapy of depression is aggravated by heterogeneous etiological mechanisms and various comorbidities. With the growing trend towards big data in psychiatry, research and therapy can increasingly target the individual patient. This novel objective requires special methods of analysis. The possibilities and challenges of the application of big data approaches in depression are examined in closer detail. Examples are given to illustrate the possibilities of big data approaches in depression research. Modern machine learning methods are compared to traditional statistical methods in terms of their potential in applications to depression. Big data approaches are particularly suited to the analysis of detailed observational data, the prediction of single data points or several clinical variables and the identification of endophenotypes. A current challenge lies in the transfer of results into the clinical treatment of patients with depression. Big data approaches enable biological subtypes in depression to be identified and predictions in individual patients to be made. They have enormous potential for prevention, early diagnosis, treatment choice and prognosis of depression as well as for treatment development.

  16. A practical guide to big data research in psychology.

    Science.gov (United States)

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
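
    As a concrete illustration of the two tutorial techniques named above (latent Dirichlet allocation for topic modeling and support vector machines for classification), here is a minimal, self-contained sketch in Python with scikit-learn; the toy documents, labels, and settings are invented, so this shows only the general shape of such analyses, not the article's walkthroughs.

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      docs = ["I love hiking and the outdoors",
              "markets fell sharply today",
              "a new trail opened in the park",
              "stocks rallied after the earnings report"]
      labels = ["leisure", "finance", "leisure", "finance"]

      # Unsupervised topic modeling: LDA on raw term counts.
      counts = CountVectorizer(stop_words="english").fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      doc_topics = lda.transform(counts)        # per-document topic proportions

      # Supervised classification: linear SVM on TF-IDF features.
      clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
      clf.fit(docs, labels)
      print(clf.predict(["the index gained two percent"]))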

  17. Translating Big Data into Smart Data for Veterinary Epidemiology

    Directory of Open Access Journals (Sweden)

    Kimberly VanderWaal

    2017-07-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having “big data” to create “smart data,” with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  18. What Is Seen Is Who You Are: Are Cues in Selfie Pictures Related to Personality Characteristics?

    Science.gov (United States)

    Musil, Bojan; Preglej, Andrej; Ropert, Tadevž; Klasinc, Lucia; Babič, Nenad Č.

    2017-01-01

    Developments and innovation in the areas of mobile information technology, digital media and social networks foster new reflections on computer-mediated communication research, especially in the field of self-presentation. In this context, the selfie as a self-portrait photo is interesting, because as a meaningful gesture, it actively and directly relates the content of the photo to the author of the picture. From the perspective of the selfie as an image and the impression it forms, in the first part of the research we explored the distinctive characteristics of selfie pictures; moreover, from the perspective of the potential reflection of a selfie image on the personality of its author, in the second part we related the characteristics of selfie pictures to various personality constructs (e.g., Big Five personality traits, narcissism, and femininity-masculinity). Important aspects of selfies, especially in relation to gender, include the tilt of the head, the side of the face exhibited, mood, and head position, later also related to the context of the selfie picture. We found no significant relations between selfie cues and personality constructs. The face-ism index was related to entitlement, and selfie availability to neuroticism. PMID:28197113

  19. What Is Seen Is Who You Are: Are Cues in Selfie Pictures Related to Personality Characteristics?

    Science.gov (United States)

    Musil, Bojan; Preglej, Andrej; Ropert, Tadevž; Klasinc, Lucia; Babič, Nenad Č

    2017-01-01

    Developments and innovation in the areas of mobile information technology, digital media and social networks foster new reflections on computer-mediated communication research, especially in the field of self-presentation. In this context, the selfie as a self-portrait photo is interesting, because as a meaningful gesture, it actively and directly relates the content of the photo to the author of the picture. From the perspective of the selfie as an image and the impression it forms, in the first part of the research we explored the distinctive characteristics of selfie pictures; moreover, from the perspective of the potential reflection of a selfie image on the personality of its author, in the second part we related the characteristics of selfie pictures to various personality constructs (e.g., Big Five personality traits, narcissism, and femininity-masculinity). Important aspects of selfies, especially in relation to gender, include the tilt of the head, the side of the face exhibited, mood, and head position, later also related to the context of the selfie picture. We found no significant relations between selfie cues and personality constructs. The face-ism index was related to entitlement, and selfie availability to neuroticism.

  20. The Effect of Extraversion and Presentation Order on Learning from Picture-Commentary Sequences by Children.

    Science.gov (United States)

    Riding, R. J.; Wicks, B. J.

    1978-01-01

    Groups of extrovert, ambivert, and introvert children, aged 8, saw pictures with a taped commentary about each. On an immediate recall test, extroverts recalled most if given the commentary before the picture, introverts did best when the picture came first, and ambiverts performed similarly in both conditions. (Author/SJL)

  1. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements for data management. If the term "big data" is defined as data collections that are too large to move then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  2. Picture languages formal models for picture recognition

    CERN Document Server

    Rosenfeld, Azriel

    1979-01-01

    Computer Science and Applied Mathematics: Picture Languages: Formal Models for Picture Recognition treats pictorial pattern recognition from the formal standpoint of automata theory. This book emphasizes the capabilities and relative efficiencies of two types of automata-array automata and cellular array automata, with respect to various array recognition tasks. The array automata are simple processors that perform sequences of operations on arrays, while the cellular array automata are arrays of processors that operate on pictures in a highly parallel fashion, one processor per picture element. This compilation also reviews a collection of results on two-dimensional sequential and parallel array acceptors. Some of the analogous one-dimensional results and array grammars and their relation to acceptors are likewise covered in this text. This publication is suitable for researchers, professionals, and specialists interested in pattern recognition and automata theory.

  3. The Evolution of Big Data and Learning Analytics in American Higher Education

    Science.gov (United States)

    Picciano, Anthony G.

    2012-01-01

    Data-driven decision making, popularized in the 1980s and 1990s, is evolving into a vastly more sophisticated concept known as big data that relies on software approaches generally referred to as analytics. Big data and analytics for instructional applications are in their infancy and will take a few years to mature, although their presence is…

  4. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are often mentioned in relation to big data entail? As an introduction to

  5. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language.

    Science.gov (United States)

    Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made less errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  6. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language

    Directory of Open Access Journals (Sweden)

    Claudia Repetto

    2017-12-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made less errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  7. Big Data in radiation therapy: challenges and opportunities.

    Science.gov (United States)

    Lustberg, Tim; van Soest, Johan; Jochems, Arthur; Deist, Timo; van Wijk, Yvonka; Walsh, Sean; Lambin, Philippe; Dekker, Andre

    2017-01-01

    Data collected and generated by radiation oncology can be classified by the Volume, Variety, Velocity and Veracity (4Vs) of Big Data because they are spread across different care providers and not easily shared owing to patient privacy protection. The magnitude of the 4Vs is substantial in oncology, especially owing to imaging modalities and unclear data definitions. To create useful models ideally all data of all care providers are understood and learned from; however, this presents challenges in the guise of poor data quality, patient privacy concerns, geographical spread, interoperability and large volume. In radiation oncology, there are many efforts to collect data for research and innovation purposes. Clinical trials are the gold standard when proving any hypothesis that directly affects the patient. Collecting data in registries with strict predefined rules is also a common approach to find answers. A third approach is to develop data stores that can be used by modern machine learning techniques to provide new insights or answer hypotheses. We believe all three approaches have their strengths and weaknesses, but they should all strive to create Findable, Accessible, Interoperable, Reusable (FAIR) data. To learn from these data, we need distributed learning techniques, sending machine learning algorithms to FAIR data stores around the world, learning from trial data, registries and routine clinical data rather than trying to centralize all data. To improve and personalize medicine, rapid learning platforms must be able to process FAIR "Big Data" to evaluate current clinical practice and to guide further innovation.
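
    The distributed-learning idea sketched in the abstract, sending algorithms to the data rather than centralizing patient records, can be illustrated with a toy federated-averaging loop. Everything below (the simulated "hospital" datasets, the logistic-regression update, the number of rounds) is an assumption made for illustration and is not the authors' actual platform.

      import numpy as np

      def local_update(weights, X, y, lr=0.1, epochs=20):
          """A few gradient-descent steps on one site's private data."""
          w = weights.copy()
          for _ in range(epochs):
              p = 1.0 / (1.0 + np.exp(-X @ w))      # logistic predictions
              w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
          return w

      def federated_round(weights, sites):
          """One round: ship the model out, average the locally updated copies."""
          return np.mean([local_update(weights, X, y) for X, y in sites], axis=0)

      rng = np.random.default_rng(0)
      sites = []                                    # three sites with private data
      for _ in range(3):
          X = rng.normal(size=(100, 2))
          y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
          sites.append((X, y))

      w = np.zeros(2)
      for _ in range(10):
          w = federated_round(w, sites)
      print("aggregated model weights:", w)

    Only model parameters cross site boundaries here, which is the property that makes this style of learning compatible with FAIR but privacy-sensitive data stores.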

  8. Big picture thinking in oil sands tailings disposal

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    This PowerPoint presentation discussed methods of disposing oil sands tailings. Oil sands operators are currently challenged by a variety of legislative and environmental factors concerning the creation and disposal of oil sands tailings. The media has focused on the negative ecological impact of oil sands production, and technical issues are reducing the effect of some mitigation processes. Operators must learn to manage the interface between tailings production and removal, the environment, and public opinion. The successful management of oil sand tailings will include procedures designed to improve reclamation processes, understand environmental laws and regulations, and ensure that the cumulative impacts of tailings are mitigated. Geotechnical investigations, engineering designs and various auditing procedures can be used to develop tailings management plans. Environmental screening and impact assessments can be used to develop sustainable solutions. Public participation and environmental mediation is needed to integrate the public, environmental and technical tailings management strategies. Operators must ensure public accountability for all stakeholders. tabs., figs.

  9. Pictures in Pictures: Art History and Art Museums in Children's Picture Books

    Science.gov (United States)

    Yohlin, Elizabeth

    2012-01-01

    Children's picture books that recreate, parody, or fictionalize famous artworks and introduce the art museum experience, a genre to which I will refer as "children's art books," have become increasingly popular over the past decade. This essay explores the pedagogical implications of this trend through the family program "Picture Books and Picture…

  10. Learning Across the Big-Science Boundary: Leveraging Big-Science Centers for Technological Learning

    CERN Document Server

    Autio, E.; Streit-Bianchi, M.

    2003-01-01

    The interaction between industrial companies and the public research sector has intensified significantly during recent years (Bozeman, 2000), as firms attempt to build competitive advantage by leveraging external sources of learning (Lambe et al., 1997). By crossing the boundary between industrial and re- search spheres, firms may tap onto sources of technological learning, and thereby gain a knowledge- based competitive advantage over their competitors. Such activities have been actively supported by national governments, who strive to support the international competitiveness of their industries (Georghiou et al., 2000; Lee, 1994; Rothwell et al., 1992).

  11. Learning style, judgements of learning, and learning of verbal and visual information.

    Science.gov (United States)

    Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger

    2017-08-01

    The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.

  12. A Comparative Study of Children's Concentration Performance on Picture Books: Age, Gender, and Media Forms

    Science.gov (United States)

    Ma, Min-Yuan; Wei, Chun-Chun

    2016-01-01

    The reading development of children depends on various sensory stimuli, which help them construct reading contexts and facilitate active learning and exploration. This study uses sensory stimuli provided by picture books using various forms of media to improve children's concentration performance. We employ picture books using four forms of media:…

  13. Highcrop picture tool

    OpenAIRE

    Fog, Erik

    2013-01-01

    Pictures provide different impulses than words and numbers. With images, you can easily spot new opportunities. The Highcrop tool allows for optimization of the organic arable farm based on picture cards. The picture cards are designed to make it easier and more inspiring to look closely at the details of production. By using the picture cards you can spot the areas where there is potential to optimize the production system for better results in the future. Highcrop picture cards can be used to:...

  14. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Transportation, diseases, socio-economic monitoring. The Sri Lankan think tank, Learning Initiatives on Reforms for Network Economies Asia (LIRNEasia), has been exploring the possibility of using big data to inform public policy since 2012. Supported by IDRC, this research focused on transportation planning in urban ...

  15. When Learning Analytics Meets E-Learning

    Science.gov (United States)

    Czerkawski, Betul C.

    2015-01-01

    While student data systems are nothing new and most educators have been dealing with student data for many years, learning analytics has emerged as a new concept to capture educational big data. Learning analytics is about better understanding of the learning and teaching process and interpreting student data to improve their success and learning…

  16. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    Science.gov (United States)

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
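
    To make the supervised/unsupervised distinction discussed above concrete, the fragment below clusters unlabeled feature vectors and, separately, trains a classifier on labeled ones. The synthetic "measurements" and the choice of k-means and k-nearest neighbours are illustrative assumptions, not the methods reviewed in the article.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(42)
      healthy = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
      disease = rng.normal(loc=2.0, scale=1.0, size=(50, 4))
      X = np.vstack([healthy, disease])
      y = np.array([0] * 50 + [1] * 50)

      # Unsupervised: look for structure without using the labels.
      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

      # Supervised: learn a diagnostic decision rule from labeled examples.
      clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
      print("cluster sizes:", np.bincount(clusters))
      print("prediction for a new sample:", clf.predict([[1.8, 2.1, 1.9, 2.2]]))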

  17. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  18. Picture-Word Differences in Discrimination Learning: 11. Effects of Conceptual Categories

    Science.gov (United States)

    Bourne, Lyle E.; And Others

    1976-01-01

    Investigates the prediction that the usual superiority of pictures over words for repetitions of the same items would disappear for items that were different instances of repeated categories. (Author/RK)

  19. The emerging role of Big Data in key development issues: Opportunities, challenges, and concerns

    Directory of Open Access Journals (Sweden)

    Nir Kshetri

    2014-12-01

    This paper presents a review of academic literature, policy documents from government organizations and international agencies, and reports from industries and popular media on the trends in Big Data utilization in key development issues and its worthwhileness, usefulness, and relevance. By looking at Big Data deployment in a number of key economic sectors, it seeks to provide a better understanding of the opportunities and challenges of using it for addressing key issues facing the developing world. It reviews the uses of Big Data in agriculture and farming activities in developing countries to assess the capabilities required at various levels to benefit from Big Data. It also provides insights into how the current digital divide is associated with and facilitated by the pattern of Big Data diffusion and its effective use in key development areas. It also discusses the lessons that developing countries can learn from the utilization of Big Data in big corporations as well as in other activities in industrialized countries.

  20. Using an adapted form of the picture exchange communication system to increase independent requesting in deafblind adults with learning disabilities.

    Science.gov (United States)

    Bracken, Maeve; Rohrer, Nicole

    2014-02-01

    The current study assessed the effectiveness of an adapted form of the Picture Exchange Communication System (PECS) in increasing independent requesting in deafblind adults with learning disabilities. PECS cards were created to accommodate individual needs, including adaptations such as enlarging photographs and using swelled images which consisted of images created on raised line drawing paper. Training included up to Phase III of PECS and procedures ensuring generalizations across individuals and contexts were included. The effects of the intervention were evaluated using a multiple baseline design across participants. Results demonstrated an increase in independent requesting with each of the participants reaching mastery criterion. These results suggest that PECS, in combination with some minor adaptations, may be an effective communicative alternative for individuals who are deafblind and have learning impairments. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  2. The Astronaut Glove Challenge: Big Innovation from a (Very) Small Team

    Science.gov (United States)

    Homer, Peter

    2008-01-01

    Many measurements were taken by test engineers from Hamilton Sundstrand, the prime contractor for the current EVA suit. Because the raw measurements needed to be converted to torques and combined into a final score, it was impossible to keep track of who was ahead in this phase. The final comfort and dexterity test was performed in a depressurized glove box to simulate real on-orbit conditions. Each competitor was required to exercise the glove through a defined set of finger, thumb, and wrist motions without any sign of abrasion or bruising of the competitor's hand. I learned a lot about arm fatigue! This was a pass-fail event, and both of the remaining competitors came through intact. After taking what seemed like an eternity to tally the final scores, the judges announced that I had won the competition. My glove was the only one to have achieved lower finger-bending torques than the Phase VI glove. Looking back, I see three sources of the success of this project that I believe also operate in other programs where small teams have broken new ground in aerospace technologies. These are awareness, failure, and trust. By remaining aware of the big picture, continuously asking myself, "Am I converging on a solution?" and "Am I converging fast enough?" I was able to see that my original design was not going to succeed, leading to the decision to start over. I was also aware that, had I lingered over this choice or taken time to analyze it, I would not have been ready on the first day of competition. Failure forced me to look outside conventional thinking and opened the door to innovation. Choosing to make incremental failures enabled me to rapidly climb the learning curve. Trusting my "gut" feelings-which are really an internalized accumulation of experiences-and my newly acquired skills allowed me to devise new technologies rapidly and complete both gloves just in time. Awareness, failure, and trust are intertwined: failure provides experiences that inform awareness

  3. Using Motion Pictures to Teach Management: Refocusing the Camera Lens through the Infusion Approach to Diversity

    Science.gov (United States)

    Bumpus, Minnette A.

    2005-01-01

    Motion pictures and television shows can provide mediums to facilitate the learning of management and organizational behavior theories and concepts. Although the motion pictures and television shows cited in the literature cover a broad range of cinematic categories, racial inclusion is limited. The objectives of this article are to document the…

  4. Helping Students Understand the Role of Symmetry in Chemistry Using the Particle-in-a-Box Model

    Science.gov (United States)

    Manae, Meghna A.; Hazra, Anirban

    2016-01-01

    In a course on chemical applications of symmetry and group theory, students learn to use several useful tools (like character tables, projection operators, and correlation tables), but in the process of learning the mathematical details, they often miss the conceptual big picture about "why" and "how" symmetry leads to the…
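
    For reference, the textbook particle-in-a-box results that the abstract alludes to are written out below in LaTeX; these are standard formulas rather than material quoted from the article, and the symmetry remark is the usual parity argument for a box centred at the origin.

      % One-dimensional particle in a box of length L (0 < x < L)
      \psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
      \qquad
      E_n = \frac{n^2 h^2}{8 m L^2}, \qquad n = 1, 2, 3, \ldots
      % If the box is centred at the origin, \psi_n has even parity for odd n and
      % odd parity for even n, so each energy level carries a symmetry label; this
      % parity labelling is the simplest link between the model and group theory.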

  5. Evaluation of the Big-Two-Factor Theory of Motivation Orientations: An Evaluation of Jingle-Jangle Fallacies.

    Science.gov (United States)

    Marsh, Herbert W.; Craven, Rhonda G.; McInerney, Dennis; Debus, Raymond L.

    Motivation orientation research consistently finds two factors, Performance and Learning, that overlap substantially with other factors coming from different theoretical perspectives of motivation. Similar to related work in the Big-Five Theory of Personality, researchers posited a Big-Two-Factor Theory of motivation orientation and evaluated the…

  6. Pictures with narration versus pictures with on-screen text during teaching Mathematics

    Directory of Open Access Journals (Sweden)

    Panagiotis Ioannou

    2017-06-01

    The purpose of the present study was to compare the effects of two different teaching methods on students’ comprehension of triangles, a lesson in Mathematics: pictures with concurrent narration versus pictures with on-screen text. Forty primary school children (boys and girls) were selected to participate in this study. Students were split into two experimental groups using simple random sampling. The first group consisted of students who viewed and listened (pictures with narration group), while the second group consisted of students who viewed (pictures with on-screen text group) a presentation of triangles. A recall test was used to evaluate students’ comprehension. The results showed that students’ comprehension was better when the presentation of triangles (pictures) was accompanied by spoken words than by printed words. The pictures with narration group performed better than the pictures with on-screen text group in the recall test (M = 4.97, SD = 1.32, p < 0.01). Results are consistent with the modality principle, in which learners are more likely to build connections between corresponding words and pictures when words are presented in a spoken form (narration) simultaneously with pictures.

  7. Consideration of vision and picture quality: psychological effects induced by picture sharpness

    Science.gov (United States)

    Kusaka, Hideo

    1989-08-01

    A psychological hierarchy model of human vision(1)(2) suggests that the visual signals are processed in a serial manner from lower to higher stages: that is "sensation" - "perception" - "emotion." For designing a future television system, it is important to find out what kinds of physical factors affect the "emotion" experienced by an observer in front of the display. This paper describes the psychological effects induced by the sharpness of the picture. The subjective picture quality was evaluated for the same pictures with five different levels of sharpness. The experiment was performed on two kinds of printed pictures: (A) a woman's face, and (B) a town corner. From these experiments, it was found that the amount of high-frequency peaking (physical value of the sharpness) which psychologically gives the best picture quality, differs between pictures (A) and (B). That is, the optimum picture sharpness differs depending on the picture content. From these results, we have concluded that the psychophysical sharpness of the picture is not only determined at the stage of "perception" (e.g., resolution or signal to noise ratio, which everyone can judge immediately), but also at the stage of "emotion" (e.g., sensation of reality or beauty).

  8. Cascaded Processing in Written Naming: Evidence from the Picture-Picture Interference Paradigm

    Science.gov (United States)

    Roux, Sebastien; Bonin, Patrick

    2012-01-01

    The issue of how information flows within the lexical system in written naming was investigated in five experiments. In Experiment 1, participants named target pictures that were accompanied by context pictures having phonologically and orthographically related or unrelated names (e.g., a picture of a "ball" superimposed on a picture of…

  9. Critical Review on Affect of Personality on Learning Styles

    Science.gov (United States)

    Kamarulzaman, Wirawani

    2012-01-01

    This paper is intended to review the effect of personality on learning styles. Costa and McCrae's Five-Factor Model of Personality (The Big 5) is explored against Kolb Learning Styles. The Big 5 factors are extraversion, neuroticism, openness, agreeableness and conscientiousness, whereas Kolb Learning Styles are divergers, assimilators,…

  10. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    Science.gov (United States)

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  11. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  12. Childhood obesity: are we missing the big picture?

    Science.gov (United States)

    Maziak, W; Ward, K D; Stockton, M B

    2008-01-01

    Childhood obesity is increasing worldwide, raising alarm about future trends of cardiovascular disease, diabetes and cancer. This article discusses what may underlie our failure to respond effectively to the obesity epidemic, and presents a wider perspective for future research and public health agendas. So far targeting individual-level determinants and clinical aspects of childhood obesity has produced limited success. There is growing interest in understanding the wider determinants of obesity such as the built environment (e.g. walkability), social interactions, food marketing and prices, but much needs to be learned. Particularly, we need to identify distal modifiable factors with multiple potential that would make them attractive for people and policymakers alike. For example, walking-biking-friendly cities can reduce obesity as well as energy consumption, air pollution and traffic delays. Such agenda needs to be driven by strong evidence from research involving multi-level influences on behaviour, as well as the study of wider politico-economic trends affecting people's choices. This article highlights available evidence and arguments for research and policy needed to curb the obesity epidemic. The upstream approach underlying these arguments aims to make healthy choices not only the most rational, but also the most feasible and affordable.

  13. Big Data and the Liberal Conception of Education

    Science.gov (United States)

    Clayton, Matthew; Halliday, Daniel

    2017-01-01

    This article develops a perspective on big data in education, drawing on a broadly liberal conception of education's primary purpose. We focus especially on the rise of so-called learning analytics and the associated rise of digitization, which we evaluate according to the liberal view that education should seek to cultivate individuality and…

  14. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  15. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  16. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  17. Forget the hype or reality. Big data presents new opportunities in Earth Science.

    Science.gov (United States)

    Lee, T. J.

    2015-12-01

    Earth science is arguably one of the most mature science disciplines, constantly acquiring, curating, and utilizing a large volume of data with diverse variety. We dealt with big data before there was big data. For example, while developing the EOS program in the 1980s, the EOS Data and Information System (EOSDIS) was developed to manage the vast amount of data acquired by the EOS fleet of satellites. EOSDIS continues to be a shining example of modern science data systems two decades later. With the explosion of the internet, the use of social media, and the deployment of sensors everywhere, the big data era has brought new challenges. First, Google developed its search algorithm and a distributed data management system. The open source communities quickly followed and developed the Hadoop file system to facilitate MapReduce workloads. The internet continues to generate tens of petabytes of data every day. There is a significant shortage of algorithms and knowledgeable manpower to mine the data. In response, the federal government developed big data programs that fund research and development projects and training programs to tackle these new challenges. Meanwhile, compared to the internet data explosion, the Earth science big data problem has become quite small. Nevertheless, the big data era presents an opportunity for Earth science to evolve. We have learned about MapReduce algorithms, in-memory data mining, machine learning, graph analysis, and semantic web technologies. How do we apply these new technologies to our discipline and bring the hype down to Earth? In this talk, I will discuss how we might apply some of the big data technologies to our discipline and solve many of our challenging problems. More importantly, I will propose a new Earth science data system architecture to enable new types of scientific inquiry.
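
    Since the abstract leans on Hadoop and MapReduce, a compressed reminder of what the programming model does may help; the toy word-count below is plain Python standing in for a real cluster job, and the input records are invented.

      from collections import defaultdict

      def map_phase(record):
          """Emit (key, value) pairs; here one (word, 1) pair per word."""
          return [(word.lower(), 1) for word in record.split()]

      def shuffle(pairs):
          """Group intermediate values by key, as the framework would."""
          grouped = defaultdict(list)
          for key, value in pairs:
              grouped[key].append(value)
          return grouped

      def reduce_phase(key, values):
          """Combine all values that share a key; here a simple sum."""
          return key, sum(values)

      records = ["EOS data and information system",
                 "big data in Earth science",
                 "Earth science data systems"]
      intermediate = [pair for rec in records for pair in map_phase(rec)]
      counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
      print(counts["data"])   # -> 3

    The same map/shuffle/reduce decomposition is what lets a framework like Hadoop spread a petabyte-scale job across many machines.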

  18. Associations From Pictures.

    Science.gov (United States)

    Pettersson, Rune

    A picture can be interpreted in different ways by various persons. There is often a difference between a picture's denotation (literal meaning), connotation (associative meaning), and private associations. Two studies were conducted in order to observe the private associations that pictures awaken in people. One study deals with associations made…

  19. Picture Power: Placing Artistry and Literacy on the Same Page

    Science.gov (United States)

    Soundy, Cathleen S; Guha, Smita; Qiu, Yun

    2007-01-01

    In this article, the authors describe Picture Power, a project they implemented during late spring in a full-day Montessori preschool-kindergarten program in Philadelphia. In this project, the authors set out to gather information about children's visual learning. The underlying question was whether artwork could provide useful clues to inform…

  20. The Effect of Picture Story Books on Students' Reading Comprehension

    Science.gov (United States)

    Roslina

    2017-01-01

    As non-formal education students, the students of PKBM (a Non-Formal Community Learning Center) Medaso Kolaka tend to encounter difficulties in reading such as low motivation, infrequent attendance of tutors (non-formal education teachers), inappropriate teaching materials, etc. This research aimed to investigate the effects of picture story books on the…

  1. When Big Ice Turns Into Water It Matters For Houses, Stores And Schools All Over

    Science.gov (United States)

    Bell, R. E.

    2017-12-01

    When ice in my glass turns to water it is not bad but when the big ice at the top and bottom of the world turns into water it is not good. This new water makes many houses, stores and schools wet. It is really bad when the wind is strong and the rain is hard. New old ice water gets all over the place. We can not get to work or school or home. We go to the big ice at the top and bottom of the world to see if it will turn to water soon and make more houses wet. We fly over the big ice to see how it is doing. Most of the big ice sits on rock. Around the edge of the big sitting on rock ice is really low ice that rides on top of the water. This really low ice slows down the big rock ice turning into water. If the really low ice cracks up and turns into little pieces of ice, the big rock ice will make more houses wet. We look to see if there is new water in the cracks. Water in the cracks is bad as it hurts the big rock ice. Water in the cracks on the really low ice will turn the low ice into many little pieces of ice. Then the big rock ice will turn to water. That is, water in cracks is bad for the houses, schools and businesses. If water moves off the really low ice, it does not stay in the cracks. This is better for the really low ice. This is better for the big rock ice. We took pictures of the really low ice and saw water leaving. The water was not staying in the cracks. Water leaving the really low ice might be good for houses, schools and stores.

  2. Harnessing information from injury narratives in the 'big data' era: understanding and applying machine learning for injury surveillance.

    Science.gov (United States)

    Vallmuur, Kirsten; Marucci-Wellman, Helen R; Taylor, Jennifer A; Lehto, Mark; Corns, Helen L; Smith, Gordon S

    2016-04-01

    Vast amounts of injury narratives are collected daily, are available electronically in real time, and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. The range of applications and utility of narrative text has increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semiautomatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and PPV and reduced the need for human coding to less than a third of cases in one large occupational injury database. The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of 'big injury narrative data' opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
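
    The human-machine workflow described above routes narratives the model is unsure about to human coders. The sketch below, which assumes scikit-learn and uses invented toy narratives and an arbitrary confidence threshold, shows the general idea; it is not the classifier used in the cited study.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Toy injury narratives with mechanism labels (hypothetical data).
      narratives = ["slipped on wet floor and fell", "cut hand on box cutter",
                    "fell from ladder while painting", "laceration from utility knife"]
      labels = ["fall", "cut", "fall", "cut"]

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
      model.fit(narratives, labels)

      # Accept confident machine codes; send the rest to human coders.
      for text in ["tripped over cable and fell", "finger cut while slicing"]:
          probs = model.predict_proba([text])[0]
          label, conf = model.classes_[probs.argmax()], probs.max()
          print(text, "->", label if conf >= 0.6 else "send to human coder", round(conf, 2))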

  3. Technology for Mining the Big Data of MOOCs

    Science.gov (United States)

    O'Reilly, Una-May; Veeramachaneni, Kalyan

    2014-01-01

    Because MOOCs bring big data to the forefront, they confront learning science with technology challenges. We describe an agenda for developing technology that enables MOOC analytics. Such an agenda needs to efficiently address the detailed, low level, high volume nature of MOOC data. It also needs to help exploit the data's capacity to reveal, in…

  4. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formulation before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  5. Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.

    Science.gov (United States)

    Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan

    2016-07-01

    Big-data-based edge biomarker is a new concept to characterize disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review on big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we firstly introduce the sources and structures of biomedical big data accessible in public for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and further big-data-based edge biomarkers. Dissimilar to conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules rather than differential expressions of molecules during disease progression or treatment in individual patients. In particular, in contrast to using the information of the common molecules or edges (i.e. molecule-pairs) across a population in traditional biomarkers including network and edge biomarkers, big-data-based edge biomarkers are specific for each individual and thus can accurately evaluate the disease state by considering the individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process of the tested individual. Finally, we provide a case study on analyzing the temporal expression
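
    As an illustration of what "differential associations between molecules" means in practice, the toy sketch below (plain NumPy, invented data, not the authors' algorithm) scores every gene pair, i.e. every edge, by how much its correlation changes between two conditions, rather than scoring single genes by differential expression.

      import numpy as np

      rng = np.random.default_rng(0)
      genes = ["g1", "g2", "g3", "g4"]
      healthy = rng.normal(size=(30, 4))            # samples x genes
      disease = rng.normal(size=(30, 4))
      disease[:, 1] = disease[:, 0] + 0.1 * rng.normal(size=30)   # rewire the g1-g2 edge

      def corr(matrix, i, j):
          return np.corrcoef(matrix[:, i], matrix[:, j])[0, 1]

      # Rank edges (gene pairs) by the change in correlation between conditions.
      edges = []
      for i in range(len(genes)):
          for j in range(i + 1, len(genes)):
              delta = abs(corr(disease, i, j) - corr(healthy, i, j))
              edges.append((genes[i], genes[j], round(delta, 2)))
      for edge in sorted(edges, key=lambda e: -e[2]):
          print(edge)                               # the rewired g1-g2 edge ranks first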

  6. Promoting Modeling and Covariational Reasoning among Secondary School Students in the Context of Big Data

    Science.gov (United States)

    Gil, Einat; Gibbs, Alison L.

    2017-01-01

    In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…

  7. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks

    OpenAIRE

    Harvey, Denise Y.; Schnur, Tatiana T.

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system – lexical in naming and semantic in word-picture matching. Although both tasks involve access to shar...

  8. Acceptability of Big Books as Mother Tongue-based Reading Materials in Bulusan Dialect

    Directory of Open Access Journals (Sweden)

    Magdalena M. Ocbian

    2015-11-01

    Full Text Available Several studies have proven the superiority of using mother tongue in improving the pupils' performance. Research results revealed that using a language familiar to the pupils facilitates reading, writing and learning new concepts. However, at present, teachers are confronted with the insufficiency of instructional materials written in the local dialect and accepted by the end-users as possessing the qualities that could produce the desired learning outcomes. This descriptive evaluative research was conducted to address this problem. It determined the level of acceptability of the six researcher-made big books as mother tongue-based reading materials in Bulusan dialect for Grade 1 pupils. The big books were utilized by 11 Grade 1 teachers of Bulusan District to their pupils and were evaluated along suitability and appropriateness of the materials, visual appeal and quality of the story using checklist and open-ended questionnaire. Same materials were assessed by eight expert jurors. Findings showed that the big books possessed the desired qualities that made them very much acceptable to the Grade 1 teachers and much acceptable to the expert jurors. The comments and suggestions of the respondents served as inputs in the enhancement and revision of the six big books.

  9. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, whereas deep learning, aimed at new applications, makes it possible to acquire effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data, instead of adopting handcrafted features, which depend mainly on the prior knowledge of designers and cannot take full advantage of big data. Deep learning can automatically learn feature representations from big data, with models involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on the frequently used deep learning methods for text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.
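
    To make the contrast with handcrafted features concrete, the sketch below (assuming PyTorch; the vocabulary and dimensions are arbitrary) shows about the smallest possible learned text encoder: token embeddings are averaged into a dense feature vector whose values are fitted from training data rather than designed by hand.

      import torch
      import torch.nn as nn

      class TextEncoder(nn.Module):
          def __init__(self, vocab_size, dim=16):
              super().__init__()
              self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")   # learned features
              self.classifier = nn.Linear(dim, 2)

          def forward(self, token_ids, offsets):
              features = self.embed(token_ids, offsets)   # automatic feature extraction
              return self.classifier(features)

      vocab = {"big": 0, "data": 1, "deep": 2, "learning": 3, "text": 4}
      encoder = TextEncoder(vocab_size=len(vocab))
      tokens = torch.tensor([0, 1, 2, 3, 4])   # two documents, concatenated
      offsets = torch.tensor([0, 2])           # doc 1 = tokens[0:2], doc 2 = tokens[2:]
      print(encoder(tokens, offsets).shape)    # torch.Size([2, 2]): one score pair per document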

  10. Big data: the management revolution.

    Science.gov (United States)

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  11. Analyzing Learning in Professional Learning Communities: A Conceptual Framework

    Science.gov (United States)

    Van Lare, Michelle D.; Brazer, S. David

    2013-01-01

    The purpose of this article is to build a conceptual framework that informs current understanding of how professional learning communities (PLCs) function in conjunction with organizational learning. The combination of sociocultural learning theories and organizational learning theories presents a more complete picture of PLC processes that has…

  12. Perspectives on making big data analytics work for oncology.

    Science.gov (United States)

    El Naqa, Issam

    2016-12-01

    Oncology, with its unique combination of clinical, physical, technological, and biological data provides an ideal case study for applying big data analytics to improve cancer treatment safety and outcomes. An oncology treatment course such as chemoradiotherapy can generate a large pool of information carrying the 5Vs hallmarks of big data. This data is comprised of a heterogeneous mixture of patient demographics, radiation/chemo dosimetry, multimodality imaging features, and biological markers generated over a treatment period that can span few days to several weeks. Efforts using commercial and in-house tools are underway to facilitate data aggregation, ontology creation, sharing, visualization and varying analytics in a secure environment. However, open questions related to proper data structure representation and effective analytics tools to support oncology decision-making need to be addressed. It is recognized that oncology data constitutes a mix of structured (tabulated) and unstructured (electronic documents) that need to be processed to facilitate searching and subsequent knowledge discovery from relational or NoSQL databases. In this context, methods based on advanced analytics and image feature extraction for oncology applications will be discussed. On the other hand, the classical p (variables)≫n (samples) inference problem of statistical learning is challenged in the Big data realm and this is particularly true for oncology applications where p-omics is witnessing exponential growth while the number of cancer incidences has generally plateaued over the past 5-years leading to a quasi-linear growth in samples per patient. Within the Big data paradigm, this kind of phenomenon may yield undesirable effects such as echo chamber anomalies, Yule-Simpson reversal paradox, or misleading ghost analytics. In this work, we will present these effects as they pertain to oncology and engage small thinking methodologies to counter these effects ranging from

  13. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks.

    Science.gov (United States)

    Harvey, Denise Y; Schnur, Tatiana T

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system - lexical in naming and semantic in word-picture matching. Although both tasks involve access to shared semantic representations, the extent to which interference originates and/or has its locus at a shared level remains unclear, as these effects are often investigated in isolation. We manipulated semantic context in cyclical picture naming and word-picture matching tasks, and tested whether factors tapping semantic-level (generalization of interference to novel category items) and lexical-level processes (interactions with lexical frequency) affected the magnitude of interference, while also assessing whether interference occurs at a shared processing level(s) (transfer of interference across tasks). We found that semantic interference in naming was sensitive to both semantic- and lexical-level processes (i.e., larger interference for novel vs. old and low- vs. high-frequency stimuli), consistent with a semantically mediated lexical locus. Interference in word-picture matching exhibited stable interference for old and novel stimuli and did not interact with lexical frequency. Further, interference transferred from word-picture matching to naming. Together, these experiments provide evidence to suggest that semantic interference in both tasks originates at a shared processing stage (presumably at the semantic level), but that it exerts its effect at different loci when naming pictures vs. matching words to pictures.

  14. Ocean Acidification and Coral Reefs: An Emerging Big Picture

    Directory of Open Access Journals (Sweden)

    John E. N. Veron

    2011-05-01

    Full Text Available This article summarises the sometimes controversial contributions made by the different sciences to predict the path of ocean acidification impacts on the diversity of coral reefs during the present century. Although the seawater carbonate system has been known for a long time, the understanding of acidification impacts on marine biota is in its infancy. Most publications about ocean acidification are less than a decade old and over half are about coral reefs. Contributions from physiological studies, particularly of coral calcification, have covered such a wide spectrum of variables that no cohesive picture of the mechanisms involved has yet emerged. To date, these studies show that coral calcification varies with carbonate ion availability which, in turn, controls aragonite saturation. They also reveal synergies between acidification and the better understood role of elevated temperature. Ecological studies are unlikely to reveal much detail except for the observations of the effects of carbon dioxide springs in reefs. Although ocean acidification events are not well constrained in the geological record, recent studies show that they are clearly linked to extinction events including four of the five greatest crises in the history of coral reefs. However, as ocean acidification is now occurring faster than at any known time in the past, future predictions based on past events are in uncharted waters. Pooled evidence to date indicates that ocean acidification will be severely affecting reefs by mid-century and will have reduced them to ecologically collapsed carbonate platforms by the century’s end. This review concludes that most impacts will be synergistic and that the primary outcome will be a progressive reduction of species diversity correlated with habitat loss and widespread extinctions in most metazoan phyla.

  15. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  16. Picture This Character: Using Imagery To Teach a Japanese Syllabary.

    Science.gov (United States)

    Thompson, Joyce D.; Wakefield, John F.

    This study examined the effectiveness of imagery to teach native English speakers to associate hiragana characters (a Japanese script) with the spoken Japanese syllables that the characters represent. Twenty-one adults in a psychology of learning class for teachers were taught to picture a hiragana character in such a way as to establish an…

  17. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  18. Multimedia learning in children with dyslexia

    NARCIS (Netherlands)

    Knoop-van Campen, C.A.N.; Segers, P.C.J.; Verhoeven, L.T.W.

    2017-01-01

    The Cognitive Theory of Multimedia Learning predicts modality and redundancy effects. Spoken texts with pictures have often been shown to have a larger learning effect than written texts with pictures. However, this modality effect tends to reverse on the long term. This long-term effect has not

  19. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  20. Can companies benefit from Big Science? Science and Industry

    CERN Document Server

    Autio, Erkko; Bianchi-Streit, M

    2003-01-01

    Several studies have indicated that there are significant returns on financial investment via "Big Science" centres. Financial multipliers ranging from 2.7 (ESA) to 3.7 (CERN) have been found, meaning that each Euro invested in industry by Big Science generates a two- to fourfold return for the supplier. Moreover, laboratories such as CERN are proud of their record in technology transfer, where research developments lead to applications in other fields - for example, with particle accelerators and detectors. Less well documented, however, is the effect of the experience that technological firms gain through working in the arena of Big Science. Indeed, up to now there has been no explicit empirical study of such benefits. Our findings reveal a variety of outcomes, which include technological learning, the development of new products and markets, and impact on the firm's organization. The study also demonstrates the importance of technologically challenging projects for staff at CERN. Together, these findings i...

  1. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  2. Special Learners: Using Picture Books in Music Class to Encourage Participation of Students with Autistic Spectrum Disorder

    Science.gov (United States)

    Hagedorn, Victoria S.

    2004-01-01

    Many autistic students think and learn in pictures, not language. Visual representation of tasks, objects, and songs can greatly assist the autistic student. Using picture books in the music class is a popular strategy for many teachers. This article provides a list of books that a teacher has used with success in classes for children with…

  3. Big data based fraud risk management at Alibaba

    Directory of Open Access Journals (Sweden)

    Jidong Chen

    2015-12-01

    Full Text Available With the development of mobile internet and finance, fraud risk comes in all shapes and sizes. This paper introduces fraud risk management at Alibaba under big data. Alibaba has built a fraud risk monitoring and management system based on real-time big data processing and intelligent risk models. It captures fraud signals directly from huge amounts of user behavior and network data, analyzes them in real time using machine learning, and accurately predicts bad users and transactions. To extend its fraud risk prevention ability to external customers, Alibaba also built a big-data-based fraud prevention product called AntBuckler. AntBuckler aims to identify and prevent all flavors of malicious behavior with flexibility and intelligence for online merchants and banks. By combining large amounts of data from Alibaba and its customers, AntBuckler uses the RAIN score engine to quantify the risk levels of users or transactions for fraud prevention. It also has a user-friendly visualization UI with risk scores, top reasons and fraud connections.
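
    The RAIN score engine itself is proprietary, but the general pattern of scoring transactions with a learned risk model and flagging the riskiest ones can be sketched as follows (scikit-learn, with entirely invented features and labels).

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)
      n = 2000
      # Hypothetical behavioural features: amount, account age (days), logins in the last hour.
      X = np.column_stack([rng.exponential(100, n), rng.integers(1, 2000, n), rng.poisson(2, n)])
      fraud = (X[:, 0] > 250) & (rng.uniform(size=n) < 0.9)   # rare, amount-driven pattern

      model = GradientBoostingClassifier().fit(X[:1500], fraud[:1500])

      # Score incoming transactions and flag the riskiest for review or blocking.
      scores = model.predict_proba(X[1500:])[:, 1]
      print("transactions flagged for review:", int((scores > 0.9).sum()))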

  4. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we show that the Statistical Drake Equation (namely the statistical extension of the classical Drake Equation typical of SETI) can be regarded as the “frozen in time” part of GBM. This makes SETI a subset of our Big History Theory based on GBMs: just as the GBM is the “movie” unfolding in time, so the Statistical Drake Equation is its “still picture”, static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50 million living species or more exist, each
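
    For reference, the textbook form of the Geometric Brownian Motion invoked above is (the fitted parameter values from the cited papers are not reproduced here):

      \[
        dN(t) = \mu\, N(t)\, dt + \sigma\, N(t)\, dW(t),
        \qquad
        N(t) = N(0)\, \exp\!\Big[\Big(\mu - \tfrac{\sigma^{2}}{2}\Big)t + \sigma W(t)\Big],
      \]

    so that at any fixed time t the value N(t), here the number of living species or of communicating civilizations, is lognormally distributed. That fixed-time lognormal "still picture" is what the abstract identifies with the Statistical Drake Equation, while the GBM itself is the "movie" unfolding in time.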

  5. Developing the role of big data and analytics in health professional education.

    Science.gov (United States)

    Ellaway, Rachel H; Pusic, Martin V; Galbraith, Robert M; Cameron, Terri

    2014-03-01

    As we capture more and more data about learners, their learning, and the organization of their learning, our ability to identify emerging patterns and to extract meaning grows exponentially. The insights gained from the analyses of these large amounts of data are only helpful to the extent that they can be the basis for positive action such as knowledge discovery, improved capacity for prediction, and anomaly detection. Big Data involves the aggregation and melding of large and heterogeneous datasets, while education analytics involves looking for patterns in educational practice or performance in single or aggregate datasets. Although it seems likely that the use of education analytics and Big Data techniques will have a transformative impact on health professional education, there is much yet to be done before they can become part of mainstream health professional education practice. If health professional education is to be accountable for how its programs are run and developed, then health professional educators will need to be ready to deal with the complex and compelling dynamics of analytics and Big Data. This article provides an overview of these emerging techniques in the context of health professional education.

  6. The hot big bang and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Turner, M.S. [Departments of Physics and of Astronomy & Astrophysics, Enrico Fermi Institute, The University of Chicago, Chicago, Illinois 60637-1433 (United States)]|[NASA/Fermilab Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, Illinois 60510-0500 (United States)

    1995-08-01

    The hot big-bang cosmology provides a reliable accounting of the Universe from about 10^-2 sec after the bang until the present, as well as a robust framework for speculating back to times as early as 10^-43 sec. Cosmology faces a number of important challenges; foremost among them are determining the quantity and composition of matter in the Universe and developing a detailed and coherent picture of how structure (galaxies, clusters of galaxies, superclusters, voids, great walls, and so on) developed. At present there is a working hypothesis, cold dark matter, which is based upon inflation and which, if correct, would extend the big bang model back to 10^-32 sec and cast important light on the unification of the forces. Many experiments and observations, from CBR anisotropy experiments to Hubble Space Telescope observations to experiments at Fermilab and CERN, are now putting the cold dark matter theory to the test. At present it appears that the theory is viable only if the Hubble constant is smaller than current measurements indicate (around 30 km s^-1 Mpc^-1), or if the theory is modified slightly, e.g., by the addition of a cosmological constant, a small admixture of hot dark matter (5 eV "worth of neutrinos"), more relativistic particles, or a tilted spectrum of density perturbations.

  7. Chemistry--The Big Picture

    Science.gov (United States)

    Cassell, Anne

    2011-01-01

    Chemistry produces materials and releases energy by ionic or electronic rearrangements. Three structure types affect the ease with which a reaction occurs. In the Earth's crust, "solid crystals" change chemically only with extreme heat and pressure, unless their fixed ions touch moving fluids. On the other hand, in living things, "liquid crystals"…

  8. The Storyboard's Big Picture

    Science.gov (United States)

    Malloy, Cheryl A.; Cooley, William

    2003-01-01

    At Science Applications International Corporation (SAIC), Cape Canaveral Office, we're using a project management tool that facilitates team communication, keeps our project team focused, streamlines work and identifies potential issues. What did it cost us to install the tool? Almost nothing.

  9. Transforming Healthcare Delivery: Integrating Dynamic Simulation Modelling and Big Data in Health Economics and Outcomes Research.

    Science.gov (United States)

    Marshall, Deborah A; Burgos-Liz, Lina; Pasupathy, Kalyan S; Padula, William V; IJzerman, Maarten J; Wong, Peter K; Higashi, Mitchell K; Engbers, Jordan; Wiebe, Samuel; Crown, William; Osgood, Nathaniel D

    2016-02-01

    In the era of the Information Age and personalized medicine, healthcare delivery systems need to be efficient and patient-centred. The health system must be responsive to individual patient choices and preferences about their care, while considering the system consequences. While dynamic simulation modelling (DSM) and big data share characteristics, they present distinct and complementary value in healthcare. Big data and DSM are synergistic: big data offer support to enhance the application of dynamic models, but DSM can also greatly enhance the value conferred by big data. Big data can inform patient-centred care with their high velocity, volume, and variety (the three Vs), going beyond traditional data analytics; however, big data are not sufficient to extract meaningful insights to inform approaches to improve healthcare delivery. DSM can serve as a natural bridge between the wealth of evidence offered by big data and informed decision making, as a means of faster, deeper, more consistent learning from that evidence. We discuss the synergies between big data and DSM, practical considerations and challenges, and how integrating big data and DSM can be useful to decision makers to address complex, systemic health economics and outcomes questions and to transform healthcare delivery.

  10. Is the picture bizarreness effect a generation effect?

    Science.gov (United States)

    Marchal, A; Nicolas, S

    2000-08-01

    Bizarre stimuli usually facilitate recall compared to common stimuli. This investigation explored the so-called bizarreness effect in free recall by using 80 simple line drawings of common objects (common vs bizarre). 64 subjects participated with 16 subjects in each group. Half of the subjects received learning instructions and the other half rated the bizarreness of each drawing. Moreover, drawings were presented either alone or with the name of the object under mixed-list encoding conditions. After the free recall task, subjects had to make metamemory judgments about how many items of each format they had seen and recalled. The key result was that a superiority of bizarre pictures over common ones was found in all conditions although performance was better when the pictures were presented alone than with their corresponding label. Subsequent metamemory judgments, however, showed that subjects underestimated the number of bizarre items actually recalled.

  11. Selective Activation Around the Left Occipito-Temporal Sulcus for Words Relative to Pictures: Individual Variability or False Positives?

    OpenAIRE

    Wright, Nicholas D; Mechelli, Andrea; Noppeney, Uta; Veltman, Dick J; Rombouts, Serge ARB; Glensman, Janice; Haynes, John-Dylan; Price, Cathy J

    2007-01-01

    We used high-resolution fMRI to investigate claims that learning to read results in greater left occipito-temporal (OT) activation for written words relative to pictures of objects. In the first experiment, 9/16 subjects performing a one-back task showed activation in ≥1 left OT voxel for words relative to pictures (P < 0.05 uncorrected). In a second experiment, another 9/15 subjects performing a semantic decision task activated ≥1 left OT voxel for words relative to pictures. However, at thi...

  12. Distributed picture compilation demonstration

    Science.gov (United States)

    Alexander, Richard; Anderson, John; Leal, Jeff; Mullin, David; Nicholson, David; Watson, Graham

    2004-08-01

    A physical demonstration of distributed surveillance and tracking is described. The demonstration environment is an outdoor car park overlooked by a system of four rooftop cameras. The cameras extract moving objects from the scene, and these objects are tracked in a decentralized way, over a real communication network, using the information form of the standard Kalman filter. Each node therefore has timely access to the complete global picture and because there is no single point of failure in the system, it is robust. The demonstration system and its main components are described here, with an emphasis on some of the lessons we have learned as a result of applying a corpus of distributed data fusion theory and algorithms in practice. Initial results are presented and future plans to scale up the network are also outlined.
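
    The decentralised fusion property that makes the information form attractive is that measurement contributions from independent nodes simply add. A minimal single-time-step sketch (NumPy, with invented camera geometry and noise values) is given below; the real demonstration also handles prediction, communication and track management.

      import numpy as np

      # Each node's measurement contributes (H^T R^-1 H, H^T R^-1 z) in information form.
      def information_contribution(H, R, z):
          R_inv = np.linalg.inv(R)
          return H.T @ R_inv @ H, H.T @ R_inv @ z

      # Prior over a 2D position, converted to information form (Y = P^-1, y = Y x).
      x_prior = np.array([10.0, 5.0])
      P_prior = np.eye(2) * 4.0
      Y = np.linalg.inv(P_prior)
      y = Y @ x_prior

      # Two camera nodes observe the same position with different noise levels.
      H = np.eye(2)
      observations = [(np.eye(2) * 1.0, np.array([10.4, 5.2])),
                      (np.eye(2) * 2.0, np.array([9.8, 4.9]))]
      for R, z in observations:
          dY, dy = information_contribution(H, R, z)
          Y, y = Y + dY, y + dy                    # fusion = summation over nodes

      print(np.linalg.solve(Y, y))                 # fused position estimate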

  13. [Big Data Revolution or Data Hubris? : On the Data Positivism of Molecular Biology].

    Science.gov (United States)

    Gramelsberger, Gabriele

    2017-12-01

    Genome data, the core of the 2008 proclaimed big data revolution in biology, are automatically generated and analyzed. The transition from the manual laboratory practice of electrophoresis sequencing to automated DNA-sequencing machines and software-based analysis programs was completed between 1982 and 1992. This transition facilitated the first data deluge, which was considerably increased by the second and third generation of DNA-sequencers during the 2000s. However, the strategies for evaluating sequence data were also transformed along with this transition. The paper explores both the computational strategies of automation, as well as the data evaluation culture connected with it, in order to provide a complete picture of the complexity of today's data generation and its intrinsic data positivism. This paper is thereby guided by the question, whether this data positivism is the basis of the big data revolution of molecular biology announced today, or it marks the beginning of its data hubris.

  14. Penggunaan Model Pembelajaran Picture and Picture Untuk Meningkatkan Kemampuan Siswa Menulis Karangan

    Directory of Open Access Journals (Sweden)

    Heriyanto Heriyanto

    2014-02-01

    Full Text Available Writing is a complex, productive and expressive language skill: the writer must be skilled in using graphology and language structure and must have adequate knowledge of the language, so it needs to be practiced regularly and carefully from the early grades of elementary school. A composition, as one product of writing, is the result of the act of composing; the compositions examined in this study are narrative compositions. One problem in learning Indonesian is students' difficulty in writing good and correct compositions, a difficulty also found among the grade IVA students of SDN Pinggir Papas 1. Teaching composition writing with the cooperative picture and picture model is expected to improve students' understanding of composition writing. To this end, research was carried out with the 33 grade IVA students of SDN Pinggir Papas I, Kalianget District, Sumenep Regency. The study, entitled "Penggunaan Model Pembelajaran Kooperatif Tipe Picture and Picture untuk Meningkatkan Kemampuan Siswa Menulis Karangan" (The Use of the Picture and Picture Cooperative Learning Model to Improve Students' Composition Writing Ability), used classroom action research over two cycles. Each cycle consisted of planning, action, observation, and reflection stages. The data collected consisted of worksheet (LKS) scores and individual quiz scores in the form of written compositions, observation sheets of teacher and student activity, assessments of the use of the cooperative picture and picture model, and student responses. The analysis shows that the use of the cooperative picture and picture learning model can improve the composition writing ability of the grade IVA students of SDN Pinggir Papas 1. The improvement appears in the average worksheet score, which rose from 55 to 71.6, while the students' individual composition scores rose from an average of 56.7 with 55% mastery to 74.5 with 88% mastery, an increase of 33% over cycle I.

  15. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  16. Metaphor in pictures.

    Science.gov (United States)

    Kennedy, J M

    1982-01-01

    Pictures can be literal or metaphoric. Metaphoric pictures involve intended violations of standard modes of depiction that are universally recognizable. The types of metaphoric pictures correspond to major groups of verbal metaphors, with the addition of a class of pictorial runes. Often the correspondence between verbal and pictorial metaphors depends on individual features of objects and such physical parameters as change of scale. A more sophisticated analysis is required for some pictorial metaphors, involving juxtapositions of well-known objects and indirect reference.

  17. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  18. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  19. Evolution of the Air Toxics under the Big Sky Program

    Science.gov (United States)

    Marra, Nancy; Vanek, Diana; Hester, Carolyn; Holian, Andrij; Ward, Tony; Adams, Earle; Knuth, Randy

    2011-01-01

    As a yearlong exploration of air quality and its relation to respiratory health, the "Air Toxics Under the Big Sky" program offers opportunities for students to learn and apply science process skills through self-designed inquiry-based research projects conducted within their communities. The program follows a systematic scope and sequence…

  20. WE-H-BRB-01: Overview of the ASTRO-NIH-AAPM 2015 Workshop On Exploring Opportunities for Radiation Oncology in the Era of Big Data

    Energy Technology Data Exchange (ETDEWEB)

    Benedict, S. [University of California Davis Medical Center (United States)]

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  1. WE-H-BRB-01: Overview of the ASTRO-NIH-AAPM 2015 Workshop On Exploring Opportunities for Radiation Oncology in the Era of Big Data

    International Nuclear Information System (INIS)

    Benedict, S.

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  2. Big data, advanced analytics and the future of comparative effectiveness research.

    Science.gov (United States)

    Berger, Marc L; Doban, Vitalii

    2014-03-01

    The intense competition that accompanied the growth of internet-based companies ushered in the era of 'big data' characterized by major innovations in processing of very large amounts of data and the application of advanced analytics including data mining and machine learning. Healthcare is on the cusp of its own era of big data, catalyzed by the changing regulatory and competitive environments, fueled by growing adoption of electronic health records, as well as efforts to integrate medical claims, electronic health records and other novel data sources. Applying the lessons from big data pioneers will require healthcare and life science organizations to make investments in new hardware and software, as well as in individuals with different skills. For life science companies, this will impact the entire pharmaceutical value chain from early research to postcommercialization support. More generally, this will revolutionize comparative effectiveness research.

  3. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and analyze exhaustively the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics in the management of Big Data, so that everything concerning the central topic of the research can be understood. The methodology included reviewing the state of the art of Big Data and presenting its current situation; covering Big Data technologies; presenting some of the NoSQL databases, which are the ones that make it possible to process data in unstructured formats; and showing data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research begins to explore the Big Data environment.

  4. Video Feedforward for Rapid Learning of a Picture-Based Communication System

    Science.gov (United States)

    Smith, Jemma; Hand, Linda; Dowrick, Peter W.

    2014-01-01

    This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…

  5. Producing colour pictures from SCAN

    International Nuclear Information System (INIS)

    Robichaud, K.

    1982-01-01

    The computer code SCAN.TSK has been written for use on the Interdata 7/32 minicomputer which will convert the pictures produced by the SCAN program into colour pictures on a colour graphics VDU. These colour pictures are a more powerful aid to detecting errors in the MONK input data than the normal lineprinter pictures. This report is intended as a user manual for using the program on the Interdata 7/32, and describes the method used to produce the pictures and gives examples of JCL, input data and of the pictures that can be produced. (U.K.)

  6. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
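
    BigDansing's own rule language is not reproduced here, but the kind of rule it distributes can be illustrated with a naive single-machine check in Python: a functional-dependency style rule ("equal zipcode implies equal city") whose violation detection enumerates pairs of tuples, exactly the quadratic cost a distributed cleansing system must manage. The table values are invented.

      from itertools import combinations

      rows = [
          {"id": 1, "zipcode": "1183", "city": "Amsterdam"},
          {"id": 2, "zipcode": "1183", "city": "Amstelveen"},
          {"id": 3, "zipcode": "2000", "city": "Antwerp"},
      ]

      # Rule: tuples that agree on zipcode must agree on city.
      def violates(t1, t2):
          return t1["zipcode"] == t2["zipcode"] and t1["city"] != t2["city"]

      # Naive detection enumerates all tuple pairs (quadratic in the table size).
      violations = [(t1["id"], t2["id"]) for t1, t2 in combinations(rows, 2) if violates(t1, t2)]
      print(violations)   # [(1, 2)]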

  7. Device for transmitting pictures and device for receiving said pictures

    NARCIS (Netherlands)

    1993-01-01

    Device for transmitting television pictures in the form of transform coefficients and motion vectors. The motion vectors of a sub-picture are converted (20) into a series of difference vectors and a reference vector. Said series is subsequently applied to a variable-length encoder (22) which encodes

  8. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    Science.gov (United States)

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
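
    A minimal version of the comparison described above, assuming the scikit-survival package and a far simpler simulation than the paper's preset scenarios, looks like this: both models are fitted on simulated censored data with a deliberately nonlinear hazard and compared by concordance index.

      import numpy as np
      from sksurv.ensemble import RandomSurvivalForest
      from sksurv.linear_model import CoxPHSurvivalAnalysis
      from sksurv.metrics import concordance_index_censored
      from sksurv.util import Surv

      rng = np.random.default_rng(1)
      n = 300
      X = rng.normal(size=(n, 3))
      risk = 0.8 * X[:, 0] ** 2 + 0.5 * X[:, 1]          # nonlinear predictor effect
      time = rng.exponential(scale=np.exp(-risk))
      event = rng.uniform(size=n) < 0.7                  # roughly 30% censoring
      y = Surv.from_arrays(event=event, time=time)

      for name, model in [("Cox", CoxPHSurvivalAnalysis()),
                          ("RSF", RandomSurvivalForest(n_estimators=100, random_state=0))]:
          model.fit(X[:200], y[:200])
          score = model.predict(X[200:])                 # higher score = higher predicted risk
          cindex = concordance_index_censored(event[200:], time[200:], score)[0]
          print(name, round(cindex, 3))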

  9. Semantic interference from distractor pictures in single-picture naming: evidence for competitive lexical selection.

    Science.gov (United States)

    Jescheniak, Jörg D; Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M

    2014-10-01

    Picture-naming studies have demonstrated interference from semantic-categorically related distractor words, but not from corresponding distractor pictures, and the lack of generality of the interference effect has been argued to challenge theories viewing lexical selection in speech production as a competitive process. Here, we demonstrate that semantic interference from context pictures does become visible, if sufficient attention is allocated to them. We combined picture naming with a spatial-cuing procedure. When participants' attention was shifted to the distractor, semantically related distractor pictures interfered with the response, as compared with unrelated distractor pictures. This finding supports models conceiving lexical retrieval as competitive (Levelt, Roelofs, & Meyer, 1999) but is difficult to reconcile with the response exclusion hypothesis (Finkbeiner & Caramazza, 2006b) proposed as an alternative.

  10. Active Learning in the Era of Big Data

    Energy Technology Data Exchange (ETDEWEB)

    Jamieson, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, IV, Warren L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning. Because of this, real-world testing and comparing of active learning algorithms requires collecting new datasets (adaptively), rather than simply applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research. To facilitate the development, testing and deployment of active learning for real applications, we have built an open-source software system for large-scale active learning research and experimentation. The system, called NEXT, provides a unique platform for real-world, reproducible active learning research. This paper details the challenges of building the system and demonstrates its capabilities with several experiments. The results show how experimentation can help expose strengths and weaknesses of active learning algorithms, in sometimes unexpected and enlightening ways.
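
    The core adaptive loop that such a platform orchestrates can be sketched as pool-based uncertainty sampling: repeatedly fit a model on the labeled set and query the unlabeled example the model is least certain about. This is a minimal single-machine sketch on synthetic data, not NEXT's actual API.

```python
# Hedged sketch of pool-based uncertainty sampling, the kind of adaptive
# data-collection loop an active learning platform coordinates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
# Seed the labeled set with a few examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(y)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                                    # 20 query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain example
    labeled.append(query)                              # in practice: ask a human oracle
    pool.remove(query)

print("accuracy after active queries:", model.score(X, y))
```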

  11. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension that is related to storage, analytics and visualization of big data; the human aspects of big data; and, in addition, the process management dimension that involves in a technological and business approach the aspects of big data management.

  12. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  13. Biomedical Big Data Training Collaborative (BBDTC): An effort to bridge the talent gap in biomedical science and research.

    Science.gov (United States)

    Purawat, Shweta; Cowart, Charles; Amaro, Rommie E; Altintas, Ilkay

    2016-06-01

    The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports Big Data skill training for biomedical scientists at all levels, and from varied backgrounds. The natural hierarchy of courses allows them to be broken into and handled as modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new and different, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share it with individual users or the wider public. BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and Virtualbox virtualization technologies, and they are available as: (i) downloadable lightweight Virtualbox Images providing a standardized software tool environment with software packages and test data on their personal machines, and (ii) remotely accessible Amazon EC2 Virtual Machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open Biomedical big data training courses with lecture contents, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can use at their own pace. A four course biomedical big data series is

  14. The Big Bang: UK Young Scientists' and Engineers' Fair 2010

    Science.gov (United States)

    Allison, Simon

    2010-01-01

    The Big Bang: UK Young Scientists' and Engineers' Fair is an annual three-day event designed to promote science, technology, engineering and maths (STEM) careers to young people aged 7-19 through experiential learning. It is supported by stakeholders from business and industry, government and the community, and brings together people from various…

  15. Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.

    Science.gov (United States)

    Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi

    2018-04-01

    In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is to develop a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much bigger than the size of relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is that a robust image enhancement framework is established based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can well enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
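
    The blind-IQA recipe described here can be sketched in a few lines: compute hand-crafted brightness, contrast and sharpness statistics from an image and learn a regressor that maps them to a quality score. The three features and the random-forest regressor below are simplified stand-ins for the paper's 17 features and its learned module, and the training data is synthetic.

```python
# Hedged sketch of a no-reference IQA pipeline: simple image statistics fed to a
# learned regressor. Features and training data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def iqa_features(img):
    """img: 2-D grayscale array scaled to [0, 1]."""
    brightness = img.mean()
    contrast = img.std()
    gx, gy = np.gradient(img)
    sharpness = np.mean(gx ** 2 + gy ** 2)   # gradient energy as a sharpness proxy
    return [brightness, contrast, sharpness]

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(200)]   # placeholder "training images"
mos = [rng.uniform(1, 5) for _ in images]             # placeholder mean opinion scores

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit([iqa_features(im) for im in images], mos)
print("predicted quality:", model.predict([iqa_features(images[0])])[0])
```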

  16. Assessing STEM content learning: using the Arctic's changing climate to develop 21st century learner

    Science.gov (United States)

    Henderson, G. R.; Durkin, S.; Moran, A.

    2016-12-01

    In recent years the U.S. federal government has called for an increased focus on science, technology, engineering, and mathematics (STEM) in the educational system to ensure that there will be sufficient technical expertise to meet the needs of business and industry. As a direct result of this STEM emphasis, the number of outreach activities aimed at actively engaging students in STEM learning has surged. Such activities, frequently in the form of summer camps led by university faculty, have targeted primary and secondary school students with the goal of growing student interest in STEM majors and STEM careers. This study assesses short-term content learning using a climate module that highlights rapidly changing Arctic climate conditions to illustrate concepts of radiative energy balance and climate feedback. Hands-on measurement of shortwave and longwave radiation using simple instrumentation is used to demonstrate concepts that are then related back to the "big picture" Arctic issue. Pre- and post-module questionnaires were used to assess content learning, as this learning type has been identified as the basis for STEM literacy and the vehicle by which 21st century learning skills are usually developed. In this instance, students applied subject knowledge they gained by taking radiation measurements to better understand the real-world problem of climate change.

  17. Gossip Management at Universities Using Big Data Warehouse Model Integrated with a Decision Support System

    Directory of Open Access Journals (Sweden)

    Pelin Vardarlier

    2016-01-01

    Big Data has recently been used for many purposes, such as medicine, marketing and sports. It has helped improve management decisions. However, for almost each case a unique data warehouse should be built to benefit from the merits of data mining and Big Data. Hence, each time we start from scratch to form and build a Big Data Warehouse. In this study, we propose a Big Data Warehouse and a model for universities to be used for information management, or more specifically, gossip management. The overall model is a decision support system that may help university administrations when they are making decisions and also provide them with information or gossip being circulated among students and staff. In the model, unsupervised machine learning algorithms have been employed. A prototype of the proposed system has also been presented in the study. User-generated data has been collected from students in order to learn gossip and students' problems related to school, classes, staff and instructors. The findings and results of the pilot study suggest that social media messages among students may give important clues about the happenings at school and this information may be used for management purposes. The model may be developed and implemented not only by universities but also by some other organisations.

  18. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  19. Place-Based Picture Books as an Adult Learning Tool: Supporting Agricultural Learning in Papua New Guinea

    Science.gov (United States)

    Simoncini, Kym; Pamphilon, Barbara; Mikhailovich, Katja

    2017-01-01

    This article describes the rationale, development, and outcomes of two place-based, dual-language picture books with agricultural messages for women farmers and their families in Papua New Guinea. The purpose of the books was to disseminate better agricultural and livelihood practices to women farmers with low literacy. The books were designed and…

  20. Examining the Big-Fish-Little-Pond Effect on Students' Self-Concept of Learning Science in Taiwan Based on the TIMSS Databases

    Science.gov (United States)

    Liou, Pey-Yan

    2014-08-01

    The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the student-level and school-level science achievement on student self-concept of learning science. The results indicated that student science achievement was positively associated with individual self-concept of learning science in both TIMSS 2003 and 2007. On the contrary, while school-average science achievement was negatively related to student self-concept in TIMSS 2003, it had no statistically significant relationship with student self-concept in TIMSS 2007. The findings of this study shed light on possible explanations for the existence of BFLPE and also lead to an international discussion on the generalization of BFLPE.
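
    The hierarchical (multilevel) model used in this kind of BFLPE analysis can be sketched as follows: student self-concept regressed on individual achievement and school-average achievement, with a random intercept per school. The statsmodels library, the variable names and the simulated data are assumptions made for illustration, not the study's actual TIMSS variables.

```python
# Hedged sketch of a two-level BFLPE-style model: individual achievement plus
# school-mean achievement predicting self-concept, with a random school intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
schools = np.repeat(np.arange(30), 40)                 # 30 schools x 40 students
ach = rng.normal(500, 80, size=schools.size)
school_mean = pd.Series(ach).groupby(schools).transform("mean")
self_concept = 0.01 * ach - 0.008 * school_mean + rng.normal(0, 1, schools.size)

df = pd.DataFrame({"self_concept": self_concept, "ach": ach,
                   "school_mean": school_mean, "school": schools})
fit = smf.mixedlm("self_concept ~ ach + school_mean", df, groups=df["school"]).fit()
print(fit.summary())   # a negative school_mean coefficient is the BFLPE signature
```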

  1. The power of pictures: Vertical picture angles in power pictures

    NARCIS (Netherlands)

    S.R. Giessner (Steffen); M.K. Ryan (Michelle); T.W. Schubert (Thomas); N. van Quaquebeke (Niels)

    2011-01-01

    Conventional wisdom suggests that variations in vertical picture angle cause the subject to appear more powerful when depicted from below and less powerful when depicted from above. However, do the media actually use such associations to represent individual differences in

  2. Functional connectomics from a "big data" perspective.

    Science.gov (United States)

    Xia, Mingrui; He, Yong

    2017-10-15

    In the last decade, explosive growth regarding functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinic) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
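
    As a small illustration of the graph theoretical analysis mentioned among these methods, the sketch below thresholds a functional connectivity (correlation) matrix into a binary graph and computes two common network summaries. The random time series, the threshold value and the use of networkx are illustrative assumptions only.

```python
# Hedged sketch of one graph-theoretical step in connectomics: threshold a
# region-by-region correlation matrix and compute simple network metrics.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 90))            # 200 time points x 90 brain regions
fc = np.corrcoef(ts.T)                     # functional connectivity matrix
np.fill_diagonal(fc, 0)

adj = (fc > 0.2).astype(int)               # simple absolute threshold (illustrative)
G = nx.from_numpy_array(adj)
print("mean clustering:", nx.average_clustering(G))
print("global efficiency:", nx.global_efficiency(G))
```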

  3. The Picture Exchange Communication System: Communicative Outcomes for Young Children with Disabilities.

    Science.gov (United States)

    Schwartz, Ilene S.; Garfinkle, Ann N.; Bauer, Janet

    1998-01-01

    Presents two studies documenting the use of the Picture Exchange Communication System (PECS) for 31 preschool children with severe disabilities. Initial findings indicated the children could learn to use PECS quickly and efficiently. The second study, which included 18 participants, found that PECS use generalized to untrained settings. (Author/CR)

  4. Lesson 6. Picture unsharpness

    International Nuclear Information System (INIS)

    Chikirdin, Eh.G.

    1999-01-01

    A lecture concerning picture sharpness in biomedical radiography is presented. The notion of picture sharpness, and of visual acuity as an analyser of picture sharpness, is specified. Attention is paid to the POX-curve as a statistical method for the assessment of visual acuity. The concepts of the sensitivity of an X-ray image visualization system, together with its specificity and accuracy, are considered. Among the sharpness parameters of a visualization system, the resolution, resolving power and picture unsharpness are discussed. It is shown that the gradation and sharpness characteristics of the image are closely correlated, which requires attention in practice to the factors that determine them [ru]

  5. Distributed Coordinate Descent Method for Learning with Big Data

    OpenAIRE

    Richtárik, Peter; Takáč, Martin

    2013-01-01

    In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bound...
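
    The coordinate update that Hydra applies in parallel can be illustrated on a single machine: for a ridge-regularized least-squares objective, each randomly chosen coordinate has a closed-form update, and the residual is maintained incrementally. The sketch below is an illustrative serial version under these assumptions, not the distributed algorithm or its step-size analysis.

```python
# Hedged single-machine sketch of randomized coordinate descent with a
# closed-form update for 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
b = rng.normal(size=200)
lam = 0.1
x = np.zeros(A.shape[1])
r = b - A @ x                               # residual, kept up to date

col_sq = (A ** 2).sum(axis=0)               # precomputed a_j^T a_j
for _ in range(5000):
    j = rng.integers(A.shape[1])            # pick a random coordinate
    # minimize the objective over x_j with all other coordinates fixed
    new_xj = (A[:, j] @ r + col_sq[j] * x[j]) / (col_sq[j] + lam)
    r -= A[:, j] * (new_xj - x[j])          # maintain r = b - A x
    x[j] = new_xj

print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * lam * np.sum(x ** 2))
```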

  6. Big Books’ as Mother Tongue-Based Instructional Materials in Bicol for Grade One Pupils

    Directory of Open Access Journals (Sweden)

    Magdalena M. Ocbian,

    2015-11-01

    Language experts claim that it is easier for pupils to learn when the mother tongue is used in the teaching-learning process, including the learning of a second language. This study determined the reading comprehension level of Grade 1 pupils in Bulusan Central School for school year 2013-2014 as input for developing big books written in the vernacular that can be used as reading materials for Grade 1 pupils. Results of the evaluation revealed that the pupils belong to the frustration and instructional levels in the literal skill; most are frustration readers in the interpretative and evaluative skills; but they are independent readers in the applied skills; hence, they have a low level of reading comprehension. Based on the results of the study, three big books were produced as MTB-MLE instructional materials in Bicol to develop or enhance Grade 1 pupils' reading comprehension. Teaching guides were likewise developed.

  7. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a registration culture and IT-competent employees and customers, which make a leading position possible, but only if companies get ready for the next big data wave.

  8. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an always-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.

  9. Contemporary Research Discourse and Issues on Big Data in Higher Education

    Science.gov (United States)

    Daniel, Ben

    2017-01-01

    The increasing availability of digital data in higher education provides an extraordinary resource for researchers to undertake educational research, targeted at understanding challenges facing the sector. Big data can stimulate new ways to transform processes relating to learning and teaching, and helps identify useful data, sources of evidence…

  10. "Big Questions" in the Introductory Religion Classroom: Expanding the Integrative Approach

    Science.gov (United States)

    Deffenbaugh, Daniel G.

    2011-01-01

    Recent research by Barbara Walvoord suggests a perceived disparity between faculty learning objectives and students' desire to engage "big questions" in the introductory religion classroom. Faculty opinions of such questions are varied, ranging from a refusal to employ any approach that diverts attention away from critical thinking, to a…

  11. Effects of Iconicity on Requesting with the Picture Exchange Communication System in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Angermeier, Katie; Schlosser, Ralf W.; Luiselli, James K.; Harrington, Caroline; Carter, Beth

    2008-01-01

    Research on graphic symbol learning suggests that symbols with a greater visual resemblance to their referents (greater iconicity) are more easily learned. The iconicity hypothesis has not yet been explored within the intervention protocol of the Picture Exchange Communication System (PECS). Within the PECS protocol, participants do not point to a…

  12. The Power of Pictures : Vertical Picture Angles in Power Pictures

    NARCIS (Netherlands)

    Giessner, Steffen R.; Ryan, Michelle K.; Schubert, Thomas W.; van Quaquebeke, Niels

    2011-01-01

    Conventional wisdom suggests that variations in vertical picture angle cause the subject to appear more powerful when depicted from below and less powerful when depicted from above. However, do the media actually use such associations to represent individual differences in power? We argue that the

  13. Effect of Time Delay on Recognition Memory for Pictures: The Modulatory Role of Emotion

    Science.gov (United States)

    Wang, Bo

    2014-01-01

    This study investigated the modulatory role of emotion in the effect of time delay on recognition memory for pictures. Participants viewed neutral, positive and negative pictures, and took a recognition memory test 5 minutes, 24 hours, or 1 week after learning. The findings are: 1) For neutral, positive and negative pictures, overall recognition accuracy in the 5-min delay did not significantly differ from that in the 24-h delay. For neutral and positive pictures, overall recognition accuracy in the 1-week delay was lower than in the 24-h delay; for negative pictures, overall recognition in the 24-h and 1-week delay did not significantly differ. Therefore negative emotion modulates the effect of time delay on recognition memory, maintaining retention of overall recognition accuracy only within a certain frame of time. 2) For the three types of pictures, recollection and familiarity in the 5-min delay did not significantly differ from that in the 24-h and the 1-week delay. Thus emotion does not appear to modulate the effect of time delay on recollection and familiarity. However, recollection in the 24-h delay was higher than in the 1-week delay, whereas familiarity in the 24-h delay was lower than in the 1-week delay. PMID:24971457

  14. Digital Picture Production and Picture aesthetic Competency in It-didactic Design

    DEFF Research Database (Denmark)

    Rasmussen, Helle

    , that IT and media are seldom used by 21 % of teachers in Visual Arts and that 7 % of teachers in this subject never use IT and media in these lessons. Art teachers – among others – also express the need for continuing education (Ministeriet for Børn og Undervisning 2011). Since lessons in digital picture...... production have been a demand in Visual Arts in Danish schools for more than two decades, these conditions call for the development of new didactic knowledge. Besides, new genres and ways of using digital pictures and media continuously develop (Sørensen 2002). This ought to be an incessant challenge...... subject Visual Arts – and across subjects in school. The overall research question has been: How can IT-didactic designs support lessons in the production of complex meaning in digital pictures and increase the development of pupils' picture aesthetic competences? By using the expression ‘complex...

  15. Pictures in Training

    Science.gov (United States)

    Miller, Elmo E.

    1973-01-01

    Pictures definitely seem to help training, but a study for the military finds these pictures need not be in moving form, such as films or videotape. Just how the pictorial techniques should be employed and with how much success depends on individual trainee and program differences. (KP)

  16. Evidence for Evolution as Support for Big Bang

    Science.gov (United States)

    Gopal-Krishna

    1997-12-01

    With the exception of ZERO, the concept of BIG BANG is by far the most bizarre creation of the human mind. Three classical pillars of the Big Bang model of the origin of the universe are generally thought to be: (i) The abundances of the light elements; (ii) the microwave background radiation; and (iii) the change with cosmic epoch in the average properties of galaxies (both active and non-active types). Evidence is also mounting for redshift dependence of the intergalactic medium, as discussed elsewhere in this volume in detail. In this contribution, I endeavour to highlight a selection of recent advances pertaining to the third category. The widely different levels of confidence in the claimed observational constraints in the field of cosmology can be gauged from the following excerpts from two leading astrophysicists: "I would bet odds of 10 to 1 on the validity of the general 'hot Big Bang' concept as a description of how our universe has evolved since it was around 1 sec. old" -M. Rees (1995), in 'Perspectives in Astrophysical Cosmology' CUP. "With the much more sensitive observations available today, no astrophysical property shows evidence of evolution, such as was claimed in the 1950s to disprove the Steady State theory" -F. Hoyle (1987), in 'Fifty years in cosmology', B. M. Birla Memorial Lecture, Hyderabad, India. The burgeoning multi-wavelength culture in astronomy has provided a tremendous boost to observational cosmology in recent years. We now proceed to illustrate this with a sequence of examples which reinforce the picture of an evolving universe. Also provided are some relevant details of the data used in these studies so that their scope can be independently judged by the readers.

  17. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure

    Science.gov (United States)

    Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.

    2018-06-01

    Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques to analyze big data have emerged, making possible the analysis of massive datasets. These new methods make use of physical resources like cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world and many data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used), in order to predict earthquake magnitudes within the next seven days. The Apache Spark framework, the H2O library in R and Amazon cloud infrastructure were used, reporting very promising results.
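
    The ensemble-learning idea can be illustrated with a toy sketch: average the predictions of several base regressors trained on catalog-derived features to estimate the largest magnitude expected in the next seven days. The feature names and synthetic data are placeholders, not the paper's seismicity indicators, and the sketch omits the Spark/H2O scale-out used in the study.

```python
# Hedged toy sketch of ensemble regression for magnitude prediction:
# average several base regressors trained on catalog-style features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))             # e.g., b-value, event rate, mean depth, ...
y = 3.0 + 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.2, 2000)   # synthetic magnitudes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [Ridge(), RandomForestRegressor(random_state=0),
          GradientBoostingRegressor(random_state=0)]
pred = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)
print("ensemble MAE:", np.mean(np.abs(pred - y_te)))
```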

  18. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating we may find information, facts, social insights and benchmarks that were once virtually impossible to find or were simply inexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and the society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  19. String theory and pre-big bang cosmology

    Science.gov (United States)

    Gasperini, M.; Veneziano, G.

    2016-09-01

    In string theory, the traditional picture of a Universe that emerges from the inflation of a very small and highly curved space-time patch is a possibility, not a necessity: quite different initial conditions are possible, and not necessarily unlikely. In particular, the duality symmetries of string theory suggest scenarios in which the Universe starts inflating from an initial state characterized by very small curvature and interactions. Such a state, being gravitationally unstable, will evolve towards higher curvature and coupling, until string-size effects and loop corrections make the Universe "bounce" into a standard, decreasing-curvature regime. In such a context, the hot big bang of conventional cosmology is replaced by a "hot big bounce" in which the bouncing and heating mechanisms originate from the quantum production of particles in the high-curvature, large-coupling pre-bounce phase. Here we briefly summarize the main features of this inflationary scenario, proposed a quarter century ago. In its simplest version (where it represents an alternative and not a complement to standard slow-roll inflation) it can produce a viable spectrum of density perturbations, together with a tensor component characterized by a "blue" spectral index with a peak in the GHz frequency range. That means, phenomenologically, a very small contribution to a primordial B-mode in the CMB polarization, and the possibility of a large enough stochastic background of gravitational waves to be measurable by present or future gravitational wave detectors.

  20. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, provided the agent keeps a memory of its errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.

  1. Predicting Refractive Surgery Outcome: Machine Learning Approach With Big Data.

    Science.gov (United States)

    Achiron, Asaf; Gur, Zvi; Aviv, Uri; Hilely, Assaf; Mimouni, Michael; Karmona, Lily; Rokach, Lior; Kaiserman, Igor

    2017-09-01

    To develop a decision forest for prediction of laser refractive surgery outcome. Data from consecutive cases of patients who underwent LASIK or photorefractive surgeries during a 12-year period in a single center were assembled into a single dataset. Training of machine-learning classifiers and testing were performed with a statistical classifier algorithm. The decision forest was created by feature vectors extracted from 17,592 cases and 38 clinical parameters for each patient. A 10-fold cross-validation procedure was applied to estimate the predictive value of the decision forest when applied to new patients. Analysis included patients younger than 40 years who were not treated for monovision. Efficacy of 0.7 or greater and 0.8 or greater was achieved in 16,198 (92.0%) and 14,945 (84.9%) eyes, respectively. Efficacy of less than 0.4 and less than 0.5 was achieved in 322 (1.8%) and 506 (2.9%) eyes, respectively. Patients in the low efficacy group showed statistically significant differences compared with the high efficacy group (≥ 0.8), yet were clinically similar (mean differences between groups of 0.7 years, of 0.43 mm in pupil size, of 0.11 D in cylinder, of 0.22 logMAR in preoperative CDVA, of 0.11 mm in optical zone size, of 1.03 D in actual sphere treatment, and of 0.64 D in actual cylinder treatment). The preoperative subjective CDVA had the highest gain (most important to the model). Correlation analysis revealed significantly decreased efficacy with increased age (r = -0.67); the application of machine learning to big data from refractive surgeries may therefore be of interest. [J Refract Surg. 2017;33(9):592-597.]. Copyright 2017, SLACK Incorporated.
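
    The evaluation pattern described here, a decision forest scored with 10-fold cross-validation, can be sketched as follows. The synthetic features stand in for the 38 clinical parameters and the binary label for the low/high efficacy groups; this is an illustrative assumption, not the study's data or model configuration.

```python
# Hedged sketch: a decision (random) forest evaluated with 10-fold cross-validation,
# mirroring the validation pattern described in the abstract. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 38))                      # stand-in for 38 clinical parameters
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 1.5).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(forest, X, y, cv=10)        # 10-fold cross-validation
print("mean CV accuracy:", scores.mean())
```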

  2. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  3. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  4. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    Science.gov (United States)

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  5. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  6. Mariner Mars 1971 television picture catalog. Volume 2: Sequence design and picture coverage

    Science.gov (United States)

    Koskela, P. E.; Helton, M. R.; Seeley, L. N.; Zawacki, S. J.

    1972-01-01

    A collection of data relating to the Mariner 9 TV pictures is presented. The data are arranged to offer speedy identification of what took place during entire science cycles, on individual revolutions, and during individual science links or sequences. Summary tables present the nominal design for each of the major picture-taking cycles, along with the sequences actually taken on each revolution. These tables permit identification, at a glance, of all TV sequences and the corresponding individual pictures for the first 262 revolutions (primary mission). A list of TV pictures, categorized according to their latitude and longitude, is also provided. Orthographic and/or mercator plots for all pictures, along with pertinent numerical data for their center points, are presented. Other tables and plots of interest are also included. This document is based upon data contained in the Supplementary Experiment Data Record (SEDR) files as of 21 August 1972.

  7. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  8. Comparison of Reversal Test Pictures among Three Groups of Students: Normal, Education Mental Retarded and Students with Learning Disabilities in Tehran

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Koushesh

    2007-01-01

    Objective: The Reversal visual perception discrimination test is one of the dyslexia diagnostic tests for children; it can be administered in groups (group-based) and is reliable for detecting these disorders in primary-school students, especially those in their first weeks or months of schooling. The aim of this survey is to compare Reversal test pictures among three groups of students aged 8-12 years old under the coverage of the Tehran Welfare Department: normal students, educable mentally retarded students and students with learning disabilities. Materials & Methods: This comparative cross-sectional study was performed on 150 girls and boys of the mentioned groups, selected by simple random selection. Results: The findings suggested that there was a significant difference between the surveyed groups (P=0.001). The highest scores were related to normal students and the lowest scores to the educable mentally retarded group. The gap in scores between the educable mentally retarded group and the normal students was larger than that between the educable mentally retarded group and the students with learning disabilities. Conclusion: This survey indicates that students with learning disabilities (dyslexia) have problems in their visual perception, and this test can help to diagnose and identify abnormal children as soon as possible in order to provide better treatment.

  9. Summarizing an Ontology: A "Big Knowledge" Coverage Approach.

    Science.gov (United States)

    Zheng, Ling; Perl, Yehoshua; Elhanan, Gai; Ochs, Christopher; Geller, James; Halper, Michael

    2017-01-01

    Maintenance and use of a large ontology, consisting of thousands of knowledge assertions, are hampered by its scope and complexity. It is important to provide tools for summarization of ontology content in order to facilitate user "big picture" comprehension. We present a parameterized methodology for the semi-automatic summarization of major topics in an ontology, based on a compact summary of the ontology, called an "aggregate partial-area taxonomy", followed by manual enhancement. An experiment is presented to test the effectiveness of such summarization measured by coverage of a given list of major topics of the corresponding application domain. SNOMED CT's Specimen hierarchy is the test-bed. A domain-expert provided a list of topics that serves as a gold standard. The enhanced results show that the aggregate taxonomy covers most of the domain's main topics.

  10. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions based on the fact that we identify patterns in the data, rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  11. Big Data Analytics for Prostate Radiotherapy.

    Science.gov (United States)

    Coates, James; Souhami, Luis; El Naqa, Issam

    2016-01-01

    Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose-volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the "RadoncSpace") in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches to a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches.

  12. More Differences or More Similarities Regarding Education in Big, Middle-sized and Small Companies

    Directory of Open Access Journals (Sweden)

    Marjana Merkač

    2001-12-01

    The article presents the results of research on the education and qualification of employees in small, middle-sized and big Slovenian companies. The research shows some differences regarding the attitude to the development of employees as a part of a company's business strategy, some obstacles to developing their abilities, and connections between job satisfaction and motivation for learning. It also shows how much it matters, for education and qualification, whether an individual works for a big, middle-sized, or small company.

  13. Learning Analytics: Challenges and Limitations

    Science.gov (United States)

    Wilson, Anna; Watson, Cate; Thompson, Terrie Lynn; Drew, Valerie; Doyle, Sarah

    2017-01-01

    Learning analytic implementations are increasingly being included in learning management systems in higher education. We lay out some concerns with the way learning analytics--both data and algorithms--are often presented within an unproblematized Big Data discourse. We describe some potential problems with the often implicit assumptions about…

  14. The amazing unity of the Universe and its origin in the Big Bang

    CERN Document Server

    van den Heuvel, Edward

    2016-01-01

    In the first chapters the author describes how our knowledge of the position of Earth in space and time has developed, thanks to the work of many generations of astronomers and physicists. He discusses how our position in the Galaxy was discovered, and how in 1929, Hubble uncovered the fact that the Universe is expanding, leading to the picture of the Big Bang. He then explains how astronomers have found that the laws of physics that were discovered here on Earth and in the Solar System (the laws of mechanics, gravity, atomic physics, electromagnetism, etc.) are valid throughout the Universe. This is illustrated by the fact that all matter in the Universe consists of atoms of the same chemical elements that we know on Earth. This unity is all the more surprising when one realizes that in the original Big Bang theory, different parts of the Universe could never have communicated with each other. It then is a mystery how they could have shared the same physical laws. This problem was solved by the introduction ...

  15. MOSAICKING MEXICO - THE BIG PICTURE OF BIG DATA

    Directory of Open Access Journals (Sweden)

    F. Hruby

    2016-06-01

    The project presented in this article is to create a completely seamless and cloud-free mosaic of Mexico at a resolution of 5 m, using approximately 4,500 RapidEye images. To complete this project in a timely manner and with limited operators, a number of processing architectures were required to handle a data volume of 12 terabytes. This paper will discuss the different operations realized to complete this project, which include preprocessing, mosaic generation and post-mosaic editing. Prior to mosaic generation, it was necessary to filter the 50,000 RapidEye images captured over Mexico between 2011 and 2014 to identify the top candidate images, based on season and cloud cover. Upon selecting the top candidate images, PCI Geomatics' GXL system was used to reproject, color balance and generate seamlines for the output 1 TB+ mosaic. This paper will also discuss innovative techniques used by the GXL for color balancing large volumes of imagery with substantial radiometric differences. Furthermore, post-mosaicking steps, such as exposure correction, cloud and cloud shadow elimination, will be presented.
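
    The candidate-selection step described above (filtering tens of thousands of scenes by acquisition season and cloud cover before mosaicking) can be sketched as a simple catalog filter. The field names, thresholds and dry-season months below are illustrative assumptions, not the actual RapidEye catalog schema or the project's selection criteria.

```python
# Hedged sketch of candidate-scene selection: keep low-cloud, dry-season scenes
# and rank them by cloud cover. Catalog fields and thresholds are illustrative.
from datetime import date

catalog = [
    {"scene_id": "RE_001", "acquired": date(2013, 3, 14), "cloud_cover": 0.02},
    {"scene_id": "RE_002", "acquired": date(2012, 8, 30), "cloud_cover": 0.35},
    {"scene_id": "RE_003", "acquired": date(2014, 2, 2),  "cloud_cover": 0.00},
]

DRY_SEASON_MONTHS = {11, 12, 1, 2, 3, 4}      # prefer dry-season acquisitions
MAX_CLOUD = 0.10

candidates = sorted(
    (s for s in catalog
     if s["cloud_cover"] <= MAX_CLOUD and s["acquired"].month in DRY_SEASON_MONTHS),
    key=lambda s: s["cloud_cover"])
print([s["scene_id"] for s in candidates])
```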

  16. Directed forgetting: Comparing pictures and words.

    Science.gov (United States)

    Quinlan, Chelsea K; Taylor, Tracy L; Fawcett, Jonathan M

    2010-03-01

    The authors investigated directed forgetting as a function of the stimulus type (picture, word) presented at study and test. In an item-method directed forgetting task, study items were presented 1 at a time, each followed with equal probability by an instruction to remember or forget. Participants exhibited greater yes-no recognition of remember than forget items for each of the 4 study-test conditions (picture-picture, picture-word, word-word, word-picture). However, this difference was significantly smaller when pictures were studied than when words were studied. This finding demonstrates that the magnitude of the directed forgetting effect can be reduced by high item memorability, such as when the picture superiority effect is operating. This suggests caution in using pictures at study when the goal of an experiment is to examine potential group differences in the magnitude of the directed forgetting effect. 2010 APA, all rights reserved.

  17. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to look somewhat more deeply into the theme, the definitions and the various questions related to big data. In this first part I will try to set out some of this concerning Big Data theory and

  18. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  19. Who Prophets from Big Data in Education? New Insights and New Challenges

    Science.gov (United States)

    Lynch, Collin F.

    2017-01-01

    Big Data can radically transform education by enabling personalized learning, deep student modeling, and true longitudinal studies that compare changes across classrooms, regions, and years. With these promises, however, come risks to individual privacy and educational validity, along with deep policy and ethical issues. Education is largely a…

  20. After the Big Bang: What's Next in Design Education? Time to Relax?

    Science.gov (United States)

    Fleischmann, Katja

    2015-01-01

    The article "Big Bang technology: What's next in design education, radical innovation or incremental change?" (Fleischmann, 2013) appeared in the "Journal of Learning Design" Volume 6, Issue 3 in 2013. Two years on, Associate Professor Fleischmann reflects upon her original article within this article. Although it has only been…

  1. Radiodiagnosis of lung picture changes

    International Nuclear Information System (INIS)

    Kamenetskij, M.S.; Lezova, T.F.

    1988-01-01

    The roentgenological picture of lung picture changes in different pathological states of the lungs and the heart is described. A diagnostic algorithm developed for the syndrome of lung picture change, and the rules for its application, are given. 5 refs.; 9 figs

  2. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  3. Clinical judgement in the era of big data and predictive analytics.

    Science.gov (United States)

    Chin-Yee, Benjamin; Upshur, Ross

    2017-12-13

    Clinical judgement is a central and longstanding issue in the philosophy of medicine which has generated significant interest over the past few decades. In this article, we explore different approaches to clinical judgement articulated in the literature, focusing in particular on data-driven, mathematical approaches which we contrast with narrative, virtue-based approaches to clinical reasoning. We discuss the tension between these different clinical epistemologies and further explore the implications of big data and machine learning for a philosophy of clinical judgement. We argue for a pluralistic, integrative approach, and demonstrate how narrative, virtue-based clinical reasoning will remain indispensable in an era of big data and predictive analytics. © 2017 John Wiley & Sons, Ltd.

  4. [Big data, medical language and biomedical terminology systems].

    Science.gov (United States)

    Schulz, Stefan; López-García, Pablo

    2015-08-01

    A variety of rich terminology systems, such as thesauri, classifications, nomenclatures and ontologies support information and knowledge processing in health care and biomedical research. Nevertheless, human language, manifested as individually written texts, persists as the primary carrier of information, in the description of disease courses or treatment episodes in electronic medical records, and in the description of biomedical research in scientific publications. In the context of the discussion about big data in biomedicine, we hypothesize that the abstraction of the individuality of natural language utterances into structured and semantically normalized information facilitates the use of statistical data analytics to distil new knowledge out of textual data from biomedical research and clinical routine. Computerized human language technologies are constantly evolving and are increasingly ready to annotate narratives with codes from biomedical terminology. However, this depends heavily on linguistic and terminological resources. The creation and maintenance of such resources is labor-intensive. Nevertheless, it is sensible to assume that big data methods can be used to support this process. Examples include the learning of hierarchical relationships, the grouping of synonymous terms into concepts and the disambiguation of homonyms. Although clear evidence is still lacking, the combination of natural language technologies, semantic resources, and big data analytics is promising.

  5. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    Science.gov (United States)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to different classes of algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with a few use cases described in detail.

  6. Traffic modelling for Big Data backed telecom cloud

    OpenAIRE

    Via Baraldés, Anna

    2016-01-01

    The objective of this project is to provide traffic models based on the characteristics of new services. Specifically, we focus on modelling the traffic between origin-destination node pairs (also known as OD pairs) in a telecom network. Two use cases are distinguished: i) traffic generation in the context of simulation, and ii) traffic modelling for prediction in the context of big-data backed telecom cloud systems. To this aim, several machine learning and statistical models and techniques are studi...

  7. The 2025 Big "G" Geriatrician: Defining Job Roles to Guide Fellowship Training.

    Science.gov (United States)

    Simpson, Deborah; Leipzig, Rosanne M; Sauvigné, Karen

    2017-10-01

    Changes in health care that are already in progress, including value- and population-based care, use of new technologies for care, big data and machine learning, and the patient as consumer and decision maker, will determine the job description for geriatricians practicing in 2025. Informed by these future certainties, 115 geriatrics educators attending the 2016 Donald W. Reynolds Foundation Annual meeting identified five 2025 geriatrician job roles: complexivist; consultant; health system leader and innovator; functional preventionist; and educator for big "G" and little "g" providers. By identifying these job roles, geriatrics fellowship training can be preemptively redesigned. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.

  8. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    Science.gov (United States)

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The
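
    As a rough illustration of the crowd-then-classifier loop (not the AIDR/Aerial Clicker code itself), the sketch below trains a supervised model on crowd-labelled image tiles and scores new tiles automatically; the feature vectors and labels are simulated, standing in for descriptors extracted from real aerial imagery.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Hypothetical precomputed features for 200 crowd-labelled tiles
        # (e.g. colour/texture descriptors); label 1 = damaged shelter, 0 = other.
        X_labelled = rng.normal(size=(200, 32))
        y_labelled = rng.integers(0, 2, size=200)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_labelled, y_labelled)

        # New, unseen tiles from a UAV flight are then scored automatically.
        X_new = rng.normal(size=(10, 32))
        print(clf.predict_proba(X_new)[:, 1])   # probability that each tile shows damage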

  9. Developing a Mobile Learning Management System for Outdoors Nature Science Activities Based on 5E Learning Cycle

    Science.gov (United States)

    Lai, Ah-Fur; Lai, Horng-Yih; Chuang, Wei-Hsiang; Wu, Zih-Heng

    2015-01-01

    Traditional outdoor learning activities such as inquiry-based learning in nature science encounter many dilemmas. Due to prompt development of mobile computing and widespread of mobile devices, mobile learning becomes a big trend on education. The main purpose of this study is to develop a mobile-learning management system for overcoming the…

  10. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud-based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  11. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  12. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. Unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second later), and the fate of the Universe are all discussed. (U.K.)

  13. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  14. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  15. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unattainable. Big data is generally characterized by several factors: volume, velocity and variety. These factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  16. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    Science.gov (United States)

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Summary Objectives As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results Scientists and healthcare providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and healthcare. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to “small data” would also be useful. PMID:25123717

  17. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  18. Impact of the picture exchange communication system: effects on communication and collateral effects on maladaptive behaviors.

    Science.gov (United States)

    Ganz, Jennifer B; Parker, Richard; Benson, Joanne

    2009-12-01

    Many children with autism require intensive instruction in the use of augmentative or alternative communication systems, such as the Picture Exchange Communication System (PECS). This study investigated the use of PECS with three young boys with autism to determine the impact of PECS training on use of pictures for requesting, use of intelligible words, and maladaptive behaviors. A multiple baseline-probe design with a staggered start was implemented. Results indicated that all of the participants quickly learned to make requests using pictures and that two used intelligible speech following PECS instruction; maladaptive behaviors were variable throughout baseline and intervention phases. Although all of the participants improved in at least one dependent variable, there remain questions regarding who is best suited for PECS and similar interventions.

  19. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Cryptography for Big Data Security - book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... Email: arkady@ll.mit.edu. Chapter 1, Cryptography for Big Data Security, 1.1 Introduction: With the amount

  20. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  1. Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach

    Directory of Open Access Journals (Sweden)

    Daniel Peralta

    2015-01-01

    Full Text Available Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack the scalability to cope with datasets of millions of instances and extract successful results in a limited time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset into blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets of up to 67 million instances and up to 2000 attributes have been managed, showing that this is a suitable framework to perform evolutionary feature selection, improving both the classification accuracy and the runtime when dealing with big data problems.
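
    The described scheme lends itself to a compact illustration. The sketch below is not the paper's implementation: the evolutionary search on each block is replaced by a simple surrogate (absolute feature/label correlation), and the block count, threshold and merging rule are illustrative assumptions; only the map/merge/threshold structure mirrors the description above.

        # Map phase: compute partial feature weights per block of instances.
        # Reduce phase: merge partial weights; a threshold then selects features.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(10_000, 20))
        y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

        def map_phase(X_block, y_block):
            # Stand-in for the per-block evolutionary search: one weight per feature.
            return np.abs(np.array([np.corrcoef(X_block[:, j], y_block)[0, 1]
                                    for j in range(X_block.shape[1])]))

        def reduce_phase(partial_weights):
            # Merge the partial results into a single weight vector (here: the mean).
            return np.mean(partial_weights, axis=0)

        blocks = np.array_split(np.arange(len(y)), 10)            # split instances into blocks
        partials = [map_phase(X[idx], y[idx]) for idx in blocks]  # "map"
        weights = reduce_phase(partials)                          # "reduce"

        selected = np.where(weights > 0.2)[0]                     # threshold-based selection
        print("selected features:", selected)                     # expected to include 3 and 7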

  2. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution...... and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim...... that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data....

  3. Snake pictures draw more early attention than spider pictures in non-phobic women: evidence from event-related brain potentials.

    Science.gov (United States)

    Van Strien, J W; Eijlers, R; Franken, I H A; Huijding, J

    2014-02-01

    Snakes were probably the first predators of mammals and may have been important agents of evolutionary changes in the primate visual system allowing rapid visual detection of fearful stimuli (Isbell, 2006). By means of early and late attention-related brain potentials, we examined the hypothesis that more early visual attention is automatically allocated to snakes than to spiders. To measure the early posterior negativity (EPN), 24 healthy, non-phobic women watched the random rapid serial presentation of 600 snake pictures, 600 spider pictures, and 600 bird pictures (three pictures per second). To measure the late positive potential (LPP), they also watched similar pictures (30 pictures per stimulus category) in a non-speeded presentation. The EPN amplitude was largest for snake pictures, intermediate for spider pictures and smallest for bird pictures. The LPP was significantly larger for both snake and spider pictures when compared to bird pictures. Interestingly, spider fear (as measured by a questionnaire) was associated with EPN amplitude for spider pictures, whereas snake fear was not associated with EPN amplitude for snake pictures. The results suggest that ancestral priorities modulate the early capture of visual attention and that early attention to snakes is more innate and independent of reported fear. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. The growth of data creates a situation in which the classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. Moreover, Big Data is recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. The paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  5. English Idioms and Iranian Beginner Learners: A Focus on Short Stories and Pictures

    Science.gov (United States)

    Mehrpour, Saeed; Mansourzadeh, Nurullah

    2017-01-01

    Idiomatic expressions are among the most difficult and challenging aspects in the realm of lexicon. The focus of the present study was on investigating the effect of short stories and pictures on learning idiomatic expressions by beginner EFL learners. For this aim, 52 Iranian EFL learners were chosen and assigned to three groups randomly: two…

  6. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education

    Directory of Open Access Journals (Sweden)

    Christos Vaitsis

    2014-11-01

    Full Text Available Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches: (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to

  7. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education.

    Science.gov (United States)

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data

  8. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Big data refers to data that is beyond the storage capacity and processing power of conventional systems. The term is used for data sets so large or complex that traditional data processing tools cannot handle them. The size of big data is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; for example, the amount of data produced by people on social networking sites grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It reflects the explosive growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex, and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to capture, organize and resolve these various types of data. In this paper we describe applications, problems and tools of big data and give an overview of big data.

  9. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  10. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  12. RESEARCH ON THE CONSTRUCTION OF REMOTE SENSING AUTOMATIC INTERPRETATION SYMBOL BIG DATA

    Directory of Open Access Journals (Sweden)

    Y. Gao

    2018-04-01

    Full Text Available Remote sensing automatic interpretation symbol (RSAIS) is an inexpensive and fast method of providing precise in-situ information for image interpretation and accuracy. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-based method for massive data storage. Additionally, it introduced an offline and online data update mode and a dynamic data evaluation mechanism, with the aim of creating an efficient approach to RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013–2015 based on the National Geographic Conditions Monitoring Project of China and has been updated annually since 2016. The RSAIS big data has proven to be a good method for large-scale image interpretation and field validation. It is also notable that it has the potential to enable automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.

  13. Research on the Construction of Remote Sensing Automatic Interpretation Symbol Big Data

    Science.gov (United States)

    Gao, Y.; Liu, R.; Liu, J.; Cheng, T.

    2018-04-01

    Remote sensing automatic interpretation symbol (RSAIS) is an inexpensive and fast method of providing precise in-situ information for image interpretation and accuracy. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-based method for massive data storage. Additionally, it introduced an offline and online data update mode and a dynamic data evaluation mechanism, with the aim of creating an efficient approach to RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013-2015 based on the National Geographic Conditions Monitoring Project of China and has been updated annually since 2016. The RSAIS big data has proven to be a good method for large-scale image interpretation and field validation. It is also notable that it has the potential to enable automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.

  14. Emerging Evidence on the Use of Big Data and Analytics in Workplace Learning: A Systematic Literature Review

    Science.gov (United States)

    Giacumo, Lisa A.; Breman, Jeroen

    2016-01-01

    This article provides a systematic literature review about nonprofit and for-profit organizations using "big data" to inform performance improvement initiatives. The review of literature resulted in 4 peer-reviewed articles and an additional 33 studies covering the topic for these contexts. The review found that big data and analytics…

  15. The Medium is the Message: Pictures and Objects Evoke Distinct Conceptual Relations in Parent-Child Conversations

    Science.gov (United States)

    Ware, Elizabeth A.; Gelman, Susan A.; Kleinberg, Felicia

    2013-01-01

    An important developmental task is learning to organize experience by forming conceptual relations among entities (e.g., a lion and a snake can be linked because both are animals; a lion and a cage can be linked because the lion lives in the cage). We propose that representational medium (i.e., pictures vs. objects) plays an important role in influencing which relations children consider. Prior work has demonstrated that pictures more readily evoke broader categories, whereas objects more readily call attention to specific individuals. We therefore predicted that pictures would encourage taxonomic and shared-property relations, whereas objects would encourage thematic and slot-filler relations. We observed 32 mother-child dyads (M child ages = 2.9 and 4.3) playing with pictures and objects, and identified utterances in which they made taxonomic, thematic, shared-property, or slot-filler links between items. The results confirmed our predictions and thus support representational medium as an important factor that influences the conceptual relations expressed during dyadic conversations. PMID:24273367

  16. Reversing the picture superiority effect: a speed-accuracy trade-off study of recognition memory.

    Science.gov (United States)

    Boldini, Angela; Russo, Riccardo; Punia, Sahiba; Avons, S E

    2007-01-01

    Speed-accuracy trade-off methods have been used to contrast single- and dual-process accounts of recognition memory. With these procedures, subjects are presented with individual test items and required to make recognition decisions under various time constraints. In three experiments, we presented words and pictures to be intentionally learned; test stimuli were always visually presented words. At test, we manipulated the interval between the presentation of each test stimulus and that of a response signal, thus controlling the amount of time available to retrieve target information. The standard picture superiority effect was significant in long response deadline conditions (i.e., ≥ 2,000 msec). Conversely, a significant reverse picture superiority effect emerged at short response-signal deadlines (< 200 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory. Alternative accounts are also discussed.

  17. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  18. Improvement of encoding and retrieval in normal and pathological aging with word-picture paradigm.

    Science.gov (United States)

    Iodice, Rosario; Meilán, Juan José G; Carro, Juan

    2015-01-01

    During the aging process, there is a progressive deficit in the encoding of new information and its retrieval. Different strategies are used in order to maintain, optimize or diminish these deficits in people with and without dementia. One of the classic techniques is paired-associate learning (PAL), which is based on improving the encoding of memories, but it has yet to be used to its full potential in people with dementia. In this study, our aim is to corroborate the importance of PAL tasks as instrumental tools for creating contextual cues, during both the encoding and retrieval phases of memory. Additionally, we aim to identify the most effective form of presenting the related items. Pairs of stimuli were shown to healthy elderly people and to patients with moderate and mild Alzheimer's disease. The encoding conditions were as follows: word/word, picture/picture, picture/word, and word/picture. Associative cued recall of the second item in the pair shows that retrieval is higher for the word/picture condition in the two groups of patients with dementia when compared to the other conditions, while word/word is the least effective in all cases. These results confirm that PAL is an effective tool for creating contextual cues during both the encoding and retrieval phases in people with dementia when the items are presented using the word/picture condition. In this way, the encoding and retrieval deficit can be reduced in these people.

  19. Grappling with the Future Use of Big Data for Translational Medicine and Clinical Care.

    Science.gov (United States)

    Murphy, S; Castro, V; Mandl, K

    2017-08-01

    Objectives: Although patients may have a wealth of imaging, genomic, monitoring, and personal device data, it has yet to be fully integrated into clinical care. Methods: We identify three reasons for the lack of integration. The first is that "Big Data" is poorly managed by most Electronic Medical Record Systems (EMRS). The data is mostly available on "cloud-native" platforms that are outside the scope of most EMRs, and even checking if such data is available on a patient often must be done outside the EMRS. The second reason is that extracting features from the Big Data that are relevant to healthcare often requires complex machine learning algorithms, such as determining if a genomic variant is protein-altering. The third reason is that applications that present Big Data need to be modified constantly to reflect the current state of knowledge, such as instructing when to order a new set of genomic tests. In some cases, applications need to be updated nightly. Results: A new architecture for EMRS is evolving which could unite Big Data, machine learning, and clinical care through a microservice-based architecture which can host applications focused on quite specific aspects of clinical care, such as managing cancer immunotherapy. Conclusion: Informatics innovation, medical research, and clinical care go hand in hand as we look to infuse science-based practice into healthcare. Innovative methods will lead to a new ecosystem of applications (Apps) interacting with healthcare providers to fulfill a promise that is still to be determined. Georg Thieme Verlag KG Stuttgart.
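
    A minimal sketch of the kind of narrowly scoped microservice described above is given below, assuming a hypothetical variant store and endpoint: it lives outside the EMR, reports whether cloud-hosted genomic data exist for a patient, and returns only the protein-altering variants for display.

        from flask import Flask, jsonify

        app = Flask(__name__)

        # Stand-in for a cloud-hosted variant store keyed by patient identifier.
        VARIANT_STORE = {
            "patient-001": [{"gene": "BRAF", "variant": "V600E", "protein_altering": True}],
        }

        @app.route("/patients/<patient_id>/variants")
        def list_variants(patient_id):
            variants = VARIANT_STORE.get(patient_id)
            if variants is None:
                return jsonify({"patient": patient_id, "genomic_data_available": False}), 404
            return jsonify({"patient": patient_id,
                            "genomic_data_available": True,
                            "protein_altering": [v for v in variants if v["protein_altering"]]})

        if __name__ == "__main__":
            app.run(port=5000)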

  20. Teaching University Students Cultural Diversity by Means of Multi-Cultural Picture Books in Taiwan

    Science.gov (United States)

    Wu, Jia-Fen

    2017-01-01

    In a pluralistic society, learning about foreign cultures is an important goal in the kind of multi-cultural education that will lead to cultural competency. This study adopted a qualitative dominant mixed-method approach to examine the effectiveness of the multi-cultural picture books on: (1) students' achieving awareness towards cultural…

  1. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant to the global carbon cycle. Thousands of manipulative field experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw sensor data from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  2. A CDC 1700 on-line system for the analysis, data logging and monitoring of big bubble chamber pictures

    International Nuclear Information System (INIS)

    Guyonnet, J.-L.

    1975-01-01

    This work presents the system for analyzing pictures from large bubble chambers such as Gargamelle and BEBC, developed in the heavy-liquid bubble chamber group, with scanning and measurement stations on-line to a CDC 1700 computer. It deals with the general characteristics of these stations and of the computer, and puts emphasis on the design and functions of the analysis programmes: scanning, measurement and data processing. The data acquisition system runs in a real-time multiprogramming context. [fr]

  3. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...

  4. Targeted Learning in Healthcare Research.

    Science.gov (United States)

    Gruber, Susan

    2015-12-01

    The increasing availability of Big Data in healthcare encourages investigators to seek answers to big questions. However, nonparametric approaches to analyzing these data can suffer from the curse of dimensionality, and traditional parametric modeling does not necessarily scale. Targeted learning (TL) combines semiparametric methodology with advanced machine learning techniques to provide a sound foundation for extracting information from data. Predictive models, variable importance measures, and treatment benefits and risks can all be addressed within this framework. TL has been applied in a broad range of healthcare settings, including genomics, precision medicine, health policy, and drug safety. This article provides an introduction to the two main components of TL, targeted minimum loss-based estimation and super learning, and gives examples of applications in predictive modeling, variable importance ranking, and comparative effectiveness research.
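
    The super learning component can be illustrated with a small stacking example; this is a conceptual sketch on simulated data (the targeted minimum loss-based estimation step is not shown), using generic candidate learners rather than any particular TL software.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

        # A library of candidate learners combined through cross-validation.
        super_learner = StackingClassifier(
            estimators=[("logit", LogisticRegression(max_iter=1000)),
                        ("forest", RandomForestClassifier(n_estimators=200, random_state=0))],
            final_estimator=LogisticRegression(max_iter=1000),
            cv=5,  # cross-validation guards against favouring an overfit candidate
        )

        print(cross_val_score(super_learner, X, y, cv=5).mean())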

  5. Ocean Networks Canada's "Big Data" Initiative

    Science.gov (United States)

    Dewey, R. K.; Hoeberechts, M.; Moran, K.; Pirenne, B.; Owens, D.

    2013-12-01

    Ocean Networks Canada operates two large undersea observatories that collect, archive, and deliver data in real time over the Internet. These data contribute to our understanding of the complex changes taking place on our ocean planet. Ocean Networks Canada's VENUS was the world's first cabled seafloor observatory to enable researchers anywhere to connect in real time to undersea experiments and observations. Its NEPTUNE observatory is the largest cabled ocean observatory, spanning a wide range of ocean environments. Most recently, we installed a new small observatory in the Arctic. Together, these observatories deliver "Big Data" across many disciplines in a cohesive manner using the Oceans 2.0 data management and archiving system that provides national and international users with open access to real-time and archived data while also supporting a collaborative work environment. Ocean Networks Canada operates these observatories to support science, innovation, and learning in four priority areas: study of the impact of climate change on the ocean; the exploration and understanding the unique life forms in the extreme environments of the deep ocean and below the seafloor; the exchange of heat, fluids, and gases that move throughout the ocean and atmosphere; and the dynamics of earthquakes, tsunamis, and undersea landslides. To date, the Ocean Networks Canada archive contains over 130 TB (collected over 7 years) and the current rate of data acquisition is ~50 TB per year. This data set is complex and diverse. Making these "Big Data" accessible and attractive to users is our priority. In this presentation, we share our experience as a "Big Data" institution where we deliver simple and multi-dimensional calibrated data cubes to a diverse pool of users. Ocean Networks Canada also conducts extensive user testing. Test results guide future tool design and development of "Big Data" products. We strive to bridge the gap between the raw, archived data and the needs and

  6. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  7. Manipulating affective state using extended picture presentations.

    Science.gov (United States)

    Sutton, S K; Davidson, R J; Donzella, B; Irwin, W; Dottl, D A

    1997-03-01

    Separate, extended series of positive, negative, and neutral pictures were presented to 24 (12 men, 12 women) undergraduates. Each series was presented on a different day, with full counterbalancing of presentation orders. Affective state was measured using (a) orbicularis oculi activity in response to acoustic startle probes during picture presentation, (b) corrugator supercilii activity between and during picture presentation, and (c) changes in self-reports of positive and negative affect. Participants exhibited larger eyeblink reflex magnitudes when viewing negative than when viewing positive pictures. Corrugator activity was also greater during the negative than during the positive picture set, during both picture presentation and the period between pictures. Self-reports of negative affect increased in response to the negative picture set, and self-reports of positive affect were greatest following the positive picture set. These findings suggest that extended picture presentation is an effective method of manipulating affective state and further highlight the utility of startle probe and facial electromyographic measures in providing on-line readouts of affective state.

  8. Facilitation and interference in naming: A consequence of the same learning process?

    Science.gov (United States)

    Hughes, Julie W; Schnur, Tatiana T

    2017-08-01

    Our success with naming depends on what we have named previously, a phenomenon thought to reflect learning processes. Repeatedly producing the same name facilitates language production (i.e., repetition priming), whereas producing semantically related names hinders subsequent performance (i.e., semantic interference). Semantic interference is found whether naming categorically related items once (continuous naming) or multiple times (blocked cyclic naming). A computational model suggests that the same learning mechanism responsible for facilitation in repetition creates semantic interference in categorical naming (Oppenheim, Dell, & Schwartz, 2010). Accordingly, we tested the predictions that variability in semantic interference is correlated across categorical naming tasks and is caused by learning, as measured by two repetition priming tasks (picture-picture repetition priming, Exp. 1; definition-picture repetition priming, Exp. 2, e.g., Wheeldon & Monsell, 1992). In Experiment 1 (77 subjects) semantic interference and repetition priming effects were robust, but the results revealed no relationship between semantic interference effects across contexts. Critically, learning (picture-picture repetition priming) did not predict semantic interference effects in either task. We replicated these results in Experiment 2 (81 subjects), finding no relationship between semantic interference effects across tasks or between semantic interference effects and learning (definition-picture repetition priming). We conclude that the changes underlying facilitatory and interfering effects inherent to lexical access are the result of distinct learning processes where multiple mechanisms contribute to semantic interference in naming. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Tweeting about Mental Health: Big Data Text Analysis of Twitter for Public Policy

    Science.gov (United States)

    Zaydman, Mikhail

    2017-01-01

    This dissertation examines conversations and attitudes about mental health in Twitter discourse. The research uses big data collection, machine learning classification, and social network analysis to answer the following questions (1) what mental health topics do people discuss on Twitter? (2) Have patterns of conversation changed over time? Have…
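
    As a toy illustration of the classification step (not the dissertation's actual pipeline), the sketch below trains a supervised text classifier to separate mental-health-related messages from other content; the example tweets and labels are invented.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        tweets = [
            "feeling really anxious before my exam tomorrow",
            "therapy session went well today, grateful for the support",
            "new phone arrived, the camera is amazing",
            "traffic on the highway is terrible this morning",
        ]
        labels = ["mental_health", "mental_health", "other", "other"]

        # TF-IDF features feed a simple linear classifier.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(tweets, labels)

        print(model.predict(["can't sleep, the stress is getting worse"]))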

  10. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  11. Modern technologies of e-learning

    Directory of Open Access Journals (Sweden)

    G. A. gyzy Mamedova

    2017-01-01

    Full Text Available E-learning constitutes significant competition to traditional education in many countries and has become a major tool for the modernization of education and economic growth. For the development and implementation of successful e-learning systems, we need technologies that can serve any number of users while providing a good learning environment. The article provides an overview of the technologies used in foreign universities for managing e-learning, such as 3D technologies in training programs, interactive technologies, and personalization of learning using cloud computing and big data technologies. It is shown that a large number of software and hardware solutions have already been created and introduced, implementing various mechanisms for bringing information technology into the educational process. One such development is the use of adaptive technologies in the learning process, allowing the learning process to be adapted to the student, with a suitable method of mastering the material chosen and the intensity of training adjusted at different stages of the learning process. Another application of information technologies in education is the use of cloud computing, which gives teachers, students, and managers of the education system access to educational resources. It was revealed that the use of cloud technologies leads to a significant decrease in material costs for the purchase of expensive equipment and software; educational content in the cloud can be accessed from any device (laptop, smartphone, tablet, etc.) at a time convenient for the learner, requiring only an Internet connection and a browser. In the e-learning environment there are many different types of data, both structured and unstructured, whose processing is difficult to implement using traditional statistical methods. To process such data, big data technologies such as NoSQL and Hadoop are used. The article shows that the
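
    As a toy illustration of the map/reduce pattern that frameworks such as Hadoop apply at scale to unstructured learning data, the sketch below counts learning events by type in plain Python; the events are invented and the distributed runtime is omitted.

        from collections import Counter
        from itertools import chain

        events = [
            {"student": "s1", "type": "video_view"},
            {"student": "s2", "type": "quiz_attempt"},
            {"student": "s1", "type": "quiz_attempt"},
            {"student": "s3", "type": "video_view"},
        ]

        def mapper(event):
            yield (event["type"], 1)              # emit one key/value pair per event

        def reducer(pairs):
            counts = Counter()
            for key, value in pairs:
                counts[key] += value              # sum the values for each key
            return dict(counts)

        print(reducer(chain.from_iterable(mapper(e) for e in events)))
        # {'video_view': 2, 'quiz_attempt': 2}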

  12. From machine learning to deep learning: progress in machine intelligence for rational drug discovery.

    Science.gov (United States)

    Zhang, Lu; Tan, Jianjun; Han, Dan; Zhu, Hao

    2017-11-01

    Machine intelligence, which is normally presented as artificial intelligence, refers to the intelligence exhibited by computers. In the history of rational drug discovery, various machine intelligence approaches have been applied to guide traditional experiments, which are expensive and time-consuming. Over the past several decades, machine-learning tools, such as quantitative structure-activity relationship (QSAR) modeling, were developed that can identify potential biological active molecules from millions of candidate compounds quickly and cheaply. However, when drug discovery moved into the era of 'big' data, machine learning approaches evolved into deep learning approaches, which are a more powerful and efficient way to deal with the massive amounts of data generated from modern drug discovery approaches. Here, we summarize the history of machine learning and provide insight into recently developed deep learning approaches and their applications in rational drug discovery. We suggest that this evolution of machine intelligence now provides a guide for early-stage drug design and discovery in the current big data era. Copyright © 2017 Elsevier Ltd. All rights reserved.
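
    A minimal QSAR-style sketch of the classical machine-learning end of this spectrum is given below; the molecular descriptors and activity values are invented, and a real model would be trained on thousands of compounds with far richer descriptors.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Columns: molecular weight, logP, H-bond donors, H-bond acceptors (invented values).
        descriptors = np.array([
            [180.2, 1.2, 2, 4],
            [250.3, 2.8, 1, 3],
            [310.4, 3.5, 0, 5],
            [150.1, 0.4, 3, 2],
            [410.5, 4.9, 1, 6],
        ])
        activity = np.array([6.1, 7.2, 7.9, 5.4, 8.3])   # e.g. hypothetical pIC50 values

        model = GradientBoostingRegressor(random_state=0).fit(descriptors, activity)
        print(model.predict([[275.0, 3.0, 1, 4]]))        # predicted activity for a new compound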

  13. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  14. Interpreting Evidence-of-Learning: Educational Research in the Era of Big Data

    Science.gov (United States)

    Cope, Bill; Kalantzis, Mary

    2015-01-01

    In this article, we argue that big data can offer new opportunities and roles for educational researchers. In the traditional model of evidence-gathering and interpretation in education, researchers are independent observers, who pre-emptively create instruments of measurement, and insert these into the educational process in specialized times and…

  15. 3D Fourier synthesis of a new X-ray picture identical in projection to a previous picture

    International Nuclear Information System (INIS)

    Carlsson, P.E.

    1993-01-01

    A central problem in diagnostic radiology is to compare a new X-ray picture with a previous picture and, from this comparison, decide whether anatomical changes have occurred in the patient. It is of primary interest that these pictures are identical in projection; if not, it is difficult to decide with confidence whether differences between the pictures are due to anatomical changes or to differences in their projection geometry. In this thesis we present a non-invasive method that makes it possible to find the relative changes in the projection geometry between the exposure of a previous picture and a new picture. The method presented is based on the projection slice theorem (central section theorem). Instead of an elaborate search for a single new picture, a pre-planned set of pictures is exposed from a circular orbit above the patient. By using 3D Fourier transform techniques we are able to synthesize, from this set of pictures, a new X-ray picture that is identical in projection to the previous one. The method has certain limits: (i) the X-ray focus position must always be at a fixed distance from the image plane, and (ii) the object may only be translated parallel to the image plane and rotated around axes perpendicular to this plane. Under these restrictions, we may treat divergent projection pictures as if they were generated by a parallel projection of a scaled object. The unknown rotation and translation of the object in the previous case are both retrieved in two different procedures and compensated for. Experiments on synthetic data have shown that the method works even in the presence of severe noise.
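
    For reference, the standard two-dimensional form of the projection slice theorem that the method relies on can be stated as follows (generic notation, not the thesis's own):

        % The 1-D Fourier transform of a parallel projection p_theta(t) of f(x,y)
        % equals a central slice of the 2-D Fourier transform F(u,v) of f.
        \[
          P_\theta(\omega)
            = \int_{-\infty}^{\infty} p_\theta(t)\, e^{-i\omega t}\, dt
            = F(\omega\cos\theta,\ \omega\sin\theta).
        \]

    In the parallel-projection regime assumed above, the synthesis described in the abstract can be understood as assembling the central section corresponding to the desired projection direction from the Fourier data of the exposed picture set.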

  16. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    Science.gov (United States)

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  17. Big data in medical science--a biostatistical view.

    Science.gov (United States)

    Binder, Harald; Blettner, Maria

    2015-02-27

    Inexpensive techniques for measurement and data storage now enable medical researchers to acquire far more data than can conveniently be analyzed by traditional methods. The expression "big data" refers to quantities on the order of magnitude of a terabyte (10^12 bytes); special techniques must be used to evaluate such huge quantities of data in a scientifically meaningful way. Whether data sets of this size are useful and important is an open question that currently confronts medical science. In this article, we give illustrative examples of the use of analytical techniques for big data and discuss them in the light of a selective literature review. We point out some critical aspects that should be considered to avoid errors when large amounts of data are analyzed. Machine learning techniques enable the recognition of potentially relevant patterns. When such techniques are used, certain additional steps should be taken that are unnecessary in more traditional analyses; for example, patient characteristics should be differentially weighted. If this is not done as a preliminary step before similarity detection, which is a component of many data analysis operations, characteristics such as age or sex will be weighted no higher than any one out of 10 000 gene expression values. Experience from the analysis of conventional observational data sets can be called upon to draw conclusions about potential causal effects from big data sets. Big data techniques can be used, for example, to evaluate observational data derived from the routine care of entire populations, with clustering methods used to analyze therapeutically relevant patient subgroups. Such analyses can provide complementary information to clinical trials of the classic type. As big data analyses become more popular, various statistical techniques for causality analysis in observational data are becoming more widely available. This is likely to be of benefit to medical science, but specific adaptations will
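
    To make the weighting point concrete, the following sketch (synthetic data and a hypothetical weight, not taken from the article) up-weights a few clinical covariates so that they are not drowned out by thousands of gene expression values when computing patient-to-patient similarity:

        import numpy as np

        rng = np.random.default_rng(0)
        n_patients, n_genes = 100, 10_000

        clinical = np.column_stack([rng.integers(40, 90, n_patients),   # age
                                    rng.integers(0, 2, n_patients)])    # sex
        expression = rng.normal(size=(n_patients, n_genes))             # gene expression values

        def zscore(x):
            return (x - x.mean(axis=0)) / x.std(axis=0)

        clinical_weight = 50.0   # hypothetical choice: one clinical column outweighs any single gene
        features = np.hstack([clinical_weight * zscore(clinical), zscore(expression)])

        distances = np.linalg.norm(features - features[0], axis=1)      # distance of patient 0 to all others
        print(distances.argsort()[1:6])                                  # the five most similar patients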

  18. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology.

    Science.gov (United States)

    Salazar, Brittany M; Balczewski, Emily A; Ung, Choong Yong; Zhu, Shizhen

    2016-12-27

    Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, due to limited detected driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring "big data" applications in pediatric oncology. Computational strategies derived from big data science (network- and machine learning-based modeling and drug repositioning) hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input, from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which "big data" and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases.

  19. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology

    Directory of Open Access Journals (Sweden)

    Brittany M. Salazar

    2016-12-01

    Full Text Available Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, due to limited detected driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring “big data” applications in pediatric oncology. Computational strategies derived from big data science–network- and machine learning-based modeling and drug repositioning—hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input, from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which “big data” and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases.

  20. Energy management: the big picture

    International Nuclear Information System (INIS)

    Vesma, Vilnis.

    1997-01-01

    Since the recent dramatic fall in energy prices may have come to an end, energy managers will have to turn to a range of non-price cost reduction techniques. A framework to aid this process is provided. It rests on ten categories of activity. These are: obtaining a refund; negotiating cheaper tariffs; modifying patterns of demand; inspection and maintenance; operating practices; training, awareness and motivation; waste avoidance; retrofit technology; modifying plant and equipment; energy-efficient design. (UK)

  1. Asset management: the big picture.

    Science.gov (United States)

    Deinstadt, Deborah C

    2005-10-01

    To develop a comprehensive asset management plan, you need, first of all, to understand the asset management continuum. A key preliminary step is to thoroughly assess the existing equipment base. A critical objective is to ensure that there are open lines of communication among the teams charged with managing the plan's various phases.

  2. Connecting with the Big Picture

    Science.gov (United States)

    Brophy, Jere

    2009-01-01

    This article concludes the special issue on identity and motivation by discussing the five preceding contributions. It identifies strengths and limitations in each article and places them within a larger context, indicating ways that the authors could broaden the scope of their inquiries by breaking free of existing limitations or adding…

  3. Students Mental Representation of Biology Diagrams/Pictures Conventions Based on Formation of Causal Network

    Science.gov (United States)

    Sampurno, A. W.; Rahmat, A.; Diana, S.

    2017-09-01

    Diagram/picture conventions are a form of visual media often used to help students understand biological concepts, and the effectiveness of diagrams/pictures in school-level biology learning has been widely reported. This study examines the ability of high school students to read conventional biology diagrams/pictures, described in terms of mental representation (MR) based on the formation of causal networks. The study involved 30 11th-grade MIA students at a senior high school in Banten, Indonesia, who were studying the excretory system. MR data were obtained with a worksheet instrument, developed on the basis of the CNET protocol, containing diagrams/drawings of nephron structure and the urinary mechanism. Three MR patterns were formed: a Markov chain, feedback control with a single measurement, and repeated feedback control with multiple measurements. The third pattern was the most common; differences in MR pattern reveal differences in how, and from which point, students begin to uncover the important information contained in the diagram in order to establish a causal network. Further analysis shows that differences in MR pattern relate to the complexity with which students process the information contained in the diagrams/pictures.

  4. Gaze differences in processing pictures with emotional content.

    Science.gov (United States)

    Budimir, Sanja; Palmović, Marijan

    2011-01-01

    The International Affective Picture System (IAPS) is a set of standardized emotionally evocative color photographs developed by the NIMH Center for Emotion and Attention at the University of Florida. It contains more than 900 emotional pictures indexed by emotional valence, arousal and dominance. However, when IAPS pictures were used to study emotions with event-related potentials, the results showed a great deal of variation and inconsistency. In this research, arousal and dominance of the pictures were controlled while emotional valence was manipulated across three categories: pleasant, neutral and unpleasant pictures. Two experiments were conducted with an eye-tracker in order to determine where participants direct their gaze. Participants were 25 psychology students with normal vision. Every participant saw all pictures in color and the same pictures in a black-and-white version, yielding 200 analyzed units for color pictures and 200 for black-and-white pictures. Every picture was divided into figure and ground. Because perception can be influenced by color, edges, luminosity and contrast, and since all these factors are collapsed together in the IAPS pictures, we compared the color pictures with their black-and-white counterparts. In the first experiment we analyzed 12 emotional pictures and showed that participants had a higher number of fixations on the ground for neutral and unpleasant pictures and on the figure for pleasant pictures. The second experiment was conducted with 4 sets of emotionally complementary pictures (pleasant/unpleasant) that differ only in the content of the figure area; it showed that participants focused more on the figure area than on the ground area. Future ERP (event-related potential) research with IAPS pictures should take these findings into consideration and either choose pictures with a blank ground or adjust pictures so that the ground is blank. For subsequent experiments, the suggestion is to put the emotional content in the figure

  5. How to use Big Data technologies to optimize operations in Upstream Petroleum Industry

    Directory of Open Access Journals (Sweden)

    Abdelkader Baaziz

    2013-12-01

    Full Text Available “Big Data is the oil of the new economy” has been the most famous quotation of the last three years; it was even adopted by the World Economic Forum in 2011. In fact, Big Data is like crude: it is valuable, but if unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about the Big Data generated by the petroleum industry, and particularly by its upstream segment? Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development. Oil & gas companies conduct advanced geophysical modeling and simulation to support operations, where 2D, 3D and 4D seismic generate significant data during exploration phases. They closely monitor the performance of their operational assets. To do this, they use tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous and real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in various and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron’s internal IT traffic alone exceeds 1.5 terabytes a day. Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations. Another invaluable effect would be shared learning. The aim of this paper is to explain how to use Big Data technologies to optimize operations. How can Big Data help experts make decisions that lead to the desired outcomes? Keywords: Big Data; Analytics

  6. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  7. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  8. Revisiting the picture-superiority effect in symbolic comparisons: do pictures provide privileged access?

    Science.gov (United States)

    Amrhein, Paul C; McDaniel, Mark A; Waddill, Paula

    2002-09-01

    In 4 experiments, symbolic comparisons were investigated to test semantic-memory retrieval accounts espousing processing advantages for picture over word stimuli. In Experiment 1, participants judged pairs of animal names or pictures by responding to questions probing concrete or abstract attributes (texture or size, ferocity or intelligence). Per pair, attributes were salient or nonsalient concerning their prerated relevance to the animals being compared. Distance (near or far) between attribute magnitudes was also varied. Pictures did not significantly speed responding relative to words across all other variables. Advantages were found for far attribute magnitudes (i.e., the distance effect) and salient attributes. The distance effect was much less for salient than nonsalient concrete-attribute comparisons. These results were consistently found in additional experiments with increased statistical power to detect modality effects. Our findings argue against dual-coding and some common-code accounts of conceptual attribute processing, urging reexamination of the assumption that pictures confer privileged access to long-term knowledge.

  9. Effects of arousal and context on recognition memory for emotional pictures in younger and older adults

    Science.gov (United States)

    Wang, Yang; Yang, Jiongjiong

    2017-01-01

    Background/Study context: Previous studies found that older adults tend to remember more positive than negative information (i.e., positivity bias), leading to an age-related positivity effect. However, the extent to which factors of arousal and contextual information influence the positivity bias in older adults remains to be determined. Methods: In this study, 27 Chinese younger adults (20.00 ± 1.75 years) and 33 Chinese older adults (70.76 ± 5.49 years) learned pictures with negative, positive and neutral valences. Half of the pictures had a human context, and the other half did not. In addition, emotional dimensions of negative and positive pictures were divided into high-arousal and low-arousal. The experimental task was to provide old/new recognition and confidence rating judgments. Results: Both groups of subjects showed the positivity bias for low-arousal pictures, but the positivity bias was restricted to low-arousal pictures without the human context in older adults. In addition, the positivity bias was mainly driven by the recollection process in younger adults, and it was mainly driven by both the recollection and familiarity processes in older adults. The recognition of the nonhuman positive pictures was correlated with cognitive control abilities, but the recognition of pictures with human contexts was correlated with general memory abilities in older adults. Conclusion: This study highlights the importance of arousal and contextual information in modulating emotional memory in younger and older adults. It suggests that there are different mechanisms for memorizing pictures with and without human contexts in older adults. PMID:28230422

  10. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  11. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  12. Big Data and Data Science: Opportunities and Challenges of iSchools

    Directory of Open Access Journals (Sweden)

    Il-Yeol Song

    2017-08-01

    Full Text Available Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact the society and future jobs, and thus student careers. At the heart of this digital transformation is data science, the discipline that makes sense of big data. With many rapidly emerging digital challenges ahead of us, this article discusses perspectives on iSchools’ opportunities and suggestions in data science education. We argue that iSchools should empower their students with “information computing” disciplines, which we define as the ability to solve problems and create values, information, and knowledge using tools in application domains. As specific approaches to enforcing information computing disciplines in data science education, we suggest the three foci of user-based, tool-based, and application-based. These three foci will serve to differentiate the data science education of iSchools from that of computer science or business schools. We present a layered Data Science Education Framework (DSEF) with building blocks that include the three pillars of data science (people, technology, and data), computational thinking, data-driven paradigms, and data science lifecycles. Data science courses built on top of this framework should thus be executed with user-based, tool-based, and application-based approaches. This framework will help our students think about data science problems from the big picture perspective and foster appropriate problem-solving skills in conjunction with broad perspectives of data science lifecycles. We hope the DSEF discussed in this article will help fellow iSchools in their design of new data science curricula.

  13. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  14. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  15. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  16. The design of instructional tools affects secondary school students' learning of cardiopulmonary resuscitation (CPR) in reciprocal peer learning: a randomized controlled trial.

    Science.gov (United States)

    Iserbyt, Peter; Byra, Mark

    2013-11-01

    Research investigating design effects of instructional tools for learning Basic Life Support (BLS) is almost non-existent. The aim was to demonstrate that the design of instructional tools matters. The effect of spatial contiguity, a design principle stating that people learn more deeply when words and corresponding pictures are placed close to (i.e., integrated with) rather than far from each other on a page, was investigated on task cards for learning Cardiopulmonary Resuscitation (CPR) during reciprocal peer learning. The study was a randomized controlled trial. A total of 111 students (mean age: 13 years) constituting six intact classes learned BLS through reciprocal learning with task cards. Task cards combine a picture of the skill with written instructions about how to perform it. In each class, students were randomly assigned to the experimental group or the control. In the control, written instructions were placed under the picture on the task cards. In the experimental group, written instructions were placed close to the corresponding part of the picture on the task cards, reflecting application of the spatial contiguity principle. One-way analysis of variance found significantly better performances in the experimental group for ventilation volumes (P = .03, ηp² = .10) and flow rates (P = .02, ηp² = .10). For chest compression depth, compression frequency, compressions with correct hand placement, and duty cycles, no significant differences were found. This study shows that the design of instructional tools (i.e., task cards) affects student learning. Research-based design of learning tools can enhance BLS and CPR education. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
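
    For readers who want to reproduce the style of analysis reported above, a toy one-way ANOVA on two simulated groups (hypothetical ventilation volumes, not the study's data) can be run with SciPy:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        integrated = rng.normal(500, 80, 55)   # simulated volumes (ml), integrated task cards
        separated = rng.normal(450, 80, 56)    # simulated volumes (ml), control task cards

        f_value, p_value = stats.f_oneway(integrated, separated)   # one-way ANOVA with two groups
        print(f"F = {f_value:.2f}, p = {p_value:.3f}")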

  17. Deep Ecology: Educational Possibilities for the Twenty-First Century

    Science.gov (United States)

    Capra, Fritjof

    2013-01-01

    Fritjof Capra's two-part lecture presents the fundamentals of systems thinking and sustainability along with the power of an ecologically comprehensive theory to shape education to fit the needs of human development in relation to the environment. Dr. Capra aims for the big picture emphasizing that effective learning is a system embedded in the…

  18. The Geneva Smoking Pictures: development and preliminary validation.

    OpenAIRE

    Khazaal, Yasser; Zullino, Daniele; Billieux, Joël

    2012-01-01

    Cue reactivity is essential to the maintenance of addictive disorders. A useful way to study cue reactivity is by means of normative pictures, but few validated tobacco-related pictures are available. This study describes a database of smoking-related pictures: The Geneva Smoking Pictures (GSP). Sixty smoking-related pictures were presented to 91 participants who assessed them according to the classic emotional pictures validation provided by the International Affective Picture System (NIMH C...

  19. Personality and achievement motivation : relationship among Big Five domain and facet scales, achievement goals, and intelligence

    NARCIS (Netherlands)

    Bipp, T.; Steinmayr, R.; Spinath, B.

    2008-01-01

    In the present study we examined the nomological network of achievement motivation and personality by inspecting the relationships between four goal orientations (learning, performance-approach, performance-avoidance, work avoidance), the Big Five personality traits, and intelligence. Within a

  20. Focus of Attention and Choice of Text Modality in Multimedia Learning

    Science.gov (United States)

    Schnotz, Wolfgang; Mengelkamp, Christoph; Baadte, Christiane; Hauck, Georg

    2014-01-01

    The term "modality effect" in multimedia learning means that students learn better from pictures combined with spoken rather than written text. The most prominent explanations refer to the split attention between visual text reading and picture observation which could affect transfer of information into working memory, maintenance of…

  1. The picture superiority effect in associative recognition.

    Science.gov (United States)

    Hockley, William E

    2008-10-01

    The picture superiority effect has been well documented in tests of item recognition and recall. The present study shows that the picture superiority effect extends to associative recognition. In three experiments, students studied lists consisting of random pairs of concrete words and pairs of line drawings; then they discriminated between intact (old) and rearranged (new) pairs of words and pictures at test. The discrimination advantage for pictures over words was seen in a greater hit rate for intact picture pairs, but there was no difference in the false alarm rates for the two types of stimuli. That is, there was no mirror effect. The same pattern of results was found when the test pairs consisted of the verbal labels of the pictures shown at study (Experiment 4), indicating that the hit rate advantage for picture pairs represents an encoding benefit. The results have implications for theories of the picture superiority effect and models of associative recognition.

  2. Pictures, images, and recollective experience.

    Science.gov (United States)

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  3. Text-Picture Relations in Cooking Instructions

    NARCIS (Netherlands)

    van der Sluis, Ielka; Leito, Shadira; Redeker, Gisela; Bunt, Harry

    2016-01-01

    Like many other instructions, recipes on packages with ready-to-use ingredients for a dish combine a series of pictures with short text paragraphs. The information presentation in such multimodal instructions can be compact (either text or picture) and/or cohesive (text and picture). In an

  4. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach.

    Science.gov (United States)

    Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy

    2016-03-01

    Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of
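
    A minimal scikit-learn sketch of this kind of comparison (synthetic data standing in for the roughly 500 EHR variables; this is not the study's actual model) fits a random forest and a logistic regression on an 80%/20% split and compares validation AUCs:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5278, 500))                                     # stand-in for ~500 clinical variables
        y = (X[:, :5].sum(axis=1) + rng.normal(size=5278) > 2).astype(int)   # synthetic mortality label

        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

        for name, model in [("random forest", rf), ("logistic regression", lr)]:
            auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")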

  5. Dynamic Influence of Emotional States on Novel Word Learning

    Science.gov (United States)

    Guo, Jingjing; Zou, Tiantian; Peng, Danling

    2018-01-01

    Many researchers realize that it is unrealistic to isolate language learning and processing from emotions. However, few studies on language learning have taken emotions into consideration so far, so the probable influences of emotions on language learning remain unclear. The current study therefore aimed to examine the effects of emotional states on novel word learning and their dynamic changes as learning continued and tasks varied. Positive, negative or neutral pictures were employed to induce a given emotional state, and then participants learned the novel words through association with line-drawing pictures in four successive learning phases. At the end of each learning phase, participants were instructed to fulfill a semantic category judgment task (in Experiment 1) or a word-picture semantic consistency judgment task (in Experiment 2) to explore the effects of emotional states on different depths of word learning. Converging results demonstrated that a negative emotional state led to worse performance compared with the neutral condition; however, how a positive emotional state affected learning varied with the learning task. Specifically, a facilitative role of positive emotional state in semantic category learning was observed but disappeared in word-specific meaning learning. Moreover, the emotional modulation of novel word learning was quite dynamic and changed as learning continued, and the final attainment of the learned words tended to be similar under different emotional states. The findings suggest that the impact of emotion can be offset as novel words become more and more familiar and part of the existing lexicon. PMID:29695994

  6. The Spectator in the Picture

    OpenAIRE

    Hopkins, Robert

    2001-01-01

    This paper considers whether pictures ever implicitly represent internal spectators of the scenes they depict, and what theoretical construal to offer of their doing so. Richard Wollheim's discussion (Painting as an Art, ch.3) is taken as the most sophisticated attempt to answer these questions. I argue that Wollheim does not provide convincing argument for his claim that some pictures implicitly represent an internal spectator with whom the viewer of the picture is to imaginatively identify....

  7. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  8. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. The existing definitions of the term “big data” are analyzed. The article proposes and describes the elements of a generalized formal model of big data, analyzes the peculiarities of applying the proposed model components, and describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  9. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R-trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
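
    As an illustration of the remote-access pattern described above (assuming the third-party pyBigWig library and a placeholder URL, neither of which comes from the record), a client can pull chromosome sizes, zoom-level summaries and a small window of values without downloading the whole file:

        import pyBigWig

        bw = pyBigWig.open("http://example.org/coverage.bw")    # placeholder URL, not a real dataset

        print(bw.chroms())                                      # chromosome sizes stored in the index
        print(bw.stats("chr1", 0, 1_000_000, type="mean"))      # zoom-level summary, no full download
        print(bw.values("chr1", 0, 10))                         # base-level values for a small window

        bw.close()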

  10. A general picture of the learning communities: characteristics, similarities and differences.

    NARCIS (Netherlands)

    Verkleij, K.A.M.; Francke, A.L.; Voordouw, I.; Albers, M.; Gobbens, R.J.J.

    2016-01-01

    Background: Because learning communities of community care nurses and nursing lecturers are a new phenomenon, it is of interest to evaluate and monitor the learning communities. The Netherlands Institute for Health Services Research, NIVEL, was commissioned to monitor the realization of the learning

  11. Machine learning on geospatial big data

    CSIR Research Space (South Africa)

    Van Zyl, T

    2014-02-01

    Full Text Available When trying to understand the difference between machine learning and statistics, it is important to note that it is not so much the set of techniques and theory that are used but more importantly the intended use of the results. In fact, many...

  12. Exploring Multicultural Themes through Picture Books.

    Science.gov (United States)

    Farris, Pamela J.

    1995-01-01

    Advocates inclusion of multicultural picture books in social studies instruction to offer different outlooks and visions in a short format. Describes selection of picture books with multicultural themes and those that represent various cultures, gender equity, and religious themes. Suggests that picture books may help students develop better…

  13. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    Science.gov (United States)

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  14. Big Data Perspective and Challenges in Next Generation Networks

    Directory of Open Access Journals (Sweden)

    Kashif Sultan

    2018-06-01

    Full Text Available With the development towards the next generation of cellular networks, i.e., 5G, the focus has shifted towards meeting higher data rate requirements and exploiting the potential of micro cells and the millimeter wave spectrum. The goals for next generation networks are very high data rates, low latency and the handling of big data. The achievement of these goals will definitely require newer architecture designs, upgraded technologies with possible backward support, better security algorithms and intelligent decision-making capability. In this survey, we identify the opportunities that can be provided by 5G networks and discuss the underlying challenges towards the implementation and realization of the goals of 5G. This survey also provides a discussion of the recent developments made towards standardization, the architectures that may be potential candidates for deployment and the energy concerns in 5G networks. Finally, the paper presents a big data perspective and the potential of machine learning for optimization and decision making in 5G networks.

  15. Ubiquitous picture-rich content representation

    Science.gov (United States)

    Wang, Wiley; Dean, Jennifer; Muzzolini, Russ

    2010-02-01

    The number of digital images taken by the average consumer is steadily increasing. People enjoy the convenience of storing and sharing their pictures through online (digital) and offline (traditional) media. A set of pictures can be uploaded to online photo services, web blogs and social network websites. Alternatively, these images can be used to generate prints, cards, photo books or other photo products. Through uploading and sharing, images are easily transferred from one format to another, and often a different set of associated content (text, tags) is created across formats. For example, on his web blog, a user may journal his experiences of his recent travel; on his social network website, his friends tag and comment on the pictures; in his online photo album, some pictures are titled and keyword-tagged. When the user wants to tell a complete story, perhaps in a photo book, he must collect, across all formats, the pictures, writings, comments, etc., and organize them in a book format. The user has to arrange the content of his trip in each format. The arrangement, the associations between the images, tags, keywords and text, cannot be shared with other formats. In this paper, we propose a system that allows content to be easily created and shared across various digital media formats. We define a uniform data association structure to connect images, documents, comments, tags, keywords and other data. This content structure allows the user to switch representation formats without reediting. The framework under each format can emphasize (display or hide) content elements based on preference. For example, a slide show view will emphasize the display of pictures with limited text; a blog view will display highlighted images and journal text; and the photo book will try to fit in all images and text content. In this paper, we will discuss the strategy to associate pictures with text content, so that it can naturally tell a story. We will also list
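
    A hypothetical sketch of such a uniform association structure (the names below are invented for illustration and are not the paper's API): one element ties a picture to its text, tags and comments, and each output format simply emphasizes different fields of the same data.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class ContentElement:
            image_uri: str
            title: str = ""
            journal_text: str = ""
            tags: List[str] = field(default_factory=list)
            comments: List[str] = field(default_factory=list)

        @dataclass
        class Story:
            elements: List[ContentElement]

            def as_slideshow(self) -> List[Tuple[str, str]]:
                # Emphasize pictures; show only short titles.
                return [(e.image_uri, e.title) for e in self.elements]

            def as_blog(self) -> List[Tuple[str, str]]:
                # Emphasize journal text, with the picture shown inline.
                return [(e.journal_text, e.image_uri) for e in self.elements]

        trip = Story([ContentElement("trip/001.jpg", "Day 1", "We arrived late...", ["travel"])])
        print(trip.as_slideshow())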

  16. Gradient phonological inconsistency affects vocabulary learning.

    Science.gov (United States)

    Muench, Kristin L; Creel, Sarah C

    2013-09-01

    Learners frequently experience phonologically inconsistent input, such as exposure to multiple accents. Yet, little is known about the consequences of phonological inconsistency for language learning. The current study examines vocabulary acquisition with different degrees of phonological inconsistency, ranging from no inconsistency (e.g., both talkers call a picture /vig/) to mild but detectable inconsistency (e.g., one talker calls a picture a /vig/, and the other calls it a /vIg/), up to extreme inconsistency (e.g., the same picture is both a /vig/ and a /dIdʒ/). Previous studies suggest that learners readily extract consistent phonological patterns, given variable input. However, in Experiment 1, adults acquired phonologically inconsistent vocabularies more slowly than phonologically consistent ones. Experiment 2 examined whether word-form inconsistency alone, without phonological competition, was a source of learning difficulty. Even without phonological competition, listeners learned faster in 1 accent than in 2 accents, but they also learned faster in 2 accents (/vig/ = /vIg/) than with completely different labels (/vig/ = /dIdʒ/). Overall, results suggest that learners exposed to multiple accents may experience difficulty learning when 2 forms mismatch by more than 1 phonological feature, plus increased phonological competition due to a greater number of word forms. Implications for learning from variable input are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  18. Supporting Imagers' VOICE: A National Training Program in Comparative Effectiveness Research and Big Data Analytics.

    Science.gov (United States)

    Kang, Stella K; Rawson, James V; Recht, Michael P

    2017-12-05

    Provided methodologic training, more imagers can contribute to the evidence basis on improved health outcomes and value in diagnostic imaging. The Value of Imaging Through Comparative Effectiveness Research Program was developed to provide hands-on, practical training in five core areas for comparative effectiveness and big biomedical data research: decision analysis, cost-effectiveness analysis, evidence synthesis, big data principles, and applications of big data analytics. The program's mixed format consists of web-based modules for asynchronous learning as well as in-person sessions for practical skills and group discussion. Seven diagnostic radiology subspecialties and cardiology are represented in the first group of program participants, showing the collective potential for greater depth of comparative effectiveness research in the imaging community. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. Psycho-informatics: Big Data shaping modern psychometrics.

    Science.gov (United States)

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

    For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  1. Quantifying Precision and Availability of Location Memory in Everyday Pictures and Some Implications for Picture Database Design

    Science.gov (United States)

    Lansdale, Mark W.; Oliff, Lynda; Baguley, Thom S.

    2005-01-01

    The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is…

  2. Domain-specific and domain-general constraints on word and sequence learning.

    Science.gov (United States)

    Archibald, Lisa M D; Joanisse, Marc F

    2013-02-01

    The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.

  3. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  4. Harnessing Big Data for Systems Pharmacology.

    Science.gov (United States)

    Xie, Lei; Draizen, Eli J; Bourne, Philip E

    2017-01-06

    Systems pharmacology aims to holistically understand mechanisms of drug actions to support drug discovery and clinical practice. Systems pharmacology modeling (SPM) is data driven. It integrates an exponentially growing amount of data at multiple scales (genetic, molecular, cellular, organismal, and environmental). The goal of SPM is to develop mechanistic or predictive multiscale models that are interpretable and actionable. The current explosions in genomics and other omics data, as well as the tremendous advances in big data technologies, have already enabled biologists to generate novel hypotheses and gain new knowledge through computational models of genome-wide, heterogeneous, and dynamic data sets. More work is needed to interpret and predict a drug response phenotype, which is dependent on many known and unknown factors. To gain a comprehensive understanding of drug actions, SPM requires close collaborations between domain experts from diverse fields and integration of heterogeneous models from biophysics, mathematics, statistics, machine learning, and semantic webs. This creates challenges in model management, model integration, model translation, and knowledge integration. In this review, we discuss several emergent issues in SPM and potential solutions using big data technology and analytics. The concurrent development of high-throughput techniques, cloud computing, data science, and the semantic web will likely allow SPM to be findable, accessible, interoperable, reusable, reliable, interpretable, and actionable.

  5. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  6. Cognitive components of picture naming.

    Science.gov (United States)

    Johnson, C J; Paivio, A; Clark, J M

    1996-07-01

    A substantial research literature documents the effects of diverse item attributes, task conditions, and participant characteristics on the ease of picture naming. The authors review what the research has revealed about 3 generally accepted stages of naming a pictured object: object identification, name activation, and response generation. They also show that dual coding theory gives a coherent and plausible account of these findings without positing amodal conceptual representations, and they identify issues and methods that may further advance the understanding of picture naming and related cognitive tasks.

  7. Examining lateralized semantic access using pictures.

    Science.gov (United States)

    Lovseth, Kyle; Atchley, Ruth Ann

    2010-03-01

    A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine if strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs. Participants made an on-line semantic relatedness judgment in response to sequentially presented pictures. We found that when pictures are presented to the right hemisphere, semantic relatedness judgments for picture pairs are generally more accurate than when they are presented to the left hemisphere. Furthermore, consistent with earlier DVF studies employing words, we conclude that the RH is better at accessing or maintaining access to information that has a weak or more remote semantic relationship. We also found evidence of faster access for pictures presented to the LH in the strongly-related condition. Overall, these results are consistent with earlier DVF word studies that argue that the cerebral hemispheres each play an important and separable role during semantic retrieval. Copyright 2009 Elsevier Inc. All rights reserved.

  8. TreePics: visualizing trees with pictures

    Directory of Open Access Journals (Sweden)

    Nicolas Puillandre

    2017-09-01

While many programs are available to edit phylogenetic trees, associating pictures with branch tips in an efficient and automatic way is not an available option. Here, we present TreePics, a standalone software package that uses a web browser to visualize phylogenetic trees in Newick format and that associates pictures (typically, pictures of the voucher specimens) with the tip of each branch. Pictures are visualized as thumbnails and can be enlarged by a mouse rollover. Further, several pictures can be selected and displayed in a separate window for visual comparison. TreePics works either online or in a full standalone version, where it can display trees with several thousand pictures (depending on the memory available). We argue that TreePics can be particularly useful in a preliminary stage of research, for example to quickly detect conflicts between a DNA-based phylogenetic tree and morphological variation, which may be due to contamination that needs to be removed prior to final analyses, or to the presence of species complexes.
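
    The core idea behind such a tool (pairing each terminal taxon of a Newick tree with an image file) can be sketched in a few lines of Python. The snippet below is only an illustration of that idea, not TreePics itself; the file name tree.nwk, the images/ folder, and the tip-name-to-file naming convention are hypothetical, and Biopython is assumed to be installed.

```python
import os
from Bio import Phylo  # Biopython is assumed to be installed

# Parse a Newick tree (hypothetical file name) and pair each tip with an image
# file named after the taxon, then write a very simple HTML gallery.
tree = Phylo.read("tree.nwk", "newick")

rows = []
for tip in tree.get_terminals():
    img = os.path.join("images", f"{tip.name}.jpg")  # convention: images/<tip name>.jpg
    if os.path.exists(img):
        rows.append(f'<p>{tip.name}<br><img src="{img}" width="120"></p>')
    else:
        rows.append(f"<p>{tip.name} (no picture found)</p>")

with open("tree_gallery.html", "w") as fh:
    fh.write("<html><body>\n" + "\n".join(rows) + "\n</body></html>\n")
```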

  9. How They Move Reveals What Is Happening: Understanding the Dynamics of Big Events from Human Mobility Pattern

    Directory of Open Access Journals (Sweden)

    Jean Damascène Mazimpaka

    2017-01-01

The context in which a moving object moves contributes to the movement pattern observed. Likewise, the movement pattern reflects the properties of the movement context. In particular, big events influence human mobility depending on the dynamics of the events. However, this influence has not been explored to understand big events. In this paper, we propose a methodology for learning about big events from human mobility patterns. The methodology involves extracting and analysing the stopping, approaching, and moving-away interactions between public transportation vehicles and the geographic context. The analysis is carried out at two different temporal granularity levels to discover global and local patterns. The results of evaluating this methodology on bus trajectories demonstrate that it can discover occurrences of big events from mobility patterns, roughly estimate the event start and end times, and reveal the temporal patterns of arrival and departure of event attendees. This knowledge can be usefully applied in transportation and event planning and management.
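
    A small illustration of the kind of low-level building block such a methodology relies on is naive dwell-based stop detection on a vehicle trajectory. The sketch below is not the authors' implementation; the input format (time, lat, lon tuples) and the distance and duration thresholds are assumptions chosen for illustration.

```python
# Naive stop detection from a GPS trajectory: a "stop" is a run of points that
# stays within max_radius metres of its first point for at least min_duration seconds.
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def detect_stops(track, max_radius=50.0, min_duration=120.0):
    """track: list of (unix_time, lat, lon) sorted by time. Returns (start, end) times of stops."""
    stops, anchor = [], 0
    for i in range(1, len(track) + 1):
        still_inside = i < len(track) and haversine_m(track[anchor][1:], track[i][1:]) <= max_radius
        if not still_inside:
            # the run of points anchored at `anchor` ends at index i - 1
            if track[i - 1][0] - track[anchor][0] >= min_duration:
                stops.append((track[anchor][0], track[i - 1][0]))
            anchor = i
    return stops
```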

  10. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  11. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to scheduling both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...
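
    One way to picture the collocation problem is a scheduler that hands the idle "holes" left by rigid HPC jobs to preemptible Big Data tasks. The toy function below is a hypothetical illustration of that backfilling idea, not the scheduling approach evaluated in the paper; all names, job shapes, and time units are assumptions.

```python
# Toy illustration of backfilling idle HPC nodes with preemptible big data tasks.
def backfill(idle_nodes, reserved_until, bigdata_tasks, now):
    """Assign big data tasks to idle nodes only if they finish before the next reservation.

    idle_nodes:     list of node ids currently unused by HPC jobs
    reserved_until: dict node_id -> time at which the next rigid HPC job needs the node
    bigdata_tasks:  list of (task_id, estimated_runtime), highest priority first
    Returns a list of (task_id, node_id) placements.
    """
    placements = []
    free = list(idle_nodes)
    for task_id, runtime in bigdata_tasks:
        for node in list(free):
            if now + runtime <= reserved_until.get(node, float("inf")):
                placements.append((task_id, node))
                free.remove(node)
                break
    return placements

# Example: node "n2" is reserved again soon, so only the short task fits on it.
print(backfill(["n1", "n2"], {"n1": 1000, "n2": 120}, [("spark-job", 300), ("etl", 60)], now=0))
```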

  12. Long-term interference at the semantic level: Evidence from blocked-cyclic picture matching.

    Science.gov (United States)

    Wei, Tao; Schnur, Tatiana T

    2016-01-01

    Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn) from an array of unrelated pictures. Across trials, probes were either semantically related or unrelated. To test the locus of interference, we presented probes as either words or pictures. If semantic interference occurs at the stage common to both tasks, that is, access to semantic representations, then interference should occur in both probe presentation modalities. Results showed clear semantic interference effects independent of presentation modality and lexical frequency, confirming a semantic locus of interference in comprehension. To test the mechanism of interference, we repeated trials across 4 presentation cycles and manipulated the number of unrelated intervening trials (zero vs. two). We found that semantic interference was additive across cycles and survived 2 intervening trials, demonstrating interference to be long-lasting as opposed to short-lived. However, interference was smaller with zero versus 2 intervening trials, which we interpret to suggest that short-lived facilitation counteracted the long-lived interference. We propose that retrieving meanings associated with words/pictures from the same semantic category yields both interference due to long-lasting changes in connection strength between semantic representations (i.e., incremental learning) and facilitation caused by short-lived residual activation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Learning Apache Mahout

    CERN Document Server

    Tiwary, Chandramani

    2015-01-01

If you are a Java developer and want to use Mahout and machine learning to solve Big Data analytics use cases, then this book is for you. Familiarity with shell scripts is assumed, but no prior experience is required.

  14. Growing and Educational Environment of College Students and Their Motivational and Self-regulated Learning

    Science.gov (United States)

    Peng, Cuixin

Students growing up and being educated in different social backgrounds may perform differently in their learning process. These differences can be found in self-regulated behaviour when fulfilling a given task. This paper focuses on how differences in students' upbringing and educational environments relate to motivation and self-regulated learning. Results reveal that there are differences in motivational and self-regulated learning among students from big cities, small and medium-sized towns, and the countryside. The results also indicate that students from big cities gain more knowledge of cognitive strategies in their learning process.

  15. Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science

    Science.gov (United States)

    Williamson, Ben

    2017-01-01

    "Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…

  16. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  17. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

Today big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by how quickly and optimally value can be extracted from huge data sets, but also by the ability to extract value from uncertain and inaccurate data in an innovative manner using big data analytics. At this point, the main challenge for businesses that use big data tools is to clearly define the scope and the necessary outputs of the business so that real value can be gained. This article aims to explain the big data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  18. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the silver bullet that it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem definition to address science challenges.

  19. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  20. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  1. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and of how these features affect the paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated because of incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
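
    One of the challenges listed here, spurious correlation, is easy to reproduce numerically: with enough independent predictors, some of them will correlate noticeably with a completely unrelated response purely by chance. The NumPy sketch below illustrates this effect; it is not code from the article, and the sample size and dimensions are arbitrary choices.

```python
# Spurious correlation demo: the maximum absolute sample correlation between an
# unrelated response y and p independent Gaussian predictors grows with p.
import numpy as np

rng = np.random.default_rng(0)
n = 100  # sample size
for p in (10, 1_000, 50_000):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)            # independent of every column of X
    Xc = (X - X.mean(0)) / X.std(0)       # standardize columns
    yc = (y - y.mean()) / y.std()
    corr = Xc.T @ yc / n                  # sample correlations of each column with y
    print(f"p={p:>6}: max |corr| = {np.abs(corr).max():.2f}")
# Typical output: the maximum |correlation| climbs from roughly 0.2 toward 0.45-0.5
# as p grows, even though y is independent of X by construction.
```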

  2. Housing Value Forecasting Based on Machine Learning Methods

    OpenAIRE

    Mu, Jingyi; Wu, Fang; Zhang, Aihua

    2014-01-01

In the era of big data, many urgent issues in all walks of life can be addressed with big data techniques. Compared with the Internet, economic, industrial, and aerospace fields, the application of big data in the area of architecture is relatively limited. In this paper, on the basis of actual data, the values of Boston suburb houses are forecast by several machine learning methods. According to the predictions, the government and developers can make decisions about whether developing...
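
    As an illustration of the workflow the abstract describes, forecasting house values with a standard machine learning regressor takes only a few lines with scikit-learn. The sketch below substitutes the California housing data bundled with scikit-learn for the Boston suburb data used in the paper and uses a random forest as one example of "several machine learning methods"; it is not the authors' code.

```python
# Illustrative housing-value regression with scikit-learn (assumed installed).
# The California housing dataset is used here as a stand-in.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X, y = fetch_california_housing(return_X_y=True)   # target is in units of $100k
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE (in units of $100k):", round(mean_absolute_error(y_test, pred), 3))
```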

  3. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpora of text. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  4. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  5. Using children's picture books for reflective learning in nurse education.

    Science.gov (United States)

    Crawley, Josephine; Ditzel, Liz; Walton, Sue

    2012-08-01

    One way in which nursing students may build their practice is through reflective learning from stories. Stories in children's literature offer a special source of narratives that enable students to build empathy and to examine and reconstruct their personal concepts around human experience. Illustrated storybooks written for children are a particularly attractive teaching resource, as they tend to be short, interesting, colourful and easy to read. Yet, little has been written about using such books as a reflective learning tool for nursing students. In this article we describe how we use two children's books and McDrury and Alterio's (2002) 'Reflective Learning through Storytelling' model to educate first year nursing students about loss, grief and death.

  6. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  7. Insight, innovation, and the big picture in system design : application of FunKey architecting

    NARCIS (Netherlands)

    Bonnema, Gerrit Maarten

    2010-01-01

    Systems architecting is the design phase where the top-level functions and performance of a system are distributed over the system's parts, its environment, and its users. Up till now, system architects had to largely learn the required skills in practice. Some courses exist that teach the right

  8. THE EFFECT OF USING FLASH CARD AND PICTURE STORY IN VOCABULARY MASTERY TO THE SEVENTH GRADER OF SMP PGRI 1 MARGATIGA

    Directory of Open Access Journals (Sweden)

    Khoirul Hidayat -

    2017-05-01

According to the content standard, junior high school students are expected to master a vocabulary of about 1,000 words so that they can understand conversation. In fact, most junior high school students do not master vocabulary well, so teachers should be able to choose good media to help students increase their vocabulary. Flash cards and picture stories are two media that can be used to present vocabulary material to students. The objective of this research is to determine whether there are significant differences between using flash cards and picture stories in vocabulary instruction, and to find which medium is more effective. The research used a true experimental design; a pre-test and a post-test were used to collect the data. The study addresses two questions: (1) are there any significant differences between flash cards and picture stories in vocabulary mastery for seventh-grade students at SMP PGRI 2 Margatiga in the 2013/2014 academic year?, and (2) which medium is more effective in the vocabulary learning process, flash cards or picture stories, for the seventh grade of SMP PGRI 2 Margatiga in the 2013/2014 academic year? As a source of data, the researcher used flash cards and picture stories to teach vocabulary to the students. The media helped students learn vocabulary more easily and made the subject more interesting, so their vocabulary increased. The results show that the mean score of the picture story group was 53.86 in the pre-test, 81 during treatment, and 85.33 in the post-test, while the mean score of the flash card group was 59.33 in the pre-test, 73.5 during treatment, and 80.66 in the post-test. This indicates that the students' vocabulary increased, that there is a significant difference between using picture stories and flash cards in vocabulary instruction, and that picture stories were more effective to use in vocabulary instruction.

  9. Data as an asset: What the oil and gas sector can learn from other industries about “Big Data”

    International Nuclear Information System (INIS)

    Perrons, Robert K.; Jensen, Jesse W.

    2015-01-01

The upstream oil and gas industry has been contending with massive data sets and monolithic files for many years, but “Big Data” is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive amount of value that is being realized by Big Data technologies in other parts of the marketplace, however, much of the data collected within the oil and gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This viewpoint examines existing data management practices in the upstream oil and gas industry, and compares them to practices and philosophies that have emerged in organizations that are leading the way in Big Data. The comparison shows that, in companies that are widely considered to be leaders in Big Data analytics, data is regarded as a valuable asset, but this is usually not true within the oil and gas industry insofar as data is frequently regarded there as descriptive information about a physical asset rather than something that is valuable in and of itself. The paper then discusses how the industry could potentially extract more value from data, and concludes with a series of policy-related questions to this end.
    -- Highlights:
    • Upstream oil and gas industry frequently discards or ignores the data it collects
    • The sector tends to view data as descriptive information about the state of assets
    • Leaders in Big Data, by stark contrast, regard data as an asset in and of itself
    • Industry should use Big Data tools to extract more value from digital information

  10. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

This thesis aims to explore the concept of big data and to create an understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  11. Big data handling mechanisms in the healthcare applications: A comprehensive and systematic literature review.

    Science.gov (United States)

    Pashazadeh, Asma; Jafari Navimipour, Nima

    2018-04-12

Healthcare provides many services, such as the diagnosis, treatment, and prevention of diseases, illnesses, injuries, and other physical and mental disorders. Large-scale distributed data processing applications in healthcare operate, as a basic concept, on large amounts of data, so big data application functions are a main part of healthcare operations. However, there has been no comprehensive and systematic survey studying and evaluating the important techniques in this field. Therefore, this paper provides a comprehensive, detailed, and systematic study of the state-of-the-art mechanisms in big data related to healthcare applications in five categories: machine learning, cloud-based, heuristic-based, agent-based, and hybrid mechanisms. The paper also presents a systematic literature review (SLR) of big data applications in the healthcare literature up to the end of 2016. Initially, 205 papers were identified, but a paper selection process reduced the number of papers to 29 important studies. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Who Chokes Under Pressure? The Big Five Personality Traits and Decision-Making under Pressure.

    Science.gov (United States)

    Byrne, Kaileigh A; Silasi-Mansat, Crina D; Worthy, Darrell A

    2015-02-01

    The purpose of the present study was to examine whether the Big Five personality factors could predict who thrives or chokes under pressure during decision-making. The effects of the Big Five personality factors on decision-making ability and performance under social (Experiment 1) and combined social and time pressure (Experiment 2) were examined using the Big Five Personality Inventory and a dynamic decision-making task that required participants to learn an optimal strategy. In Experiment 1, a hierarchical multiple regression analysis showed an interaction between neuroticism and pressure condition. Neuroticism negatively predicted performance under social pressure, but did not affect decision-making under low pressure. Additionally, the negative effect of neuroticism under pressure was replicated using a combined social and time pressure manipulation in Experiment 2. These results support distraction theory whereby pressure taxes highly neurotic individuals' cognitive resources, leading to sub-optimal performance. Agreeableness also negatively predicted performance in both experiments.

  13. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences, and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  14. Wigner method dynamics in the interaction picture

    DEFF Research Database (Denmark)

    Møller, Klaus Braagaard; Dahl, Jens Peder; Henriksen, Niels Engholm

    1994-01-01

The possibility of introducing an interaction picture in the semiclassical Wigner method is investigated, taking an interaction-picture description of the density operator dynamics as the starting point. We show that the dynamics of the interaction picture Wigner function is solved by running a swarm of trajectories in the classical interaction picture introduced previously in the literature. Solving the Wigner method dynamics of collision processes in the interaction picture ensures that the calculated transition probabilities are unambiguous even when the asymptotic potentials are anharmonic. An application of the interaction picture Wigner method to a Morse oscillator interacting with a laser field is presented. The calculated transition probabilities are in good agreement with results obtained by a numerical
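
    The trajectory-swarm idea underlying the Wigner method can be illustrated with a minimal numerical example: sample phase-space points from the Wigner function of an initial Gaussian state, propagate each point classically, and average observables over the swarm. The sketch below does this for a harmonic oscillator in reduced units; it only illustrates the general method, not the interaction-picture formulation or the Morse-oscillator application of the paper.

```python
# Minimal Wigner-method illustration: a classical trajectory swarm sampled from the
# Wigner function of a Gaussian (coherent-state-like) initial condition, propagated
# in a harmonic potential V(q) = 0.5*q**2 (hbar = m = omega = 1, reduced units).
import numpy as np

rng = np.random.default_rng(1)
n_traj = 10_000
q0, p0 = 1.0, 0.0                         # centre of the initial wave packet

# Wigner function of a minimum-uncertainty Gaussian: product of Gaussians in q and p
q = rng.normal(q0, np.sqrt(0.5), n_traj)
p = rng.normal(p0, np.sqrt(0.5), n_traj)

dt, steps = 0.01, 628                     # roughly one oscillation period (2*pi)
for _ in range(steps):                    # leapfrog-style symplectic update
    p -= 0.5 * dt * q                     # force = -dV/dq = -q
    q += dt * p
    p -= 0.5 * dt * q

print("<q> after one period:", q.mean())  # should be close to q0 = 1.0
print("<p> after one period:", p.mean())  # should be close to p0 = 0.0
```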

  15. Does "a picture is worth 1000 words" apply to iconic Chinese words? Relationship of Chinese words and pictures.

    Science.gov (United States)

    Lo, Shih-Yu; Yeh, Su-Ling

    2018-05-29

The meaning of a picture can be extracted rapidly, but the form-to-meaning relationship is less obvious for printed words. In contrast to English words, which follow the grapheme-to-phoneme correspondence rule, the iconic nature of Chinese words might predispose them to activate their semantic representations more directly from their orthographies. By using the paradigm of repetition blindness (RB), which taps into the early level of word processing, we examined whether Chinese words activate their semantic representations as directly as pictures do. RB refers to the failure to detect the second occurrence of an item when it is presented twice in temporal proximity. Previous studies showed RB for semantically related pictures, suggesting that pictures activate their semantic representations directly from their shapes and thus two semantically related pictures are represented as repeated. However, this does not apply to English words, since no RB was found for English synonyms. In this study, we replicated the semantic RB effect for pictures, and further showed the absence of semantic RB for Chinese synonyms. Based on our findings, it is suggested that Chinese words are processed like English words, which do not activate their semantic representations as directly as pictures do.

  16. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: why is the universe accelerating, what is dark matter, and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support in the PASAG report. The second-highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  18. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  19. Iconicity influences how effectively minimally verbal children with autism and ability-matched typically developing children use pictures as symbols in a search task.

    Science.gov (United States)

    Hartley, Calum; Allen, Melissa L

    2015-07-01

    Previous word learning studies suggest that children with autism spectrum disorder may have difficulty understanding pictorial symbols. Here we investigate the ability of children with autism spectrum disorder and language-matched typically developing children to contextualize symbolic information communicated by pictures in a search task that did not involve word learning. Out of the participant's view, a small toy was concealed underneath one of four unique occluders that were individuated by familiar nameable objects or unfamiliar unnamable objects. Children were shown a picture of the hiding location and then searched for the toy. Over three sessions, children completed trials with color photographs, black-and-white line drawings, and abstract color pictures. The results reveal zero group differences; neither children with autism spectrum disorder nor typically developing children were influenced by occluder familiarity, and both groups' errorless retrieval rates were above-chance with all three picture types. However, both groups made significantly more errorless retrievals in the most-iconic photograph trials, and performance was universally predicted by receptive language. Therefore, our findings indicate that children with autism spectrum disorder and young typically developing children can contextualize pictures and use them to adaptively guide their behavior in real time and space. However, this ability is significantly influenced by receptive language development and pictorial iconicity. © The Author(s) 2014.

  20. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets. Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute Engine, App Engine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
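
    For readers who want a concrete sense of "writing code to communicate with the BigQuery API", the snippet below runs a query with the google-cloud-bigquery Python client. The project/credentials setup and the public dataset referenced are assumptions made for illustration; the book and Google's documentation are the authoritative sources.

```python
# Minimal BigQuery query via the google-cloud-bigquery client (assumes the library is
# installed and application-default credentials plus a GCP project are configured).
from google.cloud import bigquery

client = bigquery.Client()  # picks up the default project and credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

job = client.query(query)    # starts the query job
for row in job.result():     # waits for completion and iterates over result rows
    print(row["name"], row["total"])
```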