WorldWideScience

Sample records for tiny science big

  1. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  2. Before big science the pursuit of modern chemistry and physics, 1800-1940

    CERN Document Server

    Nye, Mary Jo

    1999-01-01

    Today's vast multinational scientific monoliths bear little resemblance to the modest laboratories of the early nineteenth century. Yet early in the nineteenth century--when heat and electricity were still counted among the elements--changes were already under way that would revolutionize chemistry and physics into the "big science" of the late twentieth century, expanding tiny, makeshift laboratories into bustling research institutes and replacing the scientific amateurs and generalist savants of the early Victorian era with the professional specialists of contemporary physical science. Mary Jo Nye traces the social and intellectual history of the physical sciences from the early 1800s to the beginning of the Second World War, examining the sweeping transformation of scientific institutions and professions during the period and the groundbreaking experiments that fueled that change, from the earliest investigations of molecular chemistry and field dynamics to the revolutionary breakthroughs of quantum mecha...

  3. Science Big Bang comes to the Alps

    CERN Multimedia

    2008-01-01

    The most extensive and expensive scientific instrument in history is due to start working this summer at Cern, the European particle physics laboratory near Geneva. Two beams of protons will accelerate in opposite directions around a 27km tunnel under the Alpine foothills until they are travelling almost at the speed of light - and then smash together, reproducing on a tiny scale the intense energy of the new-born universe after the inaugural Big Bang 15bn years ago.

  4. Big Science

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    Astronomy, like particle physics, has become Big Science where the demands of front line research can outstrip the science budgets of whole nations. Thus came into being the European Southern Observatory (ESO), founded in 1962 to provide European scientists with a major modern observatory to study the southern sky under optimal conditions.

  5. Science: Big Bang comes to the Alps

    CERN Multimedia

    Cookson, Clive

    2008-01-01

    "The most extensive and expensive scientific instrument in history is due to start working this summer at CERN, the European particle physics laboratory near Geneva. Two beams of protons will accelerate in opposite directions around a 27 km tunnel under the alpine foothills until they are travelling almost at the speed of light - and then smash together, reproducing on a tiny scale the intense energy of the new-born universe after the inaugural Big Bang 15bn years ago. (1 page)

  6. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  7. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  8. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  9. Semantic Web technologies for the big data in life sciences.

    Science.gov (United States)

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

The life sciences field is entering an era of big data, driven by breakthroughs in science and technology, and ever more big data-related projects and activities are being undertaken around the world. Life sciences data generated by new technologies continue to grow rapidly not only in size but also in variety and complexity. For big data to have a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web that aims to provide information that not only humans but also computers can process semantically at large scale. The paper presents a survey of big data in the life sciences, big data-related projects, and Semantic Web technologies. It introduces the main Semantic Web technologies and their current status, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to clarify the role of Semantic Web technologies in the big data era and how they offer a promising solution for big data in the life sciences.

  10. Resonance – Journal of Science Education | Indian Academy of ...

    Indian Academy of Sciences (India)

Resonance – Journal of Science Education, Volume 20, Issue 10, October 2015, pp. 919-930. General Article: The Diatoms: Big Significance of Tiny Glass Houses, by Aditi Kale and Balasubramanian Karthick.

  11. The Tiny Terminators

    Indian Academy of Sciences (India)

The Tiny Terminators - Mosquitoes and Diseases, by P K Sumodan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 5, May 2001, pp. 48-55. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/05/0048-0055

  12. Can companies benefit from Big Science? Science and Industry

    CERN Document Server

    Autio, Erkko; Bianchi-Streit, M

    2003-01-01

    Several studies have indicated that there are significant returns on financial investment via "Big Science" centres. Financial multipliers ranging from 2.7 (ESA) to 3.7 (CERN) have been found, meaning that each Euro invested in industry by Big Science generates a two- to fourfold return for the supplier. Moreover, laboratories such as CERN are proud of their record in technology transfer, where research developments lead to applications in other fields - for example, with particle accelerators and detectors. Less well documented, however, is the effect of the experience that technological firms gain through working in the arena of Big Science. Indeed, up to now there has been no explicit empirical study of such benefits. Our findings reveal a variety of outcomes, which include technological learning, the development of new products and markets, and impact on the firm's organization. The study also demonstrates the importance of technologically challenging projects for staff at CERN. Together, these findings i...

  13. Big data e data science

    OpenAIRE

    Cavique, Luís

    2014-01-01

This article presents the basic concepts of Big Data and the new field it gave rise to, Data Science. Within Data Science, the notion of reducing the dimensionality of data was discussed and illustrated with examples.

  14. Big science transformed science, politics and organization in Europe and the United States

    CERN Document Server

    Hallonsten, Olof

    2016-01-01

This book analyses the emergence of a transformed Big Science in Europe and the United States, using both historical and sociological perspectives. It shows how technology-intensive natural sciences grew to a prominent position in Western societies during the post-World War II era, and how their development cohered with both technological and social developments. At the helm of post-war science are large-scale projects, primarily in physics, which receive substantial funds from the public purse. Big Science Transformed shows how these projects, popularly called 'Big Science', have become symbols of progress. It analyses changes to the political and sociological frameworks surrounding publicly-funded science, and their impact on a number of new accelerator- and reactor-based facilities that have come to prominence in materials science and the life sciences. Interdisciplinary in scope, this book will be of great interest to historians, sociologists and philosophers of science.

  15. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data: its current manifestation is constructing a Maginot Line in 21st-century science. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data, overall, is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT): applying no-boundary thinking to problem definition in order to address science challenges.

  16. From big data to deep insight in developmental science.

    Science.gov (United States)

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  17. Big Data and Data Science in Critical Care.

    Science.gov (United States)

    Sanchez-Pinto, L Nelson; Luo, Yuan; Churpek, Matthew M

    2018-05-09

The digitalization of the healthcare system has resulted in a deluge of clinical Big Data and has prompted the rapid growth of data science in medicine. Data science, the field of study dedicated to the principled extraction of knowledge from complex data, is particularly relevant in the critical care setting. The availability of large amounts of data in the intensive care unit, the need for better evidence-based care, and the complexity of critical illness make the use of data science techniques and data-driven research particularly appealing to intensivists. Despite the increasing number of studies and publications in the field, so far there have been few examples of data science projects that have resulted in successful implementations of data-driven systems in the intensive care unit. However, given the expected growth in the field, intensivists should be familiar with the opportunities and challenges of Big Data and data science. In this paper, we review the definitions, types of algorithms, applications, challenges, and future of Big Data and data science in critical care. Copyright © 2018. Published by Elsevier Inc.

  18. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have opened their own debates on the topic. Perspectives and debates are flourishing while a consensus definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even by this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the oldest are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    Science.gov (United States)

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

The integration of Big Data from electronic health records and other information systems, within and across health care enterprises, provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and to control costs. As health care shifts to the community, mobile health applications add to the Big Data available. An evolving national action plan, spearheaded by the University of Minnesota School of Nursing, includes nursing data in Big Data science. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for the inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes so as to take advantage of cutting-edge analytics that support efforts to control costs and improve patient quality and safety.

  20. Limitations of constitutive relations for TiNi shape memory alloys

    International Nuclear Information System (INIS)

    Tang, W.; Sandstroem, R.

    1995-01-01

The phase transformation tensor Ω in the constitutive equation proposed by Tanaka has been evaluated using experimental data for TiNi alloys in a constrained recovery process. The evaluation demonstrates that the absolute value of Ω for the constrained recovery process is typically about 0.6-0.7 × 10³ MPa, which is much smaller than that for the stress-induced martensitic transformation (typically 2.5-3.5 × 10³ MPa). Based on the evaluated values of Ω, recovery stress-temperature relations predicted by the constitutive equation are compared with experimental data for TiNi rods under different strains; a large discrepancy exists for large-strain conditions. Several transformation kinetic expressions are examined for the constitutive relation of the constrained recovery process. (orig.)

  1. Data Management and Preservation Planning for Big Science

    Directory of Open Access Journals (Sweden)

    Juan Bicarregui

    2013-06-01

Full Text Available ‘Big Science’ - that is, science which involves large collaborations with dedicated facilities, large data volumes and multinational investments - is often seen as different when it comes to data management and preservation planning. Big Science handles its data differently from other disciplines and has data management problems that are qualitatively different. In part, these differences arise from the quantities of data involved, but perhaps more importantly from the cultural, organisational and technical distinctiveness of these academic cultures. Consequently, the data management systems are typically, and rationally, bespoke, which means that the planning for data management and preservation (DMP) must also be bespoke. These differences are such that ‘just read and implement the OAIS specification’ is reasonable DMP advice, but this bald prescription can and should be supported by a methodological ‘toolkit’, including overviews, case studies and costing models, to provide guidance on developing best practice in DMP policy and infrastructure for these projects, as well as considering OAIS validation, audit and cost modelling. In this paper, we build on previous work with the LIGO collaboration to consider the role of DMP planning within these big science scenarios and discuss how to apply current best practice. We discuss the results of the MaRDI-Gross project (Managing Research Data Infrastructures – Big Science), which has been developing a toolkit to provide guidelines on the application of best practice in DMP planning within big science projects. This is targeted primarily at projects’ engineering managers, but is also intended to help funders collaborate on DMP plans that satisfy the requirements imposed on them.

  2. Legitimizing ESS Big Science as a collaboration across boundaries

    CERN Document Server

    O'Dell, Tom

    2013-01-01

'Big Science' is a broad epithet that can be associated with research projects as different as the Manhattan Project, the construction of the Hubble Telescope, and the establishment of CERN in Geneva. While the science produced by these projects is vastly different, they have in common the fact that they all involve huge budgets, big facilities, complex instrumentation, years of planning, and large multidis...

  3. Big data science: A literature review of nursing research exemplars.

    Science.gov (United States)

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There is a need to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of the state of the science in data analytics, and to continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is the question of whether nursing faculty and PhD programs are preparing future scientists for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. John C. Mather, the Big Bang, and the COBE

    Science.gov (United States)

Site synopsis: John C. Mather, the Big Bang, and the COBE. Resources on collaborative work on understanding the Big Bang. Mather and Smoot analyzed data from NASA's Cosmic Background Explorer (COBE), supporting the Big Bang theory and showing that the Big Bang was complete in the first instants, with only a tiny fraction ...

  5. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  6. The faces of Big Science.

    Science.gov (United States)

    Schatz, Gottfried

    2014-06-01

    Fifty years ago, academic science was a calling with few regulations or financial rewards. Today, it is a huge enterprise confronted by a plethora of bureaucratic and political controls. This change was not triggered by specific events or decisions but reflects the explosive 'knee' in the exponential growth that science has sustained during the past three-and-a-half centuries. Coming to terms with the demands and benefits of 'Big Science' is a major challenge for today's scientific generation. Since its foundation 50 years ago, the European Molecular Biology Organization (EMBO) has been of invaluable help in meeting this challenge.

  7. TinyDebug

    DEFF Research Database (Denmark)

    Hansen, Morten Tranberg

    2011-01-01

Debugging embedded wireless systems can be cumbersome due to low visibility. To ease the task of debugging, this paper presents TinyDebug, a multi-purpose passive debugging framework for developing embedded wireless systems. TinyDebug is designed to be used throughout the entire system... logging to extraction, and we show how the framework improves upon existing message-based and event-logging debugging techniques while enabling distributed event processing. We also present a number of optional event-analysis tools demonstrating the generality of the TinyDebug debug messages.

  8. Origins of tiny neutrino mass and large flavor mixings

    International Nuclear Information System (INIS)

    Haba, Naoyuki

    2015-01-01

Active neutrino masses are far smaller than those of the other quarks and leptons, and there are large flavor mixings in the lepton sector, in contrast to the quark sector. These are great mysteries of the standard model, but also excellent hints of new physics beyond it. Thus, the questions "What is the origin of the tiny neutrino masses?" and "What is the origin of the large lepton flavor mixings?" are very important. In this paper, we overview various attempts to solve these big questions. (author)

  9. The Ethics of Big Data and Nursing Science.

    Science.gov (United States)

    Milton, Constance L

    2017-10-01

    Big data is a scientific, social, and technological trend referring to the process and size of datasets available for analysis. Ethical implications arise as healthcare disciplines, including nursing, struggle over questions of informed consent, privacy, ownership of data, and its possible use in epistemology. The author offers straight-thinking possibilities for the use of big data in nursing science.

  10. Preliminary investigations on TINI based distributed instrumentation systems

    International Nuclear Information System (INIS)

    Bezboruah, T.; Kalita, M.

    2006-04-01

A prototype web-enabled distributed instrumentation system is being proposed in the Department of Electronics Science, Gauhati University, Assam, India. The distributed instrumentation system contains sensors, legacy hardware, a TCP/IP protocol converter, a TCP/IP Ethernet network, a database server, a web/application server and client PCs. As part of the proposed work, the Tiny Internet Interface (TINI, TBM390: Dallas Semiconductor) has been deployed as the TCP/IP stack, with the Java programming language as the software tool. A Java feature particularly relevant to distributed systems is the applet: a Java class that can be downloaded from the web server and run in a container application such as a web browser or an applet viewer. TINI was chosen as the TCP/IP stack because it is the embedded system best suited to the Java programming language and is uniquely designed for communicating with One Wire Devices (OWD) over the network. Here we discuss the hardware and software aspects of TINI with OWD for the present system. (author)
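The record above describes a protocol-converter pattern: a small embedded node (here, TINI) exposes legacy one-wire sensor readings to networked clients over TCP/IP. As a rough illustration only (the JSON framing, the `READ` poll command, and all names below are hypothetical, not taken from the paper), a minimal loopback sketch of such a converter:

```python
import json
import socket
import socketserver
import threading

# Hypothetical line-oriented framing for one sensor reading; the actual
# TINI/OWD wire format is not specified in the record above.
def frame_reading(sensor_id: str, value: float) -> bytes:
    return (json.dumps({"sensor": sensor_id, "value": value}) + "\n").encode()

class ConverterHandler(socketserver.StreamRequestHandler):
    """Plays the TCP/IP protocol-converter role: answers each client poll
    with the latest (here, simulated) legacy-sensor reading."""
    def handle(self):
        self.rfile.readline()  # consume the client's poll line
        self.wfile.write(frame_reading("ow-temp-0", 21.5))

def poll_converter(host: str, port: int) -> dict:
    """Client side: poll the converter once and decode the reading."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b"READ\n")
        return json.loads(s.makefile().readline())

if __name__ == "__main__":
    # Loopback demo: server thread stands in for the embedded node.
    srv = socketserver.TCPServer(("127.0.0.1", 0), ConverterHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    reading = poll_converter("127.0.0.1", srv.server_address[1])
    print(reading["sensor"], reading["value"])
    srv.shutdown()
```

A real deployment would run the server side on the TINI device in Java and read actual one-wire sensors; the loopback demo only exercises the framing and polling logic.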

Kulturní hodnoty tzv. Mariánského trojúhelníku - Sloup, Vranov, Křtiny (Cultural values of the so-called Marian Triangle - Sloup, Vranov, Křtiny)

    OpenAIRE

    Bezděková, Veronika

    2009-01-01

    In the Moravian Karst there are three big churches consecrated to The Virgin Mary. These are visited by many pilgrims and have their own calendar of pilgrimages. They are Vranov, Křtiny and Sloup. Churches in these towns are consecrated to The Virgin Mary: Vranov commemorates the birth of The Virgin Mary, Křtiny commemorates the name of The Virgin Mary and Sloup commemorates the sufferings of The Virgin Mary. So we talk about the triangle of The Virgin Mary. This term is the main point of my ...

  12. Towards Geo-spatial Information Science in Big Data Era

    Directory of Open Access Journals (Sweden)

    LI Deren

    2016-04-01

Full Text Available Since the 1990s, with the advent of the worldwide information revolution and the development of the internet, geospatial information science has come of age, pushing forward the building of the digital Earth and the cyber city. As we entered the 21st century, with the development and integration of global information technology and industrialization, the internet of things and cloud computing came into being, and human society entered the big data era. This article covers the key features of geospatial information science in the big data era (ubiquity; multi-dimensionality and dynamics; internet+ networking; full automation and real-time operation; from sensing to recognition; crowdsourcing and VGI; and service orientation) and addresses the key technical issues that need to be resolved (a non-linear four-dimensional Earth reference frame system; space-based enhanced GNSS; unified space-air-land network communication techniques; on-board processing techniques for multi-source image data; smart interface service techniques for space-borne information; space-based resource scheduling and network security; and the design and development of a payload-based multi-functional satellite platform) in order to provide a new definition of geospatial information science in the big data era. Based on the discussion in this paper, the author finally proposes a new definition of geospatial information science (geomatics): geomatics is a multi-disciplinary science and technology which, using a systematic approach, integrates all the means for spatio-temporal data acquisition, information extraction, networked management, knowledge discovery, spatial sensing and recognition, as well as intelligent location-based services for any physical objects and human activities around the earth and its environment. Starting from this new definition, geospatial information science will find many more opportunities and tasks in the big data era for the generation of the smart earth and the smart city. Our profession ...

  13. Big Data: New science, new challenges, new dialogical opportunities

    OpenAIRE

    Fuller, Michael

    2015-01-01

    The advent of extremely large datasets, known as “big data”, has been heralded as the instantiation of a new science, requiring a new kind of practitioner: the “data scientist”. This paper explores the concept of big data, drawing attention to a number of new issues – not least ethical concerns, and questions surrounding interpretation – which big data sets present. It is observed that the skills required for data scientists are in some respects closer to those traditionally associated with t...

  14. The sociology of big science | Public Lecture by Ulrike Felt | 15 July

    CERN Multimedia

    2014-01-01

    "The sociology of big science" Public Lecture by Prof. Ulrike Felt Tuesday 15 July 2014 - 7.30 p.m. Globe of Science and Innovation Lecture in English, translated in French. Entrance free. Limited number of seats. Reservation essential: +41 22 767 76 76 or cern.reception@cern.ch What science for what kind of society? Reflecting the development of big science Without any doubt, CERN can be described as being among the most ambitious scientific enterprises ever undertaken. For 60 years, the Member States have not only invested considerable financial means into this institution, but have also supported the creation of a highly visionary research programme. And this has led to a change in the way science is done, as captured by the idea of "big science". Yet this naturally also raises a number of quite fundamental questions: How did the meaning of "doing science" change? What justifies societal engagement with and support for such a cost-intensive long-t...

  15. Forget the hype or reality. Big data presents new opportunities in Earth Science.

    Science.gov (United States)

    Lee, T. J.

    2015-12-01

    Earth science is arguably one of the most mature science disciplines, constantly acquiring, curating, and utilizing a large volume of data of diverse variety. We were dealing with big data before "big data" was a term. For example, while developing the EOS program in the 1980s, the EOS Data and Information System (EOSDIS) was developed to manage the vast amount of data acquired by the EOS fleet of satellites. EOSDIS has continued to be a shining example of modern science data systems over the past two decades. With the explosion of the internet, the usage of social media, and the provision of sensors everywhere, the big data era has brought new challenges. First, Google developed its search algorithm and a distributed data management system. The open source communities quickly followed up and developed the Hadoop file system to facilitate MapReduce workloads. The internet continues to generate tens of petabytes of data every day. There is a significant shortage of algorithms and knowledgeable manpower to mine the data. In response, the federal government developed big data programs that fund research and development projects and training programs to tackle these new challenges. Meanwhile, compared to the internet data explosion, the Earth science big data problem has become quite small. Nevertheless, the big data era presents an opportunity for Earth science to evolve. We have learned about MapReduce algorithms, in-memory data mining, machine learning, graph analysis, and semantic web technologies. How do we apply these new technologies to our discipline and bring the hype down to Earth? In this talk, I will discuss how we might apply some of the big data technologies to our discipline and solve many of our challenging problems. More importantly, I will propose a new Earth science data system architecture to enable new types of scientific inquiry.
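    The MapReduce workloads mentioned in the abstract above follow a simple pattern: a map step emits key-value pairs from raw records, and a reduce step aggregates the pairs by key. A minimal illustrative sketch (not taken from the record itself; the word-count task and function names are hypothetical) might look like:

    ```python
    from itertools import groupby
    from operator import itemgetter

    def map_phase(records):
        # Map: emit a (word, 1) pair for every word in every input record
        for record in records:
            for word in record.split():
                yield (word.lower(), 1)

    def reduce_phase(pairs):
        # Shuffle/sort by key, then reduce each group by summing its counts
        counts = {}
        for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
            counts[key] = sum(v for _, v in group)
        return counts

    if __name__ == "__main__":
        docs = ["big data in earth science", "big data era"]
        print(reduce_phase(map_phase(docs)))
    ```

    In a real Hadoop deployment the map and reduce phases run in parallel across many nodes and the shuffle is distributed, but the dataflow is the same as in this single-process sketch.
    
    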

  16. Earth Science Data Analysis in the Era of Big Data

    Science.gov (United States)

    Kuo, K.-S.; Clune, T. L.; Ramachandran, R.

    2014-01-01

    Anyone with even a cursory interest in information technology cannot help but recognize that "Big Data" is one of the most fashionable catchphrases of late. From accurate voice and facial recognition, language translation, and airfare prediction and comparison, to monitoring the real-time spread of flu, Big Data techniques have been applied to many seemingly intractable problems with spectacular successes. They appear to be a rewarding way to approach many currently unsolved problems. Few fields of research can claim a longer history with problems involving voluminous data than Earth science. The problems we are facing today with our Earth's future are more complex and carry potentially graver consequences than the examples given above. How has our climate changed? Besides natural variations, what is causing these changes? What are the processes involved and through what mechanisms are these connected? How will they impact life as we know it? In attempts to answer these questions, we have resorted to observations and numerical simulations with ever-finer resolutions, which continue to feed the "data deluge." Plausibly, many Earth scientists are wondering: How will Big Data technologies benefit Earth science research? As an example from the global water cycle, one subdomain among many in Earth science, how would these technologies accelerate the analysis of decades of global precipitation to ascertain the changes in its characteristics, to validate these changes in predictive climate models, and to infer the implications of these changes to ecosystems, economies, and public health? Earth science researchers need a viable way to harness the power of Big Data technologies to analyze large volumes and varieties of data with velocity and veracity. Beyond providing speedy data analysis capabilities, Big Data technologies can also play a crucial, albeit indirect, role in boosting scientific productivity by facilitating effective collaboration within an analysis environment

  17. Making every gram count - Big measurements from tiny platforms (Invited)

    Science.gov (United States)

    Fish, C. S.; Neilsen, T. L.; Stromberg, E. M.

    2013-12-01

    The most significant advances in Earth, solar, and space physics over the next decades will originate from new, system-level observational techniques. The most promising technique still to be fully developed and exploited requires conducting multi-point or distributed constellation-based observations. This system-level observational approach is required to understand the 'big picture' coupling between disparate regions such as the solar wind, magnetosphere, ionosphere, upper atmosphere, land, and ocean. The National Research Council, the NASA Science Mission Directorate, and the larger heliophysics community have repeatedly identified the pressing need for multi-point scientific investigations to be implemented via satellite constellations. The NASA Solar Terrestrial Probes Magnetospheric Multiscale (MMS) mission and NASA Earth Science Division's 'A-train', consisting of the Aqua, CloudSat, CALIPSO and Aura satellites, are examples of such constellations. However, the costs to date of these and other similar proposed constellations have been prohibitive given the 'large satellite' architectures and the multiple launch vehicles required for implementing the constellations. Financially sustainable development and deployment of multi-spacecraft constellations can only be achieved through the use of small spacecraft that allow for multiple hostings per launch vehicle. The revolution in commercial mobile and other battery-powered consumer technology has helped enable researchers in recent years to build and fly very small yet capable satellites, principally CubeSats. A majority of the CubeSat activity and development to date has come from international academia and the amateur radio satellite community, but several of the typical large-satellite vendors have developed CubeSats as well. Recent government-sponsored CubeSat initiatives, such as the NRO Colony, NSF CubeSat Space Weather, NASA Office of Chief Technologist Edison and CubeSat Launch Initiative (CSLI) Educational

  18. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives

    Science.gov (United States)

    Miron-Shatz, T.; Lau, A. Y. S.; Paton, C.

    2014-01-01

    Summary Objectives As technology continues to evolve and rise in various industries, such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives of the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. Methods A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and small scale. Results Scientists and healthcare providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory assists individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts The concept of Big Data and associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and healthcare. Future exploration of issues surrounding data privacy, confidentiality, and education are needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to "small data" would also be useful. PMID:25123717

  19. TinyOS Alliance Structure

    DEFF Research Database (Denmark)

    Bonnet, Philippe; Culler, David; Estrin, Deborah

    2006-01-01

    This memo describes the goals and organization structure of the TinyOS Alliance. It covers membership, the working group forums for contribution, intellectual property, source licensing, and the TinyOS Steering Committee (TSC)...

  20. Detection and Characterisation of Meteors as a Big Data Citizen Science project

    Science.gov (United States)

    Gritsevich, M.

    2017-12-01

    Out of a total of around 50,000 meteorites currently known to science, the atmospheric passage was recorded instrumentally in only 30 cases with the potential to derive their atmospheric trajectories and pre-impact heliocentric orbits. Similarly, while the observations of meteors add thousands of new entries per month to existing databases, it is extremely rare that they lead to meteorite recovery. Meteor studies thus represent an excellent example of a Big Data citizen science project, where progress in the field largely depends on the prompt identification and characterisation of meteor events as well as on extensive and valuable contributions by amateur observers. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to broaden its experimental and theoretical horizons and seek more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increased interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently

  1. Green data science : using big data in an "environmentally friendly" manner

    NARCIS (Netherlands)

    Van Der Aalst, W.M.P.

    2016-01-01

    The widespread use of "Big Data" is heavily impacting organizations and individuals for which these data are collected. Sophisticated data science techniques aim to extract as much value from data as possible. Powerful mixtures of Big Data and analytics are rapidly changing the way we do business,

  2. Big Data and Data Science: Opportunities and Challenges of iSchools

    Directory of Open Access Journals (Sweden)

    Il-Yeol Song

    2017-08-01

    Full Text Available Due to the recent explosion of big data, our society has been rapidly going through digital transformation and entering a new world with numerous eye-opening developments. These new trends impact the society and future jobs, and thus student careers. At the heart of this digital transformation is data science, the discipline that makes sense of big data. With many rapidly emerging digital challenges ahead of us, this article discusses perspectives on iSchools’ opportunities and suggestions in data science education. We argue that iSchools should empower their students with “information computing” disciplines, which we define as the ability to solve problems and create values, information, and knowledge using tools in application domains. As specific approaches to enforcing information computing disciplines in data science education, we suggest the three foci of user-based, tool-based, and application-based. These three foci will serve to differentiate the data science education of iSchools from that of computer science or business schools. We present a layered Data Science Education Framework (DSEF) with building blocks that include the three pillars of data science (people, technology, and data), computational thinking, data-driven paradigms, and data science lifecycles. Data science courses built on the top of this framework should thus be executed with user-based, tool-based, and application-based approaches. This framework will help our students think about data science problems from the big picture perspective and foster appropriate problem-solving skills in conjunction with broad perspectives of data science lifecycles. We hope the DSEF discussed in this article will help fellow iSchools in their design of new data science curricula.

  3. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  4. French environmental labs may get 'big science' funds

    CERN Multimedia

    2000-01-01

    France is considering expanding its network of enviromental laboratories to study the long term impacts of environmental change. It has been suggested that this could be funded using the 'big science' budget usually used for facilities such as particle accelerators (2 para).

  5. A Big Data Guide to Understanding Climate Change: The Case for Theory-Guided Data Science.

    Science.gov (United States)

    Faghmous, James H; Kumar, Vipin

    2014-09-01

    Global climate change and its impact on human life has become one of our era's greatest challenges. Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data. This stands in stark contrast to other fields such as advertising or electronic commerce, where big data has been a great success story. This discrepancy stems from the complex nature of climate data as well as the scientific questions climate science brings forth. This article introduces a data science audience to the challenges and opportunities to mine large climate datasets, with an emphasis on the nuanced difference between mining climate data and traditional big data approaches. We focus on data, methods, and application challenges that must be addressed in order for big data to fulfill their promise with regard to climate science applications. More importantly, we highlight research showing that solely relying on traditional big data techniques results in dubious findings, and we instead propose a theory-guided data science paradigm that uses scientific theory to constrain both the big data techniques as well as the results-interpretation process to extract accurate insight from large climate data.

  6. Perspectives on Policy and the Value of Nursing Science in a Big Data Era.

    Science.gov (United States)

    Gephart, Sheila M; Davis, Mary; Shea, Kimberly

    2018-01-01

    As data volume explodes, nurse scientists grapple with ways to adapt to the big data movement without jeopardizing its epistemic values and theoretical focus that celebrate while acknowledging the authority and unity of its body of knowledge. In this article, the authors describe big data and emphasize ways that nursing science brings value to its study. Collective nursing voices that call for more nursing engagement in the big data era are answered with ways to adapt and integrate theoretical and domain expertise from nursing into data science.

  7. Big Data in Plant Science: Resources and Data Mining Tools for Plant Genomics and Proteomics.

    Science.gov (United States)

    Popescu, George V; Noutsos, Christos; Popescu, Sorina C

    2016-01-01

    In modern plant biology, progress is increasingly defined by the scientists' ability to gather and analyze data sets of high volume and complexity, otherwise known as "big data". Arguably, the largest increase in the volume of plant data sets over the last decade is a consequence of the application of the next-generation sequencing and mass-spectrometry technologies to the study of experimental model and crop plants. The increase in quantity and complexity of biological data brings challenges, mostly associated with data acquisition, processing, and sharing within the scientific community. Nonetheless, big data in plant science create unique opportunities in advancing our understanding of complex biological processes at a level of accuracy without precedence, and establish a base for the plant systems biology. In this chapter, we summarize the major drivers of big data in plant science and big data initiatives in life sciences with a focus on the scope and impact of iPlant, a representative cyberinfrastructure platform for plant science.

  8. Big data in medical science--a biostatistical view.

    Science.gov (United States)

    Binder, Harald; Blettner, Maria

    2015-02-27

    Inexpensive techniques for measurement and data storage now enable medical researchers to acquire far more data than can conveniently be analyzed by traditional methods. The expression "big data" refers to quantities on the order of magnitude of a terabyte (10^12 bytes); special techniques must be used to evaluate such huge quantities of data in a scientifically meaningful way. Whether data sets of this size are useful and important is an open question that currently confronts medical science. In this article, we give illustrative examples of the use of analytical techniques for big data and discuss them in the light of a selective literature review. We point out some critical aspects that should be considered to avoid errors when large amounts of data are analyzed. Machine learning techniques enable the recognition of potentially relevant patterns. When such techniques are used, certain additional steps should be taken that are unnecessary in more traditional analyses; for example, patient characteristics should be differentially weighted. If this is not done as a preliminary step before similarity detection, which is a component of many data analysis operations, characteristics such as age or sex will be weighted no higher than any one out of 10 000 gene expression values. Experience from the analysis of conventional observational data sets can be called upon to draw conclusions about potential causal effects from big data sets. Big data techniques can be used, for example, to evaluate observational data derived from the routine care of entire populations, with clustering methods used to analyze therapeutically relevant patient subgroups. Such analyses can provide complementary information to clinical trials of the classic type. As big data analyses become more popular, various statistical techniques for causality analysis in observational data are becoming more widely available. This is likely to be of benefit to medical science, but specific adaptations will

  9. The Human Genome Project: big science transforms biology and medicine

    OpenAIRE

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called ‘big science’ - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and a...

  10. Small wormholes change our picture of the big bang

    CERN Multimedia

    1990-01-01

    Matt Visser has studied tiny wormholes, which may be produced on a subatomic scale by quantum fluctuations in the energy of the vacuum. He believes these quantum wormholes could change our picture of the origin of the Universe in the big bang (1/2 p)

  11. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies, but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements for data management. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data, the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  12. The big questions in science the quest to solve the great unknowns

    CERN Document Server

    Birch, Hayley; Stuart, Colin

    2016-01-01

    What are the great scientific questions of our modern age and why don't we know the answers? The Big Questions in Science takes on the most fascinating and pressing mysteries we have yet to crack and explains how tantalizingly close science is to solving them (or how frustratingly out of reach they remain). Some, such as "Can we live forever?" and "What makes us human?" are eternal questions; others, such as "How do we solve the population problem?" and "How do we get more energy from the sun?" are essential to our future survival. Written by experienced science writers, adept at translating the complicated concepts of "hard science" into an engaging and insightful discussion for the general reader, The Big Questions in Science grapples with 20 hot topics across the disciplines of biology, chemistry, physics, astronomy and computer science to ignite the inquisitive scientist in all of us.

  13. Big Science, co-publication and collaboration: getting to the core

    Energy Technology Data Exchange (ETDEWEB)

    Kahn, M.

    2016-07-01

    International collaboration in science has risen considerably in the last two decades (UNESCO, 2010). In the same period Big Science collaborations have proliferated in physics, astronomy, astrophysics, and medicine. Publications that use Big Science data draw on the expertise of those who design and build the equipment and software, as well as the scientific community. Over time a set of ‘rules of use’ has emerged that protects their intellectual property but that may have the unintended consequence of enhancing co-publication counts. This in turn distorts the use of co-publication data as a proxy for collaboration. The distorting effects are illustrated by means of a case study of the BRICS countries that recently issued a declaration on scientific and technological cooperation with specific fields allocated to each country. It is found that with a single exception the dominant research areas of collaboration are different to individual country specializations. The disjuncture between such ‘collaboration’ and the intent of the declaration raises questions of import to science policy, for the BRICS in particular and the measurement of scientific collaboration more generally. (Author)

  14. From darwin to the census of marine life: marine biology as big science.

    Science.gov (United States)

    Vermeulen, Niki

    2013-01-01

    With the development of the Human Genome Project, a heated debate emerged on biology becoming 'big science'. However, biology already has a long tradition of collaboration, as natural historians were part of the first collective scientific efforts: exploring the variety of life on earth. Such mappings of life still continue today, and if field biology is gradually becoming an important subject of studies into big science, research into life in the world's oceans is not taken into account yet. This paper therefore explores marine biology as big science, presenting the historical development of marine research towards the international 'Census of Marine Life' (CoML) making an inventory of life in the world's oceans. Discussing various aspects of collaboration--including size, internationalisation, research practice, technological developments, application, and public communication--I will ask if CoML still resembles traditional collaborations to collect life. While showing both continuity and change, I will argue that marine biology is a form of natural history: a specific way of working together in biology that has transformed substantially in interaction with recent developments in the life sciences and society. As a result, the paper does not only give an overview of transformations towards large scale research in marine biology, but also shines a new light on big biology, suggesting new ways to deepen the understanding of collaboration in the life sciences by distinguishing between different 'collective ways of knowing'.

  15. From darwin to the census of marine life: marine biology as big science.

    Directory of Open Access Journals (Sweden)

    Niki Vermeulen

    Full Text Available With the development of the Human Genome Project, a heated debate emerged on biology becoming 'big science'. However, biology already has a long tradition of collaboration, as natural historians were part of the first collective scientific efforts: exploring the variety of life on earth. Such mappings of life still continue today, and if field biology is gradually becoming an important subject of studies into big science, research into life in the world's oceans is not taken into account yet. This paper therefore explores marine biology as big science, presenting the historical development of marine research towards the international 'Census of Marine Life' (CoML) making an inventory of life in the world's oceans. Discussing various aspects of collaboration--including size, internationalisation, research practice, technological developments, application, and public communication--I will ask if CoML still resembles traditional collaborations to collect life. While showing both continuity and change, I will argue that marine biology is a form of natural history: a specific way of working together in biology that has transformed substantially in interaction with recent developments in the life sciences and society. As a result, the paper does not only give an overview of transformations towards large scale research in marine biology, but also shines a new light on big biology, suggesting new ways to deepen the understanding of collaboration in the life sciences by distinguishing between different 'collective ways of knowing'.

  16. Big Data: Philosophy, Emergence, Crowdledge, and Science Education

    Science.gov (United States)

    dos Santos, Renato P.

    2015-01-01

    Big Data already passed out of hype, is now a field that deserves serious academic investigation, and natural scientists should also become familiar with Analytics. On the other hand, there is little empirical evidence that any science taught in school is helping people to lead happier, more prosperous, or more politically well-informed lives. In…

  17. The role of administrative data in the big data revolution in social science research.

    Science.gov (United States)

    Connelly, Roxanne; Playford, Christopher J; Gayle, Vernon; Dibben, Chris

    2016-09-01

    The term big data is currently a buzzword in social science, however its precise meaning is ambiguous. In this paper we focus on administrative data which is a distinctive form of big data. Exciting new opportunities for social science research will be afforded by new administrative data resources, but these are currently under appreciated by the research community. The central aim of this paper is to discuss the challenges associated with administrative data. We emphasise that it is critical for researchers to carefully consider how administrative data has been produced. We conclude that administrative datasets have the potential to contribute to the development of high-quality and impactful social science research, and should not be overlooked in the emerging field of big data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. A tiny tick can cause a big health problem

    Directory of Open Access Journals (Sweden)

    Manuel John

    2017-01-01

    Full Text Available Ticks are tiny crawling bugs in the spider family that feed by sucking blood from animals. They are second only to mosquitoes as vectors of human disease, both infectious and toxic. Infected ticks spread over a hundred diseases, some of which are fatal if undetected. They spread the spirochete (which multiplies in the insect's gut) with a subsequent bite to the next host. We describe the only reported cases of periocular tick bite from India that presented to us within a span of 3 days, and their management. Due suspicion and magnification of the lesions revealed the ticks, which otherwise masqueraded as small skin tags/moles on gross examination. The ticks were firmly latched on to the skin and careful removal prevented incarceration of the mouth parts. Rickettsial diseases that were believed to have disappeared from India are reemerging and their presence has recently been documented in at least 11 states in the country. Among vector-borne diseases, the most common, Lyme disease, also known as the great mimicker, can present with rheumatoid arthritis, fibromyalgia, depression, attention deficit hyperactivity disorder, multiple sclerosis, chronic fatigue syndrome, cardiac manifestations, encephalitis, and mental illness, to name some of the many associations. Common ocular symptoms and signs include conjunctivitis, keratitis, uveitis, and retinitis. Early detection and treatment of tick-borne diseases is important to prevent multi-system complications that can develop later in life.

  19. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  20. The science of tiny things: physics at the nanoscale

    Energy Technology Data Exchange (ETDEWEB)

    Copp, Stacy Marla [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-07

    Nanoscience is the study of tiny objects that are only a billionth of a meter in size, or about 1,000 to 10,000 times smaller than a human hair. From the electronics in your smartphone to the molecular motors in your body’s cells, nanoscientists study and design materials that span a huge range of subjects, from physics to chemistry to biology. I will talk about some of what we do at LANL’s Center for Integrated Nanotechnologies, as well as how I first got interested in nanoscience and how I became a nanoscientist at LANL.

  1. Decision Sciences, Economics, Finance, Business, Computing, and Big Data: Connections

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2018-01-01

    textabstractThis paper provides a review of some connecting literature in Decision Sciences, Economics, Finance, Business, Computing, and Big Data. We then discuss some research that is related to the six cognate disciplines. Academics could develop theoretical models and subsequent

  2. Betsy Pugel, Tiny houses: Planetary protection-focused materials selection for spaceflight hardware surfaces

    OpenAIRE

    Schriml, Lynn

    2017-01-01

    Betsy Pugel, National Aeronautics and Space Administration. Tiny houses: Planetary protection-focused materials selection for spaceflight hardware surfaces. On October 10-12th, 2017, the Alfred P. Sloan Foundation and The National Academies of Sciences, Engineering and Medicine co-hosted MoBE 2017 (Microbiology of the Built Environment Research and Applications Symposium) at the National Academy of Sciences Building to present the current state of the science in understanding the formation and ...

  3. Principales parámetros para el estudio de la colaboración científica en Big Science

    Directory of Open Access Journals (Sweden)

    Ortoll, Eva

    2014-12-01

    Full Text Available In several scientific disciplines research has shifted from experiments of a reduced scale to large and complex collaborations. Many recent scientific achievements, like the human genome sequencing or the discovery of the Higgs boson, have taken place within the “big science” paradigm. The study of scientific collaboration needs to take into account all the diverse factors that have an influence on it. In the case of big science experiments, some of those aspects are particularly important: number of institutions involved, cultural differences, diversity of spaces and infrastructures, or the conceptualization of research problems. By considering these specific factors we present a set of parameters for the analysis of scientific collaboration in big science projects. The utility of these parameters is illustrated through a comparative study of two large big science projects: the ATLAS experiment and the Human Genome Project.

  4. "Big Science" exhibition at Balexert

    CERN Multimedia

    2008-01-01

    CERN is going out to meet those members of the general public who were unable to attend the recent Open Day. The Laboratory will be taking its "Big Science" exhibition from the Globe of Science and Innovation to the Balexert shopping centre from 19 to 31 May 2008. The exhibition, which shows the LHC and its experiments through the eyes of a photographer, features around thirty spectacular photographs measuring 4.5 metres high and 2.5 metres wide. Welcomed and guided around the exhibition by CERN volunteers, shoppers at Balexert will also have the opportunity to discover LHC components on display and watch films. "Fun with Physics" workshops will be held at certain times of the day. Main hall of the Balexert shopping centre, ground floor, from 9.00 a.m. to 7.00 p.m. Monday to Friday and from 10 a.m. to 6 p.m. on the two Saturdays. Call for volunteers All members of the CERN personnel are invited to enrol as volunteers to help welcom...

  5. Big questions, big science: meeting the challenges of global ecology.

    Science.gov (United States)

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets, than can be collected by a single investigator's or a single group's labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than on a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware, and engaged in the advocacy and governance of large ecological projects.

  6. Think big: learning contexts, algorithms and data science

    Directory of Open Access Journals (Sweden)

    Baldassarre Michele

    2016-12-01

    Full Text Available Owing to the growth in available data in recent years, all areas of research, and the management of institutions and organisations, specifically schools and universities, feel the need to give meaning to this abundance of data. This article, after a brief reference to the definition of big data, focuses attention and reflection on their types in order to extend their characterisation. One prerequisite for making the use of Big Data feasible in operational contexts is to give it a theoretical basis to which to refer. The Data, Information, Knowledge and Wisdom (DIKW) model correlates these four aspects, culminating in Data Science, which in many ways could revolutionise the established pattern of scientific investigation. Learning Analytics applications on online learning platforms can be tools for evaluating the quality of teaching. And that is where some problems arise: it becomes necessary to handle the available data with care. Finally, one criterion for deciding whether it makes sense to attempt an analysis based on Big Data is to consider its interpretability and relevance in relation to both institutional and personal processes.

  7. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  8. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  9. The Human Genome Project: big science transforms biology and medicine.

    Science.gov (United States)

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called 'big science' - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project.

  10. Opening the Big Black Box: European study reveals visitors' impressions of science laboratories

    CERN Multimedia

    2004-01-01

    "On 29 - 30 March the findings of 'Inside the Big Black Box' - a Europe-wide science and society project - will be revealed during a two-day seminar hosted by CERN*. The principal aim of Inside the Big Black Box (IN3B) is to determine whether a working scientific laboratory can capture the curiosity of the general public through visits" (1 page)

  11. A big bang in a little room the quest to create new universes

    CERN Document Server

    Merali, Zeeya

    2017-01-01

    What if you could become God, with the ability to build a whole new universe? As startling as it sounds, modern physics suggests that within the next two decades, scientists may be able to perform this seemingly divine feat: to concoct an entirely new baby universe, complete with its own physical laws, star systems, galaxies, and even intelligent life. A Big Bang in a Little Room takes the reader on a journey through the history of cosmology and unravels, particle by particle, theory by theory, and experiment by experiment, the ideas behind this provocative claim made by some of the most respected physicists alive today. Beyond simply explaining the science, A Big Bang in a Little Room also tells the story of the people who have been laboring for more than thirty years to make this seemingly impossible dream a reality. What has driven them to continue on what would seem, at first glance, to be a quixotic quest? This mind-boggling book reveals that we can nurse other worlds in the tiny confines of a lab, raising...

  12. Big Data, Computational Science, Economics, Finance, Marketing, Management, and Psychology: Connections

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2018-01-01

    textabstractThe paper provides a review of the literature that connects Big Data, Computational Science, Economics, Finance, Marketing, Management, and Psychology, and discusses some research that is related to the seven disciplines. Academics could develop theoretical models and subsequent

  13. "small problems, Big Trouble": An Art and Science Collaborative Exhibition Reflecting Seemingly small problems Leading to Big Threats

    Science.gov (United States)

    Waller, J. L.; Brey, J. A.

    2014-12-01

    "small problems, Big Trouble" (spBT) is an exhibition of artist Judith Waller's paintings accompanied by text panels written by Earth scientist Dr. James A. Brey and several science researchers and educators. The text panels' message is as much the focus of the show as the art - true interdisciplinarity! Waller and Brey's history of art and earth science collaborations includes the successful exhibition "Layers: Places in Peril". New in spBT is an extended collaboration with other scientists in order to create awareness of geoscience and other subjects (e.g. soil, parasites, dust, pollutants, invasive species, carbon, ground water contaminants, solar wind) small in scale which pose significant threats. The paintings are the size of a mirror, a symbol suggesting that the problems depicted are those we increasingly need to face, noting our collective reflections of shared current and future reality. Naturalistic rendering and abstract form in the art help reach a broad audience, including those familiar with art and those familiar with science. The goal is that gallery visitors gain greater appreciation and understanding of both - and of the sober content of the show as a whole. "small problems, Big Trouble" premieres in Wisconsin in April 2015. As in previous collaborations, Waller and Brey actively utilize art and science (specifically geoscience) as an educational vehicle for active student learning. Planned are interdisciplinary university and area high school activities linked through spBT. The exhibition in a public gallery offers a means to enhance community awareness of, and action on, scientific issues through art's power to engage people on an emotional level. This AGU presentation includes a description of past Waller and Brey activities: incorporating art and earth science in lab and studio classrooms, producing gallery and museum exhibitions, and delivering workshops and other presentations. They also describe how walking the paths of several past earth science

  14. The Natural Science Underlying Big History

    Directory of Open Access Journals (Sweden)

    Eric J. Chaisson

    2014-01-01

    Full Text Available Nature’s many varied complex systems—including galaxies, stars, planets, life, and society—are islands of order within the increasingly disordered Universe. All organized systems are subject to physical, biological, or cultural evolution, which together comprise the grander interdisciplinary subject of cosmic evolution. A wealth of observational data supports the hypothesis that increasingly complex systems evolve unceasingly, uncaringly, and unpredictably from big bang to humankind. These are global history greatly extended, big history with a scientific basis, and natural history broadly portrayed across ∼14 billion years of time. Human beings and our cultural inventions are not special, unique, or apart from Nature; rather, we are an integral part of a universal evolutionary process connecting all such complex systems throughout space and time. Such evolution writ large has significant potential to unify the natural sciences into a holistic understanding of who we are and whence we came. No new science (beyond frontier, nonequilibrium thermodynamics) is needed to describe cosmic evolution’s major milestones at a deep and empirical level. Quantitative models and experimental tests imply that a remarkable simplicity underlies the emergence and growth of complexity for a wide spectrum of known and diverse systems. Energy is a principal facilitator of the rising complexity of ordered systems within the expanding Universe; energy flows are as central to life and society as they are to stars and galaxies. In particular, energy rate density—contrasting with information content or entropy production—is an objective metric suitable to gauge relative degrees of complexity among a hierarchy of widely assorted systems observed throughout the material Universe. Operationally, those systems capable of utilizing optimum amounts of energy tend to survive, and those that cannot are nonrandomly eliminated.

  15. Complementary social science? Quali-quantitative experiments in a Big Data world

    Directory of Open Access Journals (Sweden)

    Anders Blok

    2014-08-01

    Full Text Available The rise of Big Data in the social realm poses significant questions at the intersection of science, technology, and society, including in terms of how new large-scale social databases are currently changing the methods, epistemologies, and politics of social science. In this commentary, we address such epochal (“large-scale”) questions by way of a (situated) experiment: at the Danish Technical University in Copenhagen, an interdisciplinary group of computer scientists, physicists, economists, sociologists, and anthropologists (including the authors) is setting up a large-scale data infrastructure, meant to continually record the digital traces of social relations among an entire freshman class of students (N > 1000). At the same time, fieldwork is carried out on friendship (and other) relations amongst the same group of students. On this basis, the question we pose is the following: what kind of knowledge is obtained on this social micro-cosmos via the Big (computational, quantitative) and Small (embodied, qualitative) Data, respectively? How do the two relate? Invoking Bohr’s principle of complementarity as analogy, we hypothesize that social relations, as objects of knowledge, depend crucially on the type of measurement device deployed. At the same time, however, we also expect new interferences and polyphonies to arise at the intersection of Big and Small Data, provided that these are, so to speak, mixed with care. These questions, we stress, are important not only for the future of social science methods but also for the type of societal (self-)knowledge that may be expected from new large-scale social databases.

  16. Big Data and Clinicians: A Review on the State of the Science

    Science.gov (United States)

    Wang, Weiqi

    2014-01-01

    Background In the past few decades, medically related data collection saw a huge increase, referred to as big data. These huge datasets bring challenges in storage, processing, and analysis. In clinical medicine, big data is expected to play an important role in identifying causality of patient symptoms, in predicting hazards of disease incidence or reoccurrence, and in improving primary-care quality. Objective The objective of this review was to provide an overview of the features of clinical big data, describe a few commonly employed computational algorithms, statistical methods, and software toolkits for data manipulation and analysis, and discuss the challenges and limitations in this realm. Methods We conducted a literature review to identify studies on big data in medicine, especially clinical medicine. We used different combinations of keywords to search PubMed, Science Direct, Web of Knowledge, and Google Scholar for literature of interest from the past 10 years. Results This paper reviewed studies that analyzed clinical big data and discussed issues related to storage and analysis of this type of data. Conclusions Big data is becoming a common feature of biological and clinical studies. Researchers who use clinical big data face multiple challenges, and the data itself has limitations. It is imperative that methodologies for data analysis keep pace with our ability to collect and store data. PMID:25600256

  17. Lecture 10: The European Bioinformatics Institute - "Big data" for biomedical sciences

    CERN Multimedia

    CERN. Geneva; Dana, Jose

    2013-01-01

    Part 1: Big data for biomedical sciences (Tom Hancocks). Ten years ago saw the completion of the first international 'Big Biology' project, which sequenced the human genome. In the years since, the biological sciences have seen a vast growth in data. In the coming years advances will come from the integration of experimental approaches, and their translation into applied technologies in the hospital, the clinic, and even at home. This talk will examine the development of infrastructure, physical and virtual, that will allow millions of life scientists across Europe better access to biological data. Tom studied Human Genetics at the University of Leeds and McMaster University, before completing an MSc in Analytical Genomics at the University of Birmingham. He has worked for the UK National Health Service in diagnostic genetics and in training healthcare scientists and clinicians in bioinformatics. Tom joined the EBI in 2012 and is responsible for the scientific development and delivery of training for the BioMedBridges pr...

  18. Tiny plastic lung mimics human pulmonary function

    Science.gov (United States)

    Tiny plastic lung mimics human pulmonary function (LANL news release, April 2016).

  19. Big Data and Intellectual Property Rights in the Health and Life Sciences

    DEFF Research Database (Denmark)

    Minssen, Timo

    The vast prospects of Big Data and the shift to more “personalized”, “open” and “transparent” innovation models highlight the importance of effective governance, regulation and stimulation of high-quality data-uses in the health and life sciences. Intellectual Property Rights (IPRs) and related rights come into play when research is translated into safe and efficient “real world” uses. While the need to recalibrate IPRs to fully support Big Data advances is being intensely debated among multiple stakeholders, there seems to be much confusion about the availability of IPRs and their legal effects. In this brief presentation I intend to provide an overview of the IPRs most relevant to data-based life science research. Realizing that the choice of how to address, use and interact with IPRs differs among various areas of application, I also intend to sketch out and discuss...

  20. Legal dimensions of Big Data in the Health and Life Sciences

    DEFF Research Database (Denmark)

    Minssen, Timo

    2016-01-01

    Please find below my welcome speech at last week's mini-symposium on “Legal dimensions of Big Data in the Health and Life Sciences – From Intellectual Property Rights and Global Pandemics to Privacy and Ethics” at the University of Copenhagen (UCPH). The event was organized by our Global Genes –Local...

  1. Big-Data-Driven Stem Cell Science and Tissue Engineering: Vision and Unique Opportunities.

    Science.gov (United States)

    Del Sol, Antonio; Thiesen, Hans J; Imitola, Jaime; Carazo Salas, Rafael E

    2017-02-02

    Achieving the promises of stem cell science to generate precise disease models and designer cell samples for personalized therapeutics will require harnessing pheno-genotypic cell-level data quantitatively and predictively in the lab and clinic. Those requirements could be met by developing a Big-Data-driven stem cell science strategy and community. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. An overview of big data and data science education at South African universities

    Directory of Open Access Journals (Sweden)

    Eduan Kotzé

    2016-02-01

    Full Text Available Man and machine are generating data electronically at an astronomical speed and in such a way that society is experiencing cognitive challenges to analyse this data meaningfully. Big data firms, such as Google and Facebook, identified this problem several years ago and are continuously developing new technologies or improving existing technologies in order to facilitate the cognitive analysis process of these large data sets. The purpose of this article is to contribute to our theoretical understanding of the role that big data might play in creating new training opportunities for South African universities. The article investigates emerging literature on the characteristics and main components of big data, together with the Hadoop application stack as an example of big data technology. Due to the rapid development of big data technology, a paradigm shift of human resources is required to analyse these data sets; therefore, this study examines the state of big data teaching at South African universities. This article also provides an overview of possible big data sources for South African universities, as well as relevant big data skills that data scientists need. The study also investigates existing academic programs in South Africa, where the focus is on teaching advanced database systems. The study found that big data and data science topics are introduced to students on a postgraduate level, but that the scope is very limited. This article contributes by proposing important theoretical topics that could be introduced as part of the existing academic programs. More research is required, however, to expand these programs in order to meet the growing demand for data scientists with big data skills.

  3. Big Data Science: Opportunities and Challenges to Address Minority Health and Health Disparities in the 21st Century

    Science.gov (United States)

    Zhang, Xinzhi; Pérez-Stable, Eliseo J.; Bourne, Philip E.; Peprah, Emmanuel; Duru, O. Kenrik; Breen, Nancy; Berrigan, David; Wood, Fred; Jackson, James S.; Wong, David W.S.; Denny, Joshua

    2017-01-01

    Addressing minority health and health disparities has been a missing piece of the puzzle in Big Data science. This article focuses on three priority opportunities that Big Data science may offer to the reduction of health and health care disparities. One opportunity is to incorporate standardized information on demographic and social determinants in electronic health records in order to target ways to improve quality of care for the most disadvantaged populations over time. A second opportunity is to enhance public health surveillance by linking geographical variables and social determinants of health for geographically defined populations to clinical data and health outcomes. Third and most importantly, Big Data science may lead to a better understanding of the etiology of health disparities and understanding of minority health in order to guide intervention development. However, the promise of Big Data needs to be considered in light of significant challenges that threaten to widen health disparities. Care must be taken to incorporate diverse populations to realize the potential benefits. Specific recommendations include investing in data collection on small sample populations, building a diverse workforce pipeline for data science, actively seeking to reduce digital divides, developing novel ways to assure digital data privacy for small populations, and promoting widespread data sharing to benefit under-resourced minority-serving institutions and minority researchers. With deliberate efforts, Big Data presents a dramatic opportunity for reducing health disparities but without active engagement, it risks further widening them. PMID:28439179

  5. Big Data: An Opportunity for Collaboration with Computer Scientists on Data-Driven Science

    Science.gov (United States)

    Baru, C.

    2014-12-01

    Big data technologies are evolving rapidly, driven by the need to manage ever-increasing amounts of historical data; process relentless streams of human- and machine-generated data; and integrate data of heterogeneous structure from extremely heterogeneous sources of information. Big data is inherently an application-driven problem. Developing the right technologies requires an understanding of the application domain. An intriguing aspect of this phenomenon, though, is that the availability of the data itself enables new applications not previously conceived of! In this talk, we will discuss how the big data phenomenon creates an imperative for collaboration among domain scientists (in this case, geoscientists) and computer scientists. Domain scientists provide the application requirements as well as insights about the data involved, while computer scientists help assess whether problems can be solved with currently available technologies or require adaptation of existing technologies and/or development of new technologies. The synergy can create vibrant collaborations, potentially leading to new science insights as well as the development of new data technologies and systems. The area of interface between the geosciences and computer science, also referred to as geoinformatics, is, we believe, a fertile area for interdisciplinary research.

  6. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    Science.gov (United States)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry, and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics, and scientific queries will be covered in these recommendations. This contribution will outline initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences given are based on a survey of use cases, and also provide insights into a few use cases in detail.
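    The abstract's example technique, support vector machines for classification, can be illustrated with a minimal sketch. The bias-free, Pegasos-style subgradient trainer below is purely illustrative: the function names, parameters, and toy data are invented for this sketch and are not taken from the RDA recommendations or any earth science use case.

    ```python
    import random

    def train_linear_svm(points, labels, lam=0.1, epochs=100):
        """Train a bias-free linear SVM by stochastic subgradient descent
        on the regularized hinge loss (Pegasos-style updates)."""
        random.seed(0)
        w = [0.0] * len(points[0])
        t = 0
        for _ in range(epochs):
            order = list(range(len(points)))
            random.shuffle(order)
            for i in order:
                t += 1
                eta = 1.0 / (lam * t)  # decaying learning rate
                x, y = points[i], labels[i]
                margin = y * sum(wj * xj for wj, xj in zip(w, x))
                # Shrink weights (L2 regularization) ...
                w = [(1.0 - eta * lam) * wj for wj in w]
                # ... then push on margin violations (hinge-loss subgradient).
                if margin < 1.0:
                    w = [wj + eta * y * xj for wj, xj in zip(w, x)]
        return w

    def predict(w, x):
        return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

    # Toy 2-D data: class +1 clustered near (2, 2), class -1 near (-2, -2).
    points = [(2.0, 2.0), (3.0, 2.0), (2.0, 3.0),
              (-2.0, -2.0), (-3.0, -2.0), (-2.0, -3.0)]
    labels = [1, 1, 1, -1, -1, -1]
    w = train_linear_svm(points, labels)
    print([predict(w, p) for p in points])  # -> [1, 1, 1, -1, -1, -1]
    ```

    In practice a library implementation (e.g. scikit-learn's `LinearSVC`) would be used instead; the point here is only that the hinge-loss update is a few lines of arithmetic, which is what makes SVMs attractive for the large-scale analytics surveyed above.
    
    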

  7. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology.

    Science.gov (United States)

    Salazar, Brittany M; Balczewski, Emily A; Ung, Choong Yong; Zhu, Shizhen

    2016-12-27

    Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Moreover, with few detectable driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring "big data" applications in pediatric oncology. Computational strategies derived from big data science, such as network- and machine learning-based modeling and drug repositioning, hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and of identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and to other types of cancer that closely mimic its biological characteristics. We discuss contexts in which "big data" and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and related diseases.

  8. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on topics such as the existing and planned evolution of NASA's science data cyber-infrastructure, which supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  9. What science for what kind of society? Reflecting the development of big science

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Lecture in English, with translation available in French. Without any doubt, CERN can be described as being among the most ambitious scientific enterprises ever undertaken. For 60 years, the Member States have not only invested considerable financial means in this institution, but have also supported the creation of a highly visionary research programme. And this has led to a change in the way science is done, as captured by the idea of "big science". Yet this naturally also raises a number of quite fundamental questions: How did the meaning of "doing science" change? What justifies societal engagement with and support for such a cost-intensive long-term scientific undertaking? And finally, in what ways does (and did) this research enterprise contribute to the development of contemporary societies? By focusing on some key examples, the talk will thus explore how the ways of doing research and scientific and societal relations have undergone change over the ...

  10. Nanocellulose, a tiny fiber with huge applications.

    Science.gov (United States)

    Abitbol, Tiffany; Rivkin, Amit; Cao, Yifeng; Nevo, Yuval; Abraham, Eldho; Ben-Shalom, Tal; Lapidot, Shaul; Shoseyov, Oded

    2016-06-01

    Nanocellulose is of increasing interest for a range of applications relevant to the fields of materials science and biomedical engineering due to its renewable nature, anisotropic shape, excellent mechanical properties, good biocompatibility, tailorable surface chemistry, and interesting optical properties. We discuss the main areas of nanocellulose research: photonics, films and foams, surface modifications, nanocomposites, and medical devices. These tiny nanocellulose fibers have huge potential in many applications, from flexible optoelectronics to scaffolds for tissue regeneration. We hope to convey to readers some of the excitement that currently surrounds nanocellulose research, which arises from the green nature of the particles, their fascinating physical and chemical properties, and the diversity of applications that can be impacted by this material. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. "Big Science: the LHC in Pictures" in the Globe

    CERN Multimedia

    2008-01-01

    An exhibition of spectacular photographs of the LHC and its experiments is about to open in the Globe. The LHC and its four experiments are not only huge in size but also uniquely beautiful, as the exhibition "Big Science: the LHC in Pictures" in the Globe of Science and Innovation will show. The exhibition features around thirty spectacular photographs measuring 4.5 metres high and 2.5 metres wide. These giant pictures reflecting the immense scale of the LHC and the mysteries of the Universe it is designed to uncover fill the Globe with shape and colour. The exhibition, which will open on 4 March, is divided into six different themes: CERN, the LHC and the four experiments ATLAS, LHCb, CMS and ALICE. Facts about all these subjects will be available at information points and in an explanatory booklet accompanying the exhibition (which visitors will be able to buy if they wish to take it home with them). Globe of Science and Innovatio...

  12. Big Data in Science and Healthcare: A Review of Recent Literature and Perspectives. Contribution of the IMIA Social Media Working Group.

    Science.gov (United States)

    Hansen, M M; Miron-Shatz, T; Lau, A Y S; Paton, C

    2014-08-15

    As technology continues to evolve and expand across industries such as healthcare, science, education, and gaming, a sophisticated concept known as Big Data is surfacing. The concept of analytics aims to understand data. We set out to portray and discuss perspectives on the evolving use of Big Data in science and healthcare and to examine some of the opportunities and challenges. A literature review was conducted to highlight the implications associated with the use of Big Data in scientific research and healthcare innovations, both on a large and a small scale. Scientists and healthcare providers may learn from one another when it comes to understanding the value of Big Data and analytics. Small data, derived by patients and consumers, also requires analytics to become actionable. Connectivism provides a framework for the use of Big Data and analytics in the areas of science and healthcare. This theory helps individuals to recognize and synthesize how human connections are driving the increase in data. Despite the volume and velocity of Big Data, it is truly about technology connecting humans and assisting them to construct knowledge in new ways. Concluding Thoughts: The concept of Big Data and its associated analytics are to be taken seriously when approaching the use of vast volumes of both structured and unstructured data in science and healthcare. Future exploration of issues surrounding data privacy, confidentiality, and education is needed. A greater focus on data from social media, the quantified-self movement, and the application of analytics to "small data" would also be useful.

  13. Structural analysis of an off-grid tiny house

    Science.gov (United States)

    Calluari, Karina Arias; Alonso-Marroquín, Fernando

    2017-06-01

    Off-grid technologies and the tiny house movement have experienced unprecedented growth in recent years. Putting the two together, we aim to achieve an economical and environmentally friendly answer to the rising cost of residential properties. This solution is the construction of off-grid tiny houses. This article presents a design for a small modular off-grid house made of pine timber. A numerical analysis of the proposed tiny house was performed to ensure its structural stability. The results were compared with the serviceability limit state criteria recommended in the Australian standards, making this design reliable for construction.

  14. TinyOS-based quality of service management in wireless sensor networks

    Science.gov (United States)

    Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.

    2009-01-01

    Previously, the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive, and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm, called Tiny-DWFQ, to improve bandwidth for high-priority data in sensor networks. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows dynamic QoS for prioritized communications by continually adjusting the treatment of packets according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (the traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.
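    The record above builds on weighted fair queuing (WFQ), in which each flow's share of bandwidth is proportional to its weight. As a rough sketch of that core idea (not the paper's Tiny-DWFQ implementation, whose congestion-adaptive weighting is not detailed here), packets can be dequeued in order of a virtual finish time; all class and variable names below are illustrative:

    ```python
    import heapq

    class WeightedFairQueue:
        """Minimal WFQ sketch: packets are served in order of virtual finish
        time, so flows with larger weights receive proportionally more service."""

        def __init__(self):
            self.heap = []          # entries: (finish_time, seq, packet)
            self.last_finish = {}   # per-flow virtual finish time
            self.virtual_time = 0.0
            self.seq = 0            # tie-breaker keeps ordering stable

        def enqueue(self, flow, weight, packet, size):
            # A packet may not start before the flow's previous packet finished.
            start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
            finish = start + size / weight   # higher weight => earlier finish
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, self.seq, packet))
            self.seq += 1

        def dequeue(self):
            finish, _, packet = heapq.heappop(self.heap)
            self.virtual_time = finish       # advance virtual clock on service
            return packet
    ```

    With equal packet sizes, a flow of weight 2 is served roughly twice as often as a flow of weight 1; a dynamic variant in the spirit of Tiny-DWFQ would additionally recompute the weights from packet priority and observed congestion.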

  15. Data science and big data analytics discovering, analyzing, visualizing and presenting data

    CERN Document Server

    2014-01-01

    Data Science and Big Data Analytics is about harnessing the power of data for new insights. The book covers the breadth of activities, methods, and tools that Data Scientists use. The content focuses on concepts, principles and practical applications that are applicable to any industry and technology environment, and the learning is supported and explained with examples that you can replicate using open-source software. This book will help you: become a contributor on a data science team; deploy a structured lifecycle approach to data analytics problems; apply appropriate analytic techniques and ...

  16. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves; in short, when it comes to the use of data in healthcare, "size isn't everything."

  17. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI's HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  18. Ontologies, methodologies, and new uses of Big Data in the social and cultural sciences

    Directory of Open Access Journals (Sweden)

    Robin Wagner-Pacifici

    2015-12-01

    Full Text Available In our Introduction to the Conceiving the Social with Big Data Special Issue of Big Data & Society, we survey the 18 contributions from scholars in the humanities and social sciences, and highlight several questions and themes that emerge within and across them. These emergent issues reflect the challenges, problems, and promises of working with Big Data to access and assess the social. They include puzzles about the locus and nature of human life, the nature of interpretation, the categorical constructions of individual entities and agents, the nature and relevance of contexts and temporalities, and the determinations of causality. As such, the Introduction reflects on the contributions along a series of binaries that capture the dualities and dynamisms of these themes: Life/Data; Mind/Machine; and Induction/Deduction.

  19. Big data, computational science, economics, finance, marketing, management, and psychology: connections

    OpenAIRE

    Chang, Chia-Lin; McAleer, Michael; Wong, Wing-Keung

    2018-01-01

    The paper provides a review of the literature that connects Big Data, Computational Science, Economics, Finance, Marketing, Management, and Psychology, and discusses some research related to the seven disciplines. Academics could develop theoretical models and subsequent econometric and statistical models to estimate the parameters in the associated models, as well as conduct simulation to examine whether the estimators in their theories on estimation and hypothesis testin...

  20. Applying science and mathematics to big data for smarter buildings.

    Science.gov (United States)

    Lee, Young M; An, Lianjun; Liu, Fei; Horesh, Raya; Chae, Young Tae; Zhang, Rui

    2013-08-01

    Many buildings are now collecting a large amount of data on operations, energy consumption, and activities through systems such as a building management system (BMS), sensors, and meters (e.g., submeters and smart meters). However, the majority of data are not utilized and are thrown away. Science and mathematics can play an important role in utilizing these big data and accurately assessing how energy is consumed in buildings and what can be done to save energy, make buildings energy efficient, and reduce greenhouse gas (GHG) emissions. This paper discusses an analytical tool that has been developed to assist building owners, facility managers, operators, and tenants of buildings in assessing, benchmarking, diagnosing, tracking, forecasting, and simulating energy consumption in building portfolios. © 2013 New York Academy of Sciences.
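    The record above describes assessing, benchmarking, and forecasting building energy consumption from meter data. A minimal baseline in that spirit fits a weather-dependent consumption model and flags anomalous days; the numbers, the quadratic model choice, and the 15% anomaly threshold below are all hypothetical illustrations, not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical daily data: outdoor temperature (deg C) and metered energy use (kWh).
    temps = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 26.0, 30.0])
    energy = np.array([310.0, 280.0, 240.0, 205.0, 190.0, 230.0, 275.0])

    # A quadratic fit captures the typical U-shape: heating load at low
    # temperatures, cooling load at high temperatures.
    coeffs = np.polyfit(temps, energy, deg=2)
    model = np.poly1d(coeffs)

    # Forecast consumption for a mild 17 deg C day, and flag days whose actual
    # use exceeds the model's expectation by more than 15% as candidates for
    # an operational fault (an arbitrary illustrative threshold).
    forecast = model(17.0)
    residual_ratio = energy / model(temps)
    anomalous_days = temps[residual_ratio > 1.15]
    ```

    Real building analytics would add occupancy, submeter, and BMS signals as regressors, but even this sketch shows how a fitted baseline turns raw meter readings into a benchmark against which savings and faults can be measured.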

  1. Enabling a new Paradigm to Address Big Data and Open Science Challenges

    Science.gov (United States)

    Ramamurthy, Mohan; Fisher, Ward

    2017-04-01

    Data are not only the lifeblood of the geosciences; they have become the currency of the modern world in science and society. Rapid advances in computing, communications, and observational technologies, along with concomitant advances in high-resolution modeling and ensemble and coupled-systems predictions of the Earth system, are revolutionizing nearly every aspect of our field. Modern data volumes from high-resolution ensemble prediction/projection/simulation systems and next-generation remote-sensing systems like hyper-spectral satellite sensors and phased-array radars are staggering. For example, the CMIP efforts alone will generate many petabytes of climate projection data for use in assessments of climate change, and NOAA's National Climatic Data Center projects that it will archive over 350 petabytes by 2030. For researchers and educators, this deluge and the increasing complexity of data bring challenges along with opportunities for discovery and scientific breakthroughs. The potential for big data to transform the geosciences is enormous, but realizing the next frontier depends on effectively managing, analyzing, and exploiting these heterogeneous data sources, extracting knowledge and useful information in ways that were previously impossible, to enable discoveries and gain new insights. At the same time, there is a growing focus on "Reproducibility or Replicability in Science," which has implications for Open Science. The advent of cloud computing has opened new avenues for addressing both big data and Open Science challenges and for accelerating scientific discoveries. However, successfully leveraging the enormous potential of cloud technologies will require data providers and the scientific communities to develop new paradigms that enable next-generation workflows and transform the conduct of science. Making data readily available is a necessary but not a sufficient condition. Data providers

  2. Crowd-funded micro-grants for genomics and "big data": an actionable idea connecting small (artisan) science, infrastructure science, and citizen philanthropy.

    Science.gov (United States)

    Özdemir, Vural; Badr, Kamal F; Dove, Edward S; Endrenyi, Laszlo; Geraci, Christy Jo; Hotez, Peter J; Milius, Djims; Neves-Pereira, Maria; Pang, Tikki; Rotimi, Charles N; Sabra, Ramzi; Sarkissian, Christineh N; Srivastava, Sanjeeva; Tims, Hesther; Zgheib, Nathalie K; Kickbusch, Ilona

    2013-04-01

    Biomedical science in the 21st century is embedded in, and draws from, a digital commons and "Big Data" created by high-throughput Omics technologies such as genomics. Classic Edisonian metaphors of science and scientists (i.e., "the lone genius" or other narrow definitions of expertise) are ill equipped to harness the vast promises of the 21st century digital commons. Moreover, in medicine and the life sciences, experts often under-appreciate the important contributions made by citizen scholars and lead users of innovations to design innovative products and co-create new knowledge. We believe there are a large number of users waiting to be mobilized to engage with Big Data as citizen scientists, if only some funding were available. Yet many of these scholars may not meet the meta-criteria used to judge expertise, such as a track record in obtaining large research grants or a traditional academic curriculum vitae. This innovation research article describes a novel idea and action framework: micro-grants, each worth $1000, for genomics and Big Data. Though a relatively small amount at first glance, this far exceeds the annual income of the "bottom one billion" - the 1.4 billion people living below the extreme poverty level defined by the World Bank ($1.25/day). We describe two types of micro-grants. Type 1 micro-grants can be awarded through established funding agencies and philanthropies that create micro-granting programs to fund a broad and highly diverse array of small artisan labs and citizen scholars to connect genomics and Big Data with new models of discovery such as open user innovation. Type 2 micro-grants can be funded by existing or new science observatories and citizen think tanks through crowd-funding mechanisms described herein.
    Type 2 micro-grants would also facilitate global health diplomacy by co-creating crowd-funded micro-granting programs across nation-states in regions facing political and financial instability, while sharing similar disease

  3. Big Data Science Education: A Case Study of a Project-Focused Introductory Course

    Science.gov (United States)

    Saltz, Jeffrey; Heckman, Robert

    2015-01-01

    This paper reports on a case study of a project-focused introduction to big data science course. The pedagogy of the course leveraged boundary theory, where students were positioned to be at the boundary between a client's desire to understand their data and the academic class. The results of the case study demonstrate that using live clients…

  4. Stress transmission through Ti-Ni alloy, titanium and stainless steel in impact compression test.

    Science.gov (United States)

    Yoneyama, T; Doi, H; Kobayashi, E; Hamanaka, H; Tanabe, Y; Bonfield, W

    2000-06-01

    Impact stress transmission of Ti-Ni alloy was evaluated for biomedical stress shielding. Transformation temperatures of the alloy were investigated by means of DSC. An impact compression test was carried out using the split-Hopkinson pressure-bar technique with cylindrical specimens of Ti-Ni alloy, titanium and stainless steel. As a result, the transmitted pulse through Ti-Ni alloy was considerably depressed compared with those through titanium and stainless steel. The initial stress reduction was large through both Ti-Ni alloy and titanium, but the stress reduction through Ti-Ni alloy was more continuous than that through titanium. The maximum value of the stress difference between incident and transmitted pulses through Ti-Ni alloy or titanium was higher than that through stainless steel, while the reduction in the maximum stress through Ti-Ni alloy was statistically larger than that through titanium or stainless steel. Ti-Ni alloy transmitted less impact stress than titanium or stainless steel, which suggests that the loading stress on adjacent tissues could be decreased by using Ti-Ni alloy as a component material in an implant system. Copyright 2000 Kluwer Academic Publishers

  5. Thinking about information work of nuclear science and technology in the age of big data: speaking of the information analysis and research

    International Nuclear Information System (INIS)

    Chen Tieyong

    2014-01-01

    Human society is entering a new era in which structured and unstructured data are measured in petabytes (1 PB = 1024 TB). In the network era, with the development of mobile communications and electronic commerce and the emergence and growth of social networks, an era of large-scale production, sharing and application of data is opening. How to explore the value of data, master big data, and extract useful information is an important task for science and technology information workers. This paper analyzes the development of nuclear science and technology information work in terms of big data acquisition, analysis, and application. Information analysis and research will increasingly be based on all of the data rather than on random sampling; letting the data 'speak' becomes possible, and many results of information analysis and research can be expressed quantitatively. We should attach great importance to data collection and to careful analysis of big data. The process of nuclear science and technology information analysis and research involves professional division of labor as well as cooperation. In addition, we should strengthen the construction of nuclear science and technology information resources to improve information supply; strengthen the analysis and research of nuclear science and technology information to improve information services; strengthen information management of nuclear science and technology, paying attention to security problems and intellectual property rights in information sharing; and strengthen personnel training to continuously improve the efficiency and performance of nuclear science and technology information work.
    In the age of big data, nuclear science and technology information workers should take information analysis and study as the core, grasping information collection with one hand and information service with the other, forging ahead with innovation to continuously improve the

  6. Toward a Big Data Science: A challenge of "Science Cloud"

    Science.gov (United States)

    Murata, Ken T.; Watanabe, Hidenobu

    2013-04-01

    During the past 50 years, along with the appearance and development of high-performance computers (and supercomputers), numerical simulation has come to be considered a third methodology for science, following the theoretical (first) and experimental and/or observational (second) approaches. The variety of data yielded by the second approach has been growing, owing to progress in the technologies of experiment and observation. The amount of data generated by the third methodology has also been growing, owing to the tremendous development of supercomputers and of programming techniques. Most of the data files created by experiments/observations and numerical simulations are saved in digital formats and analyzed on computers. Researchers (domain experts) are interested not only in how to conduct experiments and/or observations or perform numerical simulations, but in what information (new findings) to extract from the data. However, data do not usually tell us anything about the science directly; the science is implicitly hidden in the data. Researchers have to extract information from the data files to find new science. This is the basic concept of data-intensive (data-oriented) science for Big Data. As the scales of experiments and/or observations and numerical simulations grow, new techniques and facilities are required to extract information from the large numbers of data files. This technique is called informatics, a fourth methodology for new sciences. Any methodology must work on its facilities: in space science, for example, the space environment is observed via spacecraft and numerical simulations are performed on supercomputers. The facility for informatics, which deals with large-scale data, is a computational cloud system for science. This paper proposes a cloud system for informatics, which has been developed at NICT (National Institute of Information and Communications Technology), Japan.
    The NICT science

  7. Toward a manifesto for the 'public understanding of big data'.

    Science.gov (United States)

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  8. Big Data, Biostatistics and Complexity Reduction

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2018-01-01

    Roč. 14, č. 2 (2018), s. 24-32 ISSN 1801-5603 R&D Projects: GA MZd(CZ) NV15-29835A Institutional support: RVO:67985807 Keywords : Biostatistics * Big data * Multivariate statistics * Dimensionality * Variable selection Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) https://www.ejbi.org/scholarly-articles/big-data-biostatistics-and-complexity-reduction.pdf

  9. Tiny Devices Project Sharp, Colorful Images

    Science.gov (United States)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  10. Aditi Kale

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 20, Issue 10, October 2015, pp 919-930, General Article: The Diatoms: Big Significance of Tiny Glass Houses · Aditi Kale, Balasubramanian Karthick

  11. A Brief Review on Leading Big Data Models

    Directory of Open Access Journals (Sweden)

    Sugam Sharma

    2014-11-01

    Full Text Available Today, science is passing through an era of transformation, where the inundation of data, dubbed the data deluge, is influencing the decision-making process. Science is driven by the data and is being termed data science. In this internet age, the volume of data has grown to petabytes, and this large, complex, structured or unstructured, heterogeneous data in the form of "Big Data" has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new-age data is processed. Confidence in the data engineering of existing data processing systems is gradually fading, whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. We also take the potential of Big Data into consideration and try to reshape the original operation-oriented definition of "Big Science" (Furner, 2003) into a new data-driven definition, rephrasing it as "The science that deals with Big Data is Big Science."

  12. Crowd-Funded Micro-Grants for Genomics and “Big Data”: An Actionable Idea Connecting Small (Artisan) Science, Infrastructure Science, and Citizen Philanthropy

    Science.gov (United States)

    Badr, Kamal F.; Dove, Edward S.; Endrenyi, Laszlo; Geraci, Christy Jo; Hotez, Peter J.; Milius, Djims; Neves-Pereira, Maria; Pang, Tikki; Rotimi, Charles N.; Sabra, Ramzi; Sarkissian, Christineh N.; Srivastava, Sanjeeva; Tims, Hesther; Zgheib, Nathalie K.; Kickbusch, Ilona

    2013-01-01

    Abstract Biomedical science in the 21st century is embedded in, and draws from, a digital commons and “Big Data” created by high-throughput Omics technologies such as genomics. Classic Edisonian metaphors of science and scientists (i.e., “the lone genius” or other narrow definitions of expertise) are ill equipped to harness the vast promises of the 21st century digital commons. Moreover, in medicine and life sciences, experts often under-appreciate the important contributions made by citizen scholars and lead users of innovations to design innovative products and co-create new knowledge. We believe there are a large number of users waiting to be mobilized so as to engage with Big Data as citizen scientists—only if some funding were available. Yet many of these scholars may not meet the meta-criteria used to judge expertise, such as a track record in obtaining large research grants or a traditional academic curriculum vitae. This innovation research article describes a novel idea and action framework: micro-grants, each worth $1000, for genomics and Big Data. Though a relatively small amount at first glance, this far exceeds the annual income of the “bottom one billion”—the 1.4 billion people living below the extreme poverty level defined by the World Bank ($1.25/day). We describe two types of micro-grants. Type 1 micro-grants can be awarded through established funding agencies and philanthropies that create micro-granting programs to fund a broad and highly diverse array of small artisan labs and citizen scholars to connect genomics and Big Data with new models of discovery such as open user innovation. Type 2 micro-grants can be funded by existing or new science observatories and citizen think tanks through crowd-funding mechanisms described herein. Type 2 micro-grants would also facilitate global health diplomacy by co-creating crowd-funded micro-granting programs across nation-states in regions facing political and financial instability, while

  13. The martensitic transformation in Ti-rich TiNi shape memory alloys

    International Nuclear Information System (INIS)

    Lin, H.C.; Wu, S.K.; Lin, J.C.

    1994-01-01

    The martensitic (Ms) transformation temperatures and their ΔH values of Ti51Ni49 and Ti50.5Ni49.5 alloys are higher than those of equiatomic or Ni-rich TiNi alloys. The Ti-rich TiNi alloys exhibit good shape recovery in spite of a great deal of second-phase Ti2Ni or Ti4Ni2O existing around B2 grain boundaries. The nearly identical transformation temperatures indicate that the absorbed oxygen in Ti-rich TiNi alloys may react with Ti2Ni particles, instead of the TiNi matrix, to form Ti4Ni2O. Martensite stabilization can be induced by cold rolling at room temperature. Thermal cycling can depress the transformation temperatures significantly, especially in the initial 20 cycles. The R-phase transformation can be promoted by both cold rolling and thermal cycling in Ti-rich TiNi alloys due to introduced dislocations depressing the Ms temperature. The strengthening effects of cold rolling and thermal cycling on the Ms temperature of Ti-rich TiNi alloys are found to follow the expression Ms = To − KΔσy. The K values are affected by different strengthening processes and related to the as-annealed transformation temperatures. The higher the as-annealed Ms (or As), the larger the K value. (orig.)
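
The reported relation Ms = To − KΔσy can be evaluated directly; the sketch below uses made-up numbers purely to show how a strengthening increment Δσy depresses the martensite-start temperature, and is not fitted to the alloys in the abstract.

```python
# Evaluate Ms = To - K * delta_sigma_y: the martensite-start temperature
# falls linearly as the yield strength rises (e.g. from cold rolling or
# thermal cycling). All numeric values here are hypothetical.
def martensite_start(T0_kelvin, K, delta_sigma_y_mpa):
    """Linear depression of Ms with strengthening delta_sigma_y."""
    return T0_kelvin - K * delta_sigma_y_mpa

# With no strengthening, Ms stays at To; a 100 MPa increase lowers it by K*100.
print(martensite_start(340.0, 0.15, 0.0))    # -> 340.0
print(martensite_start(340.0, 0.15, 100.0))  # -> 325.0
```

Per the abstract, K itself depends on the strengthening process and on the as-annealed transformation temperatures, so a single K is only valid for one alloy and treatment.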

  14. Nursing Management Minimum Data Set: Cost-Effective Tool To Demonstrate the Value of Nurse Staffing in the Big Data Science Era.

    Science.gov (United States)

    Pruinelli, Lisiane; Delaney, Connie W; Garciannie, Amy; Caspers, Barbara; Westra, Bonnie L

    2016-01-01

    There is a growing body of evidence of the relationship of nurse staffing to patient, nurse, and financial outcomes. With the advent of big data science and developing big data analytics in nursing, data science with the reuse of big data is emerging as a timely and cost-effective approach to demonstrate nursing value. The Nursing Management Minimum Data Set (NMMDS) provides standard administrative data elements, definitions, and codes to measure the context where care is delivered and, consequently, the value of nursing. The integration of the NMMDS elements in the current health system provides evidence for nursing leaders to measure and manage decisions, leading to better patient, staffing, and financial outcomes. It also enables the reuse of data for clinical scholarship and research.

  15. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...

  16. Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science

    Science.gov (United States)

    Williamson, Ben

    2017-01-01

    "Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…

  17. Solid-state reaction in Ti/Ni multilayered films studied by using magneto-optical spectroscopy

    CERN Document Server

    Lee, Y P; Kim, K W; Kim, C G; Kudryavtsev, Y V; Nemoshkalenko, V V; Szymanski, B

    2000-01-01

    A comparative study of the solid-state reaction (SSR) in a series of Ti/Ni multilayered films (MLFs) with bilayer periods of 0.65-22.2 nm and a constant Ti to Ni sublayer thickness ratio was performed by using experimental and computer-simulated magneto-optical (MO) spectroscopy based on different models of MLFs, as well as x-ray diffraction (XRD). The spectral and sublayer-thickness dependences of the MO properties of the Ti/Ni MLFs were explained on the basis of the electromagnetic theory. The existence of a threshold nominal Ni-sublayer thickness of about 3 nm for the as-deposited Ti/Ni MLF to observe the equatorial Kerr effect was explained by a solid-state reaction which formed nonmagnetic alloyed regions between pure components during the MLF deposition. The SSR in the Ti/Ni MLFs, which was caused by the low-temperature annealing, led to the formation of an amorphous Ti-Ni alloy and took place mainly in the Ti/Ni MLFs with 'thick' sublayers. For the case of Ti/Ni MLFs, the MO approach turned out to...

  18. Data science and big data an environment of computational intelligence

    CERN Document Server

    Chen, Shyi-Ming

    2017-01-01

    This book presents a comprehensive and up-to-date treatise of a range of methodological and algorithmic issues. It also discusses implementations and case studies, identifies the best design practices, and assesses data analytics business models and practices in industry, health care, administration and business. Data science and big data go hand in hand and constitute a rapidly growing area of research and have attracted the attention of industry and business alike. The area itself has opened up promising new directions of fundamental and applied research and has led to interesting applications, especially those addressing the immediate need to deal with large repositories of data and building tangible, user-centric models of relationships in data. Data is the lifeblood of today’s knowledge-driven economy. Numerous data science models are oriented towards end users and along with the regular requirements for accuracy (which are present in any modeling), come the requirements for ability to process huge and...

  19. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.
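
The paper's core warning, that a data-driven fit can look excellent inside its training range yet fail badly outside it, can be demonstrated in a few lines of pure-stdlib Python. The example below is an illustrative toy (a straight-line fit to a quadratic truth), not an implementation from the paper.

```python
# Fit a straight line to data generated by a quadratic law, observed only
# on x in [0, 2]; then compare in-range and extrapolated errors.
def fit_line(xs, ys):
    """Ordinary least-squares line y = a + b*x (closed-form formulas)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

xs = [i / 10 for i in range(21)]   # training range: 0.0 .. 2.0
ys = [x * x for x in xs]           # underlying system is quadratic
a, b = fit_line(xs, ys)

in_range_err = abs((a + b * 1.0) - 1.0 ** 2)        # near the training data
extrapolated_err = abs((a + b * 10.0) - 10.0 ** 2)  # far outside it
print(in_range_err < 1.0 < extrapolated_err)  # -> True
```

The fitted curve tracks the data well inside [0, 2] but misses the true value at x = 10 by roughly 80, precisely because the model carries no structural knowledge of the underlying quadratic system.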

  20. Spark plasma sintering of TiNi nano-powders for biological application

    International Nuclear Information System (INIS)

    Fu, Y Q; Gu, Y W; Shearwood, C; Luo, J K; Flewitt, A J; Milne, W I

    2006-01-01

    Nano-sized TiNi powder with an average size of 50 nm was consolidated using spark plasma sintering (SPS) at 800 °C for 5 min. A layer of anatase TiO2 coating was formed on the sintered TiNi by chemical reaction with a hydrogen peroxide (H2O2) solution at 60 °C followed by heat treatment at 400 °C to enhance the bioactivity of the metal surface. Cell culture using osteoblast cells and a biomimetic test in simulated body fluid proved the biocompatibility of the chemically treated SPS TiNi

  1. Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile

    Science.gov (United States)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco

    2014-05-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
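
The OGC WCPS queries that EarthServer uses as its client/server interface are submitted as ProcessCoverages requests. The sketch below shows one way a client might encode such a request over HTTP GET; the endpoint URL and coverage name are invented for illustration, and the exact parameterisation should be checked against the OGC WCPS standard.

```python
# Hypothetical client-side construction of a WCPS ProcessCoverages request.
from urllib.parse import urlencode

def build_wcps_request(endpoint, query):
    """Encode a WCPS query as an HTTP GET URL (illustrative sketch)."""
    return endpoint + "?" + urlencode({
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    })

# Average a (hypothetical) temperature coverage over its whole extent.
query = "for c in (MeanTemperature) return avg(c)"
url = build_wcps_request("https://example.org/rasdaman/ows", query)
print("ProcessCoverages" in url)  # -> True
```

The database-query-language approach described in the abstract means the heavy processing (here, avg over the whole coverage) runs server-side next to the data, so only the small result crosses the network to the mobile client.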

  2. Panel session: Part 1, In flux -- Science Policy and the social structure of Big Laboratories, 1964--1979

    Energy Technology Data Exchange (ETDEWEB)

    Westfall, C. [Michigan State Univ., East Lansing, MI (United States)]|[CEBAF, Newport News, VA (United States)]|[Fermilab History Collaboration, Batavia, IL (United States)

    1993-09-01

    This report discusses science policy and the social structure of big laboratories, in flux during the period 1964-1979, and some sociological consequences of high-energy physicists' development of the standard model during the same period.

  3. Big Data, Computational Science, Economics, Finance, Marketing, Management, and Psychology: Connections

    Directory of Open Access Journals (Sweden)

    Chia-Lin Chang

    2018-03-01

    Full Text Available The paper provides a review of the literature that connects Big Data, Computational Science, Economics, Finance, Marketing, Management, and Psychology, and discusses research issues that are related to the various disciplines. Academics could develop theoretical models and subsequent econometric and statistical models to estimate the parameters in the associated models, as well as conduct simulation to examine whether the estimators in their theories on estimation and hypothesis testing have good size and high power. Thereafter, academics and practitioners could apply theory to analyse some interesting issues in the seven disciplines and cognate areas.

  4. The Big Challenge in Big Earth Science Data: Maturing to Transdisciplinary Data Platforms that are Relevant to Government, Research and Industry

    Science.gov (United States)

    Wyborn, Lesley; Evans, Ben

    2016-04-01

    Collecting data for the Earth Sciences has a particularly long history going back centuries. Initially scientific data came only from simple human observations recorded by pen on paper. Scientific instruments soon supplemented data capture, and as these instruments became more capable (e.g, automation, more information captured, generation of digitally-born outputs), Earth Scientists entered the 'Big Data' era where progressively data became too big to store and process locally in the old style vaults. To date, most funding initiatives for collection and storage of large volume data sets in the Earth Sciences have been specialised within a single discipline (e.g., climate, geophysics, and Earth Observation) or specific to an individual institution. To undertake interdisciplinary research, it is hard for users to integrate data from these individual repositories mainly due to limitations on physical access to/movement of the data, and/or data being organised without enough information to make sense of it without discipline specialised knowledge. Smaller repositories have also gradually been seen as inefficient in terms of the cost to manage and access (including scarce skills) and effective implementation of new technology and techniques. Within the last decade, the trend is towards fewer and larger data repositories that increasingly are collocated with HPC/cloud resources. There has also been a growing recognition that digital data can be a valuable resource that can be reused and repurposed - publicly funded data from either the academic of government sector is seen as a shared resource, and that efficiencies can be gained by co-location. These new, highly capable, 'transdisciplinary' data repositories are emerging as a fundamental 'infrastructure' both for research and other innovation. 
The sharing of academic and government data resources on the same infrastructures is enabling new research programmes that will enable integration beyond the traditional physical

  5. Alloying process of sputter-deposited Ti/Ni multilayer thin films

    International Nuclear Information System (INIS)

    Cho, H.; Kim, H.Y.; Miyazaki, S.

    2006-01-01

    Alloying process of a Ti/Ni multilayer thin film was investigated in detail by differential scanning calorimetry (DSC), X-ray diffractometry (XRD) and transmission electron microscopy (TEM). The Ti/Ni multilayer thin film was prepared by depositing Ti and Ni layers alternately on a SiO2/Si substrate. The number of each metal layer was 100, and the total thickness was 3 μm. The alloy composition was determined as Ti-51 at.%Ni by electron probe micro-analysis (EPMA). The DSC curve exhibited three exothermic peaks at 621, 680 and 701 K during heating of the as-sputtered multilayer thin film. In order to investigate the alloying process, XRD and TEM observations were carried out for specimens heated up to various temperatures at the same heating rate as in the DSC measurement. The XRD profile of the as-sputtered film revealed only diffraction peaks of Ti and Ni, but reaction layers 3 nm in thickness were observed at the interfaces of the Ti and Ni layers in cross-sectional TEM images. The reaction layer was confirmed to be an amorphous phase by nano-beam diffraction analysis. The XRD profiles showed that the intensity of the Ti diffraction peak decreased in the specimen heat-treated above 600 K. The peak from Ni became broad and shifted to a lower diffraction angle. The amorphous layer thickened up to 6 nm in the specimen heated up to 640 K. The diffraction peak corresponding to the Ti-Ni B2 phase appeared and the peak from Ni disappeared for the specimen heated up to 675 K. The Ti-Ni B2 phase crystallized from the amorphous reaction layer. After further heating above the third exothermic peak, the intensity of the peak from the Ti-Ni B2 phase increased, the peak from Ti disappeared and the peaks corresponding to Ti2Ni appeared. The Ti2Ni phase was formed by the reaction of the Ti-Ni B2 and Ti

  6. Crystal structure of TiNi nanoparticles obtained by Ar ion beam deposition

    International Nuclear Information System (INIS)

    Castro, A. Torres; Cuellar, E. Lopez; Mendez, U. Ortiz; Yacaman, M. Jose

    2008-01-01

    Nanoparticles are a state of matter that have properties different from either molecules or bulk solids, turning them into a very interesting class of materials to study. In the present work, the crystal structure of TiNi nanoparticles obtained by ion beam deposition is characterized. TiNi nanoparticles were obtained from TiNi wire samples by sputtering with Ar ions using a Gatan precision ion polishing system. The TiNi nanoparticles were deposited on a Lacey carbon film that was used for characterization by transmission electron microscopy. The nanoparticles were characterized by high-resolution transmission electron microscopy, high-angle annular dark-field imaging, electron diffraction, scanning transmission electron microscopy and energy-dispersive X-ray spectroscopy. Results of nanodiffraction seem to indicate that the nanoparticles keep the same B2 crystal structure as the bulk material but with a decreased lattice parameter

  7. Big Data and Intellectual Property Rights in the Health and Life Sciences

    DEFF Research Database (Denmark)

    Minssen, Timo; Pierce, Justin

    2018-01-01

    Undeniably “Big Data” plays a crucial role in the ongoing evolution of health care and life science sector innovations. In recent years U.S. and European authorities have developed public platforms and infrastructures providing access to vast stores of health-care knowledge, including data from... especially in the life science sectors where competitive innovation and research and development (R&D) resources are persistent considerations. For private actors, the like of pharmaceutical companies, health care providers, laboratories and insurance companies, it is becoming common practice to accumulate R&D data making it searchable through medical databases. This trend is advanced and supported by recent initiatives and legislation that are increasing the transparency of various forms of data, such as clinical trials data. As a result, researchers, companies, patients and health care providers gain...

  8. Prospect of Ti-Ni shape memory alloy applied in reactor structures

    International Nuclear Information System (INIS)

    Duan Yuangang

    1995-01-01

    Shape memory effect mechanism, physical properties, composition, manufacturing process and applications in mechanical structures of Ti-Ni shape memory alloy are introduced. Prospective applications of Ti-Ni shape memory alloy in reactor structures are discussed, and some necessary technical conditions for applying shape memory alloys in reactor structures are outlined.

  9. Physicists tackle questions of tiny dimensions

    CERN Multimedia

    Moran, Barbara

    2003-01-01

    Today's physicists have a dilemma: they are using two separate theories to describe the universe. General relativity, which describes gravity, works for large objects like planets. Quantum mechanics, which involves the other forces, works for tiny objects like atoms. Unfortunately, the two theories don't match up.

  10. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing necessity within practitioners to understand better the phenomenon, starting from a clear stated definition. This book aims to be a starting reading for executives who want (and need) to keep the pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general - such as ethics, privacy, and ownership – while others concern more specific business situations (e.g., initial public offering, growth st...

  11. Advances in developing TiNi nanoparticles

    International Nuclear Information System (INIS)

    Castro, A. Torres; Cuellar, E. Lopez; Mendez, U. Ortiz; Yacaman, M. Jose

    2006-01-01

    The elaboration of nanoparticles has become a field of great interest for many scientists. Nanoparticles possess different properties than those shown by bulk materials. Shape memory alloys have the exceptional ability to recover their original shape by simple heating after being 'plastically' deformed. When this process occurs, important changes in properties, such as mechanical and electrical ones, develop in the bulk material. If it is possible to obtain nanoparticles with shape memory effects, these nanoparticles could be used in the elaboration of nanofluids with the ability to change their electrical and thermal conductivity with temperature changes, i.e., smart nanofluids. In this work, some recent results and discussion of TiNi nanoparticles obtained by ion beam milling directly from a TiNi wire with shape memory are presented. The nanoparticles obtained by this process are about 2 nm in diameter with a composition of Ti-41.0 at.% Ni. Nanoparticles synthesized by this method have an ordered structure

  12. Communicating the Nature of Science through "The Big Bang Theory": Evidence from a Focus Group Study

    Science.gov (United States)

    Li, Rashel; Orthia, Lindy A.

    2016-01-01

    In this paper, we discuss a little-studied means of communicating about or teaching the nature of science (NOS)--through fiction television. We report some results of focus group research which suggest that the American sitcom "The Big Bang Theory" (2007-present), whose main characters are mostly working scientists, has influenced…

  13. The Sounds of the Little and Big Bangs

    Science.gov (United States)

    Shuryak, Edward

    2017-11-01

    Studies of heavy ion collisions have discovered that tiny fireballs of a new phase of matter -- quark gluon plasma (QGP) -- undergo an explosion, called the Little Bang. In spite of its small size, it is not only well described by hydrodynamics, but even small perturbations on top of the explosion turned out to be well described by hydrodynamical sound modes. The cosmological Big Bang also went through phase transitions, the QCD and electroweak ones, which are expected to produce sounds as well. We discuss their subsequent evolution and a hypothetical inverse acoustic cascade, amplifying the amplitude. Ultimately, the collision of two sound waves leads to the formation of gravity waves, with the smallest wavelength. We briefly discuss how those can be detected.

  14. A 'tiny-orange' spectrometer for electrons

    International Nuclear Information System (INIS)

    Silva, N.C. da.

    1990-01-01

    A tiny-orange electron spectrometer was designed and constructed using flat permanent magnets and a surface barrier detector. The transmission functions of different system configurations were determined for energies in the 200-1100 keV range. A mathematical model for the system was developed. (L.C.J.A.)

  15. Addressing big data issues in Scientific Data Infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Membrey, P.; Grosso, P.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2013-01-01

    Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses a nature and definition of Big Data that include such features as Volume,

  16. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  17. Leros: A Tiny Microcontroller for FPGAs

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2011-01-01

    Leros is a tiny microcontroller that is optimized for current low-cost FPGAs. Leros is designed with a balanced logic to on-chip memory relation. The design goal is a microcontroller that can be clocked at about half the speed of a pipelined on-chip memory and that consumes fewer than 300 logic cells...

  18. IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research.

    Science.gov (United States)

    Chen, Ying; Elenee Argentinis, J D; Weber, Griff

    2016-04-01

    Life sciences researchers are under pressure to innovate faster than ever. Big data offer the promise of unlocking novel insights and accelerating breakthroughs. Ironically, although more data are available than ever, only a fraction is being integrated, understood, and analyzed. The challenge lies in harnessing volumes of data, integrating the data from hundreds of sources, and understanding their various formats. New technologies such as cognitive computing offer promise for addressing this challenge because cognitive solutions are specifically designed to integrate and analyze big datasets. Cognitive solutions can understand different types of data such as lab values in a structured database or the text of a scientific publication. Cognitive solutions are trained to understand technical, industry-specific content and use advanced reasoning, predictive modeling, and machine learning techniques to advance research faster. Watson, a cognitive computing technology, has been configured to support life sciences research. This version of Watson includes medical literature, patents, genomics, and chemical and pharmacological data that researchers would typically use in their work. Watson has also been developed with specific comprehension of scientific terminology so it can make novel connections in millions of pages of text. Watson has been applied to a few pilot studies in the areas of drug target identification and drug repurposing. The pilot results suggest that Watson can accelerate identification of novel drug candidates and novel drug targets by harnessing the potential of big data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Machine learning for Big Data analytics in plants.

    Science.gov (United States)

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. "Air Toxics under the Big Sky": Examining the Effectiveness of Authentic Scientific Research on High School Students' Science Skills and Interest

    Science.gov (United States)

    Ward, Tony J.; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    "Air Toxics Under the Big Sky" is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. This research explored: (1)…

  1. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article analyses the 'Big Data' issue, which has been discussed over the last 10 years. The reasons and factors behind the issue are identified. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis characterises the 'Big Data' issue as a form of informational barrier. The issue can be solved correctly, and it encourages the development of scientific and computational methods.

  2. Principales parámetros para el estudio de la colaboración científica en Big Science [Main parameters for the study of scientific collaboration in Big Science]

    OpenAIRE

    Ortoll, Eva; Canals, Agustí; Garcia, Montserrat; Cobarsí, Josep

    2014-01-01

    In several scientific disciplines research has shifted from experiments of a reduced scale to large and complex collaborations. Many recent scientific achievements like the human genome sequencing or the discovery of the Higgs boson have taken place within the “big science” paradigm. The study of scientific collaboration needs to take into account all the diverse factors that have an influence on it. In the case of big science experiments, some of those aspects are particularly important: num...

  3. The Sounds of the Little and Big Bangs

    Directory of Open Access Journals (Sweden)

    Edward Shuryak

    2017-11-01

    Full Text Available Studies on heavy ion collisions have discovered that tiny fireballs of a new phase of matter—quark gluon plasma (QGP)—undergo an explosion, called the Little Bang. In spite of its small size, not only is it well described by hydrodynamics, but even small perturbations on top of the explosion turned out to be well described by hydrodynamical sound modes. The cosmological Big Bang also went through phase transitions, related with Quantum Chromodynamics (QCD) and electroweak/Higgs symmetry breaking, which are also expected to produce sounds. We discuss their subsequent evolution and hypothetical inverse acoustic cascade, amplifying the amplitude. Ultimately, the collision of two sound waves leads to the formation of gravitational waves. We briefly discuss how these gravitational waves can be detected.

  4. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? What are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support in the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  5. Data Science and its Relationship to Big Data and Data-Driven Decision Making.

    Science.gov (United States)

    Provost, Foster; Fawcett, Tom

    2013-03-01

    Companies have realized they need to hire data scientists, academic institutions are scrambling to put together data-science programs, and publications are touting data science as a hot, even "sexy", career choice. However, there is confusion about what exactly data science is, and this confusion could lead to disillusionment as the concept diffuses into meaningless buzz. In this article, we argue that there are good reasons why it has been hard to pin down exactly what data science is. One reason is that data science is intricately intertwined with other concepts also of growing importance, such as big data and data-driven decision making. Another reason is the natural tendency to associate what a practitioner does with the definition of the practitioner's field; this can result in overlooking the fundamentals of the field. We believe that trying to define the boundaries of data science precisely is not of the utmost importance. We can debate the boundaries of the field in an academic setting, but in order for data science to serve business effectively, it is important (i) to understand its relationships to other important related concepts, and (ii) to begin to identify the fundamental principles underlying data science. Once we embrace (ii), we can much better understand and explain exactly what data science has to offer. Furthermore, only once we embrace (ii) should we be comfortable calling it data science. In this article, we present a perspective that addresses all these concepts. We close by offering, as examples, a partial list of fundamental principles underlying data science.

  6. Revenge of tiny Miranda

    International Nuclear Information System (INIS)

    Goldreich, P.; Nicholson, P.

    1977-01-01

    Reference is made to Dermott and Gold (Nature 267: 590 (1977)) who proposed a resonance model for the rings of Uranus. They assumed that the rings are composed of small particles librating about stable resonances determined by pairs of satellites, either Ariel and Titania or Ariel and Oberon. They dismissed as insignificant resonances involving 'tiny Miranda'. It is reported here that, by a wide margin, the strongest resonances are all associated with Miranda. It is also shown that the hypothesis that the rings are made up of librating particles, whilst original and ingenious, is incorrect. (author)

  7. Examining the Big-Fish-Little-Pond Effect on Students' Self-Concept of Learning Science in Taiwan Based on the TIMSS Databases

    Science.gov (United States)

    Liou, Pey-Yan

    2014-01-01

    The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the…

  8. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  9. Big Data Challenges in Climate Science: Improving the Next-Generation Cyberinfrastructure

    Science.gov (United States)

    Schnase, John L.; Lee, Tsengdar J.; Mattmann, Chris A.; Lynnes, Christopher S.; Cinquini, Luca; Ramirez, Paul M.; Hart, Andre F.; Williams, Dean N.; Waliser, Duane; Rinsland, Pamela; hide

    2016-01-01

    The knowledge we gain from research in climate science depends on the generation, dissemination, and analysis of high-quality data. This work comprises technical practice as well as social practice, both of which are distinguished by their massive scale and global reach. As a result, the amount of data involved in climate research is growing at an unprecedented rate. Climate model intercomparison (CMIP) experiments, the integration of observational data and climate reanalysis data with climate model outputs, as seen in the Obs4MIPs, Ana4MIPs, and CREATE-IP activities, and the collaborative work of the Intergovernmental Panel on Climate Change (IPCC) provide examples of the types of activities that increasingly require an improved cyberinfrastructure for dealing with large amounts of critical scientific data. This paper provides an overview of some of climate science's big data problems and the technical solutions being developed to advance data publication, climate analytics as a service, and interoperability within the Earth System Grid Federation (ESGF), the primary cyberinfrastructure currently supporting global climate research activities.

  10. Big physics quartet win government backing

    Science.gov (United States)

    Banks, Michael

    2014-09-01

    Four major physics-based projects are among 10 to have been selected by Japan’s Ministry of Education, Culture, Sports, Science and Technology for funding in the coming decade as part of its “roadmap” of big-science projects.

  11. Artificial intelligence and big data management: the dynamic duo for moving forward data centric sciences

    OpenAIRE

    Vargas Solar, Genoveva

    2017-01-01

    After vivid discussions led by the emergence of the buzzword “Big Data”, it seems that industry and academia have reached an objective understanding about data properties (volume, velocity, variety, veracity and value), the resources and “know how” it requires, and the opportunities it opens. Indeed, new applications promising fundamental changes in society, industry and science, include face recognition, machine translation, digital assistants, self-driving cars, ad-serving, chat-bots, perso...

  12. Big Data in Health: a Literature Review from the Year 2005.

    Science.gov (United States)

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  13. A Big Bang Lab

    Science.gov (United States)

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that cannot be set up in a lab, either because they are too big, too far away, or happened in the very distant past. The authors of "How Far are the Stars?" show how the…

  14. The Role of Big Data in the Social Sciences

    Science.gov (United States)

    Ovadia, Steven

    2013-01-01

    Big Data is an increasingly popular term across scholarly and popular literature but lacks a formal definition (Lohr 2012). This is beneficial in that it keeps the term flexible. For librarians, Big Data represents a few important ideas. One is the need to balance accessibility with privacy. Librarians tend to want information to be as open…

  15. Beyond Big Science

    CERN Multimedia

    Boyle, Alan

    2007-01-01

    "Billion-dollar science projects end up being about much more than the science, whether we're talking about particle physics, or fusion research, or the international space station, or missions to the moon and beyond, or the next-generation radio telescope." (3 pages)

  16. Surface characterization of TiNi deformed by high-pressure torsion

    Energy Technology Data Exchange (ETDEWEB)

    Awang Shri, Dayangku Noorfazidah [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Structural Materials Unit, National Institute for Materials Science, Tsukuba, Ibaraki 305-0047 (Japan); Tsuchiya, Koichi, E-mail: tsuchiya.koichi@nims.go.jp [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Structural Materials Unit, National Institute for Materials Science, Tsukuba, Ibaraki 305-0047 (Japan); Yamamoto, Akiko [Biomaterials Unit, International Center for Material Nanoarchitectonics (WPI-MANA), National Institute for Materials Science, Namiki 1-1, Tsukuba, Ibaraki 305-0044 (Japan)

    2014-01-15

    The effect of grain refinement and amorphization by high-pressure torsion (HPT) on surface chemistry was investigated in TiNi. X-ray diffraction and micro-Vickers tests were used to check the phase changes and hardness before and after HPT. X-ray photoelectron spectroscopy was used to observe the changes in the natural passive film formed on the surface. Phase analysis reveals that crystalline TiNi becomes nanostructured, with hardness increasing with HPT straining. Grain refinement and amorphization caused by HPT reduce the amount of metallic Ni in the passive films and also increase the thickness of the film.

  17. Speaking sociologically with big data: symphonic social science and the future for big data research

    OpenAIRE

    Halford, Susan; Savage, Mike

    2017-01-01

    Recent years have seen persistent tension between proponents of big data analytics, using new forms of digital data to make computational and statistical claims about ‘the social’, and many sociologists sceptical about the value of big data, its associated methods and claims to knowledge. We seek to move beyond this, taking inspiration from a mode of argumentation pursued by Putnam (2000), Wilkinson and Pickett (2009) and Piketty (2014) that we label ‘symphonic social science’. This bears bot...

  18. Opportunities and challenges of big data for the social sciences: The case of genomic data.

    Science.gov (United States)

    Liu, Hexuan; Guo, Guang

    2016-09-01

    In this paper, we draw attention to one unique and valuable source of big data, genomic data, by demonstrating the opportunities they provide to social scientists. We discuss different types of large-scale genomic data and recent advances in statistical methods and computational infrastructure used to address challenges in managing and analyzing such data. We highlight how these data and methods can be used to benefit social science research. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Big Data Science Cafés: High School Students Experiencing Real Research with Scientists

    Science.gov (United States)

    Walker, C. E.; Pompea, S. M.

    2017-12-01

    The Education and Public Outreach group at the National Optical Astronomy Observatory has designed an outside-of-school education program to excite the interest of talented youth in future projects like the Large Synoptic Survey Telescope (LSST) and the NOAO (archival) Data Lab - their data approaches and key science projects. Originally funded by the LSST Corporation, the program cultivates talented youth to enter STEM disciplines and serves as a model to disseminate to the 40+ institutions involved in LSST. One Saturday a month during the academic year, high school students have the opportunity to interact with expert astronomers who work with large astronomical data sets in their scientific work. Students learn about killer asteroids, the birth and death of stars, colliding galaxies, the structure of the universe, gravitational waves, dark energy, dark matter, and more. The format for the Saturday science cafés has been a short presentation, discussion (plus food), computer lab activity and more discussion. They last about 2.5 hours and have been planned by a group of interested local high school students, an undergraduate student coordinator, the presenting astronomers, the program director and an evaluator. High school youth leaders help ensure an enjoyable and successful program for fellow students. They help their fellow students with the activities and help evaluate how well the science café went. Their remarks shape the next science café and improve the program. The experience offers youth leaders ownership of the program, opportunities to take on responsibilities and learn leadership and communication skills, as well as foster their continued interests in STEM. The prototype Big Data Science Academy was implemented successfully in the Spring 2017 and engaged almost 40 teens from greater Tucson in the fundamentals of astronomy concepts and research. As with any first implementation there were bumps. However, staff, scientists, and student leaders all

  20. Data Prospecting Framework - a new approach to explore "big data" in Earth Science

    Science.gov (United States)

    Ramachandran, R.; Rushing, J.; Lin, A.; Kuo, K.

    2012-12-01

    Due to advances in sensors, computation and storage, the cost and effort required to produce large datasets have been significantly reduced. As a result, we are seeing a proliferation of large-scale datasets being assembled in almost every science field, especially in the geosciences. Opportunities to exploit "big data" are enormous, as new hypotheses can be generated by combining and analyzing large amounts of data. However, such a data-driven approach to science discovery assumes that scientists can find and isolate relevant subsets from vast amounts of available data. Current Earth Science data systems only provide data discovery through simple metadata and keyword-based searches and are not designed to support data exploration capabilities based on the actual content. Consequently, scientists often find themselves downloading large volumes of data, struggling with large amounts of storage and learning new analysis technologies that will help them separate the wheat from the chaff. New mechanisms of data exploration are needed to help scientists discover the relevant subsets. We present data prospecting, a new content-based data analysis paradigm to support data-intensive science. Data prospecting allows researchers to explore big data when determining and isolating data subsets for further analysis. This is akin to geo-prospecting, in which mineral sites of interest are determined over the landscape through screening methods. The resulting "data prospects" only provide an interaction with and feel for the data through first-look analytics; the researchers would still have to download the relevant datasets and analyze them deeply using their favorite analytical tools to determine if the datasets will yield new hypotheses. Data prospecting combines two traditional categories of data analysis, data exploration and data mining, within the discovery step. Data exploration utilizes manual/interactive methods for data analysis such as standard statistical analysis and

  1. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with special reference to any future inquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors, and an attempt is made to tackle the problem that participation in a public inquiry comes far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries, but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  2. Taking a 'Big Data' approach to data quality in a citizen science project.

    Science.gov (United States)

    Kelling, Steve; Fink, Daniel; La Sorte, Frank A; Johnston, Alison; Bruns, Nicholas E; Hochachka, Wesley M

    2015-11-01

    Data from well-designed experiments provide the strongest evidence of causation in biodiversity studies. However, for many species the collection of these data is not scalable to the spatial and temporal extents required to understand patterns at the population level. Only data collected from citizen science projects can gather sufficient quantities of data, but data collected from volunteers are inherently noisy and heterogeneous. Here we describe a 'Big Data' approach to improve the data quality in eBird, a global citizen science project that gathers bird observations. First, eBird's data submission design ensures that all data meet high standards of completeness and accuracy. Second, we take a 'sensor calibration' approach to measure individual variation in eBird participant's ability to detect and identify birds. Third, we use species distribution models to fill in data gaps. Finally, we provide examples of novel analyses exploring population-level patterns in bird distributions.

  3. MERRA Analytic Services: Meeting the Big Data Challenges of Climate Science through Cloud-Enabled Climate Analytics-as-a-Service

    Science.gov (United States)

    Schnase, J. L.; Duffy, D.; Tamkin, G. S.; Nadeau, D.; Thompson, J. H.; Grieg, C. M.; McInerney, M.; Webster, W. P.

    2013-12-01

    Climate science is a Big Data domain that is experiencing unprecedented growth. In our efforts to address the Big Data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics, because it is the knowledge gained from our interactions with Big Data that ultimately produce societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by Cloud Computing. Within this framework, Cloud Computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics as a service. These elements are essential because in the aggregate they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the Big Data challenges in this domain. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS built on this principle. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of 26 key climate variables. It represents a type of data product that is of growing importance to scientists doing climate change research and a wide range of decision support applications. MERRA/AS brings together the following generative elements in a full, end-to-end demonstration of CAaaS capabilities: (1) high-performance, data proximal analytics, (2) scalable data management, (3) software appliance virtualization, (4) adaptive analytics, and (5) a domain-harmonized API. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, Cloud Computing lowers the barriers and risk to

  4. MERRA Analytic Services: Meeting the Big Data Challenges of Climate Science Through Cloud-enabled Climate Analytics-as-a-service

    Science.gov (United States)

    Schnase, John L.; Duffy, Daniel Quinn; Tamkin, Glenn S.; Nadeau, Denis; Thompson, John H.; Grieg, Christina M.; McInerney, Mark A.; Webster, William P.

    2014-01-01

    Climate science is a Big Data domain that is experiencing unprecedented growth. In our efforts to address the Big Data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). We focus on analytics, because it is the knowledge gained from our interactions with Big Data that ultimately produce societal benefits. We focus on CAaaS because we believe it provides a useful way of thinking about the problem: a specialization of the concept of business process-as-a-service, which is an evolving extension of IaaS, PaaS, and SaaS enabled by Cloud Computing. Within this framework, Cloud Computing plays an important role; however, we see it as only one element in a constellation of capabilities that are essential to delivering climate analytics as a service. These elements are essential because in the aggregate they lead to generativity, a capacity for self-assembly that we feel is the key to solving many of the Big Data challenges in this domain. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS built on this principle. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of 26 key climate variables. It represents a type of data product that is of growing importance to scientists doing climate change research and a wide range of decision support applications. MERRA/AS brings together the following generative elements in a full, end-to-end demonstration of CAaaS capabilities: (1) high-performance, data proximal analytics, (2) scalable data management, (3) software appliance virtualization, (4) adaptive analytics, and (5) a domain-harmonized API. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, Cloud Computing lowers the barriers and risk to

  5. Big Data and reality

    Directory of Open Access Journals (Sweden)

    Ryan Shaw

    2015-11-01

    Full Text Available DNA sequencers, Twitter, MRIs, Facebook, particle accelerators, Google Books, radio telescopes, Tumblr: what do these things have in common? According to the evangelists of “data science,” all of these are instruments for observing reality at unprecedentedly large scales and fine granularities. This perspective ignores the social reality of these very different technological systems, ignoring how they are made, how they work, and what they mean in favor of an exclusive focus on what they generate: Big Data. But no data, big or small, can be interpreted without an understanding of the process that generated them. Statistical data science is applicable to systems that have been designed as scientific instruments, but is likely to lead to confusion when applied to systems that have not. In those cases, a historical inquiry is preferable.

  6. The SIKS/BiGGrid Big Data Tutorial

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Lammerts, Evert; de Vries, A.P.

    2011-01-01

    The School for Information and Knowledge Systems (SIKS) and the Dutch e-science grid BiG Grid organized a new two-day tutorial on Big Data at the University of Twente on 30 November and 1 December 2011, just preceding the Dutch-Belgian Database Day. The tutorial is on top of some exciting new

  7. Discourse, Power, and Knowledge in the Management of "Big Science": The Production of Consensus in a Nuclear Fusion Research Laboratory.

    Science.gov (United States)

    Kinsella, William J.

    1999-01-01

    Extends a Foucauldian view of power/knowledge to the archetypical knowledge-intensive organization, the scientific research laboratory. Describes the discursive production of power/knowledge at the "big science" laboratory conducting nuclear fusion research and illuminates a critical incident in which the fusion research…

  8. Big Data, data integrity, and the fracturing of the control zone

    Directory of Open Access Journals (Sweden)

    Carl Lagoze

    2014-11-01

    Full Text Available Despite all the attention to Big Data and the claims that it represents a “paradigm shift” in science, we lack understanding about what are the qualities of Big Data that may contribute to this revolutionary impact. In this paper, we look beyond the quantitative aspects of Big Data (i.e. lots of data) and examine it from a sociotechnical perspective. We argue that a key factor that distinguishes “Big Data” from “lots of data” lies in changes to the traditional, well-established “control zones” that facilitated clear provenance of scientific data, thereby ensuring data integrity and providing the foundation for credible science. The breakdown of these control zones is a consequence of the manner in which our network technology and culture enable and encourage open, anonymous sharing of information, participation regardless of expertise, and collaboration across geographic, disciplinary, and institutional barriers. We are left with the conundrum—how to reap the benefits of Big Data while re-creating a trust fabric and an accountable chain of responsibility that make credible science possible.

  9. Big data has big potential for applications to climate change adaptation

    NARCIS (Netherlands)

    Ford, James D.; Tilleard, Simon E.; Berrang-Ford, Lea; Araos, Malcolm; Biesbroek, Robbert; Lesnikowski, Alexandra C.; MacDonald, Graham K.; Hsu, Angel; Chen, Chen; Bizikova, Livia

    2016-01-01

    The capacity to collect and analyze massive amounts of data is transforming research in the natural and social sciences (1). And yet, the climate change adaptation community has largely overlooked these developments. Here, we examine how “big data” can inform adaptation research

  10. Big Biomedical data as the key resource for discovery science

    Energy Technology Data Exchange (ETDEWEB)

    Toga, Arthur W.; Foster, Ian; Kesselman, Carl; Madduri, Ravi; Chard, Kyle; Deutsch, Eric W.; Price, Nathan D.; Glusman, Gustavo; Heavner, Benjamin D.; Dinov, Ivo D.; Ames, Joseph; Van Horn, John; Kramer, Roger; Hood, Leroy

    2015-07-21

    Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an “-ome to home” approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center’s computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson’s and Alzheimer’s.

  11. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A study on the shape memory characteristics of Ti-Ni50-x-Pdx alloys

    International Nuclear Information System (INIS)

    Lee, H. W.; Chun, B. S.; Oh, S. J.; Kuk, I.H.

    1991-01-01

    The shape memory characteristics of TiNi alloys are greatly affected by the alloy composition and heat treatment conditions. The present work investigated the effect of Pd additions (x = 5, 10, 15, 20) on the shape memory characteristics of TiNi alloys by means of electrical resistance measurement, X-ray diffraction, differential scanning calorimetry, and energy dispersive X-ray analysis. The results obtained from this study are as follows: 1. The martensitic transformation start temperature, Ms, of Ti-Ni50-x-Pdx alloys decreased considerably as the Pd content increased up to 10 at.%, whereas it increased markedly with Pd content in alloys containing more than 15 at.% Pd. 2. The Ms temperature of cold-worked Ti-Ni50-x-Pdx alloys was significantly lower than that of the fully annealed alloys, because the high density of dislocations introduced by cold working suppressed the martensitic transformation. (Author)

  13. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  14. A Multidisciplinary Perspective of Big Data in Management Research

    OpenAIRE

    Sheng, Jie; Amankwah-Amoah, J.; Wang, X.

    2017-01-01

    In recent years, big data has emerged as one of the prominent buzzwords in business and management. In spite of the mounting body of research on big data across the social science disciplines, scholars have offered little synthesis on the current state of knowledge. To take stock of academic research that contributes to the big data revolution, this paper tracks scholarly perspectives on big data in the management domain over the past decade. We identify key themes emerging in manageme...

  15. Fabrication, microstructure and stress effects in sputtered TiNi thin films

    International Nuclear Information System (INIS)

    Grummon, D.S.

    2000-01-01

    Sputtered thin films of equiatomic TiNi and TiNiX ternary alloys have excellent mechanical properties and exhibit robust shape-memory and transformational superelasticity. Furthermore, the energetic nature of the sputter deposition process allows the creation of highly refined microstructures that are difficult to achieve by melt-solidification. The present paper will present recent work on the relationship between processing, microstructure and properties of binary TiNi thin films, focusing primarily on residual stresses, kinetics of stress-relaxation and crystallization, and fine grain sizes achievable using hot-substrate direct crystallization. (orig.)

  16. The kinetics of Cr layer coated on TiNi films for hydrogen absorption

    Indian Academy of Sciences (India)

    The effect of hydrogen absorption on electrical resistance with temperature for TiNi and TiNi–Cr thin films was investigated. The TiNi thin films of thickness 800 Å were deposited at different angles (0°, 30°, 45°, 60° and 75°) under 10^-5 Torr pressure by thermal evaporation on glass substrates at room temperature.

  17. The BIG Data Center: from deposition to integration to translation.

    Science.gov (United States)

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Managing globally distributed expertise with new competence management solutions: a big-science collaboration as a pilot case.

    OpenAIRE

    Ferguson, J; Koivula, T; Livan, M; Nordberg, M; Salmia, T; Vuola, O

    2003-01-01

    In today's global organisations and networks, a critical factor for effective innovation and project execution is appropriate competence and skills management. The challenges include selection of strategic competences, competence development, and leveraging the competences and skills to drive innovation and collaboration for shared goals. This paper presents a new industrial web-enabled competence management and networking solution and its implementation and piloting in a complex big-science ...

  19. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  20. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  1. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness.

    Science.gov (United States)

    Dove, Edward S; Özdemir, Vural

    2015-09-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, "extreme centrism", and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics-separate and together-have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit.

  2. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness

    Science.gov (United States)

    Dove, Edward S.; Özdemir, Vural

    2015-01-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, “extreme centrism”, and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics—separate and together—have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit. PMID:26345196

  3. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness

    Directory of Open Access Journals (Sweden)

    Edward S. Dove

    2015-08-01

    Full Text Available The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, “extreme centrism”, and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics—separate and together—have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit.

  4. Struggling to Hear? Tiny Devices Can Keep You Connected

    Science.gov (United States)


  5. Big Sib Students' Perceptions of the Educational Environment at the School of Medical Sciences, Universiti Sains Malaysia, using Dundee Ready Educational Environment Measure (DREEM) Inventory.

    Science.gov (United States)

    Arzuman, Hafiza; Yusoff, Muhamad Saiful Bahri; Chit, Som Phong

    2010-07-01

    A cross-sectional descriptive study was conducted among Big Sib students to explore their perceptions of the educational environment at the School of Medical Sciences, Universiti Sains Malaysia (USM) and its weak areas using the Dundee Ready Educational Environment Measure (DREEM) inventory. The DREEM inventory is a validated global instrument for measuring educational environments in undergraduate medical and health professional education. The English version of the DREEM inventory was administered to all Year 2 Big Sib students (n = 67) at a regular Big Sib session. The purpose of the study as well as confidentiality and ethical issues were explained to the students before the questionnaire was administered. The response rate was 62.7% (42 out of 67 students). The overall DREEM score was 117.9/200 (SD 14.6). The DREEM indicated that the Big Sib students' perception of educational environment of the medical school was more positive than negative. Nevertheless, the study also revealed some problem areas within the educational environment. This pilot study revealed that Big Sib students perceived a positive learning environment at the School of Medical Sciences, USM. It also identified some low-scored areas that require further exploration to pinpoint the exact problems. The relatively small study population selected from a particular group of students was the major limitation of the study. This small sample size also means that the study findings cannot be generalised.

  6. Processes meet big data : connecting data science with process science

    NARCIS (Netherlands)

    van der Aalst, W.; Damiani, E.

    2015-01-01

    As more and more companies are embracing Big data, it has become apparent that the ultimate challenge is to relate massive amounts of event data to processes that are highly dynamic. To unleash the value of event data, events need to be tightly connected to the control and management of operational

  7. Cavitation erosion of Ti-Ni shape memory alloy deposited coatings and Fe base shape memory alloy solid

    International Nuclear Information System (INIS)

    Hattori, Shuji; Fujisawa, Seiji; Owa, Tomonobu

    2007-01-01

    In this study, cavitation erosion tests were carried out using thermally sprayed and deposited Ti-Ni shape memory alloy surface coatings. The results show that the thermally sprayed Ti-Ni specimen has many initial defects, so its erosion resistance is very low. The erosion resistance of the Ti-Ni deposit is about 5-10 times higher than that of SUS 304; thus the erosion resistance of the Ti-Ni deposit is better than that of the Ti-Ni thermal spray coating. Cavitation erosion tests were also carried out on Fe-Mn-Si, a shape memory alloy, and on gunmetal, which has a low elastic modulus. The erosion resistance of the Fe-Mn-Si shape memory alloy solid is about 9 times higher than that of SUS 304. The erosion resistance of gunmetal is almost the same as that of SUS 304, because the gunmetal specimen has many small defects on its original surface. (author)

  8. Research in an emerging 'big science' discipline. The case of neutron scattering in Spain

    International Nuclear Information System (INIS)

    Borja Gonzalez-Albo; Maria Bordons; Pedro Gorria

    2010-01-01

    Neutron scattering (NS) is a 'big science' discipline whose research spans a wide spectrum of fields, from fundamental or basic science to technological applications. The objective of this paper is to track the evolution of Spanish research in NS from a bibliometric perspective and to place it in the international context. Scientific publications of Spanish authors included in the Web of Science (WoS 1970-2006) are analysed with respect to five relevant dimensions: volume of research output, impact, disciplinary diversity, structural field features and internationalisation. NS emerges as a highly internationalised, fast-growing field whose research is firmly rooted in Physics, Chemistry and Engineering, but with applications in a wide range of fields. International collaboration links (present in around 70% of the documents) and national links have largely contributed to mould the existing structure of research in the area, which revolves around major neutron scattering facilities abroad. The construction of a new European neutron source (ESS) would contribute to the consolidation of the field within the EU, since it will strengthen research and improve current activity. (author)

  9. In situ crystallization of sputter-deposited TiNi by ion irradiation

    International Nuclear Information System (INIS)

    Ikenaga, Noriaki; Kishi, Yoichi; Yajima, Zenjiro; Sakudo, Noriyuki

    2013-01-01

    Highlights: ► We developed a sputtering deposition process equipped with an ion irradiation system. ► Ion irradiation enables crystallization at lower substrate temperature. ► Ion fluence has an effective range for low-temperature crystallization. ► Crystallized films made on polyimide by the process show the shape memory effect. -- Abstract: TiNi is well known as a typical shape-memory alloy, and the shape-memory property appears only when the structure is crystalline. Until recently, the material has been formed as amorphous film by single-target sputtering deposition and then crystallized by annealing at high temperature, over 500 °C. It has therefore been difficult to make crystalline TiNi film directly on a substrate of polymer-based material because of the low heat resistance of the substrate. In order to realize an actuator from crystallized TiNi film on polymer substrates, the substrate temperature should be kept below 200 °C throughout the whole process. In our previous studies we found that the deposited film can be crystallized at very low temperature, without annealing, by simultaneous irradiation with Ar ions during sputter deposition, and we also demonstrated the shape-memory effect in TiNi film made by the new process. In order to investigate which process parameters contribute to the low-temperature crystallization, we focused on the ion fluence. As a result, the transition from the amorphous structure to the crystalline one was found to occur within a threshold range of ion fluence.

  10. XPS characterization of surface and interfacial structure of sputtered TiNi films on Si substrate

    International Nuclear Information System (INIS)

    Fu Yongqing; Du Hejun; Zhang, Sam; Huang Weimin

    2005-01-01

    TiNi films were prepared by co-sputtering TiNi and Ti targets. X-ray photoelectron spectroscopy (XPS) was employed to study the surface chemistry of the films and the interfacial structure of the Si/TiNi system. Exposure of the TiNi film to the ambient atmosphere (23 deg. C and 80% relative humidity) facilitated quick adsorption of oxygen and carbon on the surface. With time, the carbon and oxygen content increased drastically at the surface, while oxygen diffused further into the layer. After a year, the carbon content at the surface became as high as 65.57% and Ni dropped below the detection limit of XPS. Depth profiling revealed that significant inter-diffusion occurred between the TiNi film and the Si substrate over a layer of 90-100 nm. The detailed changes in the bonding of different elements with depth were obtained using XPS, and the formation of titanium silicides at the interface was identified

  11. The Whole Shebang: How Science Produced the Big Bang Model.

    Science.gov (United States)

    Ferris, Timothy

    2002-01-01

    Offers an account of the accumulation of evidence that has led scientists to have confidence in the big bang theory of the creation of the universe. Discusses the early work of Ptolemy, Copernicus, Kepler, Galileo, and Newton, noting the rise of astrophysics, and highlighting the birth of the big bang model (the cosmic microwave background theory…

  12. From tiny microalgae to huge biorefineries

    OpenAIRE

    Gouveia, L.

    2014-01-01

    Microalgae are an emerging research field due to their high potential as a source of several biofuels, in addition to the fact that they have a high nutritional value and contain compounds with health benefits. They are also widely used for water stream bioremediation and carbon dioxide mitigation. Therefore, these tiny microalgae could provide a huge source of compounds and products, giving a good example of a real biorefinery approach. This work shows and presents examples of experimental...

  13. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years, the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  14. Towards efficient data exchange and sharing for big-data driven materials science: metadata and data formats

    Science.gov (United States)

    Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias

    2017-11-01

    With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero base lines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes, and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not even synergetic. The appropriate formats and conventions were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
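    The "converter" strategy described in this abstract can be illustrated with a minimal sketch. Everything below (function names, key names, record layout) is our own illustration, not NOMAD's or ESL's actual API: two hypothetical codes report total energies under different keys and in different units, and per-code converters map both into one code-independent record with consistent keys and units.

    ```python
    # Illustrative sketch of per-code "converters" that normalize heterogeneous
    # outputs into a single code-independent schema (hypothetical, not NOMAD's API).

    HARTREE_TO_EV = 27.211386245988  # CODATA conversion factor

    def convert_codeA_output(raw: dict) -> dict:
        """Hypothetical code A reports total energy in Hartree under 'etot_ha';
        normalize to eV and to the shared key names."""
        return {
            "code": raw["program"],
            "total_energy_eV": raw["etot_ha"] * HARTREE_TO_EV,
            "n_atoms": raw["natoms"],
        }

    def convert_codeB_output(raw: dict) -> dict:
        """Hypothetical code B already reports energy in eV, but under
        different key names; only the keys need remapping."""
        return {
            "code": raw["name"],
            "total_energy_eV": raw["energy_ev"],
            "n_atoms": raw["atom_count"],
        }

    # Two different codes, one standardized record schema:
    a = convert_codeA_output({"program": "codeA", "etot_ha": -1.0, "natoms": 2})
    b = convert_codeB_output({"name": "codeB", "energy_ev": -27.2, "atom_count": 2})
    assert set(a) == set(b)  # identical code-independent keys
    ```

    The crux, as the abstract notes, is agreeing on the target conventions (units, zero base lines, key names); once those are fixed, each converter is a small, code-specific mapping.
    
    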

  15. Finding the big bang

    CERN Document Server

    Page, Lyman A; Partridge, R Bruce

    2009-01-01

    Cosmology, the study of the universe as a whole, has become a precise physical science, the foundation of which is our understanding of the cosmic microwave background radiation (CMBR) left from the big bang. The story of the discovery and exploration of the CMBR in the 1960s is recalled for the first time in this collection of 44 essays by eminent scientists who pioneered the work. Two introductory chapters put the essays in context, explaining the general ideas behind the expanding universe and fossil remnants from the early stages of the expanding universe. The last chapter describes how the confusion of ideas and measurements in the 1960s grew into the present tight network of tests that demonstrate the accuracy of the big bang theory. This book is valuable to anyone interested in how science is done, and what it has taught us about the large-scale nature of the physical universe.

  16. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedback of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations across the nation, will generate large volumes of ecological data every day. The raw data from the sensors of those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges.
Those challenges include interoperability of multiple

  17. Big Data: You Are Adding to . . . and Using It

    Science.gov (United States)

    Makela, Carole J.

    2016-01-01

    "Big data" prompts a whole lexicon of terms--data flow; analytics; data mining; data science; smart you name it (cars, houses, cities, wearables, etc.); algorithms; learning analytics; predictive analytics; data aggregation; data dashboards; digital tracks; and big data brokers. New terms are being coined frequently. Are we paying…

  18. Managing globally distributed expertise with new competence management solutions a big-science collaboration as a pilot case.

    CERN Document Server

    Ferguson, J; Livan, M; Nordberg, M; Salmia, T; Vuola, O

    2003-01-01

    In today's global organisations and networks, a critical factor for effective innovation and project execution is appropriate competence and skills management. The challenges include selection of strategic competences, competence development, and leveraging the competences and skills to drive innovation and collaboration for shared goals. This paper presents a new industrial web-enabled competence management and networking solution and its implementation and piloting in a complex big-science environment of globally distributed competences.

  19. Database Resources of the BIG Data Center in 2018.

    Science.gov (United States)

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Big data and visual analytics in anaesthesia and health care.

    Science.gov (United States)

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. THE ROLE OF TINY GRAINS ON THE ACCRETION PROCESS IN PROTOPLANETARY DISKS

    International Nuclear Information System (INIS)

    Bai Xuening

    2011-01-01

    Tiny grains such as polycyclic aromatic hydrocarbons (PAHs) have been thought to dramatically reduce the coupling between the gas and magnetic fields in weakly ionized gas such as in protoplanetary disks (PPDs) because they provide a tremendous surface area to recombine free electrons. The presence of tiny grains in PPDs thus raises the question of whether the magnetorotational instability (MRI) is able to drive rapid accretion consistent with observations. Charged tiny grains have similar conduction properties as ions, whose presence leads to qualitatively new behaviors in the conductivity tensor, characterized by n̄/n_e > 1, where n_e and n̄ denote the number densities of free electrons and of all other charged species, respectively. In particular, Ohmic conductivity becomes dominated by charged grains rather than by electrons when n̄/n_e exceeds about 10^3, and the Hall and ambipolar diffusion (AD) coefficients are reduced by a factor of (n̄/n_e)^2 in the AD-dominated regime relative to that in the Ohmic regime. Applying the methodology of Bai, we find that in PPDs, when PAHs are sufficiently abundant (≳10^-9 per H2 molecule), there exists a transition radius r_trans of about 10-20 AU, beyond which the MRI active layer extends to the disk midplane. At r_trans, the optimistically predicted MRI-driven accretion rate Ṁ is one to two orders of magnitude smaller than that in the grain-free case, which is too small compared with the observed rates, but is in general no smaller than the predicted Ṁ with solar-abundance 0.1 μm grains. At r > r_trans, we find that, remarkably, the predicted Ṁ exceeds the grain-free case due to a net reduction of AD by charged tiny grains and reaches a few times 10^-8 M_sun yr^-1. This is sufficient to account for the observed Ṁ in transitional disks. Larger grains (≳0.1 μm) are too massive to reach such high abundance as tiny grains and to facilitate the accretion process.
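
The two regime criteria quoted in the abstract can be captured in a minimal sketch; the threshold and the inputs are illustrative numbers taken from the abstract, not the paper's full conductivity calculation:

```python
# Regime criteria for grain-modified conductivity in a protoplanetary disk,
# following the scalings quoted in the abstract (illustrative only).

def ohmic_regime(n_bar_over_ne, threshold=1e3):
    """Ohmic conductivity becomes grain-dominated once the ratio of all
    charged species to free electrons exceeds roughly 10^3."""
    return "grain-dominated" if n_bar_over_ne > threshold else "electron-dominated"

def ad_reduction_factor(n_bar_over_ne):
    """Hall/ambipolar-diffusion coefficients are reduced by (n_bar/n_e)^2
    in the AD-dominated regime relative to the Ohmic regime."""
    return n_bar_over_ne ** 2
```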

  2. Big Data and Regional Science: Opportunities, Challenges, and Directions for Future Research

    OpenAIRE

    Schintler, Laurie A.; Fischer, Manfred M.

    2018-01-01

    Recent technological, social, and economic trends and transformations are contributing to the production of what is usually referred to as Big Data. Big Data, which is typically defined by four dimensions -- Volume, Velocity, Veracity, and Variety -- changes the methods and tactics for using, analyzing, and interpreting data, requiring new approaches for data provenance, data processing, data analysis and modeling, and knowledge representation. The use and analysis of Big Data involves severa...

  3. The scientific production on data quality in big data: a study in the Web of Science database

    Directory of Open Access Journals (Sweden)

    Priscila Basto Fagundes

    2017-11-01

    Full Text Available More and more, the big data theme has attracted the interest of researchers from different areas of knowledge, among them information scientists, who need to understand its concepts and applications in order to contribute new proposals for managing the information generated from the data stored in these environments. The objective of this article is to present a survey of publications about data quality in big data indexed in the Web of Science database up to the year 2016. We present the total number of publications indexed in the database, the number of publications per year, the geographic origin of the research, and a synthesis of the studies found. The survey of the database was conducted in July 2017 and resulted in a total of 23 publications. In order to present a summary of the publications in this article, we searched the Internet for the full texts of all the publications and read those that were available. With this survey it was possible to conclude that studies on data quality in big data began to be published in 2013; most present literature reviews, and few offer effective proposals for monitoring and managing data quality in environments with large volumes of data. This survey is therefore intended to contribute to and foster new research on data quality in big data environments.

  4. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  5. Micromechanical Analysis of Crack Closure Mechanism for Intelligent Material Containing TiNi Fibers

    Science.gov (United States)

    Araki, Shigetoshi; Ono, Hiroyuki; Saito, Kenji

    In our previous study, the micromechanical modeling of an intelligent material containing TiNi fibers was performed and the stress intensity factor K_I at the tip of a crack in the material was expressed in terms of the magnitude of the shape memory shrinkage of the fibers and the thermal expansion strain in the material. In this study, the value of K_I at the tip of the crack in the TiNi/epoxy material is calculated numerically using the analytical expressions obtained in our first report. As a result, we find that the K_I value decreases with increasing shrink strain of the fibers, and this tendency agrees with the experimental result obtained by Shimamoto et al. (Trans. Jpn. Soc. Mech. Eng., Vol. 65, No. 634 (1999), pp. 1282-1286). Moreover, there exists an optimal value of the shrink strain of the fibers that makes the K_I value zero. The change in K_I with temperature during heating from the reference temperature to the inverse austenitic finishing temperature of the TiNi fiber is also consistent with the experimental result. These results can be explained by the changes in the shrink strain, the thermal expansion strain, and the elastic moduli of the TiNi fiber with temperature, and may be useful in designing intelligent materials containing TiNi fibers from the viewpoint of crack closure.
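
The existence of an optimal shrink strain that zeroes K_I can be illustrated with a root-finding sketch; the linear K_I(strain) model below is hypothetical and merely stands in for the paper's analytical expressions:

```python
# Hypothetical linear model K_I(strain) = K0 - c*strain standing in for the
# paper's analytical expressions; units are omitted.

def find_zero_strain(k1, lo=0.0, hi=0.05, tol=1e-9):
    """Bisection for the shrink strain at which K_I crosses zero,
    assuming K_I decreases monotonically with shrink strain."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k1(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

K0, c = 2.0, 100.0                                  # made-up coefficients
eps_opt = find_zero_strain(lambda e: K0 - c * e)    # zero crossing at K0/c
```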

  6. Analysis of Big Data technologies for use in agro-environmental science

    NARCIS (Netherlands)

    Lokers, Rob; Knapen, Rob; Janssen, Sander; Randen, van Yke; Jansen, Jacques

    2016-01-01

    Recent developments like the movements of open access and open data and the unprecedented growth of data, which has come forward as Big Data, have shifted focus to methods to effectively handle such data for use in agro-environmental research. Big Data technologies, together with the increased

  7. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  8. Modeling and Analysis in Marine Big Data: Advances and Challenges

    Directory of Open Access Journals (Sweden)

    Dongmei Huang

    2015-01-01

    Full Text Available Big data has attracted tremendous attention from academic research institutes, governments, and enterprises in all areas of information science. With the growing diversity of marine data acquisition techniques, marine data have grown exponentially in the last decade, forming marine big data. As an innovation, marine big data is a double-edged sword. On the one hand, many potentially valuable insights are hidden in the huge volume of marine data, which is widely used in marine-related fields such as tsunami and red-tide warning, prevention, and forecasting, disaster inversion, and visualization modeling after disasters. There is no doubt that future competition in marine science and technology will converge on marine data exploration. On the other hand, marine big data also brings many new challenges in data management, such as difficulties in data capture, storage, analysis, and application, as well as data quality control and data security. To highlight theoretical methodologies and practical applications of marine big data, this paper presents a broad view of marine big data and its management, surveys key methods and models, introduces an engineering instance that demonstrates the management architecture, and discusses the existing challenges.

  9. PANGAEA® - Data Publisher for Earth & Environmental Science - Research data enters scholarly communication and big data analysis

    Science.gov (United States)

    Diepenbroek, Michael; Schindler, Uwe; Riedel, Morris; Huber, Robert

    2014-05-01

    The ICSU World Data Center PANGAEA is an information system for the acquisition, processing, long-term storage, and publication of geo-referenced data related to the earth science fields. Storing more than 350,000 data sets from all fields of geoscience, it is among the largest archives of observational earth science data. Standards-conformant interfaces (ISO, OGC, W3C, OAI) enable access from a variety of data and information portals, among them PANGAEA's own search engine (www.pangaea.de) and e.g. GBIF. All data sets in PANGAEA are citable, fully documented, and can be referenced via persistent identifiers (Digital Object Identifiers - DOIs) - a premise for data publication. Together with other ICSU World Data Centers (www.icsu-wds.org) and the Technical Information Library in Germany (TIB), PANGAEA had a share in the implementation of a DOI-based registry for scientific data, which is by now supported by a worldwide consortium of libraries (www.datacite.org). A further milestone was building up strong cooperations with science publishers such as Elsevier, Springer, Wiley, AGU, Nature and others. A common web service allows supplementary data in PANGAEA to be referenced directly from an article's abstract page (e.g. Science Direct). The next step with science publishers is to further integrate the editorial process for the publication of supplementary data with the publication procedures on the journal side. Data-centric research efforts such as environmental modelling or big data analysis approaches represent new challenges for PANGAEA. Integrated data warehouse technologies are used for highly efficient retrieval and compilation of time slices or surface data matrices for any measurement parameters out of the whole data continuum. Further, new and emerging big data approaches are currently being investigated within PANGAEA, e.g. to evaluate their usability for quality control or data clustering. PANGAEA is operated as a joint long-term facility by MARUM at the University of Bremen
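
As a sketch of harvesting metadata through such standards-conformant interfaces, the snippet below builds an OAI-PMH ListRecords request URL. The endpoint address is an assumption for illustration (consult pangaea.de for the actual provider URL); the `verb` and `metadataPrefix` parameters themselves are standard OAI-PMH 2.0:

```python
from urllib.parse import urlencode

# Endpoint address is an assumption for illustration; check pangaea.de for
# the actual OAI-PMH provider URL.
OAI_ENDPOINT = "https://ws.pangaea.de/oai/provider"

def oai_list_records_url(metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH v2.0 ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec is not None:
        params["set"] = set_spec
    return OAI_ENDPOINT + "?" + urlencode(params)

url = oai_list_records_url()
```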

  10. Fast Response Shape Memory Effect Titanium Nickel (TiNi) Foam Torque Tubes

    Science.gov (United States)

    Jardine, Peter

    2014-01-01

    Shape Change Technologies has developed a process to manufacture net-shaped TiNi foam torque tubes that demonstrate the shape memory effect. The torque tubes dramatically reduce response time, by a factor of 10. This Phase II project matured the actuator technology by rigorously characterizing the process to optimize the quality of the TiNi and by developing a set of metrics to provide ISO 9002 quality assurance. Laboratory Virtual Instrument Engineering Workbench (LabVIEW)-based real-time control of the torsional actuators was also developed. These actuators were developed with The Boeing Company for aerospace applications.

  11. Galled by the Gallbladder?: Your Tiny, Hard-Working Digestive Organ

    Science.gov (United States)

    ... Galled by the Gallbladder? Your Tiny, Hard-Working Digestive Organ Most ... among the most common and costly of all digestive system diseases. By some estimates, up to 20 ...

  12. Opening the Black Box: Understanding the Science Behind Big Data and Predictive Analytics.

    Science.gov (United States)

    Hofer, Ira S; Halperin, Eran; Cannesson, Maxime

    2018-05-25

    Big data, smart data, predictive analytics, and other similar terms are ubiquitous in the lay and scientific literature. However, despite the frequency of usage, these terms are often poorly understood, and evidence of their disruption to clinical care is hard to find. This article aims to address these issues by first defining and elucidating the term big data, exploring the ways in which modern medical data, both inside and outside the electronic medical record, meet the established definitions of big data. We then define the term smart data and discuss the transformations necessary to make big data into smart data. Finally, we examine the ways in which this transition from big to smart data will affect what we do in research, retrospective work, and ultimately patient care.

  13. Big biomedical data as the key resource for discovery science.

    Science.gov (United States)

    Toga, Arthur W; Foster, Ian; Kesselman, Carl; Madduri, Ravi; Chard, Kyle; Deutsch, Eric W; Price, Nathan D; Glusman, Gustavo; Heavner, Benjamin D; Dinov, Ivo D; Ames, Joseph; Van Horn, John; Kramer, Roger; Hood, Leroy

    2015-11-01

    Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an "-ome to home" approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center's computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson's and Alzheimer's. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.
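
Similarity-based read-across can be illustrated with a generic k-nearest-neighbour sketch; this is not the REACH-across tool itself, and the descriptor vectors and property values are hypothetical stand-ins for structural/biological similarity data:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def read_across(query, knowns, k=3):
    """Similarity-weighted mean of the property over the k most similar
    substances (a generic k-NN gap-filling scheme)."""
    ranked = sorted(knowns, key=lambda s: cosine(query, s[0]), reverse=True)[:k]
    weights = [cosine(query, d) for d, _ in ranked]
    return sum(w * y for w, (_, y) in zip(weights, ranked)) / sum(weights)

# Hypothetical descriptor vectors and property values
knowns = [([1.0, 0.0, 1.0], 0.2), ([1.0, 1.0, 1.0], 0.4), ([0.0, 1.0, 0.0], 0.9)]
pred = read_across([1.0, 0.0, 1.0], knowns, k=2)
```

The prediction lands between the property values of the two most similar substances, which is the essence of gap-filling by intrapolation.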

  15. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  16. Laser welding of Ti-Ni type shape memory alloy

    International Nuclear Information System (INIS)

    Hirose, Akio; Araki, Takao; Uchihara, Masato; Honda, Keizoh; Kondoh, Mitsuaki.

    1990-01-01

    The present study was undertaken to apply laser welding to the joining of a shape memory alloy. Butt welding of a Ti-Ni type shape memory alloy was performed using a 10 kW CO2 laser. The laser-welded specimens successfully showed the shape memory effect and superelasticity, and these properties were approximately identical with those of the base metal. The change in superelasticity of the welded specimen during tension cycling was investigated; no significant changes in the stress-strain curves or residual strain were observed in the laser-welded specimen after the 50-cycle test. The weld metal exhibited a cellular dendritic structure. Electron diffraction analysis revealed that the phase of the weld metal was the TiNi phase of B2 structure, the same as the parent phase of the base metal, with oxide inclusions crystallized at the dendrite boundaries. However, oxygen contamination of the weld metal by laser welding did not occur, as there was almost no difference in oxygen content between the base metal and the weld metal. The transformation temperatures of the weld metal were almost the same as those of the base metal. From these results, laser welding is applicable to the joining of Ti-Ni type shape memory alloys. As an application of laser welding to new shape memory devices, a multiplex shape memory device of welded Ti-50.5 at% Ni and Ti-51.0 at% Ni was produced. The device showed a two-stage shape memory effect due to the difference in transformation temperature between the two shape memory alloys. (author)

  17. A Proposed Concentration Curriculum Design for Big Data Analytics for Information Systems Students

    Science.gov (United States)

    Molluzzo, John C.; Lawler, James P.

    2015-01-01

    Big Data is becoming a critical component of the Information Systems curriculum. Educators are enhancing gradually the concentration curriculum for Big Data in schools of computer science and information systems. This paper proposes a creative curriculum design for Big Data Analytics for a program at a major metropolitan university. The design…

  18. Big Data and Intelligence: Applications, Human Capital, and Education

    Directory of Open Access Journals (Sweden)

    Michael Landon-Murray

    2016-06-01

    Full Text Available The potential for big data to contribute to the US intelligence mission goes beyond bulk collection, social media and counterterrorism. Applications will speak to a range of issues of major concern to intelligence agencies, from military operations to climate change to cyber security. There are challenges too: procurement lags, data stovepiping, separating signal from noise, sources and methods, a range of normative issues, and, central to managing these challenges, human capital. These potential applications and challenges are discussed, and a closer look at what data scientists do in the Intelligence Community (IC) is offered. Effectively filling the ranks of the IC’s data science workforce will depend on the provision of well-trained data scientists from the higher education system. Program offerings at America’s top fifty universities will thus be surveyed (just a few years ago there were reportedly no degrees in data science). One Master’s program that has melded data science with intelligence is examined, as well as a university big data research center focused on security and intelligence. This discussion goes a long way to clarify the prospective uses of data science in intelligence while probing perhaps the key challenge to optimal application of big data in the IC.

  19. Examining the Big-Fish-Little-Pond Effect on Students' Self-Concept of Learning Science in Taiwan Based on the TIMSS Databases

    Science.gov (United States)

    Liou, Pey-Yan

    2014-08-01

    The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the student-level and school-level science achievement on student self-concept of learning science. The results indicated that student science achievement was positively associated with individual self-concept of learning science in both TIMSS 2003 and 2007. On the contrary, while school-average science achievement was negatively related to student self-concept in TIMSS 2003, it had no statistically significant relationship with student self-concept in TIMSS 2007. The findings of this study shed light on possible explanations for the existence of BFLPE and also lead to an international discussion on the generalization of BFLPE.
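
The hierarchical structure behind such analyses can be written out explicitly; the following is the standard two-level BFLPE specification (notation mine, not quoted from the paper):

```latex
% Level 1 (student i in school j)
\mathrm{SelfConcept}_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{Ach}_{ij} + e_{ij}
% Level 2 (school j)
\beta_{0j} = \gamma_{00} + \gamma_{01}\,\overline{\mathrm{Ach}}_{j} + u_{0j}
```

A BFLPE is present when the individual-level coefficient β1j is positive while the school-mean coefficient γ01 is negative, which is the pattern the TIMSS 2003 results exhibit.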

  20. LLNL's Big Science Capabilities Help Spur Over $796 Billion in U.S. Economic Activity Sequencing the Human Genome

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-28

    LLNL’s successful history of taking on big science projects spans beyond national security and has helped create billions of dollars per year in new economic activity. One example is LLNL’s role in helping sequence the human genome. Over $796 billion in new economic activity in over half a dozen fields has been documented since LLNL successfully completed this Grand Challenge.

  1. The Role of Social Responsibility in Big Business Practices

    OpenAIRE

    V A Gurinov

    2010-01-01

    The study of corporate social responsibility has become especially relevant in Russian science in the context of the development of big business that is able to assume significant social responsibilities. The article focuses on the nature and specificity of the social responsibility of big business in Russia. The levels of social responsibility and the arrangements for implementing social programmes are also highlighted.

  2. Classical propagation of strings across a big crunch/big bang singularity

    International Nuclear Information System (INIS)

    Niz, Gustavo; Turok, Neil

    2007-01-01

    One of the simplest time-dependent solutions of M theory consists of nine-dimensional Euclidean space times 1+1-dimensional compactified Milne space-time. With a further modding out by Z_2, the space-time represents two orbifold planes which collide and re-emerge, a process proposed as an explanation of the hot big bang [J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, Phys. Rev. D 64, 123522 (2001); P. J. Steinhardt and N. Turok, Science 296, 1436 (2002); N. Turok, M. Perry, and P. J. Steinhardt, Phys. Rev. D 70, 106004 (2004)]. When the two planes are near, the light states of the theory consist of winding M2-branes, describing fundamental strings in a particular ten-dimensional background. They suffer no blue-shift as the M theory dimension collapses, and their equations of motion are regular across the transition from big crunch to big bang. In this paper, we study the classical evolution of fundamental strings across the singularity in some detail. We also develop a simple semiclassical approximation to the quantum evolution which allows one to compute the quantum production of excitations on the string, and implement it in a simplified example.
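
For orientation, the compactified Milne factor of this background takes the standard two-dimensional form (a textbook expression, not quoted from the paper):

```latex
ds^2 = -dt^2 + t^2\,d\theta^2, \qquad \theta \cong \theta + \theta_0
```

The compact dimension shrinks to zero size at t = 0 and re-expands; the further Z_2 identification θ → -θ produces the two orbifold planes that collide and re-emerge.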

  3. 5th Annual Pan-European Science and Big Physics Symposium on March 5th, 2012, Zurich, Switzerland

    CERN Multimedia

    Balle, Ch

    2012-01-01

    The 5th Annual Pan-European Science and Big Physics Symposium on March 5th is a technical workshop that covers topics in the areas of control, measurement and diagnostics for accelerators, cyclotrons, tokamaks and telescopes. The symposium brings together over 60 scientists and engineers from major research labs around the world such as CERN, PSI, INFN, NPL, ESRF and other research institutions. Attend this event to share ideas and results and to learn from the presentations of your peers from different labs and experiments worldwide.

  4. Implications of Big Data for cell biology

    OpenAIRE

    Dolinski, Kara; Troyanskaya, Olga G.

    2015-01-01

    “Big Data” has surpassed “systems biology” and “omics” as the hottest buzzword in the biological sciences, but is there any substance behind the hype? Certainly, we have learned about various aspects of cell and molecular biology from the many individual high-throughput data sets that have been published in the past 15–20 years. These data, although useful as individual data sets, can provide much more knowledge when interrogated with Big Data approaches, such as applying integrative methods ...

  5. From ecological records to big data: the invention of global biodiversity.

    Science.gov (United States)

    Devictor, Vincent; Bensaude-Vincent, Bernadette

    2016-12-01

    This paper is a critical assessment of the epistemological impact of the systematic quantification of nature with the accumulation of big datasets on the practice and orientation of ecological science. We examine the contents of big databases and argue that it is not just accumulated information; records are translated into digital data in a process that changes their meanings. In order to better understand what is at stake in the 'datafication' process, we explore the context for the emergence and quantification of biodiversity in the 1980s, along with the concept of the global environment. In tracing the origin and development of the global biodiversity information facility (GBIF) we describe big data biodiversity projects as a techno-political construction dedicated to monitoring a new object: the global diversity. We argue that, biodiversity big data became a powerful driver behind the invention of the concept of the global environment, and a way to embed ecological science in the political agenda.

  6. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  7. The International Big History Association

    Science.gov (United States)

    Duffy, Michael; Duffy, D'Neil

    2013-01-01

    IBHA, the International Big History Association, was organized in 2010 and "promotes the unified, interdisciplinary study and teaching of history of the Cosmos, Earth, Life, and Humanity." This is the vision that Montessori embraced long before the discoveries of modern science fleshed out the story of the evolving universe. "Big…

  8. Big data in psychology: Introduction to the special issue.

    Science.gov (United States)

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
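
Theme (d), validating a predictive model on a separate hold-out sample, can be sketched in a few lines of standard-library Python (the data values are made up for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept on the development set."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mse(model, xs, ys):
    """Mean squared prediction error of (intercept, slope) on a sample."""
    a, b = model
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Develop the model on one data set ...
train_x, train_y = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8]
model = fit_line(train_x, train_y)
# ... and validate it on a separate hold-out sample
hold_x, hold_y = [5.0, 6.0], [10.1, 12.2]
holdout_error = mse(model, hold_x, hold_y)
```

A small hold-out error here indicates the fitted model generalizes beyond the data used to develop it, which is exactly the validation step the articles recommend.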

  9. Virginia Tech researchers find tiny bubbles a storehouse of knowledge

    OpenAIRE

    Trulove, Susan

    2005-01-01

Fluid inclusions -- tiny bubbles of fluid or vapor trapped inside rock as it forms -- are clues to the location of ores and even petroleum; and they are time capsules that contain insights into the power of volcanoes and hints of life in the universe.

  10. Small decisions with big impact on data analytics

    OpenAIRE

    Jana Diesner

    2015-01-01

    Big social data have enabled new opportunities for evaluating the applicability of social science theories that were formulated decades ago and were often based on small- to medium-sized samples. Big Data coupled with powerful computing has the potential to replace the statistical practice of sampling and estimating effects by measuring phenomena based on full populations. Preparing these data for analysis and conducting analytics involves a plethora of decisions, some of which are already em...

  11. Challenges in data science

    DEFF Research Database (Denmark)

    Carbone, Anna; Jensen, M.; Sato, Aki-Hiro

    2016-01-01

of global properties from locally interacting data entities and clustering phenomena demand suitable approaches and methodologies recently developed in the foundational area of Data Science by taking a Complex Systems standpoint. Here, we deal with challenges that can be summarized by the question: "What can Complex Systems Science contribute to Big Data?" Such a question can be reversed and brought to a superior level of abstraction by asking "What knowledge can be drawn from Big Data?" These aspects constitute the main motivation behind this article to introduce a volume containing a collection of papers presenting interdisciplinary advances in the Big Data area by methodologies and approaches typical of Complex Systems Science, Nonlinear Systems Science and Statistical Physics. (C) 2016 Elsevier Ltd. All rights reserved.

  12. The Role of Social Responsibility in Big Business Practices

    Directory of Open Access Journals (Sweden)

    V A Gurinov

    2010-06-01

    Full Text Available The study of corporate social responsibility has become especially relevant in national science in the context of the development of big business able to assume significant social responsibilities. The article focuses on the issues of the nature and specificity of social responsibility of big business in Russia. The levels of social responsibility and the arrangements for social programmes implementation are also highlighted.

  13. Tiny Ultraviolet Polarimeter for Earth Stratosphere from Space Investigation

    Science.gov (United States)

    Nevodovskyi, P. V.; Morozhenko, O. V.; Vidmachenko, A. P.; Ivakhiv, O.; Geraimchuk, M.; Zbrutskyi, O.

    2015-09-01

    One of the drivers of climate change (via stratospheric ozone concentration) is variation in the optical thickness of aerosols in the upper part of the atmosphere (at altitudes above 30 km). Therefore, the aerosol and gas components of the atmosphere are crucial in the study of the ultraviolet (UV) radiation reaching the Earth. Moreover, a scrupulous study of the aerosol component of the Earth's atmosphere at an altitude of 30 km (i.e., stratospheric aerosol), such as the size of particles, the real part of the refractive index, optical thickness and its horizontal structure, the concentration of ozone, and the upper border of the stratospheric ozone layer, is an important task in research on Earth climate change. At present, the Main Astronomical Observatory of the National Academy of Sciences (NAS) of Ukraine, the National Technical University of Ukraine "KPI" and the Lviv Polytechnic National University are engaged in developing methodologies for the study of stratospheric aerosol by means of an ultraviolet polarimeter on a microsatellite. So far, a sample of a tiny ultraviolet polarimeter (UVP) has been created, which is considered a basic model for carrying out space experiments on the impact of changes in stratospheric aerosols on both global and local climate.

  14. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broadband power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  15. Shape memory characteristics of sputter-deposited Ti-Ni thin films

    International Nuclear Information System (INIS)

    Miyazaki, Shuichi; Ishida, Akira.

    1994-01-01

    Ti-Ni shape memory alloy thin films were deposited using an RF magnetron sputtering apparatus. The as-sputtered films were heat-treated in order to crystallize and memorize. After the heat treatment, the shape memory characteristics have been investigated using DSC and thermomechanical tests. Upon cooling the thin films, the solution-treated films showed a single peak in the DSC curve indicating a single stage transformation occurring from B2 to the martensitic phase, while the age-treated films showed double peaks indicating a two-stage transformation, i.e., from B2 to the R-phase, then to the martensitic phase. A perfect shape memory effect was achieved in these sputter-deposited Ti-Ni thin films in association both with the R-phase and martensitic transformations. Transformation temperatures increased linearly with increasing applied stress. The transformation strain also increased with increasing stress. The shape memory characteristics were strongly affected by heat-treatment conditions. (author)

  16. Effect of phase formation on valence band photoemission and photoresonance study of Ti/Ni multilayers using synchrotron radiation

    International Nuclear Information System (INIS)

    Bhatt, Pramod; Chaudhari, S.M.

    2006-01-01

    This paper presents an investigation of Ti-Ni alloy phase formation and its effect on valence band (VB) photoemission and photoresonance of as-deposited as well as annealed Ti/Ni multilayers (MLs) up to 600 deg. C using synchrotron radiation. For this purpose, [Ti (50 Å)/Ni (50 Å)] × 10 ML structures were deposited using the electron-beam evaporation technique under ultra-high vacuum (UHV) conditions. Formation of different phases of Ti-Ni alloy due to annealing treatment has been confirmed by the X-ray diffraction (XRD) technique. The XRD pattern corresponding to the as-deposited ML sample shows the crystalline nature of both the Ti and Ni deposited layers, whereas the 300 deg. C annealed ML sample shows a solid-state reaction (SSR) leading to amorphization and subsequent recrystallization at higher annealing temperatures (≥400 deg. C) with the formation of TiNi, TiNi3 and Ti2Ni alloy phases. The survey scans corresponding to the 400, 500 and 600 deg. C annealed ML samples show interdiffusion and intermixing of Ni atoms into Ti layers, leading to chemical Ti-Ni alloy phase formation at the interface. The corresponding VB spectra recorded using synchrotron radiation at 134 eV on the as-deposited ML sample with successive sputtering show alternating photoemission bands due to Ti 3d and Ni 3d, respectively, indicating that there is no mixing of the consequent layers or any phase formation at the interface during deposition. However, ML samples annealed at higher temperatures, particularly at 400, 500 and 600 deg. C, show a clear shift of the Ni 3d band and its satellite peak position to the higher BE side, indicating Ti-Ni alloy phase formation. In addition to this, a reduction of the satellite peak intensity and the Ni 3d density of states (DOS) near the Fermi level is also observed due to Ti-Ni phase formation with higher annealing temperatures. The variable photon energy VB measurements on the as-deposited and 400 deg. C annealed ML samples confirm the existence and BE position of the observed Ni 3d satellite

  17. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In daily terms we call the current era the Modern Era, which in the field of Information Technology can also be named the era of Big Data. Our daily lives in today's world are rapidly advancing, never quenching one's thirst. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis in the area of healthcare using big data and analytics. The main purpose is to emphasize the usage of the big data which is being stored all the time, helping to look back in the history; but this is the time to emphasize analysis to improve medication and services. Although many big data implementations happen to be in-house developments, this proposed implementation aims to propose a broader extent using Hadoop, which just happens to be the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to the conventional techniques available.

  18. How Big Science Came to Long Island: the Birth of Brookhaven Lab (429th Brookhaven Lecture)

    International Nuclear Information System (INIS)

    Crease, Robert P.

    2007-01-01

    Robert P. Crease, historian for the U.S. Department of Energy's Brookhaven National Laboratory and Chair of the Philosophy Department at Stony Brook University, will give two talks on the Laboratory's history on October 31 and December 12. Crease's October 31 talk, titled 'How Big Science Came to Long Island: The Birth of Brookhaven Lab,' will cover the founding of the Laboratory soon after World War II as a peacetime facility to construct and maintain basic research facilities, such as nuclear reactors and particle accelerators, that were too large for single institutions to build and operate. He will discuss the key figures involved in starting the Laboratory, including Nobel laureates I.I. Rabi and Norman Ramsey, as well as Donald Dexter Van Slyke, one of the most renowned medical researchers in American history. Crease also will focus on the many problems that had to be overcome in creating the Laboratory and designing its first big machines, as well as the evolving relations of the Laboratory with the surrounding Long Island community and news media. Throughout his talk, Crease will tell fascinating stories about Brookhaven's scientists and their research.

  19. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

    Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete, excavations of major sites throughout the region. Although there are considerable advantages to such large–scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large–scale excavations in the southeast in light of traditional views within the discipline that the region has contributed little to the ‘big questions’ of American archaeology. Recently published analyses of decades old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre–existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  20. Fabrication of TiNi/CFRP smart composite using cold drawn TiNi wires

    Science.gov (United States)

    Xu, Ya; Otsuka, Kazuhiro; Toyama, Nobuyuki; Yoshida, Hitoshi; Jang, Byung-Koog; Nagai, Hideki; Oishi, Ryutaro; Kishi, Teruo

    2002-07-01

    In recent years, pre-strained TiNi shape memory alloys (SMA) have been used for fabricating smart structure with carbon fibers reinforced plastics (CFRP) in order to suppress microscopic mechanical damages. However, since the cure temperature of CFRP is higher than the reverse transformation temperatures of TiNi SMA, special fixture jigs have to be used for keeping the pre-strain during fabrication, which restricted its practical application. In order to overcome this difficulty, we developed a new method to fabricate SMA/CFRP smart composites without using special fixture jigs by controlling the transformation temperatures of SMA during fabrication. This method consists of using heavily cold-worked wires to increase the reverse transformation temperatures, and of using flash electrical heating of the wires after fabrication in order to decrease the reverse transformation temperatures to a lower temperature range again without damaging the epoxy resin around SMA wires. By choosing proper cold-working rate and composition of TiNi alloys, the reverse transformation temperatures were well controlled, and the TiNi/CFRP hybrid smart composite was fabricated without using special fixture jigs. The damage suppressing effect of cold drawn wires embedded in CFRP was confirmed.

  1. Time, space, stars and man the story of the Big Bang

    CERN Document Server

    Woolfson, Michael M

    2013-01-01

    The three greatest scientific mysteries, which remain poorly understood, are the origin of the universe, the origin of life and the development of consciousness. This book describes the processes preceding the Big Bang, the creation of matter, the concentration of that matter into stars and planets, the development of simple life forms and the theory of evolution that has given higher life forms, including mankind. Readership: Members of the general public who have an interest in popular science. There are many popular and excellent science books that present various aspects of science. However, this book follows a narrow scientific pathway from the Big Bang to mankind, and depicts the causal relationship between each step and the next. The science covered will be enough to satisfy most readers. Many important areas of science are dealt with, and these include cosmology, particle physics, atomic physics, galaxy and star formation, planet formation and aspects of evolution. The necessary science is described i...

  2. The application of Tiny Triplet Finder (TTF) in BTeV pixel trigger

    International Nuclear Information System (INIS)

    Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Shi, Z.; Fermilab

    2006-01-01

    We describe a track segment recognition scheme called the Tiny Triplet Finder (TTF) that involves grouping of three hits satisfying a constraint such as forming a straight line. The TTF performs this O(n³) function in O(n) time, where n is the number of hits in each detector plane. The word "tiny" reflects the fact that the FPGA resource usage is small. The number of logic elements needed for the TTF is O(N log(N)), where N is the number of bins in the coordinate considered, which for large N is significantly smaller than the O(N²) needed for typical implementations of similar functions. The TTF is also suitable for software implementations as well as many other pattern recognition problems.
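    The geometric constraint the abstract refers to can be illustrated with a minimal sketch: for three equally spaced, parallel detector planes with binned hit coordinates, three hits lie on a straight line exactly when the middle hit equals the average of the outer two. The function name, bin representation, and tolerance parameter below are illustrative assumptions; this toy loop does not reproduce the O(n) bit-shift FPGA implementation described in the paper.

    ```python
    def find_triplets(plane_a, plane_b, plane_c, nbins, tol=0):
        """Toy triplet finder over three equally spaced detector planes.

        Hits are integer bin indices in [0, nbins). A triplet (a, m, c) is
        accepted when m is within `tol` bins of the midpoint of a and c,
        i.e. the three hits form an (approximately) straight track segment.
        """
        # Mark occupied bins of the middle plane so the membership test
        # inside the loop is O(1).
        b_occupied = [False] * nbins
        for hit in plane_b:
            b_occupied[hit] = True

        triplets = []
        for a in plane_a:
            for c in plane_c:
                mid = (a + c) // 2
                for m in range(max(0, mid - tol), min(nbins, mid + tol + 1)):
                    if b_occupied[m]:
                        triplets.append((a, m, c))
        return triplets

    # Hits at bins 2, 5, 8 line up across the three planes.
    print(find_triplets([2], [5], [8], nbins=16))  # [(2, 5, 8)]
    ```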

  3. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology

    Directory of Open Access Journals (Sweden)

    Brittany M. Salazar

    2016-12-01

    Full Text Available Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, due to limited detected driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring “big data” applications in pediatric oncology. Computational strategies derived from big data science–network- and machine learning-based modeling and drug repositioning—hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input, from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which “big data” and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases.

  4. Science Fiction and the Big Questions

    Science.gov (United States)

    O'Keefe, M.

    Advocates of space science promote investment in science education and the development of new technologies necessary for space travel. Success in these areas requires an increase of interest and support among the general public. What role can entertainment media play in inspiring the public, especially young people, to support the development of space science? Such inspiration is badly needed. Science education and funding in the United States are in a state of crisis. This bleak situation exists during a boom in the popularity of science-oriented television shows and science fiction movies. This paper draws on interviews with professionals in science, technology, engineering and mathematics (STEM) fields, as well as students interested in those fields. The interviewees were asked about their lifelong media-viewing habits. Analysis of these interviews, along with examples from popular culture, suggests that science fiction can be a valuable tool for space advocates. Specifically, the aspects of character, story, and special effects can provide viewers with inspiration and a sense of wonder regarding space science and the prospect of long-term human space exploration.

  5. Effect of Substrate Roughness on Adhesion and Structural Properties of Ti-Ni Shape Memory Alloy Thin Film.

    Science.gov (United States)

    Kim, Donghwan; Lee, Hyunsuk; Bae, Joohyeon; Jeong, Hyomin; Choi, Byeongkeun; Nam, Taehyun; Noh, Jungpil

    2018-09-01

    Ti-Ni shape memory alloy (SMA) thin films are very attractive materials for industrial and medical applications such as micro-actuators, micro-sensors, and stents for blood vessels. An important property besides the shape memory effect in the application of SMA thin films is the adhesion between the film and the substrate. When using thin films as micro-actuators or micro-sensors in MEMS, the film must be strongly adhered to the substrate. On the other hand, when using SMA thin films in medical devices such as stents, the deposited alloy thin film must be easily separable from the substrate for efficient processing. In this study, we investigated the effect of substrate roughness on the adhesion of Ti-Ni SMA thin films, as well as the structural properties and phase-transformation behavior of the fabricated films. Ti-Ni SMA thin films were deposited onto etched glass substrates with magnetron sputtering. Radio frequency plasma was used for etching the substrate. The adhesion properties were investigated through a progressive scratch test. Structural properties of the films were determined via field emission scanning electron microscopy, X-ray diffraction (XRD) measurements and energy-dispersive X-ray spectroscopy analysis. Phase transformation behaviors were observed with differential scanning calorimetry and low-temperature XRD. A Ti-Ni SMA thin film deposited onto a rough substrate provides higher adhesive strength than one deposited onto a smooth substrate. However, the roughness of the substrate has no influence on the growth and crystallization of the Ti-Ni SMA thin films.

  6. The Promise and Potential Perils of Big Data for Advancing Symptom Management Research in Populations at Risk for Health Disparities.

    Science.gov (United States)

    Bakken, Suzanne; Reame, Nancy

    2016-01-01

    Symptom management research is a core area of nursing science and one of the priorities for the National Institute of Nursing Research, which specifically focuses on understanding the biological and behavioral aspects of symptoms such as pain and fatigue, with the goal of developing new knowledge and new strategies for improving patient health and quality of life. The types and volume of data related to the symptom experience, symptom management strategies, and outcomes are increasingly accessible for research. Traditional data streams are now complemented by consumer-generated (i.e., quantified self) and "omic" data streams. Thus, the data available for symptom science can be considered big data. The purposes of this chapter are to (a) briefly summarize the current drivers for the use of big data in research; (b) describe the promise of big data and associated data science methods for advancing symptom management research; (c) explicate the potential perils of big data and data science from the perspective of the ethical principles of autonomy, beneficence, and justice; and (d) illustrate strategies for balancing the promise and the perils of big data through a case study of a community at high risk for health disparities. Big data and associated data science methods offer the promise of multidimensional data sources and new methods to address significant research gaps in symptom management. If nurse scientists wish to apply big data and data science methods to advance symptom management research and promote health equity, they must carefully consider both the promise and perils.

  7. BIG: a large-scale data integration tool for renal physiology.

    Science.gov (United States)

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.

  8. How Big is Earth?

    Science.gov (United States)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big Is Earth? expands on the Eratosthenes Project through the online learning environment provided by the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss their ideas, brainstorming solutions in a discussion forum. There is an ongoing database of student measurements and another database to collect data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
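    Eratosthenes' calculation described above is simple proportional reasoning: the shadow angle between two sites equals their angular separation along Earth's surface, so the full 360 degrees scales the site-to-site distance up to the whole circumference. A minimal sketch (the function name is ours; the 7.2 degree / 800 km figures are the commonly quoted values for Alexandria and Syene):

    ```python
    def earth_circumference(shadow_angle_deg: float, distance_km: float) -> float:
        """Estimate Earth's circumference from a noon shadow angle.

        shadow_angle_deg: angle of the shadow cast by a vertical stick at one
        site at local noon, assuming the Sun is directly overhead (zero shadow)
        at the second site.
        distance_km: north-south surface distance between the two sites.
        """
        # The shadow angle is the fraction of a full circle separating the
        # two sites, so scale the distance up accordingly.
        return 360.0 / shadow_angle_deg * distance_km

    # Eratosthenes' classic figures: ~7.2 degrees at Alexandria, ~800 km
    # (about 5000 stadia) from Syene, giving roughly 40,000 km.
    print(round(earth_circumference(7.2, 800)))  # 40000
    ```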

  9. Big data analysis new algorithms for a new society

    CERN Document Server

    Stefanowski, Jerzy

    2016-01-01

    This edited volume is devoted to Big Data Analysis from a Machine Learning standpoint as presented by some of the most eminent researchers in this area. It demonstrates that Big Data Analysis opens up new research problems which were either never considered before, or were only considered within a limited range. In addition to providing methodological discussions on the principles of mining Big Data and the difference between traditional statistical data analysis and newer computing frameworks, this book presents recently developed algorithms affecting such areas as business, financial forecasting, human mobility, the Internet of Things, information networks, bioinformatics, medical systems and life science. It explores, through a number of specific examples, how the study of Big Data Analysis has evolved and how it has started and will most likely continue to affect society. While the benefits brought upon by Big Data Analysis are underlined, the book also discusses some of the warnings that have been issued...

  10. Big data, advanced analytics and the future of comparative effectiveness research.

    Science.gov (United States)

    Berger, Marc L; Doban, Vitalii

    2014-03-01

    The intense competition that accompanied the growth of internet-based companies ushered in the era of 'big data' characterized by major innovations in processing of very large amounts of data and the application of advanced analytics including data mining and machine learning. Healthcare is on the cusp of its own era of big data, catalyzed by the changing regulatory and competitive environments, fueled by growing adoption of electronic health records, as well as efforts to integrate medical claims, electronic health records and other novel data sources. Applying the lessons from big data pioneers will require healthcare and life science organizations to make investments in new hardware and software, as well as in individuals with different skills. For life science companies, this will impact the entire pharmaceutical value chain from early research to postcommercialization support. More generally, this will revolutionize comparative effectiveness research.

  11. DEVELOPING THE TRANSDISCIPLINARY AGING RESEARCH AGENDA: NEW DEVELOPMENTS IN BIG DATA.

    Science.gov (United States)

    Callaghan, Christian William

    2017-07-19

    In light of dramatic advances in big data analytics and the application of these advances in certain scientific fields, new potentialities exist for breakthroughs in aging research. Translating these new potentialities to research outcomes for aging populations, however, remains a challenge, as underlying technologies which have enabled exponential increases in 'big data' have not yet enabled a commensurate era of 'big knowledge,' or similarly exponential increases in biomedical breakthroughs. Debates also reveal differences in the literature, with some arguing big data analytics heralds a new era associated with the 'end of theory' or makes the scientific method obsolete, where correlation supersedes causation, whereby science can advance without theory and hypothesis testing. On the other hand, others argue theory cannot be subordinate to data, no matter how comprehensive data coverage can ultimately become. Given these two tensions, namely between exponential increases in data absent exponential increases in biomedical research outputs, and between the promise of comprehensive data coverage and data-driven inductive versus theory-driven deductive modes of enquiry, this paper seeks to provide a critical review of certain theory and literature that offers useful perspectives on developments in big data analytics and their theoretical implications for aging research. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  12. Lowering the barriers for accessing distributed geospatial big data to advance spatial data science: the PolarHub solution

    Science.gov (United States)

    Li, W.

    2017-12-01

    Data is the crux of science. The widespread availability of big data today is of particular importance for fostering new forms of geospatial innovation. This paper reports a state-of-the-art solution that addresses a key cyberinfrastructure research problem—providing ready access to big, distributed geospatial data resources on the Web. We first formulate this data-access problem and introduce its indispensable elements, including identifying the cyber-location, space and time coverage, theme, and quality of the dataset. We then propose strategies to tackle each data-access issue and make the data more discoverable and usable for geospatial data users and decision makers. Among these strategies is large-scale web crawling as a key technique to support automatic collection of online geospatial data that are highly distributed, intrinsically heterogeneous, and known to be dynamic. To better understand the content and scientific meanings of the data, methods including space-time filtering, ontology-based thematic classification, and service quality evaluation are incorporated. To serve a broad scientific user community, these techniques are integrated into an operational data crawling system, PolarHub, which is also an important cyberinfrastructure building block to support effective data discovery. A series of experiments were conducted to demonstrate the outstanding performance of the PolarHub system. We expect this work to contribute significantly in building the theoretical and methodological foundation for data-driven geography and the emerging spatial data science.

  13. Big Data, epistemology and causality: Knowledge in and knowledge out in EXPOsOMICS

    Directory of Open Access Journals (Sweden)

    Stefano Canali

    2016-09-01

    Full Text Available Recently, it has been argued that the use of Big Data transforms the sciences, making data-driven research possible and studying causality redundant. In this paper, I focus on the claim on causal knowledge by examining the Big Data project EXPOsOMICS, whose research is funded by the European Commission and considered capable of improving our understanding of the relation between exposure and disease. While EXPOsOMICS may seem the perfect exemplification of the data-driven view, I show how causal knowledge is necessary for the project, both as a source for handling complexity and as an output for meeting the project’s goals. Consequently, I argue that data-driven claims about causality are fundamentally flawed and causal knowledge should be considered a necessary aspect of Big Data science. In addition, I present the consequences of this result on other data-driven claims, concerning the role of theoretical considerations. I argue that the importance of causal knowledge and other kinds of theoretical engagement in EXPOsOMICS undermine theory-free accounts and suggest alternative ways of framing science based on Big Data.

  14. [Application of big data analyses for musculoskeletal cell differentiation].

    Science.gov (United States)

    Imai, Yuuki

    2016-04-01

    Next-generation sequencing has strongly advanced big data analyses in the life sciences. Among the various kinds of sequencing data sets, epigenetic platforms have become an important key to clarifying questions on broad and detailed phenomena in various forms of life. This report introduces research on the identification of novel transcription factors in osteoclastogenesis using DNase-seq. Big data in musculoskeletal research will be organized by the IFMRS and is becoming ever more crucial.

  15. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  16. The application of Tiny Triplet Finder (TTF) in BTeV pixel trigger

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Shi, Z.; /Fermilab

    2006-03-01

    We describe a track segment recognition scheme called the Tiny Triplet Finder (TTF) that involves grouping three hits satisfying a constraint such as forming a straight line. The TTF performs this O(n³) function in O(n) time, where n is the number of hits in each detector plane. The word "tiny" reflects the fact that the FPGA resource usage is small. The number of logic elements needed for the TTF is O(N log N), where N is the number of bins in the coordinate considered, which for large N is significantly smaller than the O(N²) needed for typical implementations of similar functions. The TTF is also suitable for software implementations, as well as many other pattern recognition problems.
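    A simplified software analog of the TTF idea can be sketched as follows (hypothetical code, not the BTeV implementation; the real TTF gets its O(n) behavior from parallel bit-shift coincidence logic in the FPGA, which this sequential sketch does not reproduce). The key trick survives, though: bin the middle plane once, so each candidate outer pair is resolved by a constant-time lookup instead of a scan over all O(n³) hit combinations:

    ```python
    from collections import defaultdict

    def find_triplets(p0, p1, p2, bin_width=1.0, tol=0):
        """Find hits (a, b, c) on three equally spaced planes with
        b ~ (a + c) / 2, i.e. three hits forming a straight line.
        The middle plane is binned once; each outer pair then needs
        only a lookup in the bin where the middle hit must fall."""
        middle = defaultdict(list)
        for b in p1:
            middle[int(b // bin_width)].append(b)
        triplets = []
        for a in p0:
            for c in p2:
                want = int(((a + c) / 2) // bin_width)
                # tol widens the search to neighboring bins for resolution effects
                for k in range(want - tol, want + tol + 1):
                    for b in middle.get(k, []):
                        triplets.append((a, b, c))
        return triplets
    ```

    With real detector resolutions one would match within a tolerance window rather than an exact bin, which is what the `tol` parameter gestures at.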

  17. Homogenization of stationary Navier–Stokes equations in domains with tiny holes

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Lu, Y.

    2015-01-01

    Roč. 17, č. 2 (2015), s. 381-392 ISSN 1422-6928 Keywords: compressible Navier-Stokes system * homogenization * tiny holes Subject RIV: BA - General Mathematics Impact factor: 1.023, year: 2015 http://link.springer.com/article/10.1007%2Fs00021-015-0200-2

  18. Cardiovascular proteomics in the era of big data: experimental and computational advances.

    Science.gov (United States)

    Lam, Maggie P Y; Lau, Edward; Ng, Dominic C M; Wang, Ding; Ping, Peipei

    2016-01-01

    Proteomics plays an increasingly important role in our quest to understand cardiovascular biology. Fueled by analytical and computational advances in the past decade, proteomics applications can now go beyond merely inventorying protein species, and address sophisticated questions on cardiac physiology. The advent of massive mass spectrometry datasets has in turn led to increasing intersection between proteomics and big data science. Here we review new frontiers in technological developments and their applications to cardiovascular medicine. The impact of big data science on cardiovascular proteomics investigations and translation to medicine is highlighted.

  19. Cytocompatibility evaluation and surface characterization of TiNi deformed by high-pressure torsion

    Energy Technology Data Exchange (ETDEWEB)

    Awang Shri, Dayangku Noorfazidah, E-mail: AWANGSHRI.Dayangku@nims.go.jp [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Structural Materials Unit, National Institute for Materials Science, Tsukuba, Ibaraki 305-0047 (Japan); Tsuchiya, Koichi [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Structural Materials Unit, National Institute for Materials Science, Tsukuba, Ibaraki 305-0047 (Japan); Yamamoto, Akiko [Biomaterials Unit, International Center for Material Nanoarchitectonics (WPI-MANA), National Institute for Materials Science, Namiki 1-1, Tsukuba, Ibaraki 305-0044 (Japan)

    2014-10-01

    Effect of high-pressure torsion (HPT) deformation on biocompatibility and surface chemistry of TiNi was systematically investigated. Ti–50 mol% Ni was subjected to HPT straining for different numbers of turns, N = 0.25, 0.5, 1, 5 and 10 at a rotation speed of 1 rpm. X-ray photoelectron spectroscopy observations after 7 days of cell culture revealed the changes in the surface oxide composition, enrichment of Ti and detection of nitrogen derived from organic molecules in the culture medium. Plating efficiency of L929 cells was slightly increased by HPT deformation though no significant difference was observed. Albumin adsorption was higher in HPT-deformed samples, while vitronectin adsorption was peaked at N = 1. HPT deformation was also found to effectively suppress the Ni ion release from the TiNi samples into the cell culture medium even after the low degree of deformation at N = 0.25. - Highlights: • Nanostructured Ti–50 mol%Ni alloy was produced using high-pressure torsion. • HPT deformation improved L929 growth on TiNi samples. • Changes in surface chemistry were observed in HPT deformed samples. • Protein adsorption behavior was influenced by the surface chemistry. • Ni ion release was suppressed in HPT deformed samples.

  20. Research Ethics in Big Data.

    Science.gov (United States)

    Hammer, Marilyn J

    2017-05-01

    The ethical conduct of research includes, in part, patient agreement to participate in studies and the protection of health information. In the evolving world of data science and the accessibility of large quantities of web-based data created by millions of individuals, novel methodologic approaches to answering research questions are emerging. This article explores research ethics in the context of big data.

  1. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  2. Delivering Science from Big Data

    Science.gov (United States)

    Quinn, Peter Joseph

    2015-08-01

    The SKA will be capable of producing a stream of science data products that are Exa-scale in terms of their storage and processing requirements. This Google-scale enterprise is attracting considerable international interest and excitement from within the industrial and academic communities. In this paper we examine the data flow, storage and processing requirements of a number of key SKA survey science projects to be executed on the baseline SKA1 configuration. Based on a set of conservative assumptions about trends for HPC and storage costs, and the data flow process within the SKA Observatory, it is apparent that survey projects of the scale proposed will potentially drive construction and operations costs beyond the current anticipated SKA1 budget. This implies a sharing of the resources and costs to deliver SKA science between the community and what is contained within the SKA Observatory. A similar situation was apparent to the designers of the LHC more than 10 years ago. We propose that it is time for the SKA project and broader community to consider the effort and process needed to design and implement a distributed science data system that leans on the lessons of other projects and looks to recent developments in Cloud technologies to ensure an affordable, effective and global achievement of science goals.

  3. Tiny Integrated Network Analyzer for Noninvasive Measurements of Electrically Small Antennas

    DEFF Research Database (Denmark)

    Buskgaard, Emil Feldborg; Krøyer, Ben; Tatomirescu, Alexandru

    2016-01-01

    the system. The tiny integrated network analyzer is a stand-alone Arduino-based measurement system that utilizes the transmit signal of the system under test as its reference. It features a power meter with triggering ability, on-board memory, universal serial bus, and easy extendibility with general...

  4. Implementing the “Big Data” Concept in Official Statistics

    Directory of Open Access Journals (Sweden)

    О. V.

    2017-02-01

    Full Text Available Big data is a huge resource that needs to be used at all levels of economic planning. The article is devoted to the study of the development of the concept of "Big Data" in the world and its impact on the transformation of the statistical simulation of economic processes. Statistics at the current stage should take into account the complex system of international economic relations, which functions in the conditions of globalization and brings new forms of economic development to small open economies. Statistical science should take into account such phenomena as the gig economy, the sharing economy, institutional factors, etc. The concepts of "Big Data" and open data are analyzed, and problems of implementing "Big Data" in official statistics are shown. Ways of implementing "Big Data" in the official statistics of Ukraine through active use of the technological capabilities of mobile operators, navigation systems, surveillance cameras, social networks, etc. are presented. The possibilities of using "Big Data" in different sectors of the economy, including at the level of companies, are shown. The problems of storing large volumes of data are highlighted. The study shows that "Big Data" is a huge resource that should be used across the Ukrainian economy.

  5. Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    OpenAIRE

    Zhuge, Hai

    2015-01-01

    Big data research has attracted great attention in science, technology, industry and society. It is developing with the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenge have not been recognized, and its own methodology has not been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing ...

  6. Cartography in the Age of Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    WANG Jiayao

    2017-10-01

    Full Text Available Cartography is an ancient science with almost as long a history as the world's oldest cultures. Since ancient times, the movement and change of all things and phenomena, including human activities, have been carried out in a certain time and space. The development of science and technology and the progress of social civilization have made social management and governance more and more dependent on time and space. The information sources, themes, content, carriers, forms, production methods and application methods of maps differ across historical periods, so their all-round value differs as well. With the arrival of the big data age, the scientific paradigm has entered the era of the "data-intensive" paradigm, and so has cartography, which now shows obvious characteristics of big data science. All big data are caused by the movement and change of things and phenomena in the geographic world, so they have spatial and temporal characteristics and cannot be separated from spatial and temporal references. Therefore, big data is essentially big spatio-temporal data. Since the late 1950s and early 1960s, modern cartography, that is, cartography in the information age, has taken spatio-temporal data as its object and focused on the processing and expression of spatio-temporal data, but not on the large-scale, multi-source, heterogeneous and multi-dimensional dynamic data flows (or flow data) from the sky to the sea. The real-time dynamic nature, thematic pertinence, content complexity, carrier diversification, personalized forms of expression, modernized production methods and ubiquitous application of today's maps are incomparable to any past period, which has led to great changes in the theory, technology and application systems of cartography. All these changes have occurred in the 60 years since the late 1950s and early 1960s, so this article was written to commemorate the 60th anniversary of the "Acta Geodaetica et Cartographica Sinica".

  7. Reviews Book: Extended Project Student Guide Book: My Inventions Book: ASE Guide to Research in Science Education Classroom Video: The Science of Starlight Software: SPARKvue Book: The Geek Manifesto Ebook: A Big Ball of Fire Apps

    Science.gov (United States)

    2014-05-01

    WE RECOMMEND Level 3 Extended Project Student Guide A non-specialist, generally useful and nicely put together guide to project work ASE Guide to Research in Science Education Few words wasted in this handy introduction and reference The Science of Starlight Slow but steady DVD covers useful ground SPARKvue Impressive software now available as an app WORTH A LOOK My Inventions and Other Writings Science, engineering, autobiography, visions and psychic phenomena mixed in a strange but revealing concoction The Geek Manifesto: Why Science Matters More enthusiasm than science, but a good motivator and interesting A Big Ball of Fire: Your questions about the Sun answered Free iTunes download made by and for students goes down well APPS Collider visualises LHC experiments ... Science Museum app enhances school trips ... useful information for the Cambridge Science Festival

  8. Complementary Social Science?

    DEFF Research Database (Denmark)

    Blok, Anders; Pedersen, Morten Axel

    2014-01-01

    The rise of Big Data in the social realm poses significant questions at the intersection of science, technology, and society, including in terms of how new large-scale social databases are currently changing the methods, epistemologies, and politics of social science. In this commentary, we address … of measurement device deployed. At the same time, however, we also expect new interferences and polyphonies to arise at the intersection of Big and Small Data, provided that these are, so to speak, mixed with care. These questions, we stress, are important not only for the future of social science methods …

  9. Is big data risk assessment a novelty?

    NARCIS (Netherlands)

    Swuste, P.H.J.J.

    2016-01-01

    Objective: What metaphors, models and theories were developed in the safety science domain? And which research was based upon ‘big data’? Method: The study was confined to original articles and documents, written in English or Dutch from the period under consideration. Results and conclusions: From

  10. The phytotronist and the phenotype: plant physiology, Big Science, and a Cold War biology of the whole plant.

    Science.gov (United States)

    Munns, David P D

    2015-04-01

    This paper describes how, from the early twentieth century, and especially in the early Cold War era, plant physiologists considered their discipline ideally suited among all the plant sciences to study and explain biological functions and processes, and ranked their discipline among the dominant forms of the biological sciences. At their apex in the late 1960s, the plant physiologists laid claim to having discovered nothing less than the "basic laws of physiology." This paper unwraps that claim, showing that it emerged from the construction of monumental big science laboratories known as phytotrons that gave control over the growing environment. Control meant that plant physiologists claimed to be able to produce a standard phenotype valid for experimental biology. Invoking the standards of the physical sciences, the plant physiologists heralded basic biological science from the phytotron-produced phenotype. In the context of the Cold War era, the ability to pursue basic science represented the highest pinnacle of standing within the scientific community. More broadly, I suggest that by recovering the history of an underappreciated discipline, plant physiology, and by establishing the centrality of the story of the plant sciences in the history of biology, historians can understand the massive changes wrought to biology by the conceptual emergence of the molecular understanding of life, the dominance of the discipline of molecular biology, and the rise of biotechnology in the 1980s. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are often mentioned in relation to big data entail? By way of introduction to

  12. Passport to the Big Bang moves across the road

    CERN Document Server

    Corinne Pralavorio

    2015-01-01

    The ATLAS platform of the Passport to the Big Bang circuit has been relocated in front of the CERN Reception.   The ATLAS platform of the Passport to the Big Bang, outside the CERN Reception building. The Passport to the Big Bang platform of the ATLAS Experiment has been moved in front of the CERN Reception to make it more visible and accessible. It had to be dismantled and moved from its previous location in the garden of the Globe of Science and Innovation due to the major refurbishment work in progress on the Globe, and is now fully operational in its new location on the other side of the road, in the Main Reception car-park. The Passport to the Big Bang circuit, inaugurated in 2013, comprises ten platforms installed in front of ten CERN sites and aims to help local residents and visitors to the region understand CERN's research. Dedicated Passport to the Big Bang flyers, containing all necessary information and riddles for you to solve, are available at the CERN Rec...

  13. Towards cloud based big data analytics for smart future cities

    OpenAIRE

    Khan, Zaheer; Anjum, Ashiq; Soomro, Kamran; Tahir, Muhammad

    2015-01-01

    A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. A proto...

  14. Enhancing Teachers' Awareness About Relations Between Science and Religion. The Debate Between Steady State and Big Bang Theories

    Science.gov (United States)

    Bagdonas, Alexandre; Silva, Cibelle Celestino

    2015-11-01

    Educators advocate that science education can help the development of more responsible worldviews when students learn not only scientific concepts, but also about science, or "nature of science". Cosmology can help the formation of worldviews because this topic is embedded in socio-cultural and religious issues. Indeed, during the Cold War period, the cosmological controversy between Big Bang and Steady State theory was tied up with political and religious arguments. The present paper discusses a didactic sequence developed for and applied in a pre-service science teacher-training course on history of science. After studying the historical case, pre-service science teachers discussed how to deal with possible conflicts between scientific views and students' personal worldviews related to religion. The course focused on the study of primary and secondary sources about cosmology and religion written by cosmologists such as Georges Lemaître, Fred Hoyle and the Pope Pius XII. We used didactic strategies such as short seminars given by groups of pre-service teachers, videos, computer simulations, role-play, debates and preparation of written essays. Along the course, most pre-service teachers emphasized differences between science and religion and pointed out that they do not feel prepared to conduct classroom discussions about this topic. Discussing the relations between science and religion using the history of cosmology turned into an effective way to teach not only science concepts but also to stimulate reflections about nature of science. This topic may contribute to increasing students' critical stance on controversial issues, without the need to explicitly defend certain positions, or disapprove students' cultural traditions. Moreover, pre-service teachers practiced didactic strategies to deal with this kind of unusual content.

  15. A Guided Inquiry on Hubble Plots and the Big Bang

    Science.gov (United States)

    Forringer, Ted

    2014-01-01

    In our science for non-science majors course "21st Century Physics," we investigate modern "Hubble plots" (plots of velocity versus distance for deep space objects) in order to discuss the Big Bang, dark matter, and dark energy. There are two potential challenges that our students face when encountering these topics for the…
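    The quantity students extract from such a plot is the slope of the velocity-distance relation, v = H0·d (the Hubble constant, in km/s per Mpc). A least-squares fit through the origin is short enough to show in full; the galaxy sample below is made up purely for illustration:

    ```python
    def hubble_constant(distances_mpc, velocities_km_s):
        """Least-squares slope through the origin for v = H0 * d,
        i.e. H0 = sum(d*v) / sum(d*d), in km/s per Mpc."""
        num = sum(d * v for d, v in zip(distances_mpc, velocities_km_s))
        den = sum(d * d for d in distances_mpc)
        return num / den

    # Hypothetical sample: distances in Mpc, recession velocities in km/s.
    H0 = hubble_constant([10, 20, 35, 50], [710, 1380, 2480, 3500])
    ```

    The reciprocal 1/H0 (after unit conversion) then gives the rough expansion age of the universe, which is where the classroom discussion of the Big Bang naturally picks up.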

  16. A Big Data Task Force Review of Advances in Data Access and Discovery Within the Science Disciplines of the NASA Science Mission Directorate (SMD)

    Science.gov (United States)

    Walker, R. J.; Beebe, R. F.

    2017-12-01

    One of the basic problems the NASA Science Mission Directorate (SMD) faces when dealing with preservation of scientific data is the variety of the data. This stems from the fact that NASA's involvement in the sciences spans a broad range of disciplines across the Science Mission Directorate: Astrophysics, Earth Sciences, Heliophysics and Planetary Science. As the ability of some missions to produce large data volumes has accelerated, the range of problems associated with providing adequate access to the data has demanded diverse approaches for data access. Although mission types, complexity and duration vary across the disciplines, the data can be characterized by four characteristics: velocity, veracity, volume, and variety. The rate of arrival of the data (velocity) must be addressed at the individual mission level, validation and documentation of the data (veracity), data volume and the wide variety of data products present huge challenges as the science disciplines strive to provide transparent access to their available data. Astrophysics, supports an integrated system of data archives based on frequencies covered (UV, visible, IR, etc.) or subject areas (extrasolar planets, extra galactic, etc.) and is accessed through the Astrophysics Data Center (https://science.nasa.gov/astrophysics/astrophysics-data-centers/). Earth Science supports the Earth Observing System (https://earthdata.nasa.gov/) that manages the earth science satellite data. The discipline supports 12 Distributed Active Archive Centers. Heliophysics provides the Space Physics Data Facility (https://spdf.gsfc.nasa.gov/) that supports the heliophysics community and Solar Data Analysis Center (https://umbra.nascom.nasa.gov/index.html) that allows access to the solar data. The Planetary Data System (https://pds.nasa.gov) is the main archive for planetary science data. It consists of science discipline nodes (Atmospheres, Geosciences, Cartography and Imaging Sciences, Planetary Plasma Interactions

  17. Challenges of Big Data in Educational Assessment

    Science.gov (United States)

    Gibson, David C.; Webb, Mary; Ifenthaler, Dirk

    2015-01-01

    This paper briefly discusses four measurement challenges of data science or "big data" in educational assessments that are enabled by technology: 1. Dealing with change over time via time-based data. 2. How a digital performance space's relationships interact with learner actions, communications and products. 3. How layers of…

  18. Technical challenges for big data in biomedicine and health: data sources, infrastructure, and analytics.

    Science.gov (United States)

    Peek, N; Holmes, J H; Sun, J

    2014-08-15

    To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
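    The distributed-processing pattern the review refers to (local partial computation on each data partition, then a merge of the partial results) can be sketched in a few lines. This is an illustrative toy, not any specific framework: threads stand in for cluster nodes, and the record layout and `"dx"` field name are invented for the example:

    ```python
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor
    from functools import reduce

    def count_diagnoses(partition):
        """'Map' step: aggregate locally on one partition (one 'node')."""
        return Counter(record["dx"] for record in partition)

    def merge_counts(a, b):
        """'Reduce' step: combine two partial aggregates."""
        a.update(b)
        return a

    def distributed_count(partitions, workers=4):
        """Run the map step in parallel, then fold the partial results."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(count_diagnoses, partitions))
        return reduce(merge_counts, partials, Counter())
    ```

    In a real cluster the partitions would live on different machines and the merge would be handled by a framework such as Hadoop or Spark, but the shape of the computation is the same.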

  19. Agrupamentos epistemológicos de artigos publicados sobre big data analytics

    OpenAIRE

    FURLAN, Patricia Kuzmenko; LAURINDO, Fernando José Barbin

    2017-01-01

    Abstract: The era of big data is already a reality for companies and individuals, and the academic literature on the subject has grown rapidly in recent years. This article sought to identify the main niches and strands of publication on big data analytics. The methodological choice was to carry out bibliometric research in the ISI Web of Science database, using that term to focus on big data management practices. It was possible to identify five distinct groups among the...

  20. The effect of the electronic structure, phase transition, and localized dynamics of atoms in the formation of tiny particles of gold

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Mubarak, E-mail: mubarak74@comsats.edu.pk, E-mail: mubarak74@mail.com [COMSATS Institute of Information Technology, Department of Physics (Pakistan); Lin, I-Nan [Tamkang University, Department of Physics (China)

    2017-01-15

    In addition to their self-governing properties, tiny particles of metallic colloids are the building blocks of large-sized particles; thus, their study has been the subject of a large number of publications. In the present work, it is discussed that the geometric structure of a tiny particle formed through atom-to-atom amalgamation depends on the attained dynamics of the gold atoms along with their protruded orientations. The localized process conditions direct the two-dimensional structure of a tiny particle at an atomically flat air-solution interface while heating locally dynamically approached atoms, thus negating the role of van der Waals interactions. At the electron-photon-solution interface, impinging electrons stretch or deform the atoms of tiny particles depending on the mechanism of impingement. In addition, under a regular grid of electrons ejected on the splitting of atoms not executing excitations and de-excitations of their electrons, the atoms of tiny particles also deform or stretch while occupying various sites, depending on the process of synergy. Under suitable impinging electron streams, in tiny particles with a monolayer two-dimensional structure the electron states of the atoms are diffused in the direction of the transferred energy and thus coincide with the next adjacent atoms in each one-dimensional array showing the same sort of behavior. Instantaneously, photons of adequate energy propagate along the surfaces of such electronic structures and modify them into smooth elements, thus disregarding the phenomenon of localized surface plasmons. This study highlights the fundamental process of the formation of tiny particles, where the role of the localized dynamics of atoms and their electronic structure, along with their interaction with light, is discussed. Such a tool for processing materials, in a nonequilibrium pulse-based process, opens a number of possibilities for developing engineered materials with specific chemical, optical, and electronic properties.

  1. Creation of a Geo Big Data Outreach and Training Collaboratory for the Wildfire Community

    Science.gov (United States)

    Altintas, I.; Sale, J.; Block, J.; Cowart, C.; Crawl, D.

    2015-12-01

    A major challenge for the geoscience community is the training and education of the current and next generations of big data geoscientists. In wildfire research, there is an increasing number of tools, middleware and techniques to use for data science related to wildfires. The necessary computing infrastructures are often within reach and most of the software tools for big data are freely available. But what has been lacking is a transparent platform and training program to produce data science experts who can use these integrated tools effectively. A community of scientists well versed in taking advantage of big data technologies in geoscience applications is of critical importance to the future of research and knowledge advancement. To address this critical need, we are developing learning modules that teach process-based thinking to capture the value of end-to-end systems of reusable blocks of knowledge, and that integrate the tools and technologies used in big data analysis in an intuitive manner. WIFIRE is an end-to-end cyberinfrastructure for dynamic data-driven simulation, prediction and visualization of wildfire behavior. To this end, we are openly extending an environment we have built for "big data training" (biobigdata.ucsd.edu) with similar MOOC-based approaches for the wildfire community. We are building an environment that includes training modules for distributed platforms and systems, Big Data concepts, and scalable workflow tools, along with other basics of data science including data management, reproducibility and sharing of results. We also plan to provide teaching modules with analytical and dynamic data-driven wildfire behavior modeling case studies which address the needs not only of standards-based K-12 science education but also the needs of a well-educated and informed citizenry. Another part of our outreach mission is to educate our community on all aspects of wildfire research. One of the most successful ways of accomplishing this is through high school and undergraduate

  2. Addressing big data challenges for scientific data infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Zhao, Z.; Grosso, P.; Wibisono, A.; de Laat, C.

    2012-01-01

    This paper discusses the challenges that are imposed by Big Data Science on the modern and future Scientific Data Infrastructure (SDI). The paper refers to different scientific communities to define requirements on data management, access control and security. The paper introduces the Scientific

  3. Making a Big Bang on the small screen

    Science.gov (United States)

    Thomas, Nick

    2010-01-01

    While the quality of some TV sitcoms can leave viewers feeling cheated out of 30 minutes of their lives, audiences and critics are raving about the science-themed US comedy The Big Bang Theory. First shown on the CBS network in 2007, the series focuses on two brilliant postdoc physicists, Leonard and Sheldon, who are totally absorbed by science. Adhering to the stereotype, they also share a fanatical interest in science fiction, video-gaming and comic books, but unfortunately lack the social skills required to connect with their 20-something nonacademic contemporaries.

  4. Small data in the era of big data

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2015-01-01

    Academic knowledge building has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will however, we argue, continue to be popular and valuable in the fut...

  5. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  6. A Study of the Application of Big Data in a Rural Comprehensive Information Service

    Directory of Open Access Journals (Sweden)

    Leifeng Guo

    2015-05-01

    Full Text Available Big data has attracted extensive interest due to its potential tremendous social and scientific value. Researchers are also trying to extract potential value from agriculture big data. This paper presents a study of information services based on big data from the perspective of a rural comprehensive information service. First, we introduce the background of the rural comprehensive information service, and then we present in detail the National Rural Comprehensive Information Service Platform (NRCISP, which is supported by the national science and technology support program. Next, we discuss big data in the NRCISP according to data characteristics, data sources, and data processing. Finally, we discuss a service model and services based on big data in the NRCISP.

  7. Steering with big words: articulating ideographs in nanotechnology

    NARCIS (Netherlands)

    Bos, Colette; Walhout, Albert; Peine, Alex; van Lente, Harro

    2014-01-01

    Nowadays, science should address societal challenges, such as ‘sustainability’, or ‘responsible research and innovation’. This emerging form of steering toward broad and generic goals involves the use of ‘big words’: encompassing concepts that are uncontested themselves, but that allow for multiple

  8. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

  9. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Full Text Available Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.

  10. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    Science.gov (United States)

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
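The split/analyze/meta-analyze workflow described above (demonstrated by the authors in R) can be sketched in a few lines. The following Python version is an illustrative toy, not the authors' code: split a large sample into shards, estimate a parameter in each shard, then pool the shard estimates with inverse-variance weights, as in a fixed-effect meta-analysis.

```python
import random
import statistics

def split_analyze_meta(data, k=10):
    """Toy split/analyze/meta-analyze: partition the sample into k shards,
    estimate a mean per shard, then pool with inverse-variance weights."""
    random.shuffle(data)
    shards = [data[i::k] for i in range(k)]
    estimates, variances = [], []
    for shard in shards:
        estimates.append(statistics.mean(shard))
        # variance of the shard mean, i.e. s^2 / n
        variances.append(statistics.variance(shard) / len(shard))
    weights = [1.0 / v for v in variances]
    pooled = sum(w * m for w, m in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

random.seed(0)
big_sample = [random.gauss(5.0, 2.0) for _ in range(100_000)]
est, se = split_analyze_meta(big_sample, k=20)
print(round(est, 2))  # pooled estimate close to the true mean of 5.0
```

Each shard fits comfortably in memory and can be analyzed independently (and in parallel), which is the practical appeal of the approach for researchers without distributed-computing skills.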

  11. EKALAVYA MODEL OF HIGHER EDUCATION – AN INNOVATION OF IBM’S BIG DATA UNIVERSITY

    OpenAIRE

    Dr. P. S. Aithal; Shubhrajyotsna Aithal

    2016-01-01

    Big Data Science is a new multi-disciplinary subject in society; business intelligence, data analytics, and related fields have become increasingly important in both the academic and the business communities during the 21st century. Many organizations and business intelligence experts have foreseen significant development in the big data field as the next big wave in the future research arena in many industry sectors and in society. To become an expert and skilled in this n...

  12. The kinetics of Cr layer coated on TiNi films for hydrogen absorption

    Indian Academy of Sciences (India)

    Abstract. The effect of hydrogen absorption on electrical resistance with temperature ... pressure by thermal evaporation on the glass substrate at room temperature. ... and charging rate becomes faster in comparison to FeTi and TiNi thin films.

  13. Long the fixation of physicists worldwide, a tiny particle is found

    CERN Multimedia

    2006-01-01

    "After decades of intensive effort by both experimental and theoretical physicists worldwide, a tiny particle with no charge, a very low mass and a lifetime much shorter than a nanosecond, dubbed the "axion", has now been detected by the University at Buffalo physicist who first suggested its existence in a little-read paper as early as 194." (2 pages)

  14. The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.

    Science.gov (United States)

    Swan, Melanie

    2013-06-01

    A key contemporary trend emerging in big data science is the quantified self (QS)-individuals engaged in the self-tracking of any kind of biological, physical, behavioral, or environmental information as n=1 individuals or in groups. There are opportunities for big data scientists to develop new models to support QS data collection, integration, and analysis, and also to lead in defining open-access database resources and privacy standards for how personal data is used. Next-generation QS applications could include tools for rendering QS data meaningful in behavior change, establishing baselines and variability in objective metrics, applying new kinds of pattern recognition techniques, and aggregating multiple self-tracking data streams from wearable electronics, biosensors, mobile phones, genomic data, and cloud-based services. The long-term vision of QS activity is that of a systemic monitoring approach where an individual's continuous personal information climate provides real-time performance optimization suggestions. There are some potential limitations related to QS activity-barriers to widespread adoption and a critique regarding scientific soundness-but these may be overcome. One interesting aspect of QS activity is that it is fundamentally a quantitative and qualitative phenomenon since it includes both the collection of objective metrics data and the subjective experience of the impact of these data. Some of this dynamic is being explored as the quantified self is becoming the qualified self in two new ways: by applying QS methods to the tracking of qualitative phenomena such as mood, and by understanding that QS data collection is just the first step in creating qualitative feedback loops for behavior change. In the long-term future, the quantified self may become additionally transformed into the extended exoself as data quantification and self-tracking enable the development of new sense capabilities that are not possible with ordinary senses. The

  15. TinyONet: A Cache-Based Sensor Network Bridge Enabling Sensing Data Reusability and Customized Wireless Sensor Network Services

    Science.gov (United States)

    Jung, Eui-Hyun; Park, Yong-Jin

    2008-01-01

    In recent years, a few protocol bridge research projects have been announced to enable a seamless integration of Wireless Sensor Networks (WSNs) with the TCP/IP network. These studies have ensured the transparent end-to-end communication between two network sides in the node-centric manner. Researchers expect this integration will trigger the development of various application domains. However, prior research projects have not fully explored some essential features for WSNs, especially the reusability of sensing data and the data-centric communication. To resolve these issues, we suggested a new protocol bridge system named TinyONet. In TinyONet, virtual sensors play roles as virtual counterparts of physical sensors and they dynamically group to make a functional entity, Slice. Instead of direct interaction with individual physical sensors, each sensor application uses its own WSN service provided by Slices. If a new kind of service is required in TinyONet, the corresponding function can be dynamically added at runtime. Beside the data-centric communication, it also supports the node-centric communication and the synchronous access. In order to show the effectiveness of the system, we implemented TinyONet on an embedded Linux machine and evaluated it with several experimental scenarios. PMID:27873968

  16. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  17. Big Data analytics in the Geo-Spatial Domain

    NARCIS (Netherlands)

    R.A. Goncalves (Romulo); M.G. Ivanova (Milena); M.L. Kersten (Martin); H. Scholten; S. Zlatanova; F. Alvanaki (Foteini); P. Nourian (Pirouz); E. Dias

    2014-01-01

    htmlabstractBig data collections in many scientific domains have inherently rich spatial and geo-spatial features. Spatial location is among the core aspects of data in Earth observation sciences, astronomy, and seismology to name a few. The goal of our project is to design an efficient data

  18. Recreating Big Bang to learn more about universe

    CERN Multimedia

    2005-01-01

    A multi-nation effort at the Geneva-based CERN laboratory to recreate conditions existing just after the Big Bang could give vital clues to the creation of the universe and help overcome prejudices against this widely held scientific theory, an eminent science writer said in Kolkata on Tuesday

  19. The Rise of Big Data in Oncology.

    Science.gov (United States)

    Fessele, Kristen L

    2018-05-01

    To describe big data and data science in the context of oncology nursing care. Peer-reviewed and lay publications. The rapid expansion of real-world evidence from sources such as the electronic health record, genomic sequencing, administrative claims and other data sources has outstripped the ability of clinicians and researchers to manually review and analyze it. To promote high-quality, high-value cancer care, big data platforms must be constructed from standardized data sources to support extraction of meaningful, comparable insights. Nurses must advocate for the use of standardized vocabularies and common data elements that represent terms and concepts that are meaningful to patient care. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. The (Big Data-security assemblage: Knowledge and critique

    Directory of Open Access Journals (Sweden)

    Claudia Aradau

    2015-10-01

    Full Text Available The Snowden revelations and the emergence of ‘Big Data’ have rekindled questions about how security practices are deployed in a digital age and with what political effects. While critical scholars have drawn attention to the social, political and legal challenges to these practices, the debates in computer and information science have received less analytical attention. This paper proposes to take seriously the critical knowledge developed in information and computer science and reinterpret their debates to develop a critical intervention into the public controversies concerning data-driven security and digital surveillance. The paper offers a two-pronged contribution: on the one hand, we challenge the credibility of security professionals’ discourses in light of the knowledge that they supposedly mobilize; on the other, we argue for a series of conceptual moves around data, human–computer relations, and algorithms to address some of the limitations of existing engagements with the Big Data-security assemblage.

  1. Vectors into the Future of Mass and Interpersonal Communication Research: Big Data, Social Media, and Computational Social Science.

    Science.gov (United States)

    Cappella, Joseph N

    2017-10-01

    Simultaneous developments in big data, social media, and computational social science have set the stage for how we think about and understand interpersonal and mass communication. This article explores some of the ways that these developments generate 4 hypothetical "vectors" - directions - into the next generation of communication research. These vectors include developments in network analysis, modeling interpersonal and social influence, recommendation systems, and the blurring of distinctions between interpersonal and mass audiences through narrowcasting and broadcasting. The methods and research in these arenas are occurring in areas outside the typical boundaries of the communication discipline but engage classic, substantive questions in mass and interpersonal communication.

  2. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  3. The SAMI Galaxy Survey: A prototype data archive for Big Science exploration

    Science.gov (United States)

    Konstantopoulos, I. S.; Green, A. W.; Foster, C.; Scott, N.; Allen, J. T.; Fogarty, L. M. R.; Lorente, N. P. F.; Sweet, S. M.; Hopkins, A. M.; Bland-Hawthorn, J.; Bryant, J. J.; Croom, S. M.; Goodwin, M.; Lawrence, J. S.; Owers, M. S.; Richards, S. N.

    2015-11-01

    We describe the data archive and database for the SAMI Galaxy Survey, an ongoing observational program that will cover ≈3400 galaxies with integral-field (spatially-resolved) spectroscopy. Amounting to some three million spectra, this is the largest sample of its kind to date. The data archive and built-in query engine use the versatile Hierarchical Data Format (HDF5), which precludes the need for external metadata tables and hence the setup and maintenance overhead those carry. The code produces simple outputs that can easily be translated to plots and tables, and the combination of these tools makes for a light system that can handle heavy data. This article acts as a contextual companion to the SAMI Survey Database source code repository, samiDB, which is freely available online and written entirely in Python. We also discuss the decisions related to the selection of tools and the creation of data visualisation modules. It is our aim that the work presented in this article-descriptions, rationale, and source code-will be of use to scientists looking to set up a maintenance-light data archive for a Big Science data load.
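As a rough illustration of the archive design described above (selection metadata kept as HDF5 attributes on the hierarchy itself, so no external metadata tables are needed), here is a toy sketch using the h5py library. The group names and attribute keys are hypothetical, not samiDB's actual schema.

```python
import os
import tempfile

import h5py
import numpy as np

# Build a toy archive: one HDF5 group per galaxy, with query-relevant
# metadata stored as attributes on the group alongside the spectral data.
path = os.path.join(tempfile.mkdtemp(), "toy_archive.h5")
with h5py.File(path, "w") as f:
    for gal_id, z in [("GAL0001", 0.02), ("GAL0002", 0.08)]:
        grp = f.create_group(gal_id)
        grp.attrs["redshift"] = z
        grp.create_dataset("spectrum", data=np.random.rand(2048))

# A minimal "query engine": walk the hierarchy and filter on attributes,
# with no separate metadata table to set up or maintain.
with h5py.File(path, "r") as f:
    nearby = [name for name, grp in f.items() if grp.attrs["redshift"] < 0.05]
print(nearby)
```

Because the metadata travel inside the same self-describing file as the data, the archive stays "maintenance-light" in the sense the abstract describes: copying the file copies the whole queryable archive.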

  4. Swimming of a Tiny Subtropical Sea Butterfly with Coiled Shell

    Science.gov (United States)

    Murphy, David; Karakas, Ferhat; Maas, Amy

    2017-11-01

    Sea butterflies, also known as pteropods, include a variety of small, zooplanktonic marine snails. Thecosomatous pteropods possess a shell and swim at low Reynolds numbers by beating their wing-like parapodia in a manner reminiscent of insect flight. In fact, previous studies of the pteropod Limacina helicina have shown that pteropod swimming hydrodynamics and tiny insect flight aerodynamics are dynamically similar. Studies of L. helicina swimming have been performed in polar (0 degrees C) and temperate conditions (12 degrees C). Here we present measurements of the swimming of Heliconoides inflatus, a smaller yet morphologically similar pteropod that lives in warm Bermuda seawater (21 degrees C) with a viscosity almost half that of the polar seawater. The collected H. inflatus have shell sizes less than 1.5 mm in diameter, beat their wings at frequencies up to 11 Hz, and swim upwards in sawtooth trajectories at speeds up to approximately 25 mm/s. Using three-dimensional wing and body kinematics collected with two orthogonal high speed cameras and time-resolved, 2D flow measurements collected with a micro-PIV system, we compare the effects of smaller body size and lower water viscosity on the flow physics underlying flapping-based swimming by pteropods and flight by tiny insects.
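The dynamic-similarity claim can be checked with a back-of-the-envelope Reynolds number using the body size and swim speed quoted above; the seawater viscosity here is an assumed textbook value for warm seawater, not a figure from the study.

```python
# Rough Reynolds-number estimate for the swimming described above, showing
# why these pteropods sit in the same low-to-intermediate-Re regime as tiny
# flying insects: Re = U * L / nu.
shell_diameter = 1.5e-3  # m   (shells "less than 1.5 mm in diameter")
swim_speed = 25e-3       # m/s (upward sawtooth swimming at up to ~25 mm/s)
nu_seawater = 1.0e-6     # m^2/s, assumed kinematic viscosity at ~21 C

reynolds = swim_speed * shell_diameter / nu_seawater
print(round(reynolds, 1))  # a few tens: viscous and inertial forces both matter
```

At Re of order tens, neither viscosity nor inertia dominates, which is the regime where flapping-wing insect aerodynamics and pteropod swimming hydrodynamics have been shown to be dynamically similar.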

  5. Challenges in data science: a complex systems perspective

    International Nuclear Information System (INIS)

    Carbone, Anna; Jensen, Meiko; Sato, Aki-Hiro

    2016-01-01

    The ability to process and manage large data volumes has been proven to be not enough to tackle the current challenges presented by “Big Data”. Deep insight is required for understanding interactions among connected systems and space- and time-dependent heterogeneous data structures. The emergence of global properties from locally interacting data entities and clustering phenomena demands suitable approaches and methodologies, recently developed in the foundational area of Data Science by taking a Complex Systems standpoint. Here, we deal with challenges that can be summarized by the question: “What can Complex Systems Science contribute to Big Data?” Such a question can be reversed and brought to a superior level of abstraction by asking “What knowledge can be drawn from Big Data?” These aspects constitute the main motivation behind this article to introduce a volume containing a collection of papers presenting interdisciplinary advances in the Big Data area by methodologies and approaches typical of Complex Systems Science, Nonlinear Systems Science and Statistical Physics.

  6. Heterogeneous tiny energy: An appealing opportunity to power wireless sensor motes in a corrosive environment

    International Nuclear Information System (INIS)

    Qiao, Guofu; Sun, Guodong; Li, Hui; Ou, Jinping

    2014-01-01

    Highlights: • Ultra-low ambient energy was scavenged to power the first of its kind wireless corrosion sensors. • Three feasible tiny-energy sources were exploited for long-term corrosion monitoring. • Automatic recharging control of heterogeneous tiny energy was proposed for human-free monitoring. • Corrosion itself was applied as an energy source to power the wireless corrosion-monitoring motes. - Abstract: Reinforcing steel corrosion is a significant factor in the durability deterioration of reinforced concrete (RC) structures. The on-line monitoring of the corrosion of RC structures in a long-term, human-free manner is not only valuable in industry, but also a significant challenge in academia. This paper presents a first-of-its-kind corrosion-monitoring approach that exploits three heterogeneous tiny energy sources to power commercial-off-the-shelf wireless sensor motes such that corrosion-related data are automatically and autonomously captured and sent to users via wireless channels. We first investigated the availability of these three tiny energy sources: corrosion energy, a cement battery, and weak solar energy. In particular, the first two energy sources inherently exist in RC structures and are generated continually over the service life of RC structures, which is beneficial for the prospects of long-term corrosion monitoring. We then proposed a proof-of-concept prototype, consisting of a Telosb wireless sensor mote and an energy harvester, to evaluate the feasibility and effectiveness of ultralow-power ambient energy as a power supply in corrosion monitoring applications. The critical metrics for the holographic monitoring of RC structures, including electrochemical noise, humidity and temperature, were successfully acquired and analysed using a post-processing program. This paper describes a unique and novel approach towards the realisation of a smart structural monitoring and control system in the

  7. Big data and tactical analysis in elite soccer: future challenges and opportunities for sports science.

    Science.gov (United States)

    Rein, Robert; Memmert, Daniel

    2016-01-01

    Until recently, tactical analyses in elite soccer were based on observational data using variables which discard most contextual information. Analyses of team tactics, however, require detailed data from various sources, including technical skill, individual physiological performance, and team formations, among others, to represent the complex processes underlying team tactical behavior. Accordingly, little is known about how these different factors influence team tactical behavior in elite soccer. In part, this has also been due to the lack of available data. Increasingly, however, detailed game logs obtained through next-generation tracking technologies, in addition to physiological training data collected through novel miniature sensor technologies, have become available for research. This leads, however, to the opposite problem, where the sheer amount of data becomes an obstacle in itself, as methodological guidelines as well as theoretical modelling of tactical decision making in team sports are lacking. The present paper discusses how big data and modern machine learning technologies may help to address these issues and aid in developing a theoretical model for tactical decision making in team sports. As experience from medical applications shows, significant organizational obstacles regarding data governance and access to technologies must be overcome first. The present work discusses these issues with respect to tactical analyses in elite soccer and proposes a technological stack which aims to introduce big data technologies into elite soccer research. The proposed approach could also serve as a guideline for other sports science domains, as increasing data size is becoming a widespread phenomenon.

  8. What Difference Does Quantity Make? On the Epistemology of Big Data in Biology

    Science.gov (United States)

    Leonelli, Sabina

    2015-01-01

    Is big data science a whole new way of doing research? And what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of big data science does not lie in the sheer quantity of data involved, but rather in (1) the prominence and status acquired by data as commodity and recognised output, both within and outside of the scientific community; and (2) the methods, infrastructures, technologies, skills and knowledge developed to handle data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. To assess this claim, one needs to consider the ways in which data are actually disseminated and used to generate knowledge. Accordingly, this paper reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as prominent infrastructures set up to organise and interpret such data; and examine the wealth and diversity of expertise, resources and conceptual scaffolding that such databases draw upon. This illuminates some of the conditions under which big data need to be curated to support processes of discovery across biological subfields, which in turn highlights the difficulties caused by the lack of adequate curation for the vast majority of data in the life sciences. In closing, I reflect on the difference that data quantity is making to contemporary biology, the methodological and epistemic challenges of identifying and analyzing data given these developments, and the opportunities and worries associated with big data discourse and methods. PMID:25729586

  9. What Difference Does Quantity Make? On the Epistemology of Big Data in Biology.

    Science.gov (United States)

    Leonelli, Sabina

    2014-06-01

    Is big data science a whole new way of doing research? And what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of big data science does not lie in the sheer quantity of data involved, but rather in (1) the prominence and status acquired by data as commodity and recognised output, both within and outside of the scientific community; and (2) the methods, infrastructures, technologies, skills and knowledge developed to handle data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. To assess this claim, one needs to consider the ways in which data are actually disseminated and used to generate knowledge. Accordingly, this paper reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as prominent infrastructures set up to organise and interpret such data; and examine the wealth and diversity of expertise, resources and conceptual scaffolding that such databases draw upon. This illuminates some of the conditions under which big data need to be curated to support processes of discovery across biological subfields, which in turn highlights the difficulties caused by the lack of adequate curation for the vast majority of data in the life sciences. In closing, I reflect on the difference that data quantity is making to contemporary biology, the methodological and epistemic challenges of identifying and analyzing data given these developments, and the opportunities and worries associated with big data discourse and methods.

  10. What difference does quantity make? On the epistemology of Big Data in biology

    Directory of Open Access Journals (Sweden)

    S Leonelli

    2014-07-01

    Full Text Available Is Big Data science a whole new way of doing research? And what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of Big Data science does not lie in the sheer quantity of data involved, but rather in (1) the prominence and status acquired by data as commodity and recognised output, both within and outside of the scientific community; and (2) the methods, infrastructures, technologies, skills and knowledge developed to handle data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. To assess this claim, one needs to consider the ways in which data are actually disseminated and used to generate knowledge. Accordingly, this article reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as prominent infrastructures set up to organise and interpret such data; and examine the wealth and diversity of expertise, resources and conceptual scaffolding that such databases draw upon. This illuminates some of the conditions under which Big Data needs to be curated to support processes of discovery across biological subfields, which in turn highlights the difficulties caused by the lack of adequate curation for the vast majority of data in the life sciences. In closing, I reflect on the difference that data quantity is making to contemporary biology, the methodological and epistemic challenges of identifying and analysing data given these developments, and the opportunities and worries associated with Big Data discourse and methods.

  11. Air Toxics Under the Big Sky: Examining the Effectiveness of Authentic Scientific Research on High School Students' Science Skills and Interest.

    Science.gov (United States)

    Ward, Tony J; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. A quasi-experimental design was used in order to understand: 1) how the program affects student understanding of scientific inquiry and research and 2) how the open inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom.

  12. Air Toxics Under the Big Sky: Examining the Effectiveness of Authentic Scientific Research on High School Students’ Science Skills and Interest

    Science.gov (United States)

    Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-01-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. A quasi-experimental design was used in order to understand: 1) how the program affects student understanding of scientific inquiry and research and 2) how the open inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom. PMID:28286375

  13. Air Toxics Under the Big Sky: examining the effectiveness of authentic scientific research on high school students' science skills and interest

    Science.gov (United States)

    Ward, Tony J.; Delaloye, Naomi; Adams, Earle Raymond; Ware, Desirae; Vanek, Diana; Knuth, Randy; Hester, Carolyn Laurie; Marra, Nancy Noel; Holian, Andrij

    2016-04-01

    Air Toxics Under the Big Sky is an environmental science outreach/education program that incorporates the Next Generation Science Standards (NGSS) 8 Practices with the goal of promoting knowledge and understanding of authentic scientific research in high school classrooms through air quality research. This research explored: (1) how the program affects student understanding of scientific inquiry and research and (2) how the open-inquiry learning opportunities provided by the program increase student interest in science as a career path. Treatment students received instruction related to air pollution (airborne particulate matter), associated health concerns, and training on how to operate air quality testing equipment. They then participated in a yearlong scientific research project in which they developed and tested hypotheses through research of their own design regarding the sources and concentrations of air pollution in their homes and communities. Results from an external evaluation revealed that treatment students developed a deeper understanding of scientific research than did comparison students, as measured by their ability to generate good hypotheses and research designs, and equally expressed an increased interest in pursuing a career in science. These results emphasize the value of and need for authentic science learning opportunities in the modern science classroom.

  14. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data, we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as for the market success of the firms’ product innovat...

  15. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  16. High temperature annealing effect on structural and magnetic properties of Ti/Ni multilayers

    International Nuclear Information System (INIS)

    Bhatt, Pramod; Ganeshan, V.; Reddy, V.R.; Chaudhari, S.M.

    2006-01-01

    The effect of high-temperature annealing, up to 600 °C, on the structural and magnetic properties of Ti/Ni multilayers (MLs) is studied and reported in this paper. Ti/Ni multilayer samples having constant layer thicknesses of 50 Å each are deposited on float glass and Si(1 1 1) substrates using the electron-beam evaporation technique under ultra-high vacuum (UHV) conditions at room temperature. The micro-structural parameters and their evolution with temperature for the as-deposited as well as the annealed multilayer samples, up to 600 °C in steps of 100 °C for 1 h, are determined using X-ray diffraction (XRD) and grazing incidence X-ray reflectivity techniques. The X-ray diffraction pattern of the multilayer sample annealed at 300 °C shows an interesting structural transformation (from crystalline to amorphous) because of the solid-state reaction (SSR), with subsequent re-crystallization at higher annealing temperatures, particularly at ≥400 °C, due to the formation of TiNi3 and Ti2Ni alloy phases. Sample quality and surface morphology are examined using the atomic force microscopy (AFM) technique for both the as-deposited and the annealed multilayer samples. In addition, a temperature-dependent dc resistivity measurement is also used to study the structural transformation and subsequent alloy phase formation due to the annealing treatment. The corresponding magnetization behavior of the multilayer samples after each stage of annealing has been investigated using the Magneto-Optical Kerr Effect (MOKE) technique, and the results are interpreted in terms of the observed micro-structural changes.

  17. Fixing the Big Bang Theory's Lithium Problem

    Science.gov (United States)

    Kohler, Susanna

    2017-02-01

    How did our universe come into being? The Big Bang theory is a widely accepted and highly successful cosmological model of the universe, but it does introduce one puzzle: the cosmological lithium problem. Have scientists now found a solution? Too Much Lithium. In the Big Bang theory, the universe expanded rapidly from a very high-density and high-temperature state dominated by radiation. This theory has been validated again and again: the discovery of the cosmic microwave background radiation and observations of the large-scale structure of the universe both beautifully support the Big Bang theory, for instance. But one pesky trouble spot remains: the abundance of lithium. The arrows show the primary reactions involved in Big Bang nucleosynthesis, and their flux ratios, as predicted by the authors' model, are given on the right. Synthesizing primordial elements is complicated! [Hou et al. 2017] According to Big Bang nucleosynthesis theory, primordial nucleosynthesis ran wild during the first half hour of the universe's existence. This produced most of the universe's helium and small amounts of other light nuclides, including deuterium and lithium. But while predictions match the observed primordial deuterium and helium abundances, Big Bang nucleosynthesis theory overpredicts the abundance of primordial lithium by about a factor of three. This inconsistency is known as the cosmological lithium problem, and attempts to resolve it using conventional astrophysics and nuclear physics over the past few decades have not been successful. In a recent publication led by Suqing Hou (Institute of Modern Physics, Chinese Academy of Sciences) and advisor Jianjun He (Institute of Modern Physics National Astronomical Observatories, Chinese Academy of Sciences), however, a team of scientists has proposed an elegant solution to this problem. Time and temperature evolution of the abundances of primordial light elements during the beginning of the universe. The authors' model (dotted lines

  18. Analyzing Big Data in Medicine with Virtual Research Environments and Microservices

    OpenAIRE

    Spjuth, Ola

    2016-01-01

    Presentation by Ola Spjuth, Deputy Director at the Department of Information Technology, Uppsala Multidisciplinary Centre for Advanced Computational Science, at Big Data in Medicine, Uppsala, Sweden.

  19. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  20. Steering with big words: articulating ideographs in research programs

    NARCIS (Netherlands)

    Bos, Colette; Walhout, Bart; Walhout, Bart; Peine, Alexander; van Lente, Harro

    2014-01-01

    Nowadays, science should address societal challenges, such as ‘sustainability’, or ‘responsible research and innovation’. This emerging form of steering toward broad and generic goals involves the use of ‘big words’: encompassing concepts that are uncontested themselves, but that allow for multiple

  1. Advanced Research and Data Methods in Women's Health: Big Data Analytics, Adaptive Studies, and the Road Ahead.

    Science.gov (United States)

    Macedonia, Christian R; Johnson, Clark T; Rajapakse, Indika

    2017-02-01

    Technical advances in science have had broad implications in reproductive and women's health care. Recent innovations in population-level data collection and storage have made available an unprecedented amount of data for analysis while computational technology has evolved to permit processing of data previously thought too dense to study. "Big data" is a term used to describe data that are a combination of dramatically greater volume, complexity, and scale. The number of variables in typical big data research can readily be in the thousands, challenging the limits of traditional research methodologies. Regardless of what it is called, advanced data methods, predictive analytics, or big data, this unprecedented revolution in scientific exploration has the potential to dramatically assist research in obstetrics and gynecology broadly across subject matter. Before implementation of big data research methodologies, however, potential researchers and reviewers should be aware of strengths, strategies, study design methods, and potential pitfalls. Examination of big data research examples contained in this article provides insight into the potential and the limitations of this data science revolution and practical pathways for its useful implementation.

  2. Big data and information management: modeling the context decisional supported by sistemography

    Directory of Open Access Journals (Sweden)

    William Barbosa Vianna

    2016-04-01

    Full Text Available Introduction: The study is justified by the scarcity of studies in the field of information science that address the phenomenon of big data from the perspective of information management. Objective: The objective is to identify and represent the general elements of the decision-making process in the context of big data. Methodology: This is an exploratory study of a theoretical and deductive nature. Results: The study identified the main elements involved in decision-making in a big data environment and produced their sistemographic representation. Conclusions: It was possible to develop a representation which will allow further development of computer simulation.

  3. Tiny intracranial aneurysms: Endovascular treatment by coil embolisation or sole stent deployment

    International Nuclear Information System (INIS)

    Lu Jun; Liu Jiachun; Wang Lijun; Qi Peng; Wang Daming

    2012-01-01

    Purpose: Tiny intracranial aneurysms pose a significant therapeutic challenge for interventional neuroradiologists. The authors report their preliminary results of endovascular treatment of these aneurysms. Methods: Between January 2002 and December 2009, 52 tiny intracranial aneurysms (defined as ≤3 mm in maximum diameter) in 46 patients (22 men; mean age, 57.9 years) were treated by endosaccular coil embolisation or sole stent deployment in the parent artery. Of 52 aneurysms, 29 had ruptured and 23 remained unruptured. The initial angiographic results, procedural complications, and clinical outcomes were assessed at discharge. Imaging follow-up was performed with cerebral angiography. Results: One aneurysm coiling procedure failed because of unsuccessful micro-catheterization. Forty-three aneurysms were successfully coil embolized, of which complete occlusion was obtained in 14, subtotal occlusion in 18 and incomplete occlusion in 11. The other 8 aneurysms were treated by sole stent deployment in the parent artery. Procedural complications (2 intraprocedural ruptures and 3 thromboembolic events) occurred in 5 (9.6%) of 52 aneurysms, resulting in permanent morbidity in only 1 (2.2%, 1/46) patient. No rebleeding occurred during clinical follow-up (mean duration, 46.7 months). Of the 16 coiled aneurysms that received repeat angiography, 6 initially completely and 3 subtotally occluded aneurysms remained unchanged, while 4 initially subtotally and 3 incompletely occluded aneurysms progressed to total occlusion. Five sole stent deployed aneurysms received angiographic follow-up (mean duration, 10.0 months), of which 3 remained unchanged, 1 became smaller and 1 progressed to total occlusion. Conclusion: Endovascular treatment of tiny intracranial aneurysms is technically feasible and relatively safe. Coil embolisation seems to be effective in preventing early recanalisation, whereas the sole stenting technique needs further investigation to determine its effectiveness.

  4. Effects of Surface Dipole Lengths on Evaporation of Tiny Water Aggregation

    International Nuclear Information System (INIS)

    Wang Shen; Wan Rongzheng; Fang Haiping; Tu Yusong

    2013-01-01

    Using molecular dynamics simulation, we compared the evaporation behavior of a tiny amount of water molecules adsorbed on solid surfaces with different dipole lengths, including surface dipole lengths of 1, 2, 4, 6 and 8 folds of 0.14 nm, and different charges from 0.1e to 0.9e. Surfaces with short dipole lengths (the 1-fold system) always maintain a hydrophobic character, and the evaporation speeds are not influenced whether the surface charges are enhanced or weakened; but when the surface dipole length reaches 8 folds, surfaces become more hydrophilic as the surface charge increases, and the evaporation speeds increase gradually and monotonically. By tuning dipole lengths from the 1-fold to the 8-fold system, we confirmed the non-monotonic variation of the evaporation flux (it first increases, then decreases) in the 4-fold system with charges of 0.1e–0.7e, reported in our previous paper [S. Wang, et al., J. Phys. Chem. B 116 (2012) 13863], and also show the process from the enhancement of this unexpected non-monotonic variation to its disappearance as the surface dipole length increases. Herein, we demonstrated two key factors that influence the evaporation flux of a tiny amount of water molecules adsorbed on solid surfaces: the exposed surficial area of the water aggregation, from where the water molecules can evaporate directly, and the attraction potential from the substrate, which hinders evaporation. In addition, more interestingly, we showed an extra steric effect of the surface dipoles that further increases the evaporation flux for the 2-, 4-, 6- and 8-fold systems with charges larger than about 0.7e. (The steric effect was first reported by some of the present authors [C. Wang, et al., Sci. Rep. 2 (2012) 358].) This study presents a complete physical picture of the influence of surface dipole lengths on the evaporation behavior of an adsorbed tiny amount of water. (condensed matter: structural, mechanical, and thermal properties)

  5. Data science, learning, and applications to biomedical and health sciences.

    Science.gov (United States)

    Adam, Nabil R; Wieder, Robert; Ghosh, Debopriya

    2017-01-01

    The last decade has seen an unprecedented increase in the volume and variety of electronic data related to research and development, health records, and patient self-tracking, collectively referred to as Big Data. Properly harnessed, Big Data can provide insights and drive discovery that will accelerate biomedical advances, improve patient outcomes, and reduce costs. However, the considerable potential of Big Data remains unrealized owing to obstacles including a limited ability to standardize and consolidate data and challenges in sharing data, among a variety of sources, providers, and facilities. Here, we discuss some of these challenges and potential solutions, as well as initiatives that are already underway to take advantage of Big Data. © 2017 New York Academy of Sciences.

  6. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formation before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  7. Understanding Big Data for Industrial Innovation and Design: The Missing Information Systems Perspective

    Directory of Open Access Journals (Sweden)

    Miguel Baptista Nunes

    2017-12-01

    Full Text Available This paper identifies a need to complement the current rich technical and mathematical research agenda on big data with a more information systems and information science strand, which focuses on the business value of big data. An agenda of research for information systems would explore motives for using big data in real organizational contexts, and consider proposed benefits, such as increased effectiveness and efficiency, production of high-quality products/services, creation of added business value, and stimulation of innovation and design. Impacts of such research on the academic community, the industrial and business world, and policy-makers are discussed.

  8. Ultrahigh Sensitivity Piezoresistive Pressure Sensors for Detection of Tiny Pressure.

    Science.gov (United States)

    Li, Hongwei; Wu, Kunjie; Xu, Zeyang; Wang, Zhongwu; Meng, Yancheng; Li, Liqiang

    2018-05-31

    High-sensitivity pressure sensors are crucial for ultrasensitive touch technology and e-skin, especially in the tiny pressure range below 100 Pa. However, it is highly challenging to substantially promote sensitivity beyond the current level of several to two hundred kPa⁻¹, and to improve the detection limit below 0.1 Pa, which is significant for the development of pressure sensors toward ultrasensitive and highly precise detection. Here, we develop an efficient strategy that greatly improves the sensitivity to nearly 2000 kPa⁻¹ by using a short-channel coplanar device structure and a sharp microstructure, which is systematically proposed for the first time and rationalized by mathematical calculation and analysis. Significantly, benefiting from the ultrahigh sensitivity, the detection limit is improved to be as small as 0.075 Pa. The sensitivity and detection limit are both superior to current levels, and far surpass the function of human skin. Furthermore, the sensor shows a fast response time (50 μs), excellent reproducibility and stability, and low power consumption. Remarkably, the sensor shows excellent detection capacity in the tiny pressure range, including LED switching with a pressure of 7 Pa, ringtone (2-20 Pa) recognition, and an ultrasensitive (0.1 Pa) electronic glove. This work represents a performance and strategic progress in the field of pressure sensing.
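
    For context on the headline figures in this abstract, piezoresistive sensitivity is conventionally defined as the relative current (or resistance) change per unit pressure, S = δ(ΔI/I₀)/δP, quoted here in kPa⁻¹. The snippet below is purely illustrative arithmetic with invented numbers; it is not data from the paper.

```python
def sensitivity(i0, i, delta_p_kpa):
    """Piezoresistive sensitivity S = (dI / I0) / dP, in kPa^-1."""
    return ((i - i0) / i0) / delta_p_kpa

# Invented example: a 1 uA baseline current rising to 15 uA under 7 Pa
# (0.007 kPa) gives S = 14 / 0.007 = 2000 kPa^-1, the order of magnitude
# reported in this abstract.
S = sensitivity(1.0e-6, 15.0e-6, 0.007)
print(f"S = {S:.0f} kPa^-1")
```

    Expressing the response this way makes clear why a sub-100 Pa regime is demanding: at a fixed relative current resolution, the smallest detectable pressure scales inversely with S.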

  9. Evaluation of Big Data Containers for Popular Storage, Retrieval, and Computation Primitives in Earth Science Analysis

    Science.gov (United States)

    Das, K.; Clune, T.; Kuo, K. S.; Mattmann, C. A.; Huang, T.; Duffy, D.; Yang, C. P.; Habermann, T.

    2015-12-01

    Data containers are infrastructures that facilitate storage, retrieval, and analysis of data sets. Big data applications in Earth Science require a mix of processing techniques, data sources and storage formats that are supported by different data containers. Some of the most popular data containers used in Earth Science studies are Hadoop, Spark, SciDB, AsterixDB, and RasDaMan. These containers optimize different aspects of the data processing pipeline and are, therefore, suitable for different types of applications. These containers are expected to undergo rapid evolution and the ability to re-test, as they evolve, is very important to ensure the containers are up to date and ready to be deployed to handle large volumes of observational data and model output. Our goal is to develop an evaluation plan for these containers to assess their suitability for Earth Science data processing needs. We have identified a selection of test cases that are relevant to most data processing exercises in Earth Science applications and we aim to evaluate these systems for optimal performance against each of these test cases. The use cases identified as part of this study are (i) data fetching, (ii) data preparation for multivariate analysis, (iii) data normalization, (iv) distance (kernel) computation, and (v) optimization. In this study we develop a set of metrics for performance evaluation, define the specifics of governance, and test the plan on current versions of the data containers. The test plan and the design mechanism are expandable to allow repeated testing with both new containers and upgraded versions of the ones mentioned above, so that we can gauge their utility as they evolve.
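
    The five test cases enumerated in this abstract lend themselves to a common timing harness, so that each container is measured against identical primitives. The sketch below is a minimal, hypothetical illustration (not taken from the study): it benchmarks plain NumPy reference implementations of the normalization and distance-computation primitives, with the container-specific fetch and storage calls deliberately left out.

```python
import time

import numpy as np

def bench(label, fn, repeats=3):
    """Best-of-N wall-clock timing for one data-processing primitive."""
    best = min(_timed(fn) for _ in range(repeats))
    print(f"{label:<20s} {best * 1e3:8.2f} ms")
    return best

def _timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

rng = np.random.default_rng(0)
tile = rng.standard_normal((500, 64))  # stand-in for a fetched data tile (test case i)

def normalize():
    # Test case (iii): z-score each variable (column) of the tile.
    return (tile - tile.mean(axis=0)) / tile.std(axis=0)

def distances():
    # Test case (iv): pairwise squared Euclidean distances between rows.
    sq = (tile ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * tile @ tile.T

bench("normalization", normalize)
bench("distance (kernel)", distances)
```

    The same `bench` wrapper could then be pointed at container-backed implementations of each primitive (e.g. a Spark job or a SciDB query over the same inputs), which is the kind of repeatable, expandable comparison the evaluation plan describes.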

  10. Observatories, think tanks, and community models in the hydrologic and environmental sciences: How does it affect me?

    Science.gov (United States)

    Torgersen, Thomas

    2006-06-01

    Multiple issues in hydrologic and environmental sciences are now squarely in the public focus and require both government and scientific study. Two facts also emerge: (1) The new approach being touted publicly for advancing the hydrologic and environmental sciences is the establishment of community-operated "big science" (observatories, think tanks, community models, and data repositories). (2) There have been important changes in the business of science over the last 20 years that make it important for the hydrologic and environmental sciences to demonstrate the "value" of public investment in hydrological and environmental science. Given that community-operated big science (observatories, think tanks, community models, and data repositories) could become operational, I argue that such big science should not mean a reduction in the importance of single-investigator science. Rather, specific linkages between the large-scale, team-built, community-operated big science and the single investigator should provide context data, observatory data, and systems models for a continuing stream of hypotheses by discipline-based, specialized research and a strong rationale for continued, single-PI ("discovery-based") research. I also argue that big science can be managed to provide a better means of demonstrating the value of public investment in the hydrologic and environmental sciences. Decisions regarding policy will still be political, but big science could provide an integration of the best scientific understanding as a guide for the best policy.

  11. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness

    Science.gov (United States)

    Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen

    2016-01-01

    The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term “Big Data”, which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing. PMID:27763525

  12. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness

    Directory of Open Access Journals (Sweden)

    Ho Ting Wong

    2016-10-01

    Full Text Available The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term “Big Data”, which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.

  13. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness.

    Science.gov (United States)

    Wong, Ho Ting; Chiang, Vico Chung Lim; Choi, Kup Sze; Loke, Alice Yuen

    2016-10-17

    The rapid development of technology has made enormous volumes of data available and achievable anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term "Big Data", which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding on what Big Data is, the aim in this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than would be possible in traditional sampling, and is superior to traditional sampling. Experience gained from the process of using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.

  14. Technology for Mining the Big Data of MOOCs

    Science.gov (United States)

    O'Reilly, Una-May; Veeramachaneni, Kalyan

    2014-01-01

    Because MOOCs bring big data to the forefront, they confront learning science with technology challenges. We describe an agenda for developing technology that enables MOOC analytics. Such an agenda needs to efficiently address the detailed, low level, high volume nature of MOOC data. It also needs to help exploit the data's capacity to reveal, in…

  15. Envisioning the future of 'big data' biomedicine.

    Science.gov (United States)

    Bui, Alex A T; Van Horn, John Darrell

    2017-05-01

    Through the increasing availability of more efficient data collection procedures, biomedical scientists are now confronting ever larger sets of data, often finding themselves struggling to process and interpret what they have gathered, even as still more data continue to accumulate. This torrent of biomedical information necessitates creative thinking about how the data are being generated, how they might best be managed and analyzed, and eventually how they can be transformed into further scientific understanding for improving patient care. Recognizing this as a major challenge, the National Institutes of Health (NIH) has spearheaded the "Big Data to Knowledge" (BD2K) program - the agency's most ambitious biomedical informatics effort to date. In this commentary, we describe how the NIH has taken on "big data" science head-on, how a consortium of leading research centers are developing the means for handling large-scale data, and how such activities are being marshalled for the training of a new generation of biomedical data scientists. All in all, the NIH BD2K program seeks to position data science at the heart of 21st-century biomedical research. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  17. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges to its adoption, and to identify strategies for overcoming those challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of

  18. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energies, and temperatures approaching absolute zero are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility to run simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
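
    As a toy illustration of the Monte Carlo methods the abstract surveys (actual HEP event simulations are far more elaborate), a minimal sketch of a Monte Carlo estimator: sampling random points to approximate a quantity, here π via the area of a quarter circle.

    ```python
    import random

    def mc_estimate_pi(n_samples: int, seed: int = 42) -> float:
        """Estimate pi by sampling points uniformly in the unit square
        and counting the fraction that lands inside the quarter circle."""
        rng = random.Random(seed)  # seeded for reproducibility
        inside = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n_samples

    print(mc_estimate_pi(100_000))  # close to 3.1416 for large n
    ```

    The statistical error of such estimators shrinks as 1/sqrt(n), which is exactly why large-scale parallel runs, and hence large data volumes, are needed for high accuracy.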

  19. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  20. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    Science.gov (United States)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present the prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach, for which the key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single

  1. NASA EOSDIS Evolution in the BigData Era

    Science.gov (United States)

    Lynnes, Christopher

    2015-01-01

    NASA's EOSDIS system faces several challenges in the Big Data Era. Although volumes are large (but not unmanageably so), the variety of different data collections is daunting. That variety also brings with it a large and diverse user community. One key evolution EOSDIS is working toward is to enable more science analysis to be performed close to the data.

  2. Micromechanical characteristics of an Al₂O₃-TiNi ceramic produced in a high-pressure chamber

    Energy Technology Data Exchange (ETDEWEB)

    Barashkov, G.A.; Neshpor, V.S.; Berdikov, V.F.; Pushkarev, O.I.; Lavrenova, E. A.

    1987-03-01

    The micromechanical characteristics of an Al₂O₃-TiNi ceramic produced in high-pressure chambers under conditions of forced mass transfer are investigated experimentally using the microindentation method. The objective of the study is to use micromechanical characteristics to determine the time required for producing an Al₂O₃-TiNi ceramic with a fully formed structure. It is found that the process of forced mass transfer and crystallization is completed within 60-120 s.

  3. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  4. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework, we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  5. The PACA Project: Convergence of Scientific Research, Social Media and Citizen Science in the Era of Astronomical Big Data

    Science.gov (United States)

    Yanamandra-Fisher, Padma A.

    2015-08-01

    The Pro-Am Collaborative Astronomy (PACA) project promotes and supports professional-amateur astronomer collaboration in scientific research via social media and has been implemented in several comet observing campaigns. In 2014, two comet observing campaigns involving pro-am collaborations were initiated: (1) C/2013 A1 (Siding Spring) and (2) 67P/Churyumov-Gerasimenko (CG), target of the ESA/Rosetta mission. The evolving need for individually customized observing campaigns has been incorporated into the evolution of The PACA Project, which currently focuses on comets: from supporting observing campaigns of current comets, legacy data, and historical comets; interconnected with social media and a set of shareable documents addressing observational strategies; to consistent standards for data and for data access, use, and storage, to align with the needs of professional observers in the era of astronomical big data. The empowerment of amateur astronomers vis-à-vis their partnerships with professional scientists creates a new demographic of data scientists, enabling citizen science with the integrated data from both the professional and amateur communities. While PACA identifies a consistent collaborative approach to pro-am collaborations, given the volume of data generated for each campaign, new ways of rapid data analysis, mining, access and storage are needed. Several interesting results emerged from the synergistic inclusion of both social media and amateur astronomers. The PACA Project is expanding to include pro-am collaborations on other solar system objects; to allow for immersive outreach; and to include various types of astronomical communities, ranging from individuals to astronomical societies and telescopic networks. Enabling citizen science research in the era of astronomical big data is a challenge that requires innovative approaches and the integration of professional and amateur astronomers with data scientists; some examples of recent projects will be highlighted.

  6. Resonance – Journal of Science Education | Indian Academy of ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 4. Motion of a Tiny Tool Thrown by an Astronaut towards another Astronaut inside a Spinning Space Vehicle in a State of Free Fall Revisited. S N Maitra. Classroom Volume 15 Issue 4 April 2010 pp 355-362 ...

  7. Measuring adolescent science motivation

    Science.gov (United States)

    Schumm, Maximiliane F.; Bogner, Franz X.

    2016-02-01

    To monitor science motivation, 232 tenth graders of the college preparatory level ('Gymnasium') completed the Science Motivation Questionnaire II (SMQ-II). Additionally, personality data were collected using a 10-item version of the Big Five Inventory. A subsequent exploratory factor analysis based on the eigenvalue-greater-than-one criterion extracted a loading pattern that, in principle, followed the SMQ-II frame. Two items were dropped due to inappropriate loadings. The remaining SMQ-II seems to provide a consistent scale matching the findings in the literature; possible shortcomings of the scale are nevertheless also discussed. The data showed a higher perceived self-determination in girls, which seems to be compensated by their lower self-efficacy beliefs, leading to equality of females and males in overall science motivation scores. Additionally, the Big Five personality traits and science motivation components show little relationship.
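
    The eigenvalue-greater-than-one (Kaiser) criterion used for factor extraction can be sketched in a few lines; the 4-item correlation matrix below is hypothetical, not the SMQ-II data.

    ```python
    import numpy as np

    def kaiser_retained_factors(corr: np.ndarray) -> int:
        """Count the factors retained under the Kaiser criterion:
        eigenvalues of the item correlation matrix greater than 1."""
        eigvals = np.linalg.eigvalsh(corr)  # eigenvalues of symmetric matrix
        return int(np.sum(eigvals > 1.0))

    # Hypothetical correlation matrix: items 1-2 and items 3-4 form two
    # strongly correlated pairs, so two factors should be retained.
    corr = np.array([
        [1.0, 0.8, 0.1, 0.1],
        [0.8, 1.0, 0.1, 0.1],
        [0.1, 0.1, 1.0, 0.7],
        [0.1, 0.1, 0.7, 1.0],
    ])
    print(kaiser_retained_factors(corr))  # → 2
    ```

    The rationale is that a retained factor should explain at least as much variance as a single standardized item (whose variance is 1).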

  8. Data Science Methodology for Cybersecurity Projects

    OpenAIRE

    Foroughi, Farhad; Luksch, Peter

    2018-01-01

    Cyber-security solutions are traditionally static and signature-based. These traditional solutions, combined with analytic models, machine learning and big data, could be improved by automatically triggering mitigation or providing relevant awareness to control or limit the consequences of threats. This kind of intelligent solution is covered in the context of Data Science for Cyber-security. Data Science plays a significant role in cyber-security by utilising the power of data (and big data),...

  9. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    Science.gov (United States)

    Murata, K. T.

    2014-12-01

    Data-intensive or data-centric science is the 4th paradigm, after observational and/or experimental science (1st paradigm), theoretical science (2nd paradigm) and numerical science (3rd paradigm). A science cloud is an infrastructure for this 4th methodology. The NICT Science Cloud is designed for big data sciences of Earth, space and other domains, based on modern informatics and information technologies [1]. Data flow on the cloud passes through the following three techniques: (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications for these techniques have been designed and implemented. We mash up these tools and applications on the NICT Science Cloud to build customized systems for each project. In this paper, we discuss science data processing through these three steps. For big data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) and data crawling tools, NICTY/DLA and the Wide-area Observation Network Monitoring (WONM) system, respectively. Data files are saved on the cloud storage system according to both the data preservation policy and the data processing plan. The storage system is built on distributed file system middleware (Gfarm: GRID datafarm). It is effective since disaster recovery (DR) and parallel data processing are carried out simultaneously without moving big data from storage to storage. Data files are managed via our Web application, WSDBank (World Science Data Bank). The big data on the cloud are processed via Pwrake, a workflow tool with high I/O bandwidth. There are several visualization tools on the cloud: VirtualAurora for the magnetosphere and ionosphere, VDVGE for Google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. There are 30 projects running on the NICT

  10. Geospatial big data and cartography : research challenges and opportunities for making maps that matter

    OpenAIRE

    Robinson, Anthony C.; Demsar, Urska; Moore, Antoni B.; Buckley, Aileen; Jiang, Bin; Field, Kenneth; Kraak, Menno-Jan; Camboim, Silvana P; Sluter, Claudia R

    2017-01-01

    Geospatial big data present a new set of challenges and opportunities for cartographic researchers in technical, methodological, and artistic realms. New computational and technical paradigms for cartography are accompanying the rise of geospatial big data. Additionally, the art and science of cartography needs to focus its contemporary efforts on work that connects to outside disciplines and is grounded in problems that are important to humankind and its sustainability. Following the develop...

  11. (Updated) Nanotechnology: Understanding the Tiny Particles That May Save a Life | Poster

    Science.gov (United States)

    By Nathalie Walker, Guest Writer Could nanotechnology—the study of tiny matter ranging in size from 1 to 200 nanometers—be the future of cancer treatment? Although it is a relatively new field in cancer research, nanotechnology is not new to everyday life. Have you ever thought about the tennis ball you’ve thrown with your dog at the park and wondered what it is made of?

  12. Fuzzy VIKOR approach for selection of big data analyst in procurement management

    Directory of Open Access Journals (Sweden)

    Surajit Bag

    2016-07-01

    Full Text Available Background: Big data and predictive analysis have been hailed as the fourth paradigm of science. Big data and analytics are critical to the future of business sustainability. The demand for data scientists is increasing with the dynamic nature of businesses, thus making it indispensable to manage big data, derive meaningful results and interpret management decisions. Objectives: The purpose of this study was to provide a brief conceptual review of big data and analytics and further illustrate the use of a multicriteria decision-making technique in selecting the right skilled candidate for big data and analytics in procurement management. Method: It is important for firms to select and recruit the right data analyst, both in terms of skill sets and scope of analysis. Such a selection problem is a complex, multicriteria decision-making one, dealing with both qualitative and quantitative factors. In the current study, an application of the fuzzy VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) method was used to solve the big data analyst selection problem. Results: From this study, it was identified that Technical knowledge (C1), Intellectual curiosity (C4) and Business acumen (C5) are the strongest influential criteria and must be present in the candidate for the big data and analytics job. Conclusion: Fuzzy VIKOR is well suited to this kind of multicriteria decision-making problem. This study will assist human resource managers and procurement managers in selecting the right workforce for big data analytics.
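
    The study applies the fuzzy variant of VIKOR; as a rough sketch of the method's core, the classical (crisp) VIKOR ranking is shown below (the fuzzy version additionally represents ratings as fuzzy numbers and defuzzifies them). The candidates, criteria and weights here are hypothetical, not the study's data.

    ```python
    import numpy as np

    def vikor(scores: np.ndarray, weights: np.ndarray, v: float = 0.5) -> np.ndarray:
        """Classical (crisp) VIKOR compromise ranking.

        scores  : (alternatives x criteria) matrix, all criteria benefit-type
        weights : criterion weights summing to 1
        v       : weight of the 'majority' strategy (commonly 0.5)
        Returns the Q index per alternative; lower Q ranks better.
        Assumes alternatives are not all tied on S or on R.
        """
        f_best = scores.max(axis=0)   # ideal value per criterion
        f_worst = scores.min(axis=0)  # anti-ideal value per criterion
        norm = (f_best - scores) / (f_best - f_worst)
        S = (weights * norm).sum(axis=1)  # group utility (weighted sum)
        R = (weights * norm).max(axis=1)  # individual regret (worst criterion)
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))
        return Q

    # Hypothetical candidates scored on three criteria
    # (technical knowledge, intellectual curiosity, business acumen).
    scores = np.array([[7.0, 8.0, 6.0],
                       [9.0, 6.0, 7.0],
                       [6.0, 7.0, 9.0]])
    weights = np.array([0.5, 0.25, 0.25])
    Q = vikor(scores, weights)
    print(Q.argmin())  # index of the best-ranked candidate → 1
    ```

    The compromise character of VIKOR comes from blending S (total weighted distance from the ideal) with R (the single worst criterion), so a candidate cannot win purely by averaging away one severe weakness.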

  13. [Big data in imaging].

    Science.gov (United States)

    Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd

    2018-04-01

    Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.

  14. Small data, data infrastructures and big data (Working Paper 1)

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2014-01-01

    The production of academic knowledge has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will, however, continue to be important in the future because of their utility in answer...

  15. Analysis of the transformations temperatures of helicoidal Ti-Ni actuators using computational numerical methods

    Directory of Open Access Journals (Sweden)

    Carlos Augusto do N. Oliveira

    2013-01-01

    Full Text Available The development of shape memory actuators has enabled noteworthy applications in mechanical engineering, robotics, aerospace, and the oil industry, as well as in medicine. These applications have targeted miniaturization and making full use of available space. This article analyses a Ti-Ni shape memory actuator used as part of a flow control system. A Ti-Ni spring actuator was subjected to thermomechanical training, and parameters such as transformation temperature, thermal hysteresis and shape memory effect performance were investigated. These parameters were important for understanding the behavior of the actuator in relation to the martensitic phase transformation during the heating and cooling cycles it undergoes in service. Multiple regression was used as a computational tool for analysing the data in order to simulate and predict the results for stresses and cycles for which experimental data were not available. The results obtained from the training cycles enable actuators to be characterized and the numerical simulation to be validated.
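
    The multiple-regression interpolation described above can be sketched with ordinary least squares; the stress/cycle values and responses below are invented for illustration and are not the study's measurements.

    ```python
    import numpy as np

    def fit_multiple_regression(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Fit y ≈ b0 + b1*x1 + b2*x2 + ... by ordinary least squares."""
        A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs

    def predict(coeffs: np.ndarray, X: np.ndarray) -> np.ndarray:
        A = np.column_stack([np.ones(len(X)), X])
        return A @ coeffs

    # Hypothetical training data: (stress in MPa, cycle count) -> response,
    # generated from an exactly linear relation for illustration.
    X = np.array([[100.0, 10.0], [150.0, 20.0], [200.0, 15.0], [250.0, 40.0]])
    y = np.array([57.0, 61.5, 63.0, 70.5])
    coeffs = fit_multiple_regression(X, y)
    # Predict at a stress/cycle combination not in the training set.
    print(predict(coeffs, np.array([[175.0, 25.0]])))
    ```

    This is the basic mechanism behind regression-based interpolation: fit coefficients on the measured stress/cycle grid, then evaluate the fitted model at untested combinations.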

  16. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Science.gov (United States)

    Batyaev, V. F.; Skliarov, S. V.

    2018-01-01

    The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  17. Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2018-01-01

    Full Text Available The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of minimal detectable mass on the location of fissile materials inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the spatial heterogeneity of minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.

  18. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  19. The Diatoms: Big Significance of Tiny Glass Houses

    Indian Academy of Sciences (India)

    Permanent link: https://www.ias.ac.in/article/fulltext/reso/020/10/0919-0930. Keywords. Algae; primary production; frustule; nanotechnology; silica cell wall. Author Affiliations. Aditi Kale1 Balasubramanian Karthick1. Biodiversity and Paleobiology Group Agharkar Research Institute G G Agarkar Road, Pune 411004, India ...

  20. The use of big data in transfusion medicine.

    Science.gov (United States)

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  1. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  2. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  3. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

Full Text Available Given the importance that the term Big Data has acquired, this study sought to examine and analyze exhaustively the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; finally, it sought to identify the most relevant characteristics of Big Data management, so that everything concerning the central topic of the research could be understood. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; surveying Big Data technologies; introducing some NoSQL databases, which make it possible to process data in unstructured formats; and presenting data models and the technologies for analyzing them, concluding with some benefits of Big Data. The methodological design of the research was non-experimental, since no variables were manipulated, and exploratory, since this research begins to explore the Big Data landscape.

  4. Big agronomic data validates an oxymoron: Sustainable intensification under climate change

    Science.gov (United States)

    Crop science is increasingly embracing big data to reconcile the apparent rift between intensification of food production and sustainability of a steadily stressed production base. A strategy based on long-term agroecosystem research and modeling simulation of crops, crop rotations and cropping sys...

  5. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  6. Economics and econophysics in the era of Big Data

    Science.gov (United States)

    Cheong, Siew Ann

    2016-12-01

There is an undeniable disconnect between theory-heavy economics and the real world, and some cross-pollination of ideas with econophysics, which is more balanced between data and models, might help economics along the way to becoming a truly scientific enterprise. With the coming of the era of Big Data, this transformation of economics into a data-driven science is becoming more urgent. In this article, I use the story of Kepler's discovery of his three laws of planetary motion to enlarge the framework of the scientific approach, from one that focuses on experimental sciences, to one that accommodates observational sciences, and further to one that embraces data mining and machine learning. I distinguish between the ontological values of Kepler's Laws vis-à-vis Newton's Laws, and argue that the latter are more fundamental because they are able to explain the former. I then argue that the fundamental laws of economics lie not in mathematical equations, but in models of adaptive economic agents. With this shift in mindset, it becomes possible to think about how interactions between agents can lead to the emergence of multiple stable states and critical transitions, and complex adaptive policies and regulations that might actually work in the real world. Finally, I discuss how Big Data, exploratory agent-based modeling, and predictive agent-based modeling can come together in a unified framework to make economics a true science.

  7. Electron irradiation effect on the reverse phase transformation temperatures in TiNi shape memory alloy thin films

    International Nuclear Information System (INIS)

    Wang, Z.G.; Zu, X.T.; Fu, Y.Q.; Zhu, S.; Wang, L.M.

    2005-01-01

In this work, Ti-Ni shape memory alloy thin films were irradiated with 1.7 MeV electrons at three fluences: 4 × 10²⁰, 7 × 10²⁰ and 1 × 10²¹/m². The influence of electron irradiation on the transformation behavior of the TiNi thin films was investigated by differential scanning calorimetry. The transformation temperatures As and Af shifted to higher temperatures after electron irradiation, and the martensite was stabilized. The electron irradiation effect can easily be eliminated by one thermal cycle. The shifts of the transformation temperatures can be explained by the change in the potential energy barrier and in the coherency energy between the parent phase and the martensite after irradiation

  8. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.

  9. [Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016].

    Science.gov (United States)

    Moebus, Susanne; Kuhn, Joseph; Hoffmann, Wolfgang

    2017-11-01

    Big Data is a diffuse term, which can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained according to the intention of use, information efficiency, prediction and clustering. Using the example of application in science, patient care, equal opportunities and smart cities, typical challenges and open questions of Big Data for PH are outlined. In addition to the inevitable use of Big Data, networking is necessary, especially with knowledge-carriers and decision-makers from politics and health care practice. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Genome Variation Map: a data repository of genome variations in BIG Data Center

    OpenAIRE

    Song, Shuhui; Tian, Dongmei; Li, Cuiping; Tang, Bixia; Dong, Lili; Xiao, Jingfa; Bao, Yiming; Zhao, Wenming; He, Hang; Zhang, Zhang

    2017-01-01

    Abstract The Genome Variation Map (GVM; http://bigd.big.ac.cn/gvm/) is a public data repository of genome variations. As a core resource in the BIG Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences, GVM dedicates to collect, integrate and visualize genome variations for a wide range of species, accepts submissions of different types of genome variations from all over the world and provides free open access to all publicly available data in support of worldwide research a...

  11. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, related to the storage, analytics and visualization of big data; the human aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.

  12. Automated protocols for spaceborne sub-meter resolution "Big Data" products for Earth Science

    Science.gov (United States)

    Neigh, C. S. R.; Carroll, M.; Montesano, P.; Slayback, D. A.; Wooten, M.; Lyapustin, A.; Shean, D. E.; Alexandrov, O.; Macander, M. J.; Tucker, C. J.

    2017-12-01

The volume of available remotely sensed data has grown to exceed petabytes per year, while the costs of data, storage systems and compute power have dropped exponentially. This has opened the door for "Big Data" processing systems with high-end computing (HEC) such as the Google Earth Engine, NASA Earth Exchange (NEX), and NASA Center for Climate Simulation (NCCS). At the same time, commercial very high-resolution (VHR) satellites have grown into a constellation with global repeat coverage that can support existing NASA Earth observing missions with stereo and super-spectral capabilities. Through agreements with the National Geospatial-Intelligence Agency, NASA Goddard Space Flight Center is acquiring petabytes of global sub-meter to 4 meter resolution imagery from the WorldView-1, -2 and -3, QuickBird-2, GeoEye-1 and IKONOS-2 satellites. These data are available at no direct cost for the enhancement of Earth observation research that supports US government interests. We are currently developing automated protocols for generating VHR products to support NASA's Earth observing missions. These include two primary foci: 1) on-demand VHR 1/2° ortho mosaics - process VHR imagery to surface reflectance, then orthorectify and co-register multi-temporal 2 m multispectral imagery compiled into user-defined regional mosaics. This will provide an easy-access dataset to investigate biodiversity, tree canopy closure, surface water fraction, and cropped area for smallholder agriculture; and 2) on-demand VHR digital elevation models (DEMs) - extract VHR DEMs from stereo VHR imagery with the NASA Ames Stereo Pipeline. This will benefit Earth surface studies of the cryosphere (glacier mass balance, flow rates and snow depth), hydrology (lake/water body levels, landslides, subsidence) and biosphere (forest structure, canopy height/cover), among others. Recent examples of products used in NASA Earth Science projects will be provided. This HEC API could foster surmounting prior spatial-temporal limitations while

  13. Evolution of the Air Toxics under the Big Sky Program

    Science.gov (United States)

    Marra, Nancy; Vanek, Diana; Hester, Carolyn; Holian, Andrij; Ward, Tony; Adams, Earle; Knuth, Randy

    2011-01-01

    As a yearlong exploration of air quality and its relation to respiratory health, the "Air Toxics Under the Big Sky" program offers opportunities for students to learn and apply science process skills through self-designed inquiry-based research projects conducted within their communities. The program follows a systematic scope and sequence…

  14. Exogenously applied D-pinitol and D-chiro-inositol modify the accumulation of α-D-galactosides in developing tiny vetch (Vicia hirsuta [L.] S.F. Gray) seeds

    Directory of Open Access Journals (Sweden)

    Lesław B. Lahuta

    2011-01-01

Full Text Available In the present study we investigated the effect of exogenous cyclitols on the accumulation of their galactosides and raffinose family oligosaccharides (RFOs), as well as on some enzymes important for their biosynthesis, in seeds of tiny vetch (Vicia hirsuta [L.] S.F. Gray). Immature seeds incubated for 6 days with D-chiro-inositol (which does not occur naturally in tiny vetch seeds) accumulated the cyclitol and its galactosides (fagopyritols B1 and B2). A short 4-hour incubation with D-chiro-inositol, followed by a slow desiccation process, caused accumulation of the free cyclitol only, without biosynthesis of its galactosides. Feeding D-chiro-inositol to pods of tiny vetch induced accumulation of high levels of its galactosides (fagopyritols B1, B2 and B3) in maturing seeds. Similarly, feeding D-pinitol increased the accumulation of its mono-, di- and tri-galactosides (GPA, GPB, DGPA and TGPA) in tiny vetch seeds. Accumulation of both cyclitols and their galactosides drastically reduced the accumulation of verbascose. Inhibition of RFO biosynthesis by elevated levels of free cyclitols suggests some competition between the formation of the two types of galactosides and a similarity between the two biosynthetic routes in tiny vetch seeds. Galactinol synthase (GolS) from tiny vetch seeds was able to use D-chiro-inositol as a galactosyl acceptor instead of myo-inositol. In the presence of both cyclitols as substrates, GolS synthesized their galactosides, fagopyritol B1 and galactinol; however, formation of galactinol was more efficient than that of fagopyritol B1. D-chiro-Inositol and D-pinitol at concentrations several-fold higher than myo-inositol had an inhibitory effect on GolS. Thus, we suggest that the level of free cyclitols can influence the rate of galactinol biosynthesis and the further accumulation of RFOs and galactosyl cyclitols in tiny vetch seeds.

  15. Geologic map of Big Bend National Park, Texas

    Science.gov (United States)

    Turner, Kenzie J.; Berry, Margaret E.; Page, William R.; Lehman, Thomas M.; Bohannon, Robert G.; Scott, Robert B.; Miggins, Daniel P.; Budahn, James R.; Cooper, Roger W.; Drenth, Benjamin J.; Anderson, Eric D.; Williams, Van S.

    2011-01-01

    The purpose of this map is to provide the National Park Service and the public with an updated digital geologic map of Big Bend National Park (BBNP). The geologic map report of Maxwell and others (1967) provides a fully comprehensive account of the important volcanic, structural, geomorphological, and paleontological features that define BBNP. However, the map is on a geographically distorted planimetric base and lacks topography, which has caused difficulty in conducting GIS-based data analyses and georeferencing the many geologic features investigated and depicted on the map. In addition, the map is outdated, excluding significant data from numerous studies that have been carried out since its publication more than 40 years ago. This report includes a modern digital geologic map that can be utilized with standard GIS applications to aid BBNP researchers in geologic data analysis, natural resource and ecosystem management, monitoring, assessment, inventory activities, and educational and recreational uses. The digital map incorporates new data, many revisions, and greater detail than the original map. Although some geologic issues remain unresolved for BBNP, the updated map serves as a foundation for addressing those issues. Funding for the Big Bend National Park geologic map was provided by the United States Geological Survey (USGS) National Cooperative Geologic Mapping Program and the National Park Service. The Big Bend mapping project was administered by staff in the USGS Geology and Environmental Change Science Center, Denver, Colo. Members of the USGS Mineral and Environmental Resources Science Center completed investigations in parallel with the geologic mapping project. Results of these investigations addressed some significant current issues in BBNP and the U.S.-Mexico border region, including contaminants and human health, ecosystems, and water resources. Funding for the high-resolution aeromagnetic survey in BBNP, and associated data analyses and

  16. Agrupamentos epistemológicos de artigos publicados sobre big data analytics

    Directory of Open Access Journals (Sweden)

    Patricia Kuzmenko FURLAN

Full Text Available Abstract The big data era is already a reality for companies and individuals, and the academic literature on the topic has grown rapidly in recent years. This article aimed to identify the main niches and strands of publication on big data analytics. The methodological choice was to conduct a bibliometric survey in the ISI Web of Science database, using that term to focus on big data management practices. Five distinct groups could be identified among the articles found: the evolution of big data; management, business and strategy; human behavior and sociocultural aspects; data mining and knowledge generation; and the Internet of Things. The study concluded that the topic is emergent and poorly consolidated, showing great variation in the terms employed, which affects bibliographic searches. As a complementary result, the main keywords used in publications on big data analytics were identified, which contributes to the literature searches of future studies.

  17. Adoption of geodemographic and ethno-cultural taxonomies for analysing Big Data

    Directory of Open Access Journals (Sweden)

    Richard James Webber

    2015-05-01

    Full Text Available This paper is intended to contribute to the discussion of the differential level of adoption of Big Data among research communities. Recognising the impracticality of conducting an audit across all forms and uses of Big Data, we have restricted our enquiry to one very specific form of Big Data, namely general purpose taxonomies, of which Mosaic, Acorn and Origins are examples, that rely on data from a variety of Big Data feeds. The intention of these taxonomies is to enable the records of consumers and citizens held on Big Data datasets to be coded according to type of residential neighbourhood or ethno-cultural heritage without any use of questionnaires. Based on our respective experience in the academic social sciences, in government and in the design and marketing of these taxonomies, we identify the features of these classifications which appear to render them attractive or problematic to different categories of potential user or researcher depending on how the relationship is conceived. We conclude by identifying seven classifications of user or potential user who, on account of their background, current position and future career expectations, tend to respond in different ways to the opportunity to adopt these generic systems as aids for understanding social processes.

  18. Growth and surface morphology of ion-beam sputtered Ti-Ni thin films

    International Nuclear Information System (INIS)

    Rao, Ambati Pulla; Sunandana, C.S.

    2008-01-01

Titanium-nickel thin films have been deposited on float glass substrates by ion beam sputtering in a 100% pure argon atmosphere. Sputtering is predominant for incident-ion energies from 1000 eV to 100 keV. The as-deposited films were investigated by X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). In this paper we studied the surface morphology and elemental composition through AFM and XPS, respectively. Core level as well as valence band spectra of ion-beam sputtered Ti-Ni thin films at various Ar gas flow rates (5, 7 and 12 sccm) show that the thin film deposited at 3 sccm possesses two distinct peaks at binding energies of 458.55 eV and 464.36 eV, mainly due to TiO₂. Upon increasing the Ar flow rate, oxidation of Ti-Ni is reduced and the Ti 2p peaks begin approaching those of pure elemental Ti, observed at binding energy positions of 454.7 eV and 460.5 eV. AFM results show that the average grain size and roughness decrease, upon increasing the Ar gas flow rate, from 2.90 μm to 0.096 μm and from 16.285 nm to 1.169 nm, respectively

  19. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

Following the holographic description of linear dilaton null cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  20. Next Generation Workload Management and Analysis System for Big Data

    Energy Technology Data Exchange (ETDEWEB)

    De, Kaushik [Univ. of Texas, Arlington, TX (United States)

    2017-04-24

    We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC), and the AMS experiment at the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team – for the sake of clarity the work of the next generation team will be referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.

  1. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

Denmark has a digital infrastructure, a registration culture, and IT-competent employees and customers, which make a leading position possible, but only if companies ready themselves for the next big data wave.

  2. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.

  3. Has the time come for big science in wildlife health?

    Science.gov (United States)

    Sleeman, Jonathan M.

    2013-01-01

    The consequences of wildlife emerging diseases are global and profound with increased burden on the public health system, negative impacts on the global economy, declines and extinctions of wildlife species, and subsequent loss of ecological integrity. Examples of health threats to wildlife include Batrachochytrium dendrobatidis, which causes a cutaneous fungal infection of amphibians and is linked to declines of amphibians globally; and the recently discovered Pseudogymnoascus (Geomyces) destructans, the etiologic agent of white nose syndrome which has caused precipitous declines of North American bat species. Of particular concern are the novel pathogens that have emerged as they are particularly devastating and challenging to manage. A big science approach to wildlife health research is needed if we are to make significant and enduring progress in managing these diseases. The advent of new analytical models and bench assays will provide us with the mathematical and molecular tools to identify and anticipate threats to wildlife, and understand the ecology and epidemiology of these diseases. Specifically, new molecular diagnostic techniques have opened up avenues for pathogen discovery, and the application of spatially referenced databases allows for risk assessments that can assist in targeting surveillance. Long-term, systematic collection of data for wildlife health and integration with other datasets is also essential. Multidisciplinary research programs should be expanded to increase our understanding of the drivers of emerging diseases and allow for the development of better disease prevention and management tools, such as vaccines. Finally, we need to create a National Fish and Wildlife Health Network that provides the operational framework (governance, policies, procedures, etc.) by which entities with a stake in wildlife health cooperate and collaborate to achieve optimal outcomes for human, animal, and ecosystem health.

  4. The Big Bang: UK Young Scientists' and Engineers' Fair 2010

    Science.gov (United States)

    Allison, Simon

    2010-01-01

    The Big Bang: UK Young Scientists' and Engineers' Fair is an annual three-day event designed to promote science, technology, engineering and maths (STEM) careers to young people aged 7-19 through experiential learning. It is supported by stakeholders from business and industry, government and the community, and brings together people from various…

  5. Population-based imaging biobanks as source of big data.

    Science.gov (United States)

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  6. Functional connectomics from a "big data" perspective.

    Science.gov (United States)

    Xia, Mingrui; He, Yong

    2017-10-15

    In the last decade, explosive growth regarding functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinic) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Small Artifacts - Big Technologies

    DEFF Research Database (Denmark)

    Kreiner, Kristian

    2005-01-01

    The computer IC is the heart of the information and telecommunication technology. It is a tiny artifact, but with incredible organizing powers. We use this physical artifact as the location for studying central problems of the knowledge economy. First, the paper describes the history of chip design...

  8. The dynamics of big data and human rights: the case of scientific research.

    Science.gov (United States)

    Vayena, Effy; Tasioulas, John

    2016-12-28

    In this paper, we address the complex relationship between big data and human rights. Because this is a vast terrain, we restrict our focus in two main ways. First, we concentrate on big data applications in scientific research, mostly health-related research. And, second, we concentrate on two human rights: the familiar right to privacy and the less well-known right to science. Our contention is that human rights interact in potentially complex ways with big data, not only constraining it, but also enabling it in various ways; and that such rights are dynamic in character, rather than fixed once and for all, changing in their implications over time in line with changes in the context we inhabit, and also as they interact among themselves in jointly responding to the opportunities and risks thrown up by a changing world. Understanding this dynamic interaction of human rights is crucial for formulating an ethic tailored to the realities-the new capabilities and risks-of the rapidly evolving digital environment.This article is part of the themed issue 'The ethical impact of data science'. © 2016 The Author(s).

  9. How Does National Scientific Funding Support Emerging Interdisciplinary Research: A Comparison Study of Big Data Research in the US and China

    Science.gov (United States)

    Huang, Ying; Zhang, Yi; Youtie, Jan; Porter, Alan L.; Wang, Xuefeng

    2016-01-01

    How do funding agencies ramp up their capabilities to support research in a rapidly emerging area? This paper addresses this question through a comparison of research proposals awarded by the US National Science Foundation (NSF) and the National Natural Science Foundation of China (NSFC) in the field of Big Data. Big data is characterized by its size and the difficulties in capturing, curating, managing and processing it in reasonable periods of time. Although Big Data has its legacy in longstanding information technology research, the field grew very rapidly over a short period. We find that the extent of interdisciplinarity is a key aspect of how these funding agencies address the rise of Big Data. Our results show that both agencies have been able to marshal funding to support Big Data research in multiple areas, but the NSF relies to a greater extent on multi-program funding from different fields. We discuss how these interdisciplinary approaches reflect the research hot-spots and innovation pathways in these two countries. PMID:27219466

  10. Lattice QCD simulations on big cats, sea monsters and clock towers

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint, E-mail: bjoo@jlab.or [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)

    2009-07-01

    We present details of lattice QCD computations we are performing on the Cray XT series of computers, from BigBen - an XT3 hosted at the Pittsburgh Supercomputing Center (PSC) - through Jaguar (XT4) and Kraken (XT5) - which are hosted at the National Center for Computational Science (NCCS) and the National Institute of Computational Science (NICS), respectively, at Oak Ridge National Laboratory (ORNL). We discuss algorithmic tuning to make the computation more efficient and present some recent results.

  11. Lattice QCD simulations on big cats, sea monsters and clock towers

    International Nuclear Information System (INIS)

    Joo, Balint

    2009-01-01

    We present details of lattice QCD computations we are performing on the Cray XT series of computers, from BigBen - an XT3 hosted at the Pittsburgh Supercomputing Center (PSC) - through Jaguar (XT4) and Kraken (XT5) - which are hosted at the National Center for Computational Science (NCCS) and the National Institute of Computational Science (NICS), respectively, at Oak Ridge National Laboratory (ORNL). We discuss algorithmic tuning to make the computation more efficient and present some recent results.

  12. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply nonexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  13. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  14. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  15. Measuring Adolescent Science Motivation

    Science.gov (United States)

    Schumm, Maximiliane F.; Bogner, Franz X.

    2016-01-01

    To monitor science motivation, 232 tenth graders of the college preparatory level ("Gymnasium") completed the Science Motivation Questionnaire II (SMQ-II). Additionally, personality data were collected using a 10-item version of the Big Five Inventory. A subsequent exploratory factor analysis based on the eigenvalue-greater-than-one…

  16. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  17. NOAA's Big Data Partnership and Applications to Ocean Sciences

    Science.gov (United States)

    Kearns, E. J.

    2016-02-01

    New opportunities for the distribution of NOAA's oceanographic and other environmental data are being explored through NOAA's Big Data Partnership (BDP) with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Corp. and the Open Cloud Consortium. This partnership was established in April 2015 through Cooperative Research and Development Agreements, and is seeking new, financially self-sustaining collaborations between the Partners and the federal government centered upon NOAA's data and their potential value in the information marketplace. We will discuss emerging opportunities for collaboration among businesses and NOAA, progress in making NOAA's ocean data more widely accessible through the Partnerships, and applications based upon this access to NOAA's data.

  18. Cosmic Heritage Evolution from the Big Bang to Conscious Life

    CERN Document Server

    Shaver, Peter

    2011-01-01

    This book follows the evolutionary trail all the way from the Big Bang 13.7 billion years ago to conscious life today. It is an accessible introductory book written for the interested layperson – anyone interested in the ‘big picture’ coming from modern science. It covers a wide range of topics including the origin and evolution of our universe, the nature and origin of life, the evolution of life including questions of birth and death, the evolution of cognition, the nature of consciousness, the possibility of extraterrestrial life and the future of the universe. The book is written in a narrative style, as these topics are all parts of a single story. It concludes with a discussion on the nature and future of science.  “Peter Shaver has written engagingly for anyone curious about the world we inhabit.  If you'd like to know how the Universe began, where the chemical elements originated, how life may have started on Earth, how man, ants and bacteria are related to each other, or why we humans think...

  19. Linking Big and Small Data Across the Social, Engineering, and Earth Sciences

    Science.gov (United States)

    Chen, R. S.; de Sherbinin, A. M.; Levy, M. A.; Downs, R. R.

    2014-12-01

    The challenges of sustainable development cut across the social, health, ecological, engineering, and Earth sciences, across a wide range of spatial and temporal scales, and across the spectrum from basic to applied research and decision making. The rapidly increasing availability of data and information in digital form from a variety of data repositories, networks, and other sources provides new opportunities to link and integrate both traditional data holdings as well as emerging "big data" resources in ways that enable interdisciplinary research and facilitate the use of objective scientific data and information in society. Taking advantage of these opportunities not only requires improved technical and scientific data interoperability across disciplines, scales, and data types, but also concerted efforts to bridge gaps and barriers between key communities, institutions, and networks. Given the long time perspectives required in planning sustainable approaches to development, it is also imperative to address user requirements for long-term data continuity and stewardship by trustworthy repositories. We report here on lessons learned by CIESIN working on a range of sustainable development issues to integrate data across multiple repositories and networks. This includes CIESIN's roles in developing policy-relevant climate and environmental indicators, soil data for African agriculture, and exposure and risk measures for hazards, disease, and conflict, as well as CIESIN's participation in a range of national and international initiatives related both to sustainable development and to open data access, interoperability, and stewardship.

  20. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions, based on the fact that we identify patterns in the data rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  1. Big Data access and infrastructure for modern biology: case studies in data repository utility.

    Science.gov (United States)

    Boles, Nathan C; Stone, Tyler; Bergeron, Charles; Kiehl, Thomas R

    2017-01-01

    Big Data is no longer solely the purview of big organizations with big resources. Today's routine tools and experimental methods can generate large slices of data. For example, high-throughput sequencing can quickly interrogate biological systems for the expression levels of thousands of different RNAs, examine epigenetic marks throughout the genome, and detect differences in the genomes of individuals. Multichannel electrophysiology platforms produce gigabytes of data in just a few minutes of recording. Imaging systems generate videos capturing biological behaviors over the course of days. Thus, any researcher now has access to a veritable wealth of data. However, the ability of any given researcher to utilize that data is limited by her/his own resources and skills for downloading, storing, and analyzing the data. In this paper, we examine the necessary resources required to engage Big Data, survey the state of modern data analysis pipelines, present a few data repository case studies, and touch on current institutions and programs supporting the work that relies on Big Data. © 2016 New York Academy of Sciences.

  2. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    Science.gov (United States)

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system: 3D GAIT, followed by how the studies in the field of gait biomechanics fit the quantities in the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
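
    The dimensionality reduction and classification modules listed above can be sketched with a minimal numpy-only stand-in (hypothetical feature values and group offset; PCA via SVD plus a nearest-centroid rule, not the authors' 3D GAIT pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical gait data: 40 subjects x 30 kinematic features, two groups
# (e.g., injured vs. healthy runners); group 1 gets an assumed mean offset
X = rng.standard_normal((40, 30))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0

# dimensionality reduction (feature extraction): top-2 principal components
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[:2].T

# learning algorithm (classification): nearest group centroid in PC space
centroids = np.array([scores[y == k].mean(axis=0) for k in (0, 1)])
pred = ((scores[:, None, :] - centroids) ** 2).sum(axis=-1).argmin(axis=1)
accuracy = (pred == y).mean()
```

    A real analysis would cross-validate on held-out subjects rather than scoring on the training set as this toy example does.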

  3. Proceedings – Mathematical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Author Affiliations: Guangyue Huang(1,2), Bingqing Ma(1,2). (1) College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, People's Republic of China; (2) Henan Engineering Laboratory for Big Data Statistical Analysis and Optimal Control, Xinxiang 453007, People's Republic of China ...

  4. International Conference on Data Science & Social Research

    CERN Document Server

    Amaturo, Enrica; Grassia, Maria; Aragona, Biagio; Marino, Marina

    2017-01-01

    This edited volume lays the groundwork for Social Data Science, addressing epistemological issues, methods, technologies, software and applications of data science in the social sciences. It presents data science techniques for the collection, analysis and use of both online and offline new (big) data in social research and related applications. Among others, the individual contributions cover topics like social media, learning analytics, clustering, statistical literacy, recurrence analysis and network analysis. Data science is a multidisciplinary approach based mainly on the methods of statistics and computer science, and its aim is to develop appropriate methodologies for forecasting and decision-making in response to an increasingly complex reality often characterized by large amounts of data (big data) of various types (numeric, ordinal and nominal variables, symbolic data, texts, images, data streams, multi-way data, social networks etc.) and from diverse sources. This book presents selected papers from...

  5. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  6. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
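
    The classical sequence alignment problem used in the case study can be made concrete with a minimal Needleman-Wunsch global alignment scorer (a textbook sketch with assumed match/mismatch/gap scores, not one of the HPC implementations surveyed in the paper):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score in O(len(a) * len(b)) time."""
    # prev holds the previous DP row; row 0 scores b's prefix against gaps
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, start=1):
        curr = [i * gap]  # column 0: a's prefix aligned against gaps only
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(nw_score("GATTACA", "GACTACA"))  # one mismatch: prints 5
```

    The quadratic memory of the full DP table is avoided here by keeping only two rows; it is this quadratic time cost, multiplied across millions of sequence pairs, that motivates the HPC platforms compared in the paper.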

  7. Artificial Intelligence techniques for big data analysis

    OpenAIRE

    Aditya Khatri

    2017-01-01

    During my stay in Salamanca (Spain), I was fortunate enough to participate in the BISITE Research Group of the University of Salamanca. The University of Salamanca is the oldest university in Spain and in 2018 it celebrates its 8th centenary. As a computer science researcher, I participated in one of the many international projects that the research group has active, especially in big data analysis using Artificial Intelligence (AI) techniques. AI is one of BISITE's main lines of rese...

  8. A Systematic Review of Security Mechanisms for Big Data in Health and New Alternatives for Hospitals

    Directory of Open Access Journals (Sweden)

    Sofiane Hamrioui

    2017-01-01

    Full Text Available Computer security is something that brings to mind the greatest developers and companies who wish to protect their data. Major steps forward are being taken via advances made in the security of technology. The main purpose of this paper is to provide a view of different mechanisms and algorithms used to ensure big data security and to theoretically put forward an improvement in the health-based environment using a proposed model as reference. A search was conducted for information from scientific databases such as Google Scholar, IEEE Xplore, Science Direct, Web of Science, and Scopus to find information related to security in big data. The search criteria used were "big data", "health", "cloud", and "security", with dates being confined to the period from 2008 to the present time. After analyzing the different solutions, two security alternatives are proposed combining different techniques analyzed in the state of the art, with a view to providing the existing information on big data over the cloud with maximum security for different hospitals located in the province of Valladolid, Spain. New mechanisms and algorithms help to create a more secure environment, although it is necessary to continue developing new and better ones to make things increasingly difficult for cybercriminals.

  9. A Systematic Review of Techniques and Sources of Big Data in the Healthcare Sector.

    Science.gov (United States)

    Alonso, Susel Góngora; de la Torre Díez, Isabel; Rodrigues, Joel J P C; Hamrioui, Sofiane; López-Coronado, Miguel

    2017-10-14

    The main objective of this paper is to present a review of existing research in the literature referring to Big Data sources and techniques in the health sector, and to identify which of these techniques are the most used in the prediction of chronic diseases. Academic databases and systems such as IEEE Xplore, Scopus, PubMed and Science Direct were searched, considering publication dates from 2006 until the present time. Several search criteria were established, such as 'techniques' OR 'sources' AND 'Big Data' AND 'medicine' OR 'health', 'techniques' AND 'Big Data' AND 'chronic diseases', etc., selecting papers considered of interest regarding their description of the techniques and sources of Big Data in healthcare. A total of 110 articles on techniques and sources of Big Data in health were found, of which only 32 were identified as relevant works. Many of the articles describe the Big Data platforms, sources and databases used, and identify the techniques most used in the prediction of chronic diseases. From the review of the analyzed research articles, it can be noticed that the sources and techniques of Big Data used in the health sector represent a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques in tasks such as identification of patients at risk of reentry, prevention of hospital infections or chronic diseases, and obtaining predictive models of quality.

  10. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions and the various questions related to big data. In this first part I will try to set out some matters concerning Big Data theory and

  11. The influence of the substrate on the adhesive strength of the micro-arc oxidation coating developed on TiNi shape memory alloy

    Science.gov (United States)

    Hsieh, Shy-Feng; Ou, Shih-Fu; Chou, Chia-Kai

    2017-01-01

    TiNi shape memory alloys (SMAs), used as long-term implant materials, have a disadvantage: Ni-ion release from the alloys may trigger allergies in the human body. Micro-arc oxidation has been utilized to modify the surface of the TiNi SMA to improve its corrosion resistance and biocompatibility. However, there are very few reports investigating the essential adhesive strength between the micro-arc oxidized film and the TiNi SMA. Two primary goals were attained by this study. First, Ti50Ni48.5Mo1.5 SMA having a phase transformation temperature (Af) less than body temperature and good shape recovery was prepared. Next, the Ti50Ni50 and Ti50Ni48.5Mo1.5 SMA surfaces were modified by micro-arc oxidation in phosphoric acid by applying relatively low voltages to maintain the adhesive strength. The results indicated that the pore size, film thickness, and P content increased with applied voltage. The micro-arc oxidized film, comprising Ti oxides, Ni oxide, and phosphate compounds, exhibited a glassy amorphous structure. The outermost surface of the micro-arc oxidized film contained a large amount of P (>12 at%) but only a trace of Ni. The adhesive strengths of the micro-arc oxidized films exceeded the requirements of ISO 13779. Furthermore, Mo addition into TiNi SMAs was found to be favorable for improving the adhesive strength of the micro-arc oxidized film.

  12. Waarop is sy voetstukke ingesink? - 'n Besinning oor die skepping en die big bang

    Directory of Open Access Journals (Sweden)

    L.C. Bezuidenhout

    1998-08-01

    Full Text Available Through the ages, the debate between theology and the natural sciences concerning the origin of the universe has been turbulent. Today the big bang theory is almost generally accepted in scientific circles. In this article the debate between theology and science is evaluated critically. The theological implications of the big bang theory are discussed, and the relevance of the cosmogony in Genesis 1 for a modern society is evaluated. Biblical models and scientific models of the birth of the cosmos do not have to be in conflict with each other.

  13. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  14. Big and broad social data and the sociological imagination: A collaborative response

    Directory of Open Access Journals (Sweden)

    William Housley

    2014-08-01

    Full Text Available In this paper, we reflect on the disciplinary contours of contemporary sociology, and social science more generally, in the age of 'big and broad' social data. Our aim is to suggest how sociology and the social sciences may respond to the challenges and opportunities presented by this 'data deluge' in ways that are innovative yet sensitive to the social and ethical life of data and methods. We begin by reviewing relevant contemporary methodological debates and consider how they relate to the emergence of big and broad social data as a product, reflexive artefact and organizational feature of emerging global digital society. We then explore the challenges and opportunities afforded to social science through the widespread adoption of a new generation of distributed, digital technologies and the gathering momentum of the open data movement, grounding our observations in the work of the Collaborative Online Social Media ObServatory (COSMOS) project. In conclusion, we argue that these challenges and opportunities motivate a renewed interest in the programme for a 'public sociology', characterized by the co-production of social scientific knowledge involving a broad range of actors and publics.

  15. Data mining and knowledge discovery for big data methodologies, challenge and opportunities

    CERN Document Server

    2014-01-01

    The field of data mining has made significant and far-reaching advances over the past three decades.  Because of its potential power for solving complex problems, data mining has been successfully applied to diverse areas such as business, engineering, social media, and biological science. Many of these applications search for patterns in complex structural information. In biomedicine for example, modeling complex biological systems requires linking knowledge across many levels of science, from genes to disease.  Further, the data characteristics of the problems have also grown from static to dynamic and spatiotemporal, complete to incomplete, and centralized to distributed, and grow in their scope and size (this is known as big data). The effective integration of big data for decision-making also requires privacy preservation. The contributions to this monograph summarize the advances of data mining in the respective fields. This volume consists of nine chapters that address subjects ranging from mining da...

  16. Big Data and HPC: A Happy Marriage

    KAUST Repository

    Mehmood, Rashid

    2016-01-25

    International Data Corporation (IDC) defines Big Data technologies as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data produced every day, by enabling high velocity capture, discovery, and/or analysis”. High Performance Computing (HPC) most generally refers to “the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business”. Big data platforms are built primarily considering the economics and capacity of the system for dealing with the 4V characteristics of data. HPC traditionally has been more focussed on the speed of digesting (computing) the data. For these reasons, the two domains (HPC and Big Data) have developed their own paradigms and technologies. However, recently, these two have grown fond of each other. HPC technologies are needed by Big Data to deal with the ever increasing Vs of data in order to forecast and extract insights from existing and new domains, faster, and with greater accuracy. Increasingly more data is being produced by scientific experiments from areas such as bioscience, physics, and climate, and therefore, HPC needs to adopt data-driven paradigms. Moreover, there are synergies between them with unimaginable potential for developing new computing paradigms, solving long-standing grand challenges, and making new explorations and discoveries. Therefore, they must get married to each other. In this talk, we will trace the HPC and big data landscapes through time including their respective technologies, paradigms and major applications areas. Subsequently, we will present the factors that are driving the convergence of the two technologies, the synergies between them, as well as the benefits of their convergence to the biosciences field. The opportunities and challenges of the

  17. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  18. Soil biogeochemistry in the age of big data

    Science.gov (United States)

    Cécillon, Lauric; Barré, Pierre; Coissac, Eric; Plante, Alain; Rasse, Daniel

    2015-04-01

    already been made thanks to meta-analysis, chemometrics, machine-learning systems and bioinformatics. Some techniques, like structural equation modeling, eventually propose to explore causalities, opening a way towards a mechanistic understanding of soil big data rather than simple correlations. We claim that data science should be fully integrated into soil biogeochemists' basic education schemes. We expect the blooming of a new generation of soil biogeochemists highly skilled in manipulating big data. Will big data represent a net gain for soil biogeochemistry? Increasing the amount of data will increase the associated biases, which may be further exacerbated by the increasing distance between data manipulators, soil sampling and data acquisition. Integrating data science into soil biogeochemistry should thus not be done at the expense of pedology and metrology. We further expect that the more data there are, the more spurious correlations will appear, leading to possible misinterpretation of the data. Finally, big data on soil characteristics and processes will always need to be confronted with biogeochemical theories and socio-economic knowledge to be useful. Big data could revolutionize soil biogeochemistry, fostering new scientific and business models around the conservation of the soil natural capital, but our community should go into this new era with clear-sightedness and discernment.

  19. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  20. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  1. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence for an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second after the big bang), and the fate of the Universe are all discussed. (U.K.)

  2. Data Science as an Innovation Challenge: From Big Data to Value Proposition

    Directory of Open Access Journals (Sweden)

    Victoria Kayser

    2018-03-01

    Full Text Available Analyzing “big data” holds huge potential for generating business value. The ongoing advancement of tools and technology over recent years has created a new ecosystem full of opportunities for data-driven innovation. However, as the amount of available data rises to new heights, so too does complexity. Organizations are challenged to create the right contexts, by shaping interfaces and processes, and by asking the right questions to guide the data analysis. Lifting the innovation potential requires teaming and focus to efficiently assign available resources to the most promising initiatives. With reference to the innovation process, this article will concentrate on establishing a process for analytics projects from first ideas to realization (in most cases: a running application. The question we tackle is: what can the practical discourse on big data and analytics learn from innovation management? The insights presented in this article are built on our practical experiences in working with various clients. We will classify analytics projects as well as discuss common innovation barriers along this process.

  3. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  4. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  5. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  6. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unreached. Big data is generally characterized by three factors: volume, velocity and variety. These three factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  7. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  8. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... Email: arkady@ll.mit.edu. [Front matter and table of contents omitted.] Chapter 1, Cryptography for Big Data Security, 1.1 Introduction: With the amount

  9. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  10. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution...... and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim...... that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data....

  11. Infectious Disease Surveillance in the Big Data Era: Towards Faster and Locally Relevant Systems

    Science.gov (United States)

    Simonsen, Lone; Gog, Julia R.; Olson, Don; Viboud, Cécile

    2016-01-01

    While big data have proven immensely useful in fields such as marketing and earth sciences, public health is still relying on more traditional surveillance systems and awaiting the fruits of a big data revolution. A new generation of big data surveillance systems is needed to achieve rapid, flexible, and local tracking of infectious diseases, especially for emerging pathogens. In this opinion piece, we reflect on the long and distinguished history of disease surveillance and discuss recent developments related to use of big data. We start with a brief review of traditional systems relying on clinical and laboratory reports. We then examine how large-volume medical claims data can, with great spatiotemporal resolution, help elucidate local disease patterns. Finally, we review efforts to develop surveillance systems based on digital and social data streams, including the recent rise and fall of Google Flu Trends. We conclude by advocating for increased use of hybrid systems combining information from traditional surveillance and big data sources, which seems the most promising option moving forward. Throughout the article, we use influenza as an exemplar of an emerging and reemerging infection which has traditionally been considered a model system for surveillance and modeling. PMID:28830112

  12. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

The amount of data at the global level has grown exponentially. Along with this phenomenon comes the need for new units of measure such as the exabyte, zettabyte, and yottabyte, the last being the largest unit for measuring amounts of data. This growth creates a situation in which the classic systems for the collection, storage, processing, and visualization of data are losing the battle with the large volume, speed, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. Moreover, Big Data is recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. The paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.
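The byte-scale units mentioned above can be made concrete with a short sketch (illustrative only; the function name `humanize` is invented here), stepping through the decimal (SI) prefixes from bytes up to yottabytes:

```python
# SI byte units from bytes up to yottabytes, as mentioned in the abstract
UNITS = ["B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def humanize(n_bytes: float) -> str:
    """Render a byte count using the largest suitable decimal (SI) unit."""
    for unit in UNITS:
        # Stop once the value drops below 1000, or we run out of units
        if n_bytes < 1000 or unit == UNITS[-1]:
            return f"{n_bytes:.1f} {unit}"
        n_bytes /= 1000

print(humanize(3.5e21))  # prints "3.5 ZB"
```

Each step divides by 1000 (decimal prefixes), so 3.5 x 10^21 bytes comes out as 3.5 zettabytes.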

  13. Identifying Episodes of Earth Science Phenomena Using a Big-Data Technology

    Science.gov (United States)

    Kuo, Kwo-Sen; Oloso, Amidu; Rushing, John; Lin, Amy; Fekete, Gyorgy; Ramachandran, Rahul; Clune, Thomas; Dunny, Daniel

    2014-01-01

's intricate dynamics, we are continuously discovering novel ES phenomena. We generally gain understanding of a given phenomenon by observing and studying individual events. This process usually begins by identifying the occurrences of these events. Once representative events are identified or found, we must locate associated observed or simulated data prior to commencing analysis and concerted studies of the phenomenon. Knowledge concerning the phenomenon can accumulate only after analysis has started. However, as mentioned previously, comprehensive records only exist for a very limited set of high-impact phenomena; aside from these, finding events and locating associated data currently may take a prohibitive amount of time and effort on the part of an individual investigator. The reason for the lack of comprehensive records for most ES phenomena is mainly the perception that they do not pose an immediate and/or severe threat to life and property. Thus they are not consistently tracked, monitored, and catalogued. Many phenomena even lack precise and/or commonly accepted criteria for definitions. Moreover, various Earth Science observations and data have accumulated to a previously unfathomable volume; the NASA Earth Observing System Data Information System (EOSDIS) alone archives several petabytes (PB) of satellite remote sensing data, which are steadily increasing. All of these factors contribute to the difficulty of methodically identifying events corresponding to a given phenomenon and significantly impede systematic investigations. We have not only envisioned AES as an environment for identifying custom-defined events but also aspired for it to be an interactive environment with quick turnaround time for revisions of query criteria and results, as well as a collaborative environment where geographically distributed experts may work together on the same phenomena. A Big Data technology is thus required for the realization of such a system.
In the following, we first

  14. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

Full Text Available Big data refers to data beyond the storage capacity and processing power of traditional systems. The term is used for data sets so large or complex that traditional tools cannot handle them. The size of big data is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes, as the amount of data produced by people, for example on social networking sites, grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It encompasses the explosive growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. A big data environment is used to grab, organize and resolve the various types of data. In this paper we describe applications, problems and tools of big data, and give an overview of big data.

  15. Advancing Alzheimer's research: A review of big data promises.

    Science.gov (United States)

    Zhang, Rui; Simon, Gyorgy; Yu, Fang

    2017-10-01

    To review the current state of science using big data to advance Alzheimer's disease (AD) research and practice. In particular, we analyzed the types of research foci addressed, corresponding methods employed and study findings reported using big data in AD. Systematic review was conducted for articles published in PubMed from January 1, 2010 through December 31, 2015. Keywords with AD and big data analytics were used for literature retrieval. Articles were reviewed and included if they met the eligibility criteria. Thirty-eight articles were included in this review. They can be categorized into seven research foci: diagnosing AD or mild cognitive impairment (MCI) (n=10), predicting MCI to AD conversion (n=13), stratifying risks for AD (n=5), mining the literature for knowledge discovery (n=4), predicting AD progression (n=2), describing clinical care for persons with AD (n=3), and understanding the relationship between cognition and AD (n=3). The most commonly used datasets are AD Neuroimaging Initiative (ADNI) (n=16), electronic health records (EHR) (n=11), MEDLINE (n=3), and other research datasets (n=8). Logistic regression (n=9) and support vector machine (n=8) are the most used methods for data analysis. Big data are increasingly used to address AD-related research questions. While existing research datasets are frequently used, other datasets such as EHR data provide a unique, yet under-utilized opportunity for advancing AD research. Copyright © 2017 Elsevier B.V. All rights reserved.
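Since logistic regression is reported as the most used analysis method in the reviewed studies, a minimal from-scratch sketch may help make it concrete (synthetic two-feature data standing in for, say, a binary MCI-vs-AD classification task; this is an illustration under invented data, not the reviewed authors' pipeline):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic stand-in for a two-feature diagnostic dataset (illustrative only):
# class 1 is shifted by +1.5 in both features
X, y = [], []
for _ in range(200):
    label = 1 if random.random() < 0.5 else 0
    shift = 1.5 if label else 0.0
    X.append([random.gauss(shift, 1.0), random.gauss(shift, 1.0)])
    y.append(label)

# Logistic regression fitted by batch gradient descent
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w[0] -= lr * gw[0] / len(X)
    w[1] -= lr * gw[1] / len(X)
    b -= lr * gb / len(X)

# Training accuracy at the usual 0.5 decision threshold
acc = sum(
    (sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) > 0.5) == (yi == 1)
    for xi, yi in zip(X, y)
) / len(X)
print(f"training accuracy: {acc:.2f}")
```

The gradient update is the standard maximum-likelihood one for the logistic model; real studies would of course use held-out validation rather than training accuracy.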

  16. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  17. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  18. Addressing the Big-Earth-Data Variety Challenge with the Hierarchical Triangular Mesh

    Science.gov (United States)

    Rilee, Michael L.; Kuo, Kwo-Sen; Clune, Thomas; Oloso, Amidu; Brown, Paul G.; Yu, Hongfeng

    2016-01-01

    We have implemented an updated Hierarchical Triangular Mesh (HTM) as the basis for a unified data model and an indexing scheme for geoscience data to address the variety challenge of Big Earth Data. We observe that, in the absence of variety, the volume challenge of Big Data is relatively easily addressable with parallel processing. The more important challenge in achieving optimal value with a Big Data solution for Earth Science (ES) data analysis, however, is being able to achieve good scalability with variety. With HTM unifying at least the three popular data models, i.e. Grid, Swath, and Point, used by current ES data products, data preparation time for integrative analysis of diverse datasets can be drastically reduced and better variety scaling can be achieved. In addition, since HTM is also an indexing scheme, when it is used to index all ES datasets, data placement alignment (or co-location) on the shared nothing architecture, which most Big Data systems are based on, is guaranteed and better performance is ensured. Moreover, our updated HTM encoding turns most geospatial set operations into integer interval operations, gaining further performance advantages.
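The claim that the updated HTM encoding turns most geospatial set operations into integer interval operations can be sketched in miniature (a simplified illustration under assumed interval covers, not the authors' implementation): each region becomes a sorted list of disjoint integer index intervals, and set intersection reduces to a linear merge.

```python
def intersect(a, b):
    """Intersect two sorted, disjoint lists of closed integer intervals.

    Each region is represented as a list of (start, end) index ranges,
    as an HTM-style trixel-index cover might produce; intersection is
    then a single linear merge over both lists.
    """
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo <= hi:
            out.append((lo, hi))
        # Advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

# Two hypothetical index covers of overlapping regions
grid_cover = [(0, 9), (20, 35)]
swath_cover = [(5, 24), (30, 40)]
print(intersect(grid_cover, swath_cover))  # [(5, 9), (20, 24), (30, 35)]
```

Because both inputs are sorted, the merge runs in O(|a| + |b|) with no geometry computed at query time, which is the performance point the abstract makes.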

  19. Datatrust: Or, the political quest for numerical evidence and the epistemologies of Big Data

    DEFF Research Database (Denmark)

    Rieder, Gernot; Simon, Judith

    2016-01-01

    Recently, there has been renewed interest in so-called evidence-based policy making. Enticed by the grand promises of Big Data, public officials seem increasingly inclined to experiment with more data-driven forms of governance. But while the rise of Big Data and related consequences has been...... how the epistemological claims of Big Data science intersect with specific forms of trust, truth, and objectivity. We conclude by arguing that regulators' faith in numbers can be attributed to a distinct political culture, a representative democracy undermined by pervasive public distrust...... a major issue of concern across different disciplines, attempts to develop a better understanding of the phenomenon's historical foundations have been rare. This short commentary addresses this gap by situating the current push for numerical evidence within a broader socio-political context, demonstrating...

  20. Britain's big science in a bind

    CERN Multimedia

    Williams, N

    1996-01-01

The UK's 1994 science-administration reforms, which formed the Particle Physics and Astronomy Research Council (PPARC) to separate the two capital-intensive fields from other disciplines, have not been a success. Most of PPARC's funds go to CERN and ESA dues, leaving little for other resources.

  1. Change of texture, microdeformation and hardness in surface layer of TiNi alloy depending on the number of pulses of electron beam effects

    International Nuclear Information System (INIS)

    Meisner, L. L.; Meisner, S. N.; Markov, A. B.; Yakovlev, E. V.; Ozur, G. E.; Rotshtein, V. P.; Mironov, Yu. P.

    2015-01-01

This work comprises a study of the influence of the number of pulses of low-energy high-current electron beam (LEHCEB) exposure on the value and character of the distribution of residual elastic stresses, texturing effects, and the relationship between structural-phase states and the physical and mechanical properties of the modified surface layers of TiNi alloy. LEHCEB processing of the surface of TiNi samples was carried out using a RITM-SP installation [3]. The energy density of the electron beam was constant at E_s = 3.9 ± 0.5 J/cm²; the pulse duration was 2.8 ± 0.3 μs. The number of pulses in the series was varied (n = 2–128). It was shown that as a result of multiple LEHCEB processing of TiNi samples, a hierarchically organized multilayer structure is formed in the surface layer. A residual stress field of planar type is formed in the modified surface layer as follows: in the direction of the normal to the surface, the strain component ε⊥ < 0 (compressive strain), while in the direction parallel to the surface, the strain component ε|| > 0 (tensile strain). Texturing effects and the level of residual stresses after LEHCEB processing of TiNi samples at equal electron-beam energy density (∼3.8 J/cm²) depend on the number of pulses and increase with the rise of n > 10

  2. Science & Technology Review March/April 2008

    Energy Technology Data Exchange (ETDEWEB)

    Chinn, D J

    2008-01-22

    This month's issue has the following articles: (1) Science and Security in Sharp Focus--Commentary by William H. Goldstein; (2) Extending the Search for Extrasolar Planets--The Gemini Planet Imager will delve deep into the universe to identify planets that cannot be detected with current instrumentation; (3) Standardizing the Art of Electron-Beam Welding--The Laboratory's EBeam Profiler makes electron-beam welds consistent and improves quality control; (4) Molecular Building Blocks Made of Diamonds--Livermore physicists are exploring the electrical properties of diamondoids, tiny molecules of diamond; and (5) Animation Brings Science to Life--Animation helps scientists and engineers effectively communicate their ideas and research in a visually compelling way.

  3. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  4. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  5. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  6. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  7. NST and NST integration: nuclear science and technique and nano science and technique

    International Nuclear Information System (INIS)

    Zhao Yuliang; Chai Zhifang; Liu Yuanfang

    2008-01-01

Nuclear science is considered a big science and was the frontier of the 20th century; it developed many big scientific facilities and technique platforms (e.g., nuclear reactors, synchrotron radiation, accelerators, etc.). Nuclear Science and Technology (NST) provides us with many unique tools such as neutron beams, electron beams, gamma rays, alpha rays, beta rays, energetic particles, etc. These are efficient and essential probes for studying many technical and scientific issues in the fields of new materials, biological sciences, environmental sciences, life sciences, medical science, etc. Nano Science and Technology (NST) is a newly emerging multidisciplinary science and the frontier of the 21st century; it is expected to dominate the technological revolution in diverse aspects of our life. It involves diverse fields such as nanomaterials, nanobiological sciences, environmental nanotechnology, nanomedicine, etc. Nanotechnology was once considered a futuristic science with applications several decades in the future and beyond, but its rapid development has broken this prediction. For example, diverse types of manufactured nanomaterials or nanostructures are currently utilized in industrial products, semiconductors, electronics, stain-resistant clothing, ski wax, catalysts, and other commodity products such as food, sunscreens, cosmetics, and automobile parts, to improve the performance of previous functions or to create completely novel functions. They will also be increasingly utilized in medicines for purposes of clinical therapy, diagnosis, and drug delivery. In the talk, we will discuss the possibility of NST-NST integration: how to apply the unique probes of advanced radiochemical and nuclear techniques in nanoscience and nanotechnology. (authors)

  8. Science on the streets of the Big Apple.

    Science.gov (United States)

    Greene, Brian; Nurse, Paul

    2008-05-30

    A five-day festival of science takes place this week at venues across New York City. The festival features not only leading researchers from New York and beyond but also actors, writers, musicians, and choreographers in a series of multimedia programs designed to reveal science to the general public in exciting new ways.

  9. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  10. Biomedical Big Data Training Collaborative (BBDTC): An effort to bridge the talent gap in biomedical science and research.

    Science.gov (United States)

    Purawat, Shweta; Cowart, Charles; Amaro, Rommie E; Altintas, Ilkay

    2016-06-01

The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports Big Data skill training for biomedical scientists at all levels, and from varied backgrounds. The natural hierarchy of courses allows them to be broken into and handled as modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new and different, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share them with individual users or the wider public. BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and VirtualBox virtualization technologies, and they are available as: (i) downloadable lightweight VirtualBox images providing a standardized software tool environment with software packages and test data on users' personal machines, and (ii) remotely accessible Amazon EC2 virtual machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open biomedical big data training courses with lecture contents, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can use at their own pace.
A four course biomedical big data series is

  11. Ocean Networks Canada's "Big Data" Initiative

    Science.gov (United States)

    Dewey, R. K.; Hoeberechts, M.; Moran, K.; Pirenne, B.; Owens, D.

    2013-12-01

    Ocean Networks Canada operates two large undersea observatories that collect, archive, and deliver data in real time over the Internet. These data contribute to our understanding of the complex changes taking place on our ocean planet. Ocean Networks Canada's VENUS was the world's first cabled seafloor observatory to enable researchers anywhere to connect in real time to undersea experiments and observations. Its NEPTUNE observatory is the largest cabled ocean observatory, spanning a wide range of ocean environments. Most recently, we installed a new small observatory in the Arctic. Together, these observatories deliver "Big Data" across many disciplines in a cohesive manner using the Oceans 2.0 data management and archiving system that provides national and international users with open access to real-time and archived data while also supporting a collaborative work environment. Ocean Networks Canada operates these observatories to support science, innovation, and learning in four priority areas: study of the impact of climate change on the ocean; the exploration and understanding the unique life forms in the extreme environments of the deep ocean and below the seafloor; the exchange of heat, fluids, and gases that move throughout the ocean and atmosphere; and the dynamics of earthquakes, tsunamis, and undersea landslides. To date, the Ocean Networks Canada archive contains over 130 TB (collected over 7 years) and the current rate of data acquisition is ~50 TB per year. This data set is complex and diverse. Making these "Big Data" accessible and attractive to users is our priority. In this presentation, we share our experience as a "Big Data" institution where we deliver simple and multi-dimensional calibrated data cubes to a diverse pool of users. Ocean Networks Canada also conducts extensive user testing. Test results guide future tool design and development of "Big Data" products. We strive to bridge the gap between the raw, archived data and the needs and

  12. Tiny galaxies help unravel dark matter mystery

    CERN Multimedia

    O'Hanlon, Larry

    2007-01-01

    "The 70-year effort to unravel the mysteries of dark matter just got a big boost from some very puny galaxies. In the past few years, a score of dwarf galaxies have been discovered hanging about the fringes of the Milky Way. Now new measurements of the few stars in these dwarfs reveal them to be dark matter distilleries, with upwards of 1,000 times more dark than normal matter." (3 pages)

  13. Nanotechnology, Big things from a Tiny World: a Review

    OpenAIRE

    Debnath Bhattacharyya; Shashank Singh; Niraj Satnalika; Ankesh Khandelwal; Seung-Hwan Jeon

    2009-01-01

    The purpose of this paper is to look into the present aspects of “Nanotechnology”. It gives a brief description of what nanotechnology is and of its applications in various fields, viz. computing, medicine, food technology, robotics, solar cells, etc. It also deals with the future perspectives of nanotechnology and the risks in advanced nanotechnology.

  14. Nano-remediation: tiny particles cleaning up big environmental problems

    DEFF Research Database (Denmark)

    Grieger, Khara; Hjorth, Rune; Rice, Jacelyn

    2015-01-01

    Over the past decade, there has been an increased use of engineered nanomaterials (ENMs) in everything from consumer products to industrial manufacturing. Nanomaterials are substances which are less than 100 nanometres in size (a nanometre is one billionth of a metre). Although natural nanomateri...

  15. Nanoclay minerals and plastics: tiny particles deliver big impact

    CSIR Research Space (South Africa)

    Sinha Ray, S

    2015-10-01

    Full Text Available A polymer nanocomposite is an advanced plastic material where the incorporation of nanostructures such as clay minerals and other nanoparticles into the polymer has been achieved on the nano-level so that the material exhibits improvements in colour...

  16. Advanced statistical methods in data science

    CERN Document Server

    Chen, Jiahua; Lu, Xuewen; Yi, Grace; Yu, Hao

    2016-01-01

    This book gathers invited presentations from the 2nd Symposium of the ICSA-CANADA Chapter held at the University of Calgary from August 4-6, 2015. The aim of this Symposium was to promote advanced statistical methods in big-data sciences, to allow researchers to exchange ideas on statistics and data science, and to embrace the challenges and opportunities of statistics and data science in the modern world. It addresses diverse themes in advanced statistical analysis in big-data sciences, including methods for administrative data analysis, survival data analysis, missing data analysis, high-dimensional and genetic data analysis, longitudinal and functional data analysis, the design and analysis of studies with response-dependent and multi-phase designs, time series and robust statistics, and statistical inference based on likelihood, empirical likelihood and estimating functions. The editorial group selected 14 high-quality presentations from this successful symposium and invited the presenters to prepare a fu...

  17. Mechanical properties and related substructure of TiNi shape memory alloys

    International Nuclear Information System (INIS)

    Filip, P.; Kneissl, A.C.

    1995-01-01

    The mechanical properties of binary near equiatomic TiNi shape memory alloys were investigated after different types of mechanical and heat treatments. The changes of deformation behaviour are explained on the basis of substructure differences after work hardening. The ''elastic moduli'' of both the high-temperature phase B2 and the martensite B19' as well as the ''easy stage of deformation'' are dependent on the work hardening intensity and these changes are related to the mobility of B2/B19' interfaces. The martensite changes its morphology after work hardening. In contrast to a twinned martensite, typical for annealed alloys, the internally slipped martensite was detected after work hardening. (orig.)

  18. Precision Nutrition 4.0: A Big Data and Ethics Foresight Analysis--Convergence of Agrigenomics, Nutrigenomics, Nutriproteomics, and Nutrimetabolomics.

    Science.gov (United States)

    Özdemir, Vural; Kolker, Eugene

    2016-02-01

    Nutrition is central to sustenance of good health, not to mention its role as a cultural object that brings together or draws lines among societies. Undoubtedly, understanding the future paths of nutrition science in the current era of Big Data remains firmly on science, technology, and innovation strategy agendas around the world. Nutrigenomics, the confluence of nutrition science with genomics, brought about a new focus on and legitimacy for "variability science" (i.e., the study of mechanisms of person-to-person and population differences in response to food, and the ways in which food variably impacts the host, for example, nutrient-related disease outcomes). Societal expectations, both public and private, and claims over genomics-guided and individually-tailored precision diets continue to proliferate. While the prospects of nutrition science, and nutrigenomics in particular, are established, there is a need to integrate the efforts in four Big Data domains that are naturally allied--agrigenomics, nutrigenomics, nutriproteomics, and nutrimetabolomics--that address complementary variability questions pertaining to individual differences in response to food-related environmental exposures. The joint use of these four omics knowledge domains, coined as Precision Nutrition 4.0 here, has sadly not been realized to date, but the potentials for such integrated knowledge innovation are enormous. Future personalized nutrition practices would benefit from a seamless planning of life sciences funding, research, and practice agendas from "farm to clinic to supermarket to society," and from "genome to proteome to metabolome." Hence, this innovation foresight analysis explains the already existing potentials waiting to be realized, and suggests ways forward for innovation in both technology and ethics foresight frames on precision nutrition. We propose the creation of a new Precision Nutrition Evidence Barometer for periodic, independent, and ongoing retrieval, screening

  19. Tiny timekeepers witnessing high-rate exhumation processes.

    Science.gov (United States)

    Zhong, Xin; Moulas, Evangelos; Tajčmanová, Lucie

    2018-02-02

    Tectonic forces and surface erosion lead to the exhumation of rocks from the Earth's interior. Those rocks can be characterized by many variables including peak pressure and temperature, composition and exhumation duration. Among them, the duration of exhumation in different geological settings can vary by more than ten orders of magnitude (from hours to billions of years). Constraining the duration is critical and often challenging in geological studies, particularly for rapid magma ascent. Here, we show that the time information can be reconstructed using a simple combination of laser Raman spectroscopic data from mineral inclusions with mechanical solutions for viscous relaxation of the host. The application of our model to several representative geological settings yields best results for short events such as kimberlite magma ascent (less than ~4,500 hours) and a decompression lasting up to ~17 million years for high-pressure metamorphic rocks. This is the first precise time information obtained from direct microstructural observations applying a purely mechanical perspective. We show an unprecedented geological value of tiny mineral inclusions as timekeepers that contribute to a better understanding of the large-scale tectonic history and thus have significant implications for a new generation of geodynamic models.

  20. SCADA SYSTEM SIMULATION USING THE TINY TIGER 2 DEVELOPMENT BOARD

    Directory of Open Access Journals (Sweden)

    AGAPE C.P.

    2015-12-01

    Full Text Available This paper presents a new design for a surveillance and control system of a medium-voltage cell. The focus is on acquiring information about the consumer's state: the instantaneous current consumption, and the apparent power and voltage at the consumer. The proposed design is based on a Wilke Technology development board built around a Tiny-Tiger 2 multitasking microcontroller. This computer has 2 MByte or 4 MByte of Flash for programming, and 1 MByte of SRAM with a backup input for data. On the software side, we created a Delphi interface which communicates with the serial port on the development board. The interface collects information about the consumer and its voltage load capacity.
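    The acquisition step described above, reading the consumer's state over a serial link and unpacking current, voltage, and apparent power, can be sketched in plain Python. The `I=…;U=…;S=…` frame format and field names here are hypothetical stand-ins, not the board's actual protocol:

```python
def parse_frame(frame: str) -> dict:
    """Parse a telemetry frame such as 'I=2.35;U=231.0;S=542.85'
    into a dict of floats. Field names are illustrative:
    I = current (A), U = voltage (V), S = apparent power (VA)."""
    reading = {}
    for field in frame.strip().split(";"):
        key, _, value = field.partition("=")
        reading[key] = float(value)
    return reading

def apparent_power(reading: dict) -> float:
    """Cross-check the reported apparent power: S should equal U * I."""
    return reading["U"] * reading["I"]

reading = parse_frame("I=2.35;U=231.0;S=542.85")
assert abs(apparent_power(reading) - reading["S"]) < 1e-6
```

    A Delphi (or any other) front end would read such frames from the serial port in a loop and feed the parsed values into the display and logging layers.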

  1. Small decisions with big impact on data analytics

    Directory of Open Access Journals (Sweden)

    Jana Diesner

    2015-11-01

    Full Text Available Big social data have enabled new opportunities for evaluating the applicability of social science theories that were formulated decades ago and were often based on small- to medium-sized samples. Big Data coupled with powerful computing has the potential to replace the statistical practice of sampling and estimating effects by measuring phenomena based on full populations. Preparing these data for analysis and conducting analytics involves a plethora of decisions, some of which are already embedded in previously collected data and built tools. These decisions refer to the recording, indexing and representation of data and the settings for analysis methods. While these choices can have tremendous impact on research outcomes, they are not often obvious, not considered or not being made explicit. Consequently, our awareness and understanding of the impact of these decisions on analysis results and derived implications are highly underdeveloped. This might be attributable to occasional high levels of over-confidence in computational solutions as well as the possible yet questionable assumption that Big Data can wash out minor data quality issues, among other reasons. This article provides examples for how to address this issue. It argues that checking, ensuring and validating the quality of big social data and related auxiliary material is a key ingredient for empowering users to gain reliable insights from their work. Scrutinizing data for accuracy issues, systematically fixing them and diligently documenting these processes can have another positive side effect: Closely interacting with the data, thereby forcing ourselves to understand their idiosyncrasies and patterns, can help us to move from being able to precisely model and formally describe effects in society to also understand and explain them.

  2. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  3. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview...... of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big...... data....

  4. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  5. Science for Diplomacy, Diplomacy for Science

    Science.gov (United States)

    Colglazier, E. Wiliam

    2015-04-01

    I was a strong proponent of ``science diplomacy'' when I became Science and Technology Adviser to the Secretary of State in 2011. I thought I knew a lot about the subject after being engaged for four decades on international S&T policy issues and having had distinguished scientists as mentors who spent much of their time using science as a tool for building better relations between countries and working to make the world more peaceful, prosperous, and secure. I learned a lot from my three years inside the State Department, including great appreciation and respect for the real diplomats who work to defuse conflicts and avoid wars. But I also learned a lot about science diplomacy, both using science to advance diplomacy and diplomacy to advance science. My talk will focus on the five big things that I learned, and from that the one thing where I am focusing my energies to try to make a difference now that I am a private citizen again.

  6. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  7. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  8. Big Bang Day: 5 Particles - 3. The Anti-particle

    CERN Multimedia

    Franck Close

    2008-01-01

    Simon Singh looks at the stories behind the discovery of 5 of the universe's most significant subatomic particles: the Electron, the Quark, the Anti-particle, the Neutrino and the "next particle". 3. The Anti-particle. It appears to be the stuff of science fiction. Associated with every elementary particle is an antiparticle which has the same mass and opposite charge. Should the two meet and combine, the result is annihilation - and a flash of light. Thanks to mysterious processes that occurred after the Big Bang there are a vastly greater number of particles than anti-particles. So how could their elusive existence be proved? At CERN particle physicists are crashing together subatomic particles at incredibly high speeds to create antimatter, which they hope will finally reveal what happened at the precise moment of the Big Bang to create the repertoire of elementary particles and antiparticles in existence today.

  9. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  10. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  11. The Scope of Big Data in One Medicine: Unprecedented Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Molly E. McCue

    2017-11-01

    Full Text Available Advances in high-throughput molecular biology and electronic health records (EHR), coupled with increasing computer capabilities, have resulted in an increased interest in the use of big data in health care. Big data require collection and analysis of data at an unprecedented scale and represent a paradigm shift in health care, offering (1 the capacity to generate new knowledge more quickly than traditional scientific approaches; (2 unbiased collection and analysis of data; and (3 a holistic understanding of biology and pathophysiology. Big data promises more personalized and precision medicine for patients with improved accuracy and earlier diagnosis, and therapy tailored to an individual’s unique combination of genes, environmental risk, and precise disease phenotype. This promise comes from data collected from numerous sources, ranging from molecules to cells, to tissues, to individuals and populations—and the integration of these data into networks that improve understanding of health and disease. Big data-driven science should play a role in propelling comparative medicine and “one medicine” (i.e., the shared physiology, pathophysiology, and disease risk factors across species forward. Merging of data from EHR across institutions will give access to patient data on a scale previously unimaginable, allowing for precise phenotype definition and objective evaluation of risk factors and response to therapy. High-throughput molecular data will give insight into previously unexplored molecular pathophysiology and disease etiology. Investigation and integration of big data from a variety of sources will result in stronger parallels drawn at the molecular level between human and animal disease, allow for predictive modeling of infectious disease and identification of key areas of intervention, and facilitate step-changes in our understanding of disease that can make a substantial impact on animal and human health. However, the use of big data

  12. The Scope of Big Data in One Medicine: Unprecedented Opportunities and Challenges.

    Science.gov (United States)

    McCue, Molly E; McCoy, Annette M

    2017-01-01

    Advances in high-throughput molecular biology and electronic health records (EHR), coupled with increasing computer capabilities, have resulted in an increased interest in the use of big data in health care. Big data require collection and analysis of data at an unprecedented scale and represent a paradigm shift in health care, offering (1) the capacity to generate new knowledge more quickly than traditional scientific approaches; (2) unbiased collection and analysis of data; and (3) a holistic understanding of biology and pathophysiology. Big data promises more personalized and precision medicine for patients with improved accuracy and earlier diagnosis, and therapy tailored to an individual's unique combination of genes, environmental risk, and precise disease phenotype. This promise comes from data collected from numerous sources, ranging from molecules to cells, to tissues, to individuals and populations-and the integration of these data into networks that improve understanding of health and disease. Big data-driven science should play a role in propelling comparative medicine and "one medicine" (i.e., the shared physiology, pathophysiology, and disease risk factors across species) forward. Merging of data from EHR across institutions will give access to patient data on a scale previously unimaginable, allowing for precise phenotype definition and objective evaluation of risk factors and response to therapy. High-throughput molecular data will give insight into previously unexplored molecular pathophysiology and disease etiology. Investigation and integration of big data from a variety of sources will result in stronger parallels drawn at the molecular level between human and animal disease, allow for predictive modeling of infectious disease and identification of key areas of intervention, and facilitate step-changes in our understanding of disease that can make a substantial impact on animal and human health. However, the use of big data comes with significant

  13. Effect of hydrogen on transformation characteristics and deformation behavior in a Ti-Ni shape memory alloy

    International Nuclear Information System (INIS)

    Hoshiya, Taiji; Ando, Hiroei; Den, Shoji; Katsuta, Hiroshi.

    1992-01-01

    Transformation characteristics and deformation behavior of hydrogenated Ti-50.5 at% Ni alloys, which were occluded in a low pressure range of hydrogen between 1.1 and 78.5 kPa, have been studied by electrical resistivity measurement, tensile test, X-ray diffraction analysis and microstructural observation. The Ms temperature of the Ti-Ni alloys decreased with an increase in hydrogen content. This corresponds to the stabilization of the parent phase during cooling, which was confirmed by X-ray diffraction: the suppression effect of hydrogen takes place on the martensitic transformation. Critical stress for slip deformation of hydrogenated Ti-Ni alloys changed with hydrogen content and thus hydrogen had a major influence on deformation behavior of those alloys. With hydrogen contents above 0.032 mol%, hardening was distinguished from softening which was pronounced in the contents from 0 to 0.032 mol% H. Hydrides were formed in hydrogen contents over 1.9 mol%. The hydride formation results in the reorientation in variants of the R phase and increase in the lattice strains of the parent phase. (author)

  14. Nonlinear science as a fluctuating research frontier

    International Nuclear Information System (INIS)

    He Jihuan

    2009-01-01

    Nonlinear science has had quite a triumph in all conceivable applications in science and technology, especially in high energy physics and nanotechnology. COBE, which was awarded the physics Nobel Prize in 2006, is probably more closely related to nonlinear science than to the Big Bang theory. Five categories of nonlinear subjects at the research frontier are pointed out.

  15. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  16. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. It analyzes existing definitions of the “big data” term, proposes and describes the elements of a generalized formal model of big data, and examines the peculiarities of applying the proposed model components. It also describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  17. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view are transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
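    The key design point, an index that lets the browser fetch only the slice of data covering the current view, can be illustrated with a toy one-dimensional interval index in Python. This is a deliberate simplification: the real BigWig format stores an on-disk R tree plus precomputed zoom levels, none of which appear here:

```python
import bisect

class IntervalIndex:
    """Toy index over sorted, non-overlapping (start, end, value) records.
    Mimics the BigWig idea: a query touches only the records in view,
    never the whole dataset."""

    def __init__(self, records):
        self.records = sorted(records)            # (start, end, value) tuples
        self.starts = [r[0] for r in self.records]

    def query(self, start, end):
        """Return the records overlapping the half-open view [start, end)."""
        # Jump straight to the first record that could overlap the view.
        i = bisect.bisect_right(self.starts, start) - 1
        hits = []
        for rec in self.records[max(i, 0):]:
            if rec[0] >= end:
                break                             # past the view: stop early
            if rec[1] > start:
                hits.append(rec)
        return hits

idx = IntervalIndex([(0, 100, 1.0), (100, 200, 2.5), (200, 300, 0.5)])
view = idx.query(150, 250)   # only the records under the current "browser view"
```

    In the real format the same binary-search step happens against the R-tree index, and only the matching compressed blocks are requested over HTTP, which is what makes remote browsing of multi-gigabyte files responsive.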

  18. Big Bayou Creek and Little Bayou Creek Watershed Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Kszos, L.A.; Peterson, M.J.; Ryon; Smith, J.G.

    1999-03-01

    Biological monitoring of Little Bayou and Big Bayou creeks, which border the Paducah Site, has been conducted since 1987. Biological monitoring was conducted by the University of Kentucky from 1987 to 1991 and by staff of the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) from 1991 through March 1999. In March 1998, renewed Kentucky Pollutant Discharge Elimination System (KPDES) permits were issued to the US Department of Energy (DOE) and US Enrichment Corporation. The renewed DOE permit requires that a watershed monitoring program be developed for the Paducah Site within 90 days of the effective date of the renewed permit. This plan outlines the sampling and analysis that will be conducted for the watershed monitoring program. The objectives of the watershed monitoring are to (1) determine whether discharges from the Paducah Site and the Solid Waste Management Units (SWMUs) associated with the Paducah Site are adversely affecting instream fauna, (2) assess the ecological health of Little Bayou and Big Bayou creeks, (3) assess the degree to which abatement actions ecologically benefit Big Bayou Creek and Little Bayou Creek, (4) provide guidance for remediation, (5) provide an evaluation of changes in potential human health concerns, and (6) provide data which could be used to assess the impact of inadvertent spills or fish kills. The cleanup is expected to result in these watersheds [Big Bayou and Little Bayou creeks] achieving compliance with the applicable water quality criteria.

  19. Rapid thermal annealing of Ti-rich TiNi thin films: A new approach to fabricate patterned shape memory thin films

    International Nuclear Information System (INIS)

    Motemani, Y.; Tan, M.J.; White, T.J.; Huang, W.M.

    2011-01-01

    This paper reports the rapid thermal annealing (RTA) of Ti-rich TiNi thin films, synthesized by the co-sputtering of TiNi and Ti targets. Long-range order of the aperiodic alloy could be achieved in a few seconds at an optimum temperature of 773 K. Longer annealing (773 K/240 s) transformed the film to a poorly ordered vitreous phase, suggesting a novel route to solid-state amorphization. Rietveld refinement analyses showed significant differences in the structural parameters of films crystallized by rapid versus conventional thermal annealing. The dependence of the elastic modulus on the valence electron density (VED) of the crystallized films was also studied. It is suggested that RTA provides a new approach to fabricate patterned shape memory thin films.

  20. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples, from Nate Silver to Copernicus and Apple to BlackBerry, to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  1. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  2. "All that Matter ... in One Big Bang ...", & Other Cosmological Singularities

    Science.gov (United States)

    Elizalde, Emilio

    2018-02-01

    The first part of this paper contains a brief description of the beginnings of modern cosmology, which, the author will argue, was most likely born in the Year 1912. Some of the pieces of evidence presented here have emerged from recent research in the history of science, and are not usually shared with general audiences in popular science books. In particular, the issue of the correct formulation of the original Big Bang concept, according to the precise words of Fred Hoyle, is discussed. Too often, this point is explained very deficiently (when not outright misleadingly) in most of the available generalist literature. Other frequent uses of the same words, Big Bang, to name the initial singularity of the cosmos and also whole cosmological models, are then addressed as evolutions of its original meaning. Quantum and inflationary additions to the celebrated singularity theorems by Penrose, Geroch, Hawking and others led to subsequent results by Borde, Guth and Vilenkin. And corresponding corrections to the Einstein field equations have originated, in particular, $R^2$, $f(R)$, and scalar-tensor gravities, giving rise to a plethora of new singularities. For completeness, an updated table with a classification of these singularities is given.

  3. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time requested for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we show that the Statistical Drake Equation (namely, the statistical extension of the classical Drake Equation typical of SETI) can be regarded as the “frozen in time” part of GBM. This makes SETI a subset of our Big History Theory based on GBMs: just as the GBM is the “movie” unfolding in time, so the Statistical Drake Equation is its “still picture”, static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50 million living species or more exist, each
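The GBM at the heart of this model is easy to simulate with its standard exact update rule. The sketch below is generic, with toy parameters (`mu`, `sigma`, `x0`) not taken from the paper; it only illustrates the kind of exponential-mean, lognormally-fluctuating growth the author applies to the number of living species:

```python
import math
import random

def gbm_path(n_steps, dt, mu, sigma, x0, seed=42):
    """Simulate one Geometric Brownian Motion path using the exact
    discretization X_{t+dt} = X_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z),
    with Z a standard normal draw."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        growth = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        path.append(path[-1] * math.exp(growth))
    return path

# Toy numbers, not fitted to any Big History data:
path = gbm_path(n_steps=1000, dt=0.01, mu=0.5, sigma=0.2, x0=1.0)
print(len(path), path[0])
```

A GBM path stays strictly positive and has exponentially growing mean, which is why it is a natural candidate for counts of species or civilizations.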

  4. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Hauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  5. Quest for Value in Big Earth Data

    Science.gov (United States)

    Kuo, Kwo-Sen; Oloso, Amidu O.; Rilee, Mike L.; Doan, Khoa; Clune, Thomas L.; Yu, Hongfeng

    2017-04-01

    Among all the V's of Big Data challenges, such as Volume, Variety, Velocity, Veracity, etc., we believe Value is the ultimate determinant, because a system delivering better value has a competitive edge over others. Although it is not straightforward to assess the value of scientific endeavors, we believe the ratio of scientific productivity increase to investment is a reasonable measure. Our research in Big Data approaches to data-intensive analysis for Earth Science has yielded some insights, as well as evidence, as to how optimal value might be attained. The first insight is that we should avoid, as much as possible, moving data through connections with relatively low bandwidth. That is, we recognize that moving data is expensive, albeit inevitable. Data must at least be moved from the storage device into computer main memory and then to CPU registers for computation. When data must be moved, it is better to move them via relatively high-bandwidth connections and avoid low-bandwidth ones. For this reason, a technology that can best exploit data locality will have an advantage over others. Data locality is easy to achieve and exploit with only one dataset. With multiple datasets, data colocation becomes important in addition to data locality. However, datasets can only be co-located for certain types of analyses; it is impossible for them to be co-located for all analyses. Therefore, our second insight is that we need to co-locate the datasets for the most commonly used analyses. In Earth Science, we believe the most common analysis requirement is "spatiotemporal coincidence". For example, when we analyze precipitation systems, we often would like to know the environment conditions "where and when" (i.e. at the same location and time) there is precipitation. This "where and when" indicates the "spatiotemporal coincidence" requirement. Thus, an associated insight is that datasets need to be partitioned per the physical dimensions, i.e. space
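The "spatiotemporal coincidence" partitioning idea can be sketched as a chunk-key function: records from different datasets hash to the same partition whenever they fall in the same spatial cell and time bin, so a coincidence analysis never crosses partitions. The cell and bin sizes below (`cell_deg`, `t_bin_hours`) are arbitrary illustrative choices, not the authors' actual scheme:

```python
def partition_key(lat, lon, t_hours, cell_deg=1.0, t_bin_hours=6):
    """Assign a (space, time) chunk so that records from different
    datasets that coincide in space and time land in the same partition.
    Floor division keeps negative longitudes/latitudes consistent."""
    return (int(lat // cell_deg),
            int(lon // cell_deg),
            int(t_hours // t_bin_hours))

# Two observations from different instruments, near the same storm:
a = partition_key(35.2, -97.4, 13.0)
b = partition_key(35.9, -97.1, 12.5)
print(a, b)  # same 1-degree cell and 6-hour bin -> same partition
```

With datasets chunked this way, the common "where and when" join becomes a per-partition local operation, which is exactly the data-movement saving the abstract argues for.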

  6. Researchers' Night: science at the shops

    CERN Multimedia

    Corinne Pralavorio

    2015-01-01

    On 25 September, as part of European Researchers’ Night, CERN and POPScience joined forces to welcome the public at the Balexert shopping centre in Geneva. The Bulletin presents its gallery of photographs from the exciting and educational event.   Science through comic strips, games, cinema and television: POPScience approaches scientific questions through popular culture, with great success! Around 500 children attended the sessions for schools at Balexert's multiplex cinema, and 600 spectators flocked to the public screenings.  Using the big screen, scientists, directors and authors were on hand to disentangle truth from untruths and science from science fiction. The guests, some of whom appeared in person and others via video link, included Jorge Cham, author of PhD Comics and the spin-off film; David Saltzberg, physicist at CMS and scientific consultant for the television series The Big Bang Theory; Kip Thorne, scientific consultant for the film Interstellar; Lawrence ...

  7. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  8. Potential Solution of a Hardware-Software System V-Cluster for Big Data Analysis

    Science.gov (United States)

    Morra, G.; Tufo, H.; Yuen, D. A.; Brown, J.; Zihao, S.

    2017-12-01

    Today it cannot be denied that the Big Data revolution is taking place and is replacing HPC and numerical simulation as the main driver in society. Outside the immediate scientific arena, the Big Data market encompasses much more than the AGU. There are many sectors in society that Big Data can ably serve, such as government finances, hospitals, tourism, and, last but not least, scientific and engineering problems. In many countries, education has not kept pace with the demand from students outside computer science to get into Big Data science. Ultimate Vision (UV) in Beijing attempts to address this need in China by focusing part of our energy on education and training outside the immediate university environment. UV plans a strategy to maximize profits from the outset; we will therefore focus on growing markets such as provincial governments, medical sectors, mass media, and education, and will not address areas such as performance for scientific collaborations (for example, seismic networks), where the market share and profits are small by comparison. We have developed a software-hardware system, called V-Cluster, built with the latest NVIDIA GPUs and Intel CPUs with ample amounts of RAM (over a couple of terabytes) and local storage. We have put in an internal network with high bandwidth (over 100 Gbit/s), and each node of V-Cluster can run at around 40 Tflops. Our system can scale linearly with the number of nodes. Our main strength in data analytics is the use of the graph-computing paradigm for optimizing the transfer rate in collaborative efforts. We focus on training and education with our clients in order to gain experience in learning about new applications. We will present the philosophy of this second generation of our data-analytic system, whose costs fall far below those offered elsewhere.

  9. Big Data is not only about data: The two cultures of modelling

    Directory of Open Access Journals (Sweden)

    Giuseppe Alessandro Veltri

    2017-04-01

    Full Text Available The contribution of Big Data to social science is not limited to data availability; it includes the introduction of analytical approaches that have been developed in computer science, and in particular in machine learning. This brings about a new ‘culture’ of statistical modelling that bears considerable potential for the social scientist. This argument is illustrated with a brief discussion of model-based recursive partitioning, which can bridge theory-driven and data-driven approaches. Such a method is an example of how this new approach can help revise models that work for the full dataset: it can be used for evaluating competing models, a traditional weakness of the ‘traditional’ statistical approach used in social science.
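The core move in recursive partitioning can be sketched in a few lines. This is a deliberately minimal stand-in for the model-based version the abstract cites (which fits a parametric model in each node and tests parameter instability): here the "model" in each partition is just an intercept (the mean), and one split is chosen where the combined per-partition fit improves most. Data and variable names are invented for illustration:

```python
def sse(ys):
    """Sum of squared errors around the mean: the fit of an
    intercept-only model, the simplest per-partition 'model'."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """One step of recursive partitioning: choose the threshold on x
    that most improves the summed per-partition model fit. Applying
    this recursively to each side yields the full tree."""
    best = (None, sse(ys))  # no split unless one strictly helps
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = sse(left) + sse(right)
        if score < best[1]:
            best = (t, score)
    return best

# Two regimes in toy data: the split should fall between them.
xs = [1, 2, 3, 10, 11, 12]
ys = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
threshold, score = best_split(xs, ys)
print(threshold)
```

The data-driven 'culture' the author describes shows here: the regime boundary is found by fit improvement rather than specified in advance by theory.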

  10. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  11. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  12. Medicinal chemistry in drug discovery in big pharma: past, present and future.

    Science.gov (United States)

    Campbell, Ian B; Macdonald, Simon J F; Procopiou, Panayiotis A

    2018-02-01

    The changes in synthetic and medicinal chemistry and related drug discovery science as practiced in big pharma over the past few decades are described. These have been predominantly driven by wider changes in society, namely the computer, the internet and globalisation. Thoughts about the future of medicinal chemistry are also discussed, including sharing the risks and costs of drug discovery and the future of outsourcing. The continuing impact of access to substantial computing power and big data, and of the use of algorithms in data analysis and drug design, is also presented. The next generation of medicinal chemists will communicate in ways that reflect social media and the results of constantly being connected to each other and to data. Copyright © 2017. Published by Elsevier Ltd.

  13. Psycho-informatics: Big Data shaping modern psychometrics.

    Science.gov (United States)

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

    For the first time in history, it is possible to study human behavior at great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics, both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprising (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One study attempts to quantify the severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. An experimental study on the erosion behavior of pseudoelastic TiNi alloy in dry sand and in aggressive media

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, T.; Li, D.Y. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemical Engineering

    2000-11-30

    The corrosive erosion behavior of Ti-51at.%Ni alloy under different erosion conditions was studied and compared to that of 304 stainless steel. Erosion tests were performed in a slurry-pot tester with dry sand, 3.5% NaCl slurry and 0.1 mol l{sup -1} H{sub 2}SO{sub 4} slurry containing 30% silica sand, respectively. Synergistic effects of corrosion and erosion were studied in steady corrosion, polarization, dry sand erosion and micro-wear experiments. An electrochemical-scratching test characterized the failure and recovery of the passive film formed on the TiNi alloy in 3.5% NaCl and 0.1 mol l{sup -1} H{sub 2}SO{sub 4} solutions, respectively. In both dry sand and the corrosive media, the TiNi alloy exhibited considerably greater erosion resistance than 304 stainless steel. (orig.)

  15. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  16. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big data is an emerging topic: as the quantity of information grows exponentially, its main challenge becomes the value of the information. The information value is defined not only by value extraction from huge data sets, as fast and optimal as possible, but also by value extraction from uncertain and inaccurate data, in an innovative manner, using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  17. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
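The spurious-correlation phenomenon the abstract names is easy to demonstrate: with a fixed sample size, the maximum sample correlation between a target and a growing set of *independent* noise predictors keeps climbing, even though every true correlation is zero. The sketch below uses only toy Gaussian data and invented parameter names, not the paper's experiments:

```python
import random

def max_abs_corr(n, p, seed=0):
    """Max absolute sample correlation between a length-n target and p
    independent Gaussian noise predictors. It tends to grow with p
    even though the true correlation is zero (spurious correlation)."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n)]

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((z - mb) ** 2 for z in b) ** 0.5
        return cov / (va * vb)

    return max(abs(corr([rng.gauss(0, 1) for _ in range(n)], y))
               for _ in range(p))

small = max_abs_corr(n=50, p=10)
large = max_abs_corr(n=50, p=2000)
print(small, large)  # the p=2000 maximum dominates the p=10 one
```

This is why variable screening on raw marginal correlations becomes unreliable in high dimensions: some pure-noise predictor will always look strongly "associated" with the response.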

  18. EDITORIAL: Big science at the nanoscale Big science at the nanoscale

    Science.gov (United States)

    Reed, Mark

    2009-10-01

    In 1990, the journal Nanotechnology was the first academic publication dedicated to disseminating the results of research in what was then a new field of scientific endeavour. To celebrate the 20th volume of Nanotechnology, we are publishing a special issue of top research papers covering all aspects of this multidisciplinary science, including biology, electronics and photonics, quantum phenomena, sensing and actuating, patterning and fabrication, material synthesis and the properties of nanomaterials. In the early 1980s, scanning probe microscopes brought the concepts of matter and interactions at the nanoscale into visual reality, and hastened a flurry of activity in the burgeoning new field of nanoscience. Twenty years on and nanotechnology has truly come of age. The ramifications are pervasive throughout daily life in communication, health care and entertainment technology. For example, DVDs have now consigned videotapes to the ark and mobile phones are as prevalent as house keys, and these technologies already look set to be superseded by internet phones and Blu-Ray discs. Nanotechnology has been in the unique position of following the explosive growth of this discipline from its outset. The surge of activity in the field is notable in the number of papers published by the journal each year, which has skyrocketed. The journal is now published weekly, publishing over 1400 articles a year. What is more, the quality of these articles is also constantly improving; the average number of citations to articles within two years of publication, quantified by the ISI impact factor, continues to increase every year. The rate of activity in the field shows no signs of slowing down, as is evident from the wealth of great research published each week. 
The aim of the 20th volume special issue is to present some of the very best and most recent research in many of the wide-ranging fields covered by the journal, a celebration of the present state of play in nanotechnology and

  19. Is Pluto a planet? Student-powered video rap 'battle' over tiny Pluto's embattled planetary standing

    Science.gov (United States)

    Beisser, K.; Cruikshank, D. P.; McFadden, T.

    2013-12-01

    Is Pluto a planet? Some creative low-income Bay-area middle-schoolers put a musical spin on this hot science debate with a video rap 'battle' over tiny Pluto's embattled planetary standing. The students' timing was perfect, with NASA's New Horizons mission set to conduct the first reconnaissance of Pluto and its moons in July 2015. Pluto - the last of the nine original planets to be explored by spacecraft - has been the subject of scientific study and speculation since Clyde Tombaugh discovered it in 1930, orbiting the Sun far beyond Neptune. Produced by the students and a very creative educator, the video features students 'battling' back and forth over the idea of Pluto being a planet. The group collaborated with actual space scientists to gather information and shot their video before a 'green screen' that was eventually filled with animations and visuals supplied by the New Horizons mission team. The video debuted at the Pluto Science Conference in Maryland in July 2013 - to a rousing response from researchers in attendance. The video marks a nontraditional approach to the ongoing 'great planet debate' while educating viewers on a recently discovered region of the solar system. By the 1990s, researchers had learned that Pluto possessed multiple exotic ices on its surface, a complex atmosphere and seasonal cycles, and a large moon (Charon) that likely resulted from a giant impact on Pluto itself. It also became clear that Pluto was no misfit among the planets - as had long been thought - but the largest and brightest body in a newly discovered 'third zone' of our planetary system called the Kuiper Belt. More recent observations have revealed that Pluto has a rich system of satellites - five known moons - and a surface that changes over time. Scientists even speculate that Pluto may possess an internal ocean.
For these and other reasons, the 2003 Planetary Decadal Survey ranked a Pluto/Kuiper Belt mission as the highest priority mission for NASA's newly created

  20. Harnessing Big Data for Systems Pharmacology.

    Science.gov (United States)

    Xie, Lei; Draizen, Eli J; Bourne, Philip E

    2017-01-06

    Systems pharmacology aims to holistically understand mechanisms of drug actions to support drug discovery and clinical practice. Systems pharmacology modeling (SPM) is data driven. It integrates an exponentially growing amount of data at multiple scales (genetic, molecular, cellular, organismal, and environmental). The goal of SPM is to develop mechanistic or predictive multiscale models that are interpretable and actionable. The current explosions in genomics and other omics data, as well as the tremendous advances in big data technologies, have already enabled biologists to generate novel hypotheses and gain new knowledge through computational models of genome-wide, heterogeneous, and dynamic data sets. More work is needed to interpret and predict a drug response phenotype, which is dependent on many known and unknown factors. To gain a comprehensive understanding of drug actions, SPM requires close collaborations between domain experts from diverse fields and integration of heterogeneous models from biophysics, mathematics, statistics, machine learning, and semantic webs. This creates challenges in model management, model integration, model translation, and knowledge integration. In this review, we discuss several emergent issues in SPM and potential solutions using big data technology and analytics. The concurrent development of high-throughput techniques, cloud computing, data science, and the semantic web will likely allow SPM to be findable, accessible, interoperable, reusable, reliable, interpretable, and actionable.