WorldWideScience

Sample records for ignoring history big

  1. Big data in history

    CERN Document Server

    Manning, Patrick

    2013-01-01

    Big Data in History introduces the project to create a world-historical archive, tracing the last four centuries of historical dynamics and change. Chapters address the archive's overall plan, how to interpret the past through a global archive, the missions of gathering records, linking local data into global patterns, and exploring the results.

  2. The Case for "Big History."

    Science.gov (United States)

    Christian, David

    1991-01-01

    Urges an approach to the teaching of history that takes the largest possible perspective, crossing time as well as space. Discusses the problems and advantages of such an approach. Describes a course on "big" history that begins with time, creation myths, and astronomy, and moves on to paleontology and evolution. (DK)

  3. The International Big History Association

    Science.gov (United States)

    Duffy, Michael; Duffy, D'Neil

    2013-01-01

    IBHA, the International Big History Association, was organized in 2010 and "promotes the unified, interdisciplinary study and teaching of history of the Cosmos, Earth, Life, and Humanity." This is the vision that Montessori embraced long before the discoveries of modern science fleshed out the story of the evolving universe. "Big…

  4. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  5. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  6. The Marley hypothesis: denial of racism reflects ignorance of history.

    Science.gov (United States)

    Nelson, Jessica C; Adams, Glenn; Salter, Phia S

    2013-02-01

    This study used a signal detection paradigm to explore the Marley hypothesis--that group differences in perception of racism reflect dominant-group denial of and ignorance about the extent of past racism. White American students from a midwestern university and Black American students from two historically Black universities completed surveys about their historical knowledge and perception of racism. Relative to Black participants, White participants perceived less racism in both isolated incidents and systemic manifestations of racism. They also performed worse on a measure of historical knowledge (i.e., they did not discriminate historical fact from fiction), and this group difference in historical knowledge mediated the differences in perception of racism. Racial identity relevance moderated group differences in perception of systemic manifestations of racism (but not isolated incidents), such that group differences were stronger among participants who scored higher on a measure of racial identity relevance. The results help illuminate the importance of epistemologies of ignorance: cultural-psychological tools that afford denial of and inaction about injustice.
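    The signal detection framing can be made concrete: sensitivity (d′) measures how well respondents separate historical fact from fiction, independently of their overall willingness to call items true. A minimal Python sketch, using purely hypothetical hit and false-alarm rates rather than the study's data:

    ```python
    from statistics import NormalDist

    def dprime(hit_rate: float, fa_rate: float) -> float:
        """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical rates: "hits" = real historical events correctly endorsed,
    # "false alarms" = fabricated events incorrectly endorsed as historical.
    print(dprime(hit_rate=0.80, fa_rate=0.30))  # higher d' = sharper fact/fiction discrimination
    print(dprime(hit_rate=0.65, fa_rate=0.45))  # lower d' = weaker fact/fiction discrimination
    ```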

  7. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  8. A little big history of Tiananmen

    OpenAIRE

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why people built the gate the way they did can be found. These explanations are useful in their own right and may also be used to deepen our understanding of more traditional explanations of why Tiananmen ...

  9. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans, see, in particular, [17]) and Human History (Aztecs to USA, see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model in the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we show that the Statistical Drake Equation (namely the statistical extension of the classical Drake Equation typical of SETI) can be regarded as the “frozen in time” part of GBM. This makes SETI a subset of our Big History Theory based on GBMs: just as the GBM is the “movie” unfolding in time, so the Statistical Drake Equation is its “still picture”, static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50 million living species or more exist, each
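    Since the model at the heart of this abstract is the Geometric Brownian Motion, a small simulation may clarify the "movie versus still picture" metaphor: a GBM path unfolds in time, while its value at any fixed instant is a lognormal random variable, which is the sense in which the Statistical Drake Equation is a "frozen in time" snapshot. A minimal sketch; the drift and volatility are illustrative placeholders, not Maccone's fitted values:

    ```python
    import numpy as np

    def gbm_path(n0, mu, sigma, t_span, steps, seed=0):
        """One Geometric Brownian Motion path, dN = mu*N dt + sigma*N dW,
        simulated with the exact lognormal update."""
        rng = np.random.default_rng(seed)
        dt = t_span / steps
        log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
        return n0 * np.exp(np.concatenate(([0.0], np.cumsum(log_steps))))

    # Toy run: a "number of living species" curve rising over 3.5 billion years
    # (time in billions of years; mu and sigma are made-up illustrative values).
    path = gbm_path(n0=1.0, mu=5.0, sigma=1.0, t_span=3.5, steps=350)
    print(path[-1])  # lognormally distributed endpoint -- the "still picture"
    ```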

  10. Big-Time Football Conferences Tried To Ignore Rule on Representation of Women.

    Science.gov (United States)

    Naughton, Jim

    1997-01-01

    Controversy over limited representation of women on a key committee of the National Collegiate Athletic Association, the Division I Management Council, has renewed concerns that big-time football conferences are not committed to diverse membership on such panels. The division's board of directors rejected the first female nominees and suggested…

  11. The Natural Science Underlying Big History

    Directory of Open Access Journals (Sweden)

    Eric J. Chaisson

    2014-01-01

    Nature’s many varied complex systems—including galaxies, stars, planets, life, and society—are islands of order within the increasingly disordered Universe. All organized systems are subject to physical, biological, or cultural evolution, which together comprise the grander interdisciplinary subject of cosmic evolution. A wealth of observational data supports the hypothesis that increasingly complex systems evolve unceasingly, uncaringly, and unpredictably from big bang to humankind. These are global history greatly extended, big history with a scientific basis, and natural history broadly portrayed across ∼14 billion years of time. Human beings and our cultural inventions are not special, unique, or apart from Nature; rather, we are an integral part of a universal evolutionary process connecting all such complex systems throughout space and time. Such evolution writ large has significant potential to unify the natural sciences into a holistic understanding of who we are and whence we came. No new science (beyond frontier, nonequilibrium thermodynamics) is needed to describe cosmic evolution’s major milestones at a deep and empirical level. Quantitative models and experimental tests imply that a remarkable simplicity underlies the emergence and growth of complexity for a wide spectrum of known and diverse systems. Energy is a principal facilitator of the rising complexity of ordered systems within the expanding Universe; energy flows are as central to life and society as they are to stars and galaxies. In particular, energy rate density—contrasting with information content or entropy production—is an objective metric suitable to gauge relative degrees of complexity among a hierarchy of widely assorted systems observed throughout the material Universe. Operationally, those systems capable of utilizing optimum amounts of energy tend to survive, and those that cannot are nonrandomly eliminated.
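    Energy rate density, the metric named above, is simply the power flowing through a system divided by its mass, conventionally quoted in erg s⁻¹ g⁻¹. A back-of-the-envelope sketch using round textbook figures (approximate values, for illustration only):

    ```python
    # Energy rate density: power per unit mass.
    # Unit conversion: 1 W/kg = 1e7 erg/s per 1e3 g = 1e4 erg s^-1 g^-1.
    W_PER_KG_TO_ERG_S_G = 1e4

    def energy_rate_density(power_watts: float, mass_kg: float) -> float:
        return power_watts / mass_kg * W_PER_KG_TO_ERG_S_G

    systems = {
        "Sun":        (3.8e26, 2.0e30),  # luminosity (W), mass (kg) -- round values
        "human body": (1.0e2,  7.0e1),   # resting metabolic power (W), mass (kg)
    }
    for name, (power, mass) in systems.items():
        print(f"{name}: ~{energy_rate_density(power, mass):.3g} erg/s/g")
    # Sun ~2 erg/s/g versus human body ~1e4 erg/s/g: the ordering Chaisson
    # reads as rising complexity from stars to life.
    ```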

  12. How should we do the history of Big Data?

    OpenAIRE

    David Beer

    2016-01-01

    Taking its lead from Ian Hacking’s article ‘How should we do the history of statistics?’, this article reflects on how we might develop a sociologically informed history of big data. It argues that within the history of social statistics we have a relatively well-developed history of the material phenomenon of big data. Yet this article argues that we now need to take the concept of ‘big data’ seriously: there is a pressing need to explore the type of work that is being done by that concept. ...

  13. Novel Readings: The History of a Writing Community by a Partial, Prejudiced, & Ignorant Historian

    Science.gov (United States)

    King, Virginia

    2013-01-01

    In this article, I explore the history of a higher education writing community from its establishment in 2005 to the present day. In order to provide a model of community development which may be generalizable, this inherently "partial" and "prejudiced" Autoethnographic account is framed by themes taken from three of the novels of Jane Austen…

  14. Organizational Ignorance

    DEFF Research Database (Denmark)

    Lange, Ann-Christina

    2016-01-01

    This paper provides an analysis of strategic uses of ignorance or not-knowing in one of the most secretive industries within the financial sector. The focus of the paper is on the relation between imitation and ignorance within the organizational structure of high-frequency trading (HFT) firms...... and investigate the kinds of imitations that might be produced from structures of not-knowing (i.e. structures intended to divide, obscure and protect knowledge). This point is illustrated through ethnographic studies and interviews within five HFT firms. The data show how a black-box structure of ignorance...

  15. Big History or the 13800 million years from the Big Bang to the Human Brain

    Science.gov (United States)

    Gústafsson, Ludvik E.

    2017-04-01

    Big History is the integrated history of the Cosmos, Earth, Life, and Humanity. It is an attempt to understand our existence as a continuous unfolding of processes leading to ever more complex structures. Three major steps in the development of the Universe can be distinguished, the first being the creation of matter/energy and forces in the context of an expanding universe, while the second and third steps were reached when completely new qualities of matter came into existence. 1. Matter comes out of nothing: Quantum fluctuations and the inflation event are thought to be responsible for the creation of stable matter particles in what is called the Big Bang. Along with simple particles the universe is formed. Later larger particles like atoms and the simplest chemical elements, hydrogen and helium, evolved. Gravitational contraction of hydrogen and helium formed the first stars and later on the first galaxies. Massive stars ended their lives in violent explosions, releasing heavier elements like carbon, oxygen, nitrogen, sulfur and iron into the universe. Subsequent star formation led to star systems with bodies containing these heavier elements. 2. Matter starts to live: About 9200 million years after the Big Bang a rather inconspicuous star of middle size formed in one of a billion galaxies. The leftovers of the star formation clumped into bodies rotating around the central star. In some of them elements like silicon, oxygen, iron and many others became the dominant matter. On the third of these bodies from the central star much of the surface was covered with an already very common chemical compound in the universe, water. Liquid water and plenty of various elements, especially carbon, were the ingredients of very complex chemical compounds that made up even more complex structures. These were able to replicate themselves. Life had appeared, the only occasion that we human beings know of. Life evolved subsequently, leading eventually to the formation of multicellular

  16. The Varieties of Ignorance

    DEFF Research Database (Denmark)

    Nottelmann, Nikolaj

    2016-01-01

    This chapter discusses varieties of ignorance divided according to kind (what the subject is ignorant of), degree, and order (e.g. ignorance of ignorance equals second-order ignorance). It provides analyses of notions such as factual ignorance, erotetic ignorance (ignorance of answers to question...

  17. Pre-big bang cosmology: A long history of time?

    International Nuclear Information System (INIS)

    Veneziano, G.

    1999-01-01

    The popular myth according to which the Universe - and time itself - started with/near a big bang singularity is questioned. After claiming that the two main puzzles of standard cosmology allow for two possible logical answers, I will argue that superstring theory strongly favours the pre-big bang (PBB) alternative. I will then explain why PBB inflation is as generic as classical gravitational collapse, and why, as a result of symmetries in the latter problem, recent fine-tuning objections to the PBB scenario are unfounded. A hot big bang state naturally results from the powerful amplification of vacuum quantum fluctuations before the big bang, a phenomenon whose observable consequences will be briefly summarized. (author)

  18. Can history improve big bang health reform? Commentary.

    Science.gov (United States)

    Marchildon, Gregory P

    2018-01-26

    At present, the professional skills of the historian are rarely relied upon when health policies are being formulated. There are numerous reasons for this, one of which is the natural desire of decision-makers to break with the past when enacting big bang policy change. This article identifies the strengths professional historians bring to bear on policy development using the establishment and subsequent reform of universal health coverage as an example. Historians provide pertinent and historically informed context; isolate the forces that have historically allowed for major reform; and separate the truly novel reforms from those attempted or implemented in the past. In addition, the historian's use of primary sources allows potentially new and highly salient facts to guide the framing of the policy problem and its solution. This paper argues that historians are critical for constructing a viable narrative of the establishment and evolution of universal health coverage policies. The lack of this narrative makes it difficult to achieve an accurate assessment of systemic gaps in coverage and access, and the design or redesign of universal health coverage that can successfully close these gaps.

  19. Big Data for Global History: The Transformative Promise of Digital Humanities

    Directory of Open Access Journals (Sweden)

    Joris van Eijnatten

    2013-12-01

    This article discusses the promises and challenges of digital humanities methodologies for historical inquiry. In order to address the great outstanding question whether big data will re-invigorate macro-history, a number of research projects are described that use cultural text mining to explore big data repositories of digitised newspapers. The advantages of quantitative analysis, visualisation and named entity recognition in both exploration and analysis are illustrated in the study of public debates on drugs, drug trafficking, and drug users in the early twentieth century (wahsp), the comparative study of discourses about heredity, genetics, and eugenics in Dutch and German newspapers, 1863-1940 (biland), and the study of trans-Atlantic discourses (Translantis). While many technological and practical obstacles remain, advantages over traditional hermeneutic methodology are found in heuristics, analytics, quantitative trans-disciplinarity, and reproducibility, offering a quantitative and trans-national perspective on the history of mentalities.

  20. Ignorability for categorical data

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2005-01-01

    We study the problem of ignorability in likelihood-based inference from incomplete categorical data. Two versions of the coarsened at random assumption (car) are distinguished, their compatibility with the parameter distinctness assumption is investigated and several conditions for ignorability...

  1. Ignore and Conquer.

    Science.gov (United States)

    Conroy, Mary

    1989-01-01

    Discusses how teachers can deal with student misbehavior by ignoring negative behavior that is motivated by a desire for attention. Practical techniques are described for pinpointing attention seekers, enlisting classmates to deal with misbehaving students, ignoring misbehavior, and distinguishing behavior that responds to this technique from…

  2. Transdisciplinary Perspectives in Bioethics: A Co-evolutionary Introduction from the Big History

    Directory of Open Access Journals (Sweden)

    Javier Collado-Ruano

    2016-10-01

    The main objective of this work is to expand the bioethics notion expressed in Article 17 of the Universal Declaration on Bioethics and Human Rights, concerning the interconnections between human beings and other life forms. For this purpose, the transdisciplinary methodology is combined with the theoretical framework of the “Big History” to approach the co-evolutionary phenomena that life has been developing on Earth for some 3.8 billion years. As a result, the study introduces us to the unification, integration and inclusion of the history of the universe, the solar system, Earth, and life with the history of human beings. In conclusion, I consider that, to safeguard the cosmic miracle that the emergence of life represents, we must adopt new transdisciplinary perspectives in bioethics to address the ecosystem complexity of co-evolutionary processes of life on Gaia as a whole.

  3. Strategic Self-Ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    We examine strategic self-ignorance—the use of ignorance as an excuse to overindulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals — a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58 percent) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  4. Strategic self-ignorance

    DEFF Research Database (Denmark)

    Thunström, Linda; Nordström, Leif Jonas; Shogren, Jason F.

    2016-01-01

    We examine strategic self-ignorance—the use of ignorance as an excuse to over-indulge in pleasurable activities that may be harmful to one’s future self. Our model shows that guilt aversion provides a behavioral rationale for present-biased agents to avoid information about negative future impacts...... of such activities. We then confront our model with data from an experiment using prepared, restaurant-style meals—a good that is transparent in immediate pleasure (taste) but non-transparent in future harm (calories). Our results support the notion that strategic self-ignorance matters: nearly three of five...... subjects (58%) chose to ignore free information on calorie content, leading at-risk subjects to consume significantly more calories. We also find evidence consistent with our model on the determinants of strategic self-ignorance....

  5. Ignorance, information and autonomy

    OpenAIRE

    Harris, J.; Keywood, K.

    2001-01-01

    People have a powerful interest in genetic privacy and its associated claim to ignorance, and some equally powerful desires to be shielded from disturbing information are often voiced. We argue, however, that there is no such thing as a right to remain in ignorance, where a right is understood as an entitlement that trumps competing claims. This does not of course mean that information must always be forced upon unwilling recipients, only that there is no prima facie entitlement to be protect...

  6. Clash of Ignorance

    Directory of Open Access Journals (Sweden)

    Mahmoud Eid

    2012-06-01

    The clash of ignorance thesis presents a critique of the clash of civilizations theory. It challenges the assumptions that civilizations are monolithic entities that do not interact and that the Self and the Other are always opposed to each other. Despite some significantly different values and clashes between Western and Muslim civilizations, they overlap with each other in many ways and have historically demonstrated the capacity for fruitful engagement. The clash of ignorance thesis makes a significant contribution to the understanding of intercultural and international communication as well as to the study of inter-group relations in various other areas of scholarship. It does this by bringing forward for examination the key impediments to mutually beneficial interaction between groups. The thesis directly addresses the particular problem of ignorance that other epistemological approaches have not raised in a substantial manner. Whereas the critique of Orientalism deals with the hegemonic construction of knowledge, the clash of ignorance paradigm broadens the inquiry to include various actors whose respective distortions of knowledge symbiotically promote conflict with each other. It also augments the power-knowledge model to provide conceptual and analytical tools for understanding the exploitation of ignorance for the purposes of enhancing particular groups’ or individuals’ power. Whereas academics, policymakers, think tanks, and religious leaders have referred to the clash of ignorance concept, this essay contributes to its development as a theory that is able to provide a valid basis to explain the empirical evidence drawn from relevant cases.

  7. From Punched Cards to "Big Data": A Social History of Database Populism

    Directory of Open Access Journals (Sweden)

    Kevin Driscoll

    2012-08-01

    Since the diffusion of the punched card tabulator following the 1890 U.S. Census, mass-scale information processing has been alternately a site of opportunity, ambivalence and fear in the American imagination. While large bureaucracies have tended to deploy database technology toward purposes of surveillance and control, the rise of personal computing made databases accessible to individuals and small businesses for the first time. Today, the massive collection of trace communication data by public and private institutions has renewed popular anxiety about the role of the database in society. This essay traces the social history of database technology across three periods that represent significant changes in the accessibility and infrastructure of information processing systems. Although many proposed uses of "big data" seem to threaten individual privacy, a largely-forgotten database populism from the 1970s and 1980s suggests that a reclamation of small-scale data processing might lead to sharper popular critique in the future.

  8. Too big to ignore the business case for big data

    CERN Document Server

    Simon, Phil

    2013-01-01

    Residents in Boston, Massachusetts are automatically reporting potholes and road hazards via their smartphones. Progressive Insurance tracks real-time customer driving patterns and uses that information to offer rates truly commensurate with individual safety. Google accurately predicts local flu outbreaks based upon thousands of user search queries. Amazon provides remarkably insightful, relevant, and timely product recommendations to its hundreds of millions of customers. Quantcast lets companies target precise audiences and key demographics throughout the Web. NASA runs contests via gami

  9. Ignorance, information and autonomy.

    Science.gov (United States)

    Harris, J; Keywood, K

    2001-09-01

    People have a powerful interest in genetic privacy and its associated claim to ignorance, and some equally powerful desires to be shielded from disturbing information are often voiced. We argue, however, that there is no such thing as a right to remain in ignorance, where a right is understood as an entitlement that trumps competing claims. This does not of course mean that information must always be forced upon unwilling recipients, only that there is no prima facie entitlement to be protected from true or honest information about oneself. Any claims to be shielded from information about the self must compete on equal terms with claims based in the rights and interests of others. In balancing the weight and importance of rival considerations about giving or withholding information, if rights claims have any place, rights are more likely to be defensible on the side of honest communication of information rather than in defence of ignorance. The right to free speech and the right to decline to accept responsibility to take decisions for others imposed by those others seem to us more plausible candidates for fully fledged rights in this field than any purported right to ignorance. Finally, and most importantly, if the right to autonomy is invoked, a proper understanding of the distinction between claims to liberty and claims to autonomy shows that the principle of autonomy, as it is understood in contemporary social ethics and English law, supports the giving rather than the withholding of information in most circumstances.

  10. The virtues of ignorance.

    Science.gov (United States)

    Son, Lisa K; Kornell, Nate

    2010-02-01

    Although ignorance and uncertainty are usually unwelcome feelings, they have unintuitive advantages for both human and non-human animals, which we review here. We begin with the perils of too much information: expertise and knowledge can come with illusions (and delusions) of knowing. We then describe how withholding information can counteract these perils: providing people with less information enables them to judge more precisely what they know and do not know, which in turn enhances long-term memory. Data are presented from a new experiment that illustrates how knowing what we do not know can result in helpful choices and enhanced learning. We conclude by showing that ignorance can be a virtue, as long as it is recognized and rectified. Copyright 2009 Elsevier B.V. All rights reserved.

  11. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  12. Moving Forward, Looking Back--Historical Perspective, "Big History" and the Return of the "Longue Durée": Time to Develop Our Scale Hopping Muscles

    Science.gov (United States)

    Hawkey, Kate

    2015-01-01

    "Big history" is a term receiving a great deal of attention at present, particularly in North America where considerable sums of money have been invested in designing curricula and assessment tools to help teachers teach history at far larger scales of time than normal. Hawkey considers the pros and cons of incorporating components of…

  13. History Writ Large: Big-character Posters, Red Logorrhoea and the Art of Words

    OpenAIRE

    Barmé, Geremie R.

    2012-01-01

    The starting point of this paper is the 1986 artwork of the then Xiamen-based artist Wu Shanzhuan, called ‘Red Humor’, which reworked references to big-character posters (dazi bao 大字报) and other Mao-era forms of political discourse, recalling the Cultural Revolution. It explains how Wu’s installation offered a provocative microcosm of the overwhelming mood engendered by a logocentric movement to ‘paint the nation red’ with word-images during the years 1966-1967. This discussion of the hyper-r...

  14. Transdisciplinary Perspectives in Bioethics: A Co-evolutionary Introduction from the Big History

    OpenAIRE

    Javier Collado-Ruano

    2016-01-01

    The main objective of this work is to expand the bioethics notion expressed in Article 17 of the Universal Declaration on Bioethics and Human Rights, concerning the interconnections between human beings and other life forms. For this purpose, the transdisciplinary methodology is combined with the theoretical framework of the “Big History” to approach the co-evolutionary phenomena that life has been developing on Earth for some 3.8 billion years. As a result, the study introduces us to t...

  15. Big is a Thing of the Past: Climate Change and Methodology in the History of Ideas.

    Science.gov (United States)

    Coen, Deborah R

    2016-01-01

    The climate crisis has raised questions about the proper scale of historical analysis in the Anthropocene. After explaining how this methodological crisis differs from an earlier stand-off between proponents of microhistory and total history, this paper suggests a role for intellectual history in moving us beyond the current debate. What is needed is a history of "scaling"; that is, we need to historicize the process of mediating between different frameworks of measurement, even those that might at first appear incommensurable. Historical examples are explored in which such a process of commensuration has allowed for a pluralism of perceptions of space and time.

  16. Molecular ecology of the big brown bat (Eptesicus fuscus): Genetic and natural history variation in a hybrid zone

    Science.gov (United States)

    Neubaum, M.A.; Douglas, M.R.; Douglas, M.E.; O'Shea, T.J.

    2007-01-01

    Several geographically distinct mitochondrial DNA (mtDNA) lineages of the big brown bat (Eptesicus fuscus) have been documented in North America. Individuals from 2 of these lineages, an eastern and a western form, co-occur within maternity colonies in Colorado. The discovery of 2 divergent mtDNA lineages in sympatry prompted a set of questions regarding possible biological differences between haplotypes. We captured big brown bats at maternity roosts in Colorado and recorded data on body size, pelage color, litter size, roosting and overwintering behaviors, and local distributions. Wing biopsies were collected for genetic analysis. The ND2 region of the mtDNA molecule was used to determine lineage of the bats. In addition, nuclear DNA (nDNA) intron 1 of the β-globin gene was used to determine if mtDNA lineages are hybridizing. Eastern and western mtDNA lineages differed by 10.3% sequence divergence and examination of genetic data suggests recent population expansion for both lineages. Differences in distribution occur along the Colorado Front Range, with an increasing proportion of western haplotypes farther south. Results from nDNA analyses demonstrated hybridization between the 2 lineages. Additionally, no outstanding distinctiveness was found between the mtDNA lineages in natural history characters examined. We speculate that historical climate changes separated this species into isolated eastern and western populations, and that secondary contact with subsequent interbreeding was facilitated by European settlement. © 2007 American Society of Mammalogists.

  17. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Hauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  18. The logic of strategic ignorance.

    Science.gov (United States)

    McGoey, Linsey

    2012-09-01

    Ignorance and knowledge are often thought of as opposite phenomena. Knowledge is seen as a source of power, and ignorance as a barrier to consolidating authority in political and corporate arenas. This article disputes this, exploring the ways that ignorance serves as a productive asset, helping individuals and institutions to command resources, deny liability in the aftermath of crises, and to assert expertise in the face of unpredictable outcomes. Through a focus on the Food and Drug Administration's licensing of Ketek, an antibiotic drug manufactured by Sanofi-Aventis and linked to liver failure, I suggest that in drug regulation, different actors, from physicians to regulators to manufacturers, often battle over who can attest to the least knowledge of the efficacy and safety of different drugs - a finding that raises new insights about the value of ignorance as an organizational resource. © London School of Economics and Political Science 2012.

  19. Big Data, Algorithmic Regulation, and the History of the Cybersyn Project in Chile, 1971–1973

    Directory of Open Access Journals (Sweden)

    Katharina Loeber

    2018-04-01

    We are living in a data-driven society. Big Data and the Internet of Things are popular terms. Governments, universities and the private sector make great investments in collecting and storing data and also extracting new knowledge from these data banks. Technological enthusiasm runs throughout political discourses. “Algorithmic regulation” is defined as a form of data-driven governance. Big Data shall offer brand new opportunities in scientific research. At the same time, political criticism of data storage grows because of a lack of privacy protection and the centralization of data in the hands of governments and corporations. Calls for data-driven dynamic regulation have existed in the past. In Chile, cybernetic development led to the creation of Cybersyn, a computer system that was created to manage the socialist economy under the Allende government 1971–1973. My contribution will present this Cybersyn project created by Stafford Beer. Beer proposed the creation of a “liberty machine” in which expert knowledge would be grounded in data-guided policy. The paper will focus on the human–technological complex in society. The first section of the paper will discuss whether the political and social environment can completely change the attempts of algorithmic regulation. I will deal specifically with the development of technological knowledge in Chile, a postcolonial state, and the relationship between citizens and data storage in a socialist state. In a second section, I will examine the question of which measures can lessen the danger of data storage regarding privacy in a democratic society. Lastly, I will discuss how much data-driven governance is required for democracy and political participation. I will present a second case study: digital participatory budgeting (DPB) in Brazil.

  20. Ignoring Ignorance: Notes on Pedagogical Relationships in Citizen Science

    Directory of Open Access Journals (Sweden)

    Michael Scroggins

    2017-04-01

    Theoretically, this article seeks to broaden the conceptualization of ignorance within STS by drawing on a line of theory developed in the philosophy and anthropology of education to argue that ignorance can be productively conceptualized as a state of possibility and that doing so can enable more democratic forms of citizen science. In contrast to conceptualizations of ignorance as a lack, lag, or manufactured product, ignorance is developed here as both the opening move in scientific inquiry and the common ground over which that inquiry proceeds. Empirically, the argument is developed through an ethnographic description of Scroggins' participation in a failed citizen science project at a DIYbio laboratory. Supporting the empirical case are a review of the STS literature on expertise and a critical examination of the structures of participation within two canonical citizen science projects. Though onerous, through close attention to how people transform one another during inquiry, increasingly democratic forms of citizen science, grounded in the commonness of ignorance, can be put into practice.

  1. Growth history and crown vine coverage are principal factors influencing growth and mortality rates of big-leaf mahogany Swietenia macrophylla in Brazil

    Science.gov (United States)

    James Grogan; R. Matthew Landis

    2009-01-01

    1. Current efforts to model population dynamics of high-value tropical timber species largely assume that individual growth history is unimportant to population dynamics, yet growth autocorrelation is known to adversely affect model predictions. In this study, we analyse a decade of annual census data from a natural population of big-leaf mahogany Swietenia macrophylla...

  2. Developing a ‘big picture’: Effects of collaborative construction of multimodal representations in history

    NARCIS (Netherlands)

    Prangsma, M.E.; van Boxtel, C.A.M.; Kanselaar, G.

    2008-01-01

    Many pupils have difficulties with the abstract verbal information in history lessons. In this study we assessed the value of active construction of multimodal representations of historical phenomena. In an experimental study we compared the learning outcomes of pupils who co-constructed textual

  3. History Writ Large: Big-character Posters, Red Logorrhoea and the Art of Words

    Directory of Open Access Journals (Sweden)

    Geremie R. Barmé

    2013-01-01

    The starting point of this paper is the 1986 artwork of the then Xiamen-based artist Wu Shanzhuan, called ‘Red Humor’, which reworked references to big-character posters (dazi bao 大字报) and other Mao-era forms of political discourse, recalling the Cultural Revolution. It explains how Wu’s installation offered a provocative microcosm of the overwhelming mood engendered by a logocentric movement to ‘paint the nation red’ with word-images during the years 1966-1967. This discussion of the hyper-real use of the dazi bao during China’s Cultural Revolution era (c.1964-1978) allows us to probe into ‘the legacies of the word made image’ in modern China. The paper argues that, since the 1980s, Wu Shanzhuan has had many emulators and ‘avant-garde successors’, since we have seen multiple examples of parodic deconstructions of the cultural authority of the Chinese character (zi) in recent decades.

  4. The History of Radio Astronomy and the National Radio Astronomy Observatory: Evolution Toward Big Science

    Science.gov (United States)

    Malphrus, Benjamin Kevin

    1990-01-01

    The purpose of this study is to examine the sequence of events that led to the establishment of the NRAO, the construction and development of instrumentation, and the contributions and discovery events, and to relate the significance of these events to the evolution of the sciences of radio astronomy and cosmology. After an overview of the resources, a brief discussion of the early days of the science is given to set the stage for an examination of events that led to the establishment of the NRAO. The developmental and construction phases of the major instruments, including the 85-foot Tatel telescope, the 300-foot telescope, the 140-foot telescope, and the Green Bank Interferometer, are examined. The technical evolution of these instruments is traced and their relevance to scientific programs and discovery events is discussed. The history is told in narrative format, interspersed with technical and scientific explanations. Through the use of original data, technical and scientific information of historical concern is provided to elucidate major developments and events. An interpretive discussion of selected programs, events and technological developments that epitomize the contributions of the NRAO to the science of radio astronomy is provided. Scientific programs conducted with the NRAO instruments that were significant to galactic and extragalactic astronomy are presented. NRAO research programs presented include continuum and source surveys, mapping, a high precision verification of general relativity, and SETI programs. Cosmic phenomena investigated in these programs include galactic and extragalactic HI and HII, emission nebulae, supernova remnants, cosmic masers, giant molecular clouds, radio stars, normal and radio galaxies, and quasars. Modern NRAO instruments, including the VLA and VLBA, and their scientific programs are presented in the final chapter, as well as plans for future NRAO instruments such as the GBT.

  5. Fault-ignorant quantum search

    International Nuclear Information System (INIS)

    Vrana, Péter; Reeb, David; Reitzner, Daniel; Wolf, Michael M

    2014-01-01

    We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)
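    To see the kind of degradation at stake, here is a minimal density-matrix simulation of Grover search in which a global depolarizing channel acts after every iteration. The noise model and its strength are assumptions chosen for illustration; the paper's own noise models and fault-ignorant algorithms may differ:

    ```python
    import numpy as np

    def grover_success(n_items=64, marked=3, iters=6, p_noise=0.05):
        """Success probability of Grover search over n_items states when a
        depolarizing channel of strength p_noise follows each iteration."""
        N = n_items
        s = np.full(N, 1.0 / np.sqrt(N))              # uniform superposition |s>
        oracle = np.eye(N)
        oracle[marked, marked] = -1.0                 # phase-flip the marked item
        diffusion = 2.0 * np.outer(s, s) - np.eye(N)  # inversion about the mean
        G = diffusion @ oracle
        rho = np.outer(s, s)                          # start in |s><s|
        for _ in range(iters):
            rho = G @ rho @ G.T                       # one noiseless Grover step
            rho = (1 - p_noise) * rho + p_noise * np.eye(N) / N  # depolarize
        return float(rho[marked, marked])             # probability of the marked item

    print(grover_success(p_noise=0.00))  # ~1.0 after ~(pi/4)*sqrt(64) = 6 iterations
    print(grover_success(p_noise=0.05))  # visibly degraded success probability
    ```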

  6. Aspiring to Spectral Ignorance in Earth Observation

    Science.gov (United States)

    Oliver, S. A.

    2016-12-01

    Enabling robust, defensible and integrated decision making in the Era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters, and a way to apply them systematically to relate spectral bands amongst numerous different sensors, is proposed. The base unit for comparison in this work is the relative spectral response. From this input, a determination of what may constitute equivalence can be made by a user, based on their own conceptualisation of equivalence.
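    The abstract leaves its equivalence metrics unspecified; one plausible candidate over relative spectral responses is the intersection-over-union of the two curves, compared against a threshold. A sketch under that assumption, with hypothetical Gaussian bands standing in for real sensor responses:

    ```python
    import numpy as np

    def band_overlap(wl, rsr_a, rsr_b):
        """Candidate equivalence metric: area of the intersection of two
        relative spectral response curves divided by the area of their union."""
        intersection = np.trapz(np.minimum(rsr_a, rsr_b), wl)
        union = np.trapz(np.maximum(rsr_a, rsr_b), wl)
        return intersection / union

    # Hypothetical red bands, e.g. a Landsat-like and a Sentinel-2-like sensor.
    wl = np.linspace(600, 700, 501)                # wavelength grid (nm)
    rsr_a = np.exp(-0.5 * ((wl - 655) / 15) ** 2)  # made-up band A response
    rsr_b = np.exp(-0.5 * ((wl - 660) / 14) ** 2)  # made-up band B response
    score = band_overlap(wl, rsr_a, rsr_b)
    print(f"overlap = {score:.2f}; equivalent at a 0.8 threshold: {score >= 0.8}")
    ```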

  7. Traffic forecasts ignoring induced demand

    DEFF Research Database (Denmark)

    Næss, Petter; Nicolaisen, Morten Skou; Strand, Arvid

    2012-01-01

    …performance of a proposed road project in Copenhagen with and without short-term induced traffic included in the transport model. The available transport model was not able to include long-term induced traffic resulting from changes in land use and in the level of service of public transport. Even though the model calculations included only a part of the induced traffic, the difference in cost-benefit results compared to the model excluding all induced traffic was substantial. The results show lower travel time savings, more adverse environmental impacts and a considerably lower benefit-cost ratio when induced traffic is partly accounted for than when it is ignored. By exaggerating the economic benefits of road capacity increase and underestimating its negative effects, omission of induced traffic can result in over-allocation of public money on road construction and correspondingly less focus on other…
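    The mechanism is arithmetic at heart: induced traffic erodes the predicted travel time savings and adds external costs, so the benefit-cost ratio falls once it is modelled. A toy calculation with entirely hypothetical numbers, not the Copenhagen study's figures:

    ```python
    # Benefit-cost ratio (BCR) of a road project, with and without induced
    # traffic in the appraisal. All monetary figures are hypothetical.
    def bcr(time_savings, external_costs, construction_cost):
        return (time_savings - external_costs) / construction_cost

    ignoring_induced  = bcr(time_savings=900, external_costs=100, construction_cost=500)
    including_induced = bcr(time_savings=700, external_costs=250, construction_cost=500)
    print(ignoring_induced, including_induced)  # 1.6 vs 0.9 -- the project drops below BCR = 1
    ```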

  8. The Power of Ignorance | Code | Philosophical Papers

    African Journals Online (AJOL)

    Taking my point of entry from George Eliot's reference to 'the power of Ignorance', I analyse some manifestations of that power as she portrays it in the life of a young woman of affluence, in her novel Daniel Deronda. Comparing and contrasting this kind of ignorance with James Mill's avowed ignorance of local tradition and ...

  9. Big Data and reality

    Directory of Open Access Journals (Sweden)

    Ryan Shaw

    2015-11-01

    DNA sequencers, Twitter, MRIs, Facebook, particle accelerators, Google Books, radio telescopes, Tumblr: what do these things have in common? According to the evangelists of “data science,” all of these are instruments for observing reality at unprecedentedly large scales and fine granularities. This perspective ignores the social reality of these very different technological systems, ignoring how they are made, how they work, and what they mean in favor of an exclusive focus on what they generate: Big Data. But no data, big or small, can be interpreted without an understanding of the process that generated them. Statistical data science is applicable to systems that have been designed as scientific instruments, but is likely to lead to confusion when applied to systems that have not. In those cases, a historical inquiry is preferable.

  10. From dissecting ignorance to solving algebraic problems

    International Nuclear Information System (INIS)

    Ayyub, Bilal M.

    2004-01-01

    Engineers and scientists are increasingly required to design, test, and validate new complex systems in simulation environments and/or with limited experimental results due to international and/or budgetary restrictions. Dealing with complex systems requires assessing knowledge and information by critically evaluating them in terms of relevance, completeness, non-distortion, coherence, and other key measures. Using the concepts and definitions from evolutionary knowledge and epistemology, ignorance is examined and classified in the paper. Two ignorance states for a knowledge agent are identified: (1) non-reflective (or blind) state, i.e. the person does not know of self-ignorance, a case of ignorance of ignorance; and (2) reflective state, i.e. the person knows and recognizes self-ignorance. Ignorance can be viewed to have a hierarchical classification based on its sources and nature as provided in the paper. The paper also explores limits on knowledge construction, closed and open world assumptions, and fundamentals of evidential reasoning using belief revision and diagnostics within the framework of ignorance analysis for knowledge construction. The paper also examines an algebraic problem set as identified by Sandia National Laboratories to be a basic building block for uncertainty propagation in computational mechanics. Solution algorithms are provided for the problem set for various assumptions about the state of knowledge about its parameters.

  11. On the Rationality of Pluralistic Ignorance

    DEFF Research Database (Denmark)

    Bjerring, Jens Christian Krarup; Hansen, Jens Ulrik; Pedersen, Nikolaj Jang Lee Linding

    2014-01-01

    Pluralistic ignorance is a socio-psychological phenomenon that involves a systematic discrepancy between people’s private beliefs and public behavior in certain social contexts. Recently, pluralistic ignorance has gained increased attention in formal and social epistemology. But to get clear

  12. From dissecting ignorance to solving algebraic problems

    Energy Technology Data Exchange (ETDEWEB)

    Ayyub, Bilal M

    2004-09-01

    Engineers and scientists are increasingly required to design, test, and validate new complex systems in simulation environments and/or with limited experimental results due to international and/or budgetary restrictions. Dealing with complex systems requires assessing knowledge and information by critically evaluating them in terms of relevance, completeness, non-distortion, coherence, and other key measures. Using the concepts and definitions from evolutionary knowledge and epistemology, ignorance is examined and classified in the paper. Two ignorance states for a knowledge agent are identified: (1) non-reflective (or blind) state, i.e. the person does not know of self-ignorance, a case of ignorance of ignorance; and (2) reflective state, i.e. the person knows and recognizes self-ignorance. Ignorance can be viewed to have a hierarchical classification based on its sources and nature as provided in the paper. The paper also explores limits on knowledge construction, closed and open world assumptions, and fundamentals of evidential reasoning using belief revision and diagnostics within the framework of ignorance analysis for knowledge construction. The paper also examines an algebraic problem set as identified by Sandia National Laboratories to be a basic building block for uncertainty propagation in computational mechanics. Solution algorithms are provided for the problem set for various assumptions about the state of knowledge about its parameters.

  13. Is There Such a Thing as 'White Ignorance' in British Education?

    Science.gov (United States)

    Bain, Zara

    2018-01-01

    I argue that political philosopher Charles W. Mills' twin concepts of 'the epistemology of ignorance' and 'white ignorance' are useful tools for thinking through racial injustice in the British education system. While anti-racist work in British education has a long history, racism persists in British primary, secondary and tertiary education. For…

  14. Ignorance-Based Instruction in Higher Education.

    Science.gov (United States)

    Stocking, S. Holly

    1992-01-01

    Describes how three groups of educators (in a medical school, a psychology department, and a journalism school) are helping instructors and students to recognize, manage, and use ignorance to promote learning. (SR)

  15. Is Ignorance of Climate Change Culpable?

    Science.gov (United States)

    Robichaud, Philip

    2017-10-01

    Sometimes ignorance is an excuse. If an agent did not know and could not have known that her action would realize some bad outcome, then it is plausible to maintain that she is not to blame for realizing that outcome, even when the act that leads to this outcome is wrong. This general thought can be brought to bear in the context of climate change insofar as we think (a) that the actions of individual agents play some role in realizing climate harms and (b) that these actions are apt targets for being considered right or wrong. Are agents who are ignorant about climate change and the way their actions contribute to it excused because of their ignorance, or is their ignorance culpable? In this paper I examine these questions from the perspective of recent developments in the theories of responsibility for ignorant action and characterize their verdicts. After developing some objections to existing attempts to explore these questions, I characterize two influential theories of moral responsibility and discuss their implications for three different types of ignorance about climate change. I conclude with some recommendations for how we should react in the face of the theories' conflicting verdicts. The answer to the question posed in the title, then, is: "Well, it's complicated."

  16. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    …modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing……, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications…

  17. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  18. Knowledge, responsibility, decision making and ignorance

    DEFF Research Database (Denmark)

    Huniche, Lotte

    2001-01-01

    …of and ignoring) seems to be commonly applicable to describing persons living at risk for Huntington's Disease (HD). So what does everyday conduct of life look like from an "ignorance" perspective? And how can we discuss and argue about morality and ethics taking these seemingly diverse ways of living at risk…… into account? Posing this question, I hope to contribute to new reflections on possibilities and constraints in people's lives with HD as well as in research and to open up new ways of discussing "right and wrong"….

  19. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  20. DMPD: TLR ignores methylated RNA? [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Full Text Available TLR ignores methylated RNA? Ishii KJ, Akira S. Immunity. 2005 Aug;23(2):111-3. PubmedID 16111629.

  1. Should general psychiatry ignore somatization and hypochondriasis?

    Science.gov (United States)

    Creed, Francis

    2006-10-01

    This paper examines the tendency for general psychiatry to ignore somatization and hypochondriasis. These disorders are rarely included in national surveys of mental health and are not usually regarded as a concern of general psychiatrists; yet primary care doctors and other physicians often feel let down by psychiatry's failure to offer help in this area of medical practice. Many psychiatrists are unaware of the suffering, impaired function and high costs that can result from these disorders, because these occur mainly within primary care and secondary medical services. Difficulties in diagnosis and a tendency to regard them as purely secondary phenomena of depression, anxiety and related disorders mean that general psychiatry may continue to ignore somatization and hypochondriasis. If general psychiatry embraced these disorders more fully, however, it might lead to better prevention and treatment of depression as well as helping to prevent the severe disability that may arise in association with these disorders.

  2. Should general psychiatry ignore somatization and hypochondriasis?

    OpenAIRE

    CREED, FRANCIS

    2006-01-01

    This paper examines the tendency for general psychiatry to ignore somatization and hypochondriasis. These disorders are rarely included in national surveys of mental health and are not usually regarded as a concern of general psychiatrists; yet primary care doctors and other physicians often feel let down by psychiatry's failure to offer help in this area of medical practice. Many psychiatrists are unaware of the suffering, impaired function and high costs that can result from these disorders, because these occur mainly within primary care and secondary medical services. ...

  3. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  4. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formulation before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  5. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  6. Issues ignored in laboratory quality surveillance

    International Nuclear Information System (INIS)

    Zeng Jing; Li Xingyuan; Zhang Tingsheng

    2008-01-01

    According to the work requirements for laboratory quality surveillance in ISO 17025, this paper analyses and discusses the issues ignored in laboratory quality surveillance. To solve the present problems, it is necessary to understand the responsibilities of quality surveillance correctly, to establish an effective working routine for quality surveillance, and to conduct the quality surveillance work accordingly. The object of quality surveillance shall be 'the operator' who is engaged in examination/calibration directly in the laboratory, especially personnel in training (who are engaged in examination/calibration). The quality supervisors shall be fully authorized, so that they correctly understand their responsibilities in quality surveillance and have the right of 'full supervision'. The laboratory shall also arrange necessary training for the quality supervisors, so that they obtain sufficient guidance in time and have the required qualifications or occupational prerequisites. (authors)

  7. Ignorance of electrosurgery among obstetricians and gynaecologists.

    Science.gov (United States)

    Mayooran, Zorana; Pearce, Scott; Tsaltas, Jim; Rombauts, Luk; Brown, T Ian H; Lawrence, Anthony S; Fraser, Kym; Healy, David L

    2004-12-01

    The purpose of this study was to assess the level of skill of laparoscopic surgeons in electrosurgery. Subjects were asked to complete a practical diathermy station and a written test of electrosurgical knowledge. Tests were held in teaching and non-teaching hospitals. Twenty specialists in obstetrics and gynaecology were randomly selected and tested on the Monash University gynaecological laparoscopic pelvi-trainer. Twelve candidates were consultants with 9-28 years of practice in operative laparoscopy, and 8 were registrars with up to six years of practice in operative laparoscopy. Seven consultants and one registrar were from rural Australia, and three consultants were from New Zealand. Candidates were marked with checklist criteria resulting in a pass/fail score, as well as a weighted scoring system. We retested 11 candidates one year later with the same stations. There was no improvement in electrosurgery skill after one year of obstetric and gynaecological practice. No candidate successfully completed the written electrosurgery station in the initial test. A slight improvement in the pass rate, to 18%, was observed in the second test. The pass rate for the diathermy station dropped from 50% to 36% in the second test. The study found ignorance of electrosurgery/diathermy among gynaecological surgeons. One year later, skills were no better.

  8. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies, but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys are mentioned in over 1100 publications since 2002. Both ground- and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management, and big data places additional challenging requirements on it. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data, the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation, our successes and failures, and how we are planning in the next decade to create a workable and adaptable solution to support big data science.

  9. Beyond duplicity and ignorance in global fisheries

    Directory of Open Access Journals (Sweden)

    Daniel Pauly

    2009-06-01

    Full Text Available The three decades following World War II were a period of rapidly increasing fishing effort and landings, but also of spectacular collapses, particularly in small pelagic fish stocks. This is also the period in which a toxic triad of catch underreporting, ignoring scientific advice and blaming the environment emerged as the standard response to ongoing fisheries collapses, which became increasingly frequent, finally engulfing major North Atlantic fisheries. The response to the depletion of traditional fishing grounds was an expansion of North Atlantic (and generally of northern-hemisphere) fisheries in three dimensions: southward, into deeper waters and into new taxa, i.e. catching and marketing species of fish and invertebrates previously spurned, and usually lower in the food web. This expansion provided many opportunities for mischief, as illustrated by the European Union's negotiated 'agreements' for access to the fish resources of Northwest Africa, China's agreement-free exploitation of the same, and Japan blaming the resulting resource declines on the whales. Also, this expansion provided new opportunities for mislabelling seafood unfamiliar to North Americans and Europeans, and misleading consumers, thus reducing the impact of seafood guides and similar efforts toward sustainability. With fisheries catches declining, aquaculture (despite all public relations efforts) not being able to pick up the slack, and rapidly increasing fuel prices, structural changes are to be expected in both the fishing industry and the scientific disciplines that study it and influence its governance. Notably, fisheries biology, now predominantly concerned with the welfare of the fishing industry, will have to be converted into a fisheries conservation science, whose goal will be to resolve the toxic triad alluded to above, and thus maintain the marine biodiversity and ecosystems that provide existential services to fisheries. Similarly, fisheries ...

  10. Learning to ignore: acquisition of sustained attentional suppression.

    Science.gov (United States)

    Dixon, Matthew L; Ruppel, Justin; Pratt, Jay; De Rosa, Eve

    2009-04-01

    We examined whether the selection mechanisms committed to the suppression of ignored stimuli can be modified by experience to produce a sustained, rather than transient, change in behavior. Subjects repeatedly ignored the shape of stimuli while attending to their color. On subsequent attention to shape, there was a robust and sustained decrement in performance that was selective to when shape had been ignored across multiple-color-target contexts, relative to a single-color-target context. Thus, the amount of time ignored was not sufficient to induce a sustained performance decrement. Moreover, in this group, individual differences in initial color-target selection were associated with the subsequent performance decrement when attending to previously ignored stimuli. Accompanying this sustained decrement in performance was a transfer in the locus of suppression from an exemplar (e.g., a circle) to a feature (i.e., shape) level of representation. These data suggest that learning can influence attentional selection through sustained attentional suppression of ignored stimuli.

  11. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  12. Big Dreams

    Science.gov (United States)

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  13. Big Science

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    Astronomy, like particle physics, has become Big Science where the demands of front line research can outstrip the science budgets of whole nations. Thus came into being the European Southern Observatory (ESO), founded in 1962 to provide European scientists with a major modern observatory to study the southern sky under optimal conditions.

  14. On strategic ignorance of environmental harm and social norms

    DEFF Research Database (Denmark)

    Thunström, Linda; van 't Veld, Klaas; Shogren, Jason

    Our model predicts that people may benefit from avoiding information, and that they use ignorance as an excuse to engage in less pro-environmental behavior. It also predicts that the cost of ignorance increases if people can learn about the social norm from the information. We test the model predictions empirically with an experiment that involves an imaginary long-distance flight and an option to buy offsets for the flight's carbon footprint. More than half (53 percent) of the subjects choose to ignore information on the carbon footprint alone before deciding their offset purchase, but ignorance significantly decreases (to 29 percent) when the information additionally reveals the social norm.

  15. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers from a multitude of institutions ...

  16. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  17. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future enquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors and an attempt is made to tackle the problem that participation in a public inquiry is far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  18. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor's thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to it using Apache Hadoop technology. The greater part of the thesis is devoted to a description of this technology. The last chapter deals with the way the aggregation operations are applied and with the issues involved in implementing them. An overall evaluation of the work follows, together with possible future uses of the resulting system.

  19. BIG DATA

    OpenAIRE

    Abhishek Dubey

    2018-01-01

    The term 'Big Data' describes innovative methods and technologies to capture, store, distribute, manage and analyse petabyte-scale or larger sets of data with high velocity and varied structures. Big data can be structured, semi-structured or unstructured, rendering conventional data management techniques inadequate. Data are generated from many different sources and can arrive in the system at various rates. In order to process these ...

  20. Women in the History of Astronomy

    Science.gov (United States)

    Álvarez, M. Álvarez; Díaz, Ángeles I.

    1998-06-01

    We think of the History of Astronomy as the History of men, indeed of a few men: Ptolemy, Copernicus, Kepler, Newton, ... men who have changed our way of looking at the sky. But the History of Astronomy is more than that; it is the History of thousands of people whose daily work allowed the development of knowledge and scientific theories in the times they lived. This sometimes "tedious work" made the big steps possible. Many of these people were women: Theano, who married Pythagoras and taught mathematics and astronomy in his school; Hypatia, who managed the Library of Alexandria and wrote several astronomical treatises; Hildegard of Bingen, who developed a theory of the origin and structure of the Universe in the 12th century; or Sofie Brahe, who worked with her famous brother. And so many other privileged women who, after a long process of study, were able to develop their scientific interests in spite of being excluded from most educational institutions and from the formal and informal groups of men scientists. Most of the work done by these women has been ignored, or wrongly attributed to men, throughout History. It often happens that although they were recognized as good scientists in their own times, women have been discredited by later historians who refused to believe that important women scientists ever existed. Here we intend to give a short summary of the lives of some of these women and their astronomical work.

  1. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  2. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications. The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It examines ...

  3. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics, with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths and weaknesses as well as its risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open access online format, allowing a broader readership, especially in resource-poor countries. For the first time, thanks to the online format, the IMIA Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  4. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to hold the seeds of new, valuable operational insights for private companies and public organizations. While the optimistic announcements are many, research on Big Data in the public sector has so far been limited. This article examines how the public health sector can reuse and exploit an ever-growing amount of data while taking public values into account. The article builds on a case study of the use of large amounts of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)use of data in new contexts involves a many-sided trade-off, not only between economic rationales and quality considerations, but also over control of sensitive personal data and the ethical implications for the citizen. In the DAMD case, data are on the one hand used 'in the service of a good cause' to ...

  5. On strategic ignorance of environmental harm and social norms

    DEFF Research Database (Denmark)

    Thunström, Linda; van’t Veld, Klaas; Shogren, Jason. F.

    2014-01-01

    Are people strategically ignorant of the negative externalities their activities cause the environment? Herein we examine if people avoid costless information on those externalities and use ignorance as an excuse to reduce pro-environmental behavior. We develop a theoretical framework in which people feel internal pressure ("guilt") from causing harm to the environment (e.g., emitting carbon dioxide) as well as external pressure to conform to the social norm for pro-environmental behavior (e.g., offsetting carbon emissions). Our model predicts that people may benefit from avoiding information. More than half of the subjects choose to ignore information on the carbon footprint before deciding their offset purchase, but ignorance significantly decreases (to 29 percent) when the information additionally reveals the share of air travelers who buy carbon offsets. We find evidence that some people use ignorance as an excuse to reduce pro-environmental behavior; ignorance significantly decreases the probability of buying carbon offsets.
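
    In the framework this abstract describes, guilt and social-norm pressure enter the agent's payoff only once the information is read, so avoiding the information can be payoff-maximising. The toy sketch below is a hedged reconstruction of that logic, not the authors' model: the functional forms, parameter values, and function names are all invented for illustration.

```python
# Toy reconstruction of the strategic-ignorance logic sketched in the
# abstract: guilt from a known footprint only bites once the agent has
# read the information, so staying ignorant can maximise the payoff.
# All functional forms and parameter values are invented illustrations.

def informed_utility(footprint, guilt, offset_price):
    """An informed agent either buys the offset or carries the guilt."""
    return max(-offset_price, -guilt * footprint)

def ignorant_utility():
    """Unread information triggers no guilt in this stylised setting."""
    return 0.0

def prefers_ignorance(footprint, guilt, offset_price):
    return ignorant_utility() > informed_utility(footprint, guilt, offset_price)

# Whenever offsetting is costly and guilt would bite, ignorance pays:
print(prefers_ignorance(footprint=1.0, guilt=2.0, offset_price=1.5))  # True

# The paper's second prediction (that bundling social-norm information
# with the footprint raises the cost of ignorance) would need an extra
# ingredient, e.g. uncertainty about the norm that only reading resolves;
# we leave that out of this minimal sketch.
```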

  6. Big Data and Social Media

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    A critical analysis of the "keep everything" Big Data era, the impact on our lives of the information, at first glance "convenient for future use" that we make known about ourselves on the network. NB! The lecture will be recorded like all Academic Training lectures. Lecturer's biography: Father of the Internet, see https://internethalloffame.org/inductees/vint-cerf or https://en.wikipedia.org/wiki/Vint_Cerf The video on slide number 9 is from page https://www.gapminder.org/tools/#$state$time$value=2018&value;;&chart-type=bubbles   Keywords: Big Data, Internet, History, Applications, tools, privacy, technology, preservation, surveillance, google, Arpanet, CERN, Web  

  7. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns for your bottom line. Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values, and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportunities.

  8. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have gotten their own tribune on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  9. Big Bang or vacuum fluctuation

    International Nuclear Information System (INIS)

    Zel'dovich, Ya.B.

    1980-01-01

    Some general properties of vacuum fluctuations in quantum field theory are described. The connection between the 'energy dominance' of the energy density of vacuum fluctuations in curved space-time and the presence of a singularity is discussed. It is pointed out that a de Sitter space-time (with the energy density of the vacuum fluctuations in the Einstein equations) that matches the expanding Friedman solution may describe the history of the Universe before the Big Bang. (P.L.)

  10. The Last Big Bang

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, Austin D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meade, Roger Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-13

    As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.

  11. Willful Ignorance and the Death Knell of Critical Thought

    Science.gov (United States)

    Rubin, Daniel Ian

    2018-01-01

    Independent, critical thought has never been more important in the United States. In the Age of Trump, political officials spout falsehoods called "alternative facts" as if they were on equal footing with researchable, scientific data. At the same time, an unquestioning populace engages in acts of "willful ignorance" on a daily…

  12. Tunnel Vision: New England Higher Education Ignores Demographic Peril

    Science.gov (United States)

    Hodgkinson, Harold L.

    2004-01-01

    This author states that American higher education ignores about 90 percent of the environment in which it operates. Colleges change admissions requirements without even informing high schools in their service areas. Community college graduates are denied access to four-year programs because of policy changes made only after it was too late for the…

  13. Policy makers ignoring science and scientists ignoring policy: the medical ethical challenges of heroin treatment

    Directory of Open Access Journals (Sweden)

    Small Dan

    2006-05-01

    Full Text Available Abstract A decade of research in Switzerland, The Netherlands, Germany, and Spain now constitutes a massive body of work supporting the use of heroin treatment for the most difficult patients addicted to opiates. These trials concur on this method's safety and efficacy and are now serving as a prelude to the institution of heroin treatment in clinical practice throughout Europe. While the different sampling and research protocols for heroin treatment in these studies were important to the academic claims about specific results and conclusions that could be drawn from each study, the overall outcomes were quite clear – and uniformly positive. They all find that the use of prescribed pharmaceutical heroin does exactly what it is intended to do: it reaches a treatment-refractory group of addicts by engaging them in a positive healthcare relationship with a physician, it reduces their criminal activity, improves their health status, and increases their social tenure through more stable housing, employment, and contact with family. The Canadian trial (NAOMI), now underway for over a year but not yet completed, faces a dilemma about what to do with its patients who have successfully completed 12 months of heroin treatment and must be withdrawn from heroin and transferred to other treatments in accordance with the research protocol approved by the Government of Canada, the federal granting body, and the host institutions. The problem is that the principal criterion for acceptance to NAOMI was their history of repeated failure in these very same treatment programs to which they will now be referred. The existence of the results from abroad (some of which were not yet available when NAOMI was designed and initiated) now raises a very important question for Canada: is it ethical to continue to prohibit the medical use of heroin treatment that has already been shown to be feasible and effective in numerous medical studies throughout the world? And while this is being worked ...

  14. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  15. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  16. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  17. Professional orientation and pluralistic ignorance among jail correctional officers.

    Science.gov (United States)

    Cook, Carrie L; Lane, Jodi

    2014-06-01

    Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were.

  18. 'More is less'. The tax effects of ignoring flow externalities

    International Nuclear Information System (INIS)

    Sandal, Leif K.; Steinshamn, Stein Ivar; Grafton, R. Quentin

    2003-01-01

    Using a model of non-linear, non-monotone decay of the stock pollutant, and starting from the same initial conditions, the paper shows that an optimal tax that corrects for both stock and flow externalities may result in a lower tax, lower cumulative emissions (less decay in emissions) and higher output at the steady state than a corrective tax that ignores the flow externality. This 'more is less' result emphasizes that setting a corrective tax that ignores the flow externality, or imposing a corrective tax at too low a level where only a stock externality exists, may affect both transitory and steady-state output, tax payments and cumulative emissions. The result has important policy implications for decision makers setting optimal corrective taxes and targeted emission limits whenever stock externalities exist.

  19. Egoism, ignorance and choice : on society's lethal infection

    OpenAIRE

    Camilleri, Jonathan

    2015-01-01

    The ability to choose and our innate selfish, or rather self-preservative, urges are a recipe for disaster. Combining this with man's ignorance by definition, and especially his general refusal to accept it, inevitably leads to Man's demise as a species. It is our false notion of freedom which contributes directly to our collective death, and therefore man's trying to escape death is, in the largest of ways, counterproductive.

  20. The importance of ignoring: Alpha oscillations protect selectivity

    OpenAIRE

    Payne, Lisa; Sekuler, Robert

    2014-01-01

    Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8–14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input makes responses more accurate and timely. ...

  1. Maggots in the Brain: Sequelae of Ignored Scalp Wound.

    Science.gov (United States)

    Aggarwal, Ashish; Maskara, Prasant

    2018-01-01

    A 26-year-old male had suffered a burn injury to his scalp in childhood and ignored it. He presented with a complaint of something crawling on his head. Inspection of his scalp revealed multiple maggots on the brain surface with erosion of overlying bone and scalp. He was successfully managed by surgical debridement and regular dressing. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. On uncertainty in information and ignorance in knowledge

    Science.gov (United States)

    Ayyub, Bilal M.

    2010-05-01

    This paper provides an overview of working definitions of knowledge, ignorance, information and uncertainty, and summarises a formalised philosophical and mathematical framework for their analysis. It provides a comparative examination of generalised information theory and the generalised theory of uncertainty. It summarises foundational bases for assessing the reliability of knowledge constructed as a collective set of justified true beliefs. It discusses system complexity for ancestor-simulation potentials. It offers value-driven means of communicating knowledge and contrarian knowledge using memes and memetics.
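
    The generalised information theory this record compares builds on classical measures of uncertainty. As a minimal, hypothetical illustration of quantifying the uncertainty in a belief distribution, here is the Shannon-entropy base case; this example is mine, not the paper's own formalism.

```python
# Minimal illustration of quantifying uncertainty: the Shannon entropy of
# a discrete belief distribution, the classical base case that generalized
# information theory extends. Not taken from the paper's formalism.
import math

def shannon_entropy(probs):
    """Entropy in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: maximal two-outcome uncertainty
print(shannon_entropy([0.99, 0.01]))  # ~0.08 bits: a near-certain belief
```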

  3. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes.

  4. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  5. Eyewitness Culture and History: Primary Written Sources. The Iconoclast.

    Science.gov (United States)

    McMurtry, John

    1995-01-01

    Asserts that contemporary history and historiography are "official" history that ignores the daily struggles of people for their continued survival. Argues that, while public illiteracy has nearly disappeared, individuals are ignorant of the wealth of primary-source materials of other cultures' histories. (CFR)

  6. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  7. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  8. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, which relates to the storage, analytics and visualization of big data; the human dimension, covering the people aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.

  9. Can Strategic Ignorance Explain the Evolution of Love?

    Science.gov (United States)

    Bear, Adam; Rand, David G

    2018-04-24

    People's devotion to, and love for, their romantic partners poses an evolutionary puzzle: Why is it better to stop your search for other partners once you enter a serious relationship when you could continue to search for somebody better? A recent formal model based on "strategic ignorance" suggests that such behavior can be adaptive and favored by natural selection, so long as you can signal your unwillingness to "look" for other potential mates to your current partner. Here, we re-examine this conclusion with a more detailed model designed to capture specific features of romantic relationships. We find, surprisingly, that devotion does not typically evolve in our model: Selection favors agents who choose to "look" while in relationships and who allow their partners to do the same. Non-looking is only expected to evolve if there is an extremely large cost associated with being left by your partner. Our results therefore raise questions about the role of strategic ignorance in explaining the evolution of love. Copyright © 2018 Cognitive Science Society, Inc.

  10. Exploitation of commercial remote sensing images: reality ignored?

    Science.gov (United States)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprised of exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  11. The end of ignorance multiplying our human potential

    CERN Document Server

    Mighton, John

    2008-01-01

    A revolutionary call for a new understanding of how people learn. The End of Ignorance conceives of a world in which no child is left behind – a world based on the assumption that each child has the potential to be successful in every subject. John Mighton argues that by recognizing the barriers that we have experienced in our own educational development, by identifying the moment that we became disenchanted with a certain subject and forever closed ourselves off to it, we will be able to eliminate these same barriers from standing in the way of our children. A passionate examination of our present education system, The End of Ignorance shows how we all can work together to reinvent the way that we are taught. John Mighton, the author of The Myth of Ability, is the founder of JUMP Math, a system of learning based on the fostering of emergent intelligence. The program has proved so successful that an entire class of Grade 3 students, including so-called slow learners, scored over 90% on a Grade 6 math test. A ...

  12. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply did not exist. Large volumes of data allow organizations to tap in real time the full potential of all the internal and external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  13. Positive Psychology: Zeitgeist (or spirit of the times) or ignorance (or disinformation) of history?

    Directory of Open Access Journals (Sweden)

    Luis Fernández-Ríos

    2012-01-01

    Full Text Available The aim of this paper is to offer a critical account of the current situation of Positive Psychology. Since its formal birth at the end of the 1990s, Positive Psychology has been gaining ever greater popularity without much critical reflection. The dissemination of its discourse, which is moreover nothing new, in the media and in specialized publications is one example of the popularity it has acquired. This unreflective acceptance has taken place throughout the Western world, including Spain and Latin America as well as the Portuguese-speaking countries. After listing some general characteristics of Positive Psychology, a series of criticisms is set out that it must overcome if it is to become a respectable, valid and reliable paradigm. It is concluded that there is nothing new in the theory of Positive Psychology. What appears original in it is a manifestation of ignorance of, or disinformation about, some ideas from the history of Psychology, Philosophy and cultural Anthropology. The future of Positive Psychology should involve overcoming the serious theoretical and practical problems that serve as an argument for not considering it a new paradigm.

  14. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second after the big bang), and the fate of the Universe are all discussed. (U.K.)

  15. Mes chers collègues, les moines, ou le partage de l’ignorance

    Directory of Open Access Journals (Sweden)

    Laurence Caillet

    2009-03-01

    Full Text Available My dear colleagues the monks, or the sharing of ignorance. No status has ever surprised me as much as that of "colleague" conferred on me by the monks of the Great Eastern Monastery of Nara. After testing my knowledge of ritual, these very learned monks made great show of their ignorance. Drawing my attention to liturgical details that they held to be incomprehensible, they took obvious pleasure in chatting about history and theology, as if I were capable of making the slightest contribution. This staging of the impenetrable nature of the ritual highlighted the ineffable character of the ceremonies performed in heaven long ago by superior beings. I provided a convenient pretext for describing the vanity of erudition in the face of the accomplishment of the mysteries, and also the importance of this erudition for renewing an original, irreparably unknowable meaning.

  16. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The book explains that big data is where we use huge quantities of data to make better predictions based on the patterns we identify in the data, rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will ...
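
    The pattern-over-cause point in this summary can be made concrete with a small, hypothetical example: a purely correlational model can predict well even when the predictor has no causal link to the outcome. The variables and numbers below are synthetic stand-ins invented for this sketch.

```python
# Illustration of the book's pattern-over-cause point: a model fitted on
# correlation alone can predict well without any causal story. The data
# are synthetic stand-ins invented for this sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
temperature = rng.uniform(10, 35, size=500)              # hidden common cause
ice_cream_sales = 20 * temperature + rng.normal(0, 30, size=500)
sunburn_cases = 3 * temperature + rng.normal(0, 5, size=500)

# Sunburns do not cause ice-cream sales, yet they predict them well,
# because both track temperature. Prediction succeeds; causation is absent.
X = sunburn_cases.reshape(-1, 1)
model = LinearRegression().fit(X, ice_cream_sales)
print(f"R^2 = {model.score(X, ice_cream_sales):.2f}")    # close to 1
```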

  17. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  18. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative ...

  19. Lessons in Equality: From Ignorant Schoolmaster to Chinese Aesthetics

    Directory of Open Access Journals (Sweden)

    Ernest Ženko

    2017-09-01

    Full Text Available The postponement of equality is not only a recurring topic in Jacques Rancière's writings, but also the most defining feature of modern Chinese aesthetics. Particularly in the period after the 1980s, when the country opened its doors to Western ideas, Chinese aesthetics largely played a subordinate role in an imbalanced knowledge transfer, in which structural inequality was only reinforced. Aesthetics in China plays an important role and is expected not only to interpret literature and art, but also to help build a harmonious society within a globalized world. This is the reason why some commentators – Wang Jianjiang being one of them – point out that it is of utmost importance to eliminate this imbalance and develop a proper Chinese aesthetics. Since the key issue in this development is the problem of inequality, an approach developed by Jacques Rancière, "the philosopher of equality", is proposed. Even though Rancière wrote extensively about literature, art and aesthetics, it seems that a different approach found in his repertoire could prove more fruitful in confronting the problem of Chinese aesthetics. In 1987, he published a book titled The Ignorant Schoolmaster, which contributed to his ongoing philosophical emancipatory project and focused on inequality and its conditions in the realm of education. The Ignorant Schoolmaster, nonetheless, stretches far beyond the walls of the classroom or even the educational system, and brings to the fore political implications that cluster around the fundamental core of Rancière's political philosophy: the definition of politics as the verification of the presupposition of the equality of intelligence. Equality cannot be postponed as a goal to be attained only in the future and therefore has to be considered a premise of egalitarian politics that needs to operate as a presupposition.

  20. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  1. The importance of ignoring: Alpha oscillations protect selectivity.

    Science.gov (United States)

    Payne, Lisa; Sekuler, Robert

    2014-06-01

    Selective attention is often thought to entail an enhancement of some task-relevant stimulus or attribute. We discuss the perspective that ignoring irrelevant, distracting information plays a complementary role in information processing. Cortical oscillations within the alpha (8-14 Hz) frequency band have emerged as a marker of sensory suppression. This suppression is linked to selective attention for visual, auditory, somatic, and verbal stimuli. Inhibiting processing of irrelevant input makes responses more accurate and timely. It also helps protect material held in short-term memory against disruption. Furthermore, this selective process keeps irrelevant information from distorting the fidelity of memories. Memory is only as good as the perceptual representations on which it is based, and on whose maintenance it depends. Modulation of alpha oscillations can be exploited as an active, purposeful mechanism to help people pay attention and remember the things that matter.

  2. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of record-keeping, and IT-competent employees and customers, which make a leading position possible – but only if companies get ready for the next big data wave.

  3. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  4. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  5. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  6. What Is Hospitality in the Academy? Epistemic Ignorance and the (Im)possible Gift

    Science.gov (United States)

    Kuokkanen, Rauna

    2008-01-01

    The academy is considered by many as the major Western institution of knowledge. This article, however, argues that the academy is characterized by prevalent "epistemic ignorance"--a concept informed by Spivak's discussion of "sanctioned ignorance." Epistemic ignorance refers to academic practices and discourses that enable the continued exclusion…

  7. Experiences of Being Ignored by Peers during Late Adolescence: Linkages to Psychological Maladjustment

    Science.gov (United States)

    Bowker, Julie C.; Adams, Ryan E.; Fredstrom, Bridget K.; Gilman, Rich

    2014-01-01

    In this study on being ignored by peers, 934 twelfth-grade students reported on their experiences of being ignored, victimized, and socially withdrawn, and completed measures of friendship and psychological adjustment (depression, self-esteem, and global satisfaction). Peer nominations of being ignored, victimized, and accepted by peers were also…

  8. The Evolution of a Big Idea: Why Don't We Know Anything about Africa?

    Science.gov (United States)

    Meyer, Michael James

    2009-01-01

    This article is about my experiences as a ninth-grade history teacher trying to implement a "big idea" unit on ancient African history. It chronicles my experiences as a first-year teacher and in seeing this unit develop over three years. I conclude that implementing a big idea strategy of instruction is possible in a…

  9. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  10. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  11. Hooking up: Gender Differences, Evolution, and Pluralistic Ignorance

    Directory of Open Access Journals (Sweden)

    Chris Reiber

    2010-07-01

    Full Text Available “Hooking-up” – engaging in no-strings-attached sexual behaviors with uncommitted partners - has become a norm on college campuses, and raises the potential for disease, unintended pregnancy, and physical and psychological trauma. The primacy of sex in the evolutionary process suggests that predictions derived from evolutionary theory may be a useful first step toward understanding these contemporary behaviors. This study assessed the hook-up behaviors and attitudes of 507 college students. As predicted by behavioral-evolutionary theory: men were more comfortable than women with all types of sexual behaviors; women correctly attributed higher comfort levels to men, but overestimated men's actual comfort levels; and men correctly attributed lower comfort levels to women, but still overestimated women's actual comfort levels. Both genders attributed higher comfort levels to same-gendered others, reinforcing a pluralistic ignorance effect that might contribute to the high frequency of hook-up behaviors in spite of the low comfort levels reported and suggesting that hooking up may be a modern form of intrasexual competition between females for potential mates.

  12. Sarajevo: Politics and Cultures of Remembrance and Ignorance

    Directory of Open Access Journals (Sweden)

    Adla Isanović

    2017-10-01

    Full Text Available This text critically reflects on cultural events organized to mark the 100th anniversary of the start of the First World War in Sarajevo and Bosnia & Herzegovina. It elaborates on disputes which showed that culture is at the centre of identity politics and struggles (which can also take a fascist nationalist form, accept the colonizer’s perspective, etc.) and on how commemorations ‘swallowed’ the past and present, but it primarily contextualizes, historicizes and politicizes Sarajevo 2014 and its politics of visibility. The case is approached as symptomatic of the effects of the current state of capitalism, coloniality, racialization and subjugation, as central to Europe today. Article received: June 2, 2017; Article accepted: June 8, 2017; Published online: October 15, 2017; Original scholarly paper How to cite this article: Isanović, Adla. "Sarajevo: Politics and Cultures of Remembrance and Ignorance." AM Journal of Art and Media Studies 14 (2017): 133–144. doi: 10.25038/am.v0i14.199

  13. Technology trends in econometric energy models: Ignorance or information?

    International Nuclear Information System (INIS)

    Boyd, G.; Kokkelenberg, E.; State Univ., of New York, Binghamton, NY; Ross, M.; Michigan Univ., Ann Arbor, MI

    1991-01-01

    Simple time trend variables in factor demand models can be statistically powerful, but they tell the researcher very little. Even more complex specifications of technical change, e.g. factor-biased ones, are still the econometrician's ''measure of ignorance'' about the shifts that occur in the underlying production process. Furthermore, in periods of rapid technological change, parameters based on time trends may be too large for long-run forecasting. When there is clearly identifiable engineering information about new technology adoption that changes the factor input mix, data on that adoption may be included in the traditional factor demand model to model specific factor-biased technical change explicitly and to test its contribution econometrically. The adoption of thermomechanical pulping (TMP) and electric arc furnaces (EAF) are two electricity-intensive technology trends in the paper and steel industries, respectively. This paper presents the results of including these variables in a traditional econometric factor demand model based on the Generalized Leontief. The coefficients obtained for this ''engineering-based'' technical change compare quite favorably to engineering estimates of the impact of TMP and EAF on electricity intensities, improve the estimates of the other price coefficients, and yield a more believable long-run electricity forecast. 6 refs., 1 fig
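
    To make the contrast concrete, consider input demands derived from a Generalized Leontief unit cost function via Shephard's lemma. A minimal sketch in generic notation (the symbols $b_{ij}$, $\gamma_i$, $\delta_i$ and the adoption share $S_t$ are illustrative, not the authors' own):

    \[ \frac{x_{it}}{y_t} = \sum_j b_{ij}\left(\frac{p_{jt}}{p_{it}}\right)^{1/2} + \gamma_i\, t \qquad \text{(time trend as ''measure of ignorance'')} \]

    \[ \frac{x_{it}}{y_t} = \sum_j b_{ij}\left(\frac{p_{jt}}{p_{it}}\right)^{1/2} + \delta_i\, S_t \qquad \text{(engineering-based, e.g. } S_t = \text{EAF share of steel output)} \]

    Estimating $\delta_i$ against observed adoption data ties the shift in electricity demand to an identified technology rather than to an unexplained drift, which is why the resulting coefficients can be checked directly against engineering estimates.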

  14. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. This growth creates a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  15. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  16. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It has passed the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are so often mentioned in relation to big data stand for? As an introduction to

  17. Algorithmic design considerations for geospatial and/or temporal big data

    CSIR Research Space (South Africa)

    Van Zyl, T

    2014-02-01

    Full Text Available Mining. In addition, ignoring the spatiotemporal autocorrelation in the data can lead to spurious results, for instance, the salt and pepper effect when clustering. The solution to the big data challenge is simple to describe yet in most cases...
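
    A common first diagnostic for the spatial autocorrelation flagged above is Moran's I. The following is a minimal illustrative sketch (the grid data and rook contiguity are assumptions of the example, not of the report):

```python
# Minimal Moran's I on a regular grid with rook (4-neighbour) adjacency.
# Illustrative sketch only; not code from the CSIR report.
import numpy as np

def morans_i(grid: np.ndarray) -> float:
    """Moran's I for a 2-D array using rook contiguity weights (w_ij = 1)."""
    z = grid - grid.mean()                 # deviations from the mean
    num, w_sum = 0.0, 0.0                  # sum of w_ij*z_i*z_j and of w_ij
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += z[r, c] * z[rr, cc]
                    w_sum += 1.0
    return (grid.size / w_sum) * (num / (z ** 2).sum())

rng = np.random.default_rng(0)
noise = rng.normal(size=(20, 20))                       # no spatial structure
trend = np.add.outer(np.arange(20.0), np.arange(20.0))  # strong spatial trend
print(morans_i(noise))   # near 0: neighbours uninformative
print(morans_i(trend))   # near 1: clustering must account for this
```

    Values of I far from zero describe exactly the situation in which distance-ignorant clustering produces the salt-and-pepper artefacts mentioned above.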

  18. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  19. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  20. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  1. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will increasingly be used as a tool for everything that happens both online and offline. Online is, of course, its natural habitat: Big Data pervades this medium, offering many advantages and real help to consumers. In this paper we discuss Big Data as an asset in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit addressed by this paper is presented in the cloud section.

  2. Cerebral tuberculoma: an entity not to ignore | Frioui | Pan African ...

    African Journals Online (AJOL)

    Symptoms and radiologic features are nonspecific, sometimes leading to misdiagnosis. We report the case of a 60-year-old male with a history of diffuse bilateral infiltrative pulmonary disease at the stage of fibrosis, who presented with two generalized seizures associated with occipital headaches. CT scan showed a left frontal tumor, ...

  3. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available The current era, in everyday terms the Modern Era, can also be called the era of Big Data in the field of information technology. Daily life today advances rapidly, never quenching its thirst, and the fields of science, engineering and technology produce data at an exponential rate, amounting to exabytes of data every day. Big data helps us explore and reinvent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of healthcare using big data and analytics. The main point is that the big data being stored all the time should not merely help us look back at history; it is time to emphasize analysis that improves medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader extent using Hadoop, which happens to be just the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to the conventional techniques available.
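
    Since the proposal leans on Hadoop, the shape of the computation is worth a sketch. Below is an illustrative MapReduce-style pair of functions counting diagnosis codes across patient records; the record layout and the local stand-in for Hadoop's shuffle are assumptions of the example, not the paper's implementation:

```python
# Hadoop-Streaming-flavoured mapper/reducer counting diagnosis codes.
# Illustrative sketch; assumed layout: patient_id,visit_date,diagnosis_code
from itertools import groupby

def mapper(lines):
    """Emit (diagnosis_code, 1) for every well-formed record."""
    for line in lines:
        fields = line.rstrip("\n").split(",")
        if len(fields) == 3:
            yield fields[2], 1

def reducer(pairs):
    """Sum counts per code; input must arrive sorted by key."""
    for code, group in groupby(pairs, key=lambda kv: kv[0]):
        yield code, sum(n for _, n in group)

records = [
    "p1,2017-01-02,E11.9",
    "p2,2017-01-03,I10",
    "p3,2017-01-04,E11.9",
]
shuffled = sorted(mapper(records))   # stands in for Hadoop's shuffle/sort
for code, total in reducer(shuffled):
    print(code, total)               # E11.9 2, then I10 1
```

    On a real cluster the same two functions would run as streaming jobs over cluster-resident records, which is where the scalability argument of the paper comes in.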

  4. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  5. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  6. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Book chapter on cryptography for big data security, prepared for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin et al. (contact: arkady@ll.mit.edu).

  7. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  8. Should we ignore U-235 series contribution to dose?

    International Nuclear Information System (INIS)

    Beaugelin-Seiller, Karine; Goulet, Richard; Mihok, Steve; Beresford, Nicholas A.

    2016-01-01

    Environmental Risk Assessment (ERA) methodology for radioactive substances is an important regulatory tool for assessing the safety of licensed nuclear facilities for wildlife, and the environment as a whole. ERAs are therefore expected to be both fit for purpose and conservative. When uranium isotopes are assessed, there are many radioactive decay products which could be considered. However, risk assessors usually assume 235 U and its daughters contribute negligibly to radiological dose. The validity of this assumption has not been tested: what might the 235 U family contribution be and how does the estimate depend on the assumptions applied? In this paper we address this question by considering aquatic wildlife in Canadian lakes exposed to historic uranium mining practices. A full theoretical approach was used, in parallel to a more realistic assessment based on measurements of several elements of the U decay chains. The 235 U family contribution varied between about 4% and 75% of the total dose rate depending on the assumptions of the equilibrium state of the decay chains. Hence, ignoring the 235 U series will not result in conservative dose assessments for wildlife. These arguments provide a strong case for more in situ measurements of the important members of the 235 U chain and for its consideration in dose assessments. - Highlights: • Realistic ecological risk assessment infers a complete inventory of radionuclides. • U-235 family may not be minor when assessing total dose rates experienced by biota. • There is a need to investigate the real state of equilibrium decay of U chains. • There is a need to improve the capacity to measure all elements of the U decay chains.
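
    A back-of-envelope calculation shows why the contribution is plausible from the outset: in natural uranium, the 235U chain's parent activity is already a few percent of 238U's. The constants below are textbook values; the snippet is illustrative and not taken from the paper's dosimetry:

```python
# Activity ratio A(235U)/A(238U) in natural uranium, using A = lambda * N.
# Textbook constants; illustrative only, not the paper's dose assessment.
import math

T_HALF_U238 = 4.468e9          # years
T_HALF_U235 = 7.04e8           # years
ATOM_FRAC_U235 = 0.0072        # natural abundance by atom count
ATOM_FRAC_U238 = 0.9927

lam_235 = math.log(2) / T_HALF_U235
lam_238 = math.log(2) / T_HALF_U238

ratio = (ATOM_FRAC_U235 * lam_235) / (ATOM_FRAC_U238 * lam_238)
print(f"A(235U)/A(238U) = {ratio:.3f}")   # ~0.046, i.e. a few percent
# Daughters of the 235U chain then add their own dose on top of this
# parent activity, of the same order as the ~4% lower bound reported above.
```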

  9. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects … Our analysis shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact.

  10. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced, as distinct from the attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  11. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.
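
    For orientation, the FRW models referred to above take the standard textbook form (standard notation, not the thesis's own derivation):

    \[ ds^2 = -dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\, d\Omega^2\right], \qquad \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}. \]

    For ordinary matter or radiation ($\rho + 3p > 0$), the acceleration equation gives $\ddot a < 0$, so tracing the expansion backwards, $a(t) \to 0$ at a finite time in the past: the big bang singularity.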

  12. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

    of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic

  13. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by extracting value from huge data sets as quickly and optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner, using big data analytics. At this point, the main challenge for businesses that use big data tools is to clearly define the scope and the necessary output of the business so that the real value can be gained. This article aims to explain the big data concept, its various classification criteria and its architecture, as well as its impact on processes worldwide.

  14. Big Creek Pit Tags

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The BCPITTAGS database is used to store data from an Oncorhynchus mykiss (steelhead/rainbow trout) population dynamics study in Big Creek, a coastal stream along the...

  15. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-01-01

    on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also

  16. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two

  17. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and analyze exhaustively the state of the art of Big Data. As a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data, and finally it sought to identify the most relevant characteristics in the management of Big Data, so that everything concerning the central topic of the research can be understood. The methodology included reviewing the state of the art of Big Data and presenting its current situation; describing Big Data technologies; presenting some of the NoSQL databases, which are those that make it possible to process data in unstructured formats; and showing data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research begins to explore the Big Data environment.

  18. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  19. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

    Full Text Available The objective of this paper is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": the huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and regulating its inherently unstable financial system in the mid-20th century. We analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  20. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  1. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  2. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
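
    The first pitfall is easy to demonstrate: a biased sample's systematic error does not shrink as the sample grows. A minimal illustrative simulation (the population mean and bias below are arbitrary choices, not figures from the paper):

```python
# Bias does not average out: a huge but biased sample stays wrong,
# while a small unbiased one is merely noisy. Illustrative sketch.
import random

random.seed(1)
POP_MEAN = 10.0

def sample_mean(n, bias=0.0):
    """Mean of n draws from a normal centred on POP_MEAN + bias."""
    return sum(random.gauss(POP_MEAN + bias, 1.0) for _ in range(n)) / n

print(sample_mean(100))                   # unbiased, noisy: close to 10
print(sample_mean(1_000_000, bias=0.5))   # biased "big data": about 10.5
# Growing n shrinks only the variance term; the systematic offset stays,
# so statistical confidence in the wrong answer increases with n.
```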

  3. Science: Big Bang comes to the Alps

    CERN Multimedia

    Cookson, Clive

    2008-01-01

    "The most extensive and expensive scientific instrument in history is due to start working this summer at CERN, the European particle physics laboratory near Geneva. Two beams of protons will accelerate in opposite directions around a 27 km tunnel under the alpine foothills until they are travelling almost at the speed of light - and then smash together, reproducing on a tiny scale the intense energy of the new-born universe after the inaugural Big Bang 15bn years ago. (1 page)

  4. Science Big Bang comes to the Alps

    CERN Multimedia

    2008-01-01

    The most extensive and expensive scientific instrument in history is due to start working this summer at CERN, the European particle physics laboratory near Geneva. Two beams of protons will accelerate in opposite directions around a 27 km tunnel under the Alpine foothills until they are travelling almost at the speed of light - and then smash together, reproducing on a tiny scale the intense energy of the new-born universe after the inaugural Big Bang 15bn years ago.

  5. That Escalated Quickly—Planning to Ignore RPE Can Backfire

    Directory of Open Access Journals (Sweden)

    Maik Bieleke

    2017-09-01

    Full Text Available Ratings of perceived exertion (RPE) are routinely assessed in exercise science, and RPE is substantially associated with physiological criterion measures. According to the psychobiological model of endurance, RPE is a central limiting factor in performance. While RPE is known to be affected by psychological manipulations, it remains to be examined whether RPE can be self-regulated during static muscular endurance exercises to enhance performance. In this experiment, we investigate the effectiveness of the widely used and recommended self-regulation strategy of if-then planning (i.e., implementation intentions) in down-regulating RPE and improving performance in a static muscular endurance task. Sixty-two female students (age: M = 23.7 years, SD = 4.0) were randomly assigned to an implementation intention or a control condition and performed a static muscular endurance task. They held two intertwined rings as long as possible while avoiding contacts between the rings. In the implementation intention condition, participants had an if-then plan: “If the task becomes too strenuous for me, then I ignore the strain and tell myself: Keep going!” Every 25 ± 10 s participants reported their RPE along with their perceived pain. Endurance performance was measured as time to failure, along with contact errors as a measure of performance quality. No differences emerged between implementation intention and control participants regarding time to failure and performance quality. However, mixed-effects model analyses revealed a significant Time-to-Failure × Condition interaction for RPE. Compared to the control condition, participants in the implementation intention condition reported substantially greater increases in RPE during the second half of the task and reached higher total values of RPE before task termination. A similar but weaker pattern was evinced for perceived pain. Our results demonstrate that RPE during an endurance task can be self-regulated with if

  6. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
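
    To give a feel for the kind of declarative rule involved, consider the classic functional dependency zip → city, whose violations can be found by grouping. The plain-Python stand-in below is for illustration only and is not BigDansing's actual interface:

```python
# Detecting violations of the functional dependency zip -> city.
# Plain-Python illustration; not BigDansing's API.
from collections import defaultdict

rows = [
    {"id": 1, "zip": "10001", "city": "New York"},
    {"id": 2, "zip": "10001", "city": "New York"},
    {"id": 3, "zip": "10001", "city": "Newark"},    # violates zip -> city
    {"id": 4, "zip": "60601", "city": "Chicago"},
]

def fd_violations(rows, lhs, rhs):
    """Yield (lhs value, row ids) for groups with >1 distinct rhs value."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[lhs]].append(row)
    for key, group in groups.items():
        if len({row[rhs] for row in group}) > 1:
            yield key, [row["id"] for row in group]

for key, ids in fd_violations(rows, "zip", "city"):
    print(f"zip {key}: conflicting city values in rows {ids}")
```

    In BigDansing, the analogous grouping and pair enumeration are what get compiled into distributed transformations, so that shared scans and specialized joins keep such rules tractable at scale.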

  7. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only … framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies.

  8. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurement data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption … communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according to a commercial logic (boyd & Crawford 2011) and are as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen…

  9. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us into the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
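
    The supervised/unsupervised distinction drawn above fits in a few lines. A Python/scikit-learn analogue of the R workflows the review surveys (synthetic data; illustrative only):

```python
# Supervised vs. unsupervised learning on the same synthetic data.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Unsupervised: structure is inferred without using the labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised: labelled examples teach the decision rule.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```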

  11. Behavioural responses to human-induced change: Why fishing should not be ignored.

    Science.gov (United States)

    Diaz Pauli, Beatriz; Sih, Andrew

    2017-03-01

    Change in behaviour is usually the first response to human-induced environmental change and key for determining whether a species adapts to environmental change or becomes maladapted. Thus, understanding the behavioural response to human-induced changes is crucial in the interplay between ecology, evolution, conservation and management. Yet the behavioural response to fishing activities has been largely ignored. We review studies contrasting how fish behaviour affects catch by passive (e.g., long lines, angling) versus active gears (e.g., trawls, seines). We show that fishing not only targets certain behaviours, but it leads to a multitrait response including behavioural, physiological and life-history traits with population, community and ecosystem consequences. Fisheries-driven change (plastic or evolutionary) of fish behaviour and its correlated traits could impact fish populations well beyond their survival per se, affecting predation risk, foraging behaviour, dispersal, parental care, etc., and hence numerous ecological issues including population dynamics and trophic cascades. In particular, we discuss implications of behavioural responses to fishing for fisheries management and population resilience. More research on these topics, however, is needed to draw general conclusions, and we suggest fruitful directions for future studies.

  12. Big Bang Cosmic Titanic: Cause for Concern?

    Science.gov (United States)

    Gentry, Robert

    2013-04-01

    This abstract alerts physicists to a situation that, unless soon addressed, may yet affect PRL integrity. I refer to Stanley Brown's and DAE Robert Caldwell's rejection of PRL submission LJ12135, A Cosmic Titanic: Big Bang Cosmology Unravels Upon Discovery of Serious Flaws in Its Foundational Expansion Redshift Assumption. They claimed that BB is an established theory while ignoring our paper's Titanic, namely, that BB's foundational spacetime expansion redshift assumption has now been proven irrefutably false: it is contradicted by our seminal discovery that GPS operation unequivocally proves that GR effects do not produce the in-flight photon wavelength changes demanded by this central assumption. This discovery causes the big bang to collapse as quickly as Ptolemaic cosmology did when Copernicus discovered that its foundational assumption was geocentric, not heliocentric. Additional evidence that something is amiss in PRL's treatment of LJ12135 comes from both Brown and EiC Gene Sprouse agreeing to meet at my exhibit during last year's Atlanta APS meeting to discuss this cover-up issue. Sprouse kept his commitment; Brown didn't. Question: If Brown could have refuted my claim of a cover-up, why didn't he come to present it before Gene Sprouse? I am appealing LJ12135's rejection.

  13. Big Impacts and Transient Oceans on Titan

    Science.gov (United States)

    Zahnle, K. J.; Korycansky, D. G.; Nixon, C. A.

    2014-01-01

    We have studied the thermal consequences of very big impacts on Titan [1]. Titan's thick atmosphere and volatile-rich surface cause it to respond to big impacts in a somewhat Earth-like manner. Here we construct a simple globally-averaged model that tracks the flow of energy through the environment in the weeks, years, and millennia after a big comet strikes Titan. The model Titan is endowed with 1.4 bars of N2 and 0.07 bars of CH4, methane lakes, a water ice crust, and enough methane underground to saturate the regolith to the surface. We assume that half of the impact energy is immediately available to the atmosphere and surface while the other half is buried at the site of the crater and is unavailable on time scales of interest. The atmosphere and surface are treated as isothermal. We make the simplifying assumptions that the crust is everywhere as methane saturated as it was at the Huygens landing site, that the concentration of methane in the regolith is the same as it is at the surface, and that the crust is made of water ice. Heat flow into and out of the crust is approximated by step-functions. If the impact is great enough, ice melts. The meltwater oceans cool to the atmosphere conductively through an ice lid while at the base melting their way into the interior, driven down in part through Rayleigh-Taylor instabilities between the dense water and the warm ice. Topography, CO2, and hydrocarbons other than methane are ignored. Methane and ethane clathrate hydrates are discussed quantitatively but not fully incorporated into the model.

  14. The Mathematical Miseducation of America's Youth: Ignoring Research and Scientific Study in Education.

    Science.gov (United States)

    Battista, Michael T.

    1999-01-01

    Because traditional instruction ignores students' personal construction of mathematical meaning, mathematical thought development is not properly nurtured. Several issues must be addressed, including adults' ignorance of math- and student-learning processes, identification of math-education research specialists, the myth of coverage, testing…

  15. Persistence of Memory for Ignored Lists of Digits: Areas of Developmental Constancy and Change.

    Science.gov (United States)

    Cowan, Nelson; Nugent, Lara D.; Elliott, Emily M.; Saults, J. Scott

    2000-01-01

    Examined persistence of sensory memory by studying developmental differences in recall of attended and ignored lists of digits for second-graders, fifth-graders, and adults. Found developmental increase in the persistence of memory only for the final item in an ignored list, which is the item for which sensory memory is thought to be the most…

  16. Modelling non-ignorable missing data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cornelis A.W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  17. Modelling non-ignorable missing-data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cees A. W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled
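
    Both records above describe the same modelling device. A hedged sketch of a correlated-latent-variable formulation in that spirit (the notation is generic, not necessarily the authors'): fit one IRT model to the item responses $X_{ij}$ and a second to the missing-data indicators $D_{ij}$,

    \[ P(X_{ij}=1 \mid \theta_i) = \frac{\exp\{a_j(\theta_i - b_j)\}}{1+\exp\{a_j(\theta_i - b_j)\}}, \qquad P(D_{ij}=1 \mid \xi_i) = \frac{\exp\{\alpha_j(\xi_i - \beta_j)\}}{1+\exp\{\alpha_j(\xi_i - \beta_j)\}}, \]

    with $(\theta_i, \xi_i)$ bivariate normal with correlation $\rho$. If $\rho = 0$, the missingness carries no information about ability and can be ignored; a nonzero estimate of $\rho$ both flags and helps correct a non-ignorable mechanism.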

  18. The concept of ignorance in a risk assessment and risk management context

    International Nuclear Information System (INIS)

    Aven, T.; Steen, R.

    2010-01-01

    There are many definitions of ignorance in the context of risk assessment and risk management. Most refer to situations in which there are lack of knowledge, poor basis for probability assignments and possible outcomes not (fully) known. The purpose of this paper is to discuss the ignorance concept in this setting. Based on a set of risk and uncertainty features, we establish conceptual structures characterising the level of ignorance. These features include the definition of chances (relative frequency-interpreted probabilities) and the existence of scientific uncertainties. Based on these structures, we suggest a definition of ignorance linked to scientific uncertainties, i.e. the lack of understanding of how consequences of the activity are influenced by the underlying factors. In this way, ignorance can be viewed as a condition for applying the precautionary principle. The discussion is also linked to the use and boundaries of risk assessments in the case of large uncertainties, and the methods for classifying risk and uncertainty problems.

  19. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  20. Big ideas: innovation policy

    OpenAIRE

    John Van Reenen

    2011-01-01

    In the last CentrePiece, John Van Reenen stressed the importance of competition and labour market flexibility for productivity growth. His latest in CEP's 'big ideas' series describes the impact of research on how policy-makers can influence innovation more directly - through tax credits for business spending on research and development.

  1. Big Data ethics

    NARCIS (Netherlands)

    Zwitter, Andrej

    2014-01-01

    The speed of development in Big Data and associated phenomena, such as social media, has surpassed the capacity of the average consumer to understand his or her actions and their knock-on effects. We are moving towards changes in how ethics has to be perceived: away from individual decisions with

  2. The Big Sky inside

    Science.gov (United States)

    Adams, Earle; Ward, Tony J.; Vanek, Diana; Marra, Nancy; Hester, Carolyn; Knuth, Randy; Spangler, Todd; Jones, David; Henthorn, Melissa; Hammill, Brock; Smith, Paul; Salisbury, Rob; Reckin, Gene; Boulafentis, Johna

    2009-01-01

    The University of Montana (UM)-Missoula has implemented a problem-based program in which students perform scientific research focused on indoor air pollution. The Air Toxics Under the Big Sky program (Jones et al. 2007; Adams et al. 2008; Ward et al. 2008) provides a community-based framework for understanding the complex relationship between poor…

  3. Moving Another Big Desk.

    Science.gov (United States)

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  4. A Big Bang Lab

    Science.gov (United States)

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that cannot be set up in the lab, either because they are too big, too far away, or happened in the very distant past. The authors of "How Far are the Stars?" show how the…

  5. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe that is filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that they make the single singular events predicted by the corresponding quintessential (phantom) models appear twice, in a manner which can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future be connected in a non-paradoxical manner in the universes described by means of the new symmetric solutions

  6. The Big Bang

    CERN Multimedia

    Moods, Patrick

    2006-01-01

    How did the Universe begin? The favoured theory is that everything - space, time, matter - came into existence at the same moment, around 13.7 thousand million years ago. This event was scornfully referred to as the "Big Bang" by Sir Fred Hoyle, who did not believe in it and maintained that the Universe had always existed.

  7. Big Data Analytics

    Indian Academy of Sciences (India)

    The volume and variety of data being generated using computers is doubling every two years. It is estimated that in 2015, 8 zettabytes (zetta = 10²¹) were generated, which consisted mostly of unstructured data such as emails, blogs, Twitter, Facebook posts, images, and videos. This is called big data. It is possible to analyse ...

  8. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  9. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  10. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines by up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space-efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
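
    The record names IEJoin's two ingredients, sorted arrays and space-efficient bit-arrays, without showing how sorting shrinks an inequality join's search space. The Python sketch below illustrates only the sorted-array half on a single predicate (l < r); it is a toy simplification over invented data, not the published two-predicate IEJoin algorithm.

        from bisect import bisect_right

        def inequality_join(left, right):
            """Return all pairs (l, r) with l < r, using a sorted array."""
            right_sorted = sorted(right)
            pairs = []
            for l in left:
                # One binary search skips every r <= l, instead of testing each r
                start = bisect_right(right_sorted, l)
                pairs.extend((l, r) for r in right_sorted[start:])
            return pairs

        print(inequality_join([3, 7], [1, 5, 9]))  # [(3, 5), (3, 9), (7, 9)]

    Roughly speaking, the full IEJoin additionally keeps a permutation array between two sort orders and a bit-array of visited positions so that a second inequality predicate can be checked in the same pass.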

  11. Inflated granularity: Spatial “Big Data” and geodemographics

    Directory of Open Access Journals (Sweden)

    Craig M Dalton

    2015-08-01

    Data analytics, particularly the current rhetoric around “Big Data”, tend to be presented as new and innovative, emerging ahistorically to revolutionize modern life. In this article, we situate one branch of Big Data analytics, spatial Big Data, through a historical predecessor, geodemographic analysis, to help develop a critical approach to current data analytics. Spatial Big Data promises an epistemic break in marketing, a leap from targeting geodemographic areas to targeting individuals. Yet it inherits characteristics and problems from geodemographics, including a justification through the market, and a process of commodification through the black-boxing of technology. As researchers develop sustained critiques of data analytics and its effects on everyday life, we must do so with a grounding in the cultural and historical contexts from which data technologies emerged. This article and others (Barnes and Wilson, 2014) develop a historically situated, critical approach to spatial Big Data. This history illustrates connections to the critical issues of surveillance, redlining, and the production of consumer subjects and geographies. The shared histories and structural logics of spatial Big Data and geodemographics create the space for a continued critique of data analyses’ role in society.

  12. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem); and testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
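
    One concrete ingredient named above, selection at tight significance levels, can be illustrated with a minimal Python sketch. It performs naive backward elimination on an OLS fit, dropping the weakest regressor until every surviving |t|-statistic clears roughly a 0.1% two-sided level; the data, threshold, and procedure are invented for illustration and are far simpler than Autometrics.

        import numpy as np

        def backward_eliminate(X, y, t_crit=3.29):
            """Drop regressors until every |t| >= t_crit (about a 0.1% level).
            A toy stand-in for 'tight significance levels', not Autometrics."""
            cols = list(range(X.shape[1]))
            while cols:
                Xs = X[:, cols]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                resid = y - Xs @ beta
                sigma2 = resid @ resid / (len(y) - len(cols))
                se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
                t = np.abs(beta / se)
                worst = int(np.argmin(t))
                if t[worst] >= t_crit:
                    break
                cols.pop(worst)           # eliminate least significant variable
            return cols

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=(n, 20))      # 20 candidate regressors
        y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(size=n)
        print(backward_eliminate(X, y))   # should keep (roughly) columns 3 and 7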

  13. Astronomy in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Yanxia Zhang

    2015-05-01

    The fields of Astrostatistics and Astroinformatics are vital for dealing with the big data issues now faced by astronomy. Like other disciplines in the big data era, astronomy has many V characteristics. In this paper, we list the different data mining algorithms used in astronomy, along with data mining software and tools related to astronomical applications. We present SDSS, a project often referred to by other astronomical projects, as the most successful sky survey in the history of astronomy and describe the factors influencing its success. We also discuss the success of Astrostatistics and Astroinformatics organizations and the conferences and summer schools on these issues that are held annually. All the above indicates that astronomers and scientists from other areas are ready to face the challenges and opportunities provided by massive data volume.

  14. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  15. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Big data is the term for any collection of data sets so large and complex that it becomes hard to process using traditional data-handling applications. The difficulties include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate diseases, conflicts etc., we require bigger data sets compared with smaller ones. Big data is hard to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. A big data environment is used to capture, organize and resolve the various types of data. This paper surveys the Hadoop architecture, the different tools used for big data, and big data security issues.

  16. Finding the big bang

    CERN Document Server

    Page, Lyman A; Partridge, R Bruce

    2009-01-01

    Cosmology, the study of the universe as a whole, has become a precise physical science, the foundation of which is our understanding of the cosmic microwave background radiation (CMBR) left from the big bang. The story of the discovery and exploration of the CMBR in the 1960s is recalled for the first time in this collection of 44 essays by eminent scientists who pioneered the work. Two introductory chapters put the essays in context, explaining the general ideas behind the expanding universe and fossil remnants from the early stages of the expanding universe. The last chapter describes how the confusion of ideas and measurements in the 1960s grew into the present tight network of tests that demonstrate the accuracy of the big bang theory. This book is valuable to anyone interested in how science is done, and what it has taught us about the large-scale nature of the physical universe.

  17. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Klinkby Madsen, Anders; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects of international development agendas to algorithms that synthesize large-scale data, (3) novel ways of rationalizing knowledge claims that underlie development efforts, and (4) shifts in professional and organizational identities of those concerned with producing and processing data for development. Our discussion…

  18. Big nuclear accidents

    International Nuclear Information System (INIS)

    Marshall, W.; Billingon, D.E.; Cameron, R.F.; Curl, S.J.

    1983-09-01

    Much of the debate on the safety of nuclear power focuses on the large number of fatalities that could, in theory, be caused by extremely unlikely but just imaginable reactor accidents. This, along with the nuclear industry's inappropriate use of vocabulary during public debate, has given the general public a distorted impression of the risks of nuclear power. The paper reviews the way in which the probability and consequences of big nuclear accidents have been presented in the past and makes recommendations for the future, including the presentation of the long-term consequences of such accidents in terms of 'loss of life expectancy', 'increased chance of fatal cancer' and 'equivalent pattern of compulsory cigarette smoking'. The paper presents mathematical arguments, which show the derivation and validity of the proposed methods of presenting the consequences of imaginable big nuclear accidents. (author)

  19. Big Bounce and inhomogeneities

    International Nuclear Information System (INIS)

    Brizuela, David; Mena Marugan, Guillermo A; Pawlowski, Tomasz

    2010-01-01

    The dynamics of an inhomogeneous universe is studied with the methods of loop quantum cosmology, via a so-called hybrid quantization, as an example of the quantization of vacuum cosmological spacetimes containing gravitational waves (Gowdy spacetimes). The analysis of this model with an infinite number of degrees of freedom, performed at the effective level, shows that (i) the initial Big Bang singularity is replaced (as in the case of homogeneous cosmological models) by a Big Bounce, joining deterministically two large universes, (ii) the universe size at the bounce is at least of the same order of magnitude as that of the background homogeneous universe and (iii) for each gravitational wave mode, the difference in amplitude at very early and very late times has a vanishing statistical average when the bounce dynamics is strongly dominated by the inhomogeneities, whereas this average is positive when the dynamics is in a near-vacuum regime, so that statistically the inhomogeneities are amplified. (fast track communication)

  20. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  1. Big Bang Circus

    Science.gov (United States)

    Ambrosini, C.

    2011-06-01

    Big Bang Circus is an opera I composed in 2001 and which was premiered at the Venice Biennale Contemporary Music Festival in 2002. A chamber group, four singers and a ringmaster stage the story of the Universe confronting and interweaving two threads: how early man imagined it and how scientists described it. Surprisingly enough fancy, myths and scientific explanations often end up using the same images, metaphors and sometimes even words: a strong tension, a drumskin starting to vibrate, a shout…

  2. Big Bang 5

    CERN Document Server

    Apolin, Martin

    2007-01-01

    Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then moves from the basics to the applications, from the simple to the complicated. Throughout, the language remains simple, everyday and belletristic. Volume 5 RG covers the basics (systems of measurement, orders of magnitude) and mechanics (translation, rotation, force, conservation laws).

  3. Big Bang 8

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then moves from the basics to the applications, from the simple to the complicated. Throughout, the language remains simple, everyday and belletristic. Volume 8 conveys, in an accessible way, relativity, nuclear and particle physics (and their applications in cosmology and astrophysics), nanotechnology, and bionics.

  4. Big Bang 6

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then moves from the basics to the applications, from the simple to the complicated. Throughout, the language remains simple, everyday and belletristic. Volume 6 RG covers gravitation, oscillations and waves, thermodynamics, and an introduction to electricity, using everyday examples and cross-connections to other disciplines.

  5. Big Bang 7

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then moves from the basics to the applications, from the simple to the complicated. Throughout, the language remains simple, everyday and belletristic. In Volume 7, alongside an introduction, many current aspects of quantum mechanics (e.g. beaming) and electrodynamics (e.g. electrosmog) are treated, as well as the climate problem and chaos theory.

  6. Big Bang Darkleosynthesis

    OpenAIRE

    Krnjaic, Gordan; Sigurdson, Kris

    2014-01-01

    In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generica...

  7. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

    The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. Two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 × 10¹²/cc. Several arc lines which connect both bright regions of different magnetic polarity were seen in H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed to a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed to a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  8. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  9. Ignoring the Obvious: Combined Arms and Fire and Maneuver Tactics Prior to World War I

    National Research Council Canada - National Science Library

    Bruno, Thomas

    2002-01-01

    The armies that entered WWI ignored many pre-war lessons. Though WWI armies later developed revolutionary tactical-level advances, scholars claim that this tactical evolution followed an earlier period...

  10. Leadership in the Big Bangs of European Integration

    DEFF Research Database (Denmark)

    …and, more importantly, what factors allowed specific actors to provide leadership in a given context? These conclusions provide a major step forward in the literature on the history-making bargains in the EU, allowing us to answer with more confidence the question of which actors have guided the big bangs in the European integration process in the past two decades, and why.

  11. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  12. Disaggregating asthma: Big investigation versus big data.

    Science.gov (United States)

    Belgrave, Danielle; Henderson, John; Simpson, Angela; Buchan, Iain; Bishop, Christopher; Custovic, Adnan

    2017-02-01

    We are facing a major challenge in bridging the gap between identifying subtypes of asthma to understand causal mechanisms and translating this knowledge into personalized prevention and management strategies. In recent years, "big data" has been sold as a panacea for generating hypotheses and driving new frontiers of health care; the idea that the data must and will speak for themselves is fast becoming a new dogma. One of the dangers of ready accessibility of health care data and computational tools for data analysis is that the process of data mining can become uncoupled from the scientific process of clinical interpretation, understanding the provenance of the data, and external validation. Although advances in computational methods can be valuable for using unexpected structure in data to generate hypotheses, there remains a need for testing hypotheses and interpreting results with scientific rigor. We argue for combining data- and hypothesis-driven methods in a careful synergy, and the importance of carefully characterized birth and patient cohorts with genetic, phenotypic, biological, and molecular data in this process cannot be overemphasized. The main challenge on the road ahead is to harness bigger health care data in ways that produce meaningful clinical interpretation and to translate this into better diagnoses and properly personalized prevention and treatment plans. There is a pressing need for cross-disciplinary research with an integrative approach to data science, whereby basic scientists, clinicians, data analysts, and epidemiologists work together to understand the heterogeneity of asthma. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Vertical landscraping, a big regionalism for Dubai.

    Science.gov (United States)

    Wilson, Matthew

    2010-01-01

    Dubai's ecologic and economic complications are exacerbated by six years of accelerated expansion, a fixed top-down approach to urbanism and the construction of iconic single-phase mega-projects. With recent construction delays, project cancellations and growing landscape issues, Dubai's tower typologies have been unresponsive to changing environmental, socio-cultural and economic patterns (BBC, 2009; Gillet, 2009; Lewis, 2009). In this essay, a theory of "Big Regionalism" guides an argument for an economically and ecologically linked tower typology called the Condenser. This phased "box-to-tower" typology is part of a greater Landscape Urbanist strategy called Vertical Landscraping. Within this strategy, the Condenser's role is to densify the city, facilitating the creation of ecologic voids that order the urban region. Delineating "Big Regional" principles, the Condenser provides a time-based, global-local urban growth approach that weaves Bigness into a series of urban-regional, economic and ecological relationships, builds upon the environmental performance of the city's regional architecture and planning, promotes a continuity of Dubai's urban history, and responds to its landscape issues while condensing development. These speculations permit consideration of the overlooked opportunities embedded within Dubai's mega-projects and their long-term impact on the urban morphology.

  14. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and advance education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, as well as critically exploring the perils of applying a data-driven approach to education. Despite the claimed value of the...

  15. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  16. Passionate ignorance

    DEFF Research Database (Denmark)

    Hyldgaard, Kirsten

    2006-01-01

    Psychoanalysis has nothing to say about education. Psychoanalysis has something to say about pedagogy; psychoanalysis has pedagogical-philosophical implications. Pedagogy, in distinction to education, addresses the question of the subject. This implies that pedagogical theory is and cannot be a s...

  17. Fatal ignorance.

    Science.gov (United States)

    1996-01-01

    The Rajiv Gandhi Foundation (RGF), together with the AIMS-affiliated NGO AIDS Cell, Delhi, held a workshop as part of an effort to raise a 90-doctor RGF AIDS workforce which will work together with nongovernmental organizations on AIDS prevention, control, and management. 25 general practitioners registered with the Indian Medical Council, who have practiced medicine in Delhi for the past 10-20 years, responded to a pre-program questionnaire on HIV-related knowledge and attitudes. 6 of the 25 physicians did not know what the acronym AIDS stands for, and extremely low awareness of the clinical aspects of the disease was revealed: 9 believed in the conspiracy theory of HIV development and accidental release by the US Central Intelligence Agency; 8 believed that AIDS is a problem of only the promiscuous; 18 did not know that the mode of HIV transmission is similar to that of the hepatitis B virus; 12 were unaware that HIV-infected people will test HIV-seronegative during the first three months after initial infection and that they will develop symptoms of full-blown AIDS only after 10 years; 10 did not know the name of even one drug used to treat the disease; 3 believed aspirin to be an effective drug against AIDS; many believed fantastic theories about the modes of HIV transmission; and many were acutely homophobic. Efforts were made to clear misconceptions about HIV during the workshop. It is hoped that participating doctors' attitudes about AIDS and the high-risk groups affected by it were also improved.

  18. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting, analysing consumer behaviour, preference and emotions. The purpose of this paper is to introduce the term fashion data and why it can be considered as big data. It also gives a broad classification of the types of fashion data and briefly defines them. Also, the methodology and working of a system that will use this data is briefly described.

  19. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Big data is data that exceeds the storage capacity and the processing power of conventional systems. The term is used for data sets so large or complex that traditional data-processing applications cannot handle them. Big data size is a constantly moving target, year by year ranging from a few dozen terabytes to many petabytes; on social networking sites, for example, the amount of data produced by people is growing rapidly every year. Big data is not only data; rather it has become a complete subject, which includes various tools, techniques and frameworks. It encompasses the rapid growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. It is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. A big data environment is used to capture, organize and resolve the various types of data. In this paper we describe the applications, problems and tools of big data and give an overview of big data.

  20. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general theory of relativity aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  1. How Big is Earth?

    Science.gov (United States)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big Is Earth? expands on the Eratosthenes Project through the online learning environment of the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss their ideas and brainstorm solutions in a discussion forum. There is an ongoing database of student measurements and another database to collect data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
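
    The arithmetic behind the measurement is compact enough to show directly. If two sites on (roughly) the same meridian observe noon shadow angles differing by theta degrees and lie d km apart, the circumference is d × 360 / theta. The Python sketch below uses the classical textbook values attributed to Eratosthenes, purely for illustration.

        # Eratosthenes' method with the classical textbook values
        shadow_angle_deg = 7.2   # noon shadow angle at Alexandria
        distance_km = 800        # distance to Syene, where the sun was overhead
        circumference_km = distance_km * 360 / shadow_angle_deg
        print(circumference_km)  # ~40000 km, close to the modern ~40,075 km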

  2. Privacy and Big Data

    CERN Document Server

    Craig, Terence

    2011-01-01

    Much of what constitutes Big Data is information about us. Through our online activities, we leave an easy-to-follow trail of digital footprints that reveal who we are, what we buy, where we go, and much more. This eye-opening book explores the raging privacy debate over the use of personal data, with one undeniable conclusion: once data's been collected, we have absolutely no control over who uses it or how it is used. Personal data is the hottest commodity on the market today-truly more valuable than gold. We are the asset that every company, industry, non-profit, and government wants. Pri

  3. Visualizing big energy data

    DEFF Research Database (Denmark)

    Hyndman, Rob J.; Liu, Xueqin Amy; Pinson, Pierre

    2018-01-01

    Visualization is a crucial component of data analysis. It is always a good idea to plot the data before fitting models, making predictions, or drawing conclusions. As sensors of the electric grid are collecting large volumes of data from various sources, power industry professionals are facing the challenge of visualizing such data in a timely fashion. In this article, we demonstrate several data-visualization solutions for big energy data through three case studies involving smart-meter data, phasor measurement unit (PMU) data, and probabilistic forecasts, respectively.

  4. Big Data Challenges

    Directory of Open Access Journals (Sweden)

    Alexandru Adrian TOLE

    2013-10-01

    The amount of data traveling across the internet today is not only large but complex as well. Companies, institutions, healthcare systems etc. all use piles of data which are further used for creating reports in order to ensure continuity regarding the services that they have to offer. The process behind the results that these entities request represents a challenge for software developers and companies that provide IT infrastructure. The challenge is how to manipulate an impressive volume of data that has to be securely delivered through the internet and reach its destination intact. This paper treats the challenges that Big Data creates.

  5. Big data naturally rescaled

    International Nuclear Information System (INIS)

    Stoop, Ruedi; Kanders, Karlis; Lorimer, Tom; Held, Jenny; Albert, Carlo

    2016-01-01

    We propose that a handle could be put on big data by looking at the systems that actually generate the data, rather than the data itself, realizing that there may be only a few generic processes involved in this, each one imprinting its very specific structures in the space of systems, the traces of which translate into feature space. From this, we propose a practical computational clustering approach, optimized for coping with such data, inspired by how the human cortex is known to approach the problem.

  6. A Matrix Big Bang

    OpenAIRE

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matr...

  7. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  8. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  9. [Big data in imaging].

    Science.gov (United States)

    Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd

    2018-04-01

    Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.

  10. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Fields, Brian D.; Olive, Keith A.

    2006-01-01

    We present an overview of the standard model of big bang nucleosynthesis (BBN), which describes the production of the light elements in the early universe. The theoretical prediction for the abundances of D, ³He, ⁴He, and ⁷Li is discussed. We emphasize the role of key nuclear reactions and the methods by which experimental cross section uncertainties are propagated into uncertainties in the predicted abundances. The observational determination of the light nuclides is also discussed. Particular attention is given to the comparison between the predicted and observed abundances, which yields a measurement of the cosmic baryon content. The spectrum of anisotropies in the cosmic microwave background (CMB) now independently measures the baryon density to high precision; we show how the CMB data test BBN, and find that the CMB and the D and ⁴He observations paint a consistent picture. This concordance stands as a major success of the hot big bang. On the other hand, ⁷Li remains discrepant with the CMB-preferred baryon density; possible explanations are reviewed. Finally, moving beyond the standard model, primordial nucleosynthesis constraints on early universe and particle physics are also briefly discussed.

  11. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  12. Infectious Disease Surveillance in the Big Data Era

    DEFF Research Database (Denmark)

    Simonsen, Lone; Gog, Julia R.; Olson, Don

    2016-01-01

    In this opinion piece, we reflect on the long and distinguished history of disease surveillance and discuss recent developments related to the use of big data, enabling more flexible and local tracking of infectious diseases, especially for emerging pathogens. We start with a brief review of traditional systems relying on… …of Google Flu Trends. We conclude by advocating for increased use of hybrid systems combining information from traditional surveillance and big data sources, which seems the most promising option moving forward. Throughout the article, we use influenza as an exemplar of an emerging and reemerging infection…

  13. 2013 strategic petroleum reserve big hill well integrity grading report.

    Energy Technology Data Exchange (ETDEWEB)

    Lord, David L.; Roberts, Barry L.; Lord, Anna C. Snider; Bettin, Giorgia; Sobolik, Steven Ronald; Park, Byoung Yoon; Rudeen, David Keith; Eldredge, Lisa; Wynn, Karen; Checkai, Dean; Perry, James Thomas

    2014-02-01

    This report summarizes the work performed in developing a framework for the prioritization of cavern access wells for remediation and monitoring at the Big Hill Strategic Petroleum Reserve site. This framework was then applied to all 28 wells at the Big Hill site with each well receiving a grade for remediation and monitoring. Numerous factors affecting well integrity were incorporated into the grading framework including casing survey results, cavern pressure history, results from geomechanical simulations, and site geologic factors. The framework was developed in a way as to be applicable to all four of the Strategic Petroleum Reserve sites.
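
    As a purely hypothetical illustration of how such multi-factor grading can be mechanized (the report's actual factors, weights, and scales are not reproduced here), a per-well grade might combine normalized factor scores as in the Python sketch below; every name and number in it is invented.

        # Hypothetical multi-factor well grade: factor names and weights invented
        weights = {"casing_survey": 0.4, "pressure_history": 0.3,
                   "geomechanics": 0.2, "site_geology": 0.1}

        def grade(well_scores):
            """Combine per-factor scores in [0, 1] into one remediation grade."""
            return sum(weights[f] * well_scores[f] for f in weights)

        print(grade({"casing_survey": 0.9, "pressure_history": 0.5,
                     "geomechanics": 0.2, "site_geology": 0.7}))  # about 0.62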

  14. Mid-adolescent neurocognitive development of ignoring and attending emotional stimuli

    Directory of Open Access Journals (Sweden)

    Nora C. Vetter

    2015-08-01

    Full Text Available Appropriate reactions toward emotional stimuli depend on the distribution of prefrontal attentional resources. In mid-adolescence, prefrontal top-down control systems are less engaged, while subcortical bottom-up emotional systems are more engaged. We used functional magnetic resonance imaging to follow the neural development of attentional distribution, i.e. attending versus ignoring emotional stimuli, in adolescence. 144 healthy adolescents were studied longitudinally at age 14 and 16 while performing a perceptual discrimination task. Participants viewed two pairs of stimuli – one emotional, one abstract – and reported on one pair whether the items were the same or different, while ignoring the other pair. Hence, two experimental conditions were created: “attending emotion/ignoring abstract” and “ignoring emotion/attending abstract”. Emotional valence varied between negative, positive, and neutral. Across conditions, reaction times and error rates decreased and activation in the anterior cingulate and inferior frontal gyrus increased from age 14 to 16. In contrast, subcortical regions showed no developmental effect. Activation of the anterior insula increased across ages for attending positive and ignoring negative emotions. Results suggest an ongoing development of prefrontal top-down resources elicited by emotional attention from age 14 to 16 while activity of subcortical regions representing bottom-up processing remains stable.

  15. Investigating Deviance Distraction and the Impact of the Modality of the To-Be-Ignored Stimuli.

    Science.gov (United States)

    Marsja, Erik; Neely, Gregory; Ljungberg, Jessica K

    2018-03-01

    It has been suggested that deviance distraction is caused by unexpected sensory events in the to-be-ignored stimuli violating the cognitive system's predictions of incoming stimuli. The majority of research has used methods where the to-be-ignored expected (standards) and the unexpected (deviants) stimuli are presented within the same modality. Less is known about the behavioral impact of deviance distraction when the to-be-ignored stimuli are presented in different modalities (e.g., standard and deviants presented in different modalities). In three experiments using cross-modal oddball tasks with mixed-modality to-be-ignored stimuli, we examined the distractive role of unexpected auditory deviants presented in a continuous stream of expected standard vibrations. The results showed that deviance distraction seems to be dependent upon the to-be-ignored stimuli being presented within the same modality, and that the simplest omission of something expected; in this case, a standard vibration may be enough to capture attention and distract performance.

  16. Non-ignorable missingness item response theory models for choice effects in examinee-selected items.

    Science.gov (United States)

    Liu, Chen-Wei; Wang, Wen-Chung

    2017-11-01

    Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.
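
    To make the design concrete, the following Python sketch simulates examinee-selected-item data whose missingness is non-ignorable: an ability trait drives item responses, a correlated selection trait drives which item of each pair is answered, and the two follow a bivariate normal distribution as in the proposed model. It is only an illustrative data generator under invented parameter values, not the authors' estimation code (they used the mirt package in R).

        import numpy as np

        rng = np.random.default_rng(1)
        n_persons, n_pairs = 1000, 10

        # Bivariate normal latents: ability (theta) and selection tendency (eta)
        corr = 0.6
        cov = np.array([[1.0, corr], [corr, 1.0]])
        theta, eta = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons).T

        # Two items per pair; difficulties are arbitrary illustration values
        b_easy = rng.normal(-0.5, 0.5, size=n_pairs)
        b_hard = rng.normal(0.5, 0.5, size=n_pairs)

        def irf(trait, b):
            """Rasch-type item response function."""
            return 1.0 / (1.0 + np.exp(-(trait - b)))

        resp = np.full((n_persons, n_pairs, 2), np.nan)  # NaN = item not chosen
        for j in range(n_pairs):
            # High-eta examinees tend to pick the harder item, so which item is
            # missing depends on a latent trait: missing not at random (MNAR)
            pick_hard = rng.random(n_persons) < irf(eta, 0.0)
            for i in range(n_persons):
                b = b_hard[j] if pick_hard[i] else b_easy[j]
                k = 1 if pick_hard[i] else 0
                resp[i, j, k] = rng.random() < irf(theta[i], b)

        print(np.nanmean(resp))  # proportion correct among answered items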

  17. Was the big bang hot

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  18. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only led to challenging technical questions; it is also accompanied by all kinds of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for

  19. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    Agriculture's external conditions and competitive environment are changing, and this will necessitate a development in the direction of "big business", in which farms become even larger, more industrialized and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  20. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  1. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013, CERN inaugurates the Passport to the Big Bang project at a major public event; poster and programme. On 2 June 2013 CERN launches a scientific tourist trail through the Pays de Gex and the Canton of Geneva known as the Passport to the Big Bang. Poster and Programme.

  2. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  3. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  4. Ignorance Is Bliss, But for Whom? The Persistent Effect of Good Will on Cooperation

    Directory of Open Access Journals (Sweden)

    Mike Farjam

    2016-10-01

    Who benefits from the ignorance of others? We address this question from the point of view of a policy maker who can induce some ignorance into a system of agents competing for resources. Evolutionary game theory shows that when unconditional cooperators or ignorant agents compete with defectors in two-strategy settings, unconditional cooperators get exploited and are rendered extinct. In contrast, conditional cooperators, by utilizing some kind of reciprocity, are able to survive and sustain cooperation when competing with defectors. We study how cooperation thrives in a three-strategy setting where there are unconditional cooperators, conditional cooperators and defectors. By means of simulation on various kinds of graphs, we show that conditional cooperators benefit from the existence of unconditional cooperators in the majority of cases. However, in worlds that make cooperation hard to evolve, defectors benefit.
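
    A minimal, self-contained Python sketch of the kind of three-strategy simulation described above follows: unconditional cooperators, conditional cooperators (modeled here as tit-for-tat) and defectors play a repeated prisoner's dilemma with their neighbors on a ring, then imitate the best-scoring agent nearby. The payoffs, graph, and update rule are invented for illustration and are not the authors' exact setup.

        import random

        R, S, T, P = 3, 0, 5, 1           # standard prisoner's dilemma payoffs
        N, ROUNDS, GENERATIONS = 60, 20, 30
        STRATS = ["ALLC", "TFT", "ALLD"]  # unconditional, conditional, defector

        def payoff(a, b):
            """Payoff to a player moving a (True=cooperate) against move b."""
            if a and b: return R
            if a and not b: return S
            if not a and b: return T
            return P

        random.seed(2)
        strategy = [random.choice(STRATS) for _ in range(N)]

        for gen in range(GENERATIONS):
            # last[i][j]: did neighbor j cooperate with i in the previous round?
            last = [{(i - 1) % N: True, (i + 1) % N: True} for i in range(N)]
            score = [0.0] * N
            for _ in range(ROUNDS):
                moves = {}
                for i in range(N):
                    for j in ((i - 1) % N, (i + 1) % N):
                        if strategy[i] == "ALLC": moves[(i, j)] = True
                        elif strategy[i] == "ALLD": moves[(i, j)] = False
                        else: moves[(i, j)] = last[i][j]  # TFT reciprocates
                for i in range(N):
                    for j in ((i - 1) % N, (i + 1) % N):
                        score[i] += payoff(moves[(i, j)], moves[(j, i)])
                        last[i][j] = moves[(j, i)]
            # imitate the best-scoring agent in the local neighborhood
            strategy = [strategy[max(((i - 1) % N, i, (i + 1) % N),
                                     key=lambda k: score[k])] for i in range(N)]

        print({s: strategy.count(s) for s in STRATS})  # surviving strategy counts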

  5. A Quantum Universe Before the Big Bang(s)?

    Science.gov (United States)

    Veneziano, Gabriele

    2017-08-01

    The predictions of general relativity have been verified by now in a variety of different situations, setting strong constraints on any alternative theory of gravity. Nonetheless, there are strong indications that general relativity has to be regarded as an approximation of a more complete theory. Indeed theorists have long been looking for ways to connect general relativity, which describes the cosmos and the infinitely large, to quantum physics, which has been remarkably successful in explaining the infinitely small world of elementary particles. These two worlds, however, come closer and closer to each other as we go back in time all the way up to the big bang. Actually, modern cosmology has completely changed the old big bang paradigm: we now have to talk about (at least) two (big?) bangs. While we know quite a lot about the one closer to us, at the end of inflation, we are much more ignorant about the one that may have preceded inflation and possibly marked the beginning of time. No one doubts that quantum mechanics plays an essential role in answering these questions: unfortunately a unified theory of gravity and quantum mechanics is still under construction. Finding such a synthesis and confirming it experimentally will no doubt be one of the biggest challenges of this century’s physics.

  6. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    Energy Technology Data Exchange (ETDEWEB)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com [Wageningen University, P.O. Box 338, Wageningen 6700 AH (Netherlands); Heijungs, R. [Vrije Universiteit Amsterdam, De Boelelaan 1105, Amsterdam 1081 HV (Netherlands); Leiden University, Einsteinweg 2, Leiden 2333 CC (Netherlands)

    2017-01-15

    Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
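
    A minimal sketch of the two approaches on a toy model (the two-parameter model, means, spreads, and correlation below are invented for illustration; this is not the paper's case study): impact = electricity use × emission factor, propagated once ignoring and once including the correlation.

        import numpy as np

        rng = np.random.default_rng(42)

        mu = np.array([10.0, 0.5])   # means of the two input parameters
        sd = np.array([1.0, 0.05])   # their standard deviations

        def cov_matrix(rho):
            c = rho * sd[0] * sd[1]
            return np.array([[sd[0]**2, c], [c, sd[1]**2]])

        def sampled_variance(rho, n=200_000):
            # Sampling approach: draw correlated inputs, propagate through the model.
            x = rng.multivariate_normal(mu, cov_matrix(rho), size=n)
            return (x[:, 0] * x[:, 1]).var()

        def analytical_variance(rho):
            # Analytical (first-order Taylor) approach: Var(y) ~ g' Sigma g,
            # with the gradient g of y = x1 * x2 evaluated at the means.
            g = np.array([mu[1], mu[0]])
            return g @ cov_matrix(rho) @ g

        for rho in (0.0, 0.8):
            print(f"rho={rho}: sampled={sampled_variance(rho):.3f}, "
                  f"analytical={analytical_variance(rho):.3f}")

    For these invented numbers, ignoring a positive correlation (rho = 0 instead of 0.8) understates the output variance by nearly a factor of two, which is exactly the kind of risk the paper quantifies.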

  7. BRIEF HISTORY OF FFAG ACCELERATORS

    International Nuclear Information System (INIS)

    RUGGIERO, A.

    2006-01-01

    Colleagues of mine have asked me a few times why we have today so much interest in Fixed-Field Alternating-Gradient (FFAG) accelerators when these were invented a long time ago, and have been ignored ever since. I try here to give a reply with a short history of FFAG accelerators, at least as I know it. I also take the opportunity to clarify a few definitions.

  8. BRIEF HISTORY OF FFAG ACCELERATORS.

    Energy Technology Data Exchange (ETDEWEB)

    RUGGIERO, A.

    2006-12-04

    Colleagues of mine have asked me a few times why we have today so much interest in Fixed-Field Alternating-Gradient (FFAG) accelerators when these were invented a long time ago, and have been ignored ever since. I try here to give a reply with a short history of FFAG accelerators, at least as I know it. I also take the opportunity to clarify a few definitions.

  9. A History of Dark Matter

    Energy Technology Data Exchange (ETDEWEB)

    Bertone, Gianfranco [U. Amsterdam, GRAPPA; Hooper, Dan [Fermilab

    2016-05-16

    Although dark matter is a central element of modern cosmology, the history of how it became accepted as part of the dominant paradigm is often ignored or condensed into a brief anecdotal account focused on the work of a few pioneering scientists. The aim of this review is to provide the reader with a broader historical perspective on the observational discoveries and the theoretical arguments that led the scientific community to adopt dark matter as an essential part of the standard cosmological model.

  10. Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?

    Science.gov (United States)

    Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim

    2009-10-09

    We show the importance of properly including the perturbations of the dark energy component in dynamical dark energy models based on a scalar field and modified gravity theories, in order to meet present and future observational precision. Based on a simple scaling scalar-field dark energy model, we show that ignoring the dark energy perturbation produces substantial, observationally distinguishable differences. Ignoring it also makes the perturbed system of equations inconsistent, and the resulting deviations in (gauge-invariant) power spectra depend on the gauge choice.

  11. The big data telescope

    International Nuclear Information System (INIS)

    Finkel, Elizabeth

    2017-01-01

    On a flat, red mulga plain in the outback of Western Australia, preparations are under way to build the most audacious telescope astronomers have ever dreamed of - the Square Kilometre Array (SKA). Next-generation telescopes usually aim to double the performance of their predecessors. The Australian arm of SKA will deliver a 168-fold leap on the best technology available today, to show us the universe as never before. It will tune into signals emitted just a million years after the Big Bang, when the universe was a sea of hydrogen gas, slowly percolating with the first galaxies. Their starlight illuminated the fledgling universe in what is referred to as the “cosmic dawn”.

  12. The Big Optical Array

    International Nuclear Information System (INIS)

    Mozurkewich, D.; Johnston, K.J.; Simon, R.S.

    1990-01-01

    This paper describes the design and the capabilities of the Naval Research Laboratory Big Optical Array (BOA), an interferometric optical array for high-resolution imaging of stars, stellar systems, and other celestial objects. There are four important differences between the BOA design and the design of Mark III Optical Interferometer on Mount Wilson (California). These include a long passive delay line which will be used in BOA to do most of the delay compensation, so that the fast delay line will have a very short travel; the beam combination in BOA will be done in triplets, to allow measurement of closure phase; the same light will be used for both star and fringe tracking; and the fringe tracker will use several wavelength channels

  13. Big nuclear accidents

    International Nuclear Information System (INIS)

    Marshall, W.

    1983-01-01

    Much of the debate on the safety of nuclear power focuses on the large number of fatalities that could, in theory, be caused by extremely unlikely but imaginable reactor accidents. This, along with the nuclear industry's inappropriate use of vocabulary during public debate, has given the general public a distorted impression of the safety of nuclear power. The way in which the probability and consequences of big nuclear accidents have been presented in the past is reviewed and recommendations for the future are made including the presentation of the long-term consequences of such accidents in terms of 'reduction in life expectancy', 'increased chance of fatal cancer' and the equivalent pattern of compulsory cigarette smoking. (author)

  14. Nonstandard big bang models

    International Nuclear Information System (INIS)

    Calvao, M.O.; Lima, J.A.S.

    1989-01-01

    The usual FRW hot big-bang cosmologies have been generalized by considering the equation of state ρ = Anm + (γ − 1)⁻¹p, where m is the rest mass of the fluid particles and A is a dimensionless constant. Explicit analytic solutions are given for the flat case (ε = 0). For large cosmological times these extended models behave as the standard Einstein-de Sitter universes regardless of the values of A and γ. Unlike the usual FRW flat case, the deceleration parameter q is a time-dependent function and its present value, q ≅ 1, obtained from the luminosity distance versus redshift relation, may be fitted by taking, for instance, A = 1 and γ = 5/3 (monatomic relativistic gas with mc² ≫ k_B T). In all cases the universe cools obeying the same temperature law of the FRW models and it is shown that the age of the universe is only slightly modified. (author) [pt]
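
    For reference, the generalized equation of state above and the Einstein-de Sitter behaviour it approaches at late times can be written compactly as follows (the a ∝ t^(2/3) law and q = 1/2 are standard Einstein-de Sitter results, added here for context rather than taken from this record):

        \rho \;=\; A\,n\,m \;+\; \frac{p}{\gamma - 1},
        \qquad
        a(t) \;\propto\; t^{2/3},
        \quad
        q \;=\; \tfrac{1}{2}
        \quad \text{(Einstein--de Sitter limit, } t \to \infty\text{)}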

  15. A matrix big bang

    International Nuclear Information System (INIS)

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control

  16. A matrix big bang

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands); Sethi, Savdeep [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Verlinde, Erik [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)

    2005-10-15

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control.

  17. DPF Big One

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    At its latest venue at Fermilab from 10-14 November, the American Physical Society's Division of Particles and Fields meeting entered a new dimension. These regular meetings, which allow younger researchers to communicate with their peers, have been gaining popularity over the years (this was the seventh in the series), but nobody had expected almost a thousand participants and nearly 500 requests to give talks. Thus Fermilab's 800-seat auditorium had to be supplemented with another room with a video hookup, while the parallel sessions were organized into nine bewildering streams covering fourteen major physics topics. With the conventionality of the Standard Model virtually unchallenged, physics does not move fast these days. While most of the physics results had already been covered in principle at the International Conference on High Energy Physics held in Dallas in August (October, page 1), the Fermilab DPF meeting had a very different atmosphere. Major international meetings like Dallas attract big names from far and wide, and it is difficult in such an august atmosphere for young researchers to find a receptive audience. This was not the case at the DPF parallel sessions. The meeting also adopted a novel approach, with the parallels sandwiched between an initial day of plenaries to set the scene, and a final day of summaries. With the whole world waiting for the sixth ('top') quark to be discovered at Fermilab's Tevatron proton-antiproton collider, the meeting began with updates from Avi Yagil and Ronald Madaras from the big detectors, CDF and D0 respectively. Although rumours flew thick and fast, the Tevatron has not yet reached the top, although Yagil could show one intriguing event of a type expected from the heaviest quark.

  18. DPF Big One

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1993-01-15

    At its latest venue at Fermilab from 10-14 November, the American Physical Society's Division of Particles and Fields meeting entered a new dimension. These regular meetings, which allow younger researchers to communicate with their peers, have been gaining popularity over the years (this was the seventh in the series), but nobody had expected almost a thousand participants and nearly 500 requests to give talks. Thus Fermilab's 800-seat auditorium had to be supplemented with another room with a video hookup, while the parallel sessions were organized into nine bewildering streams covering fourteen major physics topics. With the conventionality of the Standard Model virtually unchallenged, physics does not move fast these days. While most of the physics results had already been covered in principle at the International Conference on High Energy Physics held in Dallas in August (October, page 1), the Fermilab DPF meeting had a very different atmosphere. Major international meetings like Dallas attract big names from far and wide, and it is difficult in such an august atmosphere for young researchers to find a receptive audience. This was not the case at the DPF parallel sessions. The meeting also adopted a novel approach, with the parallels sandwiched between an initial day of plenaries to set the scene, and a final day of summaries. With the whole world waiting for the sixth ('top') quark to be discovered at Fermilab's Tevatron proton-antiproton collider, the meeting began with updates from Avi Yagil and Ronald Madaras from the big detectors, CDF and D0 respectively. Although rumours flew thick and fast, the Tevatron has not yet reached the top, although Yagil could show one intriguing event of a type expected from the heaviest quark.

  19. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  20. [Big data from clinical routine].

    Science.gov (United States)

    Mansmann, U

    2018-04-01

    Over the past 100 years, evidence-based medicine has undergone several fundamental changes. Through the field of physiology, medical doctors were introduced to the natural sciences. Since the late 1940s, randomized and epidemiological studies have come to provide the evidence for medical practice, which led to the emergence of clinical epidemiology as a new field in the medical sciences. Within the past few years, big data has become the driving force behind the vision for having a comprehensive set of health-related data which tracks individual healthcare histories and consequently those of large populations. The aim of this article is to discuss the implications of data-driven medicine, and to examine how it can find a place within clinical care. The EU-wide discussion on the development of data-driven medicine is presented. The following features and suggested actions were identified: harmonizing data formats, data processing and analysis, data exchange, related legal frameworks and ethical challenges. For the effective development of data-driven medicine, pilot projects need to be conducted to allow for open and transparent discussion on the advantages and challenges. The Federal Ministry of Education and Research ("Bundesministerium für Bildung und Forschung," BMBF) Arthromark project is an important example. Another example is the Medical Informatics Initiative of the BMBF. The digital revolution affects clinical practice. Data can be generated and stored in quantities that are almost unimaginable. It is possible to take advantage of this for the development of a learning healthcare system if the principles of medical evidence generation are integrated into innovative IT-infrastructures and processes.

  1. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  2. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  3. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets. Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute Engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  4. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher values for empathy and on the Big Five factors, with the exception of the neuroticism factor. They found relationships between empathy and the domains of openness, agreeableness, conscientiousness and extraversion. In our data, women show significantly higher values both for empathy and on the Big Five ...

  5. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article analyses the 'Big Data' issue, which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The analysis conducted here treats the 'Big Data' issue as a form of information barrier. Framed this way, the issue may be solved correctly, and it encourages the development of scientific and computational methods.

  6. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  7. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  8. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  9. Forgotten and Ignored: Special Education in First Nations Schools in Canada

    Science.gov (United States)

    Phillips, Ron

    2010-01-01

    Usually reviews of special education in Canada describe the special education programs, services, policies, and legislation that are provided by the provinces and territories. The reviews consistently ignore the special education programs, services, policies, and legislation that are provided by federal government of Canada. The federal government…

  10. The Capital Costs Conundrum: Why Are Capital Costs Ignored and What Are the Consequences?

    Science.gov (United States)

    Winston, Gordon C.

    1993-01-01

    Colleges and universities historically have ignored the capital costs associated with institutional administration in their estimates of overall and per-student costs. This neglect leads to distortion of data, misunderstandings, and uninformed decision making. The real costs should be recognized in institutional accounting. (MSE)

  11. Monitoring your friends, not your foes: strategic ignorance and the delegation of real authority

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.

    2010-01-01

    In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to better exploit the

  12. Monitored by your friends, not your foes: Strategic ignorance and the delegation of real authority

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Sloof, R.; von Siemens, F.

    2012-01-01

    In this laboratory experiment we study the use of strategic ignorance to delegate real authority within a firm. A worker can gather information on investment projects, while a manager makes the implementation decision. The manager can monitor the worker. This allows her to exploit any information

  13. Mathematical Practice as Sculpture of Utopia: Models, Ignorance, and the Emancipated Spectator

    Science.gov (United States)

    Appelbaum, Peter

    2012-01-01

    This article uses Ranciere's notion of the ignorant schoolmaster and McElheny's differentiation of artist's models from those of the architect and scientist to propose the reconceptualization of mathematics education as the support of emancipated spectators and sculptors of utopia.

  14. The effects of systemic crises when investors can be crisis ignorant

    NARCIS (Netherlands)

    H.J.W.G. Kole (Erik); C.G. Koedijk (Kees); M.J.C.M. Verbeek (Marno)

    2004-01-01

    textabstractSystemic crises can largely affect asset allocations due to the rapid deterioration of the risk-return trade-off. We investigate the effects of systemic crises, interpreted as global simultaneous shocks to financial markets, by introducing an investor adopting a crisis ignorant or crisis

  15. Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia

    NARCIS (Netherlands)

    van Schendel, W.

    2002-01-01

    'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are

  16. The Trust Game Behind the Veil of Ignorance : A Note on Gender Differences

    NARCIS (Netherlands)

    Vyrastekova, J.; Onderstal, A.M.

    2005-01-01

    We analyse gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are slightly

  17. The trust game behind the veil of ignorance: A note on gender differences

    NARCIS (Netherlands)

    Vyrastekova, J.; Onderstal, S.

    2008-01-01

    We analyze gender differences in the trust game in a "behind the veil of ignorance" design. This method yields strategies that are consistent with actions observed in the classical trust game experiments. We observe that, on average, men and women do not differ in "trust", and that women are

  18. The Ignorant Facilitator: Education, Politics and Theatre in Co-Communities

    Science.gov (United States)

    Lev-Aladgem, Shulamith

    2015-01-01

    This article discusses the book "The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation" by the French philosopher, Jacques Rancière. Its intention is to study the potential contribution of this text to the discourse of applied theatre (theatre in co-communities) in general, and the role of the facilitator in particular. It…

  19. Ignoring Memory Hints: The Stubborn Influence of Environmental Cues on Recognition Memory

    Science.gov (United States)

    Selmeczy, Diana; Dobbins, Ian G.

    2017-01-01

    Recognition judgments can benefit from the use of environmental cues that signal the general likelihood of encountering familiar versus unfamiliar stimuli. While incorporating such cues is often adaptive, there are circumstances (e.g., eyewitness testimony) in which observers should fully ignore environmental cues in order to preserve memory…

  20. Uncertain Climate Forecasts From Multimodel Ensembles: When to Use Them and When to Ignore Them

    OpenAIRE

    Jewson, Stephen; Rowlands, Dan

    2010-01-01

    Uncertainty around multimodel ensemble forecasts of changes in future climate reduces the accuracy of those forecasts. For very uncertain forecasts this effect may mean that the forecasts should not be used. We investigate the use of the well-known Bayesian Information Criterion (BIC) to make the decision as to whether a forecast should be used or ignored.
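
    As a rough sketch of the kind of decision rule described above (the Gaussian forecast model, the climatological baseline, and all numbers below are assumptions for illustration, not the paper's actual setup), one can compare the BIC of a forecast-based model against a no-forecast baseline and ignore the forecast when its BIC is worse:

        import numpy as np

        def bic(log_likelihood, k, n):
            # BIC = k ln(n) - 2 ln(L); lower is better.
            return k * np.log(n) - 2.0 * log_likelihood

        def gaussian_loglik(obs, mean, sd):
            return np.sum(-0.5 * np.log(2 * np.pi * sd ** 2)
                          - (obs - mean) ** 2 / (2 * sd ** 2))

        rng = np.random.default_rng(0)
        obs = rng.normal(0.3, 1.0, size=30)   # observed climate anomalies (synthetic)
        forecast = np.full(30, 0.25)          # multimodel ensemble-mean forecast (assumed)

        n = len(obs)
        sd = obs.std(ddof=1)
        # Baseline: climatology, fitting mean and spread (k = 2 parameters).
        clim = bic(gaussian_loglik(obs, obs.mean(), sd), k=2, n=n)
        # Forecast model: mean fixed by the ensemble, only spread fitted (k = 1).
        fc = bic(gaussian_loglik(obs, forecast, sd), k=1, n=n)
        print("use forecast" if fc < clim else "ignore forecast")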

  1. Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.

    Science.gov (United States)

    Butler, Beverly C; Klein, Raymond

    2009-09-01

    Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.

  2. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move and how is it analyzed? All these questions will be answered during the talk.

  3. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  4. The universe before the Big Bang cosmology and string theory

    CERN Document Server

    Gasperini, Maurizio

    2008-01-01

    Terms such as "expanding Universe", "big bang", and "initial singularity", are nowadays part of our common language. The idea that the Universe we observe today originated from an enormous explosion (big bang) is now well known and widely accepted, at all levels, in modern popular culture. But what happens to the Universe before the big bang? And would it make any sense at all to ask such a question? In fact, recent progress in theoretical physics, and in particular in String Theory, suggests answers to the above questions, providing us with mathematical tools able in principle to reconstruct the history of the Universe even for times before the big bang. In the emerging cosmological scenario the Universe, at the epoch of the big bang, instead of being a "new born baby" was actually a rather "aged" creature in the middle of its possibly infinitely enduring evolution. The aim of this book is to convey this picture in non-technical language accessible also to non-specialists. The author, himself a leading cosm...

  5. Surveillance, Snowden, and Big Data: Capacities, consequences, critique

    Directory of Open Access Journals (Sweden)

    David Lyon

    2014-07-01

    Full Text Available The Snowden revelations about National Security Agency surveillance, starting in 2013, along with the ambiguous complicity of internet companies and the international controversies that followed provide a perfect segue into contemporary conundrums of surveillance and Big Data. Attention has shifted from late C20th information technologies and networks to a C21st focus on data, currently crystallized in “Big Data.” Big Data intensifies certain surveillance trends associated with information technology and networks, and is thus implicated in fresh but fluid configurations. This is considered in three main ways: One, the capacities of Big Data (including metadata intensify surveillance by expanding interconnected datasets and analytical tools. Existing dynamics of influence, risk-management, and control increase their speed and scope through new techniques, especially predictive analytics. Two, while Big Data appears to be about size, qualitative change in surveillance practices is also perceptible, accenting consequences. Important trends persist – the control motif, faith in technology, public-private synergies, and user-involvement – but the future-orientation increasingly severs surveillance from history and memory and the quest for pattern-discovery is used to justify unprecedented access to data. Three, the ethical turn becomes more urgent as a mode of critique. Modernity's predilection for certain definitions of privacy betrays the subjects of surveillance who, so far from conforming to the abstract, disembodied image of both computing and legal practices, are engaged and embodied users-in-relation whose activities both fuel and foreclose surveillance.

  6. Big climate data analysis

    Science.gov (United States)

    Mudelsee, Manfred

    2015-04-01

    The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to
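
    As a small illustration of the bootstrap methods mentioned above, here is a generic moving-block bootstrap confidence interval for the mean of an autocorrelated series (the AR(1) data, block length, and all parameters are invented for illustration; this is not code from the cited book):

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic AR(1) "climate" series with serial correlation.
        n, phi = 500, 0.6
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()

        def block_bootstrap_ci(series, block=25, reps=2000, alpha=0.05):
            # Resample whole blocks so that serial dependence is preserved.
            m = len(series)
            starts = m - block + 1
            means = np.empty(reps)
            for r in range(reps):
                idx = rng.integers(0, starts, size=m // block)
                sample = np.concatenate([series[s:s + block] for s in idx])
                means[r] = sample.mean()
            lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
            return lo, hi

        print("95% CI for the mean:", block_bootstrap_ci(x))

    The point of the block structure is exactly the one the abstract makes about uncertainty: a naive i.i.d. bootstrap on autocorrelated data would understate the width of the interval.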

  7. Hey, big spender

    Energy Technology Data Exchange (ETDEWEB)

    Cope, G.

    2000-04-01

    Business to business electronic commerce is looming large in the future of the oil industry. It is estimated that by adopting e-commerce the industry could achieve bottom line savings of between $1.8 to $ 3.4 billion a year on annual gross revenues in excess of $ 30 billion. At present there are several teething problems to overcome such as inter-operability standards, which are at least two or three years away. Tying in electronically with specific suppliers is also an expensive proposition, although the big benefits are in fact in doing business with the same suppliers on a continuing basis. Despite these problems, 14 of the world's largest energy and petrochemical companies joined forces in mid-April to create a single Internet procurement marketplace for the industry's complex supply chain. The exchange was designed by B2B (business-to-business) software provider, Commerce One Inc., ; it will leverage the buying clout of these industry giants (BP Amoco, Royal Dutch Shell Group, Conoco, Occidental Petroleum, Phillips Petroleum, Unocal Corporation and Statoil among them), currently about $ 125 billion on procurement per year; they hope to save between 5 to 30 per cent depending on the product and the region involved. Other similar schemes such as Chevron and partners' Petrocosm Marketplace, Network Oil, a Houston-based Internet portal aimed at smaller petroleum companies, are also doing business in the $ 10 billion per annum range. e-Energy, a cooperative project between IBM Ericson and Telus Advertising is another neutral, virtual marketplace targeted at the oil and gas sector. PetroTRAX, a Calgary-based website plans to take online procurement and auction sales a big step forward by establishing a portal to handle any oil company's asset management needs. There are also a number of websites targeting specific needs: IndigoPool.com (acquisitions and divestitures) and WellBid.com (products related to upstream oil and gas operators) are just

  8. Hey, big spender

    International Nuclear Information System (INIS)

    Cope, G.

    2000-01-01

    Business to business electronic commerce is looming large in the future of the oil industry. It is estimated that by adopting e-commerce the industry could achieve bottom line savings of between $1.8 to $ 3.4 billion a year on annual gross revenues in excess of $ 30 billion. At present there are several teething problems to overcome such as inter-operability standards, which are at least two or three years away. Tying in electronically with specific suppliers is also an expensive proposition, although the big benefits are in fact in doing business with the same suppliers on a continuing basis. Despite these problems, 14 of the world's largest energy and petrochemical companies joined forces in mid-April to create a single Internet procurement marketplace for the industry's complex supply chain. The exchange was designed by B2B (business-to-business) software provider, Commerce One Inc., ; it will leverage the buying clout of these industry giants (BP Amoco, Royal Dutch Shell Group, Conoco, Occidental Petroleum, Phillips Petroleum, Unocal Corporation and Statoil among them), currently about $ 125 billion on procurement per year; they hope to save between 5 to 30 per cent depending on the product and the region involved. Other similar schemes such as Chevron and partners' Petrocosm Marketplace, Network Oil, a Houston-based Internet portal aimed at smaller petroleum companies, are also doing business in the $ 10 billion per annum range. e-Energy, a cooperative project between IBM Ericson and Telus Advertising is another neutral, virtual marketplace targeted at the oil and gas sector. PetroTRAX, a Calgary-based website plans to take online procurement and auction sales a big step forward by establishing a portal to handle any oil company's asset management needs. There are also a number of websites targeting specific needs: IndigoPool.com (acquisitions and divestitures) and WellBid.com (products related to upstream oil and gas operators) are just two examples. All in

  9. Small Artifacts - Big Technologies

    DEFF Research Database (Denmark)

    Kreiner, Kristian

    2005-01-01

    The computer IC is the heart of the information and telecommunication technology. It is a tiny artifact, but with incredible organizing powers. We use this physical artifact as the location for studying central problems of the knowledge economy. First, the paper describes the history of chip design...

  10. The Big Build

    Science.gov (United States)

    Haigh, Sarah; Bell, Christopher; Ruta, Chris

    2017-01-01

    This article provides details of a successful educational engineering project run in partnership between a group of ten schools and an international engineering, construction and technical services company. It covers the history and evolution of the project and highlights how the project has significant impact not only on the students involved but…

  11. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated with examples in order to highlight their advantages and disadvantages.

  12. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  13. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating; what is dark matter and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  14. Biophotonics: the big picture

    Science.gov (United States)

    Marcu, Laura; Boppart, Stephen A.; Hutchinson, Mark R.; Popp, Jürgen; Wilson, Brian C.

    2018-02-01

    The 5th International Conference on Biophotonics (ICOB) held April 30 to May 1, 2017, in Fremantle, Western Australia, brought together opinion leaders to discuss future directions for the field and opportunities to consider. The first session of the conference, "How to Set a Big Picture Biophotonics Agenda," was focused on setting the stage for developing a vision and strategies for translation and impact on society of biophotonic technologies. The invited speakers, panelists, and attendees engaged in discussions that focused on opportunities and promising applications for biophotonic techniques, challenges when working at the confluence of the physical and biological sciences, driving factors for advances of biophotonic technologies, and educational opportunities. We share a summary of the presentations and discussions. Three main themes from the conference are presented in this position paper that capture the current status, opportunities, challenges, and future directions of biophotonics research and key areas of applications: (1) biophotonics at the nano- to microscale level; (2) biophotonics at meso- to macroscale level; and (3) biophotonics and the clinical translation conundrum.

  15. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
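
    A small simulation of the phenomenon described above (our illustration, not the authors' code; the exponential data, m = 10,000 tests and n = 10 are invented): with Bonferroni correction each test must hit an extremely small tail probability, and mild skewness is enough to push the actual family-wise error rate away from the nominal level.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        m, n, alpha, reps = 10_000, 10, 0.05, 200
        bonferroni = alpha / m  # each test must hit a 5e-6 tail probability
        false_pos = 0
        for _ in range(reps):
            # All null hypotheses are true, but the data are skewed, so the
            # extreme tails of the t statistic deviate from the t distribution.
            data = rng.exponential(1.0, size=(m, n)) - 1.0  # true mean = 0
            t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n))
            p = 2 * stats.t.sf(np.abs(t), df=n - 1)
            false_pos += (p < bonferroni).any()
        print("actual FWER:", false_pos / reps, "nominal:", alpha)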

  16. Big bang darkleosynthesis

    Directory of Open Access Journals (Sweden)

    Gordan Krnjaic

    2015-12-01

    Full Text Available In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generically yields large (≫MeV/dark-nucleon) binding energies. These bounds further suggest the mediator is relatively weakly coupled, so repulsive forces between dark-sector nuclei are much weaker than Coulomb repulsion between standard-model nuclei, which results in an exponential barrier-tunneling enhancement over standard BBN. Thus, darklei are easier to make and harder to break than visible species with comparable mass numbers. This process can efficiently yield a dominant population of states with masses significantly greater than the confinement scale and, in contrast to dark matter that is a fundamental particle, may allow the dominant form of dark matter to have high spin (S≫3/2), whose discovery would be smoking gun evidence for dark nuclei.

  17. Predicting big bang deuterium

    Energy Technology Data Exchange (ETDEWEB)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P. [Department of Physics, Ohio State University, Columbus, Ohio 43210 (United States)

    1996-02-01

    We present new upper and lower bounds to the primordial abundances of deuterium and ³He based on observational data from the solar system and the interstellar medium. Independent of any model for the primordial production of the elements we find (at the 95% C.L.): 1.5×10⁻⁵ ≤ (D/H)_P ≤ 10.0×10⁻⁵ and (³He/H)_P ≤ 2.6×10⁻⁵. When combined with the predictions of standard big bang nucleosynthesis, these constraints lead to a 95% C.L. bound on the primordial abundance of deuterium: (D/H)_best = (3.5 +2.7/−1.8) × 10⁻⁵. Measurements of deuterium absorption in the spectra of high-redshift QSOs will directly test this prediction. The implications of this prediction for the primordial abundances of ⁴He and ⁷Li are discussed, as well as those for the universal density of baryons.

  18. Big bang darkleosynthesis

    Science.gov (United States)

    Krnjaic, Gordan; Sigurdson, Kris

    2015-12-01

    In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generically yields large (≫MeV /dark-nucleon) binding energies. These bounds further suggest the mediator is relatively weakly coupled, so repulsive forces between dark-sector nuclei are much weaker than Coulomb repulsion between standard-model nuclei, which results in an exponential barrier-tunneling enhancement over standard BBN. Thus, darklei are easier to make and harder to break than visible species with comparable mass numbers. This process can efficiently yield a dominant population of states with masses significantly greater than the confinement scale and, in contrast to dark matter that is a fundamental particle, may allow the dominant form of dark matter to have high spin (S ≫ 3 / 2), whose discovery would be smoking gun evidence for dark nuclei.

  19. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  20. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  1. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features impact paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence set and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
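
    The "spurious correlation" challenge above is easy to demonstrate: with far more variables than observations, some completely independent feature is almost guaranteed to correlate strongly with the response by chance alone (the dimensions below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 50, 5000                  # few samples, many features
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)           # independent of every column of X

        # Sample correlation of each feature with the response.
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)
        yc = (y - y.mean()) / y.std()
        cors = Xc.T @ yc / n
        print("max |correlation| among independent features:",
              round(float(np.abs(cors).max()), 3))

    For n = 50 and p = 5000 the maximum absolute correlation typically lands near 0.6, large enough to look like a real signal even though none exists.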

  2. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. It analyses the existing definitions of the term "big data". The article proposes and describes the elements of a generalized formal model of big data, and analyses the peculiarities of applying the components of the proposed model. It describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  3. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

    Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete excavations of major sites throughout the region. Although there are considerable advantages to such large-scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large-scale excavations in the southeast in light of traditional views within the discipline that the region has contributed little to the ‘big questions’ of American archaeology. Recently published analyses of decades-old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre-existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  4. Taking of history

    DEFF Research Database (Denmark)

    Langebæk, Rikke

    2007-01-01

    Learning how to take a history is an extremely important discipline in the education of veterinary students. In our opinion the fact that this discipline is often neglected in traditional teaching is a big mistake. The mere thought of facing a real client can be almost paralysing to even the smartest student. So the more familiar a student can become with these situations the better. Since September 2006, veterinary students at Faculty of Life Sciences, University of Copenhagen, have received training in the discipline of history taking, using innovative educational methods: Online......, the students won’t be intimidated by the situation, as they are already familiar with the ‘client’. The ‘client’/teacher must be able to perform as different types of clients to make the sessions more interesting, colourful and fun. During these Live Role sessions, the students will get help and good advice......

  5. L'Univers avant le Big Bang

    CERN Document Server

    Rouat, Sylvie

    2003-01-01

    "Tout n'a pas commencé par une explosion. L'historie du cosmos avait débuté biena vant le Big Bang, si l'on suit la théorie défendue par les partisans d'une nouvelle cosmologie issue de la mystérieuse théorie des cordes. A l'heure où vacillent les scénarios classiques du XXe siècle, se prépare un grand chamboulement de nos idées sur la naissance de l'Univers et son devenir, sur l'existence possible d'univers parallèles. Des théories séduisantes qui seront mises à l'épreuve au cours de la prochaine décennie" (11 pages)

  6. Double jeopardy, the equal value of lives and the veil of ignorance: a rejoinder to Harris.

    Science.gov (United States)

    McKie, J; Kuhse, H; Richardson, J; Singer, P

    1996-08-01

    Harris levels two main criticisms against our original defence of QALYs (Quality Adjusted Life Years). First, he rejects the assumption implicit in the QALY approach that not all lives are of equal value. Second, he rejects our appeal to Rawls's veil of ignorance test in support of the QALY method. In the present article we defend QALYs against Harris's criticisms. We argue that some of the conclusions Harris draws from our view that resources should be allocated on the basis of potential improvements in quality of life and quantity of life are erroneous, and that others lack the moral implications Harris claims for them. On the other hand, we defend our claim that a rational egoist, behind a veil of ignorance, could consistently choose to allocate life-saving resources in accordance with the QALY method, despite Harris's claim that a rational egoist would allocate randomly if there is no better than a 50% chance of being the recipient.

  7. Crimes committed by indigenous people in ignorance of the law

    Directory of Open Access Journals (Sweden)

    Diego Fernando Chimbo Villacorte

    2017-07-01

    Full Text Available This analysis focuses specifically on cases in which an indigenous person commits a crime in ignorance of the law, not only because he is entirely unaware of the unlawfulness of his conduct, but also because he believes he is acting in strict accordance with his ancestral beliefs and customs, which in some cases clash with positive law. It likewise considers the impossibility of imposing a penalty (when the offense is committed outside the community) or indigenous purification (when the act disturbs social peace within the indigenous community), but it focuses mainly on the impossibility of imposing a security measure when a crime has been committed outside the offender's community, because in doing so he is declared exempt from criminal liability and returned to his community, generating a discriminatory treatment that prevents the self-determination of the culturally different.

  8. The Insider Threat to Cybersecurity: How Group Process and Ignorance Affect Analyst Accuracy and Promptitude

    Science.gov (United States)

    2017-09-01

    McCarthy, J. (1980). Circumscription - A Form of Nonmonotonic Reasoning. Artificial Intelligence, 13, 27-39. McClure, S., Scambray, J., & Kurtz, G. (2012)... The Insider Threat to Cybersecurity: How Group Process and Ignorance Affect Analyst Accuracy and Promptitude, dissertation by Ryan F. Kelly, September 2017...

  9. Geographies of knowing, geographies of ignorance: jumping scale in Southeast Asia

    OpenAIRE

    van Schendel, W.

    2002-01-01

    'Area studies' use a geographical metaphor to visualise and naturalise particular social spaces as well as a particular scale of analysis. They produce specific geographies of knowing but also create geographies of ignorance. Taking Southeast Asia as an example, in this paper I explore how areas are imagined and how area knowledge is structured to construct area 'heartlands' as well as area 'borderlands'. This is illustrated by considering a large region of Asia (here named Zomia) that did ...

  10. Early humans' egalitarian politics: runaway synergistic competition under an adapted veil of ignorance.

    Science.gov (United States)

    Harvey, Marc

    2014-09-01

    This paper proposes a model of human uniqueness based on an unusual distinction between two contrasted kinds of political competition and political status: (1) antagonistic competition, in quest of dominance (antagonistic status), a zero-sum, self-limiting game whose stake--who takes what, when, how--summarizes a classical definition of politics (Lasswell 1936), and (2) synergistic competition, in quest of merit (synergistic status), a positive-sum, self-reinforcing game whose stake becomes "who brings what to a team's common good." In this view, Rawls's (1971) famous virtual "veil of ignorance" mainly conceals politics' antagonistic stakes so as to devise the principles of a just, egalitarian society, yet without providing any means to enforce these ideals (Sen 2009). Instead, this paper proposes that human uniqueness flourished under a real "adapted veil of ignorance" concealing the steady inflation of synergistic politics which resulted from early humans' sturdy egalitarianism. This proposition divides into four parts: (1) early humans first stumbled on a purely cultural means to enforce a unique kind of within-team antagonistic equality--dyadic balanced deterrence thanks to handheld weapons (Chapais 2008); (2) this cultural innovation is thus closely tied to humans' darkest side, but it also launched the cumulative evolution of humans' brightest qualities--egalitarian team synergy and solidarity, together with the associated synergistic intelligence, culture, and communications; (3) runaway synergistic competition for differential merit among antagonistically equal obligate teammates is the single politically selective mechanism behind the cumulative evolution of all these brighter qualities, but numerous factors to be clarified here conceal this mighty evolutionary driver; (4) this veil of ignorance persists today, which explains why humans' unique prosocial capacities are still not clearly understood by science. The purpose of this paper is to start lifting

  11. On the perpetuation of ignorance: system dependence, system justification, and the motivated avoidance of sociopolitical information.

    Science.gov (United States)

    Shepherd, Steven; Kay, Aaron C

    2012-02-01

    How do people cope when they feel uninformed or unable to understand important social issues, such as the environment, energy concerns, or the economy? Do they seek out information, or do they simply ignore the threatening issue at hand? One would intuitively expect that a lack of knowledge would motivate an increased, unbiased search for information, thereby facilitating participation and engagement in these issues-especially when they are consequential, pressing, and self-relevant. However, there appears to be a discrepancy between the importance/self-relevance of social issues and people's willingness to engage with and learn about them. Leveraging the literature on system justification theory (Jost & Banaji, 1994), the authors hypothesized that, rather than motivating an increased search for information, a lack of knowledge about a specific sociopolitical issue will (a) foster feelings of dependence on the government, which will (b) increase system justification and government trust, which will (c) increase desires to avoid learning about the relevant issue when information is negative or when information valence is unknown. In other words, the authors suggest that ignorance-as a function of the system justifying tendencies it may activate-may, ironically, breed more ignorance. In the contexts of energy, environmental, and economic issues, the authors present 5 studies that (a) provide evidence for this specific psychological chain (i.e., ignorance about an issue → dependence → government trust → avoidance of information about that issue); (b) shed light on the role of threat and motivation in driving the second and third links in this chain; and (c) illustrate the unfortunate consequences of this process for individual action in those contexts that may need it most.

  12. Ignoring versus updating in working memory reveal differential roles of attention and feature binding

    OpenAIRE

    Fallon, SJ; Mattiesing, RM; Dolfen, N; Manohar, SGM; Husain, M

    2017-01-01

    Ignoring distracting information and updating current contents are essential components of working memory (WM). Yet, although both require controlling irrelevant information, it is unclear whether they have the same effects on recall and produce the same level of misbinding errors (incorrectly joining the features of different memoranda). Moreover, the likelihood of misbinding may be affected by the feature similarity between the items already encoded into memory and the information that has ...

  13. Framing Big Data: The discursive construction of a radio cell query in Germany

    Directory of Open Access Journals (Sweden)

    Christian Pentzold

    2017-11-01

    Full Text Available The article examines the construction of “Big Data” in media discourse. Rather than asking what Big Data really is or is not, it deals with the discursive work that goes into making Big Data a socially relevant phenomenon and problem in the first place. It starts from the idea that in modern societies the public understanding of technology is largely driven by a media-based discourse, which is a key arena for circulating collectively shared meanings. This largely ignored dimension invites us to appreciate what matters to journalists and the wider public when discussing the collection and use of data. To this end, our study looks at how Big Data is framed in terms of the governmental use of large datasets as a contentious area of data application. It reconstructs the perspectives surrounding the so-called “Handygate” affair in Germany based on broadcast news and social media conversations. In this incident, state authorities collected and analyzed mobile phone data through a radio cell query during events to commemorate the Dresden bombing in February 2011. We employ a qualitative discourse analysis that allows us to reconstruct the conceptualizations of Big Data as a proper instrument for criminal prosecution or an unjustified infringement of constitutional rights.

  14. Ignoring alarming news brings indifference: Learning about the world and the self.

    Science.gov (United States)

    Paluck, Elizabeth Levy; Shafir, Eldar; Wu, Sherry Jueyu

    2017-10-01

    The broadcast of media reports about moral crises such as famine can subtly depress rather than activate moral concern. Whereas much research has examined the effects of media reports that people attend to, social psychological analysis suggests that what goes unattended can also have an impact. We test the idea that when vivid news accounts of human suffering are broadcast in the background but ignored, people infer from their choice to ignore these accounts that they care less about the issue, compared to those who pay attention and even to those who were not exposed. Consistent with research on self-perception and attribution, three experiments demonstrate that participants who were nudged to distract themselves in front of a television news program about famine in Niger (Study 1), or to skip an online promotional video for the Niger famine program (Study 2), or who chose to ignore the famine in Niger television program in more naturalistic settings (Study 3) all assigned lower importance to poverty and to hunger reduction compared to participants who watched with no distraction or opportunity to skip the program, or to those who did not watch at all. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  16. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  17. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  18. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  19. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  20. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  1. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  2. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data with highlighting the big data analytics in medicine and healthcare. Big data characteristics: value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers integration and analysis of large amount of complex heterogeneous data such as various - omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues about big data privacy and security. Regarding big data characteristics, some directions of using suitable and promising open-source distributed data processing software platform are given.

  3. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  4. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  5. Big Book of Windows Hacks

    CERN Document Server

    Gralla, Preston

    2008-01-01

    Bigger, better, and broader in scope, the Big Book of Windows Hacks gives you everything you need to get the most out of your Windows Vista or XP system, including its related applications and the hardware it runs on or connects to. Whether you want to tweak Vista's Aero interface, build customized sidebar gadgets and run them from a USB key, or hack the "unhackable" screensavers, you'll find quick and ingenious ways to bend these recalcitrant operating systems to your will. The Big Book of Windows Hacks focuses on Vista, the new bad boy on Microsoft's block, with hacks and workarounds that

  6. Sosiaalinen asiakassuhdejohtaminen ja big data

    OpenAIRE

    Toivonen, Topi-Antti

    2015-01-01

    This thesis examines social customer relationship management and the benefits that big data can bring to it. Social customer relationship management is a new term, unfamiliar to many. The research is motivated by the scarcity of prior work on the topic, the complete absence of research in Finnish, and the potentially essential role social customer relationship management may play in companies' operations in the future. Studies on big data often concentrate on its technical side rather than on applicat...

  7. Do big gods cause anything?

    DEFF Research Database (Denmark)

    Geertz, Armin W.

    2014-01-01

    This is a contribution to a review symposium on Ara Norenzayan's book Big Gods: How Religion Transformed Cooperation and Conflict (Princeton University Press 2013). The book is exciting but problematic with regard to causality, atheism, and stereotypes about hunter-gatherers.

  8. Baryon symmetric big bang cosmology

    International Nuclear Information System (INIS)

    Stecker, F.W.

    1978-01-01

    It is stated that the framework of baryon symmetric big bang (BSBB) cosmology offers our greatest potential for deducing the evolution of the Universe because its physical laws and processes have the minimum number of arbitrary assumptions about initial conditions in the big bang. In addition, it offers the possibility of explaining the photon-baryon ratio in the Universe and how galaxies and galaxy clusters are formed. BSBB cosmology also provides the only acceptable explanation at present for the origin of the cosmic γ-ray background radiation. (author)

  9. Release plan for Big Pete

    International Nuclear Information System (INIS)

    Edwards, T.A.

    1996-11-01

    This release plan is to provide instructions for the Radiological Control Technician (RCT) to conduct surveys for the unconditional release of "Big Pete," which was used in the removal of "Spacers" from the N-Reactor. Prior to performing surveys on the rear end portion of "Big Pete," it shall be cleaned (i.e., free of oil, grease, caked soil, heavy dust). If no contamination is found, the vehicle may be released with the permission of the area RCT Supervisor. If contamination is found by any of the surveys, contact the cognizant Radiological Engineer for decontamination instructions

  10. Small quarks make big nuggets

    International Nuclear Information System (INIS)

    Deligeorges, S.

    1985-01-01

    After a brief recall of the classification of subatomic particles, this paper deals with quark nuggets: particles with more than three quarks, forming a big bag called a "nuclearite". Neutron stars, in fact, are big sacks of quarks, gigantic nuggets. Physicists are now trying to calculate which type of strange-quark-matter nugget is stable, and what influence quark nuggets had on primordial nucleosynthesis. At present, the view is that if these "nuggets" exist, and in large proportion, they may be candidates for the missing mass [fr]

  11. [Big Data- challenges and risks].

    Science.gov (United States)

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  12. Towards a big crunch dual

    Energy Technology Data Exchange (ETDEWEB)

    Hertog, Thomas E-mail: hertog@vulcan2.physics.ucsb.edu; Horowitz, Gary T

    2004-07-01

    We show there exist smooth asymptotically anti-de Sitter initial data which evolve to a big crunch singularity in a low energy supergravity limit of string theory. This opens up the possibility of using the dual conformal field theory to obtain a fully quantum description of the cosmological singularity. A preliminary study of this dual theory suggests that the big crunch is an endpoint of evolution even in the full string theory. We also show that any theory with scalar solitons must have negative energy solutions. The results presented here clarify our earlier work on cosmic censorship violation in N=8 supergravity. (author)

  13. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  14. Big Cities, Big Problems: Reason for the Elderly to Move?

    NARCIS (Netherlands)

    Fokkema, T.; de Jong-Gierveld, J.; Nijkamp, P.

    1996-01-01

    In many European countries, data on geographical patterns of internal elderly migration show that the elderly (55+) are more likely to leave than to move to the big cities. Besides emphasising the attractive features of the destination areas (pull factors), it is often assumed that this negative

  15. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  16. Big Bang, Blowup, and Modular Curves: Algebraic Geometry in Cosmology

    Science.gov (United States)

    Manin, Yuri I.; Marcolli, Matilde

    2014-07-01

    We introduce some algebraic geometric models in cosmology related to the "boundaries" of space-time: Big Bang, Mixmaster Universe, Penrose's crossovers between aeons. We suggest modeling the kinematics of the Big Bang using the algebraic geometric (or analytic) blow-up of a point x. This creates a boundary which consists of the projective space of tangent directions to x and possibly of the light cone of x. We argue that time on the boundary undergoes the Wick rotation and becomes purely imaginary. The Mixmaster (Bianchi IX) model of the early history of the universe is neatly explained in this picture by postulating that the reverse Wick rotation follows a hyperbolic geodesic connecting the imaginary time axis to the real one. Penrose's idea to see the Big Bang as a sign of crossover from "the end of the previous aeon" of the expanding and cooling Universe to the "beginning of the next aeon" is interpreted as an identification of a natural boundary of Minkowski space at infinity with the Big Bang boundary.

  17. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Big Data can be static on one machine or distributed ... decision making, and process automation. Big data .... Concept Drifting: concept drifting mean the classifier .... transactions generated by a prefix tree structure. EstDec ...

  18. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  19. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817
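
    [Illustrative sketch] The crux of non-ignorable (MNAR) missingness described above is that dropout depends on values that would have been observed. A tiny Python simulation, unrelated to the STAR*D data and using invented numbers, makes the resulting bias concrete:

        # Minimal MNAR simulation (illustrative; not the STAR*D data).
        # Dropout probability rises with the value that would have been observed,
        # so the observed mean is biased relative to the full-data mean.
        import random

        random.seed(0)
        full = [random.gauss(50, 10) for _ in range(100000)]  # true outcomes

        # Higher (worse) scores are more likely to go missing.
        def p_missing(y):
            return min(0.9, max(0.0, (y - 50) / 40))

        observed = [y for y in full if random.random() > p_missing(y)]

        print(f"full-data mean: {sum(full) / len(full):.2f}")
        print(f"observed mean under MNAR dropout: {sum(observed) / len(observed):.2f}")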

  20. Burden of Circulatory System Diseases and Ignored Barriers of Knowledge Translation

    Directory of Open Access Journals (Sweden)

    Hamed-Basir Ghafouri

    2012-10-01

    Full Text Available Circulatory system diseases carry the third-highest burden of disability-adjusted life years among Iranians, and ischemic cardiac diseases are the main cause of this burden. Despite available evidence on the risk factors of the disease, no effective intervention has been implemented to control and prevent it. This paper non-systematically reviews the available literature on the problem, the solutions, and the barriers to implementing knowledge translation in Iran. It appears that factors such as cultural and motivational issues are ignored in knowledge translation interventions, but there is hope in the implementation of projects already started and in the preparation of students as the next generation of knowledge transferors.

  1. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  2. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  3. A brief history of the universe

    International Nuclear Information System (INIS)

    Sinha, K.K.

    1992-10-01

    The failure of the Big Bang Theory to explain the experimental (cosmological) data is well known. Attempts have been made to give a new interpretation of the above theory to explain the existing cosmological problems and a brief history of the universe. (author). 2 refs, 1 fig

  4. China: Big Changes Coming Soon

    Science.gov (United States)

    Rowen, Henry S.

    2011-01-01

    Big changes are ahead for China, probably abrupt ones. The economy has grown so rapidly for many years, over 30 years at an average of nine percent a year, that its size makes it a major player in trade and finance and increasingly in political and military matters. This growth is not only of great importance internationally, it is already having…

  5. Big data and urban governance

    NARCIS (Netherlands)

    Taylor, L.; Richter, C.; Gupta, J.; Pfeffer, K.; Verrest, H.; Ros-Tonen, M.

    2015-01-01

    This chapter examines the ways in which big data is involved in the rise of smart cities. Mobile phones, sensors and online applications produce streams of data which are used to regulate and plan the city, often in real time, but which presents challenges as to how the city’s functions are seen and

  6. Big Data for personalized healthcare

    NARCIS (Netherlands)

    Siemons, Liseth; Sieverink, Floor; Vollenbroek, Wouter; van de Wijngaert, Lidwien; Braakman-Jansen, Annemarie; van Gemert-Pijnen, Lisette

    2016-01-01

    Big Data, often defined according to the 5V model (volume, velocity, variety, veracity and value), is seen as the key towards personalized healthcare. However, it also confronts us with new technological and ethical challenges that require more sophisticated data management tools and data analysis

  7. Big data en gelijke behandeling

    NARCIS (Netherlands)

    Lammerant, Hans; de Hert, Paul; Blok, P.H.; Blok, P.H.

    2017-01-01

    In this chapter we first examine the main basic concepts of equal treatment and discrimination (Section 6.2). We then look at the Dutch and European legal frameworks on non-discrimination (Sections 6.3-6.5) and at how those rules should be applied to big

  8. Research Ethics in Big Data.

    Science.gov (United States)

    Hammer, Marilyn J

    2017-05-01

    The ethical conduct of research includes, in part, patient agreement to participate in studies and the protection of health information. In the evolving world of data science and the accessibility of large quantities of web-based data created by millions of individuals, novel methodologic approaches to answering research questions are emerging. This article explores research ethics in the context of big data.

  9. Big data e data science

    OpenAIRE

    Cavique, Luís

    2014-01-01

    In this article the basic concepts of Big Data were presented, along with the new field it gave rise to, Data Science. Within Data Science, the notion of reducing the dimensionality of data was discussed and exemplified.

  10. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  11. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    gories. These include edge sampling methods, where edges are selected by predetermined criteria, and snowball sampling methods, where algorithms start... Sampling Operations on Big Data, Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller, Lincoln... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and
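
    [Illustrative sketch] The two graph-sampling strategies the fragment names are standard. As a hedged illustration (not code from the report), a minimal snowball sampler over an adjacency list could look like this in Python; the toy graph and the wave count are invented:

        # Minimal snowball-sampling sketch (illustrative; not from the cited report).
        # Starting from seed nodes, add all neighbors of the current frontier,
        # repeating for a fixed number of waves.
        from collections import deque

        def snowball_sample(adj, seeds, waves=2):
            """adj maps node -> iterable of neighbors; returns the sampled node set."""
            sampled = set(seeds)
            frontier = deque(seeds)
            for _ in range(waves):
                next_frontier = deque()
                while frontier:
                    node = frontier.popleft()
                    for nbr in adj.get(node, ()):
                        if nbr not in sampled:
                            sampled.add(nbr)
                            next_frontier.append(nbr)
                frontier = next_frontier
            return sampled

        # Toy graph: two waves from seed "a" reach b, c, then d, e (but not f).
        adj = {"a": ["b", "c"], "b": ["d"], "c": ["e"], "d": [], "e": ["f"]}
        print(snowball_sample(adj, ["a"]))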

  12. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  13. Big Math for Little Kids

    Science.gov (United States)

    Greenes, Carole; Ginsburg, Herbert P.; Balfanz, Robert

    2004-01-01

    "Big Math for Little Kids," a comprehensive program for 4- and 5-year-olds, develops and expands on the mathematics that children know and are capable of doing. The program uses activities and stories to develop ideas about number, shape, pattern, logical reasoning, measurement, operations on numbers, and space. The activities introduce the…

  14. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    Full Text Available In recent years, dealing with large volumes of data originating from social media sites and mobile communications, along with data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby generate market advantages. Thus, the companies that turn to Big Data have a competitive advantage over other firms. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects of the Big Data concept and the principles for building, organizing and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also refers to graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  15. From Big Bang to Eternity?

    Indian Academy of Sciences (India)

    at different distances (that is, at different epochs in the past) to come to this ... that the expansion started billions of years ago from an explosive Big Bang. Recent research sheds new light on the key cosmological question about the distant ...

  16. Banking Wyoming big sagebrush seeds

    Science.gov (United States)

    Robert P. Karrfalt; Nancy Shaw

    2013-01-01

    Five commercially produced seed lots of Wyoming big sagebrush (Artemisia tridentata Nutt. var. wyomingensis (Beetle & Young) S.L. Welsh [Asteraceae]) were stored under various conditions for 5 y. Purity, moisture content as measured by equilibrium relative humidity, and storage temperature were all important factors to successful seed storage. Our results indicate...

  17. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  18. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us of its consequences. This special

  19. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  20. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policy makers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the ‘Big Data

  1. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  2. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broadband power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  3. Bohmian histories and decoherent histories

    International Nuclear Information System (INIS)

    Hartle, James B.

    2004-01-01

    The predictions of the Bohmian and the decoherent (or consistent) histories formulations of the quantum mechanics of a closed system are compared for histories--sequences of alternatives at a series of times. For certain kinds of histories, Bohmian mechanics and decoherent histories may both be formulated in the same mathematical framework within which they can be compared. In that framework, Bohmian mechanics and decoherent histories represent a given history by different operators. Their predictions for the probabilities of histories of a closed system therefore generally differ. However, in an idealized model of measurement, the predictions of Bohmian mechanics and decoherent histories coincide for the probabilities of records of measurement outcomes. The formulations are thus difficult to distinguish experimentally. They may differ in their accounts of the past history of the Universe in quantum cosmology

  4. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  5. Postmodernism, historical denial, and history education:

    Directory of Open Access Journals (Sweden)

    Robert John Parkes

    2015-12-01

    Full Text Available History educators frequently ignore postmodernism, or engage with it only reluctantly and cautiously. This is arguably because postmodernism is frequently accused of assaulting the epistemological foundations of history as an academic discipline, fostering a climate of cultural relativism, encouraging the proliferation of revisionist histories, and providing fertile ground for historical denial. In the Philosophy of History discipline, Frank Ankersmit has become one of those scholars most closely associated with ‘postmodern history’. This paper explores Ankersmit’s ‘postmodern’ philosophy of history, particularly his key notion of ‘narrative substances’; what it might do for our approach to a problem such as historical denial; and what possibilities it presents for history didactics.

  6. Allocating health care: cost-utility analysis, informed democratic decision making, or the veil of ignorance?

    Science.gov (United States)

    Goold, S D

    1996-01-01

    Assuming that rationing health care is unavoidable, and that it requires moral reasoning, how should we allocate limited health care resources? This question is difficult because our pluralistic, liberal society has no consensus on a conception of distributive justice. In this article I focus on an alternative: Who shall decide how to ration health care, and how shall this be done to respect autonomy, pluralism, liberalism, and fairness? I explore three processes for making rationing decisions: cost-utility analysis, informed democratic decision making, and applications of the veil of ignorance. I evaluate these processes as examples of procedural justice, assuming that there is no outcome considered the most just. I use consent as a criterion to judge competing processes so that rationing decisions are, to some extent, self-imposed. I also examine the processes' feasibility in our current health care system. Cost-utility analysis does not meet criteria for actual or presumed consent, even if costs and health-related utility could be measured perfectly. Existing structures of government cannot creditably assimilate the information required for sound rationing decisions, and grassroots efforts are not representative. Applications of the veil of ignorance are more useful for identifying principles relevant to health care rationing than for making concrete rationing decisions. I outline a process of decision making, specifically for health care, that relies on substantive, selected representation, respects pluralism, liberalism, and deliberative democracy, and could be implemented at the community or organizational level.

  7. Excitatory and inhibitory priming by attended and ignored non-recycled words with monolinguals and bilinguals.

    Science.gov (United States)

    Neumann, Ewald; Nkrumah, Ivy K; Chen, Zhe

    2018-03-03

    Experiments examining identity priming from attended and ignored novel words (words that are used only once except when repetition is required due to experimental manipulation) in a lexical decision task are reported. Experiment 1 tested English monolinguals whereas Experiment 2 tested Twi (a native language of Ghana, Africa)-English bilinguals. Participants were presented with sequential pairs of stimuli composed of a prime followed by a probe, with each containing two items. The participants were required to name the target word in the prime display, and to make a lexical decision to the target item in the probe display. On attended repetition (AR) trials the probe target item was identical to the target word on the preceding attentional display. On ignored repetition (IR) trials the probe target item was the same as the distractor word in the preceding attentional display. The experiments produced facilitated (positive) priming in the AR trials and delayed (negative) priming in the IR trials. Significantly, the positive and negative priming effects also replicated across both monolingual and bilingual groups of participants, despite the fact that the bilinguals were responding to the task in their non-dominant language.

  8. Illiteracy, Ignorance, and Willingness to Quit Smoking among Villagers in India

    Science.gov (United States)

    Gorty, Prasad V. S. N. R.; Allam, Apparao

    1992-01-01

    During the field work to control oral cancer, difficulty in communication was encountered with illiterates. A study to define the role of illiteracy, ignorance and willingness to quit smoking among the villagers was undertaken in a rural area surrounding Doddipatla Village, A.P., India. Out of a total population of 3,550, 272 (7.7%) persons, mostly in the age range of 21–50 years, attended a cancer detection camp. There were 173 (63.6%) females and 99 (36.4%) males, among whom 66 (M53 + F13) were smokers; 36.4% of males and 63% of females were illiterate. Among the illiterates, it was observed that smoking rate was high (56%) and 47.7% were ignorant of health effects of smoking. The attitude of illiterate smokers was encouraging, as 83.6% were willing to quit smoking. Further research is necessary to design health education material for 413.5 million illiterates living in India (1991 Indian Census). A community health worker, trained in the use of mass media coupled with a person‐to‐person approach, may help the smoker to quit smoking. PMID:1506267

  9. History Matters

    Institute of Scientific and Technical Information of China (English)

    2017-01-01

    In 2002, she began working as a lecturer at Minzu University of China. Now, she teaches English, historical literature, ancient Chinese history, historical theory and method, ancient social history of China, ancient palace political history of China, and the history of the Sui and Tang dynasties and the Period of Five Dynasties.

  10. Data as an asset: What the oil and gas sector can learn from other industries about “Big Data”

    International Nuclear Information System (INIS)

    Perrons, Robert K.; Jensen, Jesse W.

    2015-01-01

    The upstream oil and gas industry has been contending with massive data sets and monolithic files for many years, but “Big Data” is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive amount of value that is being realized by Big Data technologies in other parts of the marketplace, however, much of the data collected within the oil and gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This viewpoint examines existing data management practices in the upstream oil and gas industry, and compares them to practices and philosophies that have emerged in organizations that are leading the way in Big Data. The comparison shows that, in companies that are widely considered to be leaders in Big Data analytics, data is regarded as a valuable asset—but this is usually not true within the oil and gas industry insofar as data is frequently regarded there as descriptive information about a physical asset rather than something that is valuable in and of itself. The paper then discusses how the industry could potentially extract more value from data, and concludes with a series of policy-related questions to this end.
    -- Highlights:
    • Upstream oil and gas industry frequently discards or ignores the data it collects
    • The sector tends to view data as descriptive information about the state of assets
    • Leaders in Big Data, by stark contrast, regard data as an asset in and of itself
    • Industry should use Big Data tools to extract more value from digital information

  11. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  12. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe.

  13. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  14. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  15. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  16. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c)^2. At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios.

  17. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  18. Big Data – Big Deal for Organization Design?

    OpenAIRE

    Janne J. Korhonen

    2014-01-01

    Analytics is an increasingly important source of competitive advantage. It has even been posited that big data will be the next strategic emphasis of organizations and that analytics capability will be manifested in organizational structure. In this article, I explore how analytics capability might be reflected in organizational structure using the notion of “requisite organization” developed by Jaques (1998). Requisite organization argues that a new strategic emphasis requires the addition ...

  19. Nowcasting using news topics Big Data versus big bank

    OpenAIRE

    Thorsrud, Leif Anders

    2016-01-01

    The agents in the economy use a plethora of high-frequency information, including news media, to guide their actions and thereby shape aggregate economic fluctuations. Traditional nowcasting approaches have made relatively little use of such information. In this paper, I show how unstructured textual information in a business newspaper can be decomposed into daily news topics and used to nowcast quarterly GDP growth. Compared with a big bank of experts, here represented by official c...

  20. Histories electromagnetism

    International Nuclear Information System (INIS)

    Burch, Aidan

    2004-01-01

    Working within the HPO (History Projection Operator) Consistent Histories formalism, we follow the work of Savvidou on (scalar) field theory [J. Math. Phys. 43, 3053 (2002)] and that of Savvidou and Anastopoulos on (first-class) constrained systems [Class. Quantum Grav. 17, 2463 (2000)] to write a histories theory (both classical and quantum) of Electromagnetism. We focus particularly on the foliation-dependence of the histories phase space/Hilbert space and the action thereon of the two Poincaré groups that arise in histories field theory. We quantize in the spirit of the Dirac scheme for constrained systems.

  1. Maximizing the Educational Power of History Movies in the Classroom

    Science.gov (United States)

    Metzger, Scott Alan

    2010-01-01

    Cinematic feature films are a big part of youth popular culture. When blockbuster movies are about historical topics, it is reasonable for teachers to be drawn to using them in the classroom to motivate students' interest. This article overviews research on film in the history classroom and describes three learning functions that history movies can…

  2. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  3. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  4. Big Data for Precision Medicine

    Directory of Open Access Journals (Sweden)

    Daniel Richard Leff

    2015-09-01

    Full Text Available This article focuses on the potential impact of big data analysis to improve health, prevent and detect disease at an earlier stage, and personalize interventions. The role that big data analytics may have in interrogating the patient electronic health record toward improved clinical decision support is discussed. We examine developments in pharmacogenetics that have increased our appreciation of the reasons why patients respond differently to chemotherapy. We also assess the expansion of online health communications and the way in which this data may be capitalized on in order to detect public health threats and control or contain epidemics. Finally, we describe how a new generation of wearable and implantable body sensors may improve wellbeing, streamline management of chronic diseases, and improve the quality of surgical implants.

  5. Big Data hvor N=1

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    2017-01-01

    Research on the application of 'big data' in health care has only just begun, and in time it may become a great help in organizing more personalized and holistic health care for patients with multiple chronic conditions. Personal health technology, which is briefly presented in this chapter, holds great potential for carrying out 'big data' analyses for the individual person, that is, where N=1. There are major technological challenges in building technologies and methods to collect and handle personal data that can be shared, across settings, in a standardized, responsible, robust, secure and not...

  6. George and the big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2012-01-01

    George has problems. He has twin baby sisters at home who demand his parents’ attention. His beloved pig Freddy has been exiled to a farm, where he’s miserable. And worst of all, his best friend, Annie, has made a new friend whom she seems to like more than George. So George jumps at the chance to help Eric with his plans to run a big experiment in Switzerland that seeks to explore the earliest moment of the universe. But there is a conspiracy afoot, and a group of evildoers is planning to sabotage the experiment. Can George repair his friendship with Annie and piece together the clues before Eric’s experiment is destroyed forever? This engaging adventure features essays by Professor Stephen Hawking and other eminent physicists about the origins of the universe and ends with a twenty-page graphic novel that explains how the Big Bang happened—in reverse!

  7. Did the Big Bang begin?

    International Nuclear Information System (INIS)

    Levy-Leblond, J.

    1990-01-01

    It is argued that the age of the universe may well be numerically finite (20 billion years or so) and conceptually infinite. A new and natural time scale is defined on a physical basis using group-theoretical arguments. An additive notion of time is obtained according to which the age of the universe is indeed infinite. In other words, never did the Big Bang begin. This new time scale is not supposed to replace the ordinary cosmic time scale, but to supplement it (in the same way as rapidity has taken a place by the side of velocity in Einsteinian relativity). The question is discussed within the framework of conventional (big-bang) and classical (nonquantum) cosmology, but could easily be extended to more elaborate views, as the purpose is not so much to modify present theories as to reach a deeper understanding of their meaning

  8. Big Data in Drug Discovery.

    Science.gov (United States)

    Brown, Nathan; Cambruzzi, Jean; Cox, Peter J; Davies, Mark; Dunbar, James; Plumbley, Dean; Sellwood, Matthew A; Sim, Aaron; Williams-Jones, Bryn I; Zwierzyna, Magdalena; Sheppard, David W

    2018-01-01

    Interpretation of Big Data in the drug discovery community should enhance project timelines and reduce clinical attrition through improved early decision making. The issues we encounter start with the sheer volume of data and how we first ingest it before building an infrastructure to house it to make use of the data in an efficient and productive way. There are many problems associated with the data itself including general reproducibility, but often, it is the context surrounding an experiment that is critical to success. Help, in the form of artificial intelligence (AI), is required to understand and translate the context. On the back of natural language processing pipelines, AI is also used to prospectively generate new hypotheses by linking data together. We explain Big Data from the context of biology, chemistry and clinical trials, showcasing some of the impressive public domain sources and initiatives now available for interrogation. © 2018 Elsevier B.V. All rights reserved.

  9. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts.

    Science.gov (United States)

    Mittelstadt, Brent Daniel; Floridi, Luciano

    2016-04-01

    The capacity to collect and analyse data is growing exponentially. Referred to as 'Big Data', this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: (1) informed consent, (2) privacy (including anonymisation and data protection), (3) ownership, (4) epistemology and objectivity, and (5) 'Big Data Divides' created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: (6) the dangers of ignoring group-level ethical harms; (7) the importance of epistemology in assessing the ethics of Big Data; (8) the changing nature of fiduciary relationships that

  10. Big Data and central banks

    OpenAIRE

    David Bholat

    2015-01-01

    This commentary recaps a Centre for Central Banking Studies event held at the Bank of England on 2–3 July 2014. The article covers three main points. First, it situates the Centre for Central Banking Studies event within the context of the Bank’s Strategic Plan and initiatives. Second, it summarises and reflects on major themes from the event. Third, the article links central banks’ emerging interest in Big Data approaches with their broader uptake by other economic agents.

  11. Big bang is not needed

    Energy Technology Data Exchange (ETDEWEB)

    Allen, A.D.

    1976-02-01

    Recent computer simulations indicate that a system of n gravitating masses breaks up, even when the total energy is negative. As a result, almost any initial phase-space distribution results in a universe that eventually expands under the Hubble law. Hence Hubble expansion implies little regarding an initial cosmic state. In particular, it does not imply the singularly dense superpositioned state used in the big bang model.

  12. Spin masters how the media ignored the real news and helped reelect Barack Obama

    CERN Document Server

    Freddoso, David

    2013-01-01

    The biggest story of the election was how the media ignored the biggest story of the election. Amid all the breathless coverage of a non-existent War on Women, there was little or no coverage of Obama's war on the economy: how, for instance, part-time work is replacing full-time work; how low-wage jobs are replacing high-wage ones; how for Americans between the ages of 25 and 54 there are fewer jobs today than there were when the recession officially ended in 2009, and fewer, in fact, than at any time since mid-1997. The downsizing of the American economy wasn't the only stor

  13. On Moderator Detection in Anchoring Research: Implications of Ignoring Estimate Direction

    Directory of Open Access Journals (Sweden)

    Nathan N. Cheek

    2018-05-01

    Full Text Available Anchoring, whereby judgments assimilate to previously considered standards, is one of the most reliable effects in psychology. In the last decade, researchers have become increasingly interested in identifying moderators of anchoring effects. We argue that a drawback of traditional moderator analyses in the standard anchoring paradigm is that they ignore estimate direction—whether participants’ estimates are higher or lower than the anchor value. We suggest that failing to consider estimate direction can sometimes obscure moderation in anchoring tasks, and discuss three potential analytic solutions that take estimate direction into account. Understanding moderators of anchoring effects is essential for a basic understanding of anchoring and for applied research on reducing the influence of anchoring in real-world judgments. Considering estimate direction reduces the risk of failing to detect moderation.

  14. Effects of ignoring baseline on modeling transitions from intact cognition to dementia.

    Science.gov (United States)

    Yu, Lei; Tyas, Suzanne L; Snowdon, David A; Kryscio, Richard J

    2009-07-01

    This paper evaluates the effect of ignoring baseline when modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. Transitions among states are modeled by a discrete-time Markov chain having three transient (intact cognition, MCI, and GI) and two competing absorbing states (death and dementia). Transition probabilities depend on two covariates, age and the presence/absence of an apolipoprotein E-epsilon4 allele, through a multinomial logistic model with shared random effects. Results are illustrated with an application to the Nun Study, a cohort of 678 participants 75+ years of age at baseline and followed longitudinally with up to ten cognitive assessments per nun.
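
    The multinomial logistic transition structure described above is straightforward to sketch in code. Below is a minimal Python simulation of such a five-state chain, in which the transition probabilities out of each transient state depend on age and APOE-epsilon4 status; all coefficient values, the age scaling, and the simulate_path helper are illustrative assumptions, not the fitted Nun Study model, and the shared random effects are reduced to a single subject-level intercept.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # States: 0=intact, 1=MCI, 2=GI (transient); 3=dementia, 4=death (absorbing).
    STATES = ["intact", "MCI", "GI", "dementia", "death"]

    # For each transient state: its competing destination states and a
    # (n_destinations x 3) matrix of hypothetical coefficients for
    # (intercept, age in decades over 75, APOE-epsilon4 indicator).
    # Remaining in the current state is the multinomial reference category.
    MODEL = {
        0: ([1, 2, 3, 4], np.array([[-2.0, 0.5, 0.6],
                                    [-3.0, 0.6, 0.7],
                                    [-4.0, 0.7, 0.9],
                                    [-3.5, 0.8, 0.1]])),
        1: ([0, 2, 3, 4], np.array([[-1.5, -0.2, -0.3],
                                    [-2.5, 0.5, 0.6],
                                    [-3.0, 0.6, 0.8],
                                    [-3.0, 0.8, 0.1]])),
        2: ([1, 3, 4], np.array([[-2.0, -0.1, -0.2],
                                 [-2.0, 0.6, 0.8],
                                 [-2.5, 0.8, 0.1]])),
    }

    def transition_probs(state, age, apoe4, b):
        """Multinomial logistic probabilities over the competing transitions."""
        dests, coef = MODEL[state]
        x = np.array([1.0, (age - 75.0) / 10.0, float(apoe4)])
        logits = coef @ x + b                  # shared random effect shifts all logits
        expl = np.exp(np.append(logits, 0.0))  # logit 0 = reference (stay put)
        return dests + [state], expl / expl.sum()

    def simulate_path(age0, apoe4, sigma_b=0.5, max_visits=10):
        """One subject's trajectory across (assumed annual) assessments."""
        b = rng.normal(0.0, sigma_b)           # subject-level random intercept
        state, age, path = 0, age0, ["intact"]
        for _ in range(max_visits):
            if state in (3, 4):                # dementia or death: absorbed
                break
            choices, probs = transition_probs(state, age, apoe4, b)
            state = rng.choice(choices, p=probs)
            path.append(STATES[state])
            age += 1.0
        return path

    print(simulate_path(age0=80, apoe4=1))
    ```

    Note that the sketch simply starts every subject in the intact state; how (or whether) the observed baseline state is accounted for is exactly the modeling choice whose consequences the paper evaluates.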

  15. The wisdom of ignorant crowds: Predicting sport outcomes by mere recognition

    Directory of Open Access Journals (Sweden)

    Stefan M. Herzog

    2011-02-01

    Full Text Available The collective recognition heuristic is a simple forecasting heuristic that bets on the fact that people's recognition knowledge of names is a proxy for their competitiveness: in sports, it predicts that the better-known team or player wins a game. We present two studies on the predictive power of recognition in forecasting soccer games (World Cup 2006 and UEFA Euro 2008) and analyze previously published results. The performance of the collective recognition heuristic is compared to two benchmarks: predictions based on official rankings and aggregated betting odds. Across three soccer and two tennis tournaments, the predictions based on recognition performed similarly to those based on rankings; when compared with betting odds, the heuristic fared reasonably well. Forecasts based on rankings (but not on betting odds) were improved by incorporating collective recognition information. We discuss the use of recognition for forecasting in sports and conclude that aggregating across individual ignorance spawns collective wisdom.
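
    As a concrete illustration of how such a recognition-based forecast can be computed, here is a minimal Python sketch; the recognition rates and fixtures are invented for the example and are not data from the studies above.

    ```python
    # Hypothetical recognition rates: the fraction of surveyed laypeople who
    # recognized each team's name (invented numbers, not the study's data).
    recognition = {"Spain": 0.95, "Germany": 0.97, "Latvia": 0.35, "Moldova": 0.22}

    def predict_winner(team_a, team_b):
        """Collective recognition heuristic: the better-known name is predicted
        to win; return None on a tie, where the heuristic must guess."""
        ra, rb = recognition[team_a], recognition[team_b]
        if ra == rb:
            return None
        return team_a if ra > rb else team_b

    for a, b in [("Germany", "Latvia"), ("Spain", "Moldova"), ("Spain", "Germany")]:
        print(f"{a} vs {b}: predicted winner = {predict_winner(a, b)}")
    ```

    The "collective" part lives entirely in the recognition rates, which aggregate many individuals' partial ignorance into a single ranking.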

  16. Big data: the management revolution.

    Science.gov (United States)

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  17. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  18. Entangled histories

    International Nuclear Information System (INIS)

    Cotler, Jordan; Wilczek, Frank

    2016-01-01

    We introduce quantum history states and their mathematical framework, thereby reinterpreting and extending the consistent histories approach to quantum theory. Through thought experiments, we demonstrate that our formalism allows us to analyze a quantum version of history in which we reconstruct the past by observations. In particular, we can pass from measurements to inferences about ‘what happened’ in a way that is sensible and free of paradox. Our framework allows for a richer understanding of the temporal structure of quantum theory, and we construct history states that embody peculiar, non-classical correlations in time. (paper)

  19. Food Quality Certificates and Research on Effect of Food Quality Certificates to Determinate Ignored Level of Buying Behavioral: A Case Study in Hitit University Feas Business Department

    Directory of Open Access Journals (Sweden)

    Hulya CAGIRAN KENDIRLI

    2014-12-01

    According to the results of the research, there is no relationship between the students' demographic characteristics and the level at which food quality legislation is ignored, but there is a relationship between gender and the level at which food quality legislation is ignored.

  20. A Perplexed Economist Confronts 'too Big to Fail'

    Directory of Open Access Journals (Sweden)

    Scherer, F. M.

    2010-12-01

    Full Text Available This paper examines premises and data underlying the assertion that some financial institutions in the U.S. economy were "too big to fail" and hence warranted government bailout. It traces the merger histories enhancing the dominance of six leading firms in the U.S. banking industry and the sharp increases in the concentration of financial institution assets accompanying that merger wave. Financial institution profits are found to have soared in tandem with rising concentration. The paper advances hypotheses why these phenomena might be related and surveys relevant empirical literature on the relationships between market concentration, interest rates received and charged by banks, and economies of scale in banking.

  1. Tachyon cosmology, supernovae data, and the big brake singularity

    International Nuclear Information System (INIS)

    Keresztes, Z.; Gergely, L. A.; Gorini, V.; Moschella, U.; Kamenshchik, A. Yu.

    2009-01-01

    We compare the existing observational data on type Ia supernovae with the evolutions of the Universe predicted by a one-parameter family of tachyon models which we have introduced recently [Phys. Rev. D 69, 123512 (2004)]. Among the set of the trajectories of the model which are compatible with the data there is a consistent subset for which the Universe ends up in a new type of soft cosmological singularity dubbed big brake. This opens up yet another scenario for the future history of the Universe besides the one predicted by the standard ΛCDM model.

  2. Turning big bang into big bounce. I. Classical dynamics

    Science.gov (United States)

    Dzierżak, Piotr; Małkiewicz, Przemysław; Piechocki, Włodzimierz

    2009-11-01

    The big bounce (BB) transition within a flat Friedmann-Robertson-Walker model is analyzed in the setting of loop geometry underlying the loop cosmology. We solve the constraint of the theory at the classical level to identify physical phase space and find the Lie algebra of the Dirac observables. We express energy density of matter and geometrical functions in terms of the observables. It is the modification of classical theory by the loop geometry that is responsible for BB. The classical energy scale specific to BB depends on a parameter that should be fixed either by cosmological data or determined theoretically at quantum level, otherwise the energy scale stays unknown.

  3. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  4. Intellectual History

    DEFF Research Database (Denmark)

    In the 5 Questions book series, this volume presents a range of leading scholars in Intellectual History and the History of Ideas through their answers to a brief questionnaire. Respondents include Michael Friedman, Jacques le Goff, Hans Ulrich Gumbrecht, Jonathan Israel, Philip Pettit, John Pocock...

  5. Supernova Cosmology in the Big Data Era

    Science.gov (United States)

    Kessler, Richard

    Here we describe large "Big Data" Supernova (SN) Ia surveys, past and present, used to make precision measurements of cosmological parameters that describe the expansion history of the universe. In particular, we focus on surveys designed to measure the dark energy equation of state parameter w and its dependence on cosmic time. These large surveys have at least four photometric bands, and they use a rolling search strategy in which the same instrument is used for both discovery and photometric follow-up observations. These surveys include the Supernova Legacy Survey (SNLS), Sloan Digital Sky Survey II (SDSS-II), Pan-STARRS 1 (PS1), Dark Energy Survey (DES), and Large Synoptic Survey Telescope (LSST). We discuss the development of how systematic uncertainties are evaluated, and how methods to reduce them play a major role in designing new surveys. The key systematic effects that we discuss are (1) calibration, measuring the telescope efficiency in each filter band, (2) biases from a magnitude-limited survey and from the analysis, and (3) photometric SN classification for current surveys that don't have enough resources to spectroscopically confirm each SN candidate.

  6. Psycho-informatics: Big Data shaping modern psychometrics.

    Science.gov (United States)

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

    For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. Analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and ...

  8. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping where large amounts of information is collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints to the use of big data include cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  9. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  10. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data by introducing some novel optimization algorithms and codes capable of working in the big data setting, as well as some applications in big data optimization, for interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyse large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, have been explored in this book.

  11. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.
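
    To make the confounding point concrete, the following is a minimal sketch of one of the methods named above, propensity score analysis via inverse-probability-of-treatment weighting, on synthetic data; the data-generating process and coefficients are assumptions for the example, not results from any study cited here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic observational data: one confounder (e.g., disease severity)
    # drives both treatment assignment and outcome; the true effect is 1.0.
    n = 5000
    confounder = rng.normal(size=n)
    treat = rng.binomial(1, 1 / (1 + np.exp(-confounder)))   # sicker -> treated more
    outcome = 1.0 * treat - 2.0 * confounder + rng.normal(size=n)

    # Naive comparison of group means is biased by the confounder.
    naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

    # Propensity score P(treatment | confounder), then inverse-probability weights.
    X = confounder.reshape(-1, 1)
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    w = treat / ps + (1 - treat) / (1 - ps)

    # Weighted difference in means approximately recovers the true effect.
    ipw = (np.average(outcome[treat == 1], weights=w[treat == 1])
           - np.average(outcome[treat == 0], weights=w[treat == 0]))
    print(f"naive: {naive:.2f}, IPW-adjusted: {ipw:.2f} (true effect: 1.0)")
    ```

    As the abstract notes, such adjustments still cannot rule out bias from unmeasured confounders; the sketch works only because the single confounder is observed.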

  12. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn; Liu, Yan, E-mail: ztduan@chd.edu.cn; Dai, Jiting, E-mail: ztduan@chd.edu.cn; Kang, Jun, E-mail: ztduan@chd.edu.cn [Chang' an University School of Information Engineering, Xi' an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi' an (China)

    2014-10-06

    The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.

  13. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  14. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.

  15. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  16. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over Big data with R packages. It would be helpful if readers have basic knowledge of R.

  17. Tuberculosis control in big cities and urban risk groups in the European Union: a consensus statement.

    Science.gov (United States)

    van Hest, N A; Aldridge, R W; de Vries, G; Sandgren, A; Hauer, B; Hayward, A; Arrazola de Oñate, W; Haas, W; Codecasa, L R; Caylà, J A; Story, A; Antoine, D; Gori, A; Quabeck, L; Jonsson, J; Wanlin, M; Orcau, Å; Rodes, A; Dedicoat, M; Antoun, F; van Deutekom, H; Keizer, St; Abubakar, I

    2014-03-06

    In low-incidence countries in the European Union (EU), tuberculosis (TB) is concentrated in big cities, especially among certain urban high-risk groups including immigrants from TB high-incidence countries, homeless people, and those with a history of drug and alcohol misuse. Elimination of TB in European big cities requires control measures focused on multiple layers of the urban population. The particular complexities of major EU metropolises, for example high population density and social structure, create specific opportunities for transmission, but also enable targeted TB control interventions, not efficient in the general population, to be effective or cost effective. Lessons can be learnt from across the EU and this consensus statement on TB control in big cities and urban risk groups was prepared by a working group representing various EU big cities, brought together on the initiative of the European Centre for Disease Prevention and Control. The consensus statement describes general and specific social, educational, operational, organisational, legal and monitoring TB control interventions in EU big cities, as well as providing recommendations for big city TB control, based upon a conceptual TB transmission and control model.

  18. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since then emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that have been unreached before. Big data is generally characterized by three factors: volume, velocity, and variety. These three factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  19. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  20. Earth Science Data Analysis in the Era of Big Data

    Science.gov (United States)

    Kuo, K.-S.; Clune, T. L.; Ramachandran, R.

    2014-01-01

    Anyone with even a cursory interest in information technology cannot help but recognize that "Big Data" is one of the most fashionable catchphrases of late. From accurate voice and facial recognition, language translation, and airfare prediction and comparison, to monitoring the real-time spread of flu, Big Data techniques have been applied to many seemingly intractable problems with spectacular successes. They appear to be a rewarding way to approach many currently unsolved problems. Few fields of research can claim a longer history with problems involving voluminous data than Earth science. The problems we are facing today with our Earth's future are more complex and carry potentially graver consequences than the examples given above. How has our climate changed? Besides natural variations, what is causing these changes? What are the processes involved and through what mechanisms are these connected? How will they impact life as we know it? In attempts to answer these questions, we have resorted to observations and numerical simulations with ever-finer resolutions, which continue to feed the "data deluge." Plausibly, many Earth scientists are wondering: How will Big Data technologies benefit Earth science research? As an example from the global water cycle, one subdomain among many in Earth science, how would these technologies accelerate the analysis of decades of global precipitation to ascertain the changes in its characteristics, to validate these changes in predictive climate models, and to infer the implications of these changes to ecosystems, economies, and public health? Earth science researchers need a viable way to harness the power of Big Data technologies to analyze large volumes and varieties of data with velocity and veracity. Beyond providing speedy data analysis capabilities, Big Data technologies can also play a crucial, albeit indirect, role in boosting scientific productivity by facilitating effective collaboration within an analysis environment

  1. The faces of Big Science.

    Science.gov (United States)

    Schatz, Gottfried

    2014-06-01

    Fifty years ago, academic science was a calling with few regulations or financial rewards. Today, it is a huge enterprise confronted by a plethora of bureaucratic and political controls. This change was not triggered by specific events or decisions but reflects the explosive 'knee' in the exponential growth that science has sustained during the past three-and-a-half centuries. Coming to terms with the demands and benefits of 'Big Science' is a major challenge for today's scientific generation. Since its foundation 50 years ago, the European Molecular Biology Organization (EMBO) has been of invaluable help in meeting this challenge.

  2. Big Data and central banks

    Directory of Open Access Journals (Sweden)

    David Bholat

    2015-04-01

    Full Text Available This commentary recaps a Centre for Central Banking Studies event held at the Bank of England on 2–3 July 2014. The article covers three main points. First, it situates the Centre for Central Banking Studies event within the context of the Bank’s Strategic Plan and initiatives. Second, it summarises and reflects on major themes from the event. Third, the article links central banks’ emerging interest in Big Data approaches with their broader uptake by other economic agents.

  3. Inhomogeneous Big Bang Nucleosynthesis Revisited

    OpenAIRE

    Lara, J. F.; Kajino, T.; Mathews, G. J.

    2006-01-01

    We reanalyze the allowed parameters for inhomogeneous big bang nucleosynthesis in light of the WMAP constraints on the baryon-to-photon ratio and a recent measurement which has set the neutron lifetime to be 878.5 +/- 0.7 +/- 0.3 seconds. For a set baryon-to-photon ratio the new lifetime reduces the mass fraction of He4 by 0.0015 but does not significantly change the abundances of other isotopes. This enlarges the region of concordance between He4 and deuterium in the parameter space of the b...

  4. [Blood transfusion, an investigation on its brief history].

    Science.gov (United States)

    Wang, B; Peng, X

    2000-07-01

    Transfusion has developed into a practical clinical technique. Over hundreds of years, its development has progressed from ignorance to science and from cruelty to civilization. Transfusion has made great contributions to saving lives and to expanding the scope of surgical operations. Understanding the history of transfusion provides a reference for further advancing transfusion techniques.

  5. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  6. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  7. Big Data and surveillance, part 1: Definitions and discussions concerning Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions, and the various questions connected with big data. In this first part I will try to set out a few things concerning Big Data theory and

  8. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  9. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  10. Phonological processing of ignored distractor pictures, an fMRI investigation.

    Science.gov (United States)

    Bles, Mart; Jansma, Bernadette M

    2008-02-11

    Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.

  11. Tobacco Usage in Uttarakhand: A Dangerous Combination of High Prevalence, Widespread Ignorance, and Resistance to Quitting

    Directory of Open Access Journals (Sweden)

    Nathan John Grills

    2015-01-01

    Background. Nearly one-third of adults in India use tobacco, resulting in 1.2 million deaths. However, little is known about knowledge, attitudes, and practices (KAP) related to smoking in the impoverished state of Uttarakhand. Methods. A cross-sectional epidemiological prevalence survey was undertaken. Multistage cluster sampling selected 20 villages and 50 households to survey, from which 1853 people were interviewed. Tobacco prevalence and KAP were analyzed by income level, occupation, age, and sex. 95% confidence intervals were calculated using standard formulas and incorporating assumptions in relation to the clustering effect. Results. The overall prevalence of tobacco usage, defined using WHO criteria, was 38.9%. 93% of smokers and 86% of tobacco chewers were male. Prevalence of tobacco use, controlling for other factors, was associated with lower education, older age, and male sex. 97.6% of users and 98.1% of nonusers wanted less tobacco. Except for lung cancer (89% awareness), awareness of diseases caused by tobacco usage was low (cardiac: 67%; infertility: 32.5%; stroke: 40.5%). Conclusion. A dangerous combination of high tobacco usage prevalence, ignorance about its dangers, and few quit attempts being made suggests the need to develop effective and evidence-based interventions to prevent a health and development disaster in Uttarakhand.

  12. Reassessing insurers' access to genetic information: genetic privacy, ignorance, and injustice.

    Science.gov (United States)

    Feiring, Eli

    2009-06-01

    Many countries have imposed strict regulations on the genetic information to which insurers have access. Commentators have warned against the emerging body of legislation for different reasons. This paper demonstrates that, when confronted with the argument that genetic information should be available to insurers for health insurance underwriting purposes, one should avoid appeals to rights of genetic privacy and genetic ignorance. The principle of equality of opportunity may nevertheless warrant restrictions. A choice-based account of this principle implies that it is unfair to hold people responsible for the consequences of the genetic lottery, since we have no choice in selecting our genotype or the expression of it. However appealing, this view does not take us all the way to an adequate justification of inaccessibility of genetic information. A contractarian account, suggesting that health is a condition of opportunity and that healthcare is an essential good, seems more promising. I conclude that if or when predictive medical tests (such as genetic tests) are developed with significant actuarial value, individuals have less reason to accept as fair institutions that limit access to healthcare on the grounds of risk status. Given the assumption that a division of risk pools in accordance with a rough estimate of people's level of (genetic) risk will occur, fairness and justice favour universal health insurance based on solidarity.

  13. Phonological processing of ignored distractor pictures, an fMRI investigation

    Directory of Open Access Journals (Sweden)

    Bles Mart

    2008-02-01

    Background: Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction) or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Results: Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Conclusion: Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.

  14. IGNORING CHILDREN'S BEDTIME CRYING: THE POWER OF WESTERN-ORIENTED BELIEFS.

    Science.gov (United States)

    Maute, Monique; Perren, Sonja

    2018-03-01

    Ignoring children's bedtime crying (ICBC) is an issue that polarizes parents as well as pediatricians. While most studies have focused on the effectiveness of sleep interventions, no study has yet questioned which parents use ICBC. Parents often find children's sleep difficulties very challenging, but factors such as the influence of Western approaches to infant care, stress, and sensitivity have not been analyzed in relation to ICBC. A sample of 586 parents completed a questionnaire to investigate the relationships between parental factors and the method of ICBC. Data were analyzed using structural equation modeling. Latent variables were used to measure parental stress (Parental Stress Scale; J.O. Berry & W.H. Jones, 1995), sensitivity (Situation-Reaction-Questionnaire; Y. Hänggi, K. Schweinberger, N. Gugger, & M. Perrez, 2010), Western-oriented parental beliefs (Rigidity), and children's temperament (Parenting Stress Index; H. Tröster & R.R. Abidin). ICBC was used by 32.6% (n = 191) of parents in this study. Parents' Western-oriented beliefs predicted ICBC: attitudes such as feeding a child on a fixed schedule and not carrying the child (to prevent dependence) were associated with letting the child cry until it falls asleep. Low-sensitivity parents as well as parents of children with a difficult temperament used ICBC more frequently. Path analysis showed that parental stress did not predict ICBC. The results suggest that ICBC has become part of the Western childrearing tradition.

  15. Experimental amplification of an entangled photon: what if the detection loophole is ignored?

    International Nuclear Information System (INIS)

    Pomarico, Enrico; Sanguinetti, Bruno; Sekatski, Pavel; Zbinden, Hugo; Gisin, Nicolas

    2011-01-01

    The experimental verification of quantum features, such as entanglement, at large scales is extremely challenging because of environment-induced decoherence. Indeed, measurement techniques for demonstrating the quantumness of multiparticle systems in the presence of losses are difficult to define, and if they are not sufficiently accurate they can provide wrong conclusions. We present a Bell test where one photon of an entangled pair is amplified and then detected by threshold detectors, whose signals undergo postselection. The amplification is performed by a classical machine, which produces a fully separable micro-macro state. However, by adopting such a technique one can surprisingly observe a violation of the Clauser-Horne-Shimony-Holt inequality. This is due to the fact that ignoring the detection loophole opened by the postselection and the system losses can lead to misinterpretations, such as claiming micro-macro entanglement in a setup where evidently it is not present. By using threshold detectors and postselection, one can only infer the entanglement of the initial pair of photons, and so micro-micro entanglement, as is further confirmed by the violation of a nonseparability criterion for bipartite systems. How to detect photonic micro-macro entanglement in the presence of losses with the currently available technology remains an open question.
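
    For reference, the Clauser-Horne-Shimony-Holt (CHSH) inequality tested here bounds a combination of correlators E measured at analyzer settings (a, a') and (b, b'); any local model, and likewise any fully separable state measured without loopholes, must satisfy (standard textbook form, our notation):

        S = |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2

    Quantum mechanics allows S up to 2\sqrt{2}, so observing S > 2 after loophole-opening postselection, as in this experiment, is exactly the kind of result that invites the misinterpretation the authors warn against.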

  16. Commentary: Ignorance as Bias: Radiolab, Yellow Rain, and “The Fact of the Matter”

    Directory of Open Access Journals (Sweden)

    Paul Hillmer

    2017-12-01

    In 2012 the National Public Radio show “Radiolab” released a podcast (later broadcast on air) essentially asserting that Hmong victims of a suspected chemical agent known as “yellow rain” were ignorant of their surroundings and the facts, and were merely victims of exposure, dysentery, tainted water, and other natural causes. Relying heavily on the work of Dr. Matthew Meselson, Dr. Thomas Seeley, and former CIA officer Merle Pribbenow, Radiolab asserted that Hmong victims mistook bee droppings, defecated en masse by flying Asian honey bees, for “yellow rain.” They brought their foregone conclusions to an interview with Eng Yang, a self-described yellow rain survivor, and his niece, memoirist Kao Kalia Yang, who served as translator. The interview went horribly wrong when their dogged belief in the “bee dung hypothesis” was met with stiff and ultimately impassioned opposition. Radiolab’s confirmation bias led them to dismiss contradictory scientific evidence and mislead their audience. While the authors remain agnostic about the potential use of yellow rain in Southeast Asia, they believe the evidence shows that further study is needed before a final conclusion can be reached.

  17. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  18. Baryon symmetric big bang cosmology

    Science.gov (United States)

    Stecker, F. W.

    1978-01-01

    Both the quantum theory and Einstein's theory of special relativity lead to the supposition that matter and antimatter were produced in equal quantities during the big bang. It is noted that local matter/antimatter asymmetries may be reconciled with universal symmetry by assuming (1) a slight imbalance of matter over antimatter in the early universe, annihilation, and a subsequent remainder of matter; (2) localized regions of excess for one or the other type of matter as an initial condition; and (3) an extremely dense, high temperature state with zero net baryon number; i.e., matter/antimatter symmetry. Attention is given to the third assumption, which is the simplest and the most in keeping with current knowledge of the cosmos, especially as pertains to the universality of the 3 K background radiation. Mechanisms of galaxy formation are discussed, whereby matter and antimatter might have collided and annihilated each other, or have coexisted (and continue to coexist) at vast distances. It is pointed out that baryon symmetric big bang cosmology could probably be proved if an antinucleus could be detected in cosmic radiation.

  19. Georges et le big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2011-01-01

    Georges and Annie, his best friend, are about to witness one of the most important scientific experiments of all time: exploring the first instants of the Universe, the Big Bang! Thanks to Cosmos, their supercomputer, and to the Large Hadron Collider built by Eric, Annie's father, they will at last be able to answer the essential question: why do we exist? But Georges and Annie discover that a diabolical plot is being hatched. Worse, all of scientific research is in peril! Swept into incredible adventures, Georges will travel to the far reaches of the galaxy to save his friends... A fascinating plunge into the heart of the Big Bang, drawing on the very latest theories of Stephen Hawking and today's leading scientists.

  20. Astronomical Surveys and Big Data

    Directory of Open Access Journals (Sweden)

    Mickaelian Areg M.

    2016-03-01

    Recent all-sky and large-area astronomical surveys and their catalogued data over the whole range of the electromagnetic spectrum, from γ-rays to radio waves, are reviewed, including Fermi-GLAST and INTEGRAL in the γ-ray, ROSAT, XMM and Chandra in the X-ray, GALEX in the UV, SDSS and several POSS I and POSS II-based catalogues (APM, MAPS, USNO, GSC) in the optical range, 2MASS in the NIR, WISE and AKARI IRC in the MIR, IRAS and AKARI FIS in the FIR, NVSS and FIRST in the radio range, and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS), and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). An overall understanding of the coverage along the whole wavelength range and comparisons between various surveys are given: galaxy redshift surveys, QSO/AGN, radio, Galactic structure, and Dark Energy surveys. Astronomy has entered the Big Data era, with Astrophysical Virtual Observatories and Computational Astrophysics playing an important role in using and analyzing big data for new discoveries.

  1. Big data in oncologic imaging.

    Science.gov (United States)

    Regge, Daniele; Mazzetti, Simone; Giannini, Valentina; Bracco, Christian; Stasi, Michele

    2017-06-01

    Cancer is a complex disease, and unfortunately understanding how the components of the cancer system work does not help understand the behavior of the system as a whole. In the words of the Greek philosopher Aristotle, "the whole is greater than the sum of parts." To date, thanks to improved information technology infrastructures, it is possible to store data from each single cancer patient, including clinical data, medical images, laboratory tests, and pathological and genomic information. Indeed, medical archive storage constitutes approximately one-third of total global storage demand, and a large part of these data are in the form of medical images. The opportunity now is to draw insight from the whole for the benefit of each individual patient. In the oncologic patient, big data analysis is still at an early stage, but several useful applications can be envisaged, including the development of imaging biomarkers to predict disease outcome, assessment of the risk of X-ray dose exposure or of renal damage following the administration of contrast agents, and tracking and optimization of patient workflow. The aim of this review is to present current evidence of how big data derived from medical images may impact the diagnostic pathway of the oncologic patient.

  2. From the Big Bang to sustainable societies.

    Science.gov (United States)

    Eriksson, K E; Robèrt, K H

    1991-01-01

    A series of events in the history of the cosmos has created the prerequisites for life on Earth. With respect to matter, the earth is a closed system. However, it receives light from the sun and emits infrared radiation into space. The difference in thermodynamic potential between these two flows has provided the physical conditions for self-organization. The transformation of lifeless matter into modern life forms, with their high degree of order and complexity, has occurred in the context of the earth's natural cycles, including the water cycle and the biochemical cycles between plants and animals. Primary production units, the cells of green plants, can use the thermodynamic potential of the energy balance in a very direct way, i.e. in photosynthesis. Plant cells are unique in their ability to synthesize more structure than is broken down elsewhere in the biosphere. The perpetuation of this process requires the recycling of wastes. However, modern industrial societies are obsessed with the supply side, ignoring the principle of matter's conservation and neglecting to plan for the entire material flow. As a result there has been an accumulation of both visible and invisible garbage (pollution), which disturbs the biosphere and reduces stocks of natural resources. Furthermore, due to complexity and delay mechanisms, we usually cannot predict time parameters for the resulting socio-economic consequences or the development of disease. To continue along this path of folly is not compatible with the maintenance of wealth, nor with the health of humans or the biosphere. Rather than address the millions of environmental problems one at a time, we need to approach them at the systemic level. It is essential to convert to human life-styles and forms of societal organization that are based on cyclic processes compatible with the earth's natural cycles. The challenge to the developed countries is not only to decrease their own emissions of pollutants but to develop the cyclic

  3. Family History

    Science.gov (United States)

    Your family history includes health information about you and your close relatives. Families have many factors in common, including their genes, ... as heart disease, stroke, and cancer. Having a family member with a disease raises your risk, but ...

  4. The Shepherd, the Doctor and the Big Data

    Directory of Open Access Journals (Sweden)

    Alejandro Segura Vázquez

    2014-08-01

    The rise of the Internet as a space for global interaction makes the political subjectivation of automated surveillance especially important for understanding the mechanisms of social control in our current culture. This essay approaches mass electronic surveillance from the perspective of the processes of production of subjectivities among users of the Net. It will briefly outline a history of the production of subjects as related to the necessity of expressing a certain truth about themselves, of confessing and conforming to a normality regulated by the strategic frameworks of power. At the same time, it will attempt to associate the logic of that history with the rise of Big Data as a device for the tracking, discrimination and management of personal data. Finally, the paper will provide some reflections on the significance of resistance to the loss of privacy in digital culture.

  5. Black Pete, "smug ignorance," and the value of the black body in postcolonial Netherlands

    NARCIS (Netherlands)

    Van Der Pijl, Yvon; Goulordava, Karina

    2014-01-01

    This article discusses the controversies over the blackface figure Black Pete (Zwarte Piet)-central to the popular Dutch Saint Nicholas holiday tradition-and the public uproar surrounding the Saint Nicholas feast in 2013. It combines history, social theory, and patchwork ethnography, and draws on

  6. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  7. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.
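
    To make the abstract's minimax criterion concrete, the sketch below (ours, with simulated data and hypothetical variable names, not the authors' analysis code) scores a candidate cluster on the two quantities named: the multiple correlation with Big Five marker factors, to be minimized, and internal-consistency reliability, estimated here as Cronbach's alpha, to be maximized.

        import numpy as np

        def multiple_correlation(score, factors):
            # R from regressing the cluster score on the Big Five factor scores.
            X = np.column_stack([np.ones(len(score)), factors])
            beta, *_ = np.linalg.lstsq(X, score, rcond=None)
            return np.corrcoef(X @ beta, score)[0, 1]

        def cronbach_alpha(items):
            # Internal-consistency reliability of the items forming the cluster.
            k = items.shape[1]
            return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                  / items.sum(axis=1).var(ddof=1))

        # Simulated data: 300 respondents, a 4-item candidate cluster sharing a
        # latent factor, and five factor scores from Big Five markers.
        rng = np.random.default_rng(0)
        items = rng.normal(size=(300, 4)) + rng.normal(size=(300, 1))
        big5 = rng.normal(size=(300, 5))
        score = items.mean(axis=1)

        # On the minimax criterion, a cluster lies "beyond the Big Five" when
        # R is low (near zero here by construction) and alpha is high.
        print(f"multiple R = {multiple_correlation(score, big5):.2f}, "
              f"alpha = {cronbach_alpha(items):.2f}")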

  8. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    The term "Big Data" has been coined to refer to the vast volumes of data that cannot be managed by traditional data handling methods or techniques. Big Data plays an indispensable role in many fields, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, healthcare, and stocks. Big data analytics is the method of examining big data to reveal hidden patterns, previously unseen relationships, and other important information that can be used to make better decisions. Interest in big data has been expanding continually because of its rapid growth and because it covers so many areas of application. Apache Hadoop, an open-source technology written in Java that runs on the Linux operating system, was used. The primary contribution of this work is to present an effective and free solution for big data applications in a distributed environment, noting its advantages and demonstrating its ease of use. Going forward, an analytical review of new developments in big data technology will be needed. Healthcare is one of the foremost concerns of the world, and big data in healthcare refers to electronic health datasets related to patient healthcare and well-being. Data in the healthcare sector is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.
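
    Since the abstract's solution is built on Apache Hadoop, a rough feel for the MapReduce programming model that Hadoop implements may help. The following is a minimal, self-contained Python illustration of the map-shuffle-reduce idea (a toy of ours, not Hadoop's actual API), counting categories in hypothetical health records.

        from collections import defaultdict
        from itertools import chain

        def mapper(record):
            # Map phase: emit a (key, 1) pair for each record's category.
            yield (record["category"], 1)

        def reducer(key, values):
            # Reduce phase: sum the counts collected for one key.
            return (key, sum(values))

        records = [{"category": "cardiology"}, {"category": "oncology"},
                   {"category": "cardiology"}]

        # Shuffle phase: group the mappers' output by key.
        groups = defaultdict(list)
        for key, value in chain.from_iterable(mapper(r) for r in records):
            groups[key].append(value)

        print([reducer(k, v) for k, v in groups.items()])
        # -> [('cardiology', 2), ('oncology', 1)]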

  9. Big data and software defined networks

    CERN Document Server

    Taheri, Javid

    2018-01-01

    Big Data Analytics and Software Defined Networking (SDN) are helping to drive the management of the extraordinary growth in data usage and of the computer processing power provided by Cloud Data Centres (CDCs). This new book investigates areas where Big Data and SDN can help each other in delivering more efficient services.

  10. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  11. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

    Recently Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  12. An embedding for the big bang

    Science.gov (United States)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  13. Probing the pre-big bang universe

    International Nuclear Information System (INIS)

    Veneziano, G.

    2000-01-01

    Superstring theory suggests a new cosmology whereby a long inflationary phase preceded a non-singular big bang-like event. After discussing how pre-big bang inflation naturally arises from an almost trivial initial state of the Universe, I will describe how present or near-future experiments can provide sensitive probes of how the Universe behaved in the pre-bang era.

  14. Starting Small, Thinking Big - Continuum Magazine | NREL

    Science.gov (United States)

    NREL helps agencies target new federal sustainability goals and optimize their energy use.

  15. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  16. Practice variation in Big-4 transparency reports

    NARCIS (Netherlands)

    Girdhar, Sakshi; Jeppesen, K.K.

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a

  17. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to ‘create’ an integration platform for big data analysis for smart farming and to develop a showcase. This includes both

  18. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Cees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  19. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    Science.gov (United States)

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
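
    A drastically linearized toy version of the decoding idea (our illustration, not the authors' network model): if a congruent cell sums a vestibular drive (heading only) and a visual drive (heading confounded with object motion), while an opposite cell takes their difference, then a fixed linear readout of the pair cancels the object-motion term, approximating marginalization.

        import numpy as np

        rng = np.random.default_rng(1)

        def population_response(heading, object_motion, noise=0.05):
            vestibular = heading                 # vestibular cue: heading only
            visual = heading + object_motion     # visual cue: heading + object motion
            congruent = vestibular + visual + noise * rng.normal()
            opposite = vestibular - visual + noise * rng.normal()
            return congruent, opposite

        # The readout (congruent + opposite) / 2 isolates the vestibular drive,
        # so object motion cancels: heading is decoded, object motion marginalized.
        for obj in (-1.0, 0.0, 2.0):
            c, o = population_response(heading=0.5, object_motion=obj)
            print(f"object motion {obj:+.1f} -> decoded heading {0.5 * (c + o):.2f}")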

  20. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    Directory of Open Access Journals (Sweden)

    Gurutzeta Guillera-Arroita

    In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when the underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting, in those instances where hierarchical occupancy models do not perform well, to the naïve occupancy estimator does not provide a satisfactory solution. The aim should instead be to achieve better estimation by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, and considering model extensions where appropriate.
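
    A minimal simulation (our sketch, with invented parameter values) of the confounding described above: the naïve estimator converges to the product of occupancy and the probability of at least one detection, whereas maximizing the standard occupancy-detection likelihood separates the two.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        psi_true, p_true, n_sites, n_visits = 0.6, 0.3, 500, 5

        # Simulate detection histories: each site is occupied with probability
        # psi; each visit to an occupied site detects the species with prob p.
        occupied = rng.random(n_sites) < psi_true
        detections = ((rng.random((n_sites, n_visits)) < p_true)
                      & occupied[:, None]).sum(axis=1)

        # Naive estimator: fraction of sites with at least one detection.
        # It converges to psi * (1 - (1-p)^n_visits), not to psi.
        naive = np.mean(detections > 0)

        def negloglik(params):
            # Standard site-occupancy likelihood (binomial constants omitted,
            # as they do not affect the MLE); logit scale keeps psi, p in (0,1).
            psi, p = 1 / (1 + np.exp(-np.asarray(params)))
            lik_det = psi * p**detections * (1 - p)**(n_visits - detections)
            lik_none = (1 - psi) + psi * (1 - p)**n_visits
            return -np.log(np.where(detections > 0, lik_det, lik_none)).sum()

        fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
        print(f"true psi = {psi_true}, naive = {naive:.2f}, MLE = {psi_hat:.2f}")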

  1. Ignorance is no excuse for directors minimizing information asymmetry affecting boards

    Directory of Open Access Journals (Sweden)

    Eythor Ivar Jonsson

    2006-11-01

    This paper looks at information asymmetry at the board level and how lack of information has played a part in undermining the power of the board of directors. Information is power, and at board level information is essential to keep the board knowledgeable about the failures and successes of the organization it is supposed to govern. Although lack of information has become a popular excuse for boards, the mantra could, and should, be changing to "Ignorance is no excuse" (Mueller, 1993). This paper explores some of the information system solutions that aim to resolve the problems of information asymmetry. Furthermore, three case studies are used to explore the problem of asymmetric information at board level and how boards are trying to solve it. The focus of the discussion is to (a) describe how directors experience information asymmetry and whether they find it troublesome, (b) examine how important information is for the control and strategy roles of the board, and (c) find out how boards can minimize the problem of asymmetric information. The research is conducted through semi-structured interviews with directors, managers and accountants. This paper offers an interesting exploration of information, or the lack of it, at board level. It describes, from both a theoretical and a practical viewpoint, the problem of information asymmetry at board level and how companies are trying to solve it. It is an issue that has only been lightly touched upon in the corporate governance literature but is likely to attract more attention and research in the future.

  2. On the practice of ignoring center-patient interactions in evaluating hospital performance.

    Science.gov (United States)

    Varewyck, Machteld; Vansteelandt, Stijn; Eriksson, Marie; Goetghebeur, Els

    2016-01-30

    We evaluate the performance of medical centers based on a continuous or binary patient outcome (e.g., 30-day mortality). Common practice adjusts for differences in patient mix through outcome regression models, which include patient-specific baseline covariates (e.g., age and disease stage) besides center effects. Because a large number of centers may need to be evaluated, the typical model postulates that the effect of a center on outcome is constant over patient characteristics. This may be violated, for example, when some centers are specialized in children or geriatric patients. Including interactions between certain patient characteristics and the many fixed center effects in the model increases the risk for overfitting, however, and could imply a loss of power for detecting centers with deviating mortality. Therefore, we assess how the common practice of ignoring such interactions impacts the bias and precision of directly and indirectly standardized risks. The reassuring conclusion is that the common practice of working with the main effects of a center has minor impact on hospital evaluation, unless some centers actually perform substantially better on a specific group of patients and there is strong confounding through the corresponding patient characteristic. The bias is then driven by an interplay of the relative center size, the overlap between covariate distributions, and the magnitude of the interaction effect. Interestingly, the bias on indirectly standardized risks is smaller than on directly standardized risks. We illustrate our findings by simulation and in an analysis of 30-day mortality on Riksstroke.
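
    For orientation, here is a small illustrative sketch (hypothetical numbers and a logistic model with center main effects only, mirroring the common practice the paper examines) of the two estimands being compared, directly and indirectly standardized risks for one center:

        import numpy as np

        rng = np.random.default_rng(3)
        b = 1.2                                  # effect of the patient covariate
        center_effects = {"A": -2.0, "B": -1.5}  # fitted center main effects

        def risk(a, x):
            # Hypothetical fitted model: logit(risk) = a_center + b * x
            return 1 / (1 + np.exp(-(a + b * x)))

        x_all = rng.normal(size=1000)   # covariate of the pooled patient population
        x_A = x_all[:200]               # pretend these patients went to center A

        # Directly standardized risk for A: predicted risk if the whole pooled
        # population had been treated at center A.
        direct_A = risk(center_effects["A"], x_all).mean()

        # Indirectly standardized risk for A: A's risk on its own patients,
        # divided by the average center's expected risk on those same patients,
        # times the overall mean risk (an SMR-style estimate).
        observed_A = risk(center_effects["A"], x_A).mean()
        expected_A = np.mean([risk(a, x_A).mean() for a in center_effects.values()])
        overall = np.mean([risk(a, x_all).mean() for a in center_effects.values()])
        indirect_A = observed_A / expected_A * overall

        print(f"direct: {direct_A:.3f}, indirect: {indirect_A:.3f}")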

  3. Ignoring detailed fast-changing dynamics of land use overestimates regional terrestrial carbon sequestration

    Directory of Open Access Journals (Sweden)

    S. Q. Zhao

    2009-08-01

    Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or by land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS), along with consistent and spatially explicit land use change scenarios with different intervals (1 yr, 5 yrs, 10 yrs, and static, respectively), to evaluate the impacts of land use change data frequency on estimates of regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Estimated regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha−1 yr−1 as the land use change data frequency shifted from a 1-year interval to 5-year and 10-year intervals and to static land use information, respectively. Carbon removal by forest harvesting and the prolonged cumulative impacts of historical land use change on the carbon cycle accounted for the differences in carbon sequestration between the static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes, and mechanisms of terrestrial carbon sequestration at local to global scales.

  4. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  5. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  6. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  7. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports

  8. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  9. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  10. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.
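
    The singularity resolution summarized here is commonly expressed in the loop quantum cosmology literature (a standard result, not spelled out in this abstract) through an effective Friedmann equation in which quantum geometry adds a correction quadratic in the energy density,

        H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right),

    so the expansion rate H vanishes when \rho reaches the critical density \rho_c and the big bang is replaced by a quantum bounce.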

  11. Questioning the "big assumptions". Part II: recognizing organizational contradictions that impede institutional change.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Kegan, Robert; Armstrong, Elizabeth

    2003-08-01

    Well-designed medical curriculum reforms can fall short of their primary objectives during implementation when unanticipated or unaddressed organizational resistance surfaces. This typically occurs if the agents for change ignore faculty concerns during the planning stage or when the provision of essential institutional safeguards to support new behaviors is neglected. Disappointing outcomes in curriculum reforms then result in the perpetuation of, or reversion to, the status quo despite the loftiest of goals. Institutional resistance to change, much like that observed during personal development, does not necessarily indicate a communal lack of commitment to the organization's newly stated goals. It may reflect the existence of competing organizational objectives that must be addressed before substantive advances in a new direction can be accomplished. The authors describe how the Big Assumptions process (see previous article) was adapted and applied at the institutional level during a school of medicine's curriculum reform. Reform leaders encouraged faculty participants to articulate their reservations about the changes under consideration, which provided insights into the organization's competing commitments. This line of discussion gave faculty an opportunity to appreciate the gridlock that existed until appropriate tests of the school's long-held Big Assumptions could be conducted. The Big Assumptions process proved useful in moving faculty groups to recognize and question the validity of unchallenged institutional beliefs that were likely to undermine efforts toward change. The process also allowed the organization to put essential institutional safeguards in place that ultimately ensured that substantive reforms could be sustained.

  12. Sophisticated Approval Voting, Ignorance Priors, and Plurality Heuristics: A Behavioral Social Choice Analysis in a Thurstonian Framework

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R.; Tsetlin, Ilia

    2007-01-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two…

  13. Learning to Ignore: A Modeling Study of a Decremental Cholinergic Pathway and Its Influence on Attention and Learning

    Science.gov (United States)

    Oros, Nicolas; Chiba, Andrea A.; Nitz, Douglas A.; Krichmar, Jeffrey L.

    2014-01-01

    Learning to ignore irrelevant stimuli is essential to achieving efficient and fluid attention, and serves as the complement to increasing attention to relevant stimuli. The different cholinergic (ACh) subsystems within the basal forebrain regulate attention in distinct but complementary ways. ACh projections from the substantia innominata/nucleus…

  14. Settlers Unsettled: Using Field Schools and Digital Stories to Transform Geographies of Ignorance about Indigenous Peoples in Canada

    Science.gov (United States)

    Castleden, Heather; Daley, Kiley; Sloan Morgan, Vanessa; Sylvestre, Paul

    2013-01-01

    Geography is a product of colonial processes, and in Canada, the exclusion from educational curricula of Indigenous worldviews and their lived realities has produced "geographies of ignorance". Transformative learning is an approach geographers can use to initiate changes in non-Indigenous student attitudes about Indigenous…

  15. The Ignorant Environmental Education Teacher: Students Get Empowered and Teach Philosophy of Nature Inspired by Ancient Greek Philosophy

    Science.gov (United States)

    Tsevreni, Irida

    2018-01-01

    This paper presents an attempt to apply Jacques Rancière's emancipatory pedagogy of "the ignorant schoolmaster" to environmental education, which emphasises environmental ethics. The paper tells the story of a philosophy of nature project in the framework of an environmental adult education course at a Second Chance School in Greece,…

  16. Intelligent search in Big Data

    Science.gov (United States)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

    An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case when information objects are represented in the form of relational databases (RDB), structurally marked by their schemes. The source of information for constructing an ontology and, later on, for organizing the search is text in natural language, treated as semi-structured data; for the RDBs, these are the comments on the names of tables and their attributes. A formal definition of the RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil production subject domain ontology, and a linguistic thesaurus of the subject domain language are built. A technique for automatically generating SQL queries for subject domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Exploitation of the system showed good relevance for the majority of queries.
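
    The step from ontology to query might look roughly like the toy sketch below; the tables, columns, and thesaurus entries are invented for illustration and are not the paper's actual TATNEFT schema.

        # Toy ontology: maps subject-domain terms (derived from comments on
        # tables and attributes) to schema elements. All names are hypothetical.
        ONTOLOGY = {
            "well": ("wells", None),
            "daily output": ("production", "oil_tonnes"),
            "date": ("production", "prod_date"),
        }
        THESAURUS = {"oil output": "daily output", "borehole": "well"}  # synonyms

        def to_sql(select_term, from_term):
            # Normalize the user's wording through the linguistic thesaurus.
            select_term = THESAURUS.get(select_term, select_term)
            from_term = THESAURUS.get(from_term, from_term)
            table, column = ONTOLOGY[select_term]
            entity_table, _ = ONTOLOGY[from_term]
            # A real system would derive the join path from the schema graph;
            # a single foreign key is assumed here.
            return (f"SELECT {column} FROM {table} JOIN {entity_table} "
                    f"ON {table}.{entity_table[:-1]}_id = {entity_table}.id")

        print(to_sql("oil output", "borehole"))
        # SELECT oil_tonnes FROM production JOIN wells
        #   ON production.well_id = wells.id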

  17. Big Data in Transport Geography

    DEFF Research Database (Denmark)

    Reinau, Kristian Hegner; Agerholm, Niels; Lahrmann, Harry Spaabæk

    for studies that explicitly compare the quality of this new type of data to traditional data sources. With the current focus on Big Data in the transport field, public transport planners are increasingly looking towards smart card data to analyze and optimize flows of passengers. However, in many cases it is not all public transport passengers in a city, region or country with a smart card system that use the system, and in such cases it is important to know what biases smart card data has in relation to giving a complete view of passenger flows. This paper therefore analyses the quality and biases of smart card data in Denmark, where public transport passengers may use a smart card, may pay with cash for individual trips or may hold a season ticket for a certain route. By analyzing smart card data collected in Denmark in relation to data on sales of cash tickets, sales of season tickets, manual

  18. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprising many jobs) becomes the unit of petascale data processing on the Grid. Splitting a large data processing task into jobs enables fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...
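
    A minimal sketch of the retry idea the abstract compares to TCP/IP re-sends; `run_job` and the failure rate are invented stand-ins, not the actual ATLAS production machinery:

    # Hypothetical sketch: recover transient job failures of a split task by
    # automatic re-tries, analogous to re-sending dropped TCP/IP packets.
    import random

    def run_job(job_id):
        """Stand-in for submitting one job of a large, split task."""
        if random.random() < 0.05:             # simulated transient failure
            raise RuntimeError(f"job {job_id} failed")
        return f"output-{job_id}"

    def run_task(job_ids, max_retries=3):
        """Run every job, re-trying transient failures automatically."""
        results = {}
        for job_id in job_ids:
            for attempt in range(1, max_retries + 1):
                try:
                    results[job_id] = run_job(job_id)
                    break
                except RuntimeError:
                    if attempt == max_retries:
                        raise                  # persistent failure: give up
        return results

    print(len(run_task(range(100))))           # almost always prints 100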

  19. Was the Big Bang hot?

    Science.gov (United States)

    Wright, E. L.

    1983-01-01

    Techniques for verifying the spectrum defined by Woody and Richards (WR, 1981), which serves as a base for dust-distorted models of the 3 K background, are discussed. WR detected a sharp deviation from the Planck curve in the 3 K background. The absolute intensity of the background may be determined by the frequency dependence of the dipole anisotropy of the background or the frequency dependence effect in galactic clusters. Both methods involve the Doppler shift; analytical formulae are defined for characterization of the dipole anisotropy. The measurement of the 30-300 GHz spectra of cold galactic dust may reveal the presence of significant amounts of needle-shaped grains, which would in turn support a theory of a cold Big Bang.

  20. Big Bang nucleosynthesis in crisis?

    International Nuclear Information System (INIS)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P.; Bludman, S.; Langacker, P.

    1995-01-01

    A new evaluation of the constraint on the number of light neutrino species (N_ν) from big bang nucleosynthesis suggests a discrepancy between the predicted light element abundances and those inferred from observations, unless the inferred primordial ⁴He abundance has been underestimated by 0.014±0.004 (1σ) or less than 10% (95% C.L.) of ³He survives stellar processing. With the quoted systematic errors in the observed abundances and a conservative chemical evolution parametrization, the best fit to the combined data is N_ν = 2.1±0.3 (1σ) and the upper limit is N_ν < 2.6 (95% C.L.), excluding standard big bang nucleosynthesis (N_ν = 3) at the 98.6% C.L. copyright 1995 The American Physical Society

  1. Inflationary and deflationary branches in extended pre-big-bang cosmology

    International Nuclear Information System (INIS)

    Lidsey, J.E.

    1997-01-01

    The pre-big-bang cosmological scenario is studied within the context of the Brans-Dicke theory of gravity. An epoch of superinflationary expansion may occur in the pre-big-bang phase of the Universe's history in a certain region of parameter space. Two models are considered that contain a cosmological constant in the gravitational and matter sectors of the theory, respectively. Classical pre- and post-big-bang solutions are found for both models. The existence of a curvature singularity forbids a classical transition between the two branches. On the other hand, a quantum cosmological approach based on the tunneling boundary condition results in a nonzero transition probability. The transition may be interpreted as a spatial reflection of the wave function in minisuperspace. copyright 1997 The American Physical Society

  2. Inflationary and deflationary branches in extended pre-big-bang cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Lidsey, J.E. [Astronomy Unit, School of Mathematical Sciences, Queen Mary Westfield, Mile End Road, London, E1 4NS (United Kingdom)

    1997-03-01

    The pre-big-bang cosmological scenario is studied within the context of the Brans-Dicke theory of gravity. An epoch of superinflationary expansion may occur in the pre-big-bang phase of the Universe's history in a certain region of parameter space. Two models are considered that contain a cosmological constant in the gravitational and matter sectors of the theory, respectively. Classical pre- and post-big-bang solutions are found for both models. The existence of a curvature singularity forbids a classical transition between the two branches. On the other hand, a quantum cosmological approach based on the tunneling boundary condition results in a nonzero transition probability. The transition may be interpreted as a spatial reflection of the wave function in minisuperspace. copyright 1997 The American Physical Society

  3. Environmental history

    DEFF Research Database (Denmark)

    Pawson, Eric; Christensen, Andreas Aagaard

    2017-01-01

    Environmental history is an interdisciplinary pursuit that has developed as a form of conscience to counter an increasingly powerful, forward-looking liberal theory of the environment. It deals with the relations between environmental ideas and materialities, from the work of the geographers George...... risks”. These are exposed by environmental history’s focus on long-run analysis and its narrative form that identifies the stories that we tell ourselves about nature. How a better understanding of past environmental transformations helps to analyse society and agency, and what this can mean...... for solutions and policies, is the agenda for an engaged environmental history from now on....

  4. Ildens historier

    DEFF Research Database (Denmark)

    Lassen, Henrik Roesgaard

    have been written by Andersen. In several chapters the curiously forgotten history of fire-lighting technology is outlined, and it is demonstrated that "Tællelyset" is written by a person with a modern perspective on how to light a candle - among other things. The central argument in the book springs...... from a point-by-point tracing of 'the origins and history' of Hans Christian Andersen's famous fairy tales. Where did the come from? How did they become the iconic texts that we know today? On this background it becomes quite clear that "Tællelyset" is a modern pastiche and not a genuine Hans Christian...

  5. Cartography in the Age of Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    WANG Jiayao

    2017-10-01

    Cartography is an ancient science, with almost as long a history as the world's oldest cultures. Since ancient times, the movement and change of all things and phenomena, including human activities, have taken place in a certain time and space. The development of science and technology and the progress of social civilization have made social management and governance more and more dependent on time and space. The information sources, themes, content, carriers, forms, production methods and application methods of maps differ across historical periods, and so does their all-round value. With the arrival of the big data age, the scientific paradigm has entered the era of the "data-intensive" paradigm, and so has cartography, with the obvious characteristics of big data science. All big data are generated by the movement and change of things and phenomena in the geographic world, so they have spatial and temporal characteristics and cannot be separated from spatial and temporal references; big data is therefore essentially big spatio-temporal data. Since the late 1950s and early 1960s, modern cartography, that is, cartography in the information age, has taken spatio-temporal data as its object and focused on the processing and expression of spatio-temporal data, but not on the large-scale, multi-source, heterogeneous and multi-dimensional dynamic data flows (or flow data) from the sky to the sea. The real-time dynamic nature, thematic pertinence, content complexity, carrier diversification, personalized forms of expression, modernized production methods and ubiquitous application of maps are incomparable with any past period, and this leads to great changes in the theory, technology and application systems of cartography. All these changes have occurred in the 60 years since the late 1950s and early 1960s, so this article was written to commemorate the 60th anniversary of the "Acta Geodaetica et Cartographica Sinica".

  6. Ignorance, Vulnerability and the Occurrence of "Radical Surprises": Theoretical Reflections and Empirical Findings

    Science.gov (United States)

    Kuhlicke, C.

    2009-04-01

    By definition, natural disasters always contain a moment of surprise. Their occurrence is mostly unforeseen and unexpected. They hit people unprepared, overwhelm them and expose their helplessness. Yet surprisingly little is known about the reasons for this surprise. Aren't natural disasters expectable and foreseeable after all? Aren't the return rates of most hazards well known, and shouldn't people be better prepared? The central question of this presentation is hence: why do natural disasters so often radically surprise people (and how can we explain this surprise)? In the first part of the presentation, it is argued that most approaches to vulnerability are not able to grasp this moment of surprise. On the contrary, their strength lies in unravelling the expectable: a person who is marginalized or even oppressed in everyday life is also vulnerable during times of crisis and stress; at least this is the central assumption of most vulnerability studies. In the second part, an understanding of vulnerability is developed which allows such radical surprises to be taken into account. First, two forms of the unknown are differentiated: an area of the unknown an actor is more or less aware of (ignorance), and an area which is not even known to be unknown (nescience). The discovery of the latter is mostly associated with a "radical surprise", since it is by definition impossible to prepare for it. Second, a definition of vulnerability is proposed which captures the dynamics of surprise: people are vulnerable when they discover that their nescience exceeds their previously established routines, stocks of knowledge and resources (in a general sense, their capacities) for dealing with their physical and/or social environment. This definition explicitly takes the views of different actors seriously and departs from their being surprised. In the third part, findings of a case study, the 2002 flood in Germany, are presented. It is shown

  7. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today are often too large and heterogeneous, and change too quickly, to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce ever more data themselves - e.g. in social networks - and digitalization keeps increasing. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and to organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions are coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  8. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This especially includes omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluation according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Ever more extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new management strategies in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translating information derived from molecular big data will also require new specifications for education and professional competence.

  9. Business History

    DEFF Research Database (Denmark)

    Hansen, Per H.

    2012-01-01

    This article argues that a cultural and narrative perspective can enrich the business history field, encourage new and different questions and answers, and provide new ways of thinking about methods and empirical material. It discusses what culture is and how it relates to narratives. Taking...

  10. LCA History

    DEFF Research Database (Denmark)

    Bjørn, Anders; Owsianiak, Mikołaj; Molin, Christine

    2018-01-01

    The idea of LCA was conceived in the 1960s when environmental degradation and in particular the limited access to resources started becoming a concern. This chapter gives a brief summary of the history of LCA since then with a focus on the fields of methodological development, application...

  11. Rewriting History.

    Science.gov (United States)

    Ramirez, Catherine Clark

    1994-01-01

    Suggests that the telling of vivid stories can help engage elementary students' emotions and increase the chances of fostering an interest in Texas history. Suggests that incorporating elements of the process approach to writing can merge with social studies objectives in creating a curriculum for wisdom. (RS)

  12. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
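
    To make the MapReduce schema mentioned above concrete, here is a toy single-process sketch of its map, shuffle and reduce phases; real Hadoop or Spark jobs distribute these steps across a cluster, and this is not their API:

    # Toy illustration of the MapReduce processing schema: map emits
    # key/value pairs, shuffle groups them by key, reduce aggregates.
    from collections import defaultdict

    def map_phase(documents):
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def shuffle_phase(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        return {key: sum(values) for key, values in groups.items()}

    docs = ["big data in astronomy", "big telescopes big data"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # {'big': 3, 'data': 2, 'in': 1, 'astronomy': 1, 'telescopes': 1}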

  13. Big Data as Governmentality in International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    2017-01-01

    Statistics have long shaped the field of visibility for the governance of development projects. The introduction of big data has altered the field of visibility. Employing Dean's “analytics of government” framework, we analyze two cases—malaria tracking in Kenya and monitoring of food prices...... in Indonesia. Our analysis shows that big data introduces a bias toward particular types of visualizations. What problems are being made visible through big data depends to some degree on how the underlying data is visualized and who is captured in the visualizations. It is also influenced by technical factors...

  14. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  15. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

    In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects. I also give an overview of ho...

  16. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  17. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple "crossover model" without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  18. Smart Information Management in Health Big Data.

    Science.gov (United States)

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymous patient records in big data and their extraction in order to provide needful real-time intelligence. The purpose of the present study is to highlight the design and the implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files in simulation of a NoSQL database, and on the other hand, the extraction of information based on a lookup table and a cache mechanism. The SIMS in health big data aims at the identification of new therapies and approaches to delivering care.
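
    A minimal sketch of the lookup-table-and-cache extraction the abstract outlines, assuming anonymized records stored one per line in a flat file; the file layout and all names are invented for illustration, not the SIMS implementation:

    # Hypothetical sketch: index a flat file of anonymized records by record
    # ID (lookup table of byte offsets), and cache extracted records so that
    # repeated queries skip the file. Assumed layout: "id|field1|field2".
    def build_lookup_table(path):
        offsets = {}
        with open(path, "rb") as f:
            while True:
                offset = f.tell()
                line = f.readline()
                if not line:
                    break
                record_id = line.split(b"|", 1)[0].decode()
                offsets[record_id] = offset
        return offsets

    class RecordStore:
        def __init__(self, path):
            self.path = path
            self.lookup = build_lookup_table(path)
            self.cache = {}

        def get(self, record_id):
            if record_id in self.cache:        # cache hit: no file access
                return self.cache[record_id]
            with open(self.path, "rb") as f:
                f.seek(self.lookup[record_id])  # jump straight to the record
                record = f.readline().decode().rstrip("\n").split("|")
            self.cache[record_id] = record
            return record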

  19. Big questions, big science: meeting the challenges of global ecology.

    Science.gov (United States)

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets, than can be collected by a single investigator's or a single group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost - the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under pressure from external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.

  20. Infectious Disease Surveillance in the Big Data Era: Towards Faster and Locally Relevant Systems

    Science.gov (United States)

    Simonsen, Lone; Gog, Julia R.; Olson, Don; Viboud, Cécile

    2016-01-01

    While big data have proven immensely useful in fields such as marketing and earth sciences, public health is still relying on more traditional surveillance systems and awaiting the fruits of a big data revolution. A new generation of big data surveillance systems is needed to achieve rapid, flexible, and local tracking of infectious diseases, especially for emerging pathogens. In this opinion piece, we reflect on the long and distinguished history of disease surveillance and discuss recent developments related to use of big data. We start with a brief review of traditional systems relying on clinical and laboratory reports. We then examine how large-volume medical claims data can, with great spatiotemporal resolution, help elucidate local disease patterns. Finally, we review efforts to develop surveillance systems based on digital and social data streams, including the recent rise and fall of Google Flu Trends. We conclude by advocating for increased use of hybrid systems combining information from traditional surveillance and big data sources, which seems the most promising option moving forward. Throughout the article, we use influenza as an exemplar of an emerging and reemerging infection which has traditionally been considered a model system for surveillance and modeling. PMID:28830112