WorldWideScience

Sample records for human big iv

  1. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about combining multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  2. Statistical Challenges in "Big Data" Human Neuroimaging.

    Science.gov (United States)

    Smith, Stephen M; Nichols, Thomas E

    2018-01-17

    Smith and Nichols discuss "big data" human neuroimaging studies, with very large subject numbers and amounts of data. These studies provide great opportunities for making new discoveries about the brain but raise many new analytical challenges and interpretational risks. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Exploring Data in Human Resources Big Data

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2016-01-01

    Nowadays, social networks and information technologies and infrastructures are constantly developing and affecting each other. In this context, the HR recruitment process has become complex, and many multinational organizations have encountered selection issues. The objective of this paper is to develop a prototype system for assisting the selection of candidates for intelligent management of human resources. Such a system can be a starting point for the efficient organization of semi-structured and unstructured data on recruitment activities. The article extends the research presented at the 14th International Conference on Informatics in Economy (IE 2015) in the scientific paper "Big Data challenges for human resources management".

  4. Big Data Analysis of Human Genome Variations

    KAUST Repository

    Gojobori, Takashi

    2016-01-25

    Since the draft human genome sequence was first made public in 2000, genomic analyses have been extended intensively to the population level. The following three international projects are good examples of large-scale studies of human genome variation: 1) HapMap data (1,417 individuals) (http://hapmap.ncbi.nlm.nih.gov/downloads/genotypes/2010-08_phaseII+III/forward/), 2) HGDP (Human Genome Diversity Project) data (940 individuals) (http://www.hagsc.org/hgdp/files.html), and 3) 1000 Genomes data (2,504 individuals) (http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/). If we can integrate all three into a single data set, we should be able to conduct a more detailed analysis of human genome variation for a total of 4,861 individuals (1,417 + 940 + 2,504). In fact, we successfully integrated these three data sets using information on the reference human genome sequence, and we conducted a big data analysis. In particular, we constructed a phylogenetic tree of about 5,000 human individuals at the genome level. As a result, we were able to identify clusters of ethnic groups, with detectable admixture, that could not be identified by analysing each of the three data sets separately. Here, we report the outcome of this big data analysis and discuss the evolutionary significance of human genomic variation. Note that the present study was conducted in collaboration with Katsuhiko Mineta and Kosuke Goto at KAUST.
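The integration step the abstract describes can be sketched in a few lines. This is a hypothetical illustration, assuming each project's genotypes are keyed by positions on the shared reference genome; the real inputs are genotype/VCF files, not Python dictionaries, and the panel names here are placeholders.

```python
# Hypothetical sketch of the integration step: genotype tables from the three
# projects are joined on positions in the reference human genome, so that
# individuals from all sources can be compared on the same sites.

def integrate_panels(panels):
    """Merge {sample_id: {position: genotype}} tables from several projects,
    keeping only positions typed in every panel (shared reference sites)."""
    shared = set.intersection(
        *(set(pos for genotypes in panel.values() for pos in genotypes)
          for panel in panels))
    merged = {}
    for panel in panels:
        for sample, genotypes in panel.items():
            merged[sample] = {p: genotypes[p] for p in shared if p in genotypes}
    return merged

# Toy stand-ins with the cohort sizes quoted in the abstract.
hapmap = {f"HM{i}": {1000: "AA", 2000: "AG"} for i in range(1417)}
hgdp   = {f"HG{i}": {1000: "AA", 2000: "GG"} for i in range(940)}
kg     = {f"KG{i}": {1000: "AG", 2000: "AG"} for i in range(2504)}

merged = integrate_panels([hapmap, hgdp, kg])
print(len(merged))  # 4861 = 1,417 + 940 + 2,504 individuals
```

On the merged table, a distance matrix over shared sites is what a phylogenetic tree of the combined individuals would be built from.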

  5. BigBOSS: The Ground-Based Stage IV BAO Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, David; Bebek, Chris; Heetderks, Henry; Ho, Shirley; Lampton, Michael; Levi, Michael; Mostek, Nick; Padmanabhan, Nikhil; Perlmutter, Saul; Roe, Natalie; Sholl, Michael; Smoot, George; White, Martin; Dey, Arjun; Abraham, Tony; Jannuzi, Buell; Joyce, Dick; Liang, Ming; Merrill, Mike; Olsen, Knut; Salim, Samir

    2009-04-01

    The BigBOSS experiment is a proposed DOE-NSF Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with an all-sky galaxy redshift survey. The project is designed to unlock the mystery of dark energy using existing ground-based facilities operated by NOAO. A new 4000-fiber R=5000 spectrograph covering a 3-degree diameter field will measure BAO and redshift space distortions in the distribution of galaxies and hydrogen gas spanning redshifts 0.2 < z < 3.5. The Dark Energy Task Force figure of merit (DETF FoM) for this experiment is expected to be equal to that of a JDEM mission for BAO with the lower risk and cost typical of a ground-based experiment.

  6. The Human Genome Project: big science transforms biology and medicine

    OpenAIRE

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called ‘big science’ - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and a...

  7. Big Data Analysis of Human Genome Variations

    KAUST Repository

    Gojobori, Takashi

    2016-01-01

    Since the human genome draft sequence was in public for the first time in 2000, genomic analyses have been intensively extended to the population level. The following three international projects are good examples for large-scale studies of human

  8. Degradation of type IV collagen by neoplastic human skin fibroblasts

    International Nuclear Information System (INIS)

    Sheela, S.; Barrett, J.C.

    1985-01-01

    An assay for the degradation of type IV (basement membrane) collagen was developed as a biochemical marker for neoplastic cells from chemically transformed human skin fibroblasts. Type IV collagen was isolated from basement membrane of Syrian hamster lung and type I collagen was isolated from rat tails; the collagens were radioactively labelled by reductive alkylation. The abilities of normal (KD) and chemically transformed (Hut-11A) human skin fibroblasts to degrade the collagens were studied. A cell-associated assay was performed by growing either normal or transformed cells in the presence of radioactively labelled type IV collagen and measuring the released soluble peptides in the medium. This assay demonstrated that KD cells failed to synthesize an activity capable of degrading type IV collagen, whereas Hut-11A cells degraded type IV collagen in a linear manner for up to 4 h. Human serum at very low concentrations, EDTA and L-cysteine inhibited the enzyme activity, whereas protease inhibitors like phenylmethylsulfonyl fluoride, N-ethylmaleimide or soybean trypsin inhibitor did not inhibit the enzyme from Hut-11A cells. These results suggest that the ability to specifically degrade type IV collagen may be an important marker for neoplastic human fibroblasts and support a role for this collagenase in tumor cell invasion.

  9. Dipeptidyl peptidase IV in two human glioma cell lines

    Directory of Open Access Journals (Sweden)

    A Sedo

    2009-12-01

    There is growing evidence that dipeptidyl peptidase IV [DPP-IV, EC 3.4.14.5] takes part in the metabolism of biologically active peptides participating in the regulation of growth and transformation of glial cells. However, knowledge of DPP-IV expression in human glial and glioma cells is still very limited. In this study, using histochemical and biochemical techniques, DPP-IV activity was demonstrated in two commercially available human glioma cell lines of different transformation degree, represented by the U373 astrocytoma (Grade III) and U87 glioblastoma multiforme (Grade IV) lines. Higher total activity of the enzyme, as well as its preferential localisation in the plasma membrane, was observed in U87 cells. Compared to the U373 population, U87 cells were morphologically more pleomorphic, cycled at a lower rate and expressed less Glial Fibrillary Acidic Protein. The data revealed a positive correlation between the degree of transformation of the cells and the activity of DPP-IV. The great difference in expression of this enzyme, together with the phenotypic differences of the cells, makes these lines a suitable standard model for further studies of the function of this enzyme in human glioma cells.

  10. Plasma metabolism of apolipoprotein A-IV in humans

    International Nuclear Information System (INIS)

    Ghiselli, G.; Krishnan, S.; Beigel, Y.; Gotto, A.M. Jr.

    1986-01-01

    As assessed by molecular sieve chromatography and quantitation by a specific radioimmunoassay, apoA-IV is associated in plasma with the triglyceride-rich lipoproteins, with a high density lipoprotein (HDL) subfraction of smaller size than HDL3, and with the plasma lipoprotein-free fraction (LFF). In this study, the turnover of apoA-IV associated with the triglyceride-rich lipoproteins, HDL and LFF was investigated in vivo in normal volunteers. Human apoA-IV isolated from thoracic duct lymph chylomicrons was radioiodinated and incubated with plasma withdrawn from normal volunteers after a fatty meal. Radioiodinated apoA-IV-labeled triglyceride-rich lipoproteins, HDL, and LFF were then isolated by chromatography on an AcA 34 column. Shortly after the injection of the radioiodinated apoA-IV-labeled triglyceride-rich lipoproteins, most of the radioactivity could be recovered in the HDL and LFF column fractions. On the other hand, when radioiodinated apoA-IV-labeled HDL or LFF were injected, the radioactivity remained with the originally injected fractions at all times. The residence time in plasma of 125I-labeled apoA-IV, when injected in association with HDL or LFF, was 1.61 and 0.55 days, respectively. When 125I-labeled apoA-IV was injected as a free protein, the radioactivity distributed rapidly among the three plasma pools in proportion to their mass. The overall fractional catabolic rate of apoA-IV in plasma was measured in three normal subjects and averaged 1.56 pools per day. The mean degradation rate of apoA-IV was 8.69 mg/kg per day.

  11. Think Big! The Human Condition Project

    Science.gov (United States)

    Metcalfe, Gareth

    2014-01-01

    How can educators provide children with a genuine experience of carrying out an extended scientific investigation? And can teachers change the perception of what it means to be a scientist? These were key questions that lay behind "The Human Condition" project, an initiative funded by the Primary Science Teaching Trust to explore a new…

  12. The Human Genome Project: big science transforms biology and medicine.

    Science.gov (United States)

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called 'big science' - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project.

  13. Streptococcus agalactiae Serotype IV in Humans and Cattle, Northern Europe

    DEFF Research Database (Denmark)

    Lyhs, Ulrike; Kulkas, Laura; Katholm, Jorgen

    2016-01-01

    not differentiate between populations isolated from different host species. Isolates from humans and cattle differed in lactose fermentation, which is encoded on the accessory genome and represents an adaptation to the bovine mammary gland. Serotype IV-ST196 isolates were obtained from multiple dairy herds in both...

  14. Big Data and Intelligence: Applications, Human Capital, and Education

    Directory of Open Access Journals (Sweden)

    Michael Landon-Murray

    2016-06-01

    The potential for big data to contribute to the US intelligence mission goes beyond bulk collection, social media and counterterrorism. Applications will speak to a range of issues of major concern to intelligence agencies, from military operations to climate change to cyber security. There are challenges too: procurement lags, data stovepiping, separating signal from noise, sources and methods, a range of normative issues, and, central to managing these challenges, human capital. These potential applications and challenges are discussed, and a closer look at what data scientists do in the Intelligence Community (IC) is offered. Effectively filling the ranks of the IC's data science workforce will depend on the provision of well-trained data scientists from the higher education system. Program offerings at America's top fifty universities are thus surveyed (just a few years ago there were reportedly no degrees in data science). One Master's program that has melded data science with intelligence is examined, as well as a university big data research center focused on security and intelligence. This discussion goes a long way toward clarifying the prospective uses of data science in intelligence while probing perhaps the key challenge to optimal application of big data in the IC.

  15. Big History or the 13800 million years from the Big Bang to the Human Brain

    Science.gov (United States)

    Gústafsson, Ludvik E.

    2017-04-01

    Big History is the integrated history of the Cosmos, Earth, Life, and Humanity. It is an attempt to understand our existence as a continuous unfolding of processes leading to ever more complex structures. Three major steps in the development of the Universe can be distinguished, the first being the creation of matter/energy and forces in the context of an expanding universe, while the second and third steps were reached when completely new qualities of matter came into existence.
    1. Matter comes out of nothing. Quantum fluctuations and the inflation event are thought to be responsible for the creation of stable matter particles in what is called the Big Bang. Along with simple particles the universe was formed. Later, larger particles like atoms and the simplest chemical elements, hydrogen and helium, evolved. Gravitational contraction of hydrogen and helium formed the first stars and, later on, the first galaxies. Massive stars ended their lives in violent explosions, releasing heavier elements like carbon, oxygen, nitrogen, sulfur and iron into the universe. Subsequent star formation led to star systems with bodies containing these heavier elements.
    2. Matter starts to live. About 9,200 million years after the Big Bang, a rather inconspicuous star of middle size formed in one of a billion galaxies. The leftovers of the star formation clumped into bodies rotating around the central star. In some of them, elements like silicon, oxygen, iron and many others became the dominant matter. On the third of these bodies from the central star, much of the surface was covered with an already very common chemical compound in the universe: water. Liquid water and plenty of various elements, especially carbon, were the ingredients of very complex chemical compounds that made up even more complex structures. These were able to replicate themselves. Life had appeared, the only occasion that we human beings know of. Life evolved subsequently, leading eventually to the formation of multicellular

  16. A Big Bang model of human colorectal tumor growth.

    Science.gov (United States)

    Sottoriva, Andrea; Kang, Haeyoun; Ma, Zhicheng; Graham, Trevor A; Salomon, Matthew P; Zhao, Junsong; Marjoram, Paul; Siegmund, Kimberly; Press, Michael F; Shibata, Darryl; Curtis, Christina

    2015-03-01

    What happens in early, still undetectable human malignancies is unknown because direct observations are impractical. Here we present and validate a 'Big Bang' model, whereby tumors grow predominantly as a single expansion producing numerous intermixed subclones that are not subject to stringent selection and where both public (clonal) and most detectable private (subclonal) alterations arise early during growth. Genomic profiling of 349 individual glands from 15 colorectal tumors showed an absence of selective sweeps, uniformly high intratumoral heterogeneity (ITH) and subclone mixing in distant regions, as postulated by our model. We also verified the prediction that most detectable ITH originates from early private alterations and not from later clonal expansions, thus exposing the profile of the primordial tumor. Moreover, some tumors appear 'born to be bad', with subclone mixing indicative of early malignant potential. This new model provides a quantitative framework to interpret tumor growth dynamics and the origins of ITH, with important clinical implications.

  17. How Big Data Fast Tracked Human Mobility Research and the Lessons for Animal Movement Ecology

    Directory of Open Access Journals (Sweden)

    Michele Thums

    2018-02-01

    The rise of the internet, coupled with technological innovations such as smartphones, has generated massive volumes of geo-referenced data (big data) on human mobility. This has allowed the number of studies of human mobility to rapidly overtake those of animal movement. Today, telemetry studies of animals are also approaching big data status. Here, we review recent advances in studies of human mobility and identify the opportunities they present for advancing our understanding of animal movement. We describe key analytical techniques, potential bottlenecks and a roadmap for progress toward a synthesis of movement patterns of wild animals.

  18. How Big Data Fast Tracked Human Mobility Research and the Lessons for Animal Movement Ecology

    KAUST Repository

    Thums, Michele; Fernández-Gracia, Juan; Sequeira, Ana M. M.; Eguíluz, Víctor M.; Duarte, Carlos M.; Meekan, Mark G.

    2018-01-01

    The rise of the internet, coupled with technological innovations such as smartphones, has generated massive volumes of geo-referenced data (big data) on human mobility. This has allowed the number of studies of human mobility to rapidly overtake those of animal movement. Today, telemetry studies of animals are also approaching big data status. Here, we review recent advances in studies of human mobility and identify the opportunities they present for advancing our understanding of animal movement. We describe key analytical techniques, potential bottlenecks and a roadmap for progress toward a synthesis of movement patterns of wild animals.

  19. How Big Data Fast Tracked Human Mobility Research and the Lessons for Animal Movement Ecology

    KAUST Repository

    Thums, Michele

    2018-02-13

    The rise of the internet, coupled with technological innovations such as smartphones, has generated massive volumes of geo-referenced data (big data) on human mobility. This has allowed the number of studies of human mobility to rapidly overtake those of animal movement. Today, telemetry studies of animals are also approaching big data status. Here, we review recent advances in studies of human mobility and identify the opportunities they present for advancing our understanding of animal movement. We describe key analytical techniques, potential bottlenecks and a roadmap for progress toward a synthesis of movement patterns of wild animals.

  20. Using Big Data to Understand the Human Condition: The Kavli HUMAN Project.

    Science.gov (United States)

    Azmak, Okan; Bayer, Hannah; Caplin, Andrew; Chun, Miyoung; Glimcher, Paul; Koonin, Steven; Patrinos, Aristides

    2015-09-01

    Until now, most large-scale studies of humans have either focused on very specific domains of inquiry or have relied on between-subjects approaches. While these previous studies have been invaluable for revealing important biological factors in cardiac health or social factors in retirement choices, no single repository contains anything like a complete record of the health, education, genetics, environmental, and lifestyle profiles of a large group of individuals at the within-subject level. This seems critical today because emerging evidence about the dynamic interplay between biology, behavior, and the environment points to a pressing need for just the kind of large-scale, long-term synoptic dataset that does not yet exist at the within-subject level. At the same time that the need for such a dataset is becoming clear, there is also growing evidence that just such a synoptic dataset may now be obtainable, at least at moderate scale, using contemporary big data approaches. To this end, we introduce the Kavli HUMAN Project (KHP), an effort to aggregate data from 2,500 New York City households in all five boroughs (roughly 10,000 individuals) whose biology and behavior will be measured using an unprecedented array of modalities over 20 years. It will also richly measure the environmental conditions and events that KHP members experience, using a geographic information system database of unparalleled scale, currently under construction in New York. In this manner, KHP will offer both synoptic and granular views of how human health and behavior coevolve over the life cycle and why they evolve differently for different people. In turn, we argue that this will allow for new discovery-based scientific approaches, rooted in big data analytics, to improving the health and quality of human life, particularly in urban contexts.

  1. Deserts in the Deluge: TerraPopulus and Big Human-Environment Data.

    Science.gov (United States)

    Manson, S M; Kugler, T A; Haynes, D

    2016-01-01

    Terra Populus, or TerraPop, is a cyberinfrastructure project that integrates, preserves, and disseminates massive data collections describing characteristics of the human population and environment over the last six decades. TerraPop has made a number of GIScience advances in the handling of big spatial data to make information interoperable between formats and across scientific communities. In this paper, we describe challenges of these data, or 'deserts in the deluge' of data, that are common to spatial big data more broadly, and explore computational solutions specific to microdata, raster, and vector data models.

  2. Big Hat, No Cattle: Managing Human Resources, Part 2.

    Science.gov (United States)

    Skinner, Wickham

    1982-01-01

    The author discusses why business has difficulty in motivating its employees and proposes a new approach to developing human resources. Discusses mistaken premises, personnel and supervision, setting a long-term goal, changing management's philosophy, and selling human resource development as a company priority. (CT)

  3. Big Hat, No Cattle: Managing Human Resources, Part 1.

    Science.gov (United States)

    Skinner, Wickham

    1982-01-01

    Presents an in-depth analysis of problems and a suggested approach to developing human resources which goes beyond identifying symptoms and provides a comprehensive perspective for building an effective work force. (JOW)

  4. Schooling for Humanity: When Big Brother Isn't Watching.

    Science.gov (United States)

    Solmitz, David O.

    Most educational reform initiatives of the past 20 years are geared towards ensuring that the United States dominates the emerging global economy. What is lost in this rush to the top of the materialist heap is an education for the more enduring human values: creativity, intellectual development, care, social justice, and democracy. In this book,…

  5. The dynamics of big data and human rights: the case of scientific research.

    Science.gov (United States)

    Vayena, Effy; Tasioulas, John

    2016-12-28

    In this paper, we address the complex relationship between big data and human rights. Because this is a vast terrain, we restrict our focus in two main ways. First, we concentrate on big data applications in scientific research, mostly health-related research. And, second, we concentrate on two human rights: the familiar right to privacy and the less well-known right to science. Our contention is that human rights interact in potentially complex ways with big data, not only constraining it, but also enabling it in various ways; and that such rights are dynamic in character, rather than fixed once and for all, changing in their implications over time in line with changes in the context we inhabit, and also as they interact among themselves in jointly responding to the opportunities and risks thrown up by a changing world. Understanding this dynamic interaction of human rights is crucial for formulating an ethic tailored to the realities (the new capabilities and risks) of the rapidly evolving digital environment. This article is part of the themed issue 'The ethical impact of data science'. © 2016 The Author(s).

  6. Short faces, big tongues: developmental origin of the human chin.

    Directory of Open Access Journals (Sweden)

    Michael Coquerelle

    During the course of human evolution, the retraction of the face underneath the braincase, and closer to the cervical column, has reduced the horizontal dimension of the vocal tract. By contrast, the relative size of the tongue has not been reduced, implying a rearrangement of the space at the back of the vocal tract to allow breathing and swallowing. This may have left a morphological signature such as a chin (mental prominence) that can potentially be interpreted in Homo. Although the chin has long been considered an autapomorphic trait of Homo sapiens, various extinct hominins show different forms of mental prominence. These features may be the evolutionary by-product of equivalent developmental constraints correlated with an enlarged tongue. In order to investigate the developmental mechanisms related to this hypothesis, we compare 34 modern human infants against 8 chimpanzee fetuses, in whom development of the mandibular symphysis passes through similar stages. The study sets out to test whether the shared ontogenetic shape changes of the symphysis observed in both species are driven by the same factor: space restriction at the back of the vocal tract and the associated arrangement of the tongue and hyoid bone. We apply geometric morphometric methods to an extensive three-dimensional configuration of anatomical landmarks and semilandmarks, capturing the geometry of the cervico-craniofacial complex including the hyoid bone, tongue muscle and the mandible. We demonstrate that in both species, the forward displacement of the mental region derives from the arrangement of the tongue and hyoid bone, in order to cope with the relative horizontal narrowing of the oral cavity. Because humans and chimpanzees share this pattern of developmental integration, the different forms of mental prominence seen in some extinct hominins likely originate from equivalent ontogenetic constraints. Variations in this process could account for similar morphologies.

  7. Human antibody recognition of antigenic site IV on Pneumovirus fusion proteins.

    Science.gov (United States)

    Mousa, Jarrod J; Binshtein, Elad; Human, Stacey; Fong, Rachel H; Alvarado, Gabriela; Doranz, Benjamin J; Moore, Martin L; Ohi, Melanie D; Crowe, James E

    2018-02-01

    Respiratory syncytial virus (RSV) is a major human pathogen that infects the majority of children by two years of age. The RSV fusion (F) protein is a primary target of human antibodies, and it has several antigenic regions capable of inducing neutralizing antibodies. Antigenic site IV is preserved in both the pre-fusion and post-fusion conformations of RSV F. Antibodies to antigenic site IV have been described that bind and neutralize both RSV and human metapneumovirus (hMPV). To explore the diversity of binding modes at antigenic site IV, we generated a panel of four new human monoclonal antibodies (mAbs), and competition-binding suggested that these mAbs bind at antigenic site IV. Mutagenesis experiments revealed that binding and neutralization of two mAbs (3M3 and 6F18) depended on arginine (R) residue R429. We discovered two R429-independent mAbs (17E10 and 2N6) at this site that neutralized an RSV R429A mutant strain, and one of these mAbs (17E10) neutralized both RSV and hMPV. To determine the mechanism of cross-reactivity, we performed competition-binding, recombinant protein mutagenesis, peptide binding, and electron microscopy experiments. It was determined that the human cross-reactive mAb 17E10 binds to RSV F with a binding pose similar to 101F, which may be indicative of cross-reactivity with hMPV F. The data presented provide new concepts in RSV immune recognition and vaccine design, as we describe the novel idea that binding pose may influence mAb cross-reactivity between RSV and hMPV. Characterization of the site IV epitope bound by human antibodies may inform the design of a pan-Pneumovirus vaccine.

  8. How They Move Reveals What Is Happening: Understanding the Dynamics of Big Events from Human Mobility Pattern

    Directory of Open Access Journals (Sweden)

    Jean Damascène Mazimpaka

    2017-01-01

    The context in which a moving object moves contributes to the movement pattern observed. Likewise, the movement pattern reflects the properties of the movement context. In particular, big events influence human mobility, depending on the dynamics of the events. However, this influence has not been explored to understand big events. In this paper, we propose a methodology for learning about big events from human mobility patterns. The methodology involves extracting and analysing the stopping, approaching, and moving-away interactions between public transportation vehicles and the geographic context. The analysis is carried out at two different temporal granularity levels to discover global and local patterns. The results of evaluating this methodology on bus trajectories demonstrate that it can discover occurrences of big events from mobility patterns, roughly estimate event start and end times, and reveal the temporal patterns of arrival and departure of event attendees. This knowledge can be usefully applied in transportation and event planning and management.
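The stopping / approaching / moving-away interactions the abstract names can be sketched as follows. This is an illustrative simplification, not the paper's actual method: the speed threshold, the toy trajectory, and the venue location are all invented for the example.

```python
# Each pair of consecutive GPS fixes of a transport vehicle is classified as a
# stopping, approaching, or moving-away interaction with a venue, and stopping
# interactions are aggregated per hour (a coarse temporal granularity). A
# surge of stops near the venue hints at an event's start and end times.
import math

def classify(p0, p1, venue, stop_speed=0.5):
    """p0, p1 are (x, y, t) fixes in metres/seconds; venue is (x, y)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    speed = dist(p0, p1) / max(p1[2] - p0[2], 1e-9)
    if speed < stop_speed:
        return "stopping"
    return "approaching" if dist(p1, venue) < dist(p0, venue) else "moving-away"

def stops_per_hour(track, venue):
    """Count stopping interactions per hour along a trajectory."""
    counts = {}
    for p0, p1 in zip(track, track[1:]):
        if classify(p0, p1, venue) == "stopping":
            hour = int(p1[2] // 3600)
            counts[hour] = counts.get(hour, 0) + 1
    return counts

venue = (100, 0)
track = [(0, 0, 0), (100, 0, 100), (100, 0, 7200), (100, 1, 7300), (100, 1, 7400)]
print(stops_per_hour(track, venue))  # {2: 3}: a burst of stops in hour 2
```

A finer granularity (e.g. per-minute counts) over the same interactions would correspond to the local-pattern level the abstract mentions.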

  9. Sandwich-type enzyme immunoassay for big endothelin-1 in plasma: concentrations in healthy human subjects unaffected by sex or posture.

    Science.gov (United States)

    Aubin, P; Le Brun, G; Moldovan, F; Villette, J M; Créminon, C; Dumas, J; Homyrda, L; Soliman, H; Azizi, M; Fiet, J

    1997-01-01

    A sandwich-type enzyme immunoassay has been developed for measuring human big endothelin-1 (big ET-1) in human plasma and supernatant fluids from human cell cultures. Big ET-1 is the precursor of endothelin 1 (ET-1), the most potent vasoconstrictor known. A rabbit antibody raised against the big ET-1 COOH-terminus fragment was used as an immobilized antibody (anti-P16). The Fab' fragment of a monoclonal antibody (1B3) raised against the ET-1 loop fragment was used as the enzyme-labeled antibody, after being coupled to acetylcholinesterase. The lowest detectable value in the assay was 1.2 pg/mL (0.12 pg/well). The assay was highly specific for big ET-1, demonstrating no cross-reactivity with ET-1, big endothelin-2 (big ET-2), and big endothelin-3 (big ET-3). We used this assay to evaluate the effect of two different postural positions (supine and standing) on plasma big ET-1 concentrations in 11 male and 11 female healthy subjects. Data analysis revealed that neither sex nor body position influenced plasma big ET-1 concentrations. This assay should thus permit the detection of possible variations in plasma concentrations of big ET-1 in certain pathologies and, in association with ET-1 assay, make possible in vitro study of endothelin-converting enzyme activity in cell models. Such studies could clarify the physiological and clinical roles of this family of peptides.

  10. MCF-7 human mammary adenocarcinoma cells exhibit augmented responses to human insulin on a collagen IV surface

    DEFF Research Database (Denmark)

    Listov-Saabye, Nicolai; Jensen, Marianne Blirup; Kiehr, Benedicte

    2009-01-01

    Human mammary cell lines are extensively used for preclinical safety assessment of insulin analogs. However, it is essentially unknown how mitogenic responses can be optimized in mammary cell-based systems. We developed an insulin mitogenicity assay in MCF-7 human mammary adenocarcinoma cells under low serum (0.1% FCS) and phenol red-free conditions, with 3H-thymidine incorporation as endpoint. Based on EC50 values determined from 10-fold dilution series, beta-estradiol was the most potent mitogen, followed by human IGF-1, human AspB10 insulin and native human insulin. AspB10 insulin was significantly more mitogenic than native insulin, validating the ability of the assay to identify hypermitogenic human insulin analogs. With MCF-7 cells on a collagen IV surface, the ranking of mitogens was maintained, but fold mitogenic responses and the dynamic range and steepness of dose-response curves were...
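The record derives potency rankings from EC50 values over 10-fold dilution series. A minimal way to estimate an EC50 from such a series is log-linear interpolation at the half-maximal response; the Hill-curve data below are synthetic, not the assay's measurements.

```python
import math

def ec50_from_series(doses, responses):
    """Estimate EC50 by linear interpolation in log10(dose) at half-max response."""
    half = max(responses) / 2.0
    points = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(points, points[1:]):
        if r0 <= half <= r1:  # crossing on a rising segment
            frac = (half - r0) / (r1 - r0)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    raise ValueError("half-maximal response not bracketed by the dilution series")

# Synthetic 10-fold dilution series following a simple Hill curve with EC50 = 1.0
doses = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
responses = [d / (d + 1.0) for d in doses]
est = ec50_from_series(doses, responses)
```

A full analysis would fit a four-parameter logistic model instead; the interpolation above is only the back-of-the-envelope version of the same idea.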

  11. Human factors/ergonomics implications of big data analytics: Chartered Institute of Ergonomics and Human Factors annual lecture.

    Science.gov (United States)

    Drury, Colin G

    2015-01-01

    In recent years, advances in sensor technology, connectedness and computational power have come together to produce huge data-sets. The treatment and analysis of these data-sets is known as big data analytics (BDA), and by the somewhat related term data mining. Fields allied to human factors/ergonomics (HFE), e.g. statistics, have developed computational methods to derive meaningful, actionable conclusions from these databases. This paper examines BDA, often characterised by volume, velocity and variety, giving examples of successful BDA use. This examination provides context by considering examples of using BDA on human data, using BDA in HFE studies, and studies of how people perform BDA. Significant issues for HFE are the reliance of BDA on correlation rather than hypotheses and theory, the ethics of BDA and the use of HFE in data visualisation.

  12. A big data approach to the concordance of the toxicity of pharmaceuticals in animals and humans.

    Science.gov (United States)

    Clark, Matthew; Steger-Hartmann, Thomas

    2018-07-01

    Although lack of efficacy is an important cause of late-stage attrition in drug development, the shortcomings in the translation of toxicities observed during preclinical development to observations in clinical trials or post-approval are an ongoing topic of research. The concordance between preclinical and clinical safety observations has been analyzed only on relatively small data sets, mostly over short time periods of drug approvals. We therefore explored the feasibility of a big-data analysis on a set of 3,290 approved drugs and formulations for which 1,637,449 adverse events were reported for both humans and animal species in regulatory submissions over a period of more than 70 years. The events reported in five species - rat, dog, mouse, rabbit, and cynomolgus monkey - were treated as diagnostic tests for human events and the diagnostic power was computed for each event/species pair using likelihood ratios. The animal-human translation of many key observations is confirmed as being predictive, such as QT prolongation and arrhythmias in dog. Our study confirmed the general predictivity of animal safety observations for humans, but also identified issues of such automated analyses, related on the one hand to data curation and controlled vocabularies, and on the other to methodological changes over the course of time. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
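Treating an animal finding as a diagnostic test for the human event, as this study does, reduces each event/species pair to a 2x2 table from which likelihood ratios follow directly. The counts below are invented for illustration, not taken from the study's data.

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Likelihood ratios for an animal finding as a 'test' for the human event.

    tp: drugs with the event reported in both animals and humans
    fp: animal report only; fn: human report only; tn: neither.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)   # >1: animal finding is predictive
    lr_negative = (1 - sensitivity) / specificity   # <1: absence is reassuring
    return lr_positive, lr_negative

# Illustrative counts for one hypothetical event/species pair
lr_pos, lr_neg = likelihood_ratios(tp=30, fp=20, fn=10, tn=140)
```

Here the positive likelihood ratio of 6 would mean a drug with the animal finding is six times more likely to show the human event than one without it, which is the kind of "diagnostic power" the paper computes at scale.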

  13. Angiotensin IV and the human esophageal mucosa: An exploratory study in healthy subjects and gastroesophageal reflux disease patients.

    Science.gov (United States)

    Björkman, Eleonora; Edebo, Anders; Fändriks, Lars; Casselbrant, Anna

    2015-09-01

    The human esophageal mucosa expresses various components of the renin-angiotensin system (RAS), e.g. the main effector peptide angiotensin II (AngII). The aim of this study was to investigate the esophageal presence of angiotensin III (AngIII) and angiotensin IV (AngIV) forming enzymes and the AngIV receptor (AT4R). The aim was also to study the actions of AngIV and to look for aberrations in patients with gastroesophageal reflux disease (GERD). Esophageal biopsies were collected from healthy volunteers (n: 19) and individuals with erosive reflux disease (n: 14). Gene transcripts and protein expression of aminopeptidase A, -B and -M, and the AT4R were investigated by reverse transcriptase polymerase chain reaction (rt-PCR), western blot (WB) and immunohistochemistry (IHC). The functional impact of AngIV was examined in an Ussing chamber. Aminopeptidase A, -B and -M and the AT4R were expressed in the esophageal epithelium. The AT4R was less prominent in certain areas in the mucosa of reflux patients. AngIV influenced the esophageal epithelial ion transport. The impact was lower in patients with GERD. The AT4R and formation enzymes of AngIII and AngIV are present in the human esophageal epithelium. Moreover, the present results suggest that AngIV exerts a regulatory impact on the epithelium and that RAS is involved in mucosal aberrations associated with GERD. © The Author(s) 2014.

  14. Where are human subjects in Big Data research? The emerging ethics divide

    Directory of Open Access Journals (Sweden)

    Jacob Metcalf

    2016-06-01

    Full Text Available There are growing discontinuities between the research practices of data science and established tools of research ethics regulation. Some of the core commitments of existing research ethics regulations, such as the distinction between research and practice, cannot be cleanly exported from biomedical research to data science research. Such discontinuities have led some data science practitioners and researchers to move toward rejecting ethics regulations outright. These shifts occur at the same time as a proposal for major revisions to the Common Rule—the primary regulation governing human-subjects research in the USA—is under consideration for the first time in decades. We contextualize these revisions in long-running complaints about regulation of social science research and argue data science should be understood as continuous with social sciences in this regard. The proposed regulations are more flexible and scalable to the methods of non-biomedical research, yet problematically largely exclude data science methods from human-subjects regulation, particularly uses of public datasets. The ethical frameworks for Big Data research are highly contested and in flux, and the potential harms of data science research are unpredictable. We examine several contentious cases of research harms in data science, including the 2014 Facebook emotional contagion study and the 2016 use of geographical data techniques to identify the pseudonymous artist Banksy. To address disputes about application of human-subjects research ethics in data science, critical data studies should offer a historically nuanced theory of “data subjectivity” responsive to the epistemic methods, harms and benefits of data science and commerce.

  15. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension that is related to storage, analytics and visualization of big data; the human aspects of big data; and, in addition, the process management dimension that involves in a technological and business approach the aspects of big data management.

  16. Expression and enzymatic activity of dipeptidyl peptidase-IV in human astrocytic tumours are associated with tumour grade

    Czech Academy of Sciences Publication Activity Database

    Stremeňová, J.; Křepela, E.; Mareš, Vladislav; Trim, J.; Dbalý, V.; Marek, J.; Vaníčková, Z.; Lisá, Věra; Yea, Ch.; Šedo, A.

    2007-01-01

    Roč. 31, č. 4 (2007), s. 785-792 ISSN 1019-6439 R&D Projects: GA MZd NR8105 Institutional research plan: CEZ:AV0Z50110509 Keywords : Dipeptidyl peptidase-IV * human brain tumors * DASH molecules Subject RIV: FD - Oncology ; Hematology Impact factor: 2.295, year: 2007

  17. Combining Human Computing and Machine Learning to Make Sense of Big (Aerial) Data for Disaster Response.

    Science.gov (United States)

    Ofli, Ferda; Meier, Patrick; Imran, Muhammad; Castillo, Carlos; Tuia, Devis; Rey, Nicolas; Briant, Julien; Millet, Pauline; Reinhard, Friedrich; Parkan, Matthew; Joost, Stéphane

    2016-03-01

    Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module, for example in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The
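The crowd-to-classifier hand-off described in this record can be sketched with a toy nearest-centroid model: volunteers label feature vectors extracted from image tiles, and the trained model then labels unseen tiles. The feature names, labels, and numbers are invented for illustration; AIDR's actual pipeline is considerably more sophisticated.

```python
def train_centroids(labeled):
    """Average the crowd-annotated feature vectors per label."""
    sums, counts = {}, {}
    for label, vec in labeled:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(centroids[lab], vec))

# Crowd-annotated tiles: hypothetical (edge_density, debris_texture) features
annotations = [("damaged", [0.9, 0.8]), ("damaged", [0.8, 0.9]),
               ("intact", [0.1, 0.2]), ("intact", [0.2, 0.1])]
model = train_centroids(annotations)
label = classify(model, [0.85, 0.75])
```

The point of the sketch is the division of labour: humans supply the labels once, and the model generalizes to the stream of new tiles in real time.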

  18. Identification of RegIV as a novel GLI1 target gene in human pancreatic cancer.

    Directory of Open Access Journals (Sweden)

    Feng Wang

    2011-04-01

    Full Text Available GLI1 is the key transcriptional factor in the Hedgehog signaling pathway in pancreatic cancer. RegIV is associated with regeneration, and cell growth, survival, adhesion and resistance to apoptosis. We aimed to study RegIV expression in pancreatic cancer and its relationship to GLI1.GLI1 and RegIV expression were evaluated in tumor tissue and adjacent normal tissues of pancreatic cancer patients and 5 pancreatic cancer cell lines by qRT-PCR, Western blot, and immunohistochemistry (IHC, and the correlation between them. The GLI1-shRNA lentiviral vector was constructed and transfected into PANC-1, and lentiviral vector containing the GLI1 expression sequence was constructed and transfected into BxPC-3. GLI1 and RegIV expression were evaluated by qRT-PCR and Western blot. Finally we demonstrated RegIV to be the target of GLI1 by chromatin immunoprecipitation (CHIP and electrophoretic mobility shift assays (EMSA.The results of IHC and qRT-PCR showed that RegIV and GLI1 expression was higher in pancreatic cancer tissues versus adjacent normal tissues (p<0.001. RegIV expression correlated with GLI1 expression in these tissues (R = 0.795, p<0.0001. These results were verified for protein (R = 0.939, p = 0.018 and mRNA expression (R = 0.959, p = 0.011 in 5 pancreatic cancer cell lines. RegIV mRNA and protein expression was decreased (94.7±0.3%, 84.1±0.5%; respectively when GLI1 was knocked down (82.1±3.2%, 76.7±2.2%; respectively by the RNAi technique. GLI1 overexpression in mRNA and protein level (924.5±5.3%, 362.1±3.5%; respectively induced RegIV overexpression (729.1±4.3%, 339.0±3.7%; respectively. Moreover, CHIP and EMSA assays showed GLI1 protein bound to RegIV promotor regions (GATCATCCA in pancreatic cancer cells.GLI1 promotes RegIV transcription by binding to the RegIV gene promoter in pancreatic cancer.

  19. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative......For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers...

  20. Near Real Time Analytics of Human Sensor Networks in the Realm of Big Data

    Science.gov (United States)

    Aulov, O.; Halem, M.

    2012-12-01

    With the prolific development of social media, emergency responders have an increasing interest in harvesting social media from outlets such as Flickr, Twitter, and Facebook, in order to assess the scale and specifics of extreme events including wild fires, earthquakes, terrorist attacks, oil spills, etc. A number of experimental platforms have successfully been implemented to demonstrate the utilization of social media data in extreme events, including Twitter Earthquake Detector, which relied on tweets for earthquake monitoring; AirTwitter, which used tweets for air quality reporting; and our previous work, using Flickr data as boundary value forcings to improve the forecast of oil beaching in the aftermath of the Deepwater Horizon oil spill. The majority of these platforms addressed a narrow, specific type of emergency and harvested data from a particular outlet. We demonstrate an interactive framework for monitoring, mining and analyzing a plethora of heterogeneous social media sources for a diverse range of extreme events. Our framework consists of three major parts: a real-time social media aggregator, a data processing and analysis engine, and a web-based visualization and reporting tool. The aggregator gathers tweets, Facebook comments from fan pages, Google+ posts, forum discussions, blog posts (such as LiveJournal and Blogger.com), images from photo-sharing platforms (such as Flickr, Picasa), videos from video-sharing platforms (YouTube, Vimeo), and so forth. The data processing and analysis engine pre-processes the aggregated information and annotates it with geolocation and sentiment information. In many cases, the metadata of the social media posts does not contain geolocation information; however, a human reader can easily guess from the body of the text what location is discussed. We are automating this task by use of Named Entity Recognition (NER) algorithms and a gazetteer service. The visualization and reporting tool provides a web-based, user
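The geolocation step described here, recovering a place from the post body when the metadata has none, can be sketched as a simple gazetteer lookup. Real systems combine NER with a full gazetteer service; the two entries and the post text below are invented.

```python
# Toy gazetteer: lowercase place name -> (lat, lon). Entries are illustrative.
GAZETTEER = {
    "baton rouge": (30.45, -91.19),
    "new orleans": (29.95, -90.07),
}

def geolocate(post_text):
    """Return (place, coords) pairs for gazetteer names found in the post body."""
    text = post_text.lower()
    return [(name, coords) for name, coords in GAZETTEER.items() if name in text]

hits = geolocate("Flooding reported near Baton Rouge, volunteers needed")
```

A production pipeline would first run NER to extract candidate location mentions and only then resolve them against the gazetteer, which avoids false hits from substring matches.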

  1. Sexual dimorphism in relation to big-game hunting and economy in modern human populations.

    Science.gov (United States)

    Collier, S

    1993-08-01

    Postcranial skeletal data from two recent Eskimo populations are used to test David Frayer's model of sexual dimorphism reduction in Europe between the Upper Paleolithic and Mesolithic. Frayer argued that a change from big-game hunting and adoption of new technology in the Mesolithic reduced selection for large body size in males and led to a reduction in skeletal sexual dimorphism. Though aspects of Frayer's work have been criticized in the literature, the association of big-game hunting and high sexual dimorphism is untested. This study employs univariate and multivariate analysis to test that association by examining sexual dimorphism of cranial and postcranial bones of two recent Alaskan Eskimo populations, one being big-game (whale and other large marine mammal) hunting people, and the second being salmon fishing, riverine people. While big-game hunting influences skeletal robusticity, it cannot be said to lead to greater sexual dimorphism generally. The two populations had different relative sexual dimorphism levels for different parts of the body. Notably, the big-game hunting (whaling) Eskimos had the lower multivariate dimorphism in the humerus, which could be expected to be the structure under greatest exertion by such hunting in males. While the exertions of the whale hunting economic activities led to high skeletal robusticity, as predicted by Frayer's model, this was true of the females as well as the males, resulting in low sexual dimorphism in some features. Females are half the sexual dimorphism equation, and they cannot be seen as constants in any model of economic behavior.
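The skeletal sexual dimorphism this study compares is commonly quantified per measurement as a percentage index. The humeral length means below are invented numbers chosen only to show the computation, not Collier's data.

```python
def dimorphism_index(male_mean, female_mean):
    """Percent by which the male mean exceeds the female mean for one measurement."""
    return 100.0 * (male_mean - female_mean) / female_mean

# Hypothetical humeral length means (mm) for two populations
whaling = dimorphism_index(male_mean=318.0, female_mean=300.0)
riverine = dimorphism_index(male_mean=321.0, female_mean=300.0)
```

Computed per bone and then combined multivariately, indices like this are what allow the study to show that dimorphism levels differ by body region rather than tracking subsistence strategy uniformly.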

  2. Septicemia caused by the gram-negative bacterium CDC IV c-2 in an immunocompromised human.

    OpenAIRE

    Dan, M; Berger, S A; Aderka, D; Levo, Y

    1986-01-01

    A 37-year-old man with plasma cell leukemia developed nonfatal septicemia caused by the gram-negative bacterium CDC IV c-2. Recovery followed appropriate treatment with antibiotics. The biochemical features of this organism are reviewed.

  3. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating we may find information, facts, social insights and benchmarks that were once virtually impossible to find or were simply nonexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and the society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  4. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    ...modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing..., and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications.

  5. A data analysis framework for biomedical big data: Application on mesoderm differentiation of human pluripotent stem cells.

    Science.gov (United States)

    Ulfenborg, Benjamin; Karlsson, Alexander; Riveiro, Maria; Améen, Caroline; Åkesson, Karolina; Andersson, Christian X; Sartipy, Peter; Synnergren, Jane

    2017-01-01

    The development of high-throughput biomolecular technologies has resulted in generation of vast omics data at an unprecedented rate. This is transforming biomedical research into a big data discipline, where the main challenges relate to the analysis and interpretation of data into new biological knowledge. The aim of this study was to develop a framework for biomedical big data analytics, and apply it for analyzing transcriptomics time series data from early differentiation of human pluripotent stem cells towards the mesoderm and cardiac lineages. To this end, transcriptome profiling by microarray was performed on differentiating human pluripotent stem cells sampled at eleven consecutive days. The gene expression data was analyzed using the five-stage analysis framework proposed in this study, including data preparation, exploratory data analysis, confirmatory analysis, biological knowledge discovery, and visualization of the results. Clustering analysis revealed several distinct expression profiles during differentiation. Genes with an early transient response were strongly related to embryonic- and mesendoderm development, for example CER1 and NODAL. Pluripotency genes, such as NANOG and SOX2, exhibited substantial downregulation shortly after onset of differentiation. Rapid induction of genes related to metal ion response, cardiac tissue development, and muscle contraction were observed around day five and six. Several transcription factors were identified as potential regulators of these processes, e.g. POU1F1, TCF4 and TBP for muscle contraction genes. Pathway analysis revealed temporal activity of several signaling pathways, for example the inhibition of WNT signaling on day 2 and its reactivation on day 4. This study provides a comprehensive characterization of biological events and key regulators of the early differentiation of human pluripotent stem cells towards the mesoderm and cardiac lineages. 
The proposed analysis framework can be used to structure
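The clustering stage of the framework, grouping genes by the shape of their expression profiles over the time course, can be sketched with a greedy correlation-based grouping. The gene names and four-day profiles below are invented for illustration (NANOG and SOX2 appear in the record; the profiles are not the study's data).

```python
import math

def pearson(a, b):
    """Pearson correlation between two expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cluster_profiles(profiles, threshold=0.9):
    """Greedily group genes whose profiles correlate above the threshold."""
    clusters = []
    for gene, profile in profiles.items():
        for cluster in clusters:
            if pearson(profile, cluster["seed"]) >= threshold:
                cluster["genes"].append(gene)
                break
        else:
            clusters.append({"seed": profile, "genes": [gene]})
    return [c["genes"] for c in clusters]

# Invented 4-day profiles: two induced genes and two downregulated ones
profiles = {"GENE_UP_A": [1, 3, 6, 9], "GENE_UP_B": [2, 5, 11, 16],
            "NANOG": [9, 6, 3, 1], "SOX2": [16, 10, 5, 2]}
groups = cluster_profiles(profiles)
```

The study itself uses a full clustering analysis within its five-stage framework; this greedy pass only illustrates how shape similarity separates induced from downregulated profiles.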

  6. Big Data for Global History: The Transformative Promise of Digital Humanities

    Directory of Open Access Journals (Sweden)

    Joris van Eijnatten

    2013-12-01

    Full Text Available This article discusses the promises and challenges of digital humanities methodologies for historical inquiry. In order to address the great outstanding question whether big data will re-invigorate macro-history, a number of research projects are described that use cultural text mining to explore big data repositories of digitised newspapers. The advantages of quantitative analysis, visualisation and named entity recognition in both exploration and analysis are illustrated in the study of public debates on drugs, drug trafficking, and drug users in the early twentieth century (WAHSP), the comparative study of discourses about heredity, genetics, and eugenics in Dutch and German newspapers, 1863-1940 (BILAND), and the study of trans-Atlantic discourses (Translantis). While many technological and practical obstacles remain, advantages over traditional hermeneutic methodology are found in heuristics, analytics, quantitative trans-disciplinarity, and reproducibility, offering a quantitative and trans-national perspective on the history of mentalities.

  7. DNA Delivery and Genomic Integration into Mammalian Target Cells through Type IV A and B Secretion Systems of Human Pathogens

    Directory of Open Access Journals (Sweden)

    Dolores L. Guzmán-Herrador

    2017-08-01

    Full Text Available We explore the potential of bacterial secretion systems as tools for genomic modification of human cells. We previously showed that foreign DNA can be introduced into human cells through the Type IV A secretion system of the human pathogen Bartonella henselae. Moreover, the DNA is delivered covalently attached to the conjugative relaxase TrwC, which promotes its integration into the recipient genome. In this work, we report that this tool can be adapted to other target cells by using different relaxases and secretion systems. The promiscuous relaxase MobA from plasmid RSF1010 can be used to deliver DNA into human cells with higher efficiency than TrwC. MobA also promotes DNA integration, albeit at lower rates than TrwC. Notably, we report that DNA transfer to human cells can also take place through the Type IV secretion system of two intracellular human pathogens, Legionella pneumophila and Coxiella burnetii, which code for a distantly related Dot/Icm Type IV B secretion system. This suggests that DNA transfer could be an intrinsic ability of this family of secretion systems, expanding the range of target human cells. Further analysis of the DNA transfer process showed that recruitment of MobA by Dot/Icm was dependent on the IcmSW chaperone, which may explain the higher DNA transfer rates obtained. Finally, we observed that the presence of MobA negatively affected the intracellular replication of C. burnetii, suggesting an interference with Dot/Icm translocation of virulence factors.

  8. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  9. High frequency of Fredrickson's phenotypes IV and IIb in Brazilians infected by human immunodeficiency virus

    Directory of Open Access Journals (Sweden)

    Oliveira Helena CF

    2005-06-01

    Full Text Available Abstract Background Human immunodeficiency virus (HIV) infection is very prevalent in Brazil. HIV therapy has recently been associated with coronary heart disease (CHD). Dyslipidemia is a major risk factor for CHD that is frequently described in HIV-positive patients, but very few studies have been conducted in Brazilian patients evaluating their lipid profiles. Methods In the present work, we evaluated the frequency and severity of dyslipidemia in 257 Brazilian HIV-positive patients. Two hundred and thirty-eight (93%) were submitted to antiretroviral therapy (224 treated with protease inhibitors plus nucleoside reverse transcriptase inhibitors, 14 treated only with the latter, 12 naive, and 7 with no records of treatment). The average time on antiretroviral drug treatment was 20 months. None of the patients was under lipid-lowering drugs. Cholesterol, triglyceride, phospholipid and free fatty acids were determined by enzymatic colorimetric methods. The lipoprotein profile was estimated by the Friedewald formula and Fredrickson's phenotyping was obtained by serum electrophoresis on agarose. Apolipoprotein B and AI and lipoprotein(a) were measured by nephelometry. Results The Fredrickson phenotypes were: type IIb (51%), type IV (41%), and type IIa (7%); in addition, one patient was type III and another type V. Thirty-three percent of all HIV+ patients presented serum cholesterol levels ≥ 200 mg/dL, 61% LDL-cholesterol ≥ 100 mg/dL, 65% HDL-cholesterol below 40 mg/dL, 46% triglycerides ≥ 150 mg/dL, and 10% had all of these parameters outside the limits. Eighty-six percent of patients had a cholesterol/HDL-cholesterol ratio ≥ 3.5, 22% increased lipoprotein(a), 79% increased free fatty acids and 9% increased phospholipids. Treatment with protease inhibitors plus nucleoside reverse transcriptase inhibitors increased the levels of cholesterol and triglycerides in these patients when compared with naive patients. The HDL-cholesterol (p = 0.01 and
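The lipoprotein profile in this study was estimated with the Friedewald formula, which in mg/dL approximates VLDL-cholesterol as triglycerides/5 and is conventionally restricted to triglycerides below 400 mg/dL. The patient values below are illustrative, not from the study.

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """LDL-cholesterol estimate in mg/dL (Friedewald; requires TG < 400 mg/dL)."""
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is not valid for TG >= 400 mg/dL")
    vldl = triglycerides / 5.0          # estimated VLDL-cholesterol
    return total_chol - hdl - vldl

# Illustrative values against the cut-offs used in the record
ldl = friedewald_ldl(total_chol=200, hdl=50, triglycerides=150)
ratio = 200 / 50                        # cholesterol/HDL-cholesterol ratio (cut-off 3.5)
```

With these inputs the estimated LDL of 120 mg/dL exceeds the study's 100 mg/dL threshold and the ratio of 4.0 exceeds 3.5, so this hypothetical patient would count as dyslipidemic on both criteria.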

  10. Big data challenges in decoding cortical activity in a human with quadriplegia to inform a brain computer interface.

    Science.gov (United States)

    Friedenberg, David A; Bouton, Chad E; Annetta, Nicholas V; Skomrock, Nicholas; Mingming Zhang; Schwemmer, Michael; Bockbrader, Marcia A; Mysiw, W Jerry; Rezai, Ali R; Bresler, Herbert S; Sharma, Gaurav

    2016-08-01

    Recent advances in Brain Computer Interfaces (BCIs) have created hope that one day paralyzed patients will be able to regain control of their paralyzed limbs. As part of an ongoing clinical study, we have implanted a 96-electrode Utah array in the motor cortex of a paralyzed human. The array generates almost 3 million data points from the brain every second. This presents several big data challenges towards developing algorithms that must not only process the data in real time (for the BCI to be responsive) but also be robust to temporal variations and non-stationarities in the sensor data. We demonstrate an algorithmic approach to analyze such data and present a novel method to evaluate such algorithms. We present our methodology with examples of decoding human brain data in real-time to inform a BCI.
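The "almost 3 million data points per second" figure is consistent with 96 channels sampled at 30 kHz, a rate commonly used with Utah arrays; the sampling rate and 16-bit sample width below are assumptions for the arithmetic, not stated in the abstract.

```python
CHANNELS = 96
SAMPLE_RATE_HZ = 30_000        # assumed per-channel sampling rate
BYTES_PER_SAMPLE = 2           # assumed 16-bit samples

samples_per_second = CHANNELS * SAMPLE_RATE_HZ          # 2,880,000 per second
bytes_per_second = samples_per_second * BYTES_PER_SAMPLE
gb_per_hour = bytes_per_second * 3600 / 1e9             # raw stream, GB/hour
```

Roughly 20 GB of raw signal per hour makes concrete why the decoding algorithms have to run in a streaming, real-time fashion rather than batch.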

  11. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend? (2 pages).

  12. Dependence of the Ce(iii)/Ce(iv) ratio on intracellular localization in ceria nanoparticles internalized by human cells

    KAUST Repository

    Ferraro, Daniela; Tredici, Ilenia G.; Ghigna, Paolo; Castillio-Michel, Hiram; Falqui, Andrea; Di Benedetto, Cristiano; Alberti, Giancarla; Ricci, Vittorio; Anselmi-Tamburini, Umberto; Sommi, Patrizia

    2017-01-01

    CeO2 nanoparticles (CNPs) have been investigated as promising antioxidant agents with significant activity in the therapy of diseases involving free radicals or oxidative stress. However, the exact mechanism responsible for CNP activity has not been completely elucidated. In particular, in situ evidence of modification of the oxidative state of CNPs in human cells and their evolution during cell internalization and subsequent intracellular distribution has never been presented. In this study we investigated modification of the Ce(iii)/Ce(iv) ratio following internalization in human cells by X-ray absorption near edge spectroscopy (XANES). From this analysis on cell pellets, we observed that CNPs incubated for 24 h showed a significant increase in Ce(iii). By coupling on individual cells synchrotron micro-X-ray fluorescence (μXRF) with micro-XANES (μXANES) we demonstrated that the Ce(iii)/Ce(iv) ratio is also dependent on CNP intracellular localization. The regions with the highest CNP concentrations, suggested to be endolysosomes by transmission electron microscopy, were characterized by Ce atoms in the Ce(iv) oxidation state, while a higher Ce(iii) content was observed in regions surrounding these areas. These observations suggest that the interaction of CNPs with cells involves a complex mechanism in which different cellular areas play different roles.
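Oxidation-state ratios like Ce(iii)/Ce(iv) are commonly extracted from XANES data by linear-combination fitting against reference spectra of the pure oxidation states. A closed-form least-squares sketch for the two-component case follows; the spectra are synthetic vectors, not the paper's measurements.

```python
def fit_ce3_fraction(mixture, ref_ce3, ref_ce4):
    """Least-squares fraction f such that mixture ≈ f*ref_ce3 + (1-f)*ref_ce4."""
    d = [a - b for a, b in zip(ref_ce3, ref_ce4)]   # spectral difference
    r = [m - b for m, b in zip(mixture, ref_ce4)]   # mixture relative to Ce(iv)
    return sum(ri * di for ri, di in zip(r, d)) / sum(di * di for di in d)

# Synthetic reference spectra and a 30% Ce(iii) / 70% Ce(iv) mixture
ce3 = [1.0, 0.0, 2.0]
ce4 = [0.0, 1.0, 1.0]
mix = [0.3 * a + 0.7 * b for a, b in zip(ce3, ce4)]
f_ce3 = fit_ce3_fraction(mix, ce3, ce4)
```

Applied pixel-by-pixel to μXANES maps, a fit like this is what lets the study resolve how the Ce(iii)/Ce(iv) ratio varies with intracellular location.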

  14. Is DTPA a good competing chelating agent for Th(IV) in human serum and suitable in targeted alpha therapy?

    Science.gov (United States)

    Le Du, Alicia; Sabatié-Gogova, Andrea; Morgenstern, Alfred; Montavon, Gilles

    2012-04-01

    The interaction between thorium and human serum components was studied using difference ultraviolet spectroscopy (DUS), ultrafiltration and high-performance anion-exchange chromatography (HPAEC) with external inductively coupled plasma mass spectrometry (ICP-MS) analysis. Experimental data are compared with modelling results based on the law of mass action. Human serum transferrin (HSTF) interacts strongly with Th(IV), forming a ternary complex including two synergistic carbonate anions. This complex governs Th(IV) speciation under blood serum conditions. Considering the generally used Langmuir-type model, binding constants of 10^33.5 and 10^32.5 were obtained for strong and weak sites, respectively. We showed that trace amounts of diethylene triamine pentaacetic acid (DTPA) cannot complex Th(IV) in the blood serum at equilibrium. Unexpectedly, this effect is not related to competition with HSTF but is due to the strong competition from major divalent metal ions for DTPA. However, the Th-DTPA complex was shown to be stable for a few hours when it is formed before addition to the biological medium; this is related to the high kinetic stability of the complex. This makes DTPA a potential chelating agent for the synthesis of 226Th-labelled biomolecules for application in targeted alpha therapy. Copyright © 2011 Elsevier Inc. All rights reserved.
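
    The ligand competition described in the abstract follows the law of mass action; the sketch below shows how such a competition calculation works in principle. Only the HSTF log K values (~33.5 and ~32.5) come from the abstract; the effective DTPA constant and the concentrations used in the example are hypothetical illustrations.

```python
# Toy law-of-mass-action competition for a trace metal (e.g. Th(IV))
# between two ligands A and B, both assumed to be in large excess over
# the metal. K values are conditional stability constants in 1/M.

def fraction_bound(log_k_a, conc_a, log_k_b, conc_b):
    """Fraction of the trace metal bound to ligand A at equilibrium."""
    term_a = 10 ** log_k_a * conc_a
    term_b = 10 ** log_k_b * conc_b
    return term_a / (1.0 + term_a + term_b)

# Hypothetical example: a strong HSTF site (log K = 33.5, ~50 uM sites)
# versus DTPA whose effective constant is lowered by divalent-metal
# competition (log K and concentration made up for illustration).
f_hstf = fraction_bound(33.5, 5e-5, 26.0, 1e-6)
```

With these illustrative numbers almost all of the metal ends up transferrin-bound, mirroring the paper's conclusion that trace DTPA cannot out-compete HSTF at equilibrium.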

  15. Crystallization and preliminary crystallographic analysis of the fourth FAS1 domain of human BigH3

    International Nuclear Information System (INIS)

    Yoo, Ji-Ho; Kim, EungKweon; Kim, Jongsun; Cho, Hyun-Soo

    2007-01-01

    The crystallization and X-ray diffraction analysis of the fourth FAS1 domain of human BigH3 are reported. The protein BigH3 is a cell-adhesion molecule induced by transforming growth factor-β (TGF-β). It consists of four homologous repeat domains known as FAS1 domains; mutations in these domains have been linked to corneal dystrophy. The fourth FAS1 domain was expressed in Escherichia coli B834 (DE3) (a methionine auxotroph) and purified by DEAE anion-exchange and gel-filtration chromatography. The FAS1 domain was crystallized using the vapour-diffusion method. A SAD diffraction data set was collected to a resolution of 2.5 Å at 100 K. The crystal belonged to space group P6₁ or P6₅ and had two molecules per asymmetric unit, with unit-cell parameters a = b = 62.93, c = 143.27 Å, α = β = 90.0, γ = 120.0°.

  16. LLNL's Big Science Capabilities Help Spur Over $796 Billion in U.S. Economic Activity: Sequencing the Human Genome

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-28

    LLNL’s successful history of taking on big science projects spans beyond national security and has helped create billions of dollars per year in new economic activity. One example is LLNL’s role in helping sequence the human genome. Over $796 billion in new economic activity in over half a dozen fields has been documented since LLNL successfully completed this Grand Challenge.

  17. Modulation of Kingella kingae adherence to human epithelial cells by type IV Pili, capsule, and a novel trimeric autotransporter.

    Science.gov (United States)

    Porsch, Eric A; Kehl-Fie, Thomas E; St Geme, Joseph W

    2012-10-23

    Kingella kingae is an emerging bacterial pathogen that is being recognized increasingly as an important etiology of septic arthritis, osteomyelitis, and bacteremia, especially in young children. Colonization of the posterior pharynx is a key step in the pathogenesis of K. kingae disease. Previous work established that type IV pili are necessary for K. kingae adherence to the respiratory epithelium. In this study, we set out to identify additional factors that influence K. kingae interactions with human epithelial cells. We found that genetic disruption of the gene encoding a predicted trimeric autotransporter protein called Knh (Kingella NhhA homolog) resulted in reduced adherence to human epithelial cells. In addition, we established that K. kingae elaborates a surface-associated polysaccharide capsule that requires a predicted ABC-type transporter export operon called ctrABCD for surface presentation. Furthermore, we discovered that the presence of a surface capsule interferes with Knh-mediated adherence to human epithelial cells by nonpiliated organisms and that maximal adherence in the presence of a capsule requires the predicted type IV pilus retraction machinery, PilT/PilU. On the basis of the data presented here, we propose a novel adherence mechanism that allows K. kingae to adhere efficiently to human epithelial cells while remaining encapsulated and more resistant to immune clearance. Kingella kingae is a Gram-negative bacterium that is being recognized increasingly as a cause of joint and bone infections in young children. The pathogenesis of disease due to K. kingae begins with bacterial colonization of the upper respiratory tract, and previous work established that surface hair-like fibers called type IV pili are necessary for K. kingae adherence to respiratory epithelial cells. In this study, we set out to identify additional factors that influence K. kingae interactions with respiratory epithelial cells. We discovered a novel surface protein called

  18. The big challenges in modeling human and environmental well-being.

    Science.gov (United States)

    Tuljapurkar, Shripad

    2016-01-01

    This article is a selective review of quantitative research, historical and prospective, that is needed to inform sustainable development policy. I start with a simple framework to highlight how demography and productivity shape human well-being. I use that to discuss three sets of issues and corresponding challenges to modeling: first, population prehistory and early human development and their implications for the future; second, the multiple distinct dimensions of human and environmental well-being and the meaning of sustainability; and, third, inequality as a phenomenon triggered by development and models to examine changing inequality and its consequences. I conclude with a few words about other important factors: political, institutional, and cultural.

  19. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  20. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formation before Big Data it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  1. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  2. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    Science.gov (United States)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade, but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth-order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently, leading to trade-offs in performance space. Drawing on work with several Zooniverse projects, including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples
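
    The cascade filtering and subject retirement ideas in the abstract can be sketched in a few lines. This is a toy illustration, not Zooniverse's actual pipeline; the confidence threshold, vote margin, and function names are all made up for the example.

```python
# Toy cascade filter: high-confidence machine labels retire a subject
# immediately; uncertain subjects are queued for human volunteers.

def cascade_classify(subjects, machine, machine_threshold=0.9):
    """machine(subject) -> (label, confidence). Returns (retired, queue)."""
    retired, human_queue = {}, []
    for s in subjects:
        label, conf = machine(s)
        if conf >= machine_threshold:
            retired[s] = label          # "easy" subject: machine retires it
        else:
            human_queue.append(s)       # "hard" subject: send to volunteers
    return retired, human_queue

def tally_votes(votes, margin=3):
    """Retire a subject once one label leads all others by `margin` votes."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
        ranked = sorted(counts.values(), reverse=True)
        runner_up = ranked[1] if len(ranked) > 1 else 0
        if ranked[0] - runner_up >= margin:
            return max(counts, key=counts.get)
    return None  # still unresolved, keep collecting classifications
```

A real system would also weight votes by volunteer skill, which is one of the "quantitative metrics" the abstract alludes to.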

  3. Big cats in our backyards: persistence of large carnivores in a human dominated landscape in India.

    Directory of Open Access Journals (Sweden)

    Vidya Athreya

    Full Text Available Protected areas are extremely important for the long term viability of biodiversity in a densely populated country like India where land is a scarce resource. However, protected areas cover only 5% of the land area in India and in the case of large carnivores that range widely, human use landscapes will function as important habitats required for gene flow to occur between protected areas. In this study, we used photographic capture recapture analysis to assess the density of large carnivores in a human-dominated agricultural landscape with density >300 people/km(2 in western Maharashtra, India. We found evidence of a wide suite of wild carnivores inhabiting a cropland landscape devoid of wilderness and wild herbivore prey. Furthermore, the large carnivores; leopard (Panthera pardus and striped hyaena (Hyaena hyaena occurred at relatively high density of 4.8±1.2 (sd adults/100 km(2 and 5.03±1.3 (sd adults/100 km(2 respectively. This situation has never been reported before where 10 large carnivores/100 km(2 are sharing space with dense human populations in a completely modified landscape. Human attacks by leopards were rare despite a potentially volatile situation considering that the leopard has been involved in serious conflict, including human deaths in adjoining areas. The results of our work push the frontiers of our understanding of the adaptability of both, humans and wildlife to each other's presence. The results also highlight the urgent need to shift from a PA centric to a landscape level conservation approach, where issues are more complex, and the potential for conflict is also very high. It also highlights the need for a serious rethink of conservation policy, law and practice where the current management focus is restricted to wildlife inside Protected Areas.

  4. How Do Small Things Make a Big Difference? Activities to Teach about Human-Microbe Interactions.

    Science.gov (United States)

    Jasti, Chandana; Hug, Barbara; Waters, Jillian L; Whitaker, Rachel J

    2014-11-01

    Recent scientific studies are providing increasing evidence for how microbes living in and on us are essential to our good health. However, many students still think of microbes only as germs that harm us. The classroom activities presented here are designed to shift student thinking on this topic. In these guided inquiry activities, students investigate human-microbe interactions as they work together to interpret and analyze authentic data from published articles and develop scientific models. Through the activities, students learn and apply ecological concepts as they come to see the human body as a fascinatingly complex ecosystem.

  5. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  6. A big blank white canvas? Mapping and modeling human impact in Antarctica

    Science.gov (United States)

    Steve Carver; Tina Tin

    2015-01-01

    Antarctica is certainly what most people would consider being the world's last great wilderness; largely untouched and undeveloped by humans. Yet it is not inviolate - there are scientific bases, tourist operations, expeditions, airstrips and even roads. Although these impacts are by and large limited in extent, their very presence in an otherwise "blank...

  7. Digital Humanities: the Next Big Thing? Enkele notities bij een ontluikend debat

    NARCIS (Netherlands)

    Besser, S.; Vaessens, T.

    2013-01-01

    In the form of provisional notes, the authors offer suggestions for an intensification of the theoretical debate on the digital humanities and computational literary studies in particular. From the perspective of poststructuralist theory, they address some of the epistemological underpinnings of

  8. A murine monoclonal anti-idiotypic antibody detects a common idiotope on human, mouse and rabbit antibodies to allergen Lol p IV.

    Science.gov (United States)

    Zhou, E M; Dzuba-Fischer, J M; Rector, E S; Sehon, A H; Kisil, F T

    1991-09-01

    A syngeneic mouse monoclonal anti-idiotypic antibody (anti-Id), designated as B1/1, was generated against a monoclonal antibody (MoAb 91) specific for Ryegrass pollen allergen Lol p IV. This anti-Id recognized an idiotope (Id) that was also present on other monoclonal antibodies with the same specificity as MoAb 91. Observations that (i) the anti-Id inhibited the binding of MoAb 91 to Lol p IV and (ii) the Id-anti-Id interaction could be inhibited by Lol p IV indicated that the Id was located within or near the antigen combining site. These properties served to characterize B1/1 as an internal image anti-Id. Evidence that an immune response in different species to Lol p IV elicits the formation of antibodies which express a common Id was provided by the observations that (i) the Id-anti-Id interactions could be inhibited by mouse, human and rabbit antisera to Lol p IV and (ii) the binding of these antisera to Lol p IV could be inhibited by the anti-Id. Interestingly, the internal image anti-Id B1/1 also recognized an Id on a monoclonal antibody which was directed to an epitope of Lol p IV, different from that recognized by MoAb 91.

  9. NNDSS - Table IV. Tuberculosis

    Data.gov (United States)

    U.S. Department of Health & Human Services — NNDSS - Table IV. Tuberculosis - 2016.This Table includes total number of cases reported in the United States, by region and by states, in accordance with the...

  10. NNDSS - Table IV. Tuberculosis

    Data.gov (United States)

    U.S. Department of Health & Human Services — NNDSS - Table IV. Tuberculosis - 2014.This Table includes total number of cases reported in the United States, by region and by states, in accordance with the...

  11. NNDSS - Table IV. Tuberculosis

    Data.gov (United States)

    U.S. Department of Health & Human Services — NNDSS - Table IV. Tuberculosis - 2015.This Table includes total number of cases reported in the United States, by region and by states, in accordance with the...

  12. The impact of whole human blood on the kinetic inertness of platinum(iv) prodrugs - an HPLC-ICP-MS study.

    Science.gov (United States)

    Theiner, Sarah; Grabarics, Márkó; Galvez, Luis; Varbanov, Hristo P; Sommerfeld, Nadine S; Galanski, Markus; Keppler, Bernhard K; Koellensperger, Gunda

    2018-04-17

    The potential advantage of platinum(iv) complexes as alternatives to classical platinum(ii)-based drugs relies on their kinetic stability in the body before reaching the tumor site and on their activation by reduction inside cancer cells. In this study, an analytical workflow has been developed to investigate the reductive biotransformation and kinetic inertness of platinum(iv) prodrugs comprising different ligand coordination spheres (respectively, lipophilicity and redox behavior) in whole human blood. The distribution of platinum(iv) complexes in blood pellets and plasma was determined by inductively coupled plasma-mass spectrometry (ICP-MS) after microwave digestion. An analytical approach based on reversed-phase (RP)-ICP-MS was used to monitor the parent compound and the formation of metabolites using two different extraction procedures. The ligand coordination sphere of the platinum(iv) complexes had a significant impact on their accumulation in red blood cells and on their degree of kinetic inertness in whole human blood. The most lipophilic platinum(iv) compound featuring equatorial chlorido ligands showed a pronounced penetration into blood cells and a rapid reductive biotransformation. In contrast, the more hydrophilic platinum(iv) complexes with a carboplatin- and oxaliplatin-core exerted kinetic inertness on a pharmacologically relevant time scale with notable amounts of the compound accumulated in the plasma fraction.

  13. Human genetics studies in areas of high natural radiation. IV. Research in radioactive areas

    Energy Technology Data Exchange (ETDEWEB)

    Freire-Maia, A [Faculdade de Ciencias Medicas e Biologicas de Botucatu (Brazil). Departamento de Genetica

    1974-01-01

    A review is presented of research performed in areas with high levels of natural radioactivity, with some considerations on the importance and difficulties of projects of this kind. Although there is no doubt that natural radioactivity is one of the causes of the so-called spontaneous mutations, the practical demonstration of this assertion is extremely complex. Projects are discussed that attempt to correlate high levels of natural radioactivity with the occurrence of cancer (in general, or specific), leukemia, congenital malformations (in general or specific), neuro-vegetative disturbances, sex ratio, mortality, and physical development, as well as other characteristics. Some research with animals is also mentioned, and references are given for plant studies. A critical analysis is made of some works relating to human populations.

  15. Functional similarities between the dictyostelium protein AprA and the human protein dipeptidyl-peptidase IV.

    Science.gov (United States)

    Herlihy, Sarah E; Tang, Yu; Phillips, Jonathan E; Gomer, Richard H

    2017-03-01

    Autocrine proliferation repressor protein A (AprA) is a protein secreted by Dictyostelium discoideum cells. Although there is very little sequence similarity between AprA and any human protein, AprA has a predicted structural similarity to the human protein dipeptidyl peptidase IV (DPPIV). AprA is a chemorepellent for Dictyostelium cells, and DPPIV is a chemorepellent for neutrophils. This led us to investigate if AprA and DPPIV have additional functional similarities. We find that like AprA, DPPIV is a chemorepellent for, and inhibits the proliferation of, D. discoideum cells, and that AprA binds some DPPIV binding partners such as fibronectin. Conversely, rAprA has DPPIV-like protease activity. These results indicate a functional similarity between two eukaryotic chemorepellent proteins with very little sequence similarity, and emphasize the usefulness of using a predicted protein structure to search a protein structure database, in addition to searching for proteins with similar sequences. © 2016 The Protein Society.

  17. [Influence of valproic acid (depakine I.V.) on human placenta metabolism--experimental model].

    Science.gov (United States)

    Semczuk-Sikora, Anna; Rogowska, Wanda; Semczuk, Marian

    2003-08-01

    Pregnancy in women with epilepsy is associated with an increased incidence of congenital malformations in offspring. Anti-epileptic drugs (AEDs) are currently considered a major etiologic factor in abnormal fetal development, but the pathomechanism of AED teratogenicity is complex and not well understood. The purpose of this study was to evaluate the influence of one of the AEDs, valproic acid (VPA), on placental metabolism (glucose consumption and lactate production). Term human placental cotyledons were perfused in vitro using recycling perfusion of the maternal and fetal circulations. A total of 18 placentas were perfused either with 75 micrograms/ml of VPA (therapeutic dose) or with 225 micrograms/ml of VPA (toxic dose). Eight placentas were perfused with medium without VPA and served as controls. During the 2.5 h of the experiment, maternal and fetal glucose consumption and lactate production were measured every 30 minutes. The introduction of different concentrations of VPA into the perfusion system did not affect placental glucose consumption or lactate production rates in either the maternal or fetal compartment. The teratogenic effect of valproic acid is not associated with metabolic disturbances of glucose or lactate in placental tissue.

  18. Astragaloside IV prevents damage to human mesangial cells through the inhibition of the NADPH oxidase/ROS/Akt/NF‑κB pathway under high glucose conditions.

    Science.gov (United States)

    Sun, Li; Li, Weiping; Li, Weizu; Xiong, Li; Li, Guiping; Ma, Rong

    2014-07-01

    Glomerular hypertrophy and hyperfiltration are the two major pathological characteristics of the early stages of diabetic nephropathy (DN), which are respectively related to mesangial cell (MC) proliferation and a decrease in calcium influx conducted by canonical transient receptor potential cation channel 6 (TRPC6). The marked increase in the production of reactive oxygen species (ROS) induced by hyperglycemia is the main driver of multiple pathological pathways in DN. Nicotinamide adenine dinucleotide phosphate (NADPH) oxidase is an important source of ROS production in MCs. Astragaloside IV (AS‑IV) is an active ingredient of Radix Astragali, which has a potent antioxidative effect. In this study, we aimed to investigate whether high glucose (HG)‑induced NADPH oxidase activation and ROS production contribute to MC proliferation and the downregulation of TRPC6 expression; we also wished to determine the effects of AS‑IV on MCs under HG conditions. Using a human glomerular mesangial cell line, we found that treatment with AS‑IV for 48 h markedly attenuated HG‑induced proliferation and the hypertrophy of MCs in a dose‑dependent manner. The intracellular ROS level was also markedly reduced following treatment with AS‑IV. In addition, the enhanced activity of NADPH oxidase and the expression level of NADPH oxidase 4 (Nox4) protein were decreased. Treatment with AS‑IV also inhibited the phosphorylation level of Akt and IκBα in the MCs. In addition, TRPC6 protein expression and the intracellular free calcium concentration were also markedly reduced following treatment with AS‑IV under HG conditions. These results suggest that AS‑IV inhibits HG‑induced mesangial cell proliferation and glomerular contractile dysfunction through the NADPH oxidase/ROS/Akt/nuclear factor‑κB (NF‑κB) pathway, providing a new perspective for the clinical treatment of DN.

  19. Distinct kinetics of human DNA ligases I, IIIα, IIIβ, and IV reveal direct DNA sensing ability and differential physiological functions in DNA repair

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xi; Ballin, Jeff D.; Della-Maria, Julie; Tsai, Miaw-Sheue; White, Elizabeth J.; Tomkinson, Alan E.; Wilson, Gerald M.

    2009-05-11

    The three human LIG genes encode polypeptides that catalyze phosphodiester bond formation during DNA replication, recombination and repair. While numerous studies have identified protein partners of the human DNA ligases (hLigs), there has been little characterization of the catalytic properties of these enzymes. In this study, we developed and optimized a fluorescence-based DNA ligation assay to characterize the activities of purified hLigs. Although hLigI joins DNA nicks, it has no detectable activity on linear duplex DNA substrates with short, cohesive single-strand ends. By contrast, hLigIIIβ and the hLigIIIα/XRCC1 and hLigIV/XRCC4 complexes are active on both nicked and linear duplex DNA substrates. Surprisingly, hLigIV/XRCC4, which is a key component of the major non-homologous end joining (NHEJ) pathway, is significantly less active than hLigIII on a linear duplex DNA substrate. Notably, hLigIV/XRCC4 molecules only catalyze a single ligation event in the absence or presence of ATP. The failure to catalyze subsequent ligation events reflects a defect in the enzyme-adenylation step of the next ligation reaction and suggests that, unless there is an in vivo mechanism to reactivate DNA ligase IV/XRCC4 following phosphodiester bond formation, the cellular NHEJ capacity will be determined by the number of adenylated DNA ligase IV/XRCC4 molecules.

  20. Identification of novel human dipeptidyl peptidase-IV inhibitors of natural origin (Part II): in silico prediction in antidiabetic extracts.

    Directory of Open Access Journals (Sweden)

    Laura Guasch

    Full Text Available BACKGROUND: Natural extracts play an important role in traditional medicines for the treatment of diabetes mellitus and are also an essential resource for new drug discovery. Dipeptidyl peptidase IV (DPP-IV) inhibitors are potential candidates for the treatment of type 2 diabetes mellitus, and the effectiveness of certain antidiabetic extracts of natural origin could be, at least partially, explained by the inhibition of DPP-IV. METHODOLOGY/PRINCIPAL FINDINGS: Using an initial set of 29,779 natural products that are annotated with their natural source and an experimentally validated virtual screening procedure previously developed in our lab (Guasch et al., 2012 [1]), we have predicted 12 potential DPP-IV inhibitors from 12 different plant extracts that are known to have antidiabetic activity. Seven of these molecules are identical or similar to molecules with described antidiabetic activity (although their role as DPP-IV inhibitors has not been suggested as an explanation for their bioactivity). Therefore, it is plausible that these 12 molecules could be responsible, at least in part, for the antidiabetic activity of these extracts through their inhibitory effect on DPP-IV. In addition, we also identified as potential DPP-IV inhibitors 6 molecules from 6 different plants with no described antidiabetic activity but that share the same genus as plants with known antidiabetic properties. Moreover, none of the 18 molecules that we predicted as DPP-IV inhibitors exhibits chemical similarity with a group of 2,342 known DPP-IV inhibitors. CONCLUSIONS/SIGNIFICANCE: Our study identified 18 potential DPP-IV inhibitors in 18 different plant extracts (12 of these plants have known antidiabetic properties, whereas, for the remaining 6, antidiabetic activity has been reported for other plant species from the same genus). Moreover, none of the 18 molecules exhibits chemical similarity with a large group of known DPP-IV inhibitors.
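
    The "chemical similarity" check mentioned in the abstract is typically a fingerprint comparison using the Tanimoto (Jaccard) coefficient. The sketch below shows the idea with plain Python sets of "on" bits standing in for molecular fingerprints; a real workflow would generate e.g. ECFP fingerprints with a cheminformatics toolkit, and the cutoff value is illustrative.

```python
# Tanimoto similarity between bit-set fingerprints, and a nearest-neighbour
# screen of a query molecule against a library of known inhibitors.

def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two sets of on-bits."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def is_dissimilar(query_fp, library_fps, cutoff=0.4):
    """True if the query's best match in the library stays below `cutoff`,
    i.e. the molecule is chemically novel relative to the library."""
    return max(tanimoto(query_fp, fp) for fp in library_fps) < cutoff
```

Declaring the 18 predicted molecules dissimilar from the 2,342 known DPP-IV inhibitors corresponds to `is_dissimilar` returning True for each of them against that library.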

  1. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore tends to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements for data management. If the term "big data" is defined as data collections that are too large to move then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  2. MINOR HUMAN-ANTIBODY RESPONSE TO A MOUSE AND CHIMERIC MONOCLONAL-ANTIBODY AFTER A SINGLE IV INFUSION IN OVARIAN-CARCINOMA PATIENTS - A COMPARISON OF 5 ASSAYS

    NARCIS (Netherlands)

    BUIST, MR; KENEMANS, P; VANKAMP, GJ; Haisma, Hidde

    The human anti-(mouse Ig) antibody (HAMA) response was measured in serum of 52 patients suspected of having ovarian carcinoma who had received an i.v. injection of either the murine monoclonal antibody (mAb) OV-TL 3 F(ab')(2) (n = 28, 1 mg) or the chimeric mouse/human mAb MOv18 (cMOv18; n = 24, 3

  3. Minor human antibody response to a mouse and chimeric monoclonal antibody after a single i.v. infusion in ovarian carcinoma patients: a comparison of five assays

    NARCIS (Netherlands)

    Buist, M. R.; Kenemans, P.; van Kamp, G. J.; Haisma, H. J.

    1995-01-01

    The human anti-(mouse Ig) antibody (HAMA) response was measured in serum of 52 patients suspected of having ovarian carcinoma who had received an i.v. injection of either the murine monoclonal antibody (mAb) OV-TL 3 F(ab')2 (n = 28, 1 mg) or the chimeric mouse/human mAb MOv18 (cMOv18; n = 24, 3 mg).

  4. A bipartite signal mediates the transfer of type IV secretion substrates of Bartonella henselae into human cells.

    Science.gov (United States)

    Schulein, Ralf; Guye, Patrick; Rhomberg, Thomas A; Schmid, Michael C; Schröder, Gunnar; Vergunst, Annette C; Carena, Ilaria; Dehio, Christoph

    2005-01-18

    Bacterial type IV secretion (T4S) systems mediate the transfer of macromolecular substrates into various target cells, e.g., the conjugative transfer of DNA into bacteria or the transfer of virulence proteins into eukaryotic host cells. The T4S apparatus VirB of the vascular tumor-inducing pathogen Bartonella henselae causes subversion of human endothelial cell (HEC) function. Here we report the identification of multiple protein substrates of VirB, which, upon translocation into HEC, mediate all known VirB-dependent cellular changes. These Bartonella-translocated effector proteins (Beps) A-G are encoded together with the VirB system and the T4S coupling protein VirD4 on a Bartonella-specific pathogenicity island. The Beps display a modular architecture, suggesting an evolution by extensive domain duplication and reshuffling. The C terminus of each Bep harbors at least one copy of the Bep-intracellular delivery domain and a short positively charged tail sequence. This bipartite C terminus constitutes a transfer signal that is sufficient to mediate VirB/VirD4-dependent intracellular delivery of reporter protein fusions. The Bep-intracellular delivery domain is also present in conjugative relaxases of bacterial conjugation systems. As an example, we show that the C terminus of such a conjugative relaxase mediates protein transfer through the Bartonella henselae VirB/VirD4 system into HEC. Conjugative relaxases may thus represent the evolutionary origin of the here defined T4S signal for protein transfer into human cells.

  5. Carbohydrate linked organotin(IV) complexes as human topoisomerase Iα inhibitor and their antiproliferative effects against the human carcinoma cell line.

    Science.gov (United States)

    Khan, Rais Ahmad; Yadav, Shipra; Hussain, Zahid; Arjmand, Farukh; Tabassum, Sartaj

    2014-02-14

    Dimethyltin(IV) complexes with ethanolamine (1) and biologically significant N-glycosides (2 and 3) were designed and synthesized. The structural elucidation of complexes 1-3 was done using elemental and spectroscopic methods; in addition, complex 1 was studied by single crystal X-ray diffraction studies. The in vitro DNA binding profile of complexes 2 and 3 was carried out by employing different biophysical methods to ascertain the feasibility of glycosylated complexes. Further, the cleaving ability of 2 and 3 was investigated by the agarose gel electrophoretic mobility assay with supercoiled pBR322 DNA, and demonstrated good nuclease activity. Furthermore, both the complexes exhibited significant inhibitory effects on the catalytic activity of human Topo I at lower concentrations than standard drugs. Computer-aided molecular docking techniques were used to ascertain the mode and mechanism of action towards the molecular target DNA and Topo I. The cytotoxicity of 2 and 3 against human hepatoma cancer cells (Huh7) was evaluated, which revealed significant regression in cancerous cells as compared with the standard drug. The antiproliferative activities of 2 and 3 were also tested against the same cell line, and the results showed significant activity. Additionally, to validate the remarkable antiproliferative activity of complexes 2 and 3, specific regulatory gene expression (MMP-2 and TGF-β) was obtained by real time PCR.

  6. Asteroids IV

    Science.gov (United States)

    Michel, Patrick; DeMeo, Francesca E.; Bottke, William F.

    Asteroids are fascinating worlds. Considered the building blocks of our planets, many of the authors of this book have devoted their scientific careers to exploring them with the tools of our trade: ground- and space-based observations, in situ space missions, and studies that run the gamut from theoretical modeling efforts to laboratory work. Like fossils for paleontologists, or DNA for geneticists, they allow us to construct a veritable time machine and provide us with tantalizing glimpses of the earliest nature of our solar system. By investigating them, we can probe what our home system was like before life or even the planets existed. The origin and evolution of life on our planet is also intertwined with asteroids in a different way. It is believed that impacts on the primordial Earth may have delivered the basic components for life, with biology favoring attributes that could more easily survive the aftermath of such energetic events. In this fashion, asteroids may have banished many probable avenues for life to relative obscurity. Similarly, they may have also prevented our biosphere from becoming more complex until more recent eras. The full tale of asteroid impacts on the history of our world, and how human life managed to emerge from myriad possibilities, has yet to be fully told. The hazard posed by asteroid impacts to our civilization is low but singular. The design of efficient mitigation strategies strongly relies on asteroid detection by our ground- and space-based surveys as well as knowledge of their physical properties. A more positive motivation for asteroid discovery is that the proximity of some asteroids to Earth may allow future astronauts to harvest their water and rare mineral resources for use in exploration. A key goal of asteroid science is therefore to learn how humans and robotic probes can interact with asteroids (and extract their materials) in an efficient way. We expect that these adventures may be commonplace in the future.

  7. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  8. Big Dreams

    Science.gov (United States)

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  9. Big Science

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    Astronomy, like particle physics, has become Big Science where the demands of front line research can outstrip the science budgets of whole nations. Thus came into being the European Southern Observatory (ESO), founded in 1962 to provide European scientists with a major modern observatory to study the southern sky under optimal conditions.

  10. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness.

    Science.gov (United States)

    Dove, Edward S; Özdemir, Vural

    2015-09-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, "extreme centrism", and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics-separate and together-have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit.

  11. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness

    Science.gov (United States)

    Dove, Edward S.; Özdemir, Vural

    2015-01-01

    The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, “extreme centrism”, and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics—separate and together—have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit. PMID:26345196

  12. What Role for Law, Human Rights, and Bioethics in an Age of Big Data, Consortia Science, and Consortia Ethics? The Importance of Trustworthiness

    Directory of Open Access Journals (Sweden)

    Edward S. Dove

    2015-08-01

    Full Text Available The global bioeconomy is generating new paradigm-shifting practices of knowledge co-production, such as collective innovation; large-scale, data-driven global consortia science (Big Science); and consortia ethics (Big Ethics). These bioeconomic and sociotechnical practices can be forces for progressive social change, but they can also raise predicaments at the interface of law, human rights, and bioethics. In this article, we examine one such double-edged practice: the growing, multivariate exploitation of Big Data in the health sector, particularly by the private sector. Commercial exploitation of health data for knowledge-based products is a key aspect of the bioeconomy and is also a topic of concern among publics around the world. It is exacerbated in the current age of globally interconnected consortia science and consortia ethics, which is characterized by accumulating epistemic proximity, diminished academic independence, “extreme centrism”, and conflicted/competing interests among innovation actors. Extreme centrism is of particular importance as a new ideology emerging from consortia science and consortia ethics; this relates to invariably taking a middle-of-the-road populist stance, even in the event of human rights breaches, so as to sustain the populist support needed for consortia building and collective innovation. What role do law, human rights, and bioethics—separate and together—have to play in addressing these predicaments and opportunities in early 21st century science and society? One answer we propose is an intertwined ethico-legal normative construct, namely trustworthiness. By considering trustworthiness as a central pillar at the intersection of law, human rights, and bioethics, we enable others to trust us, which in turn allows different actors (both nonprofit and for-profit) to operate more justly in consortia science and ethics, as well as to access and responsibly use health data for public benefit.

  13. L-arginine mediated renaturation enhances yield of human, α6 Type IV collagen non-collagenous domain from bacterial inclusion bodies.

    Science.gov (United States)

    Gunda, Venugopal; Boosani, Chandra Shekhar; Verma, Raj Kumar; Guda, Chittibabu; Sudhakar, Yakkanti Akul

    2012-10-01

    The anti-angiogenic, carboxy terminal non-collagenous domain (NC1) derived from human Collagen type IV alpha 6 chain, [α6(IV)NC1] or hexastatin, was earlier obtained using different recombinant methods of expression in bacterial systems. However, the effect of L-arginine mediated renaturation in enhancing the relative yields of this protein from bacterial inclusion bodies has not been evaluated. In the present study, direct stirring and on-column renaturation methods using L-arginine and different size exclusion chromatography matrices were applied for enhancing the solubility in purifying the recombinant α6(IV)NC1 from bacterial inclusion bodies. This methodology enabled purification of higher quantities of soluble protein from inclusion bodies, which inhibited endothelial cell proliferation, migration and tube formation. Thus, the scope for L-arginine mediated renaturation in obtaining higher yields of soluble, biologically active NC1 domain from bacterial inclusion bodies was evaluated.

  14. Identification of novel human dipeptidyl peptidase-IV inhibitors of natural origin (part I): virtual screening and activity assays.

    Directory of Open Access Journals (Sweden)

    Laura Guasch

    Full Text Available BACKGROUND: There has been great interest in determining whether natural products show biological activity toward protein targets of pharmacological relevance. One target of particular interest is DPP-IV, whose most important substrates are incretins that, among other beneficial effects, stimulate insulin biosynthesis and secretion. Incretins have very short half-lives because of their rapid degradation by DPP-IV and, therefore, inhibiting this enzyme improves glucose homeostasis. As a result, DPP-IV inhibitors are of considerable interest to the pharmaceutical industry. The main goals of this study were (a) to develop a virtual screening process to identify potential DPP-IV inhibitors of natural origin; (b) to evaluate the reliability of our virtual-screening protocol by experimentally testing the in vitro activity of selected natural-product hits; and (c) to use the most active hit for predicting derivatives with higher binding affinities for the DPP-IV binding site. METHODOLOGY/PRINCIPAL FINDINGS: We predicted that 446 out of the 89,165 molecules present in the natural products subset of the ZINC database would inhibit DPP-IV with good ADMET properties. Notably, when these 446 molecules were merged with 2,342 known DPP-IV inhibitors and the resulting set was classified into 50 clusters according to chemical similarity, there were 12 clusters that contained only natural products for which no DPP-IV inhibitory activity has been previously reported. Nine molecules from 7 of these 12 clusters were then selected for in vitro activity testing and 7 out of the 9 molecules were shown to inhibit DPP-IV (where the remaining two molecules could not be solubilized, preventing the evaluation of their DPP-IV inhibitory activity). Then, the hit with the highest activity was used as a lead compound in the prediction of more potent derivatives.
CONCLUSIONS/SIGNIFICANCE: We have demonstrated that our virtual-screening protocol was successful in identifying novel
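
    The chemical-similarity classification described above (merging candidate molecules with known inhibitors and grouping them into clusters) can be sketched as follows. This is a toy illustration, not the study's actual protocol: molecules are represented as binary fingerprints (sets of "on" bit positions), compared with the Tanimoto coefficient, and grouped with a simple greedy leader algorithm; the fingerprints and the 0.6 threshold are invented for the example.

    ```python
    def tanimoto(fp_a: set, fp_b: set) -> float:
        """Tanimoto (Jaccard) similarity between two binary fingerprints."""
        if not fp_a and not fp_b:
            return 1.0
        return len(fp_a & fp_b) / len(fp_a | fp_b)

    def leader_cluster(fingerprints, threshold=0.6):
        """Assign each fingerprint to the first cluster whose leader is at
        least `threshold` similar; otherwise it starts a new cluster."""
        leaders, clusters = [], []
        for idx, fp in enumerate(fingerprints):
            for c, leader in enumerate(leaders):
                if tanimoto(fp, leader) >= threshold:
                    clusters[c].append(idx)
                    break
            else:
                leaders.append(fp)
                clusters.append([idx])
        return clusters

    fps = [
        {1, 2, 3, 4},   # molecule 0
        {1, 2, 3, 5},   # molecule 1: Tanimoto 0.6 vs molecule 0 -> same cluster
        {10, 11, 12},   # molecule 2: dissimilar -> new cluster
    ]
    print(leader_cluster(fps))  # → [[0, 1], [2]]
    ```

    In practice such fingerprints would come from a cheminformatics toolkit rather than hand-written sets; the point here is only the clustering logic.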

  15. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone "stars", or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  16. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  17. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  18. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  19. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future enquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors and an attempt is made to tackle the problem that participation in a public inquiry is far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  20. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor's thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to such data using the Apache Hadoop technology. The major part of the thesis is devoted to describing this technology. The final chapter deals with how the aggregation operations are applied and with the issues involved in implementing them, followed by an overall evaluation of the work and possible future uses of the resulting system.

  1. BIG DATA

    OpenAIRE

    Abhishek Dubey

    2018-01-01

    The term 'Big Data' describes innovative methods and technologies to capture, store, distribute, manage and analyze petabyte-scale or larger sets of data with high velocity and varied structures. Big data can be structured, unstructured or semi-structured, rendering conventional data management techniques inadequate. Data are generated from many different sources and can arrive in the system at different rates. In order to process this...

  2. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  3. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  4. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seeds of new, valuable operational insights for private companies and public organizations. While optimistic pronouncements abound, research on Big Data in the public sector has so far been limited. This article examines how the public health sector can reuse and exploit an ever-growing amount of data while taking public values into account. The article is based on a case study of the use of large amounts of health data in the Danish General Practice Database (DAMD). The analysis shows that (re)use of data in new contexts is a multifaceted trade-off not only between economic rationales and quality considerations, but also involving control over sensitive personal data and the ethical implications for the citizen. In the DAMD case, the data are, on the one hand, used 'in the service of a good cause' to...

  5. IVS Organization

    Science.gov (United States)

    2004-01-01

    International VLBI Service (IVS) is an international collaboration of organizations which operate or support Very Long Baseline Interferometry (VLBI) components. The goals are: To provide a service to support geodetic, geophysical and astrometric research and operational activities. To promote research and development activities in all aspects of the geodetic and astrometric VLBI technique. To interact with the community of users of VLBI products and to integrate VLBI into a global Earth observing system.

  6. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  7. Adsorption and transformation of selected human-used macrolide antibacterial agents with iron(III) and manganese(IV) oxides

    Energy Technology Data Exchange (ETDEWEB)

    Feitosa-Felizzola, Juliana [Laboratoire Chimie Provence, Aix-Marseille Universites-CNRS (UMR 6264), 3 place Victor Hugo, 13331 Marseille Cedex 3 (France); Hanna, Khalil [Laboratoire de Chimie Physique et Microbiologie pour l' Environnement, CNRS-Universite Henri Poincare-Nancy 1 (UMR 7564), 405 rue de Vandoeuvre, 54600 Villers-les-Nancy (France); Chiron, Serge [Laboratoire Chimie Provence, Aix-Marseille Universites-CNRS (UMR 6264), 3 place Victor Hugo, 13331 Marseille Cedex 3 (France)], E-mail: serge.chiron@univ-provence.fr

    2009-04-15

    The adsorption/transformation of two members (clarithromycin and roxithromycin) of the macrolide (ML) antibacterial agents on the surface of three environmental subsurface sorbents (clay, iron(III) and manganese(IV) oxy-hydroxides) was investigated. The adsorption fitted well to the Freundlich model with a high sorption capacity. Adsorption probably occurred through a surface complexation mechanism and was accompanied by slow degradation of the selected MLs. Transformation proceeded through two parallel pathways: a major pathway was the hydrolysis of the cladinose sugar, and to a lesser extent the hydrolysis of the lactone ring. A minor pathway was the N-dealkylation of the amino sugar. This study indicates that Fe(III) and Mn(IV) oxy-hydroxides in aquatic sediments may play an important role in the natural attenuation of MLs. Such an attenuation route yields a range of intermediates that might retain some of their biological activity. - Iron(III) and manganese(IV) oxy-hydroxides in aquatic sediments may play an important role in the natural attenuation of macrolide antibacterial agents.
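
    The Freundlich fit mentioned above is commonly carried out by linearizing the isotherm q = K_F · C^(1/n) in log space and applying ordinary least squares. A minimal sketch using synthetic data (the values of K_F and n below are invented for illustration, not taken from the study):

    ```python
    import math

    def fit_freundlich(conc, sorbed):
        """Fit q = K_F * C**(1/n) by least squares on
        log q = log K_F + (1/n) * log C; returns (K_F, n)."""
        xs = [math.log(c) for c in conc]
        ys = [math.log(q) for q in sorbed]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx
        k_f = math.exp(intercept)  # Freundlich capacity constant
        n = 1.0 / slope            # Freundlich intensity parameter
        return k_f, n

    conc = [0.5, 1.0, 2.0, 4.0, 8.0]          # equilibrium concentrations
    sorbed = [2.5 * c ** 0.5 for c in conc]   # synthetic data: K_F = 2.5, n = 2
    k_f, n = fit_freundlich(conc, sorbed)
    print(round(k_f, 3), round(n, 3))  # → 2.5 2.0
    ```

    Real isotherm data are noisy, so a nonlinear fit of the untransformed model is often preferred; the log-linearization shown here is the classical shortcut.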

  8. Adsorption and transformation of selected human-used macrolide antibacterial agents with iron(III) and manganese(IV) oxides

    International Nuclear Information System (INIS)

    Feitosa-Felizzola, Juliana; Hanna, Khalil; Chiron, Serge

    2009-01-01

    The adsorption/transformation of two members (clarithromycin and roxithromycin) of the macrolide (ML) antibacterial agents on the surface of three environmental subsurface sorbents (clay, iron(III) and manganese(IV) oxy-hydroxides) was investigated. The adsorption fitted well to the Freundlich model with a high sorption capacity. Adsorption probably occurred through a surface complexation mechanism and was accompanied by slow degradation of the selected MLs. Transformation proceeded through two parallel pathways: a major pathway was the hydrolysis of the cladinose sugar, and to a lesser extent the hydrolysis of the lactone ring. A minor pathway was the N-dealkylation of the amino sugar. This study indicates that Fe(III) and Mn(IV) oxy-hydroxides in aquatic sediments may play an important role in the natural attenuation of MLs. Such an attenuation route yields a range of intermediates that might retain some of their biological activity. - Iron(III) and manganese(IV) oxy-hydroxides in aquatic sediments may play an important role in the natural attenuation of macrolide antibacterial agents

  9. Big Data, Big Opportunities, and Big Challenges.

    Science.gov (United States)

    Frelinger, Jeffrey A

    2015-11-01

    High-throughput assays have begun to revolutionize modern biology and medicine. The advent of cheap next-generation sequencing (NGS) has made it possible to interrogate cells and human populations as never before. Although this has allowed us to investigate the genetics, gene expression, and impacts of the microbiome, there remain both practical and conceptual challenges. These include data handling, storage, and statistical analysis, as well as an inherent problem of the analysis of heterogeneous cell populations.

  10. Risk-benefit evaluation of fish from Chinese markets: Nutrients and contaminants in 24 fish species from five big cities and related assessment for human health

    Energy Technology Data Exchange (ETDEWEB)

    Du, Zhen-Yu, E-mail: zdu@nifes.no [National Institute of Nutrition and Seafood Research (NIFES), N-5817 Bergen (Norway); Zhang, Jian [National Institute of Nutrition and Seafood Research (NIFES), N-5817 Bergen (Norway); Institute of Nutrition and Food Safety, Chinese Center for Disease Control and Prevention, Beijing, 100050 (China); Department of Biomedicine, University of Bergen (Norway); Wang, Chunrong; Li, Lixiang; Man, Qingqing [Institute of Nutrition and Food Safety, Chinese Center for Disease Control and Prevention, Beijing, 100050 (China); Lundebye, Anne-Katrine; Froyland, Livar [National Institute of Nutrition and Seafood Research (NIFES), N-5817 Bergen (Norway)

    2012-02-01

    The risks and benefits of fish from markets in Chinese cities have not previously been fully evaluated. In the present study, 24 common fish species with more than 400 individual samples were collected from markets from five big Chinese cities in 2007. The main nutrients and contaminants were measured and the risk-benefit was evaluated based on recommended nutrient intakes and risk level criteria set by relevant authorities. The comprehensive effects of nutrients and contaminants in marine oily fish were also evaluated using the data of two related human dietary intervention trials performed in dyslipidemic Chinese men and women in 2008 and 2010, respectively. The results showed that concentrations of contaminants analyzed including DDT, PCB₇, arsenic and cadmium were much lower than their corresponding maximum limits with the exception of the mercury concentration in common carp. Concentrations of POPs and n-3 LCPUFA, mainly EPA and DHA, were positively associated with the lipid content of the fish. With a daily intake of 80-100 g marine oily fish, the persistent organic pollutants in fish would not counteract the beneficial effects of n-3 LCPUFA in reducing cardiovascular disease (CVD) risk markers. Marine oily fish provided more effective protection against CVD than lean fish, particularly for the dyslipidemic populations. The risk-benefit assessment based on the present daily aquatic product intake in Chinese urban residents (44.9 and 62.3 g for the average values for all cities and big cities, respectively) indicated that fish, particularly marine oily fish, can be regularly consumed to achieve optimal nutritional benefits from n-3 LCPUFA, without causing significant contaminant-related health risks. However, the potential health threat from contaminants in fish should still be emphasized for the populations consuming large quantities of fish, particularly wild fish.
- Highlights: ► We collected 24 fish species with more than 400 individual samples.

  11. Risk–benefit evaluation of fish from Chinese markets: Nutrients and contaminants in 24 fish species from five big cities and related assessment for human health

    International Nuclear Information System (INIS)

    Du, Zhen-Yu; Zhang, Jian; Wang, Chunrong; Li, Lixiang; Man, Qingqing; Lundebye, Anne-Katrine; Frøyland, Livar

    2012-01-01

    The risks and benefits of fish from markets in Chinese cities have not previously been fully evaluated. In the present study, 24 common fish species with more than 400 individual samples were collected from markets from five big Chinese cities in 2007. The main nutrients and contaminants were measured and the risk–benefit was evaluated based on recommended nutrient intakes and risk level criteria set by relevant authorities. The comprehensive effects of nutrients and contaminants in marine oily fish were also evaluated using the data of two related human dietary intervention trials performed in dyslipidemic Chinese men and women in 2008 and 2010, respectively. The results showed that concentrations of contaminants analyzed including DDT, PCB₇, arsenic and cadmium were much lower than their corresponding maximum limits with the exception of the mercury concentration in common carp. Concentrations of POPs and n-3 LCPUFA, mainly EPA and DHA, were positively associated with the lipid content of the fish. With a daily intake of 80–100 g marine oily fish, the persistent organic pollutants in fish would not counteract the beneficial effects of n-3 LCPUFA in reducing cardiovascular disease (CVD) risk markers. Marine oily fish provided more effective protection against CVD than lean fish, particularly for the dyslipidemic populations. The risk–benefit assessment based on the present daily aquatic product intake in Chinese urban residents (44.9 and 62.3 g for the average values for all cities and big cities, respectively) indicated that fish, particularly marine oily fish, can be regularly consumed to achieve optimal nutritional benefits from n-3 LCPUFA, without causing significant contaminant-related health risks. However, the potential health threat from contaminants in fish should still be emphasized for the populations consuming large quantities of fish, particularly wild fish. - Highlights: ► We collected 24 fish species with more than 400 individual samples.

  12. Big data naturally rescaled

    International Nuclear Information System (INIS)

    Stoop, Ruedi; Kanders, Karlis; Lorimer, Tom; Held, Jenny; Albert, Carlo

    2016-01-01

    We propose that a handle could be put on big data by looking at the systems that actually generate the data, rather than the data itself, realizing that there may be only a few generic processes involved, each one imprinting its very specific structures in the space of systems, the traces of which translate into feature space. From this, we propose a practical computational clustering approach, optimized for coping with such data, inspired by how the human cortex is known to approach the problem.

  13. Biochemical characterization of individual human glycosylated pro-insulin-like growth factor (IGF)-II and big-IGF-II isoforms associated with cancer.

    Science.gov (United States)

    Greenall, Sameer A; Bentley, John D; Pearce, Lesley A; Scoble, Judith A; Sparrow, Lindsay G; Bartone, Nicola A; Xiao, Xiaowen; Baxter, Robert C; Cosgrove, Leah J; Adams, Timothy E

    2013-01-04

    Insulin-like growth factor II (IGF-II) is a major embryonic growth factor belonging to the insulin-like growth factor family, which includes insulin and IGF-I. Its expression in humans is tightly controlled by maternal imprinting, a genetic restraint that is lost in many cancers, resulting in up-regulation of both mature IGF-II mRNA and protein expression. Additionally, increased expression of several longer isoforms of IGF-II, termed "pro" and "big" IGF-II, has been observed. To date, it is ambiguous as to what role these IGF-II isoforms have in initiating and sustaining tumorigenesis and whether they are bioavailable. We have expressed each individual IGF-II isoform in their proper O-glycosylated format and established that all bind to the IGF-I receptor and both insulin receptors A and B, resulting in their activation and subsequent stimulation of fibroblast proliferation. We also confirmed that all isoforms are able to be sequestered into binary complexes with several IGF-binding proteins (IGFBP-2, IGFBP-3, and IGFBP-5). In contrast to this, ternary complex formation with IGFBP-3 or IGFBP-5 and the auxiliary protein, acid labile subunit, was severely diminished. Furthermore, big-IGF-II isoforms bound much more weakly to purified ectodomain of the natural IGF-II scavenging receptor, IGF-IIR. IGF-II isoforms thus possess unique biological properties that may enable them to escape normal sequestration avenues and remain bioavailable in vivo to sustain oncogenic signaling.

  14. The International Big History Association

    Science.gov (United States)

    Duffy, Michael; Duffy, D'Neil

    2013-01-01

    IBHA, the International Big History Association, was organized in 2010 and "promotes the unified, interdisciplinary study and teaching of history of the Cosmos, Earth, Life, and Humanity." This is the vision that Montessori embraced long before the discoveries of modern science fleshed out the story of the evolving universe. "Big…

  15. Characterization of the human predominant fecal microbiota - With special focus on the Clostridial clusters IV and XIVa

    OpenAIRE

    Maukonen, Johanna

    2012-01-01

    The human gut microbiota is considered to be a complex fermentor with a metabolic potential rivaling that of the liver. In addition to its primary function in digestion, it affects the human host in numerous ways: maturation and modulation of the immune system, production of short-chain fatty acids and gases, transformation of bile acids, formation of vitamins, and also potential formation of mutagenic, toxic, and carcinogenic substances. Commensal bacteria are able to modulate the expression...

  16. Simulators IV

    International Nuclear Information System (INIS)

    Fairchild, B.T.

    1987-01-01

    These proceedings contain papers on simulators with artificial intelligence, and the human decision making process; visuals for simulators: human factors, training, and psycho-physical impacts; the role of institutional structure on simulation projects; maintenance trainers for economic value and safety; biomedical simulators for understanding nature, for medical benefits, and the physiological effects of simulators; the mathematical models and numerical techniques that drive today's simulators; and the demography of simulators, with census papers identifying the population of real-time simulator training devices; nuclear reactors

  17. Continuing to Build a Community Consensus on the Future of Human Space Flight: Report of the Fourth Community Workshop on Achievability and Sustainability of Human Exploration of Mars (AM IV)

    Science.gov (United States)

    Thronson, Harley A.; Baker, John; Beaty, David; Carberry, Chris; Craig, Mark; Davis, Richard M.; Drake, Bret G.; Cassady, Joseph; Hays, Lindsay; Hoffman, Stephen J.; hide

    2016-01-01

    To continue to build broadly based consensus on the future of human space exploration, the Fourth Community Workshop on Achievability and Sustainability of Human Exploration of Mars (AM IV), organized by Explore Mars, Inc. and the American Astronautical Society, was held at the Double Tree Inn in Monrovia, CA, December 6-8, 2016. Approximately 60 invited professionals from the industrial and commercial sectors, academia, and NASA, along with international colleagues, participated in the workshop. These individuals were chosen to be representative of the breadth of interests in astronaut and robotic Mars exploration.

  18. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  19. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  20. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.
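The quoted spectral resolution R = λ/Δλ translates directly into the smallest resolvable wavelength interval at each end of the passband. A quick check, using only the numbers stated in the abstract (340 nm with R = 3000, 1060 nm with R = 4800):

```python
def delta_lambda(wavelength_nm, resolution):
    """Smallest resolvable wavelength interval for R = lambda / d_lambda."""
    return wavelength_nm / resolution

# Blue end of the BigBOSS passband vs. red end:
blue = delta_lambda(340.0, 3000)   # ~0.11 nm per resolution element
red = delta_lambda(1060.0, 4800)   # ~0.22 nm per resolution element
```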

  1. Capillary electrophoresis of Big-Dye terminator sequencing reactions for human mtDNA Control Region haplotyping in the identification of human remains.

    Science.gov (United States)

    Montesino, Marta; Prieto, Lourdes

    2012-01-01

    Cycle sequencing reaction with Big-Dye terminators provides the methodology to analyze mtDNA Control Region amplicons by means of capillary electrophoresis. DNA sequencing with ddNTPs or terminators was developed by (1). The progressive automation of the method by combining the use of fluorescent-dye terminators with cycle sequencing has made it possible to increase the sensitivity and efficiency of the method and hence has allowed its introduction into the forensic field. PCR-generated mitochondrial DNA products are the templates for sequencing reactions. Different sets of primers can be used to generate amplicons of different sizes according to the quality and quantity of the DNA extract, providing sequence data for different ranges inside the Control Region.

  2. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review the current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to more improved healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  3. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  4. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Hauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  5. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  6. Privacy as human flourishing: could a shift towards virtue ethics strengthen privacy protection in the age of Big Data?

    NARCIS (Netherlands)

    van der Sloot, B.

    2014-01-01

    Privacy is commonly seen as an instrumental value in relation to negative freedom, human dignity and personal autonomy. Article 8 ECHR, protecting the right to privacy, was originally coined as a doctrine protecting the negative freedom of citizens in vertical relations, that is between citizen and

  7. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  8. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  9. The mobilize center: an NIH big data to knowledge center to advance human movement research and improve mobility.

    Science.gov (United States)

    Ku, Joy P; Hicks, Jennifer L; Hastie, Trevor; Leskovec, Jure; Ré, Christopher; Delp, Scott L

    2015-11-01

    Regular physical activity helps prevent heart disease, stroke, diabetes, and other chronic diseases, yet a broad range of conditions impair mobility at great personal and societal cost. Vast amounts of data characterizing human movement are available from research labs, clinics, and millions of smartphones and wearable sensors, but integration and analysis of this large quantity of mobility data are extremely challenging. The authors have established the Mobilize Center (http://mobilize.stanford.edu) to harness these data to improve human mobility and help lay the foundation for using data science methods in biomedicine. The Center is organized around 4 data science research cores: biomechanical modeling, statistical learning, behavioral and social modeling, and integrative modeling. Important biomedical applications, such as osteoarthritis and weight management, will focus the development of new data science methods. By developing these new approaches, sharing data and validated software tools, and training thousands of researchers, the Mobilize Center will transform human movement research. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  10. Temporal trend in the levels of polycyclic aromatic hydrocarbons emitted in a big tire landfill fire in Spain: Risk assessment for human health.

    Science.gov (United States)

    Rovira, Joaquim; Domínguez-Morueco, Noelia; Nadal, Martí; Schuhmacher, Marta; Domingo, José L

    2018-02-23

    In May 2016, a big fire occurred in an illegal landfill located in Seseña (Toledo, Spain), where between 70,000 and 90,000 tons of tires had been accumulated over the years. Just after the fire, and because of the increase of airborne PAHs, we found that cancer risks for the population living in the neighborhood of the landfill were 3-5 times higher than for the rest of the inhabitants of Seseña. Some months after our initial (June 2016) study, two sampling campaigns (December 2016 and May 2017) were performed to assess the temporal trends of the environmental levels of PAHs, as well as to reassure that these chemicals did not pose any risk to the human health of Seseña inhabitants. In soils, the total concentrations of the 16 PAHs (December 2016), as well as the sum of the seven carcinogenic PAHs, showed values between 8.5 and 94.7 ng g⁻¹ and between 1.0 and 42.3 ng g⁻¹, respectively. In May 2017, a significant decrease (between 4 and 38 times) in the levels of PAHs in air was observed, with total concentrations ranging between 3.49 and 5.06 ng m⁻³. One year after the fire, the cancer risk at different zones of Seseña was similar, being lower than that found in June 2016, and negligible according to national and international agencies.
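One common screening approach for inhalation cancer risk from airborne PAH mixtures is to convert individual concentrations into a benzo[a]pyrene-equivalent using toxic equivalency factors (TEFs) and multiply by a unit risk value. The sketch below uses widely cited literature TEFs and the WHO unit risk for benzo[a]pyrene (8.7 × 10⁻⁵ per ng m⁻³) purely as assumptions; the concentrations are invented, and the study's own methodology may differ:

```python
# Illustrative inhalation cancer-risk screening for airborne PAHs.
# TEFs and the unit risk are assumed literature values, not from the study.
TEF = {"benzo[a]pyrene": 1.0, "benzo[a]anthracene": 0.1, "chrysene": 0.01}

def bap_equivalent(conc_ng_m3):
    """BaP-equivalent concentration: sum of TEF-weighted concentrations (ng/m3)."""
    return sum(conc_ng_m3[p] * TEF[p] for p in conc_ng_m3)

def lifetime_risk(bap_eq_ng_m3, unit_risk_per_ng_m3=8.7e-5):
    """Lifetime excess cancer risk = BaP-eq concentration x unit risk."""
    return bap_eq_ng_m3 * unit_risk_per_ng_m3

# Hypothetical air concentrations (ng/m3) for three PAHs:
conc = {"benzo[a]pyrene": 0.2, "benzo[a]anthracene": 0.5, "chrysene": 1.0}
risk = lifetime_risk(bap_equivalent(conc))
```

Risks computed this way are compared against regulatory screening thresholds (often 10⁻⁶ to 10⁻⁴) to judge whether the exposure is negligible, which is the kind of comparison the abstract's "negligible according to national and international agencies" refers to.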

  11. Microcalorimetric measurements of heat production in human erythrocytes. IV. Comparison between different calorimetric techniques, suspension media, and preparation methods.

    Science.gov (United States)

    Monti, M; Wadsö, I

    1976-10-01

    Heat production in human erythrocytes from healthy subjects has been measured under different experimental conditions. Simultaneous measurements were made on the same samples using different types of microcalorimeters: a static ampoule calorimeter, an air perfusion calorimeter, and a flow calorimeter. The heat effect values obtained for specified standard conditions, P°, were within uncertainty limits the same for the different calorimeters. Cells were suspended either in autologous plasma or in a phosphate buffer. P° values for buffer suspensions were significantly higher than those for plasma suspensions. Erythrocyte samples prepared by the column adsorption technique gave higher P° values than those obtained by a conventional centrifugation procedure.

  12. Phase transitions in tumor growth: IV relationship between metabolic rate and fractal dimension of human tumor cells

    Science.gov (United States)

    Betancourt-Mar, J. A.; Llanos-Pérez, J. A.; Cocho, G.; Mansilla, R.; Martin, R. R.; Montero, S.; Nieto-Villar, J. M.

    2017-05-01

    Using the thermodynamic formalism of irreversible processes, complex systems theory, and systems biology, a relationship is derived between the entropy production rate, the fractal dimension, and the growth rate of human tumor cells. The thermodynamic framework developed demonstrates that the dissipation function is a Landau potential and also the Lyapunov function of the dynamical behavior of tumor growth, which indicates the directional character, stability, and robustness of the phenomenon. The entropy production rate may be used as a quantitative index of the metastatic potential of tumors. The current theoretical framework will hopefully provide a better understanding of cancer and contribute to improvements in cancer treatment.

  13. Big Data: Understanding Big Data

    OpenAIRE

    Taylor-Sakyi, Kevin

    2016-01-01

    Steve Jobs, one of the greatest visionaries of our time, was quoted in 1996 saying "a lot of times, people do not know what they want until you show it to them" [38], indicating he advocated products be developed based on human intuition rather than research. With the advancements of mobile devices, social networks and the Internet of Things, enormous amounts of complex data, both structured and unstructured, are being captured in the hope of allowing organizations to make better business decisions ...

  14. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  15. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  16. Population specific biomarkers of human aging: a big data study using South Korean, Canadian and Eastern European patient populations.

    Science.gov (United States)

    Mamoshina, Polina; Kochetov, Kirill; Putin, Evgeny; Cortese, Franco; Aliper, Alexander; Lee, Won-Suk; Ahn, Sung-Min; Uhn, Lee; Skjodt, Neil; Kovalchuk, Olga; Scheibye-Knudsen, Morten; Zhavoronkov, Alex

    2018-01-11

    Accurate and physiologically meaningful biomarkers for human aging are key to assessing anti-aging therapies. Given ethnic differences in health, diet, lifestyle, behaviour, environmental exposures and even average rate of biological aging, it stands to reason that aging clocks trained on datasets obtained from specific ethnic populations are more likely to account for these potential confounding factors, resulting in an enhanced capacity to predict chronological age and quantify biological age. Here we present a deep learning-based hematological aging clock modeled using the large combined dataset of Canadian, South Korean and Eastern European population blood samples that shows increased predictive accuracy in individual populations compared to population-specific hematologic aging clocks. The performance of the models was also evaluated on publicly available samples of the American population from the National Health and Nutrition Examination Survey (NHANES). In addition, we explored the association between age predicted by both population-specific and combined hematological clocks and all-cause mortality. Overall, this study suggests a) the population-specificity of aging patterns and b) that hematologic clocks predict all-cause mortality. The proposed models were added to the freely available Aging.AI system, improving the ability to assess human aging. © The Author(s) 2018. Published by Oxford University Press on behalf of The Gerontological Society of America.
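Stripped to its core, a hematological aging clock is a regression from blood-marker values to chronological age, evaluated by how closely predicted age tracks actual age. The paper uses a deep neural network; the toy sketch below substitutes closed-form ridge regression on synthetic data to illustrate only the fitting and error-measurement step, with all names and numbers invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "blood panel": 200 subjects x 5 markers, with age a noisy
# linear function of the markers. Real clocks use dozens of markers and
# nonlinear (deep) models; this only illustrates the regression step.
n, p = 200, 5
X = rng.normal(size=(n, p))
true_w = np.array([4.0, -2.0, 1.5, 0.0, 3.0])
age = 50 + X @ true_w + rng.normal(scale=2.0, size=n)

# Ridge regression in closed form: w = (Xc^T Xc + lam I)^-1 Xc^T y
# (for simplicity the intercept column is penalized along with the rest).
lam = 1.0
Xc = np.column_stack([np.ones(n), X])          # prepend intercept column
A = Xc.T @ Xc + lam * np.eye(p + 1)
w = np.linalg.solve(A, Xc.T @ age)

pred = Xc @ w
mae = np.mean(np.abs(pred - age))              # mean absolute error, in years
```

The mean absolute error in years is the headline metric usually reported for such clocks; the gap between predicted and chronological age is then interpreted as accelerated or decelerated biological aging.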

  17. Enhanced Design Alternative IV

    International Nuclear Information System (INIS)

    Kramer, N.E.

    1999-01-01

    This report evaluates Enhanced Design Alternative (EDA) IV as part of the second phase of the License Application Design Selection (LADS) effort. The EDA IV concept was compared to the VA reference design using criteria from the Design Input Request for LADS Phase II EDA Evaluations (CRWMS M and O 1999b) and (CRWMS M and O 1999f). Briefly, the EDA IV concept arranges the waste packages close together in an emplacement configuration known as line load. Continuous pre-closure ventilation keeps the waste packages from exceeding their 350°C cladding and 200°C (4.3.6) drift wall temperature limits. This EDA concept maintains relatively high, uniform emplacement drift temperatures (post-closure) to drive water away from the repository and thus dry out the pillars between emplacement drifts. The waste package is shielded to permit human access to emplacement drifts and includes an integral filler inside the package to reduce the amount of water that can contact the waste form. Closure of the repository is desired 50 years after the first waste is emplaced. Both backfill and drip shields will be emplaced at closure to improve post-closure performance. The EDA IV concept includes more defense-in-depth layers than the VA reference design because of its backfill, drip shield, waste package shielding, and integral filler features. These features contribute to the low dose rate to the public achieved during the first 10,000 years of repository life, as shown in Figure 3. Investigation of the EDA IV concept has led to the following general conclusions: (1) The total life cycle cost for EDA IV is about $21.7 billion, which equates to an $11.3 billion net present value (both figures rounded up). (2) The incidence of design basis events for EDA IV is similar to the VA reference design. (3) The emplacement of the waste packages in drifts will be similar to the VA reference design. However, heavier equipment may be required because the shielded waste package will be heavier. (4) The heavier

  18. Thyroid remnant ablation success and disease outcome in stage III or IV differentiated thyroid carcinoma: recombinant human thyrotropin versus thyroid hormone withdrawal.

    Science.gov (United States)

    Vallejo Casas, Juan A; Mena Bares, Luisa M; Gálvez Moreno, Maria A; Moreno Ortega, Estefanía; Marlowe, Robert J; Maza Muret, Francisco R; Albalá González, María D

    2016-06-01

    Most publications to date compare outcomes after post-surgical thyroid remnant ablation stimulated by recombinant human thyrotropin (rhTSH) versus thyroid hormone withholding/withdrawal (THW) in low-recurrence-risk differentiated thyroid carcinoma (DTC) patients. We sought to perform this comparison in high-risk patients. We retrospectively analyzed a ~9-year single-center experience in 70 consecutive adults with initial UICC (Union for International Cancer Control) stage III/IV, M0 DTC undergoing rhTSH-aided (N.=54) or THW-aided (N.=16) high-activity ablation. Endpoints included ablation success and DTC outcome. Assessed ≥1 year post-ablation, ablation success comprised a) no visible scintigraphic thyroid bed uptake or pathological extra-thyroidal uptake; b) undetectable stimulated serum thyroglobulin (Tg) without interfering autoantibodies; c) both criteria. DTC outcome, determined at the latest visit, comprised either 1) "no evidence of disease" (NED): undetectable Tg, negative Tg autoantibodies, negative most recent whole-body scan, and no suspicious findings clinically, on neck ultrasonography, or on other imaging; 2) persistent disease: failure to attain NED; or 3) recurrence: loss of NED. After the first ablative activity, ablation success by scintigraphic plus biochemical criteria was 64.8% in rhTSH patients and 56.3% in THW patients (P=NS). After 3.5-year versus 6.2-year median follow-up (P<0.05), DTC outcomes were NED, 85.2%, persistent disease, 13.0%, recurrence, 1.9%, in the rhTSH group and NED, 87.5%, persistent or recurrent disease, 6.3% each, in the THW group (P=NS). In patients with initial stage III/IV, M0 DTC, rhTSH-aided and THW-assisted ablation were associated with comparable remnant eradication and DTC cure rates.

  19. Different pressor and bronchoconstrictor properties of human big-endothelin-1, 2 (1-38) and 3 in ketamine/xylazine-anaesthetized guinea-pigs.

    OpenAIRE

    Gratton, J P; Rae, G A; Claing, A; Télémaque, S; D'Orléans-Juste, P

    1995-01-01

    1. In the present study, the precursors of endothelin-1, endothelin-2 and endothelin-3 were tested for their pressor and bronchoconstrictor properties in the anaesthetized guinea-pig. In addition, the effects of big-endothelin-1 and endothelin-1 were assessed under urethane or ketamine/xylazine anaesthesia. 2. When compared to ketamine/xylazine, urethane markedly depressed the pressor and bronchoconstrictor properties of endothelin-1 and big-endothelin-1. 3. Under ketamine/xylazine anaesthesi...

  20. DOE Human Genome Program: Contractor-Grantee Workshop IV, November 13--17, 1994, Santa Fe, New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    1994-10-01

    This volume contains the proceedings of the fourth Contractor-Grantee Workshop for the Department of Energy (DOE) Human Genome Program. Of the 204 abstracts in this book, some 200 describe the genome research of DOE-funded grantees and contractors located at the multidisciplinary centers at Lawrence Berkeley Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory; other DOE-supported laboratories; and more than 54 universities, research organizations, and companies in the United States and abroad. Included are 16 abstracts from ongoing projects in the Ethical, Legal, and Social Issues (ELSI) component, an area that continues to attract considerable attention from a wide variety of interested parties. Three abstracts summarize work in the new Microbial Genome Initiative launched this year by the Office of Health and Environmental Research (OHER) to provide genome sequence and mapping data on industrially important microorganisms and those that live under extreme conditions. Many of the projects will be discussed at plenary sessions held throughout the workshop, and all are represented in the poster sessions.

  1. Biochemical Characterization of Individual Human Glycosylated pro-Insulin-like Growth Factor (IGF)-II and big-IGF-II Isoforms Associated with Cancer

    Science.gov (United States)

    Greenall, Sameer A.; Bentley, John D.; Pearce, Lesley A.; Scoble, Judith A.; Sparrow, Lindsay G.; Bartone, Nicola A.; Xiao, Xiaowen; Baxter, Robert C.; Cosgrove, Leah J.; Adams, Timothy E.

    2013-01-01

    Insulin-like growth factor II (IGF-II) is a major embryonic growth factor belonging to the insulin-like growth factor family, which includes insulin and IGF-I. Its expression in humans is tightly controlled by maternal imprinting, a genetic restraint that is lost in many cancers, resulting in up-regulation of both mature IGF-II mRNA and protein expression. Additionally, increased expression of several longer isoforms of IGF-II, termed “pro” and “big” IGF-II, has been observed. To date, it remains unclear what role these IGF-II isoforms play in initiating and sustaining tumorigenesis and whether they are bioavailable. We have expressed each individual IGF-II isoform in its proper O-glycosylated format and established that all bind to the IGF-I receptor and both insulin receptors A and B, resulting in their activation and subsequent stimulation of fibroblast proliferation. We also confirmed that all isoforms are able to be sequestered into binary complexes with several IGF-binding proteins (IGFBP-2, IGFBP-3, and IGFBP-5). In contrast to this, ternary complex formation with IGFBP-3 or IGFBP-5 and the auxiliary protein, acid labile subunit, was severely diminished. Furthermore, big-IGF-II isoforms bound much more weakly to the purified ectodomain of the natural IGF-II scavenging receptor, IGF-IIR. IGF-II isoforms thus possess unique biological properties that may enable them to escape normal sequestration avenues and remain bioavailable in vivo to sustain oncogenic signaling. PMID:23166326

  2. Complete amino acid sequence of the human alpha 5 (IV) collagen chain and identification of a single-base mutation in exon 23 converting glycine 521 in the collagenous domain to cysteine in an Alport syndrome patient

    DEFF Research Database (Denmark)

    Zhou, J; Hertz, Jens Michael; Leinonen, A

    1992-01-01

    We have generated and characterized cDNA clones providing the complete amino acid sequence of the human type IV collagen chain whose gene has been shown to be mutated in X chromosome-linked Alport syndrome. The entire translation product has 1,685 amino acid residues. There is a 26-residue signal...

  3. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. Unified theory of the moment of creation, evidence of an expanding Universe, the X-boson -the particle produced very soon after the big bang and which vanished from the Universe one-hundredth of a second after the big bang, and the fate of the Universe, are all discussed. (U.K.)

  4. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions based on identifying patterns in the data, rather than trying to understand the underlying causes in more detail. It highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  5. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  6. Digital humanitarians how big data is changing the face of humanitarian response

    CERN Document Server

    Meier, Patrick

    2015-01-01

    The Rise of Digital Humanitarians; Mapping Haiti Live; Supporting Search and Rescue Efforts; Preparing for the Long Haul; Launching an SMS Life Line; Sending in the Choppers; OpenStreetMap to the Rescue; Post-Disaster Phase; The Human Story; Doing Battle with Big Data; Rise of Digital Humanitarians; This Book and You; The Rise of Big (Crisis) Data; Big (Size) Data; Finding Needles in Big (Size) Data; Policy, Not Simply Technology; Big (False) Data; Unpacking Big (False) Data; Calling 991 and 999; Big (

  7. Helicobacter pylori Type IV Secretion System and Its Adhesin Subunit, CagL, Mediate Potent Inflammatory Responses in Primary Human Endothelial Cells

    Directory of Open Access Journals (Sweden)

    Mona Tafreshi

    2018-02-01

    Full Text Available The Gram-negative bacterium, Helicobacter pylori, causes chronic gastritis, peptic ulcers, and gastric cancer in humans. Although the gastric epithelium is the primary site of H. pylori colonization, H. pylori can gain access to deeper tissues. Concurring with this notion, H. pylori has been found in the vicinity of endothelial cells in gastric submucosa. Endothelial cells play crucial roles in innate immune response, wound healing and tumorigenesis. This study examines the molecular mechanisms by which H. pylori interacts with and triggers inflammatory responses in endothelial cells. We observed that H. pylori infection of primary human endothelial cells stimulated secretion of the key inflammatory cytokines, interleukin-6 (IL-6) and interleukin-8 (IL-8). In particular, IL-8, a potent chemokine and angiogenic factor, was secreted by H. pylori-infected endothelial cells to levels ~10- to 20-fold higher than that typically observed in H. pylori-infected gastric epithelial cells. These inflammatory responses were triggered by the H. pylori type IV secretion system (T4SS) and the T4SS-associated adhesin CagL, but not the translocation substrate CagA. Moreover, in contrast to integrin α5β1 playing an essential role in IL-8 induction by H. pylori upon infection of gastric epithelial cells, both integrin α5β1 and integrin αvβ3 were dispensable for IL-8 induction in H. pylori-infected endothelial cells. However, epidermal growth factor receptor (EGFR) is crucial for mediating the potent H. pylori-induced IL-8 response in endothelial cells. This study reveals a novel mechanism by which the H. pylori T4SS and its adhesin subunit, CagL, may contribute to H. pylori pathogenesis by stimulating the endothelial innate immune responses, while highlighting EGFR as a potential therapeutic target for controlling H. pylori-induced inflammation.

  8. What do Big Data do in Global Governance?

    DEFF Research Database (Denmark)

    Krause Hansen, Hans; Porter, Tony

    2017-01-01

    Two paradoxes associated with big data are relevant to global governance. First, while promising to increase the capacities of humans in governance, big data also involve an increasingly independent role for algorithms, technical artifacts, the Internet of things, and other objects, which can reduce the control of human actors. Second, big data involve new boundary transgressions as data are brought together from multiple sources while also creating new boundary conflicts as powerful actors seek to gain advantage by controlling big data and excluding competitors. These changes are not just about new data sources for global decision-makers, but instead signal more profound changes in the character of global governance.

  9. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of record-keeping, and IT-competent employees and customers that make a leading position possible, but only if companies get ready for the next big data wave.

  10. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  11. The tarantula toxins ProTx-II and huwentoxin-IV differentially interact with human Nav1.7 voltage sensors to inhibit channel activation and inactivation.

    Science.gov (United States)

    Xiao, Yucheng; Blumenthal, Kenneth; Jackson, James O; Liang, Songping; Cummins, Theodore R

    2010-12-01

    The voltage-gated sodium channel Na(v)1.7 plays a crucial role in pain, and drugs that inhibit hNa(v)1.7 may have tremendous therapeutic potential. ProTx-II and huwentoxin-IV (HWTX-IV), cystine knot peptides from tarantula venoms, preferentially block hNa(v)1.7. Understanding the interactions of these toxins with sodium channels could aid the development of novel pain therapeutics. Whereas both ProTx-II and HWTX-IV have been proposed to preferentially block hNa(v)1.7 activation by trapping the domain II voltage-sensor in the resting configuration, we show that specific residues in the voltage-sensor paddle of domain II play substantially different roles in determining the affinities of these toxins to hNa(v)1.7. The mutation E818C increases ProTx-II's and HWTX-IV's IC(50) for block of hNa(v)1.7 currents by 4- and 400-fold, respectively. In contrast, the mutation F813G decreases ProTx-II affinity by 9-fold but has no effect on HWTX-IV affinity. It is noteworthy that we also show that ProTx-II, but not HWTX-IV, preferentially interacts with hNa(v)1.7 to impede fast inactivation by trapping the domain IV voltage-sensor in the resting configuration. Mutations E1589Q and T1590K in domain IV each decreased ProTx-II's IC(50) for impairment of fast inactivation by ~6-fold. In contrast mutations D1586A and F1592A in domain-IV increased ProTx-II's IC(50) for impairment of fast inactivation by ~4-fold. Our results show that whereas ProTx-II and HWTX-IV binding determinants on domain-II may overlap, domain II plays a much more crucial role for HWTX-IV, and contrary to what has been proposed to be a guiding principle of sodium channel pharmacology, molecules do not have to exclusively target the domain IV voltage-sensor to influence sodium channel inactivation.

  12. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  13. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  14. The BIG Data Center: from deposition to integration to translation.

    Science.gov (United States)

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. IV treatment at home

    Science.gov (United States)

    ... Other IV treatments you may receive after you leave the hospital include: Treatment for hormone deficiencies Medicines for severe nausea that cancer chemotherapy or pregnancy may cause Patient-controlled analgesia (PCA) for pain (this is IV ...

  16. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of "Big Data" in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  17. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  18. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  19. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. The growth of data creates a situation where the classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, and so on. Our challenge is to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years, and it is increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.
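
    The units mentioned above can be pinned down with a trivial helper; a sketch assuming the decimal (SI) interpretation of the prefixes (1 EB = 10^18 B, 1 ZB = 10^21 B, 1 YB = 10^24 B; some contexts use binary prefixes instead):

    ```python
    # Decimal (SI) byte units named in the abstract. Each step up the
    # scale is a factor of 1000.
    UNITS = {"exabyte": 10**18, "zettabyte": 10**21, "yottabyte": 10**24}

    def to_bytes(value, unit):
        """Convert a quantity expressed in one of the large SI units to bytes."""
        return value * UNITS[unit]

    # One zettabyte is a thousand exabytes.
    print(to_bytes(1, "zettabyte") // to_bytes(1, "exabyte"))  # prints 1000
    ```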

  20. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  1. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's, so often mentioned in relation to big data, entail? By way of introduction to

  2. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  3. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  4. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  5. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is, and will increasingly be, used as a tool for everything that happens both online and offline. Online in particular, Big Data offers many advantages and is a real help for all consumers. In this paper we discuss Big Data as an asset in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit brought by this paper is presented in the cloud section.

  6. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  7. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  8. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... Email: arkady@ll.mit.edu. Chapter 1, Cryptography for Big Data Security, 1.1 Introduction: With the amount

  9. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  10. Citizens’ Media Meets Big Data: The Emergence of Data Activism

    NARCIS (Netherlands)

    Milan, S.; Gutiérrez, M.

    2015-01-01

    Big data presents citizens with new challenges and opportunities. ‘Data activism’ practices emerge at the intersection of the social and technological dimension of human action, whereby citizens take a critical approach to big data, and appropriate and manipulate data for advocacy and social change.

  11. Crisis analytics : big data-driven crisis response

    NARCIS (Netherlands)

    Qadir, Junaid; ur Rasool, Raihan; Zwitter, Andrej; Sathiaseelan, Arjuna; Crowcroft, Jon

    2016-01-01

    Disasters have long been a scourge for humanity. With the advances in technology (in terms of computing, communications, and the ability to process, and analyze big data), our ability to respond to disasters is at an inflection point. There is great optimism that big data tools can be leveraged to

  12. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects... The paper shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact.

  13. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution... and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  14. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.
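The FRW models mentioned in this abstract are standardly written in the Friedmann-Robertson-Walker form (a textbook statement, not taken from the thesis itself):

```latex
ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - kr^2}
     + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right],
\qquad k \in \{-1, 0, +1\}
```

The big bang singularity then corresponds to the scale factor $a(t) \to 0$ as $t \to 0^{+}$, where curvature invariants diverge.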

  15. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic

  16. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

Full Text Available Today Big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge, the value of the information. The information value is not only defined by the value extraction from huge data sets, as fast and optimal as possible, but also by the value extraction from uncertain and inaccurate data, in an innovative manner using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that the real value can be gained. This article aims to explain the Big data concept, its various classification criteria, its architecture, as well as its impact on worldwide processes.

  17. Big Creek Pit Tags

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The BCPITTAGS database is used to store data from an Oncorhynchus mykiss (steelhead/rainbow trout) population dynamics study in Big Creek, a coastal stream along the...

  18. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-01-01

on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also

  19. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  20. An Intron 9 CYP19 Gene Variant (IVS9+5G>A), Present in an Aromatase-Deficient Girl, Affects Normal Splicing and Is Also Present in Normal Human Steroidogenic Tissues.

    Science.gov (United States)

    Saraco, Nora; Nesi-Franca, Suzana; Sainz, Romina; Marino, Roxana; Marques-Pereira, Rosana; La Pastina, Julia; Perez Garrido, Natalia; Sandrini, Romolo; Rivarola, Marco Aurelio; de Lacerda, Luiz; Belgorosky, Alicia

    2015-01-01

    Splicing CYP19 gene variants causing aromatase deficiency in 46,XX disorder of sexual development (DSD) patients have been reported in a few cases. A misbalance between normal and aberrant splicing variants was proposed to explain spontaneous pubertal breast development but an incomplete sex maturation progress. The aim of this study was to functionally characterize a novel CYP19A1 intronic homozygote mutation (IVS9+5G>A) in a 46,XX DSD girl presenting spontaneous breast development and primary amenorrhea, and to evaluate similar splicing variant expression in normal steroidogenic tissues. Genomic DNA analysis, splicing prediction programs, splicing assays, and in vitro protein expression and enzyme activity analyses were carried out. CYP19A1 mRNA expression in human steroidogenic tissues was also studied. A novel IVS9+5G>A homozygote mutation was found. In silico analysis predicts the disappearance of the splicing donor site in intron 9, confirmed by patient peripheral leukocyte cP450arom and in vitro studies. Protein analysis showed a shorter and inactive protein. The intron 9 transcript variant was also found in human steroidogenic tissues. The mutation IVS9+5G>A generates a splicing variant that includes intron 9 which is also present in normal human steroidogenic tissues, suggesting that a misbalance between normal and aberrant splicing variants might occur in target tissues, explaining the clinical phenotype in the affected patient. © 2015 S. Karger AG, Basel.

  1. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

Full Text Available Given the importance the term Big Data has acquired, this study set out to examine exhaustively the state of the art of Big Data; as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics in the management of Big Data, so as to cover everything concerning the central topic of the research. The methodology included reviewing the state of the art of Big Data and presenting its current situation; surveying Big Data technologies; introducing some of the NoSQL databases, which are the ones that allow processing data in unstructured formats; and showing data models and the technologies for analyzing them, concluding with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research is a first step toward understanding the Big Data landscape.

  2. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  3. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

Full Text Available This paper's objective is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": the huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  4. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  5. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

Big data, the idea that an always-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods increases when they are used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
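The bias pitfall described in this abstract can be illustrated with a small simulation (a sketch under assumed parameters, not the paper's analysis): a huge but self-selected sample yields a worse estimate of the population mean than a small but properly randomized one.

```python
import random

random.seed(42)

# Hypothetical population of 1,000,000 values; assume the quantity of
# interest is correlated with the chance of being recorded at all.
population = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]


def mean(xs):
    return sum(xs) / len(xs)


true_mean = mean(population)

# "Big data": an enormous sample, but individuals with larger values are
# more likely to appear in it (self-selection bias).
big_biased = [x for x in population if random.random() < 0.5 + 0.2 * (x > 0)]

# Small but properly randomized sample of the same population.
small_random = random.sample(population, 1_000)

print(f"true mean:           {true_mean:+.3f}")
print(f"big biased sample:   {mean(big_biased):+.3f}  (n={len(big_biased)})")
print(f"small random sample: {mean(small_random):+.3f}  (n={len(small_random)})")
```

With these assumed inclusion probabilities the biased sample's mean is systematically shifted upward no matter how large it grows, while the small random sample fluctuates around the truth: exactly the systematic error (bias) versus increased variance trade-off the abstract describes.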

  6. THE FASTEST OODA LOOP: THE IMPLICATIONS OF BIG DATA FOR AIR POWER

    Science.gov (United States)

    2016-06-01

need for a human interpreter. Until the rise of Big Data, automated translation only had a “small” library of several million words to pull from and... Air Command and Staff College, Air University: The Fastest OODA Loop: The Implications of Big Data for Air Power, by Aaron J. Dove, Maj, USAF.

  7. Privacy Challenges of Genomic Big Data.

    Science.gov (United States)

    Shen, Hong; Ma, Jian

    2017-01-01

With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently at low cost. However, such a massive amount of personal genomic data creates tremendous challenges for privacy, especially given the emergence of the direct-to-consumer (DTC) industry that provides genetic testing services. Here we review recent developments in genomic big data and their implications for privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  8. Cognitive computing and big data analytics

    CERN Document Server

    Hurwitz, Judith; Bowles, Adrian

    2015-01-01

    MASTER THE ABILITY TO APPLY BIG DATA ANALYTICS TO MASSIVE AMOUNTS OF STRUCTURED AND UNSTRUCTURED DATA Cognitive computing is a technique that allows humans and computers to collaborate in order to gain insights and knowledge from data by uncovering patterns and anomalies. This comprehensive guide explains the underlying technologies, such as artificial intelligence, machine learning, natural language processing, and big data analytics. It then demonstrates how you can use these technologies to transform your organization. You will explore how different vendors and different industries are a

  9. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
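The kind of declarative data-quality rule the abstract mentions can be sketched as follows. This is an illustration only, not BigDansing's actual API or its distributed execution: it detects violations of a functional dependency (zip determines city) by the naive pair enumeration that BigDansing's shared scans and specialized join operators are designed to avoid at scale.

```python
from itertools import combinations

# Toy relation with one dirty tuple; ids, zips and cities are made up.
rows = [
    {"id": 1, "zip": "10001", "city": "New York"},
    {"id": 2, "zip": "10001", "city": "NYC"},      # violates zip -> city
    {"id": 3, "zip": "60601", "city": "Chicago"},
]


def fd_violations(rows, lhs, rhs):
    """Return id pairs of tuples violating the functional dependency lhs -> rhs."""
    return [
        (a["id"], b["id"])
        for a, b in combinations(rows, 2)
        if a[lhs] == b[lhs] and a[rhs] != b[rhs]
    ]


print(fd_violations(rows, "zip", "city"))  # -> [(1, 2)]
```

The quadratic cost of `combinations` over every tuple pair is precisely why a cleansing system for big datasets must compile such rules into distributed transformations rather than evaluate them directly.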

  10. Generation IV national program

    International Nuclear Information System (INIS)

    Preville, M.; Sadhankar, R.; Brady, D.

    2007-01-01

    This paper outlines the Generation IV National Program. This program involves evolutionary and innovative design with significantly higher efficiencies (∼50% compared to present ∼30%) - sustainable, economical, safe, reliable and proliferation resistant - for future energy security. The Generation IV Forum (GIF) effectively leverages the resources of the participants to meet these goals. Ten countries signed the GIF Charter in 2001

  11. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies.

  12. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

The paper discusses the rewards and challenges of employing commercial audience measurement data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according to a commercial logic (boyd & Crawford 2011) and are as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  13. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
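The "supervised" machine learning the review introduces can be shown in miniature. This is a generic sketch in pure Python (not one of the R packages or webservers the review covers): a nearest-centroid classifier trained on labeled points, with all data invented for illustration.

```python
from math import dist

# Tiny labeled training set: 2-D feature vectors with class labels.
train = [([1.0, 1.2], "A"), ([0.8, 1.0], "A"),
         ([4.0, 3.8], "B"), ([4.2, 4.1], "B")]


def fit(train):
    """Compute the per-class centroid of the training vectors."""
    grouped = {}
    for x, label in train:
        grouped.setdefault(label, []).append(x)
    return {label: [sum(coord) / len(coord) for coord in zip(*xs)]
            for label, xs in grouped.items()}


def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], x))


model = fit(train)
print(predict(model, [1.1, 1.1]))  # -> A
print(predict(model, [3.9, 4.0]))  # -> B
```

An "unsupervised" method would instead discover the two clusters without the labels; the supervised version above uses them to name the groups in advance.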

  15. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

Full Text Available The evolution of information systems and the growth in the use of the Internet and social networks have caused an explosion in the amount of available data relevant to the activities of companies. Therefore, the treatment of these available data is vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored considering nine sectors of activity, such as financial, retail, healthcare, transports, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that, despite the potential for big data to grow in the previously identified areas, there are still some challenges that need to be considered and mitigated, namely the privacy of information, the existence of qualified human resources to work with Big Data and the promotion of a data-driven organizational culture.

  16. The human clone ST22 SCCmec IV methicillin-resistant Staphylococcus aureus isolated from swine herds and wild primates in Nepal: is man the common source?

    Science.gov (United States)

    Roberts, Marilyn C; Joshi, Prabhu Raj; Greninger, Alexander L; Melendez, Daira; Paudel, Saroj; Acharya, Mahesh; Bimali, Nabin Kishor; Koju, Narayan P; No, David; Chalise, Mukesh; Kyes, Randall C

    2018-05-01

Swine nasal samples [n = 282] were collected from 12 randomly selected farms around Kathmandu, Nepal, from healthy animals. In addition, wild monkey (Macaca mulatta) saliva samples [n = 59] were collected near temple areas in Kathmandu using a non-invasive sampling technique. All samples were processed for MRSA using standardized selective media and conventional biochemical tests. MRSA verification was done and isolates were characterized by SCCmec typing, multilocus sequence typing, whole genome sequencing [WGS] and antibiotic susceptibilities. Six (2.1%) swine MRSA isolates were recovered from five of the different swine herds tested; five were ST22 type IV and one ST88 type V. Four (6.8%) macaque MRSA isolates were recovered, with three ST22 SCCmec type IV and one ST239 type III. WGS showed that the eight ciprofloxacin-resistant ST22 isolates carried the gyrA mutation [S84L]. Six isolates carried erm(C) genes, five isolates carried aacC-aphD genes and four isolates carried blaZ genes. The swine linezolid-resistant ST22 isolate did not carry any known acquired linezolid resistance genes but had a mutation in ribosomal protein L22 [A29V] and an insertion in L4 [68KG69], both previously associated with linezolid resistance. Multiple virulence factors were also identified. This is the first time MRSA ST22 SCCmec IV has been isolated from livestock or primates.

  17. Dosimetry results for Big Ten and related benchmarks

    International Nuclear Information System (INIS)

    Hansen, G.E.; Gilliam, D.M.

    1980-01-01

Measured average reaction cross sections for the Big Ten central flux spectrum are given together with calculated values based on the U.S. Evaluated Nuclear Data File ENDF/B-IV. Central reactivity coefficients for 233U, 235U, 239Pu, 6Li and 10B are given to check consistency of bias between measured and calculated reaction cross sections for these isotopes. Spectral indexes for the Los Alamos 233U, 235U and 239Pu metal critical assemblies are updated, utilizing the Big Ten measurements and interassembly calibrations, and their implications for inelastic scattering are reiterated

  18. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than life time prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
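The sensitivity and specificity figures reported above follow directly from a screening tool's confusion matrix. The counts below are hypothetical, chosen only to reproduce the reported percentages from the stated group sizes (300 diagnosed patients, 132 comparisons); they are not the study's raw data.

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of true cases the screener flags."""
    return tp / (tp + fn)


def specificity(tn, fp):
    """True negative rate: fraction of non-cases the screener clears."""
    return tn / (tn + fp)


# Hypothetical counts consistent with the abstract's group sizes:
tp, fn = 299, 1   # 299 of 300 clinically diagnosed patients flagged
tn, fp = 127, 5   # 127 of 132 comparison subjects correctly cleared

print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # -> 99.7%
print(f"specificity: {specificity(tn, fp):.1%}")  # -> 96.2%
```

A receiver operating characteristic (ROC) analysis, as used in the study, plots sensitivity against 1 - specificity across candidate cutoff scores to pick the best threshold.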

  19. Neptunium (IV) oxalate solubility

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

The equilibrium solubility of neptunium (IV) oxalate in nitric/oxalic acid solutions was determined at 22 °C, 45 °C, and 60 °C. The concentrations of the nitric/oxalic acid solutions represented a wide range of free oxalate ion concentration. A mathematical solubility model was developed which is based on the formation of the known complexes of neptunium (IV) oxalate. The solubility model uses a simplified concentration parameter which is proportional to the free oxalate ion concentration. The solubility model can be used to estimate the equilibrium solubility of neptunium (IV) oxalate over a wide range of oxalic and nitric acid concentrations at each temperature
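A solubility model of the kind described, built from stepwise oxalate complexation, can be sketched in generic speciation form (the constants $\beta_n$ and the number of complexes $N$ are placeholders, not the paper's fitted values):

```latex
S = [\mathrm{Np^{4+}}] + \sum_{n=1}^{N} [\mathrm{Np(C_2O_4)_n^{\,4-2n}}]
  = [\mathrm{Np^{4+}}] \left( 1 + \sum_{n=1}^{N} \beta_n\,[\mathrm{C_2O_4^{2-}}]^n \right)
```

Here the $\beta_n$ are cumulative formation constants, so the total solubility $S$ is governed by the free oxalate ion concentration alone, consistent with the single simplified concentration parameter the abstract describes.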

  20. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  1. Big ideas: innovation policy

    OpenAIRE

    John Van Reenen

    2011-01-01

    In the last CentrePiece, John Van Reenen stressed the importance of competition and labour market flexibility for productivity growth. His latest in CEP's 'big ideas' series describes the impact of research on how policy-makers can influence innovation more directly - through tax credits for business spending on research and development.

  2. Big Data ethics

    NARCIS (Netherlands)

    Zwitter, Andrej

    2014-01-01

    The speed of development in Big Data and associated phenomena, such as social media, has surpassed the capacity of the average consumer to understand his or her actions and their knock-on effects. We are moving towards changes in how ethics has to be perceived: away from individual decisions with

  3. Big data in history

    CERN Document Server

    Manning, Patrick

    2013-01-01

    Big Data in History introduces the project to create a world-historical archive, tracing the last four centuries of historical dynamics and change. Chapters address the archive's overall plan, how to interpret the past through a global archive, the missions of gathering records, linking local data into global patterns, and exploring the results.

  4. The Big Sky inside

    Science.gov (United States)

    Adams, Earle; Ward, Tony J.; Vanek, Diana; Marra, Nancy; Hester, Carolyn; Knuth, Randy; Spangler, Todd; Jones, David; Henthorn, Melissa; Hammill, Brock; Smith, Paul; Salisbury, Rob; Reckin, Gene; Boulafentis, Johna

    2009-01-01

    The University of Montana (UM)-Missoula has implemented a problem-based program in which students perform scientific research focused on indoor air pollution. The Air Toxics Under the Big Sky program (Jones et al. 2007; Adams et al. 2008; Ward et al. 2008) provides a community-based framework for understanding the complex relationship between poor…

  5. Moving Another Big Desk.

    Science.gov (United States)

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  6. A Big Bang Lab

    Science.gov (United States)

    Scheider, Walter

    2005-01-01

The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that cannot be set up in a lab, either because they are too big, too far away, or because they happened in the very distant past. The authors of "How Far are the Stars?" show how the…

  7. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that they cause the single singular events predicted by the corresponding quintessential (phantom) models to appear twice, in a manner which can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future may be connected in a non-paradoxical manner in the universes described by the new symmetric solutions.

  8. The Big Bang

    CERN Multimedia

    Moods, Patrick

    2006-01-01

    How did the Universe begin? The favoured theory is that everything - space, time, matter - came into existence at the same moment, around 13.7 thousand million years ago. This event was scornfully referred to as the "Big Bang" by Sir Fred Hoyle, who did not believe in it and maintained that the Universe had always existed.

  9. Big Data Analytics

    Indian Academy of Sciences (India)

The volume and variety of data being generated using computers is doubling every two years. It is estimated that in 2015, 8 Zettabytes (Zetta=1021) were generated, which consisted mostly of unstructured data such as emails, blogs, Twitter, Facebook posts, images, and videos. This is called big data. It is possible to analyse ...
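The doubling claim above can be turned into a back-of-the-envelope projection (a sketch assuming the abstract's figures of 8 ZB in 2015 and doubling every two years; the function name is ours):

```python
def projected_zettabytes(year, base_year=2015, base_zb=8.0):
    """Project global data volume, assuming it doubles every two years
    from roughly 8 ZB in 2015 (the abstract's estimate)."""
    return base_zb * 2 ** ((year - base_year) / 2)

# Two doubling periods after 2015 gives four times the base volume.
print(projected_zettabytes(2019))  # 32.0
```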

  10. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  11. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  12. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
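The sorted-array-plus-bit-array idea the abstract attributes to IEJoin can be illustrated with a toy inequality self-join (a simplified sketch, not the published algorithm; the sample data and two-predicate condition are illustrative):

```python
def ie_self_join(rows):
    """Toy inequality self-join: return index pairs (i, j) such that
    rows[i][0] < rows[j][0] AND rows[i][1] > rows[j][1].
    Assumes distinct attribute values; illustrative only."""
    n = len(rows)
    # Order of rows sorted by the first attribute, ascending.
    by_a = sorted(range(n), key=lambda i: rows[i][0])
    pos_in_a = {idx: p for p, idx in enumerate(by_a)}
    # Visit rows by the second attribute, descending; a bit array marks
    # already-visited rows at their position in the a-sorted order.
    bits = [False] * n
    result = []
    for j in sorted(range(n), key=lambda i: rows[i][1], reverse=True):
        p = pos_in_a[j]
        # Every set bit left of p is a row with smaller a and larger b.
        for q in range(p):
            if bits[q]:
                result.append((by_a[q], j))
        bits[p] = True
    return result

rows = [(140, 9), (100, 12), (90, 5), (120, 7), (80, 11)]
print(sorted(ie_self_join(rows)))  # [(1, 0), (1, 3), (4, 0), (4, 2), (4, 3)]
```

The scan over set bits replaces a naïve nested-loop comparison of every row pair: sorting fixes one predicate's order, and the bit array records which rows already satisfy the other.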

  13. Big data analysis new algorithms for a new society

    CERN Document Server

    Stefanowski, Jerzy

    2016-01-01

    This edited volume is devoted to Big Data Analysis from a Machine Learning standpoint as presented by some of the most eminent researchers in this area. It demonstrates that Big Data Analysis opens up new research problems which were either never considered before, or were only considered within a limited range. In addition to providing methodological discussions on the principles of mining Big Data and the difference between traditional statistical data analysis and newer computing frameworks, this book presents recently developed algorithms affecting such areas as business, financial forecasting, human mobility, the Internet of Things, information networks, bioinformatics, medical systems and life science. It explores, through a number of specific examples, how the study of Big Data Analysis has evolved and how it has started and will most likely continue to affect society. While the benefits brought upon by Big Data Analysis are underlined, the book also discusses some of the warnings that have been issued...

  14. SAGE IV Pathfinder

    Data.gov (United States)

    National Aeronautics and Space Administration — Utilizing a unique, new occultation technique involving imaging, the SAGE IV concept will meet or exceed the quality of previous SAGE measurements at a small...

  15. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  16. Semantic Web technologies for the big data in life sciences.

    Science.gov (United States)

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies are continuing to grow in not only size but also variety and complexity, with great speed. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in life sciences, big data related projects and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for the big data in life sciences.
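The core Semantic Web data model the survey refers to, subject-predicate-object triples linked across sources by shared URIs, can be sketched without any RDF tooling (a toy illustration; the `ex:` terms are invented):

```python
# Heterogeneous life-science records expressed as (subject, predicate,
# object) triples. Two separate "databases" integrate simply because
# they use the same identifier for the gene.
triples = [
    # from a hypothetical gene database
    ("ex:BRCA1", "rdf:type", "ex:Gene"),
    ("ex:BRCA1", "ex:locatedOn", "ex:Chromosome17"),
    # from a separate disease database, linked via the same URI
    ("ex:BRCA1", "ex:associatedWith", "ex:BreastCancer"),
]

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard,
    mimicking a single SPARQL-style triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All facts known about the gene, regardless of which source stated them.
print(query(s="ex:BRCA1"))
```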

  17. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

Full Text Available Big data is the term for any collection of data sets so large and complex that it becomes difficult to process them using conventional data processing applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate diseases, conflict, etc., we require bigger data sets than the smaller ones traditionally used. Big data is hard to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. In this paper there was an observation on Hadoop architecture, different tools used for big data and its security issues.

  18. Finding the big bang

    CERN Document Server

    Page, Lyman A; Partridge, R Bruce

    2009-01-01

    Cosmology, the study of the universe as a whole, has become a precise physical science, the foundation of which is our understanding of the cosmic microwave background radiation (CMBR) left from the big bang. The story of the discovery and exploration of the CMBR in the 1960s is recalled for the first time in this collection of 44 essays by eminent scientists who pioneered the work. Two introductory chapters put the essays in context, explaining the general ideas behind the expanding universe and fossil remnants from the early stages of the expanding universe. The last chapter describes how the confusion of ideas and measurements in the 1960s grew into the present tight network of tests that demonstrate the accuracy of the big bang theory. This book is valuable to anyone interested in how science is done, and what it has taught us about the large-scale nature of the physical universe.

  19. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Klinkby Madsen, Anders; Rasche, Andreas

This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects of international development agendas to algorithms that synthesize large-scale data, (3) novel ways of rationalizing knowledge claims that underlie development efforts, and (4) shifts in professional and organizational identities of those concerned with producing and processing data for development. Our discussion...

  20. Big nuclear accidents

    International Nuclear Information System (INIS)

    Marshall, W.; Billingon, D.E.; Cameron, R.F.; Curl, S.J.

    1983-09-01

    Much of the debate on the safety of nuclear power focuses on the large number of fatalities that could, in theory, be caused by extremely unlikely but just imaginable reactor accidents. This, along with the nuclear industry's inappropriate use of vocabulary during public debate, has given the general public a distorted impression of the risks of nuclear power. The paper reviews the way in which the probability and consequences of big nuclear accidents have been presented in the past and makes recommendations for the future, including the presentation of the long-term consequences of such accidents in terms of 'loss of life expectancy', 'increased chance of fatal cancer' and 'equivalent pattern of compulsory cigarette smoking'. The paper presents mathematical arguments, which show the derivation and validity of the proposed methods of presenting the consequences of imaginable big nuclear accidents. (author)

  1. Big Bounce and inhomogeneities

    International Nuclear Information System (INIS)

    Brizuela, David; Mena Marugan, Guillermo A; Pawlowski, Tomasz

    2010-01-01

    The dynamics of an inhomogeneous universe is studied with the methods of loop quantum cosmology, via a so-called hybrid quantization, as an example of the quantization of vacuum cosmological spacetimes containing gravitational waves (Gowdy spacetimes). The analysis of this model with an infinite number of degrees of freedom, performed at the effective level, shows that (i) the initial Big Bang singularity is replaced (as in the case of homogeneous cosmological models) by a Big Bounce, joining deterministically two large universes, (ii) the universe size at the bounce is at least of the same order of magnitude as that of the background homogeneous universe and (iii) for each gravitational wave mode, the difference in amplitude at very early and very late times has a vanishing statistical average when the bounce dynamics is strongly dominated by the inhomogeneities, whereas this average is positive when the dynamics is in a near-vacuum regime, so that statistically the inhomogeneities are amplified. (fast track communication)

  2. Big Data and reality

    Directory of Open Access Journals (Sweden)

    Ryan Shaw

    2015-11-01

    Full Text Available DNA sequencers, Twitter, MRIs, Facebook, particle accelerators, Google Books, radio telescopes, Tumblr: what do these things have in common? According to the evangelists of “data science,” all of these are instruments for observing reality at unprecedentedly large scales and fine granularities. This perspective ignores the social reality of these very different technological systems, ignoring how they are made, how they work, and what they mean in favor of an exclusive focus on what they generate: Big Data. But no data, big or small, can be interpreted without an understanding of the process that generated them. Statistical data science is applicable to systems that have been designed as scientific instruments, but is likely to lead to confusion when applied to systems that have not. In those cases, a historical inquiry is preferable.

  3. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  4. Disruption of fibronectin matrix affects type IV collagen, fibrillin and laminin deposition into extracellular matrix of human trabecular meshwork (HTM) cells.

    Science.gov (United States)

    Filla, Mark S; Dimeo, Kaylee D; Tong, Tiegang; Peters, Donna M

    2017-12-01

    Fibronectin fibrils are a major component of the extracellular matrix (ECM) of the trabecular meshwork (TM). They are a key mediator of the formation of the ECM which controls aqueous humor outflow and contributes to the pathogenesis of glaucoma. The purpose of this work was to determine if a fibronectin-binding peptide called FUD, derived from the Streptococcus pyogenes Functional Upstream Domain of the F1 adhesin protein, could be used to control fibronectin fibrillogenesis and hence ECM formation under conditions where its expression was induced by treatment with the glucocorticoid dexamethasone. FUD was very effective at preventing fibronectin fibrillogenesis in the presence or absence of steroid treatment as well as the removal of existing fibronectin fibrils. Disruption of fibronectin fibrillogenesis by FUD also disrupted the incorporation of type IV collagen, laminin and fibrillin into the ECM. The effect of FUD on these other protein matrices, however, was found to be dependent upon the maturity of the ECM when FUD was added. FUD effectively disrupted the incorporation of these other proteins into matrices when added to newly confluent cells that were forming a nascent ECM. In contrast, FUD had no effect on these other protein matrices if the cell cultures already possessed a pre-formed, mature ECM. Our studies indicate that FUD can be used to control fibronectin fibrillogenesis and that these fibrils play a role in regulating the assembly of other ECM protein into matrices involving type IV collagen, laminin, and fibrillin within the TM. This suggests that under in vivo conditions, FUD would selectively disrupt fibronectin fibrils and de novo assembly of other proteins into the ECM. Finally, our studies suggest that targeting fibronectin fibril assembly may be a viable treatment for POAG as well as other glaucomas involving excessive or abnormal matrix deposition of the ECM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Big Bang Circus

    Science.gov (United States)

    Ambrosini, C.

    2011-06-01

    Big Bang Circus is an opera I composed in 2001 and which was premiered at the Venice Biennale Contemporary Music Festival in 2002. A chamber group, four singers and a ringmaster stage the story of the Universe confronting and interweaving two threads: how early man imagined it and how scientists described it. Surprisingly enough fancy, myths and scientific explanations often end up using the same images, metaphors and sometimes even words: a strong tension, a drumskin starting to vibrate, a shout…

  6. Big Bang 5

    CERN Document Server

    Apolin, Martin

    2007-01-01

Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then proceeds from the fundamentals to the applications, from the simple to the complex. Throughout, the language remains simple, everyday-oriented and narrative. Volume 5 RG covers the fundamentals (systems of units, orders of magnitude) and mechanics (translation, rotation, force, conservation laws).

  7. Big Bang 8

    CERN Document Server

    Apolin, Martin

    2008-01-01

Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then proceeds from the fundamentals to the applications, from the simple to the complex. Throughout, the language remains simple, everyday-oriented and narrative. Volume 8 conveys, in an accessible way, relativity, nuclear and particle physics (and their applications in cosmology and astrophysics), nanotechnology, and bionics.

  8. Big Bang 6

    CERN Document Server

    Apolin, Martin

    2008-01-01

Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then proceeds from the fundamentals to the applications, from the simple to the complex. Throughout, the language remains simple, everyday-oriented and narrative. Volume 6 RG covers gravitation, oscillations and waves, thermodynamics, and an introduction to electricity, using everyday examples and cross-links to other disciplines.

  9. Big Bang 7

    CERN Document Server

    Apolin, Martin

    2008-01-01

Physics should be understandable and fun! That is why every chapter in Big Bang begins with a motivating overview and guiding questions, and then proceeds from the fundamentals to the applications, from the simple to the complex. Throughout, the language remains simple, everyday-oriented and narrative. In Volume 7, besides an introduction, many current aspects of quantum mechanics (e.g. teleportation) and electrodynamics (e.g. electrosmog) are treated, as well as climate change and chaos theory.

  10. Big Bang Darkleosynthesis

    OpenAIRE

    Krnjaic, Gordan; Sigurdson, Kris

    2014-01-01

In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generica...

  11. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics, promising new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths and weaknesses as well as its risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  12. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. Two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 x 10^12 /cc. Several arc lines which connect both bright regions of different magnetic polarity were seen in the H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed into a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed into a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects the two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  13. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  14. IV&V Project Assessment Process Validation

    Science.gov (United States)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travelling beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR), IV&V findings for IV&V process validation correlation to/from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  15. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we show that the Statistical Drake Equation (namely the statistical extension of the classical Drake Equation typical of SETI) can be regarded as the "frozen in time" part of GBM. This makes SETI a subset of our Big History Theory based on GBMs: just as the GBM is the "movie" unfolding in time, so the Statistical Drake Equation is its "still picture", static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50 million living species or more exist, each
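A Geometric Brownian Motion N(t) satisfies dN = μN dt + σN dW and has the exact solution N(t) = N₀ exp((μ − σ²/2)t + σW(t)); a minimal simulation sketch of that solution (parameter names and values are ours, not the paper's):

```python
import math
import random

def gbm_path(n0, mu, sigma, t_max, steps, seed=0):
    """Simulate one Geometric Brownian Motion path via the exact solution
    N(t) = n0 * exp((mu - sigma**2 / 2) * t + sigma * W(t)),
    where W is a standard Wiener process."""
    rng = random.Random(seed)
    dt = t_max / steps
    w = 0.0                                  # running Wiener value W(t)
    path = [n0]
    for _ in range(steps):
        w += rng.gauss(0.0, math.sqrt(dt))   # independent Gaussian increment
        t = len(path) * dt
        path.append(n0 * math.exp((mu - 0.5 * sigma**2) * t + sigma * w))
    return path
```

With sigma = 0 the path collapses to deterministic exponential growth, the "no noise" limit; with sigma > 0, individual paths fluctuate around that trend while staying strictly positive, which is why a GBM can model a species count.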

  16. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  17. IV access in dental practice.

    LENUS (Irish Health Repository)

    Fitzpatrick, J J

    2009-04-01

    Intravenous (IV) access is a valuable skill for dental practitioners in emergency situations and in IV sedation. However, many people feel some apprehension about performing this procedure. This article explains the basic principles behind IV access, and the relevant anatomy and physiology, as well as giving a step-by-step guide to placing an IV cannula.

  18. Disaggregating asthma: Big investigation versus big data.

    Science.gov (United States)

    Belgrave, Danielle; Henderson, John; Simpson, Angela; Buchan, Iain; Bishop, Christopher; Custovic, Adnan

    2017-02-01

    We are facing a major challenge in bridging the gap between identifying subtypes of asthma to understand causal mechanisms and translating this knowledge into personalized prevention and management strategies. In recent years, "big data" has been sold as a panacea for generating hypotheses and driving new frontiers of health care; the idea that the data must and will speak for themselves is fast becoming a new dogma. One of the dangers of ready accessibility of health care data and computational tools for data analysis is that the process of data mining can become uncoupled from the scientific process of clinical interpretation, understanding the provenance of the data, and external validation. Although advances in computational methods can be valuable for using unexpected structure in data to generate hypotheses, there remains a need for testing hypotheses and interpreting results with scientific rigor. We argue for combining data- and hypothesis-driven methods in a careful synergy, and the importance of carefully characterized birth and patient cohorts with genetic, phenotypic, biological, and molecular data in this process cannot be overemphasized. The main challenge on the road ahead is to harness bigger health care data in ways that produce meaningful clinical interpretation and to translate this into better diagnoses and properly personalized prevention and treatment plans. There is a pressing need for cross-disciplinary research with an integrative approach to data science, whereby basic scientists, clinicians, data analysts, and epidemiologists work together to understand the heterogeneity of asthma. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Man is not a big rat: concerns with traditional human risk assessment of phthalates based on their anti-androgenic effects observed in the rat foetus.

    Science.gov (United States)

    Habert, René; Livera, Gabriel; Rouiller-Fabre, Virginie

    2014-01-01

Phthalates provide one of the best documented examples of how cautious we must be when using the traditional paradigm, based on extrapolation of experimental data from rodent studies, for human health risk assessment of endocrine disruptors (EDs). Since the foetal testis is known as one of the most sensitive targets of EDs, phthalate risk assessment is routinely based on the capacity of these compounds to decrease testosterone production by the testis or to impair masculinization in the rat during foetal life. In this paper, the well-established inhibiting effects of phthalates on foetal Leydig cell function in the rat are briefly reviewed. Then, data obtained in humans and other species are carefully analysed. As early as January 2009, using the organotypic culture system named the Fetal Testis Assay (FeTA) that we developed, we reported that phthalates might not affect testosterone production in human foetal testes. Several recent experimental studies using xenografts confirm the absence of a detectable anti-androgenic effect of phthalates in human foetal testes. Epidemiological studies have led to contradictory results. Altogether, these findings suggest that phthalate effects on foetal Leydig cells are largely species-specific. Consequently, the phthalate threshold doses that disturb foetal steroidogenesis in rat testes and that are presently used to define acceptable daily intake levels for human health protection must be questioned. This does not mean that phthalates are safe, because these compounds have many deleterious effects upon germ cell development that may be common to the different studied species, including humans. More generally, the identification of common molecular, cellular or/and phenotypic targets in rat and human testes should precede the choice of the toxicological endpoint in the rat to accurately assess the safety threshold of any ED in humans.

  20. Internet Economics IV

    Science.gov (United States)

    2004-08-01

(eds.): Internet Economics IV, Technical Report No. 2004-04, August 2004, Information Systems Laboratory IIS, Department of Computer Science, University of... Keywords: service level agreements (SLA), information technology (IT), Internet address, Internet service provider. ...technology and its economic impacts in the Internet world today. The second talk addresses the area of the AAA protocol, summarizing authentication

  1. Uranium (IV) carboxylates - I

    Energy Technology Data Exchange (ETDEWEB)

    Satpathy, K C; Patnaik, A K [Sambalpur Univ. (India). Dept. of Chemistry

    1975-11-01

A few uranium(IV) carboxylates with monochloro- and trichloroacetic acid, glycine, malic, citric, adipic, o-toluic, anthranilic and salicylic acids have been prepared by photolytic methods. The IR spectra of these compounds were recorded and, based on the spectral data, structures for the compounds have been suggested.

  2. PLATO IV Accountancy Index.

    Science.gov (United States)

    Pondy, Dorothy, Comp.

    The catalog was compiled to assist instructors in planning community college and university curricula using the 48 computer-assisted accountancy lessons available on PLATO IV (Programmed Logic for Automatic Teaching Operation) for first semester accounting courses. It contains information on lesson access, lists of acceptable abbreviations for…

  3. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and progress education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education as well as critically explore the perils of applying a data driven approach to education. Despite the claimed value of the...

  4. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  5. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting, analysing consumer behaviour, preference and emotions. The purpose of this paper is to introduce the term fashion data and why it can be considered as big data. It also gives a broad classification of the types of fashion data and briefly defines them. Also, the methodology and working of a system that will use this data is briefly described.

  6. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

Full Text Available Big data is data beyond the storage capacity and processing power of conventional systems. The term is used for data sets so large or complex that traditional data-processing tools cannot handle them. Big data size is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; on social networking sites, for example, the amount of data produced by people grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It encompasses the rapid growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex and of massive scale. Such data are difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to acquire, organize and analyse these various types of data. In this paper we describe applications, problems and tools of big data and give an overview of the field.

  7. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general-relativistic aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is argued that the big-bang picture is not as soundly established, either theoretically or observationally, as is usually claimed, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  8. How Big is Earth?

    Science.gov (United States)

    Thurber, Bonnie B.

    2015-08-01

How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big Is Earth? expands on the Eratosthenes Project by providing an online learning environment, hosted by the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss their ideas, brainstorming solutions in a discussion forum. There is an ongoing database of student measurements and another database to collect data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
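The shadow-comparison method described in this record reduces to a simple proportion: the difference between the noon shadow angles at two sites on the same meridian equals the central angle of arc between them, so the full circumference is the site separation scaled by 360° over that angle. A minimal sketch (the function name is illustrative; the check uses Eratosthenes' classic round figures of about 7.2° at Alexandria, 0° at Syene, and roughly 800 km between them):

```python
def circumference_from_shadows(angle_a_deg, angle_b_deg, separation_km):
    """Estimate Earth's circumference from the local-noon shadow angles
    measured at two sites lying (roughly) on the same meridian."""
    central_angle = abs(angle_a_deg - angle_b_deg)  # degrees of arc between the sites
    if central_angle == 0:
        raise ValueError("sites must show different shadow angles")
    # The separation spans central_angle degrees of the full 360-degree circle.
    return 360.0 / central_angle * separation_km

# Eratosthenes' classic figures give roughly 40,000 km.
print(round(circumference_from_shadows(7.2, 0.0, 800.0)))
```

Students comparing their own dowel measurements across the participating countries would substitute their measured angles and the north-south distance between their schools.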

  9. Privacy and Big Data

    CERN Document Server

    Craig, Terence

    2011-01-01

    Much of what constitutes Big Data is information about us. Through our online activities, we leave an easy-to-follow trail of digital footprints that reveal who we are, what we buy, where we go, and much more. This eye-opening book explores the raging privacy debate over the use of personal data, with one undeniable conclusion: once data's been collected, we have absolutely no control over who uses it or how it is used. Personal data is the hottest commodity on the market today-truly more valuable than gold. We are the asset that every company, industry, non-profit, and government wants. Pri

  10. Visualizing big energy data

    DEFF Research Database (Denmark)

    Hyndman, Rob J.; Liu, Xueqin Amy; Pinson, Pierre

    2018-01-01

Visualization is a crucial component of data analysis. It is always a good idea to plot the data before fitting models, making predictions, or drawing conclusions. As sensors of the electric grid are collecting large volumes of data from various sources, power industry professionals are facing the challenge of visualizing such data in a timely fashion. In this article, we demonstrate several data-visualization solutions for big energy data through three case studies involving smart-meter data, phasor measurement unit (PMU) data, and probabilistic forecasts, respectively.

  11. Big Data Challenges

    Directory of Open Access Journals (Sweden)

    Alexandru Adrian TOLE

    2013-10-01

Full Text Available The amount of data traveling across the internet today is not only large but complex as well. Companies, institutions, the healthcare system, and others all use piles of data, which are further used to create reports in order to ensure continuity of the services they offer. The processing behind the results that these entities request represents a challenge for software developers and companies that provide IT infrastructure. The challenge is how to manipulate an impressive volume of data that has to be securely delivered through the internet and reach its destination intact. This paper treats the challenges that Big Data creates.

  12. A Matrix Big Bang

    OpenAIRE

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matr...

  13. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  14. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  15. Tiny tweaks, big changes: An alternative strategy to empower ethical culture of human research in anesthesia (A Taiwan Acta Anesthesiologica Taiwanica-Ethics Review Task Force Report).

    Science.gov (United States)

    Luk, Hsiang-Ning; Ennever, John F; Day, Yuan-Ji; Wong, Chih-Shung; Sun, Wei-Zen

    2015-03-01

    For this guidance article, the Ethics Review Task Force (ERTF) of the Journal reviewed and discussed the ethics issues related to publication of human research in the field of anesthesia. ERTF first introduced international ethics principles and minimal requirements of reporting of ethics practices, followed by discussing the universal problems of publication ethics. ERTF then compared the accountability and methodology of several medical journals in assuring authors' ethics compliance. Using the Taiwan Institutional Review Board system as an example, ERTF expressed the importance of institutional review board registration and accreditation to assure human participant protection. ERTF presented four major human research misconducts in the field of anesthesia in recent years. ERTF finally proposed a flow-chart to guide journal peer reviewers and editors in ethics review during the editorial process in publishing. Examples of template languages applied in the Ethics statement section in the manuscript are expected to strengthen the ethics compliance of the authors and to set an ethical culture for all the stakeholders involved in human research. Copyright © 2015. Published by Elsevier B.V.

  16. [Big data in imaging].

    Science.gov (United States)

    Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd

    2018-04-01

    Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.

  17. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Fields, Brian D.; Olive, Keith A.

    2006-01-01

We present an overview of the standard model of big bang nucleosynthesis (BBN), which describes the production of the light elements in the early universe. The theoretical prediction for the abundances of D, ³He, ⁴He, and ⁷Li is discussed. We emphasize the role of key nuclear reactions and the methods by which experimental cross section uncertainties are propagated into uncertainties in the predicted abundances. The observational determination of the light nuclides is also discussed. Particular attention is given to the comparison between the predicted and observed abundances, which yields a measurement of the cosmic baryon content. The spectrum of anisotropies in the cosmic microwave background (CMB) now independently measures the baryon density to high precision; we show how the CMB data test BBN, and find that the CMB and the D and ⁴He observations paint a consistent picture. This concordance stands as a major success of the hot big bang. On the other hand, ⁷Li remains discrepant with the CMB-preferred baryon density; possible explanations are reviewed. Finally, moving beyond the standard model, primordial nucleosynthesis constraints on early universe and particle physics are also briefly discussed.
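The propagation of experimental cross-section uncertainties into abundance uncertainties mentioned in this abstract is commonly done by Monte Carlo: sample each measured reaction rate within its quoted error and read off the resulting spread in the predicted abundance. A hedged toy sketch of that idea (the linear `predicted_abundance` model and every number in it are stand-ins, not the real BBN reaction network):

```python
import random

def predicted_abundance(rate):
    # Toy stand-in for a BBN network: the predicted abundance is taken
    # to scale linearly with one key (normalized) reaction rate.
    return 2.5e-5 * rate

def propagate_uncertainty(rate_mean, rate_sigma, n=50_000, seed=1):
    """Monte Carlo propagation: draw the rate from a Gaussian centred on its
    measured value and return the mean and spread of the prediction."""
    rng = random.Random(seed)
    samples = [predicted_abundance(rng.gauss(rate_mean, rate_sigma))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean, sigma = propagate_uncertainty(rate_mean=1.0, rate_sigma=0.1)
print(f"abundance = {mean:.3e} +/- {sigma:.3e}")
```

In a real BBN code the sampled rates feed a stiff reaction-network integration rather than a one-line formula, but the sampling-and-spread logic is the same.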

  18. Heterogeneity of pituitary and plasma prolactin in man: decreased affinity of big prolactin in a radioreceptor assay and evidence for its secretion

    International Nuclear Information System (INIS)

    Garnier, P.E.; Aubert, M.L.; Kaplan, S.L.; Grumbach, M.M.

    1978-01-01

    Molecular heterogeneity of immunoreactive human PRL (IR-hPRL) plasma was assessed by exclusion chromatography in blood from 4 normal adults, 3 newborn infants, 2 late gestational women, 3 patients with primary hypothyroidism and high PRL levels, 2 with functional hyperprolactinemia, 3 with acromegaly, and 10 with PRL-secreting tumors. Three forms of PRL were detected: big-big hPRL, big hPRL, and little hPRL. In normal subjects, the proportion of big-big, big, and little hPRL components was 5.1%, 9.1%, and 85.8%, respectively, without change in the distribution after TRF stimulation. In 8 of 10 patients with PRL-secreting tumors, we detected a significantly higher proportion of big PRL. In 2 additional patients with prolactinomas, the proportion of big PRL was much higher. In 3 of 10 patients, the molecular heterogeneity of the tumor PRL was similar to that in plasma. In 1 acromegalic, there was a very high proportion of big-big hPRL. The PRL fractions were tested in a radioreceptor assay (RRA) using membranes from rabbit mammary gland. Big PRL was much less active than little PRL in the RRA. The fractions were rechromatographed after storage. Big PRL partially distributed as little or big-big PRL, while little PRL remained unchanged. Big-big PRL from tumor extract partially converted into big and little PRL. The big PRL obtained by rechromatography had low activity in the RRA. These observations suggest at least part of the receptor activity of big PRL may arise from generation of or contamination by little PRL. The decreased binding affinity of big PRL in the RRA also indicates that big PRL has little, if any, biological activity. The evidence suggests big PRL is a native PRL dimer linked by intermolecular disulfide bonds which arises in the lactotrope as a postsynthetic product or derivative and is not a true precursor prohormone

  19. Big Data and Nursing: Implications for the Future.

    Science.gov (United States)

    Topaz, Maxim; Pruinelli, Lisiane

    2017-01-01

Big data is becoming increasingly prevalent and it affects the way nurses learn, practice, conduct research and develop policy. The discipline of nursing needs to maximize the benefits of big data to advance the vision of promoting human health and wellbeing. However, current practicing nurses, educators and nurse scientists often lack the skills and competencies necessary for meaningful use of big data. Some of the key skills for further development include the ability to mine narrative and structured data for new care or outcome patterns, effective data visualization techniques, and further integration of nursing-sensitive data into artificial intelligence systems for better clinical decision support. We provide growth-path vision recommendations for big data competencies for practicing nurses, nurse educators, researchers, and policy makers to help prepare the next generation of nurses and improve patient outcomes through better-quality connected health.

  20. Was the big bang hot

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  1. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

Big data has not only led to challenging technical questions; it also brings with it all kinds of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for

  2. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

Agriculture's external conditions and competitive environment are changing, and this will necessitate a development towards "big business", in which farms become even larger, more industrialized and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  3. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

On 2 June 2013 CERN inaugurated the Passport to the Big Bang at a major public event: a scientific tourist trail through the Pays de Gex and the Canton of Geneva. Poster and programme.

  4. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  5. A sputnik IV saga

    Science.gov (United States)

    Lundquist, Charles A.

    2009-12-01

The Sputnik IV launch occurred on May 15, 1960. On May 19, an attempt to deorbit a 'space cabin' failed and the cabin went into a higher orbit. The orbit of the cabin was monitored, and Moonwatch volunteer satellite tracking teams were alerted to watch for the vehicle's demise. On September 5, 1962, several team members from Milwaukee, Wisconsin made observations starting at 4:49 a.m. of a fireball following the predicted orbit of Sputnik IV. Requests went out to report any objects found under the fireball path. An early morning police patrol in Manitowoc had noticed a metal object on a street and had moved it to the curb. Later the officers recovered the object and had it dropped off at the Milwaukee Journal. The Moonwatch team got the object and reported the situation to Moonwatch Headquarters at the Smithsonian Astrophysical Observatory. A team member flew to Cambridge with the object. It was a solid, 9.49 kg piece of steel with a slag-like layer attached to it. Subsequent analyses showed that it contained radioactive nuclei produced by cosmic ray exposure in space. The scientists at the Observatory quickly recognized that measurements of its induced radioactivity could serve as a calibration for similar measurements of recently fallen nickel-iron meteorites. Concurrently, the Observatory directorate informed government agencies that a fragment from Sputnik IV had been recovered. Coincidentally, a debate in the UN Committee on Peaceful Uses of Outer Space involved the issue of liability for damage caused by falling satellite fragments. On September 12, the Observatory delivered the bulk of the fragment to the US Delegation to the UN. Two days later, the fragment was used by US Ambassador Francis Plimpton as an exhibit that the time had come to agree on liability for damage from satellite debris. He offered the Sputnik IV fragment to USSR Ambassador P.D. Morozov, who refused the offer. On October 23, Drs. Alla Massevitch and E.K. Federov of the USSR visited the

  6. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start by defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data also imply specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  7. Mapping Cortical Laminar Structure in the 3D BigBrain.

    Science.gov (United States)

    Wagstyl, Konrad; Lepage, Claude; Bludau, Sebastian; Zilles, Karl; Fletcher, Paul C; Amunts, Katrin; Evans, Alan C

    2018-07-01

    Histological sections offer high spatial resolution to examine laminar architecture of the human cerebral cortex; however, they are restricted by being 2D, hence only regions with sufficiently optimal cutting planes can be analyzed. Conversely, noninvasive neuroimaging approaches are whole brain but have relatively low resolution. Consequently, correct 3D cross-cortical patterns of laminar architecture have never been mapped in histological sections. We developed an automated technique to identify and analyze laminar structure within the high-resolution 3D histological BigBrain. We extracted white matter and pial surfaces, from which we derived histologically verified surfaces at the layer I/II boundary and within layer IV. Layer IV depth was strongly predicted by cortical curvature but varied between areas. This fully automated 3D laminar analysis is an important requirement for bridging high-resolution 2D cytoarchitecture and in vivo 3D neuroimaging. It lays the foundation for in-depth, whole-brain analyses of cortical layering.

  8. Medios ciudadanos y big data: La emergencia del activismo de datos

    OpenAIRE

    Milan, S.; Gutiérrez, M.

    2015-01-01

Big data presents citizens with new challenges and opportunities. 'Data activism' practices emerge at the intersection of the social and technological dimensions of human action, whereby citizens take a critical approach to big data, and appropriate and manipulate data for advocacy and social change. This theoretical article explores the emergence of data activism as an empirical reality and a heuristic tool to study how people engage politically with big data. We ground the concept on a mult...

  9. [Three applications and the challenge of the big data in otology].

    Science.gov (United States)

    Lei, Guanxiong; Li, Jianan; Shen, Weidong; Yang, Shiming

    2016-03-01

    With the expansion of human practical activities, more and more areas have suffered from big data problems. The emergence of big data requires people to update the research paradigm and develop new technical methods. This review discussed that big data might bring opportunities and challenges in the area of auditory implantation, the deafness genome, and auditory pathophysiology, and pointed out that we needed to find appropriate theories and methods to make this kind of expectation into reality.

  10. The big data telescope

    International Nuclear Information System (INIS)

    Finkel, Elizabeth

    2017-01-01

    On a flat, red mulga plain in the outback of Western Australia, preparations are under way to build the most audacious telescope astronomers have ever dreamed of - the Square Kilometre Array (SKA). Next-generation telescopes usually aim to double the performance of their predecessors. The Australian arm of SKA will deliver a 168-fold leap on the best technology available today, to show us the universe as never before. It will tune into signals emitted just a million years after the Big Bang, when the universe was a sea of hydrogen gas, slowly percolating with the first galaxies. Their starlight illuminated the fledgling universe in what is referred to as the “cosmic dawn”.

  11. The Big Optical Array

    International Nuclear Information System (INIS)

    Mozurkewich, D.; Johnston, K.J.; Simon, R.S.

    1990-01-01

    This paper describes the design and the capabilities of the Naval Research Laboratory Big Optical Array (BOA), an interferometric optical array for high-resolution imaging of stars, stellar systems, and other celestial objects. There are four important differences between the BOA design and the design of Mark III Optical Interferometer on Mount Wilson (California). These include a long passive delay line which will be used in BOA to do most of the delay compensation, so that the fast delay line will have a very short travel; the beam combination in BOA will be done in triplets, to allow measurement of closure phase; the same light will be used for both star and fringe tracking; and the fringe tracker will use several wavelength channels

  12. Big nuclear accidents

    International Nuclear Information System (INIS)

    Marshall, W.

    1983-01-01

    Much of the debate on the safety of nuclear power focuses on the large number of fatalities that could, in theory, be caused by extremely unlikely but imaginable reactor accidents. This, along with the nuclear industry's inappropriate use of vocabulary during public debate, has given the general public a distorted impression of the safety of nuclear power. The way in which the probability and consequences of big nuclear accidents have been presented in the past is reviewed and recommendations for the future are made including the presentation of the long-term consequences of such accidents in terms of 'reduction in life expectancy', 'increased chance of fatal cancer' and the equivalent pattern of compulsory cigarette smoking. (author)

  13. Nonstandard big bang models

    International Nuclear Information System (INIS)

    Calvao, M.O.; Lima, J.A.S.

    1989-01-01

    The usual FRW hot big-bang cosmologies have been generalized by considering the equation of state ρ = Anm + (γ-1)⁻¹p, where m is the rest mass of the fluid particles and A is a dimensionless constant. Explicit analytic solutions are given for the flat case (ε=0). For large cosmological times these extended models behave as the standard Einstein-de Sitter universes regardless of the values of A and γ. Unlike the usual FRW flat case, the deceleration parameter q is a time-dependent function and its present value, q≅1, obtained from the luminosity distance versus redshift relation, may be fitted by taking, for instance, A=1 and γ=5/3 (a monatomic nonrelativistic gas with mc² >> k_BT). In all cases the universe cools obeying the same temperature law as the FRW models and it is shown that the age of the universe is only slightly modified. (author) [pt
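    Rendered in standard notation, the equation of state quoted in this record, and the textbook definition of the deceleration parameter it refers to (the definition itself is standard background, not given in the abstract), read:

    ```latex
    % Generalized equation of state from the abstract:
    \rho = A n m + \frac{p}{\gamma - 1},
    \qquad\text{equivalently}\qquad
    p = (\gamma - 1)\,(\rho - A n m).
    % Deceleration parameter, time dependent in these models:
    q = -\,\frac{\ddot{a}\, a}{\dot{a}^{2}}
    ```

    Setting A = 0 recovers the usual one-parameter barotropic fluid p = (γ-1)ρ of the standard FRW models.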

  14. The Last Big Bang

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, Austin D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meade, Roger Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-13

    As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.

  15. A matrix big bang

    International Nuclear Information System (INIS)

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control

  16. A matrix big bang

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands); Sethi, Savdeep [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Verlinde, Erik [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)

    2005-10-15

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control.

  17. DPF Big One

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    At its latest venue at Fermilab from 10-14 November, the American Physical Society's Division of Particles and Fields meeting entered a new dimension. These regular meetings, which allow younger researchers to communicate with their peers, have been gaining popularity over the years (this was the seventh in the series), but nobody had expected almost a thousand participants and nearly 500 requests to give talks. Thus Fermilab's 800-seat auditorium had to be supplemented with another room with a video hookup, while the parallel sessions were organized into nine bewildering streams covering fourteen major physics topics. With the conventionality of the Standard Model virtually unchallenged, physics does not move fast these days. While most of the physics results had already been covered in principle at the International Conference on High Energy Physics held in Dallas in August (October, page 1), the Fermilab DPF meeting had a very different atmosphere. Major international meetings like Dallas attract big names from far and wide, and it is difficult in such an august atmosphere for young researchers to find a receptive audience. This was not the case at the DPF parallel sessions. The meeting also adopted a novel approach, with the parallels sandwiched between an initial day of plenaries to set the scene, and a final day of summaries. With the whole world waiting for the sixth ('top') quark to be discovered at Fermilab's Tevatron proton-antiproton collider, the meeting began with updates from Avi Yagil and Ronald Madaras from the big detectors, CDF and D0 respectively. Although rumours flew thick and fast, the Tevatron has not yet reached the top, although Yagil could show one intriguing event of a type expected from the heaviest quark

  18. DPF Big One

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1993-01-15

    At its latest venue at Fermilab from 10-14 November, the American Physical Society's Division of Particles and Fields meeting entered a new dimension. These regular meetings, which allow younger researchers to communicate with their peers, have been gaining popularity over the years (this was the seventh in the series), but nobody had expected almost a thousand participants and nearly 500 requests to give talks. Thus Fermilab's 800-seat auditorium had to be supplemented with another room with a video hookup, while the parallel sessions were organized into nine bewildering streams covering fourteen major physics topics. With the conventionality of the Standard Model virtually unchallenged, physics does not move fast these days. While most of the physics results had already been covered in principle at the International Conference on High Energy Physics held in Dallas in August (October, page 1), the Fermilab DPF meeting had a very different atmosphere. Major international meetings like Dallas attract big names from far and wide, and it is difficult in such an august atmosphere for young researchers to find a receptive audience. This was not the case at the DPF parallel sessions. The meeting also adopted a novel approach, with the parallels sandwiched between an initial day of plenaries to set the scene, and a final day of summaries. With the whole world waiting for the sixth ('top') quark to be discovered at Fermilab's Tevatron proton-antiproton collider, the meeting began with updates from Avi Yagil and Ronald Madaras from the big detectors, CDF and D0 respectively. Although rumours flew thick and fast, the Tevatron has not yet reached the top, although Yagil could show one intriguing event of a type expected from the heaviest quark.

  19. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  20. Development of a middle cerebral artery occlusion model in the nonhuman primate and a safety study of i.v. infusion of human mesenchymal stem cells.

    Directory of Open Access Journals (Sweden)

    Masanori Sasaki

    Full Text Available Most experimental stroke research is carried out in rodents, but given differences between rodents and humans, nonhuman primate (NHP) models may provide a valuable tool to study therapeutic interventions. The authors developed a surgical method for transient occlusion of the M1 branch of the middle cerebral artery (MCA) in the African green monkey to evaluate safety aspects of intravenous infusion of mesenchymal stem cells (hMSCs) derived from human bone marrow. The left Sylvian fissure was exposed by a small fronto-temporal craniotomy. The M1 branch of the MCA was exposed by microsurgical dissection and clipped for 2 to 4 hours. Neurological examinations and magnetic resonance imaging (MRI) were carried out at regular intervals during the post-operative course. hMSCs were infused 1 hour after reperfusion (clip release) in the 3-hour occlusion model. During M1 occlusion, two patterns of changes were observed on the lateral hemisphere surface. One pattern (Pattern 1) was darkening of venous blood, small vessel collapse, and blood pooling with no venous return in cortical veins. Animals with these three features had severe and lasting hemiplegia, and MRI demonstrated extensive MCA territory infarction. Animals with the second pattern (Pattern 2) displayed darkening of venous blood, small vessel collapse, and reduced but incompletely occluded venous flow; the functional deficit was much less severe and MRI indicated smaller infarction areas in the brain. The severe group (Pattern 1) likely had less extensive collateral circulation than the less severe group (Pattern 2), where venous pooling of blood was not observed. The hMSC-infused animals showed a trend for greater functional improvement that was not statistically significant in the acute phase, and no additive negative effects. These results indicate inter-animal variability of collateral circulation after complete M1 occlusion and that hMSC infusion is safe in the developed NHP stroke model.

  1. Some big ideas for some big problems.

    Science.gov (United States)

    Winter, D D

    2000-05-01

    Although most psychologists do not see sustainability as a psychological problem, our environmental predicament is caused largely by human behaviors, accompanied by relevant thoughts, feelings, attitudes, and values. The huge task of building sustainable cultures will require a great many psychologists from a variety of backgrounds. In an effort to stimulate the imaginations of a wide spectrum of psychologists to take on the crucial problem of sustainability, this article discusses 4 psychological approaches (neo-analytic, behavioral, social, and cognitive) and outlines some of their insights into environmentally relevant behavior. These models are useful for illuminating ways to increase environmentally responsible behaviors of clients, communities, and professional associations.

  2. Dimensions of normal and abnormal personality: Elucidating DSM-IV personality disorder symptoms in adolescents

    NARCIS (Netherlands)

    Tromp, N.B.; Koot, H.M.

    2010-01-01

    The present study aimed to elucidate dimensions of normal and abnormal personality underlying DSM-IV personality disorder (PD) symptoms in 168 adolescents referred to mental health services. Dimensions derived from the Big Five of normal personality and from Livesley's (2006) conceptualization of

  3. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  4. Reliability of the ICRP's dose coefficients for members of the public: IV. Basis of the human alimentary tract model and uncertainties in model predictions

    International Nuclear Information System (INIS)

    Leggett, R.; Harrison, J.; Phipps, A.

    2007-01-01

    The biokinetic and dosimetric model of the gastrointestinal (GI) tract applied in current documents of the International Commission on Radiological Protection (ICRP) was developed in the mid-1960's. The model was based on features of a reference adult male and was first used by the ICRP in Publication 30, Limits for Intakes of Radionuclides by Workers (Part 1, 1979). In the late 1990's an ICRP task group was appointed to develop a biokinetic and dosimetric model of the alimentary tract that reflects updated information and addresses current needs in radiation protection. The new age-specific and gender-specific model, called the Human Alimentary Tract Model (HATM), has been completed and will replace the GI model of Publication 30 in upcoming ICRP documents. This paper discusses the basis for the structure and parameter values of the HATM, summarises the uncertainties associated with selected features and types of predictions of the HATM and examines the sensitivity of dose estimates to these uncertainties for selected radionuclides. Emphasis is on generic biokinetic features of the HATM, particularly transit times through the lumen of the alimentary tract, but key dosimetric features of the model are outlined, and the sensitivity of tissue dose estimates to uncertainties in dosimetric as well as biokinetic features of the HATM are examined for selected radionuclides. (authors)
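    The transit-time bookkeeping that such biokinetic models rest on can be sketched as a first-order catenary compartment chain. The sketch below is generic: the three compartments and transfer coefficients are illustrative assumptions for this listing, not ICRP HATM parameter values.

    ```python
    # A minimal first-order catenary compartment chain, the generic structure
    # underlying alimentary-tract biokinetic models. Compartment names and
    # rate values are illustrative assumptions, NOT ICRP HATM parameters.

    def simulate_chain(rates, a0=1.0, dt=0.01, t_end=72.0):
        """Euler-integrate dA_i/dt = k_(i-1) A_(i-1) - k_i A_i.

        rates -- outflow coefficient (per hour) of each compartment;
                 material leaving the last compartment counts as excreted.
        Returns (compartment contents, cumulative excretion) at t_end.
        """
        n = len(rates)
        a = [a0] + [0.0] * (n - 1)      # all activity starts in compartment 0
        excreted = 0.0
        for _ in range(int(t_end / dt)):
            outflow = [rates[i] * a[i] * dt for i in range(n)]
            for i in range(n):
                a[i] -= outflow[i]
                if i + 1 < n:
                    a[i + 1] += outflow[i]   # passes to the next segment
                else:
                    excreted += outflow[i]   # leaves the tract
        return a, excreted

    # Illustrative transit chain: stomach -> small intestine -> colon
    contents, excreted = simulate_chain([2.88, 0.347, 0.03])
    ```

    Each coefficient k_i is the reciprocal of the mean residence time of that segment, which is how lumen transit times enter dose calculations; sensitivity of dose to transit-time uncertainty can be probed by perturbing these rates.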

  5. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  6. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute Engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  7. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher values in empathy and on the Big Five factors, with the exception of the Neuroticism factor. They found associations with empathy in the domains of Openness, Agreeableness, Conscientiousness and Extraversion. In our data, women show significantly higher values both in empathy and on the Big Five ...

  8. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  9. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article covers the analysis of ‘Big Data’, which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the ‘Big Data’ issue have existed for quite a long time and from time to time have caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis classifies the ‘Big Data’ issue as a form of information barrier. The issue may be solved and encourages the development of scientific and computational methods.

  10. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.
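    For background, the standard flat-FRW result behind such big-rip behaviour (textbook material, not derived in this abstract): a barotropic fluid p = wρ with constant supernegative index w < -1 drives the scale factor to a finite-time singularity,

    ```latex
    % Flat FRW with p = w\rho, constant w < -1:
    a(t) \propto \left(t_{\mathrm{rip}} - t\right)^{\frac{2}{3(1+w)}},
    \qquad
    \rho \propto a^{-3(1+w)} .
    % For w < -1 the exponent 2/[3(1+w)] is negative, so the scale
    % factor, density and pressure all diverge at the finite time t_rip.
    ```

    The question addressed in the record is whether spatial gradients and anisotropies survive this approach to t_rip once homogeneity and isotropy are not assumed from the outset.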

  11. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  12. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  13. Urban Big Data and the Development of City Intelligence

    Directory of Open Access Journals (Sweden)

    Yunhe Pan

    2016-06-01

    Full Text Available This study provides a definition for urban big data while exploring its features and its applications in China's city intelligence. The differences between city intelligence in China and the “smart city” concept in other countries are compared to highlight and contrast the unique definition and model for China's city intelligence in this paper. Furthermore, this paper examines the role of urban big data in city intelligence by showing that it not only serves as the cornerstone of this trend but also plays a core role in the diffusion of city intelligence technology and serves as an inexhaustible resource for the sustained development of city intelligence. This study also points out the challenges of shaping and developing China's urban big data. Considering the supporting and core role that urban big data plays in city intelligence, the study then expounds on the key points of urban big data, including infrastructure support, urban governance, public services, and economic and industrial development. Finally, this study points out the utility of city intelligence as an ideal policy tool for advancing the goals of China's urban development. In conclusion, it is imperative that China make full use of its unique advantages, including the nation's current state of development and resources, geographical advantages, and good human relations, in subjective and objective conditions to promote the development of city intelligence through the proper application of urban big data.

  14. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  15. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  16. Big climate data analysis

    Science.gov (United States)

    Mudelsee, Manfred

    2015-04-01

    The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014, Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to
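    The confidence-interval construction referred to here can be illustrated by a minimal percentile-bootstrap sketch; the statistic, sample values and resample count below are arbitrary placeholders for this listing, not data from the abstract.

    ```python
    import random

    def bootstrap_ci(sample, stat, n_resamples=2000, alpha=0.05, seed=42):
        """Percentile-bootstrap confidence interval for an arbitrary statistic."""
        rng = random.Random(seed)
        n = len(sample)
        # Resample with replacement, evaluate the statistic on each replicate
        replicates = sorted(
            stat([sample[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_resamples)
        )
        lo_idx = int((alpha / 2) * n_resamples)
        hi_idx = int((1 - alpha / 2) * n_resamples) - 1
        return replicates[lo_idx], replicates[hi_idx]

    def mean(xs):
        return sum(xs) / len(xs)

    # Hypothetical proxy measurements (placeholder values)
    data = [11.2, 12.1, 9.8, 13.4, 10.6, 12.9, 11.7, 10.1, 12.3, 11.0]
    low, high = bootstrap_ci(data, mean)
    ```

    In the hyperspace terminology above, the resampling scheme is one point in the Monte Carlo design subspace and the percentile construction one point in the method subspace; block or wild bootstraps would be other points suited to autocorrelated climate series.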

  17. Hey, big spender

    Energy Technology Data Exchange (ETDEWEB)

    Cope, G.

    2000-04-01

    Business to business electronic commerce is looming large in the future of the oil industry. It is estimated that by adopting e-commerce the industry could achieve bottom line savings of between $1.8 and $3.4 billion a year on annual gross revenues in excess of $30 billion. At present there are several teething problems to overcome, such as inter-operability standards, which are at least two or three years away. Tying in electronically with specific suppliers is also an expensive proposition, although the big benefits are in fact in doing business with the same suppliers on a continuing basis. Despite these problems, 14 of the world's largest energy and petrochemical companies joined forces in mid-April to create a single Internet procurement marketplace for the industry's complex supply chain. The exchange was designed by B2B (business-to-business) software provider Commerce One Inc.; it will leverage the buying clout of these industry giants (BP Amoco, Royal Dutch Shell Group, Conoco, Occidental Petroleum, Phillips Petroleum, Unocal Corporation and Statoil among them), currently about $125 billion on procurement per year; they hope to save between 5 and 30 per cent depending on the product and the region involved. Other similar schemes, such as Chevron and partners' Petrocosm Marketplace, and Network Oil, a Houston-based Internet portal aimed at smaller petroleum companies, are also doing business in the $10 billion per annum range. e-Energy, a cooperative project between IBM Ericson and Telus Advertising, is another neutral, virtual marketplace targeted at the oil and gas sector. PetroTRAX, a Calgary-based website, plans to take online procurement and auction sales a big step forward by establishing a portal to handle any oil company's asset management needs. There are also a number of websites targeting specific needs: IndigoPool.com (acquisitions and divestitures) and WellBid.com (products related to upstream oil and gas operators) are just

  18. Hey, big spender

    International Nuclear Information System (INIS)

    Cope, G.

    2000-01-01

    Business to business electronic commerce is looming large in the future of the oil industry. It is estimated that by adopting e-commerce the industry could achieve bottom line savings of between $1.8 and $3.4 billion a year on annual gross revenues in excess of $30 billion. At present there are several teething problems to overcome, such as inter-operability standards, which are at least two or three years away. Tying in electronically with specific suppliers is also an expensive proposition, although the big benefits are in fact in doing business with the same suppliers on a continuing basis. Despite these problems, 14 of the world's largest energy and petrochemical companies joined forces in mid-April to create a single Internet procurement marketplace for the industry's complex supply chain. The exchange was designed by B2B (business-to-business) software provider Commerce One Inc.; it will leverage the buying clout of these industry giants (BP Amoco, Royal Dutch Shell Group, Conoco, Occidental Petroleum, Phillips Petroleum, Unocal Corporation and Statoil among them), currently about $125 billion on procurement per year; they hope to save between 5 and 30 per cent depending on the product and the region involved. Other similar schemes, such as Chevron and partners' Petrocosm Marketplace, and Network Oil, a Houston-based Internet portal aimed at smaller petroleum companies, are also doing business in the $10 billion per annum range. e-Energy, a cooperative project between IBM Ericson and Telus Advertising, is another neutral, virtual marketplace targeted at the oil and gas sector. PetroTRAX, a Calgary-based website, plans to take online procurement and auction sales a big step forward by establishing a portal to handle any oil company's asset management needs. There are also a number of websites targeting specific needs: IndigoPool.com (acquisitions and divestitures) and WellBid.com (products related to upstream oil and gas operators) are just two examples. All in

  19. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced in big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. The functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  20. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  1. Medios ciudadanos y big data: La emergencia del activismo de datos

    NARCIS (Netherlands)

    Milan, S.; Gutiérrez, M.

    2015-01-01

    Big data presents citizens with new challenges and opportunities. ‘Data activism’ practices emerge at the intersection of the social and technological dimensions of human action, whereby citizens take a critical approach to big data, and appropriate and manipulate data for advocacy and social

  2. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating; what is dark matter and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  3. Hepatic imaging in stage IV-S neuroblastoma

    International Nuclear Information System (INIS)

    Franken, E.A. Jr.; Smith, W.L.; Iowa Univ., Iowa City; Cohen, M.D.; Kisker, C.T.; Platz, C.E.

    1986-01-01

Stage IV-S neuroblastoma describes a group of infants with tumor spread limited to the liver, skin, or bone marrow. Such patients, who constitute about 25% of infants with neuroblastoma, may expect spontaneous tumor remission. We report 18 infants with Stage IV-S neuroblastoma, 83% of whom had liver involvement. Imaging investigations included Technetium-99m sulfur colloid scan, ultrasound, and CT. Two patterns of liver metastasis were noted: ill-defined nodules or diffuse tumor throughout the liver. Distinguishing normal from abnormal liver in the diffuse type of metastasis could be quite difficult, particularly on liver scans. We conclude that patients with Stage IV-S neuroblastoma should have an ultrasound or CT examination as the initial workup, with nuclear medicine scans reserved for follow-up studies. (orig.)

  4. Biophotonics: the big picture

    Science.gov (United States)

    Marcu, Laura; Boppart, Stephen A.; Hutchinson, Mark R.; Popp, Jürgen; Wilson, Brian C.

    2018-02-01

    The 5th International Conference on Biophotonics (ICOB) held April 30 to May 1, 2017, in Fremantle, Western Australia, brought together opinion leaders to discuss future directions for the field and opportunities to consider. The first session of the conference, "How to Set a Big Picture Biophotonics Agenda," was focused on setting the stage for developing a vision and strategies for translation and impact on society of biophotonic technologies. The invited speakers, panelists, and attendees engaged in discussions that focused on opportunities and promising applications for biophotonic techniques, challenges when working at the confluence of the physical and biological sciences, driving factors for advances of biophotonic technologies, and educational opportunities. We share a summary of the presentations and discussions. Three main themes from the conference are presented in this position paper that capture the current status, opportunities, challenges, and future directions of biophotonics research and key areas of applications: (1) biophotonics at the nano- to microscale level; (2) biophotonics at meso- to macroscale level; and (3) biophotonics and the clinical translation conundrum.

  5. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor contributing to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
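The gap between nominal and actual error rates that the abstract describes can be reproduced in a small simulation. This is a hedged illustration, not code from the paper: it assumes a right-skewed null distribution (shifted exponential) and a one-sample t-test, and the critical value 4.297 is the tabulated one-sided 0.999 Student-t quantile for 9 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_tests, alpha = 10, 200_000, 1e-3   # small samples, very many tests
t_crit = 4.297                          # one-sided 0.999 quantile of Student-t, df = 9

# Skewed null: exponential shifted to mean zero, so every null hypothesis is true
# and any rejection is a false positive.
x = rng.exponential(1.0, size=(n_tests, n)) - 1.0
t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

lower = (t < -t_crit).mean()   # actual rejection rate in the lower tail
upper = (t > t_crit).mean()    # actual rejection rate in the upper tail
print(f"nominal {alpha:.4f}  lower-tail {lower:.4f}  upper-tail {upper:.4f}")
```

For right-skewed data the t statistic is left-skewed, so the lower-tail rejection rate runs well above the nominal level (excess false positives) while the upper tail falls short, exactly the kind of departure that Edgeworth expansions quantify.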

  6. Big bang darkleosynthesis

    Directory of Open Access Journals (Sweden)

    Gordan Krnjaic

    2015-12-01

Full Text Available In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generically yields large (≫MeV/dark-nucleon) binding energies. These bounds further suggest the mediator is relatively weakly coupled, so repulsive forces between dark-sector nuclei are much weaker than Coulomb repulsion between standard-model nuclei, which results in an exponential barrier-tunneling enhancement over standard BBN. Thus, darklei are easier to make and harder to break than visible species with comparable mass numbers. This process can efficiently yield a dominant population of states with masses significantly greater than the confinement scale and, in contrast to dark matter that is a fundamental particle, may allow the dominant form of dark matter to have high spin (S≫3/2), whose discovery would be smoking gun evidence for dark nuclei.

  7. Predicting big bang deuterium

    Energy Technology Data Exchange (ETDEWEB)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P. [Department of Physics, Ohio State University, Columbus, Ohio 43210 (United States)

    1996-02-01

We present new upper and lower bounds to the primordial abundances of deuterium and ³He based on observational data from the solar system and the interstellar medium. Independent of any model for the primordial production of the elements we find (at the 95% C.L.): 1.5×10⁻⁵ ≤ (D/H)_P ≤ 10.0×10⁻⁵ and (³He/H)_P ≤ 2.6×10⁻⁵. When combined with the predictions of standard big bang nucleosynthesis, these constraints lead to a 95% C.L. bound on the primordial abundance of deuterium: (D/H)_best = (3.5 +2.7/−1.8)×10⁻⁵. Measurements of deuterium absorption in the spectra of high-redshift QSOs will directly test this prediction. The implications of this prediction for the primordial abundances of ⁴He and ⁷Li are discussed, as well as those for the universal density of baryons. © 1996 The American Astronomical Society.

  8. Big bang darkleosynthesis

    Science.gov (United States)

    Krnjaic, Gordan; Sigurdson, Kris

    2015-12-01

    In a popular class of models, dark matter comprises an asymmetric population of composite particles with short range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above ΛQCD, which generically yields large (≫MeV /dark-nucleon) binding energies. These bounds further suggest the mediator is relatively weakly coupled, so repulsive forces between dark-sector nuclei are much weaker than Coulomb repulsion between standard-model nuclei, which results in an exponential barrier-tunneling enhancement over standard BBN. Thus, darklei are easier to make and harder to break than visible species with comparable mass numbers. This process can efficiently yield a dominant population of states with masses significantly greater than the confinement scale and, in contrast to dark matter that is a fundamental particle, may allow the dominant form of dark matter to have high spin (S ≫ 3 / 2), whose discovery would be smoking gun evidence for dark nuclei.

  9. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  10. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  11. Diaquatetrabromidotin(IV) trihydrate

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2012-09-01

Full Text Available The title compound, [SnBr4(H2O)2]·3H2O, forms large colourless crystals in originally sealed samples of tin tetrabromide. It constitutes the first structurally characterized hydrate of SnBr4 and is isostructural with the corresponding hydrate of SnCl4. It is composed of SnIV atoms octahedrally coordinated by four Br atoms and two cis-related water molecules. The octahedra exhibit site symmetry 2. They are arranged into columns along [001] via medium–strong O—H...O hydrogen bonds involving the two lattice water molecules (one situated on a twofold rotation axis), while the chains are interconnected via longer O—H...Br hydrogen bonds, forming a three-dimensional network.

  12. Cyclopentadienyluranium(IV) acetylacetonates

    International Nuclear Information System (INIS)

    Bagnall, K.W.; Edwards, J.; Rickard, C.E.F.; Tempest, A.C.

    1979-01-01

Cyclopentadienyluranium(IV) acetylacetonate complexes, (η⁵-C₅H₅)UCl₃₋ₓ(acac)ₓ, where x = 1 or 2, and the corresponding bis(triphenylphosphine oxide) (tppo) complexes have been prepared. The bis(cyclopentadienyl) complexes (η⁵-C₅H₅)₂U(acac)₂ and (η⁵-C₅H₅)₂UCl(acac)(tppo)₂ have also been prepared and are stable with respect to disproportionation, whereas (η⁵-C₅H₅)₂UCl(acac) is not. The IR and UV/visible spectra of the complexes are reported, together with some additional information on the UCl₂(acac)₂–thf and –tppo systems. (author)

  13. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features affect paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
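The spurious correlation phenomenon the abstract names is easy to reproduce: when the number of features far exceeds the sample size, some feature will correlate strongly with an outcome that is, by construction, pure noise. A hedged sketch (not code from the article), with all sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 5000                       # far more features than samples

X = rng.standard_normal((n, p))       # candidate "predictors"
y = rng.standard_normal(n)            # outcome, independent of every column of X

# Sample correlation of each column of X with y.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

print(f"largest |correlation| among {p} null features: {np.abs(r).max():.2f}")
```

Even though no feature has any true relationship with y, the maximum absolute sample correlation is large (roughly of order sqrt(2·log(p)/n), here above 0.4), which is why naive feature screening on high-dimensional data produces false discoveries.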

  14. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

This article dwells on the basic characteristic features of Big Data technologies. The existing definitions of the term "big data" are analyzed. The article proposes and describes the elements of a generalized formal model of big data, and examines the peculiarities of applying the proposed model's components. The fundamental differences between Big Data technology and business analytics are described. Big Data is supported by the distributed file system Google File System ...

  15. Identification of novel dipeptidyl peptidase IV (DPP-IV) inhibitory peptides in camel milk protein hydrolysates.

    Science.gov (United States)

    Nongonierma, Alice B; Paolella, Sara; Mudgil, Priti; Maqsood, Sajid; FitzGerald, Richard J

    2018-04-01

Nine novel dipeptidyl peptidase IV (DPP-IV) inhibitory peptides (FLQY, FQLGASPY, ILDKEGIDY, ILELA, LLQLEAIR, LPVP, LQALHQGQIV, MPVQA and SPVVPF) were identified in camel milk proteins hydrolysed with trypsin. This was achieved using a sequential approach combining liquid chromatography tandem mass spectrometry (LC-MS/MS), qualitative/quantitative structure activity relationship (QSAR) and confirmatory studies with synthetic peptides. The most potent camel milk protein-derived DPP-IV inhibitory peptides, LPVP and MPVQA, had DPP-IV half maximal inhibitory concentrations (IC50) of 87.0 ± 3.2 and 93.3 ± 8.0 µM, respectively. DPP-IV inhibitory peptide sequences identified within camel and bovine milk protein hydrolysates generated under the same hydrolysis conditions differ. This was linked to differences in enzyme selectivity for peptide bond cleavage of camel and bovine milk proteins as well as dissimilarities in their amino acid sequences. Camel milk proteins contain novel DPP-IV inhibitory peptides which may play a role in the regulation of glycaemia in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Congenital bilateral neuroblastoma (stage IV-S): case report

    International Nuclear Information System (INIS)

    Lee, Jeong Hee; Lee, Hee Jung; Woo, Seong Ku; Lee, Sang Rak; Kim, Heung Sik

    2002-01-01

Congenital neonatal neuroblastoma is not uncommon, but bilateral adrenal neuroblastoma is rare, accounting for about ten percent of neuroblastomas in children. We report the US and MR findings of a stage IV-S congenital bilateral neuroblastoma occurring in a one-day-old neonate.

  17. Big Data Analytics

    Indian Academy of Sciences (India)

    IAS Admin

    2016-08-20

Aug 20, 2016 … massive computing resources. Data Science, of … Relational Data Base Management System (RDBMS). A query … Beating a human at GO was one of the … Job tracker splits a job submitted to the system into Map tasks.
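The snippet's reference to a job tracker splitting a job into Map tasks is the classic MapReduce pattern. A toy, single-process word-count sketch of the three phases (illustrative only: a real Hadoop job distributes these phases across many nodes, and the corpus here is invented):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the framework would.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: combine all values for each key into a final count.
    return {key: sum(values) for key, values in grouped.items()}

splits = ["big data needs big compute", "data beats compute"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, splits))))
print(counts)  # {'big': 2, 'data': 2, 'needs': 1, 'compute': 2, 'beats': 1}
```

The framework's job tracker (or, in modern Hadoop, the YARN resource manager) is what assigns one `map_phase` task per input split and schedules reducers once the shuffle completes.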

  18. Priming the Pump for Big Data at Sentara Healthcare.

    Science.gov (United States)

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office, and outside the electronic health record (EHR), has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan separate from but aligned with corporate system goals to ensure optimal investment and management of this essential asset.

  19. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  20. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  1. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  2. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  3. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  4. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  5. Glycogen Storage Disease Type IV

    DEFF Research Database (Denmark)

    Bendroth-Asmussen, Lisa; Aksglaede, Lise; Gernow, Anne B

    2016-01-01

    molecular genetic analyses confirmed glycogen storage disease Type IV with the finding of compound heterozygosity for 2 mutations (c.691+2T>C and c.1570C>T, p.R524X) in the GBE1 gene. We conclude that glycogen storage disease Type IV can cause early miscarriage and that diagnosis can initially be made...

  6. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.

  7. Modcomp MAX IV System Processors reference guide

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, J.

    1990-10-01

A user almost always faces a big problem when having to learn to use a new computer system. The information necessary to use the system is often scattered throughout many different manuals. The user also faces the problem of extracting the information really needed from each manual. Very few computer vendors supply a single Users Guide or even a manual to help the new user locate the necessary manuals. Modcomp is no exception to this; Modcomp MAX IV requires that the user be familiar with the system file usage, which adds to the problem. At General Atomics there is an ever-increasing need for new users to learn how to use the Modcomp computers. This paper was written to provide a condensed "Users Reference Guide" for Modcomp computer users. This manual should be of value not only to new users but to any users that are not Modcomp computer systems experts. This "Users Reference Guide" is intended to provide the basic information for the use of the various Modcomp System Processors necessary to create, compile, link-edit, and catalog a program. Only the information necessary to provide the user with a basic understanding of the System Processors is included. This document provides enough information for the majority of programmers to use the Modcomp computers without having to refer to any other manuals. A lot of emphasis has been placed on the file description and usage for each of the System Processors. This allows the user to understand how Modcomp MAX IV does things rather than just learning the system commands.

  8. Big Five personality group differences across academic majors

    DEFF Research Database (Denmark)

    Vedel, Anna

    2016-01-01

During the past decades, a number of studies have explored personality group differences in the Big Five personality traits among students in different academic majors. To date, though, this research has not been reviewed systematically. This was the aim of the present review. A systematic … literature search identified twelve eligible studies yielding an aggregated sample size of 13,389. Eleven studies reported significant group differences in one or multiple Big Five personality traits. Consistent findings across studies were that students of arts/humanities and psychology scored high … on Conscientiousness. Effect sizes were calculated to estimate the magnitude of the personality group differences. These effect sizes were consistent across studies comparing similar pairs of academic majors. For all Big Five personality traits medium effect sizes were found frequently, and for Openness even large …

  9. The Shadow of Big Data: Data-Citizenship and Exclusion

    DEFF Research Database (Denmark)

    Rossi, Luca; Hjelholt, Morten; Neumayer, Christina

    2016-01-01

The shadow of Big Data: data-citizenship and exclusion Big data are understood as being able to provide insights on human behaviour at an individual as well as at an aggregated societal level (Manyika et al. 2011). These insights are expected to be more detailed and precise than anything before … thanks to the large volume of digital data and to the unobtrusive nature of the data collection (Fishleigh 2014). Within this perspective, these two dimensions (volume and unobtrusiveness) define contemporary big data techniques as a socio-technical offering to society, a live representation of itself … this process "data-citizenship" emerges. Data-citizenship assumes that citizens will be visible to the state through the data they produce. On a general level data-citizenship shifts citizenship from an intrinsic status of a group of people to a status achieved through action. This approach assumes equal

  10. About the structure and stability of complex carbonates of thorium (IV), cerium (IV), zirconium (IV), hafnium (IV)

    International Nuclear Information System (INIS)

    Dervin, Jacqueline

    1972-01-01

This research thesis addressed the study of complex carbonates of cations of metals belonging to the IV A column, i.e. thorium (IV), zirconium (IV), hafnium (IV), and also cerium (IV) and uranium (VI). It focused more particularly on the ionic compounds formed in solution, and on the influence of the concentration and nature of the cations on the stability and nature of the solids formed. The author first presents the methods used in this study and discusses their precision and domains of validity. She reports the study of the formation of the different complex ions identified in solution, and the determination of their formation constants. She then reports the preparation of the solid complexes and the study of their stability domains. The final part reports the use of thermogravimetric analysis, IR spectrometry, and crystallography for the structural study of these compounds.

  11. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  12. From big data to deep insight in developmental science.

    Science.gov (United States)

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  13. The use of big data in transfusion medicine.

    Science.gov (United States)

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  14. A practical guide to big data research in psychology.

    Science.gov (United States)

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
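    The supervised-classification step in walkthroughs like these (which the article implements with support vector machines) can be illustrated with a dependency-free sketch; the bag-of-words perceptron below, including its training sentences and labels, is invented for illustration and stands in for the SVM step:

```python
from collections import Counter

# Toy labeled corpus: 1 = positive, 0 = negative (invented examples).
train = [("great helpful support staff", 1),
         ("terrible slow rude service", 0),
         ("helpful friendly great", 1),
         ("slow terrible experience", 0)]

def featurize(text):
    # Bag-of-words features: word -> count.
    return Counter(text.split())

weights = Counter()
for _ in range(10):                          # a few passes over the data
    for text, label in train:
        feats = featurize(text)
        score = sum(weights[w] * c for w, c in feats.items())
        pred = 1 if score > 0 else 0
        if pred != label:                    # perceptron update on mistakes
            sign = 1 if label == 1 else -1
            for w, c in feats.items():
                weights[w] += sign * c

def predict(text):
    s = sum(weights[w] * c for w, c in featurize(text).items())
    return 1 if s > 0 else 0

print(predict("great service"))    # → 1
print(predict("rude slow service"))  # → 0
```

In practice one would swap the perceptron for an SVM from a standard machine-learning library, as the article's tutorials do; the feature-extraction and train/predict workflow stays the same.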

  15. Big Book of Windows Hacks

    CERN Document Server

    Gralla, Preston

    2008-01-01

    Bigger, better, and broader in scope, the Big Book of Windows Hacks gives you everything you need to get the most out of your Windows Vista or XP system, including its related applications and the hardware it runs on or connects to. Whether you want to tweak Vista's Aero interface, build customized sidebar gadgets and run them from a USB key, or hack the "unhackable" screensavers, you'll find quick and ingenious ways to bend these recalcitrant operating systems to your will. The Big Book of Windows Hacks focuses on Vista, the new bad boy on Microsoft's block, with hacks and workarounds that

  16. Sosiaalinen asiakassuhdejohtaminen ja big data

    OpenAIRE

    Toivonen, Topi-Antti

    2015-01-01

    This thesis examines social customer relationship management (social CRM) and the benefits that big data can bring to it. Social customer relationship management is a new and, to many, unfamiliar term. The study is motivated by the scarcity of research on the topic, the complete absence of Finnish-language research, and the potentially essential role of social CRM in business operations in the future. Studies on big data often concentrate on its technical side rather than on its applicatio...

  17. Do big gods cause anything?

    DEFF Research Database (Denmark)

    Geertz, Armin W.

    2014-01-01

    This is a contribution to a review symposium on Ara Norenzayan's book Big Gods: How Religion Transformed Cooperation and Conflict (Princeton University Press, 2013). The book is fascinating but problematic with regard to causality, atheism, and stereotypes about hunter-gatherers.

  18. Big Data and Social Media

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    A critical analysis of the "keep everything" Big Data era, the impact on our lives of the information, at first glance "convenient for future use" that we make known about ourselves on the network. NB! The lecture will be recorded like all Academic Training lectures. Lecturer's biography: Father of the Internet, see https://internethalloffame.org/inductees/vint-cerf or https://en.wikipedia.org/wiki/Vint_Cerf The video on slide number 9 is from page https://www.gapminder.org/tools/#$state$time$value=2018&value;;&chart-type=bubbles   Keywords: Big Data, Internet, History, Applications, tools, privacy, technology, preservation, surveillance, google, Arpanet, CERN, Web  

  19. Baryon symmetric big bang cosmology

    International Nuclear Information System (INIS)

    Stecker, F.W.

    1978-01-01

    It is stated that the framework of baryon symmetric big bang (BSBB) cosmology offers our greatest potential for deducing the evolution of the Universe, because its physical laws and processes involve the minimum number of arbitrary assumptions about initial conditions in the big bang. In addition, it offers the possibility of explaining the photon-baryon ratio in the Universe and how galaxies and galaxy clusters are formed. BSBB cosmology also provides the only acceptable explanation at present for the origin of the cosmic γ-ray background radiation. (author)

  20. Release plan for Big Pete

    International Nuclear Information System (INIS)

    Edwards, T.A.

    1996-11-01

    This release plan provides instructions for the Radiological Control Technician (RCT) to conduct surveys for the unconditional release of ''Big Pete,'' which was used in the removal of ''Spacers'' from the N-Reactor. Prior to performing surveys, the rear end portion of ''Big Pete'' shall be cleaned (i.e., free of oil, grease, caked soil, heavy dust). If no contamination is found, the vehicle may be released with the permission of the area RCT Supervisor. If contamination is found by any of the surveys, contact the cognizant Radiological Engineer for decontamination instructions

  1. Small quarks make big nuggets

    International Nuclear Information System (INIS)

    Deligeorges, S.

    1985-01-01

    After a brief review of the classification of subatomic particles, this paper deals with quark nuggets: particles with more than three quarks in a big bag, called a ''nuclearite''. Neutron stars, in fact, are big sacks of quarks, gigantic nuggets. Physicists are now trying to calculate which types of strange quark matter nuggets are stable, and what influence quark nuggets had on primordial nucleosynthesis. At present, it is thought that if these ''nuggets'' exist, and in large proportion, they may be candidates for the missing mass [fr

  2. [Big Data- challenges and risks].

    Science.gov (United States)

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed through the connection, processing and analysis of this information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions for the effective use of these opportunities: proper infrastructure, and a well-defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions toward solutions are also presented.

  3. Towards a big crunch dual

    Energy Technology Data Exchange (ETDEWEB)

    Hertog, Thomas E-mail: hertog@vulcan2.physics.ucsb.edu; Horowitz, Gary T

    2004-07-01

    We show there exist smooth asymptotically anti-de Sitter initial data which evolve to a big crunch singularity in a low energy supergravity limit of string theory. This opens up the possibility of using the dual conformal field theory to obtain a fully quantum description of the cosmological singularity. A preliminary study of this dual theory suggests that the big crunch is an endpoint of evolution even in the full string theory. We also show that any theory with scalar solitons must have negative energy solutions. The results presented here clarify our earlier work on cosmic censorship violation in N=8 supergravity. (author)

  4. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  5. Integrative Analysis of Omics Big Data.

    Science.gov (United States)

    Yu, Xiang-Tian; Zeng, Tao

    2018-01-01

    The diversity and huge volume of omics data have taken biology and biomedicine research and application into a big data era, much as happened in wider society a decade ago. They open a new challenge, from horizontal data ensembles (e.g., similar types of data collected from different labs or companies) to vertical data ensembles (e.g., different types of data collected for a group of people with matched information), which requires integrative analysis in biology and biomedicine and also calls for the development of data integration to address the shift from population-guided to individual-guided investigations. Data integration is an effective approach for solving complex problems and understanding complicated systems. Several benchmark studies have revealed the heterogeneity and trade-offs that exist in the analysis of omics data. Integrative analysis can combine and investigate many datasets in a cost-effective, reproducible way. Current integration approaches on biological data have two modes: one is a "bottom-up integration" mode with follow-up manual integration, and the other is a "top-down integration" mode with follow-up in silico integration. This paper will first summarize the combinatory analysis approaches to give a candidate protocol on biological experiment design for effective integrative studies on genomics, and then survey data fusion approaches to give helpful instruction on computational model development for biological significance detection, which have also provided new data resources and analysis tools to support precision medicine dependent on big biomedical data. Finally, problems and future directions are highlighted for the integrative analysis of omics big data.

  6. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts above 0.5. Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  7. RETRAN operational transient analysis of the Big Rock Point plant boiling water reactor

    International Nuclear Information System (INIS)

    Sawtelle, G.R.; Atchison, J.D.; Farman, R.F.; VandeWalle, D.J.; Bazydlo, H.G.

    1983-01-01

    Energy Incorporated used the RETRAN computer code to model and calculate nine Consumers Power Company Big Rock Point Nuclear Power Plant transients. RETRAN, a best-estimate, one-dimensional, homogeneous-flow thermal-equilibrium code, is applicable to FSAR Chapter 15 transients for Conditions I through IV. The BWR analyses were performed in accordance with USNRC Standard Review Plan criteria and in response to the USNRC Systematic Evaluation Program. The RETRAN Big Rock Point model was verified by comparison to plant startup test data. This paper discusses the unique modeling techniques used in RETRAN to model this steam-drum-type BWR. Transient analysis results are also presented

  8. Big Cities, Big Problems: Reason for the Elderly to Move?

    NARCIS (Netherlands)

    Fokkema, T.; de Jong-Gierveld, J.; Nijkamp, P.

    1996-01-01

    In many European countries, data on geographical patterns of internal elderly migration show that the elderly (55+) are more likely to leave than to move to the big cities. Besides emphasising the attractive features of the destination areas (pull factors), it is often assumed that this negative

  9. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  10. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Big Data can be static on one machine or distributed ... decision making, and process automation. Big data ... Concept drifting: concept drifting means the classifier ... transactions generated by a prefix tree structure. EstDec ...

  11. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  12. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  13. China: Big Changes Coming Soon

    Science.gov (United States)

    Rowen, Henry S.

    2011-01-01

    Big changes are ahead for China, probably abrupt ones. The economy has grown so rapidly for many years, over 30 years at an average of nine percent a year, that its size makes it a major player in trade and finance and increasingly in political and military matters. This growth is not only of great importance internationally, it is already having…

  14. Big data and urban governance

    NARCIS (Netherlands)

    Taylor, L.; Richter, C.; Gupta, J.; Pfeffer, K.; Verrest, H.; Ros-Tonen, M.

    2015-01-01

    This chapter examines the ways in which big data is involved in the rise of smart cities. Mobile phones, sensors and online applications produce streams of data which are used to regulate and plan the city, often in real time, but which presents challenges as to how the city’s functions are seen and

  15. Big Data for personalized healthcare

    NARCIS (Netherlands)

    Siemons, Liseth; Sieverink, Floor; Vollenbroek, Wouter; van de Wijngaert, Lidwien; Braakman-Jansen, Annemarie; van Gemert-Pijnen, Lisette

    2016-01-01

    Big Data, often defined according to the 5V model (volume, velocity, variety, veracity and value), is seen as the key towards personalized healthcare. However, it also confronts us with new technological and ethical challenges that require more sophisticated data management tools and data analysis

  16. Big data en gelijke behandeling

    NARCIS (Netherlands)

    Lammerant, Hans; de Hert, Paul; Blok, P.H.; Blok, P.H.

    2017-01-01

    In this chapter we first examine the main basic concepts of equal treatment and discrimination (section 6.2). We then look at the Dutch and European legal framework on non-discrimination (sections 6.3-6.5) and how those rules should be applied to big

  17. Research Ethics in Big Data.

    Science.gov (United States)

    Hammer, Marilyn J

    2017-05-01

    The ethical conduct of research includes, in part, patient agreement to participate in studies and the protection of health information. In the evolving world of data science and the accessibility of large quantities of web-based data created by millions of individuals, novel methodologic approaches to answering research questions are emerging. This article explores research ethics in the context of big data.

  18. Big data e data science

    OpenAIRE

    Cavique, Luís

    2014-01-01

    This article presents the basic concepts of Big Data and the new field to which it has given rise, Data Science. Within Data Science, the notion of dimensionality reduction of data is discussed and illustrated with examples.

  19. The Case for "Big History."

    Science.gov (United States)

    Christian, David

    1991-01-01

    Urges an approach to the teaching of history that takes the largest possible perspective, crossing time as well as space. Discusses the problems and advantages of such an approach. Describes a course on "big" history that begins with time, creation myths, and astronomy, and moves on to paleontology and evolution. (DK)

  20. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  1. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    Sampling Operations on Big Data. Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller (Lincoln...). ...process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and ... categories. These include edge sampling methods, where edges are selected by a predetermined criterion; snowball sampling methods, where algorithms start...
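    Among the streaming sampling methods a survey like this covers, classic reservoir sampling (Vitter's Algorithm R) is a representative example for drawing a uniform sample from a stream of unknown length; the sketch below is a standard implementation, not code from the report:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Draw a uniform random sample of k items from a stream in one pass.

    Classic 'Algorithm R': the first k items fill the reservoir, and each
    later item i replaces a random reservoir slot with probability k/(i+1),
    so every item ends up in the sample with equal probability.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # uniform slot in [0, i]
            if j < k:
                reservoir[j] = item     # keep item with probability k/(i+1)
    return reservoir

print(len(reservoir_sample(range(10**6), 5)))  # → 5
```

The appeal for big data settings is that memory use is O(k) regardless of stream length, and the stream is consumed exactly once.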

  2. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  3. Big Math for Little Kids

    Science.gov (United States)

    Greenes, Carole; Ginsburg, Herbert P.; Balfanz, Robert

    2004-01-01

    "Big Math for Little Kids," a comprehensive program for 4- and 5-year-olds, develops and expands on the mathematics that children know and are capable of doing. The program uses activities and stories to develop ideas about number, shape, pattern, logical reasoning, measurement, operations on numbers, and space. The activities introduce the…

  4. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    In recent years, dealing with large amounts of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby generate market advantages. So, the companies that turn to Big Data have a competitive advantage over other firms. Looking from the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects regarding the Big Data concept, and the principles to build, organize and analyse huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also refers to the graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  5. From Big Bang to Eternity?

    Indian Academy of Sciences (India)

    at different distances (that is, at different epochs in the past) to come to this ... that the expansion started billions of years ago from an explosive Big Bang. Recent research sheds new light on the key cosmological question about the distant ...

  6. Banking Wyoming big sagebrush seeds

    Science.gov (United States)

    Robert P. Karrfalt; Nancy Shaw

    2013-01-01

    Five commercially produced seed lots of Wyoming big sagebrush (Artemisia tridentata Nutt. var. wyomingensis (Beetle & Young) S.L. Welsh [Asteraceae]) were stored under various conditions for 5 y. Purity, moisture content as measured by equilibrium relative humidity, and storage temperature were all important factors to successful seed storage. Our results indicate...

  7. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  8. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us about its consequences. This special

  9. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  10. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data

  11. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  12. Game, cloud architecture and outreach for The BIG Bell Test

    Science.gov (United States)

    Abellan, Carlos; Tura, Jordi; Garcia, Marta; Beduini, Federica; Hirschmann, Alina; Pruneri, Valerio; Acin, Antonio; Marti, Maria; Mitchell, Morgan

    The BIG Bell test uses the input from the Bellsters, self-selected human participants introducing zeros and ones through an online videogame, to perform a suite of quantum physics experiments. In this talk, we will explore the videogame, the data infrastructure and the outreach efforts of the BIG Bell test collaboration. First, we will discuss how the game was designed so as to eliminate possible feedback mechanisms that could influence people's behavior. Second, we will discuss the cloud architecture design for scalability, and explain how we sent each individual bit from the users to the labs. Also, using all the bits collected via the BIG Bell test interface, we will show a data analysis of human randomness, e.g. are younger Bellsters more random than older Bellsters? Finally, we will talk about the outreach and communication efforts of the BIG Bell test collaboration, exploring both the social media campaigns and the close interaction with teachers and educators to bring the project into classrooms.
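    A per-group randomness comparison of the kind mentioned above could, for example, be based on the block entropy of the submitted bit strings; the following sketch is an illustration of that idea, not the collaboration's actual analysis:

```python
import math
from collections import Counter

def block_entropy(bits, k=2):
    """Shannon entropy (in bits) over non-overlapping length-k blocks
    of a 0/1 string. A perfectly random source approaches k bits per
    block; predictable human input (e.g. strict alternation) scores lower.
    """
    blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
    total = len(blocks)
    counts = Counter(blocks)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

print(block_entropy("01" * 50))        # strict alternation → 0.0
print(block_entropy("00011011" * 10))  # all four 2-bit blocks equally → 2.0
```

Comparing the mean block entropy of bit strings grouped by participant age would then be a simple way to pose the "are younger Bellsters more random?" question quantitatively.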

  13. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  14. Direct Bandgap Group IV Materials

    Science.gov (United States)

    2016-01-21

    AFRL-AFOSR-JP-TR-2017-0049, Direct Bandgap Group IV Materials. Hung Hsiang Cheng, National Taiwan University, 1 Roosevelt Rd. Sec. 4, Taipei City, 10617, TW. Final Report, 01/21/2016. Abstract: Direct bandgap group IV materials have been long sought for in both academia and industry for the implementation of photonic devices

  15. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  16. Fuzzy VIKOR approach for selection of big data analyst in procurement management

    Directory of Open Access Journals (Sweden)

    Surajit Bag

    2016-07-01

    Full Text Available Background: Big data and predictive analysis have been hailed as the fourth paradigm of science. Big data and analytics are critical to the future of business sustainability. The demand for data scientists is increasing with the dynamic nature of businesses, thus making it indispensable to manage big data, derive meaningful results and interpret management decisions. Objectives: The purpose of this study was to provide a brief conceptual review of big data and analytics and further illustrate the use of a multicriteria decision-making technique in selecting the right skilled candidate for big data and analytics in procurement management. Method: It is important for firms to select and recruit the right data analyst, both in terms of skill sets and scope of analysis. The problem is complex and multicriteria in nature, involving both qualitative and quantitative factors. In the current study, an application of the Fuzzy VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) method was used to solve the big data analyst selection problem. Results: From this study, it was identified that Technical knowledge (C1), Intellectual curiosity (C4) and Business acumen (C5) are the strongest influential criteria and must be present in the candidate for the big data and analytics job. Conclusion: Fuzzy VIKOR is well suited to this kind of multiple-criteria decision-making problem. This study will assist human resource managers and procurement managers in selecting the right workforce for big data analytics.
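    To illustrate the ranking mechanics behind the method used in this record, the following is a minimal sketch of crisp (non-fuzzy) VIKOR in Python; the candidate scores, criteria weights, and the compromise parameter v = 0.5 are all hypothetical, and the triangular-fuzzy-number handling of the actual study is omitted:

    ```python
    def vikor(scores, weights, v=0.5):
        """Rank alternatives with the (crisp) VIKOR method.
        scores: one row per alternative; higher = better (benefit criteria).
        weights: one weight per criterion, summing to 1.
        Returns Q values; a lower Q marks a better compromise solution."""
        n_crit = len(weights)
        best = [max(row[j] for row in scores) for j in range(n_crit)]
        worst = [min(row[j] for row in scores) for j in range(n_crit)]
        S, R = [], []
        for row in scores:
            # Weighted, normalized distance from the per-criterion ideal.
            terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                     for j in range(n_crit)]
            S.append(sum(terms))   # group utility
            R.append(max(terms))   # individual regret
        s_best, s_worst = min(S), max(S)
        r_best, r_worst = min(R), max(R)
        return [v * (S[i] - s_best) / (s_worst - s_best)
                + (1 - v) * (R[i] - r_best) / (r_worst - r_best)
                for i in range(len(scores))]

    # Three hypothetical candidates scored (0-10) on technical knowledge,
    # intellectual curiosity and business acumen; weights are illustrative.
    Q = vikor([[7, 9, 6], [9, 6, 8], [6, 7, 9]], [0.4, 0.3, 0.3])
    ranking = sorted(range(len(Q)), key=Q.__getitem__)  # lowest Q ranks first
    ```

    In the fuzzy variant, each score is a triangular fuzzy number and the S, R, Q values are defuzzified before ranking; the compromise structure above is unchanged.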

  17. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  18. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  19. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  20. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  1. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c) 2 . At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios

  2. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  3. Big Data – Big Deal for Organization Design?

    OpenAIRE

    Janne J. Korhonen

    2014-01-01

    Analytics is an increasingly important source of competitive advantage. It has even been posited that big data will be the next strategic emphasis of organizations and that analytics capability will be manifested in organizational structure. In this article, I explore how analytics capability might be reflected in organizational structure using the notion of  “requisite organization” developed by Jaques (1998). Requisite organization argues that a new strategic emphasis requires the addition ...

  4. Nowcasting using news topics Big Data versus big bank

    OpenAIRE

    Thorsrud, Leif Anders

    2016-01-01

    The agents in the economy use a plethora of high-frequency information, including news media, to guide their actions and thereby shape aggregate economic fluctuations. Traditional nowcasting approaches have made relatively little use of such information. In this paper, I show how unstructured textual information in a business newspaper can be decomposed into daily news topics and used to nowcast quarterly GDP growth. Compared with a big bank of experts, here represented by official c...

  5. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  6. Research Dilemmas with Behavioral Big Data.

    Science.gov (United States)

    Shmueli, Galit

    2017-06-01

    Behavioral big data (BBD) refers to very large and rich multidimensional data sets on human and social behaviors, actions, and interactions, which have become available to companies, governments, and researchers. A growing number of researchers in social science and management fields acquire and analyze BBD for the purpose of extracting knowledge and scientific discoveries. However, the relationships between the researcher, data, subjects, and research questions differ in the BBD context compared to traditional behavioral data. Behavioral researchers using BBD face not only methodological and technical challenges but also ethical and moral dilemmas. In this article, we discuss several dilemmas, challenges, and trade-offs related to acquiring and analyzing BBD for causal behavioral research.

  7. Big power from walking

    Science.gov (United States)

    Illenberger, Patrin K.; Madawala, Udaya K.; Anderson, Iain A.

    2016-04-01

    Dielectric Elastomer Generators (DEG) offer an opportunity to capture the energy otherwise wasted from human motion. By integrating a DEG into the heel of standard footwear, it is possible to harness this energy to power portable devices. DEGs require substantial auxiliary systems which are commonly large, heavy and inefficient. A unique challenge for these low power generators is the combination of high voltage and low current. A void exists in the semiconductor market for devices that can meet these requirements. Until these become available, existing devices must be used in an innovative way to produce an effective DEG system. Existing systems such as the Bi-Directional Flyback (BDFB) and Self Priming Circuit (SPC) are an excellent example of this. The BDFB allows full charging and discharging of the DEG, improving power gained. The SPC allows fully passive voltage boosting, removing the priming source and simplifying the electronics. This paper outlines the drawbacks and benefits of active and passive electronic solutions for maximizing power from walking.

  8. Big Data for Precision Medicine

    Directory of Open Access Journals (Sweden)

    Daniel Richard Leff

    2015-09-01

    Full Text Available This article focuses on the potential impact of big data analysis to improve health, prevent and detect disease at an earlier stage, and personalize interventions. The role that big data analytics may have in interrogating the patient electronic health record toward improved clinical decision support is discussed. We examine developments in pharmacogenetics that have increased our appreciation of the reasons why patients respond differently to chemotherapy. We also assess the expansion of online health communications and the way in which this data may be capitalized on in order to detect public health threats and control or contain epidemics. Finally, we describe how a new generation of wearable and implantable body sensors may improve wellbeing, streamline management of chronic diseases, and improve the quality of surgical implants.

  9. Big Data hvor N=1

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    2017-01-01

    Research on the use of 'big data' in health care has only just begun, and may in time become a great help in organizing a more personal and holistic health effort for patients with multiple chronic conditions. Personal health technology, briefly presented in this chapter, holds great potential for carrying out 'big data' analyses for the individual person, that is, where N=1. There are major technological challenges in creating technologies and methods to collect and manage personal data that can be shared across systems in a standardized, responsible, robust, secure and not...

  10. George and the big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2012-01-01

    George has problems. He has twin baby sisters at home who demand his parents’ attention. His beloved pig Freddy has been exiled to a farm, where he’s miserable. And worst of all, his best friend, Annie, has made a new friend whom she seems to like more than George. So George jumps at the chance to help Eric with his plans to run a big experiment in Switzerland that seeks to explore the earliest moment of the universe. But there is a conspiracy afoot, and a group of evildoers is planning to sabotage the experiment. Can George repair his friendship with Annie and piece together the clues before Eric’s experiment is destroyed forever? This engaging adventure features essays by Professor Stephen Hawking and other eminent physicists about the origins of the universe and ends with a twenty-page graphic novel that explains how the Big Bang happened—in reverse!

  11. Did the Big Bang begin?

    International Nuclear Information System (INIS)

    Levy-Leblond, J.

    1990-01-01

    It is argued that the age of the universe may well be numerically finite (20 billion years or so) and conceptually infinite. A new and natural time scale is defined on a physical basis using group-theoretical arguments. An additive notion of time is obtained according to which the age of the universe is indeed infinite. In other words, never did the Big Bang begin. This new time scale is not supposed to replace the ordinary cosmic time scale, but to supplement it (in the same way as rapidity has taken a place by the side of velocity in Einsteinian relativity). The question is discussed within the framework of conventional (big-bang) and classical (nonquantum) cosmology, but could easily be extended to more elaborate views, as the purpose is not so much to modify present theories as to reach a deeper understanding of their meaning

  12. Big Data in Drug Discovery.

    Science.gov (United States)

    Brown, Nathan; Cambruzzi, Jean; Cox, Peter J; Davies, Mark; Dunbar, James; Plumbley, Dean; Sellwood, Matthew A; Sim, Aaron; Williams-Jones, Bryn I; Zwierzyna, Magdalena; Sheppard, David W

    2018-01-01

    Interpretation of Big Data in the drug discovery community should enhance project timelines and reduce clinical attrition through improved early decision making. The issues we encounter start with the sheer volume of data and how we first ingest it before building an infrastructure to house it to make use of the data in an efficient and productive way. There are many problems associated with the data itself including general reproducibility, but often, it is the context surrounding an experiment that is critical to success. Help, in the form of artificial intelligence (AI), is required to understand and translate the context. On the back of natural language processing pipelines, AI is also used to prospectively generate new hypotheses by linking data together. We explain Big Data from the context of biology, chemistry and clinical trials, showcasing some of the impressive public domain sources and initiatives now available for interrogation. © 2018 Elsevier B.V. All rights reserved.

  13. Big Data and central banks

    OpenAIRE

    David Bholat

    2015-01-01

    This commentary recaps a Centre for Central Banking Studies event held at the Bank of England on 2–3 July 2014. The article covers three main points. First, it situates the Centre for Central Banking Studies event within the context of the Bank’s Strategic Plan and initiatives. Second, it summarises and reflects on major themes from the event. Third, the article links central banks’ emerging interest in Big Data approaches with their broader uptake by other economic agents.

  14. Big Bang or vacuum fluctuation

    International Nuclear Information System (INIS)

    Zel'dovich, Ya.B.

    1980-01-01

    Some general properties of vacuum fluctuations in quantum field theory are described. The connection between the ''energy dominance'' of the energy density of vacuum fluctuations in curved space-time and the presence of singularity is discussed. It is pointed out that a de-Sitter space-time (with the energy density of the vacuum fluctuations in the Einstein equations) that matches the expanding Friedman solution may describe the history of the Universe before the Big Bang. (P.L.)

  15. Big bang is not needed

    Energy Technology Data Exchange (ETDEWEB)

    Allen, A.D.

    1976-02-01

    Recent computer simulations indicate that a system of n gravitating masses breaks up, even when the total energy is negative. As a result, almost any initial phase-space distribution results in a universe that eventually expands under the Hubble law. Hence Hubble expansion implies little regarding an initial cosmic state. Especially it does not imply the singularly dense superpositioned state used in the big bang model.

  16. Big data: the management revolution.

    Science.gov (United States)

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  17. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  18. Big Bayou Creek and Little Bayou Creek Watershed Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Kszos, L.A.; Peterson, M.J.; Ryon; Smith, J.G.

    1999-03-01

    Biological monitoring of Little Bayou and Big Bayou creeks, which border the Paducah Site, has been conducted since 1987. Biological monitoring was conducted by the University of Kentucky from 1987 to 1991 and by staff of the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) from 1991 through March 1999. In March 1998, renewed Kentucky Pollutant Discharge Elimination System (KPDES) permits were issued to the US Department of Energy (DOE) and US Enrichment Corporation. The renewed DOE permit requires that a watershed monitoring program be developed for the Paducah Site within 90 days of the effective date of the renewed permit. This plan outlines the sampling and analysis that will be conducted for the watershed monitoring program. The objectives of the watershed monitoring are to (1) determine whether discharges from the Paducah Site and the Solid Waste Management Units (SWMUs) associated with the Paducah Site are adversely affecting instream fauna, (2) assess the ecological health of Little Bayou and Big Bayou creeks, (3) assess the degree to which abatement actions ecologically benefit Big Bayou Creek and Little Bayou Creek, (4) provide guidance for remediation, (5) provide an evaluation of changes in potential human health concerns, and (6) provide data which could be used to assess the impact of inadvertent spills or fish kills. The intent is that cleanup will result in these watersheds [Big Bayou and Little Bayou creeks] achieving compliance with the applicable water quality criteria.

  19. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
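    One standard way to blend a direct survey estimate with a big-data-based prediction, in the spirit of the small area methods this record describes, is a Fay-Herriot-style composite estimator: each area's estimate is shrunk toward a synthetic prediction built from auxiliary covariates. A minimal sketch, with hypothetical areas, estimates, and variances:

    ```python
    def fay_herriot_composite(direct, synthetic, psi, sigma2_v):
        """EBLUP-style composite: gamma * direct + (1 - gamma) * synthetic,
        where gamma = sigma2_v / (sigma2_v + psi_i). Areas whose direct survey
        estimate has a small sampling variance psi_i keep more of that direct
        estimate; noisy areas are shrunk toward the model prediction."""
        out = []
        for d, s, p in zip(direct, synthetic, psi):
            gamma = sigma2_v / (sigma2_v + p)
            out.append(gamma * d + (1 - gamma) * s)
        return out

    # Three hypothetical small areas: direct poverty-rate estimates, synthetic
    # predictions from a regression on a big-data covariate (e.g. mobile-phone
    # activity), per-area sampling variances, and a model variance sigma2_v.
    est = fay_herriot_composite(direct=[0.20, 0.35, 0.10],
                                synthetic=[0.22, 0.30, 0.12],
                                psi=[0.001, 0.010, 0.004],
                                sigma2_v=0.002)
    ```

    Each composite lands between its direct and synthetic inputs, with the balance set by the variance ratio; in practice sigma2_v is itself estimated from the data.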

  20. Surface urban heat island across 419 global big cities.

    Science.gov (United States)

    Peng, Shushi; Piao, Shilong; Ciais, Philippe; Friedlingstein, Pierre; Ottle, Catherine; Bréon, François-Marie; Nan, Huijuan; Zhou, Liming; Myneni, Ranga B

    2012-01-17

    Urban heat island is among the most evident aspects of human impacts on the earth system. Here we assess the diurnal and seasonal variation of surface urban heat island intensity (SUHII) defined as the surface temperature difference between urban area and suburban area measured from the MODIS. Differences in SUHII are analyzed across 419 global big cities, and we assess several potential biophysical and socio-economic driving factors. Across the big cities, we show that the average annual daytime SUHII (1.5 ± 1.2 °C) is higher than the annual nighttime SUHII (1.1 ± 0.5 °C) (P < 0.001). But no correlation is found between daytime and nighttime SUHII across big cities (P = 0.84), suggesting different driving mechanisms between day and night. The distribution of nighttime SUHII correlates positively with the difference in albedo and nighttime light between urban area and suburban area, while the distribution of daytime SUHII correlates negatively across cities with the difference of vegetation cover and activity between urban and suburban areas. Our results emphasize the key role of vegetation feedbacks in attenuating SUHII of big cities during the day, in particular during the growing season, further highlighting that increasing urban vegetation cover could be one effective way to mitigate the urban heat island effect.
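    The SUHII definition in this record reduces to a simple difference of means; a minimal sketch with made-up land-surface-temperature (LST) samples (the pixel values are illustrative, not MODIS data):

    ```python
    def suhii(urban_lst, suburban_lst):
        """Surface urban heat island intensity: mean LST over urban pixels
        minus the mean over suburban reference pixels, per the definition
        used in the record above."""
        mean = lambda xs: sum(xs) / len(xs)
        return mean(urban_lst) - mean(suburban_lst)

    # Illustrative LST samples in degrees C for one city.
    day = suhii([33.1, 32.8, 33.4], [31.0, 31.3, 30.9, 31.2])    # daytime
    night = suhii([24.6, 24.9, 24.4], [23.5, 23.7, 23.4, 23.6])  # nighttime
    ```

    With these made-up values the daytime intensity exceeds the nighttime one, mirroring the paper's average finding (1.5 vs. 1.1 degrees C), though the paper stresses that the two are uncorrelated across cities.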

  1. Big data reduction framework for value creation in sustainable enterprises

    OpenAIRE

    Rehman, Muhammad Habib ur; Chang, Victor; Batool, Aisha; Teh, Ying Wah

    2016-01-01

    Value creation is a major sustainability factor for enterprises, in addition to profit maximization and revenue generation. Modern enterprises collect big data from various inbound and outbound data sources. The inbound data sources handle data generated from the results of business operations, such as manufacturing, supply chain management, marketing, and human resource management, among others. Outbound data sources handle customer-generated data which are acquired directly or indirectly fr...

  2. Small data, data infrastructures and big data (Working Paper 1)

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2014-01-01

    The production of academic knowledge has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will, however, continue to be important in the future because of their utility in answer...

  3. Small data in the era of big data

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2015-01-01

    Academic knowledge building has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will however, we argue, continue to be popular and valuable in the fut...

  4. Real-Time Information Extraction from Big Data

    Science.gov (United States)

    2015-10-01

    Introduction: Enormous amounts of data are being generated by a large number of sensors and devices (Internet of Things, IoT), and this data is... brief summary in Section 7. Data Access Patterns for Current and Big Data Systems: Many current solution architectures rely on accessing data resident... by highly skilled human experts based on their intuition and vast knowledge. We do not have, and cannot produce, enough experts to fill our

  5. An overview of big data and data science education at South African universities

    Directory of Open Access Journals (Sweden)

    Eduan Kotzé

    2016-02-01

    Full Text Available Man and machine are generating data electronically at an astronomical speed and in such a way that society is experiencing cognitive challenges to analyse this data meaningfully. Big data firms, such as Google and Facebook, identified this problem several years ago and are continuously developing new technologies or improving existing technologies in order to facilitate the cognitive analysis process of these large data sets. The purpose of this article is to contribute to our theoretical understanding of the role that big data might play in creating new training opportunities for South African universities. The article investigates emerging literature on the characteristics and main components of big data, together with the Hadoop application stack as an example of big data technology. Due to the rapid development of big data technology, a paradigm shift of human resources is required to analyse these data sets; therefore, this study examines the state of big data teaching at South African universities. This article also provides an overview of possible big data sources for South African universities, as well as relevant big data skills that data scientists need. The study also investigates existing academic programs in South Africa, where the focus is on teaching advanced database systems. The study found that big data and data science topics are introduced to students on a postgraduate level, but that the scope is very limited. This article contributes by proposing important theoretical topics that could be introduced as part of the existing academic programs. More research is required, however, to expand these programs in order to meet the growing demand for data scientists with big data skills.

  6. Free-format RPG IV

    CERN Document Server

    Martin, Jim

    2013-01-01

    This how-to guide offers a concise and thorough introduction to the increased productivity, better readability, and easier program maintenance that comes with the free-format style of programming in RPG IV. Although free-format information is available in IBM manuals, it is not separated from everything else, thereby requiring hours of tedious research to track down the information needed. This book provides everything one needs to know to write RPG IV in the free-format style, and author Jim Martin not only teaches rules and syntax but also explains how this new style of coding has the pot

  7. Big Data Innovation Challenge : Pioneering Approaches to Data-Driven Development

    OpenAIRE

    World Bank Group

    2016-01-01

    Big data can sound remote and lacking a human dimension, with few obvious links to development and impacting the lives of the poor. Concepts such as anti-poverty targeting, market access or rural electrification seem far more relevant – and easier to grasp. And yet some of today’s most groundbreaking initiatives in these areas rely on big data. This publication profiles these and more, sho...

  8. Towards Cloud Processing of GGOS Big Data

    Science.gov (United States)

    Weston, Stuart; Kim, Bumjun; Litchfield, Alan; Gulyaev, Sergei; Hall, Dylan; Chorao, Carlos; Ruthven, Andrew; Davies, Glyn; Lagos, Bruno; Christie, Don

    2017-04-01

    We report on our initial steps towards development of a cloud-like correlation infrastructure for geodetic Very Long Baseline Interferometry (VLBI), whose raw data volume is of the order of 10-100 TB (big data). Data is generated by multiple VLBI radio telescopes and is then used for geodetic, geophysical, and astrometric research and operational activities through the International VLBI Service (IVS), as well as for corrections of GPS satellite orbits. Currently IVS data is correlated at several international Correlators (Correlation Centres), which receive data from individual radio telescope stations either on hard drives via regular mail service or via fibre using e-transfer mode. The latter is strongly limited by the connectivity of existing correlation centres, which creates bottlenecks and slows down the turnaround of the data. This becomes critical in many applications - for example, it currently takes 1-2 weeks to generate the dUT1 parameter for corrections of GNSS orbits, while a delay of less than 1-2 days is desirable. We started with a blade server at the AUT campus to emulate a cloud server using Virtual Machines (VMware). The New Zealand Data Head node is connected to the high-speed (100 Gbps) network ring circuit courtesy of the Research and Education Advanced Network New Zealand (REANNZ), with the additional nodes at remote physical sites connected via 10 Gbps fibre. We use real Australian Long Baseline Array (LBA) observational data from 6 radio telescopes in Australia, South Africa and New Zealand (15 baselines), 1.5 hours in duration and amounting to 8 TB, to emulate data transfer from remote locations and to provide a meaningful benchmark dataset for correlation. Data was successfully transferred using bespoke UDT network transfer tools and correlated with a speed-up factor of 0.8 using the DiFX software correlator. In partnership with the New Zealand office of Catalyst IT Ltd we have moved this environment into Catalyst Cloud and report on the first
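
    As a rough back-of-the-envelope illustration (the link speeds are those quoted above, but the arithmetic is ours, not the authors'), the 8 TB benchmark dataset implies transfer times like these:

```python
# Ideal transfer time for the 8 TB benchmark dataset over the links
# described in the abstract. Illustrative arithmetic only; real
# throughput depends on protocol overhead and contention.

def transfer_time_hours(size_tb: float, link_gbps: float) -> float:
    """Ideal transfer time for `size_tb` terabytes over a `link_gbps` link."""
    bits = size_tb * 1e12 * 8              # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9)     # link rate in bits per second
    return seconds / 3600.0

# 8 TB over a dedicated 10 Gbps fibre: ~1.8 hours at the ideal rate
print(round(transfer_time_hours(8, 10), 1))
# The same 8 TB over a 1 Gbps link: ~17.8 hours, illustrating why
# connectivity limits e-transfer between correlation centres.
print(round(transfer_time_hours(8, 1), 1))
```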

  9. Turning big bang into big bounce. I. Classical dynamics

    Science.gov (United States)

    Dzierżak, Piotr; Małkiewicz, Przemysław; Piechocki, Włodzimierz

    2009-11-01

    The big bounce (BB) transition within a flat Friedmann-Robertson-Walker model is analyzed in the setting of loop geometry underlying the loop cosmology. We solve the constraint of the theory at the classical level to identify physical phase space and find the Lie algebra of the Dirac observables. We express energy density of matter and geometrical functions in terms of the observables. It is the modification of classical theory by the loop geometry that is responsible for BB. The classical energy scale specific to BB depends on a parameter that should be fixed either by cosmological data or determined theoretically at quantum level, otherwise the energy scale stays unknown.

  10. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  11. [Opened on 11 April at the Draakon gallery...]

    Index Scriptorium Estoniae

    2005-01-01

    Tanel Saar's (b. 1979) exhibition "Gott und huhn episode IV: seed shower". On display are excerpts from actions carried out between 2000 and 2004 in Turku, Nuremberg, Berlin, Lohusalu and Seoul. Some of the actions took place together with the group Non Grata

  12. The Uses of Big Data in Cities.

    Science.gov (United States)

    Bettencourt, Luís M A

    2014-03-01

    There is much enthusiasm currently about the possibilities created by new and more extensive sources of data to better understand and manage cities. Here, I explore how big data can be useful in urban planning by formalizing the planning process as a general computational problem. I show that, under general conditions, new sources of data coordinated with urban policy can be applied following fundamental principles of engineering to achieve new solutions to important age-old urban problems. I also show that comprehensive urban planning is computationally intractable (i.e., practically impossible) in large cities, regardless of the amounts of data available. This dilemma between the need for planning and coordination and its impossibility in detail is resolved by the recognition that cities are first and foremost self-organizing social networks embedded in space and enabled by urban infrastructure and services. As such, the primary role of big data in cities is to facilitate information flows and mechanisms of learning and coordination by heterogeneous individuals. However, processes of self-organization in cities, as well as of service improvement and expansion, must rely on general principles that enforce necessary conditions for cities to operate and evolve. Such ideas are the core of a developing scientific theory of cities, which is itself enabled by the growing availability of quantitative data on thousands of cities worldwide, across different geographies and levels of development. These three uses of data and information technologies in cities constitute then the necessary pillars for more successful urban policy and management that encourages, and does not stifle, the fundamental role of cities as engines of development and innovation in human societies.

  13. The Natural Science Underlying Big History

    Directory of Open Access Journals (Sweden)

    Eric J. Chaisson

    2014-01-01

    Full Text Available Nature’s many varied complex systems—including galaxies, stars, planets, life, and society—are islands of order within the increasingly disordered Universe. All organized systems are subject to physical, biological, or cultural evolution, which together comprise the grander interdisciplinary subject of cosmic evolution. A wealth of observational data supports the hypothesis that increasingly complex systems evolve unceasingly, uncaringly, and unpredictably from big bang to humankind. These are global history greatly extended, big history with a scientific basis, and natural history broadly portrayed across ∼14 billion years of time. Human beings and our cultural inventions are not special, unique, or apart from Nature; rather, we are an integral part of a universal evolutionary process connecting all such complex systems throughout space and time. Such evolution writ large has significant potential to unify the natural sciences into a holistic understanding of who we are and whence we came. No new science (beyond frontier, nonequilibrium thermodynamics) is needed to describe cosmic evolution’s major milestones at a deep and empirical level. Quantitative models and experimental tests imply that a remarkable simplicity underlies the emergence and growth of complexity for a wide spectrum of known and diverse systems. Energy is a principal facilitator of the rising complexity of ordered systems within the expanding Universe; energy flows are as central to life and society as they are to stars and galaxies. In particular, energy rate density—contrasting with information content or entropy production—is an objective metric suitable to gauge relative degrees of complexity among a hierarchy of widely assorted systems observed throughout the material Universe. Operationally, those systems capable of utilizing optimum amounts of energy tend to survive, and those that cannot are nonrandomly eliminated.
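
    The energy rate density metric described above is simply energy flow per unit time per unit mass. As a hedged illustration (the constants below are round textbook values, not figures taken from this article), it can be computed for two familiar systems:

```python
# Energy rate density (erg s^-1 g^-1): power per unit mass, the
# complexity metric discussed above. Constants are rough, commonly
# quoted round values, not numbers from this article.

ERG_PER_JOULE = 1e7

def energy_rate_density(power_watts: float, mass_grams: float) -> float:
    """Power per unit mass in erg s^-1 g^-1."""
    return power_watts * ERG_PER_JOULE / mass_grams

# The Sun: luminosity ~3.8e26 W, mass ~2e33 g
sun = energy_rate_density(3.8e26, 2e33)

# A human body: ~100 W metabolic rate, ~7e4 g
human = energy_rate_density(100, 7e4)

print(f"Sun:   {sun:.1f}")
print(f"Human: {human:.0f}")
```

Even this crude estimate reproduces the ordering the article emphasizes: a living body processes orders of magnitude more energy per gram than a star.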

  14. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. Analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents a new method to reverse this trend and improve profitability. The benefits of Big Data and ...

  15. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping, where large amounts of information are collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints to the use of big data include the cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  16. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches.

  17. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data, introducing some novel optimization algorithms and codes capable of working in the big data setting as well as some applications of big data optimization, for the benefit of interested academics and practitioners in society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyse large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  18. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
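
    The propensity score methods mentioned above reweight observations by the inverse of their treatment probability so that treated and untreated groups become comparable. A minimal sketch of inverse probability weighting, on invented toy data with pre-specified propensities (a real analysis would estimate them from covariates):

```python
# Toy inverse-probability-weighting (IPW) sketch of the propensity
# score idea described above. Records and propensities are invented
# purely for illustration.

# (treated, outcome, propensity e = P(treated | covariates))
records = [
    (1, 8.0, 0.8), (1, 7.0, 0.6), (1, 9.0, 0.7),
    (0, 5.0, 0.8), (0, 4.0, 0.3), (0, 6.0, 0.4),
]

def ipw_mean(records, treated_flag):
    """Weighted outcome mean: weight 1/e for treated, 1/(1-e) for controls."""
    num = den = 0.0
    for t, y, e in records:
        if t != treated_flag:
            continue
        w = 1.0 / e if t == 1 else 1.0 / (1.0 - e)
        num += w * y
        den += w
    return num / den

# Estimated average treatment effect on this toy data
effect = ipw_mean(records, 1) - ipw_mean(records, 0)
print(round(effect, 3))
```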

  19. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun [Chang'an University School of Information Engineering, Xi'an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi'an (China)]

    2014-10-06

    Big data environments create data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technology characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  20. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of an association rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  1. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    Big data environments create data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technology characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  2. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  3. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. It is also aimed at those who know Hadoop and want to build some intelligent applications over big data with R packages. It would be helpful if readers have basic knowledge of R.

  4. Towards Geo-spatial Information Science in Big Data Era

    Directory of Open Access Journals (Sweden)

    LI Deren

    2016-04-01

    Full Text Available Since the 1990s, with the advent of the worldwide information revolution and the development of the internet, geospatial information science has also come of age, pushing forward the building of the digital Earth and cyber cities. As we entered the 21st century, with the development and integration of global information technology and industrialization, the internet of things and cloud computing came into being, and human society entered the big data era. This article covers the key features (ubiquitous; multi-dimensional and dynamic; internet+networking; fully automated and real-time; from sensing to recognition; crowdsourcing and VGI; and service-oriented) of geospatial information science in the big data era, and addresses the key technical issues (a non-linear four-dimensional Earth reference frame system; space-based enhanced GNSS; space-air-land unified network communication techniques; on-board processing techniques for multi-source image data; smart interface service techniques for space-borne information; space-based resource scheduling and network security; design and development of a payload-based multi-functional satellite platform) that need to be resolved to provide a new definition of geospatial information science in the big data era. Based on the discussion in this paper, the author finally proposes a new definition of geospatial information science (geomatics): Geomatics is a multi-disciplinary science and technology which, using a systematic approach, integrates all the means for spatio-temporal data acquisition, information extraction, networked management, knowledge discovery, spatial sensing and recognition, as well as intelligent location-based services for any physical objects and human activities around the earth and its environment. Starting from this new definition, geospatial information science will get many more chances and find many more tasks in the big data era for the generation of the smart earth and smart city. Our profession

  5. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that have not been reached before. Big data is generally characterized by three factors: volume, velocity and variety. These three factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  6. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  7. The faces of Big Science.

    Science.gov (United States)

    Schatz, Gottfried

    2014-06-01

    Fifty years ago, academic science was a calling with few regulations or financial rewards. Today, it is a huge enterprise confronted by a plethora of bureaucratic and political controls. This change was not triggered by specific events or decisions but reflects the explosive 'knee' in the exponential growth that science has sustained during the past three-and-a-half centuries. Coming to terms with the demands and benefits of 'Big Science' is a major challenge for today's scientific generation. Since its foundation 50 years ago, the European Molecular Biology Organization (EMBO) has been of invaluable help in meeting this challenge.

  8. Big Data and central banks

    Directory of Open Access Journals (Sweden)

    David Bholat

    2015-04-01

    Full Text Available This commentary recaps a Centre for Central Banking Studies event held at the Bank of England on 2–3 July 2014. The article covers three main points. First, it situates the Centre for Central Banking Studies event within the context of the Bank’s Strategic Plan and initiatives. Second, it summarises and reflects on major themes from the event. Third, the article links central banks’ emerging interest in Big Data approaches with their broader uptake by other economic agents.

  9. Inhomogeneous Big Bang Nucleosynthesis Revisited

    OpenAIRE

    Lara, J. F.; Kajino, T.; Mathews, G. J.

    2006-01-01

    We reanalyze the allowed parameters for inhomogeneous big bang nucleosynthesis in light of the WMAP constraints on the baryon-to-photon ratio and a recent measurement which has set the neutron lifetime to be 878.5 +/- 0.7 +/- 0.3 seconds. For a set baryon-to-photon ratio the new lifetime reduces the mass fraction of He4 by 0.0015 but does not significantly change the abundances of other isotopes. This enlarges the region of concordance between He4 and deuterium in the parameter space of the b...

  10. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  11. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  12. [Big Data and surveillance, part 1: Definitions and discussions concerning Big Data]

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions, and the various questions related to big data. In this first part I will try to set out some of the theory concerning Big Data and

  13. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  14. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  15. The I-V Measurement System for Solar Cells Based on MCU

    International Nuclear Information System (INIS)

    Chen Fengxiang; Ai Yu; Wang Jiafu; Wang Lisheng

    2011-01-01

    In this paper, an I-V measurement system for solar cells based on a single-chip microcomputer (MCU) is presented. According to the test principles of solar cells, this measurement system mainly comprises two parts: data collection, and data processing and display. The MCU is mainly used to acquire data; the collected results are then sent to the computer via the serial port. The I-V measurement results of our test system are shown in a human-computer interaction interface based on our hardware circuit. By comparing the test results of our I-V tester with the results of other commercial I-V testers, we found that the errors for most parameters are less than 5%, which shows that our I-V test results are reliable. Because the MCU can be applied in many fields, this I-V measurement system offers a simple prototype for a portable I-V tester for solar cells.
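
    The abstract does not name the parameters being compared, but for solar cells they are typically short-circuit current, open-circuit voltage, maximum power point, and fill factor, all of which can be extracted from sampled I-V pairs. A minimal sketch on invented sample data (a real tester would stream the points from the MCU over the serial port):

```python
# Extracting standard solar-cell parameters from sampled I-V pairs.
# The sample points below are invented for illustration only.

def cell_parameters(points):
    """points: list of (voltage_V, current_A), from short to open circuit."""
    isc = points[0][1]                           # current at V = 0
    voc = points[-1][0]                          # voltage at I = 0
    v_mp, i_mp = max(points, key=lambda p: p[0] * p[1])
    p_max = v_mp * i_mp                          # maximum power point
    fill_factor = p_max / (voc * isc)
    return isc, voc, p_max, fill_factor

points = [(0.0, 3.0), (0.1, 2.98), (0.2, 2.95), (0.3, 2.9),
          (0.4, 2.8), (0.45, 2.6), (0.5, 2.0), (0.55, 1.0), (0.6, 0.0)]

isc, voc, p_max, ff = cell_parameters(points)
print(isc, voc, round(p_max, 3), round(ff, 3))
```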

  16. The I-V Measurement System for Solar Cells Based on MCU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Fengxiang; Ai Yu; Wang Jiafu; Wang Lisheng, E-mail: phonixchen79@yahoo.com.cn [Department of physics science and technology, Wuhan University of Technology, Wuhan city, Hubei Province, 430070 (China)

    2011-02-01

    In this paper, an I-V measurement system for solar cells based on a single-chip microcomputer (MCU) is presented. According to the test principles of solar cells, this measurement system mainly comprises two parts: data collection, and data processing and display. The MCU is mainly used to acquire data; the collected results are then sent to the computer via the serial port. The I-V measurement results of our test system are shown in a human-computer interaction interface based on our hardware circuit. By comparing the test results of our I-V tester with the results of other commercial I-V testers, we found that the errors for most parameters are less than 5%, which shows that our I-V test results are reliable. Because the MCU can be applied in many fields, this I-V measurement system offers a simple prototype for a portable I-V tester for solar cells.

  17. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  18. Evidence of Big Five and Aggressive Personalities in Gait Biomechanics.

    Science.gov (United States)

    Satchell, Liam; Morris, Paul; Mills, Chris; O'Reilly, Liam; Marshman, Paul; Akehurst, Lucy

    2017-01-01

    Behavioral observation techniques which relate action to personality have long been neglected (Furr and Funder in Handbook of research methods in personality psychology, The Guilford Press, New York, 2007) and, when employed, often use human judges to code behavior. In the current study we used an alternative to human coding (biomechanical research techniques) to investigate how personality traits are manifest in gait. We used motion capture technology to record 29 participants walking on a treadmill at their natural speed. We analyzed their thorax and pelvis movements, as well as speed of gait. Participants completed personality questionnaires, including a Big Five measure and a trait aggression questionnaire. We found that gait related to several of our personality measures. The magnitude of upper body movement, lower body movement, and walking speed, were related to Big Five personality traits and aggression. Here, we present evidence that some gait measures can relate to Big Five and aggressive personalities. We know of no other examples of research where gait has been shown to correlate with self-reported measures of personality and suggest that more research should be conducted between largely automatic movement and personality.
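
    The analysis described above, relating continuous gait measures to questionnaire scores, comes down to correlating two numeric series per trait. A minimal Pearson correlation sketch on invented values (not the study's motion-capture data):

```python
# Pearson correlation between a gait measure and a questionnaire score,
# the kind of analysis described above. All values are invented.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

thorax_motion = [4.1, 5.3, 3.8, 6.0, 5.1, 4.6]   # e.g. degrees of rotation
aggression    = [2.4, 3.1, 1.8, 3.5, 2.6, 2.9]   # questionnaire score

print(round(pearson_r(thorax_motion, aggression), 2))
```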

  19. [Big data, medical language and biomedical terminology systems].

    Science.gov (United States)

    Schulz, Stefan; López-García, Pablo

    2015-08-01

    A variety of rich terminology systems, such as thesauri, classifications, nomenclatures and ontologies support information and knowledge processing in health care and biomedical research. Nevertheless, human language, manifested as individually written texts, persists as the primary carrier of information, in the description of disease courses or treatment episodes in electronic medical records, and in the description of biomedical research in scientific publications. In the context of the discussion about big data in biomedicine, we hypothesize that the abstraction of the individuality of natural language utterances into structured and semantically normalized information facilitates the use of statistical data analytics to distil new knowledge out of textual data from biomedical research and clinical routine. Computerized human language technologies are constantly evolving and are increasingly ready to annotate narratives with codes from biomedical terminology. However, this depends heavily on linguistic and terminological resources. The creation and maintenance of such resources is labor-intensive. Nevertheless, it is sensible to assume that big data methods can be used to support this process. Examples include the learning of hierarchical relationships, the grouping of synonymous terms into concepts and the disambiguation of homonyms. Although clear evidence is still lacking, the combination of natural language technologies, semantic resources, and big data analytics is promising.

  20. Baryon symmetric big bang cosmology

    Science.gov (United States)

    Stecker, F. W.

    1978-01-01

Both the quantum theory and Einstein's theory of special relativity lead to the supposition that matter and antimatter were produced in equal quantities during the big bang. It is noted that local matter/antimatter asymmetries may be reconciled with universal symmetry by assuming (1) a slight imbalance of matter over antimatter in the early universe, annihilation, and a subsequent remainder of matter; (2) localized regions of excess for one or the other type of matter as an initial condition; and (3) an extremely dense, high temperature state with zero net baryon number; i.e., matter/antimatter symmetry. Attention is given to the third assumption, which is the simplest and the most in keeping with current knowledge of the cosmos, especially as pertains to the universality of 3 K background radiation. Mechanisms of galaxy formation are discussed, whereby matter and antimatter might have collided and annihilated each other, or have coexisted (and continue to coexist) at vast distances. It is pointed out that baryon symmetric big bang cosmology could probably be proved if an antinucleus could be detected in cosmic radiation.

  1. Georges et le big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2011-01-01

Georges and Annie, his best friend, are about to witness one of the most important scientific experiments of all time: exploring the first instants of the Universe, the Big Bang! Thanks to Cosmos, their super computer, and to the Large Hadron Collider created by Éric, Annie's father, they will finally be able to answer this essential question: why do we exist? But Georges and Annie discover that a diabolical plot is being hatched. Worse, all of scientific research is in peril! Swept into incredible adventures, Georges will travel to the far reaches of the galaxy to save his friends... A fascinating plunge into the heart of the Big Bang, featuring the very latest theories of Stephen Hawking and of today's greatest scientists.

  2. Astronomical Surveys and Big Data

    Directory of Open Access Journals (Sweden)

    Mickaelian Areg M.

    2016-03-01

Full Text Available Recent all-sky and large-area astronomical surveys and their catalogued data over the whole range of the electromagnetic spectrum, from γ-rays to radio waves, are reviewed, including Fermi-GLAST and INTEGRAL in γ-rays; ROSAT, XMM and Chandra in X-rays; GALEX in the UV; SDSS and several POSS I and POSS II-based catalogues (APM, MAPS, USNO, GSC) in the optical range; 2MASS in the NIR; WISE and AKARI IRC in the MIR; IRAS and AKARI FIS in the FIR; NVSS and FIRST in the radio range; and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS), and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). An overall understanding of the coverage along the whole wavelength range and comparisons between various surveys are given: galaxy redshift surveys, QSO/AGN, radio, Galactic structure, and Dark Energy surveys. Astronomy has entered the Big Data era, with Astrophysical Virtual Observatories and Computational Astrophysics playing an important role in using and analyzing big data for new discoveries.

  3. Big data in oncologic imaging.

    Science.gov (United States)

    Regge, Daniele; Mazzetti, Simone; Giannini, Valentina; Bracco, Christian; Stasi, Michele

    2017-06-01

    Cancer is a complex disease and unfortunately understanding how the components of the cancer system work does not help understand the behavior of the system as a whole. In the words of the Greek philosopher Aristotle "the whole is greater than the sum of parts." To date, thanks to improved information technology infrastructures, it is possible to store data from each single cancer patient, including clinical data, medical images, laboratory tests, and pathological and genomic information. Indeed, medical archive storage constitutes approximately one-third of total global storage demand and a large part of the data are in the form of medical images. The opportunity is now to draw insight on the whole to the benefit of each individual patient. In the oncologic patient, big data analysis is at the beginning but several useful applications can be envisaged including development of imaging biomarkers to predict disease outcome, assessing the risk of X-ray dose exposure or of renal damage following the administration of contrast agents, and tracking and optimizing patient workflow. The aim of this review is to present current evidence of how big data derived from medical images may impact on the diagnostic pathway of the oncologic patient.

  4. 76 FR 67130 - Bridger-Teton National Forest; Big Piney Ranger District; Wyoming; Environmental Impact Statement...

    Science.gov (United States)

    2011-10-31

... management actions, and (2) minimize food and other types of habituation and bear/human conflicts. Updated... project area is within the DFC 10 (Simultaneous Development of Resources, Opportunities for Human Experiences and Support for Big-game and a Wide Variety of Wildlife Species). Approximately five percent of the...

  5. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  6. Uruguay; 2011 Article IV Consultation

    OpenAIRE

    International Monetary Fund

    2011-01-01

    This 2011 Article IV Consultation highlights that the growth momentum in Uruguay has continued into 2011 but a slowdown is under way, led by weaker exports and slower public investment. Uruguay’s economic and financial vulnerabilities are modest, and the government has reduced debt vulnerabilities significantly and built important financial buffers. Executive Directors have commended authorities’ skillful macroeconomic management that has underpinned Uruguay’s excellent economic performance, ...

  7. Austria; 2013 Article IV Consultation

    OpenAIRE

    International Monetary Fund

    2013-01-01

    This paper presents details of Austria’s 2013 Article IV Consultation. Austria has been growing economically but is facing challenges in the financial sector. Full implementation of medium-term fiscal adjustment plans require specifying several measures and plans that need gradual strengthening to take expected further bank restructuring cost into account. It suggests that strong early bank intervention and resolution tools, a better designed deposit insurance system, and a bank-financed reso...

  8. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  9. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  10. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  11. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

Full Text Available The term "Big Data" has been coined to refer to the extensive volume of data that cannot be managed by traditional data-handling methods or techniques. The field of Big Data plays an indispensable role in many areas, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the method of examining big data to reveal hidden patterns, previously unknown relationships and other important information that can be utilized to make better decisions. There has been a perpetually expanding interest in big data because of its rapid growth and because it covers many areas of application. Apache Hadoop, an open-source technology written in Java that runs on the Linux operating system, was used. The primary contribution of this research is to present an effective and free solution for big data applications in a distributed environment, showing its advantages and ease of use. Consequently, there emerges a need for an analytical review of new developments in big data technology. Healthcare is one of the greatest concerns of the world. Big data in healthcare refers to electronic health data sets that relate to patient health care and well-being. Data in the healthcare area is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.

  12. Big data and software defined networks

    CERN Document Server

    Taheri, Javid

    2018-01-01

Big Data Analytics and Software Defined Networking (SDN) are helping to manage the extraordinary growth in data usage enabled by the increase in computer processing power provided by Cloud Data Centres (CDCs). This new book investigates areas where Big Data and SDN can help each other in delivering more efficient services.

  13. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  14. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

Recently the Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  15. An embedding for the big bang

    Science.gov (United States)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  16. Probing the pre-big bang universe

    International Nuclear Information System (INIS)

    Veneziano, G.

    2000-01-01

Superstring theory suggests a new cosmology whereby a long inflationary phase preceded a non-singular big bang-like event. After discussing how pre-big bang inflation naturally arises from an almost trivial initial state of the Universe, I will describe how present or near-future experiments can provide sensitive probes of how the Universe behaved in the pre-bang era

  17. Starting Small, Thinking Big - Continuum Magazine | NREL

    Science.gov (United States)

Starting Small, Thinking Big: NREL helps agencies target new federal sustainability goals and supports solar power in the territory. Photo by Don Buchanan, VIEO. NREL's cross-organizational work supports agencies that "have used these actions to optimize that energy use."

  18. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  19. Practice variation in Big-4 transparency reports

    NARCIS (Netherlands)

    Girdhar, Sakshi; Jeppesen, K.K.

    2018-01-01

    Purpose The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach The study draws on a

  20. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to ‘create’ an integration platform for big data analysis for smart farming and to develop a show case. This includes both

  1. A Twin Study of Normative Personality and DSM-IV Personality Disorder Criterion Counts: Evidence for Separate Genetic Influences.

    Science.gov (United States)

    Czajkowski, Nikolai; Aggen, Steven H; Krueger, Robert F; Kendler, Kenneth S; Neale, Michael C; Knudsen, Gun Peggy; Gillespie, Nathan A; Røysamb, Espen; Tambs, Kristian; Reichborn-Kjennerud, Ted

    2018-03-21

    Both normative personality and DSM-IV personality disorders have been found to be heritable. However, there is limited knowledge about the extent to which the genetic and environmental influences underlying DSM personality disorders are shared with those of normative personality. The aims of this study were to assess the phenotypic similarity between normative and pathological personality and to investigate the extent to which genetic and environmental influences underlying individual differences in normative personality account for symptom variance across DSM-IV personality disorders. A large population-based sample of adult twins was assessed for DSM-IV personality disorder criteria with structured interviews at two waves spanning a 10-year interval. At the second assessment, participants also completed the Big Five Inventory, a self-report instrument assessing the five-factor normative personality model. The proportion of genetic and environmental liabilities unique to the individual personality disorder measures, and hence not shared with the five Big Five Inventory domains, were estimated by means of multivariate Cholesky twin decompositions. The median percentage of genetic liability to the 10 DSM-IV personality disorders assessed at wave 1 that was not shared with the Big Five domains was 64%, whereas for the six personality disorders that were assessed concurrently at wave 2, the median was 39%. Conversely, the median proportions of unique environmental liability in the personality disorders for wave 1 and wave 2 were 97% and 96%, respectively. The results indicate that a moderate-to-sizable proportion of the genetic influence underlying DSM-IV personality disorders is not shared with the domain constructs of the Big Five model of normative personality. Caution should be exercised in assuming that normative personality measures can serve as proxies for DSM personality disorders when investigating the etiology of these disorders.

  2. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Cees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  3. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  4. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  5. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  6. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws...... on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports...... is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  7. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  8. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  9. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

General relativity predicts that space-time comes to an end and physics comes to a halt at the big bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  10. The Scope of Big Data in One Medicine: Unprecedented Opportunities and Challenges.

    Science.gov (United States)

    McCue, Molly E; McCoy, Annette M

    2017-01-01

Advances in high-throughput molecular biology and electronic health records (EHR), coupled with increasing computer capabilities have resulted in an increased interest in the use of big data in health care. Big data require collection and analysis of data at an unprecedented scale and represents a paradigm shift in health care, offering (1) the capacity to generate new knowledge more quickly than traditional scientific approaches; (2) unbiased collection and analysis of data; and (3) a holistic understanding of biology and pathophysiology. Big data promises more personalized and precision medicine for patients with improved accuracy and earlier diagnosis, and therapy tailored to an individual's unique combination of genes, environmental risk, and precise disease phenotype. This promise comes from data collected from numerous sources, ranging from molecules to cells, to tissues, to individuals and populations-and the integration of these data into networks that improve understanding of heath and disease. Big data-driven science should play a role in propelling comparative medicine and "one medicine" (i.e., the shared physiology, pathophysiology, and disease risk factors across species) forward. Merging of data from EHR across institutions will give access to patient data on a scale previously unimaginable, allowing for precise phenotype definition and objective evaluation of risk factors and response to therapy. High-throughput molecular data will give insight into previously unexplored molecular pathophysiology and disease etiology. Investigation and integration of big data from a variety of sources will result in stronger parallels drawn at the molecular level between human and animal disease, allow for predictive modeling of infectious disease and identification of key areas of intervention, and facilitate step-changes in our understanding of disease that can make a substantial impact on animal and human health. However, the use of big data comes with significant

  11. Identification and characterization of a dipeptidyl peptidase IV inhibitor from aronia juice

    Energy Technology Data Exchange (ETDEWEB)

    Kozuka, Miyuki [Department of Health and Nutrition, Faculty of Human Science, Hokkaido Bunkyo University, Eniwa 061-1449 (Japan); Yamane, Takuya, E-mail: t-yamane@pharm.hokudai.ac.jp [Faculty of Pharmaceutical Sciences, Hokkaido University, Kita-ku, Sapporo 060-0812 (Japan); Nakano, Yoshihisa [Center for Research and Development Bioresources, Research Organization for University-Community Collaborations, Osaka Prefecture University, Sakai, Osaka 599-8570 (Japan); Nakagaki, Takenori [Institute of Food Sciences, Nakagaki Consulting Engineer Co., Ltd, Nishi-ku, Sakai 593-8328 (Japan); Ohkubo, Iwao [Department of Nutrition, School of Nursing and Nutrition, Tenshi College, Higashi-ku, Sapporo 065-0013 (Japan); Ariga, Hiroyoshi [Faculty of Pharmaceutical Sciences, Hokkaido University, Kita-ku, Sapporo 060-0812 (Japan)

    2015-09-25

Aronia berries have many potential health effects, including antioxidant, antimutagenic, hepatoprotective and cardioprotective effects, an antidiabetic effect and inhibition of cancer cell proliferation. Previous human studies have shown that aronia juice may be useful for the treatment of obesity disorders. In this study, we found that aronia juice has an inhibitory effect against dipeptidyl peptidase IV (DPP IV) (EC 3.4.14.5). DPP IV is a peptidase that cleaves the N-terminal region of incretins such as glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). Inactivation of incretins by DPP IV induces a reduction of insulin secretion. Furthermore, we identified cyanidin 3,5-diglucoside as the DPP IV inhibitor in aronia juice. DPP IV was inhibited more strongly by cyanidin 3,5-diglucoside than by cyanidin or cyanidin 3-glucoside. The results suggest that DPP IV is inhibited by the cyanidin 3,5-diglucoside present in aronia juice. The antidiabetic effect of aronia juice may be mediated through DPP IV inhibition by cyanidin 3,5-diglucoside. - Highlights: • DPP IV activity is inhibited by aronia juice. • The DPP IV inhibitor in aronia juice is cyanidin 3,5-diglucoside. • DPP IV is inhibited more strongly by cyanidin 3,5-diglucoside than by cyanidin or cyanidin 3-glucoside.

  12. Big Data and Grand Challenges

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2016-01-01

The paper discusses the future of the Humanities and the role of Digital Humanities. Taking its point of departure in Rens Bod's "A New History of the Humanities" (2013), it is argued that the current crisis within the Humanities and Digital Humanities is not as much about different notions of cult...

  13. Intelligent search in Big Data

    Science.gov (United States)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case when information objects are represented as relational databases (RDBs), structurally marked by their schemes. The source of information for constructing an ontology and, later on, for organizing the search are texts in natural language, treated as semi-structured data; for the RDBs, these are the comments on the names of tables and their attributes. A formal definition of the RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil production subject domain ontology and a linguistic thesaurus of the subject domain language are built. A technique for the automatic generation of SQL queries for subject domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Exploitation of the system showed good relevance for the majority of queries.
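The thesaurus-driven query generation described above can be sketched as follows. The table names, column names and domain terms below are invented for illustration; the actual TATNEFT schema and ontology are not described in the abstract.

```python
# Hypothetical thesaurus built from RDB comments: domain term -> (table, column)
THESAURUS = {
    "well": ("wells", "well_id"),
    "oil rate": ("production", "oil_rate"),
    "date": ("production", "prod_date"),
}

def generate_sql(select_terms, filter_term, filter_value):
    """Translate domain-language terms into a SQL query via the thesaurus."""
    cols = [THESAURUS[t] for t in select_terms]
    tables = sorted({table for table, _ in cols})
    ftable, fcol = THESAURUS[filter_term]
    select = ", ".join(f"{t}.{c}" for t, c in cols)
    return (f"SELECT {select} FROM {', '.join(tables)} "
            f"WHERE {ftable}.{fcol} = '{filter_value}'")

sql = generate_sql(["well", "oil rate"], "date", "2017-10-01")
print(sql)
```

A real system would also need join conditions inferred from the ontology; the sketch only shows the term-to-schema translation step.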

  14. Big Data in Transport Geography

    DEFF Research Database (Denmark)

    Reinau, Kristian Hegner; Agerholm, Niels; Lahrmann, Harry Spaabæk

    for studies that explicitly compare the quality of this new type of data to traditional data sources. With the current focus on Big Data in the transport field, public transport planners are increasingly looking towards smart card data to analyze and optimize flows of passengers. However, in many cases...... it is not all public transport passengers in a city, region or country with a smart card system that uses the system, and in such cases, it is important to know what biases smart card data has in relation to giving a complete view upon passenger flows. This paper therefore analyses the quality and biases...... of smart card data in Denmark, where public transport passengers may use a smart card, may pay with cash for individual trips or may hold a season ticket for a certain route. By analyzing smart card data collected in Denmark in relation to data on sales of cash tickets, sales of season tickets, manual...

  15. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...
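The recovery mechanism described above, re-trying transiently failed jobs the way dropped TCP/IP packets are re-sent, can be sketched in a few lines. The failure model and job payload below are illustrative, not the actual ATLAS workflow.

```python
import random

def run_job(job_id, fail_rate=0.3):
    """Simulate one Grid job; a fraction of runs fail transiently."""
    if random.random() < fail_rate:
        raise RuntimeError(f"transient failure in job {job_id}")
    return f"output-{job_id}"

def run_task(n_jobs, max_retries=3):
    """Run a task split into many jobs, re-trying failed jobs automatically."""
    results = {}
    for job_id in range(n_jobs):
        for _attempt in range(max_retries + 1):
            try:
                results[job_id] = run_job(job_id)
                break  # job succeeded, like an acknowledged packet
            except RuntimeError:
                continue  # transient failure: re-try, like a re-sent packet
        else:
            raise RuntimeError(f"job {job_id} failed after {max_retries} re-tries")
    return results

random.seed(0)  # fixed seed so the simulation is repeatable
outputs = run_task(10)
print(len(outputs))  # 10: every job eventually succeeded
```

Because each job is small, a re-try repeats only a fine-grained unit of work, which is exactly the checkpointing benefit of splitting a task into jobs.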

  16. Was the Big Bang hot?

    Science.gov (United States)

    Wright, E. L.

    1983-01-01

    Techniques for verifying the spectrum defined by Woody and Richards (WR, 1981), which serves as a base for dust-distorted models of the 3 K background, are discussed. WR detected a sharp deviation from the Planck curve in the 3 K background. The absolute intensity of the background may be determined by the frequency dependence of the dipole anisotropy of the background or the frequency dependence effect in galactic clusters. Both methods involve the Doppler shift; analytical formulae are defined for characterization of the dipole anisotropy. The measurement of the 30-300 GHz spectra of cold galactic dust may reveal the presence of significant amounts of needle-shaped grains, which would in turn support a theory of a cold Big Bang.

  17. Big Bang nucleosynthesis in crisis?

    International Nuclear Information System (INIS)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P.; Bludman, S.; Langacker, P.

    1995-01-01

A new evaluation of the constraint on the number of light neutrino species (N_ν) from big bang nucleosynthesis suggests a discrepancy between the predicted light element abundances and those inferred from observations, unless the inferred primordial ⁴He abundance has been underestimated by 0.014±0.004 (1σ) or less than 10% (95% C.L.) of ³He survives stellar processing. With the quoted systematic errors in the observed abundances and a conservative chemical evolution parametrization, the best fit to the combined data is N_ν = 2.1±0.3 (1σ), with an upper limit excluding the standard model value (N_ν = 3) at the 98.6% C.L. copyright 1995 The American Physical Society

  18. Crystalline cerium(IV) phosphates

    International Nuclear Information System (INIS)

    Herman, R.G.; Clearfield, A.

    1976-01-01

The ion exchange behaviour of seven crystalline cerium(IV) phosphates towards some of the alkali metal cations is described. Only two of the compounds (A and C) possess ion exchange properties in acidic solutions. Four others show some ion exchange characteristics in basic media with some of the alkali cations. Compound G does not behave as an ion exchanger in solutions of pH + , but show very little Na+ uptake. Compound E undergoes ion exchange with Na+ and Cs+, but not with Li+. Both Li+ and Na+ are sorbed by compounds A and C. The results are indicative of structures which show steric exclusion phenomena. (author)

  19. PREPARATION OF OXOPORPHINATOMANGANESE (IV) COMPLEX

    Energy Technology Data Exchange (ETDEWEB)

    Willner, I.; Otvos, J.; Calvin, M.

    1980-07-01

Oxo-manganese-tetraphenylporphyrin (O=Mn{sup IV}-TPP) has been prepared by an oxygen-transfer reaction from iodosylbenzene to Mn{sup II}TPP and characterized by its i.r. and field desorption mass spectra, which are identical to those of the product obtained by direct oxidation of Mn{sup III}(TPP) in an aqueous medium; it transfers oxygen to triphenylphosphine to produce triphenylphosphine oxide, and it is suggested that similar intermediates are important in oxygen activation by cytochrome P-450 as well as in the photosynthetic evolution of oxygen.

  20. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves - e.g. in social networks - and digitalization keeps increasing. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  1. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageably vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing or development of new software tools. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, also in individual patients, is generating personal big data and requires new strategies for management in order to develop data-driven individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  2. Big Pharma: a former insider's view.

    Science.gov (United States)

    Badcott, David

    2013-05-01

    There is no lack of criticisms frequently levelled against the international pharmaceutical industry (Big Pharma): excessive profits, dubious or even dishonest practices, exploiting the sick and selective use of research data. Neither is there a shortage of examples used to support such opinions. A recent book by Brody (Hooked: Ethics, the Medical Profession and the Pharmaceutical Industry, 2008) provides a précis of the main areas of criticism, adopting a twofold strategy: (1) An assumption that the special nature and human need for pharmaceutical medicines requires that such products should not be treated like other commodities and (2) A multilevel descriptive approach that facilitates an ethical analysis of relationships and practices. At the same time, Brody is fully aware of the nature of the fundamental dilemma: the apparent addiction to (and denial of) the widespread availability of gifts and financial support for conferences etc., but recognises that 'Remove the industry and its products, and a considerable portion of scientific medicine's power to help the patient vanishes' (Brody 2008, p. 5). The paper explores some of the relevant issues, and argues that despite the identified shortcomings and a need for rigorous and perhaps enhanced regulation, and realistic price control, the commercially competitive pharmaceutical industry remains the best option for developing safer and more effective medicinal treatments. At the same time, adoption of a broader ethical basis for the industry's activities, such as a triple bottom line policy, would register an important move in the right direction and go some way toward answering critics.

  3. Lean Big Data integration in systems biology and systems pharmacology.

    Science.gov (United States)

    Ma'ayan, Avi; Rouillard, Andrew D; Clark, Neil R; Wang, Zichen; Duan, Qiaonan; Kou, Yan

    2014-09-01

    Data sets from recent large-scale projects can be integrated into one unified puzzle that can provide new insights into how drugs and genetic perturbations applied to human cells are linked to whole-organism phenotypes. Data that report how drugs affect the phenotype of human cell lines and how drugs induce changes in gene and protein expression in human cell lines can be combined with knowledge about human disease, side effects induced by drugs, and mouse phenotypes. Such data integration efforts can be achieved through the conversion of data from the various resources into single-node-type networks, gene-set libraries, or multipartite graphs. This approach can lead us to the identification of more relationships between genes, drugs, and phenotypes as well as benchmark computational and experimental methods. Overall, this lean 'Big Data' integration strategy will bring us closer toward the goal of realizing personalized medicine. Copyright © 2014 Elsevier Ltd. All rights reserved.
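One common form of the conversion described above is a gene-set library, where each drug or disease maps to a set of genes and links emerge from set overlap. A toy sketch, with invented gene sets purely for illustration:

```python
# Hypothetical gene-set libraries linking drugs and diseases to genes
drug_signatures = {
    "drugA": {"TP53", "EGFR", "MYC"},
    "drugB": {"INS", "GCG"},
}
disease_genes = {
    "diabetes": {"INS", "GCG", "DPP4"},
    "cancer": {"TP53", "MYC", "BRCA1"},
}

def jaccard(a, b):
    # Overlap between two gene sets: one simple way to score drug-phenotype links
    return len(a & b) / len(a | b)

# Score every drug-disease pair; high overlap suggests a candidate relationship
links = {
    (drug, disease): round(jaccard(g1, g2), 2)
    for drug, g1 in drug_signatures.items()
    for disease, g2 in disease_genes.items()
}
print(links[("drugB", "diabetes")])  # 0.67
```

Real resources use thousands of genes per set and statistical enrichment rather than raw Jaccard overlap, but the single-node-type network idea is the same.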

  4. SDN Low Latency for Medical Big Data Using Wavelets

    Directory of Open Access Journals (Sweden)

    Fadia Shah

    2017-06-01

Full Text Available The new era is the age of 5G. The network has moved from simple internet connections towards advanced LTE connections and transmission, and information and communication technology has reshaped telecommunication. Among the many types of big data, Medical Big Data is one of the most sensitive. Wavelets are a technical tool for reducing the size of such data, making it available to the user for longer; they also enable low-latency, high-speed data transmission over the network. The key concern is that Medical Big Data must be accurate and reliable enough for the recommended treatment to be the appropriate one. This paper proposes a scheme that supports data availability without loss of crucial information: compression of the medical data via wavelets, and availability over the wireless network through a supportive SDN architecture. Such a scheme favors the efficient use of technology for the benefit of human beings in support of medical treatments.
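The wavelet-compression idea above can be illustrated with a one-level Haar transform on a toy vital-signs series: small detail coefficients are zeroed, shrinking the payload while preserving the salient spike. This is a sketch of the general technique, not the paper's actual codec.

```python
def haar_step(signal):
    # One level of the Haar wavelet transform: pairwise averages and details
    avgs = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    dets = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avgs, dets

def compress(signal, threshold=2.0):
    # Zero out small detail coefficients: fewer nonzero values to transmit
    avgs, dets = haar_step(signal)
    return avgs, [d if abs(d) > threshold else 0.0 for d in dets]

def reconstruct(avgs, dets):
    # Invert the transform from the (possibly thresholded) coefficients
    out = []
    for a, d in zip(avgs, dets):
        out.extend([a + d, a - d])
    return out

# Toy heart-rate series; the 120/40 swing is the clinically salient feature
samples = [64.0, 66.0, 65.0, 65.0, 120.0, 40.0, 70.0, 70.0]
restored = reconstruct(*compress(samples))
print(restored)  # [65.0, 65.0, 65.0, 65.0, 120.0, 40.0, 70.0, 70.0]
```

Note that the ±1 ripple around 65 is smoothed away while the large swing survives; production systems apply several transform levels and entropy-code the sparse coefficients.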

  5. Psycho-informatics: Big Data shaping modern psychometrics.

    Science.gov (United States)

    Markowetz, Alexander; Błaszkiewicz, Konrad; Montag, Christian; Switala, Christina; Schlaepfer, Thomas E

    2014-04-01

For the first time in history, it is possible to study human behavior on a great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics, both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects logging and analyzing smartphone usage. One study attempts thereby to quantify the severity of major depression dynamically; the other investigates (mobile) Internet addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researchers and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have their own tribune on the topic. Perspectives and debates are flourishing while a consensus definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  7. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
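The MapReduce schema mentioned above can be mimicked with a word-count sketch in plain Python. This only models the map/shuffle/reduce phases; it is not Hadoop's actual API, where each phase runs in parallel across a cluster.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, 1) pair for every word, as a Hadoop mapper would
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key across mappers
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently (hence parallelizable)
    return {key: sum(values) for key, values in groups.items()}

data = ["big data big cluster", "big cluster"]
counts = reduce_phase(shuffle(map_phase(data)))
print(counts)  # {'big': 3, 'data': 1, 'cluster': 2}
```

Because each reducer touches only one key's group, the same program scales from this toy loop to thousands of machines, which is the core appeal of the schema.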

  8. Big Data as Governmentality in International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    2017-01-01

    Statistics have long shaped the field of visibility for the governance of development projects. The introduction of big data has altered the field of visibility. Employing Dean's “analytics of government” framework, we analyze two cases—malaria tracking in Kenya and monitoring of food prices...... in Indonesia. Our analysis shows that big data introduces a bias toward particular types of visualizations. What problems are being made visible through big data depends to some degree on how the underlying data is visualized and who is captured in the visualizations. It is also influenced by technical factors...

  9. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  10. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects. I also give an overview of ho...

  11. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  12. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple "crossover model" without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  13. Smart Information Management in Health Big Data.

    Science.gov (United States)

    Muteba A, Eustache

    2017-01-01

The smart information management system (SIMS) is concerned with the organization of anonymous patient records in a big data setting and their extraction in order to provide needful real-time intelligence. The purpose of the present study is to highlight the design and implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files simulating a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. The SIMS in health big data aims at the identification of new therapies and approaches to delivering care.
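A minimal sketch of the lookup-table-plus-cache extraction described above. The record fields and patient ids are invented for illustration; the actual SIMS layout is not specified in the abstract.

```python
class RecordStore:
    """Flat-file-style record store with a lookup table and a query cache."""

    def __init__(self, records):
        self.records = records
        # Lookup table: patient id -> position in the flat record list
        self.index = {rec["id"]: pos for pos, rec in enumerate(records)}
        self.cache = {}

    def get(self, patient_id):
        # Serve repeated queries from the cache; a miss uses the lookup table
        if patient_id not in self.cache:
            self.cache[patient_id] = self.records[self.index[patient_id]]
        return self.cache[patient_id]

store = RecordStore([{"id": "p1", "therapy": "A"},
                     {"id": "p2", "therapy": "B"}])
print(store.get("p2")["therapy"])  # B
```

The lookup table turns a linear scan of the flat file into a constant-time access, and the cache avoids even that for hot records, which is what enables the real-time intelligence the system targets.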

  14. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    Directory of Open Access Journals (Sweden)

    Dhiraj Murthy

    2014-11-01

    Full Text Available Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, the accessibility of the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay as you store/process services like Amazon Web Services have opened up possibilities for smaller scale Big Data projects. There is high demand for this type of research in the digital humanities and digital sociology, for example. However, scholars are increasingly finding themselves at a disadvantage as available data sets of interest continue to grow in size and complexity. Without a large amount of funding or the ability to form interdisciplinary partnerships, only a select few find themselves in the position to successfully engage Big Data. This article identifies several notable and popular Big Data technologies typically implemented using large and extremely powerful cloud-based systems and investigates the feasibility and utility of development of Big Data analytics systems implemented using low-cost commodity hardware in basic and easily maintainable configurations for use within academic social research. Through our investigation and experimental case study (in the growing field of social Twitter analytics, we found that not only are solutions like Cloudera’s Hadoop feasible, but that they can also enable robust, deep, and fruitful research outcomes in a variety of use-case scenarios across the disciplines.

  15. Test Review: Advanced Clinical Solutions for WAIS-IV and WMS-IV

    Science.gov (United States)

    Chu, Yiting; Lai, Mark H. C.; Xu, Yining; Zhou, Yuanyuan

    2012-01-01

    The authors review the "Advanced Clinical Solutions for WAIS-IV and WMS-IV". The "Advanced Clinical Solutions (ACS) for the Wechsler Adult Intelligence Scale-Fourth Edition" (WAIS-IV; Wechsler, 2008) and the "Wechsler Memory Scale-Fourth Edition" (WMS-IV; Wechsler, 2009) was published by Pearson in 2009. It is a…

  16. Big questions, big science: meeting the challenges of global ecology.

    Science.gov (United States)

    Schimel, David; Keller, Michael

    2015-04-01

Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets, than can be collected by a single investigator's or a group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost - the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under pressure from external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.

  17. Inhibition of PAF-induced expression of CD11b and shedding of L-selectin on human neutrophils and eosinophils by the type IV selective PDE inhibitor, rolipram

    NARCIS (Netherlands)

    Dijkhuizen, B; deMonchy, JGR; Dubois, AEJ; Gerritsen, J; Kauffman, HF

    We quantitatively determined whether the selective phosphodiesterase (PDE) inhibitor, rolipram, inhibits changes in the adhesion molecules CD11b and L-selectin on platelet-activating factor (PAF)-stimulated human neutrophils and eosinophils in vitro. Incubations were performed in human whole blood

  18. Dipeptidyl peptidase IV inhibitors derived from a mangrove flora Rhizophora mucronata: An in silico approach

    Directory of Open Access Journals (Sweden)

    Selvaraj Gurudeeban

    2012-08-01

Full Text Available Dipeptidyl peptidase IV (DPP IV) is responsible for the conversion of glucagon-like peptide-1 (GLP-1) into its inactive form. Inhibition of DPP IV would be beneficial in the treatment of diabetes mellitus. Therefore, the aim of the present study was to isolate cystine, phenyl acetic acid, acrylamide, caprylone and oleic acid from Rhizophora mucronata and to evaluate their inhibitory action on DPP IV using an in silico approach. In silico analysis of cystine, phenyl acetic acid, acrylamide, caprylone and oleic acid against the human apo DPP IV protein was done using AutoDock 4.0. Among the five compounds, cystine acts as an inhibitor, with a binding energy of -5.89 kcal/mol and seven hydrogen bond interactions at residues VAL459, VAL459, GLU408, GLU206, ARG358, GLU205 and SER209, suppressing the action of the DPP IV protein.

  19. NOAA Big Data Partnership RFI

    Science.gov (United States)

    de la Beaujardiere, J.

    2014-12-01

    In February 2014, the US National Oceanic and Atmospheric Administration (NOAA) issued a Big Data Request for Information (RFI) from industry and other organizations (e.g., non-profits, research laboratories, and universities) to assess capability and interest in establishing partnerships to position a copy of NOAA's vast data holdings in the Cloud, co-located with easy and affordable access to analytical capabilities. This RFI was motivated by a number of concerns. First, NOAA's data facilities do not necessarily have sufficient network infrastructure to transmit all available observations and numerical model outputs to all potential users, or sufficient infrastructure to support simultaneous computation by many users. Second, the available data are distributed across multiple services and data facilities, making it difficult to find and integrate data for cross-domain analysis and decision-making. Third, large datasets require users to have substantial network, storage, and computing capabilities of their own in order to fully interact with and exploit the latent value of the data. Finally, there may be commercial opportunities for value-added products and services derived from our data. Putting a working copy of data in the Cloud outside of NOAA's internal networks and infrastructures should reduce demands and risks on our systems, and should enable users to interact with multiple datasets and create new lines of business (much like the industries built on government-furnished weather or GPS data). The NOAA Big Data RFI therefore solicited information on technical and business approaches regarding possible partnership(s) that -- at no net cost to the government and minimum impact on existing data facilities -- would unleash the commercial potential of its environmental observations and model outputs. NOAA would retain the master archival copy of its data. Commercial partners would not be permitted to charge fees for access to the NOAA data they receive, but

  20. BIG SKY CARBON SEQUESTRATION PARTNERSHIP

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2005-01-31

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, is comprised of research institutions, public entities and private sectors organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership in Phase I fall into four areas: evaluation of sources and carbon sequestration sinks that will be used to determine the location of pilot demonstrations in Phase II; development of GIS-based reporting framework that links with national networks; designing an integrated suite of monitoring, measuring, and verification technologies and assessment frameworks; and initiating a comprehensive education and outreach program. The groundwork is in place to provide an assessment of storage capabilities for CO{sub 2} utilizing the resources found in the Partnership region (both geological and terrestrial sinks), that would complement the ongoing DOE research. Efforts are underway to showcase the architecture of the GIS framework and initial results for sources and sinks. The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other western DOE partnerships. The Partnership recognizes the critical importance of measurement, monitoring, and verification technologies to support not only carbon trading but all policies and programs that DOE and other agencies may want to pursue in support of GHG mitigation. The efforts in developing and implementing MMV technologies for geological sequestration reflect this concern. 
Research is