WorldWideScience

Sample records for program big beaver

  1. The coal deposits of the Alkali Butte, the Big Sand Draw, and the Beaver Creek fields, Fremont County, Wyoming

    Science.gov (United States)

    Thompson, Raymond M.; White, Vincent L.

    1952-01-01

    northwestward to the Wind River. This report is based almost entirely upon geologic investigations made in 1949 as a part of the program of the Department of the Interior for development of the Missouri River basin. Some coal sections were measured in 1950 and the additional information on the Big Sand Draw coal field was obtained in 1951. A geologic map of the Beaver Creek field was not prepared for this report because most of the significant coal occurs below a depth of 1,400 ft and is not exposed on the surface. Mr. George Downey, Lander, Wyo., supplied much helpful information on the Big Sand Draw coal field and the area in general. Topographic contours shown on figures 11, 12, 13, and 14 are from unpublished plane-table sheets made by E. D. Woodruff in 1912. Previous geologic investigations of the region have been made by E. G. Woodruff and D. E. Winchester (1912), by C. J. Hares (1916), by A. J. Collier (1920), and C. M. Bauer (1934). Except for the work of Woodruff and Winchester, which was an areal examination for the purpose of classifying the public lands, the geological investigations were of a general nature and give little detail of the coal beds. Berryhill (1950) summarizes Woodruff and Winchester's work.

  2. A linear programming model of diet choice of free-living beavers

    NARCIS (Netherlands)

    Nolet, BA; VanderVeer, PJ; Evers, EGJ; Ottenheim, MM

    1995-01-01

    Linear programming has been remarkably successful in predicting the diet choice of generalist herbivores. We used this technique to test the diet choice of free-living beavers (Castor fiber) in the Biesbosch (The Netherlands) under different foraging goals, i.e. maximization of intake of energy,
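
    The abstract does not reproduce the model itself, but a linear program of this shape is easy to sketch. Below is a minimal, hypothetical Python (SciPy) example with made-up foods, coefficients, and constraints, illustrating the energy-maximization goal mentioned above; it is a sketch, not the authors' actual model.

        # Hypothetical linear-programming diet model in the spirit of the
        # abstract above; all coefficients are invented for illustration.
        from scipy.optimize import linprog

        # Decision variables: daily intake (kg dry matter) of two foods,
        # x0 = woody browse (e.g., willow bark), x1 = herbaceous plants.
        c = [-9.0, -5.5]  # energy yield in MJ/kg, negated because linprog minimizes

        # Constraints, A_ub @ x <= b_ub:
        #   digestive capacity:   x0 +   x1 <= 1.2 kg dry matter per day
        #   foraging time:      6*x0 + 1*x1 <= 6 hours per day
        A_ub = [[1.0, 1.0],
                [6.0, 1.0]]
        b_ub = [1.2, 6.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("optimal diet (kg/day):", res.x)       # [0.96, 0.24]
        print("energy intake (MJ/day):", -res.fun)   # 9.96

    With these invented numbers both constraints bind, so the predicted diet mixes the two foods; changing the foraging goal amounts to swapping the objective vector while keeping the constraint set.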

  3. Seventh International Beaver Symposium

    Directory of Open Access Journals (Sweden)

    Yuri A. Gorshkov

    2016-05-01

    The paper presents data on the Seventh International Beaver Symposium. A brief historical background on the previous Beaver Symposia is given, and data on the symposium's sections and the numbers of participants and reports are presented.

  4. A Big Data Analytics Methodology Program in the Health Sector

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  5. Busy beavers gone wild

    Directory of Open Access Journals (Sweden)

    Grégory Lafitte

    2009-06-01

    We show some incompleteness results à la Chaitin using the busy beaver functions. Then, with the help of ordinal logics, we show how to obtain a theory in which the values of the busy beaver functions can be provably established and use this to reveal a structure on the provability of the values of these functions.
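
    For context, the busy beaver function invoked above is usually defined (in the standard 2-symbol convention; the paper's exact conventions may differ) as

        Σ(n) = max{ σ(M) : M is an n-state, 2-symbol Turing machine that halts when started on a blank tape },

    where σ(M) is the number of 1s that M leaves on its tape; the first few values are Σ(1) = 1, Σ(2) = 4, Σ(3) = 6, and Σ(4) = 13. Because Σ eventually dominates every computable function, a fixed consistent theory can prove only finitely many true statements of the form Σ(n) = k, which is the Chaitin-style incompleteness phenomenon the abstract builds on.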

  6. Beaver County Crash Data

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Contains locations and information about every crash incident reported to the police in Beaver County from 2011 to 2015. Fields include injury severity, fatalities,...

  7. Evolution of the Air Toxics under the Big Sky Program

    Science.gov (United States)

    Marra, Nancy; Vanek, Diana; Hester, Carolyn; Holian, Andrij; Ward, Tony; Adams, Earle; Knuth, Randy

    2011-01-01

    As a yearlong exploration of air quality and its relation to respiratory health, the "Air Toxics Under the Big Sky" program offers opportunities for students to learn and apply science process skills through self-designed inquiry-based research projects conducted within their communities. The program follows a systematic scope and sequence…

  8. The genetic legacy of multiple beaver reintroductions in Central Europe.

    Directory of Open Access Journals (Sweden)

    Christiane Frosch

    The comeback of the Eurasian beaver (Castor fiber) throughout western and central Europe is considered a major conservation success. Traditionally, several subspecies are recognised by morphology and mitochondrial haplotype, each linked to a relict population. During various reintroduction programs in the 20th century, beavers from multiple source localities were released and now form viable populations. These programs differed in their reintroduction strategies, i.e., using pure subspecies vs. mixed source populations. This inhomogeneity in management actions generated ongoing debates regarding the origin of present beaver populations and appropriate management plans for the future. By sequencing of the mitochondrial control region and microsatellite genotyping of 235 beaver individuals from five selected regions in Germany, Switzerland, Luxembourg, and Belgium we show that beavers from at least four source origins currently form admixed, genetically diverse populations that spread across the study region. While regional occurrences of invasive North American beavers (n = 20) were found, all but one C. fiber bore the mitochondrial haplotype of the autochthonous western Evolutionary Significant Unit (ESU). Considering this, as well as the viability of admixed populations and the fact that the fusion of different lineages is already progressing in all studied regions, we argue that admixture between different beaver source populations should be generally accepted.

  9. Quantifying the multiple, environmental benefits of reintroducing the Eurasian Beaver

    Science.gov (United States)

    Brazier, Richard; Puttock, Alan; Graham, Hugh; Anderson, Karen; Cunliffe, Andrew; Elliott, Mark

    2016-04-01

    Secondly, the River Otter Beaver Trial will be discussed. In 2015 Natural England granted a five-year licence to monitor beavers living wild upon the River Otter, Devon. The River Otter, ca. 280 km², is a dynamic, spatey system with downstream areas exhibiting poor ecological status, primarily due to sediment and phosphorus loading, which both impact on fish numbers. The impacts of the Eurasian Beaver upon English river systems are currently poorly understood, with the outcome of this pilot study having significant implications for river restoration and management. This project, the first of its kind in England, is monitoring the impacts of beavers upon the River Otter catchment with three main scientific objectives: (1) characterise the existing structure of the River Otter riparian zone and quantify any changes during the 2015-2019 period; (2) quantify the impact of beaver activity on water flow at a range of scales in the Otter catchment; (3) evaluate the impact of beaver activity on water quality. Finally, lessons learnt from these monitoring programs will be discussed in light of the need for more natural solutions to flood and diffuse pollution management. We conclude that whilst our work demonstrates multiple positive benefits of beaver reintroduction, considerably more scale-appropriate monitoring is required before such results could be extrapolated to landscape scales.

  10. Big Bayou Creek and Little Bayou Creek Watershed Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Kszos, L.A.; Peterson, M.J.; Ryon; Smith, J.G.

    1999-03-01

    Biological monitoring of Little Bayou and Big Bayou creeks, which border the Paducah Site, has been conducted since 1987. Biological monitoring was conducted by the University of Kentucky from 1987 to 1991 and by staff of the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) from 1991 through March 1999. In March 1998, renewed Kentucky Pollutant Discharge Elimination System (KPDES) permits were issued to the US Department of Energy (DOE) and US Enrichment Corporation. The renewed DOE permit requires that a watershed monitoring program be developed for the Paducah Site within 90 days of the effective date of the renewed permit. This plan outlines the sampling and analysis that will be conducted for the watershed monitoring program. The objectives of the watershed monitoring are to (1) determine whether discharges from the Paducah Site and the Solid Waste Management Units (SWMUs) associated with the Paducah Site are adversely affecting instream fauna, (2) assess the ecological health of Little Bayou and Big Bayou creeks, (3) assess the degree to which abatement actions ecologically benefit Big Bayou Creek and Little Bayou Creek, (4) provide guidance for remediation, (5) provide an evaluation of changes in potential human health concerns, and (6) provide data which could be used to assess the impact of inadvertent spills or fish kills. According to the cleanup will result in these watersheds [Big Bayou and Little Bayou creeks] achieving compliance with the applicable water quality criteria.

  11. Beaver assisted river valley formation

    Science.gov (United States)

    Westbrook, C.J.; Cooper, D.J.; Baker, B.W.

    2011-01-01

    We examined how beaver dams affect key ecosystem processes, including pattern and process of sediment deposition, the composition and spatial pattern of vegetation, and nutrient loading and processing. We provide new evidence for the formation of heterogeneous beaver meadows on riverine system floodplains and terraces where dynamic flows are capable of breaching in-channel beaver dams. Our data show a 1.7-m high beaver dam triggered overbank flooding that drowned vegetation in areas deeply flooded, deposited nutrient-rich sediment in a spatially heterogeneous pattern on the floodplain and terrace, and scoured soils in other areas. The site quickly de-watered following the dam breach by high stream flows, protecting the deposited sediment from future re-mobilization by overbank floods. Bare sediment either exposed by scouring or deposited by the beaver flood was quickly colonized by a spatially heterogeneous plant community, forming a beaver meadow. Many willow and some aspen seedlings established in the more heavily disturbed areas, suggesting the site may succeed to a willow carr plant community suitable for future beaver re-occupation. We expand existing theory beyond the beaver pond to include terraces within valleys. This more fully explains how beavers can help drive the formation of alluvial valleys and their complex vegetation patterns as was first postulated by Ruedemann and Schoonmaker in 1938. © 2010 John Wiley & Sons, Ltd.

  12. Big Data: Are Biomedical and Health Informatics Training Programs Ready?

    Science.gov (United States)

    Hersh, W.; Ganesh, A. U. Jai

    2014-01-01

    Objectives: The growing volume and diversity of health and biomedical data indicate that the era of Big Data has arrived for healthcare. This has many implications for informatics, not only in terms of implementing and evaluating information systems, but also for the work and training of informatics researchers and professionals. This article addresses the question: What do biomedical and health informaticians working in analytics and Big Data need to know? Methods: We hypothesize a set of skills that we hope will be discussed among academic and other informaticians. Results: The set of skills includes: Programming - especially with data-oriented tools, such as SQL and statistical programming languages; Statistics - working knowledge to apply tools and techniques; Domain knowledge - depending on one’s area of work, bioscience or health care; and Communication - being able to understand needs of people and organizations, and articulate results back to them. Conclusions: Biomedical and health informatics educational programs must introduce concepts of analytics, Big Data, and the underlying skills to use and apply them into their curricula. The development of new coursework should focus on those who will become experts, with training aiming to provide skills in “deep analytical talent” as well as those who need knowledge to support such individuals. PMID:25123740

  13. Are youth mentoring programs good value-for-money? An evaluation of the Big Brothers Big Sisters Melbourne Program.

    Science.gov (United States)

    Moodie, Marjory L; Fisher, Jane

    2009-01-30

    The Big Brothers Big Sisters (BBBS) program matches vulnerable young people with a trained, supervised adult volunteer as mentor. The young people are typically seriously disadvantaged, with multiple psychosocial problems. Threshold analysis was undertaken to determine whether investment in the program was a worthwhile use of limited public funds. The potential cost savings were based on US estimates of life-time costs associated with high-risk youth who drop out-of-school and become adult criminals. The intervention was modelled for children aged 10-14 years residing in Melbourne in 2004. If the program serviced 2,208 of the most vulnerable young people, it would cost AUD 39.5 M. Assuming 50% were high-risk, the associated costs of their adult criminality would be AUD 3.3 billion. To break even, the program would need to avert high-risk behaviours in only 1.3% (14/1,104) of participants. This indicative evaluation suggests that the BBBS program represents excellent 'value for money'.
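
    As a quick check, the break-even figures reported above can be reproduced directly from the numbers in the abstract (a sketch; the authors' exact rounding procedure is an assumption):

        # Break-even arithmetic from the abstract: AUD 39.5M program cost,
        # 1,104 high-risk participants, AUD 3.3B lifetime criminality costs.
        import math

        program_cost = 39.5e6     # AUD, to serve 2,208 young people
        high_risk = 2208 // 2     # 50% assumed high-risk -> 1,104
        lifetime_cost = 3.3e9     # AUD, adult criminality costs of all 1,104

        cost_per_high_risk = lifetime_cost / high_risk              # ~AUD 2.99M each
        break_even = math.ceil(program_cost / cost_per_high_risk)

        print(break_even)                       # 14 participants
        print(f"{break_even / high_risk:.1%}")  # 1.3%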

  14. Big Data: Big Confusion? Big Challenges?

    Science.gov (United States)

    2015-05-01

    Big Data: Big Confusion? Big Challenges? Presented at the 12th Annual Acquisition Research Symposium, May 2015, by Mary Maureen...

  15. Can Viral Videos Help Beaver Restore Streams?

    Science.gov (United States)

    Castro, J. M.; Pollock, M. M.; Lewallen, G.; Jordan, C.; Woodruff, K.

    2015-12-01

    Have you watched YouTube lately? Did you notice the plethora of cute animal videos? Researchers, including members of our Beaver Restoration Research team, have been studying the restoration potential of beaver for decades, yet in the past few years, beaver have gained broad acclaim and some much deserved credit for restoration of aquatic systems in North America. Is it because people can now see these charismatic critters in action from the comfort of their laptops? While the newly released Beaver Restoration Guidebook attempts to answer many questions, sadly, this is not one of them. We do, however, address the use of beaver (Castor canadensis) in stream, wetland, and floodplain restoration and discuss the many positive effects of beaver on fluvial ecosystems. Our team, composed of researchers from NOAA National Marine Fisheries Service, US Fish and Wildlife Service, US Forest Service, and Portland State University, has developed a scientifically rigorous, yet accessible, practitioner's guide that provides a synthesis of the best available science for using beaver to improve ecosystem functions. Divided into two broad sections -- Beaver Ecology and Beaver Restoration and Management -- the guidebook focuses on the many ways in which beaver improve habitat, primarily through the construction of dams that impound water and retain sediment. In Beaver Ecology, we open with a discussion of the general effects that beaver dams have on physical and biological processes, and we close with "Frequently Asked Questions" and "Myth Busters". In Restoration and Management, we discuss common emerging restoration techniques and methods for mitigating unwanted beaver effects, followed by case studies from pioneering practitioners who have used many of these beaver restoration techniques in the field. The lessons they have learned will help guide future restoration efforts. We have also included a comprehensive beaver ecology library of over 1400 references from scientific journals

  16. Annual Big Game Hunting Program : Parker River National Wildlife Refuge : CY 1993

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This 1993 Annual Big Game Hunting Program outlines the reasons and regulations for white-tailed deer hunting on Parker River National Wildlife Refuge. The...

  17. Annual Big Game Hunting Program : Parker River National Wildlife Refuge : CY 1990

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This 1990 Annual Big Game Hunting Program outlines the reasons and regulations for white-tailed deer hunting on Parker River National Wildlife Refuge. The...

  18. A rare Uroglena bloom in Beaver Lake, Arkansas, spring 2015

    Science.gov (United States)

    Green, William R.; Hufhines, Brad

    2017-01-01

    A combination of factors triggered a Uroglena volvox bloom and taste and odor event in Beaver Lake, a water-supply reservoir in northwest Arkansas, in late April 2015. Factors contributing to the bloom included increased rainfall and runoff containing increased concentrations of dissolved organic carbon, followed by a stable pool, low nutrient concentrations, and an expansion of lake surface area and littoral zone. This was the first time U. volvox was identified in Beaver Lake and the first time it was recognized as a source of taste and odor. Routine water quality samples happened to be collected by the US Geological Survey and the Beaver Water District throughout the reservoir during the bloom. Higher than normal rainfall in March 2015 increased the pool elevation in Beaver Lake by 2.3 m (by early April), increased the surface area by 10%, and increased the littoral zone by 1214 ha; these conditions persisted for 38 days, resulting from flood water being retained behind the dam. Monitoring programs that cover a wide range of reservoir features, including dissolved organic carbon, zooplankton, and phytoplankton, are valuable in explaining unusual events such as this Uroglena bloom.

  19. Haematology and Serum Biochemistry Parameters and Variations in the Eurasian Beaver (Castor fiber).

    Directory of Open Access Journals (Sweden)

    Simon J Girling

    Haematology parameters (N = 24) and serum biochemistry parameters (N = 35) were determined for wild Eurasian beavers (Castor fiber) between 6 months and 12 years old. Of the population tested in this study, N = 18 Eurasian beavers were from Norway and N = 17 originated from Bavaria but now live extensively in a reserve in England. All blood samples were collected from beavers via the ventral tail vein. All beavers were chemically restrained using inhalant isoflurane in 100% oxygen prior to blood sampling. Results were determined for haematological and serum biochemical parameters for the species and were compared between the two different populations, with differences in means estimated and significant differences noted. Standard blood parameters for the Eurasian beaver were determined and their ranges characterised using percentiles. Whilst the majority of blood parameters between the two populations showed no significant variation, haemoglobin, packed cell volume, mean cell haemoglobin and white blood cell counts showed significantly greater values (p<0.01) in the Bavarian-origin population than the Norwegian; neutrophil counts, alpha 2 globulins, cholesterol, sodium:potassium ratios and phosphorus levels showed significantly greater values (p<0.05) in the Bavarian versus the Norwegian population; and potassium, bile acids, gamma globulins, urea, creatinine and total calcium levels showed significantly greater values (p<0.05) in the Norwegian versus the Bavarian relict population. No significant differences were noted between male and female beavers or between sexually immature (<3 years old) and sexually mature (≥3 years old) beavers in the animals sampled. With Eurasian beaver reintroduction encouraged by legislation throughout Europe, knowledge of baseline blood values for the species and any variations therein is essential when assessing their health and welfare and the success or failure of any reintroduction program. This is the first study to produce

  20. Beaver Census on the Erie Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The objective of this study was to determine an approximate population number for beaver (Castor canadensis) on the Sugar Lake division of the Erie Wildlife Refuge....

  1. Sustaining Employability: A Process for Introducing Cloud Computing, Big Data, Social Networks, Mobile Programming and Cybersecurity into Academic Curricula

    National Research Council Canada - National Science Library

    Razvan Bologa; Ana-Ramona Lupu; Catalin Boja; Tiberiu Marian Georgescu

    2017-01-01

    ... curricula of business students: cloud computing, big data, mobile programming, and social networks and cybersecurity (CAMSS). The results are useful for those trying to implement similar curricular reforms, or to companies that need to manage talent pipelines.

  2. Recycling at Penn State's Beaver Stadium. "Recycle on the Go" Success Story

    Science.gov (United States)

    US Environmental Protection Agency, 2009

    2009-01-01

    With a 13-year-old recycling program, The Pennsylvania State University's (Penn State) Beaver Stadium in the past diverted nearly 30 tons of recyclables per year from local landfills. A new initiative to promote recycling in the stadium's tailgating area has helped Penn State more than triple its old recycling record, collecting 112 tons in 2008.…

  3. LSVT LOUD and LSVT BIG: Behavioral Treatment Programs for Speech and Body Movement in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Cynthia Fox

    2012-01-01

    Recent advances in neuroscience have suggested that exercise-based behavioral treatments may improve function and possibly slow progression of motor symptoms in individuals with Parkinson disease (PD). The LSVT (Lee Silverman Voice Treatment) Programs for individuals with PD have been developed and researched over the past 20 years, beginning with a focus on the speech motor system (LSVT LOUD) and more recently extended to address limb motor systems (LSVT BIG). The unique aspects of the LSVT Programs include the combination of (a) an exclusive target on increasing amplitude (loudness in the speech motor system; bigger movements in the limb motor system), (b) a focus on sensory recalibration to help patients recognize that movements with increased amplitude are within normal limits, even if they feel “too loud” or “too big,” and (c) training self-cueing and attention to action to facilitate long-term maintenance of treatment outcomes. In addition, the intensive mode of delivery is consistent with principles that drive activity-dependent neuroplasticity and motor learning. The purpose of this paper is to provide an integrative discussion of the LSVT Programs including the rationale for their fundamentals, a summary of efficacy data, and a discussion of limitations and future directions for research.

  4. Where and How Wolves (Canis lupus) Kill Beavers (Castor canadensis).

    Science.gov (United States)

    Gable, Thomas D; Windels, Steve K; Bruggink, John G; Homkes, Austin T

    2016-01-01

    Beavers (Castor canadensis) can be a significant prey item for wolves (Canis lupus) in boreal ecosystems due to their abundance and vulnerability on land. How wolves hunt beavers in these systems is largely unknown, however, because observing predation is challenging. We inferred how wolves hunt beavers by identifying kill sites using clusters of locations from GPS-collared wolves in Voyageurs National Park, Minnesota. We identified 22 sites where wolves from 4 different packs killed beavers. We classified these kill sites into 8 categories based on the beaver-habitat type near which each kill occurred. Seasonal variation existed in types of kill sites as 7 of 12 (58%) kills in the spring occurred at sites below dams and on shorelines, and 8 of 10 (80%) kills in the fall occurred near feeding trails and canals. From these kill sites we deduced that the typical hunting strategy has 3 components: 1) waiting near areas of high beaver use (e.g., feeding trails) until a beaver comes near shore or ashore, 2) using vegetation, the dam, or other habitat features for concealment, and 3) immediately attacking the beaver, or ambushing the beaver by cutting off access to water. By identifying kill sites and inferring hunting behavior we have provided the most complete description available of how and where wolves hunt and kill beavers.

  5. Where and How Wolves (Canis lupus) Kill Beavers (Castor canadensis).

    Directory of Open Access Journals (Sweden)

    Thomas D Gable

    Beavers (Castor canadensis) can be a significant prey item for wolves (Canis lupus) in boreal ecosystems due to their abundance and vulnerability on land. How wolves hunt beavers in these systems is largely unknown, however, because observing predation is challenging. We inferred how wolves hunt beavers by identifying kill sites using clusters of locations from GPS-collared wolves in Voyageurs National Park, Minnesota. We identified 22 sites where wolves from 4 different packs killed beavers. We classified these kill sites into 8 categories based on the beaver-habitat type near which each kill occurred. Seasonal variation existed in types of kill sites as 7 of 12 (58%) kills in the spring occurred at sites below dams and on shorelines, and 8 of 10 (80%) kills in the fall occurred near feeding trails and canals. From these kill sites we deduced that the typical hunting strategy has 3 components: 1) waiting near areas of high beaver use (e.g., feeding trails) until a beaver comes near shore or ashore, 2) using vegetation, the dam, or other habitat features for concealment, and 3) immediately attacking the beaver, or ambushing the beaver by cutting off access to water. By identifying kill sites and inferring hunting behavior we have provided the most complete description available of how and where wolves hunt and kill beavers.

  6. Competition favors elk over beaver in a riparian willow ecosystem

    Science.gov (United States)

    Baker, B.W.; Peinetti, H.R.; Coughenour, M.C.; Johnson, T.L.

    2012-01-01

    Beaver (Castor spp.) conservation requires an understanding of their complex interactions with competing herbivores. Simulation modeling offers a controlled environment to examine long-term dynamics in ecosystems driven by uncontrollable variables. We used a new version of the SAVANNA ecosystem model to investigate beaver (C. canadensis) and elk (Cervus elaphus) competition for willow (Salix spp.). We initialized the model with field data from Rocky Mountain National Park, Colorado, USA, to simulate a 4-ha riparian ecosystem containing beaver, elk, and willow. We found beaver persisted indefinitely when elk density was or = 30 elk km⁻². The loss of tall willow preceded rapid beaver declines, thus willow condition may predict beaver population trajectory in natural environments. Beaver were able to persist with slightly higher elk densities if beaver alternated their use of foraging sites in a rest-rotation pattern rather than maintained continuous use. Thus, we found asymmetrical competition for willow strongly favored elk over beaver in a simulated montane ecosystem. Finally, we discuss application of the SAVANNA model and mechanisms of competition relative to beaver persistence as metapopulations, ecological resistance and alternative state models, and ecosystem regulation.

  7. Supporting Imagers' VOICE: A National Training Program in Comparative Effectiveness Research and Big Data Analytics.

    Science.gov (United States)

    Kang, Stella K; Rawson, James V; Recht, Michael P

    2017-12-05

    Provided methodologic training, more imagers can contribute to the evidence basis on improved health outcomes and value in diagnostic imaging. The Value of Imaging Through Comparative Effectiveness Research Program was developed to provide hands-on, practical training in five core areas for comparative effectiveness and big biomedical data research: decision analysis, cost-effectiveness analysis, evidence synthesis, big data principles, and applications of big data analytics. The program's mixed format consists of web-based modules for asynchronous learning as well as in-person sessions for practical skills and group discussion. Seven diagnostic radiology subspecialties and cardiology are represented in the first group of program participants, showing the collective potential for greater depth of comparative effectiveness research in the imaging community. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  8. Comparing and evaluating terminology services application programming interfaces: RxNav, UMLSKS and LexBIG.

    Science.gov (United States)

    Pathak, Jyotishman; Peters, Lee; Chute, Christopher G; Bodenreider, Olivier

    2010-01-01

    To facilitate the integration of terminologies into applications, various terminology services application programming interfaces (API) have been developed in the recent past. In this study, three publicly available terminology services API, RxNav, UMLSKS and LexBIG, are compared and functionally evaluated with respect to the retrieval of information from one biomedical terminology, RxNorm, to which all three services provide access. A list of queries is established covering a wide spectrum of terminology services functionalities such as finding RxNorm concepts by their name, or navigating different types of relationships. Test data were generated from the RxNorm dataset to evaluate the implementation of the functionalities in the three API. The results revealed issues with various aspects of the API implementation (eg, handling of obsolete terms by LexBIG) and documentation (eg, navigational paths used in RxNav) that were subsequently addressed by the development teams of the three API investigated. Knowledge about such discrepancies helps inform the choice of an API for a given use case.
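
    For illustration, the kind of name-based concept lookup the study evaluates can be performed against RxNav's public REST interface. The sketch below uses the present-day REST endpoint, which postdates the 2010-era API versions compared in the paper, so the exact path and response shape should be treated as assumptions rather than as the interfaces the authors tested:

        # Find RxNorm concept identifiers (RxCUIs) by drug name via RxNav's
        # REST interface; endpoint path and JSON shape assumed per the current
        # public documentation, not the API versions evaluated in the study.
        import requests

        BASE = "https://rxnav.nlm.nih.gov/REST"

        def find_rxcui_by_name(name):
            """Return the RxCUIs whose RxNorm concept name matches `name`."""
            resp = requests.get(f"{BASE}/rxcui.json", params={"name": name}, timeout=10)
            resp.raise_for_status()
            return resp.json().get("idGroup", {}).get("rxnormId", [])

        print(find_rxcui_by_name("aspirin"))  # e.g. ['1191']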

  9. Geotagging Digital Collections: BeaverTracks Mobile Project

    Science.gov (United States)

    Griggs, Kim

    2011-01-01

    BeaverTracks Historical Locations and Walking Tour is a mobile project at Oregon State University (OSU), where the author serves as programmer/analyst. It connects the past to the present by linking historic images to current campus locations. The goal of BeaverTracks is to showcase and bring attention to OSU Libraries' digital collections as well…

  10. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends

    Science.gov (United States)

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so called “big data” challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital those big data solutions are multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by
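
    As a concrete illustration of the two tasks described above, here is a minimal, self-contained word-count sketch in Python. The dictionary grouping stands in for the shuffle/sort step that Hadoop performs between the two phases; a real Hadoop job would distribute these same functions across cluster nodes:

        # Word count, the canonical MapReduce example, simulated in one process.
        from collections import defaultdict
        from itertools import chain

        def mapper(record):
            """Map task: emit (key, value) pairs -- here, (word, 1)."""
            for word in record.split():
                yield word.lower(), 1

        def reducer(key, values):
            """Reduce task: combine all values sharing a key -- here, by summing."""
            return key, sum(values)

        records = ["beaver dams store water",
                   "big data needs big storage",
                   "beaver beaver"]

        # Shuffle: group intermediate pairs by key (the framework does this in Hadoop).
        groups = defaultdict(list)
        for key, value in chain.from_iterable(mapper(r) for r in records):
            groups[key].append(value)

        print(dict(reducer(k, v) for k, v in groups.items()))
        # {'beaver': 3, 'dams': 1, 'store': 1, 'water': 1, 'big': 2, 'data': 1, ...}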

  11. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    Science.gov (United States)

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital those big data solutions are multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by

  12. Hydraulic characteristics and dynamics of beaver dams in a Midwestern U.S. agricultural watershed

    Science.gov (United States)

    M.C. McCullough; D.E. Eisenhauer; M.G. Dosskey; D.M. Admiraal

    2006-01-01

    Populations of North American beaver (Castor canadensis) have increased in recent decades throughout the Midwestern U.S., leading to an increase in the frequency of beaver dams in small streams. Beaver dams form ponds and slow water velocity. Multiple dams create a stair-step effect on the water surface profile. The hydraulic and geomorphic influence of beaver dams on...

  13. Big Data Meets Physics Education Research: From MOOCs to University-Led High School Programs

    Science.gov (United States)

    Seaton, Daniel

    2017-01-01

    The Massive Open Online Course (MOOC) movement has catalyzed discussions of digital learning on campuses around the world and highlighted the increasingly large, complex datasets related to learning. Physics Education Research can and should play a key role in measuring outcomes of this most recent wave of digital education. In this talk, I will discuss big data and learning analytics through multiple modes of teaching and learning enabled by the open-source edX platform: open-online, flipped, and blended. Open-Online learning will be described through analysis of MOOC offerings from Harvard and MIT, where 2.5 million unique users have led to 9 million enrollments across nearly 300 courses. Flipped instruction will be discussed through an Advanced Placement program at Davidson College that empowers high school teachers to use AP aligned, MOOC content directly in their classrooms with only their students. Analysis of this program will be highlighted, including results from a pilot study showing a positive correlation between content usage and externally validated AP exam scores. Lastly, blended learning will be discussed through specific residential use cases at Davidson College and MIT, highlighting unique course models that blend open-online and residential experiences. My hope for this talk is that listeners will better understand the current wave of digital education and the opportunities it provides for data-driven teaching and learning.

  14. Big-bird programs: Effect of strain, sex, and debone time on meat quality of broilers.

    Science.gov (United States)

    Brewer, V B; Kuttappan, V A; Emmert, J L; Meullenet, J-F C; Owens, C M

    2012-01-01

    The industry trend toward early deboning of chickens has led to the need to explore the effect on meat quality, including the effects of strain and sex. An experiment was conducted using broilers of 4 different high-yielding commercial strains chosen because of their common use in big-bird production. Of each strain, 360 birds were commercially processed at 59, 61, and 63 d of age in 2 replicates per day. Breast fillets were harvested at 2, 4, and 6 h postmortem (PM). Muscle pH and instrumental color (L*, a*, and b*) were measured at the time of deboning and at 24 h PM. Fillets were cooked to 76°C and cook loss was calculated, followed by Meullenet-Owens razor shear (MORS) analysis. Muscle pH significantly decreased over time as aging before deboning increased. Furthermore, L* values significantly increased as aging time increased, with the fillets deboned at 6 h PM having the highest L* value, followed by 4 h, and then 2 h PM. After 24 h, the fillets deboned at 6 h still had the highest L* compared with those deboned at 2 or 4 h PM. Fillets from strain B had the highest L* values. Fillets deboned at 2 h PM had significantly higher cook losses and MORS energy (indicating tougher fillets) than fillets deboned at 4 or 6 h PM, but there was no difference in cook loss due to strain at any deboning time. Fillets deboned at 4 h PM also had higher MORS energy than fillets deboned at 6 h PM, and differences in MORS energy among the strains were observed at 4 h PM. There was no difference in instrumental color values or cook loss due to sex. However, fillets of males had significantly greater MORS energy (tougher fillets) when deboned at 2, 4, and 6 h PM than those of females. Results of this study suggest that deboning time, sex, and strain can affect meat quality in big-bird market programs.

  15. Bowdoin NWR : Information on Beaver Creek flow 1936-1986

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document provides a timeline of Beaver Creek flows, near Bowdoin National Wildlife Refuge, from 1936 to 1986. Parts of Bowdoin National Wildlife Refuge lie within...

  16. Beaver Census in the Erie National Wildlife Refuge: Seneca Division

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The main objective of this internship project was to approximately determine the population of beaver (Castor canadensis) in the Seneca division of the Erie...

  17. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    Science.gov (United States)

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  18. Beaver Management in Norway - A Review of Recent Literature and Current Problems

    OpenAIRE

    Parker, Howard; Rosell, Frank

    2012-01-01

    Beginning with the total protection of the beaver (Castor fiber) in Norway in 1845, beaver management has undergone numerous changes as population development, resource exploitation goals and management objectives have evolved. Presently, new beaver management by-laws are being developed. This report briefly summarizes the historical development of beaver management in Norway, reviews the recent literature of particular relevance for the development of new by-laws and makes recommendations for t...

  19. Habitat and conservation status of the beaver in the Sierra San Luis Sonora, Mexico

    Science.gov (United States)

    Karla Pelz Serrano; Eduardo Ponce Guevara; Carlos A. Lopez Gonzalez

    2005-01-01

    The status of beaver (Castor canadensis) in northeastern Sonora, Mexico, is uncertain. We surveyed the Cajon Bonito River to assess the beaver’s status and habitat and found five colonies. Limiting factors appear to be pollution due to animal waste, deforestation of riparian trees, and human exploitation. Beavers did not appear to require habitat...

  20. Installation Restoration Program Preliminary Assessment, Big Mountain Radio Relay Station, Alaska

    Science.gov (United States)

    1989-04-01

    are not as abundant as tuffs or lava flows. Soil formation and development largely dates from the close of the Wisconsin Glaciation, when glaciers... environmental contamination that may have an adverse impact on public health or the environment and to select a remedial action through preparation of... Telephone Switching Station (ATSS-4A) capabilities were added to Big Mountain RRS, Kalakaket Creek RRS, Pedro Dome RRS, and Neklasson Lake RRS. These

  1. Big Society, Big Deal?

    Science.gov (United States)

    Thomson, Alastair

    2011-01-01

    Political leaders like to put forward guiding ideas or themes which pull their individual decisions into a broader narrative. For John Major it was Back to Basics, for Tony Blair it was the Third Way and for David Cameron it is the Big Society. While Mr. Blair relied on Lord Giddens to add intellectual weight to his idea, Mr. Cameron's legacy idea…

  2. Big data, big governance

    NARCIS (Netherlands)

    drs. Frans van den Reep

    2016-01-01

    “Of course it is nice that my refrigerator orders milk on its own on the basis of data-related patterns. Deep learning based on big data holds great promise,” says Frans van der Reep of Inholland. No wonder that this will be a main theme at the Hannover Messe during ScienceGuide's Wissenstag

  3. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  4. Sustaining Employability: A Process for Introducing Cloud Computing, Big Data, Social Networks, Mobile Programming and Cybersecurity into Academic Curricula

    Directory of Open Access Journals (Sweden)

    Razvan Bologa

    2017-12-01

    This article describes a process for introducing modern technological subjects into the academic curricula of non-technical universities. The process described may increase and contribute to social sustainability by enabling non-technical students’ access to the field of the Internet of Things and the broader Industry 4.0. The process has been defined and tested during a curricular reform project that took place in two large universities in Eastern Europe. In this article, the authors describe the results and impact, over multiple years, of a project financed by the European Union that aimed to introduce the following subjects into the academic curricula of business students: cloud computing, big data, mobile programming, and social networks and cybersecurity (CAMSS). The results are useful for those trying to implement similar curricular reforms, or to companies that need to manage talent pipelines.

  5. Curriculum Development Based on the Big Picture Assessment of the Mechanical Engineering Program

    Science.gov (United States)

    Sabri, Mohd Anas Mohd; Khamis, Nor Kamaliana; Tahir, Mohd Faizal Mat; Wahid, Zaliha; Kamal, Ahmad; Ihsan, Ariffin Mohd; Sulong, Abu Bakar; Abdullah, Shahrum

    2013-01-01

    One of the major concerns of the Engineering Accreditation Council (EAC) is the need for an effective monitoring and evaluation of program outcome domains that can be associated with courses taught under the Mechanical Engineering program. However, an effective monitoring method that can determine the results of each program outcome using Bloom's…

  6. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  7. Outcomes of a 'One Health' Monitoring Approach to a Five-Year Beaver (Castor fiber) Reintroduction Trial in Scotland.

    Science.gov (United States)

    Goodman, Gidona; Meredith, Anna; Girling, Simon; Rosell, Frank; Campbell-Palmer, Roisin

    2017-03-01

    The Scottish Beaver Trial, involving the translocation and release of 16 wild Norwegian beavers (Castor fiber) to Scotland, provides a good example of a 'One Health' scientific monitoring approach, with independent monitoring partners on ecology and public health feeding into veterinary health surveillance. Pathogen detection did not prohibit beaver release, although eight beavers were seropositive for Leptospira spp. Six deaths (37.5%) occurred during Rabies quarantine, followed by the death of two animals shortly after release and two wild-born kits due to suspected predation. Two host-specific parasites, the beaver fluke (Stichorchis subtriquetrus) and beaver beetle (Platypsyllus castoris) were also reintroduced.

  8. Creswell's Energy Efficient Construction Program: A Big Project for a Small School.

    Science.gov (United States)

    Kelsh, Bruce

    1982-01-01

    In Creswell (Oregon) High School's award winning vocational education program, students study energy efficient construction along with basic building skills. Part of the program has been the active recruitment of female, minority, disadvantaged, and handicapped students into the vocational area. Students have assembled solar hot water collectors,…

  9. "I Am Not a Big Man": Evaluation of the Issue Investigation Program

    Science.gov (United States)

    Cincera, Jan; Simonova, Petra

    2017-01-01

    The article evaluates a Czech environmental education program focused on developing competence in issue investigation. In the evaluation, a simple quasi-experimental design with experimental (N = 200) and control groups was used. The results suggest that the program had a greater impact on girls than on boys, and that it increased their internal…

  10. The Design and Redesign of a Clinical Ladder Program: Thinking Big and Overcoming Challenges.

    Science.gov (United States)

    Warman, Geri-Anne; Williams, Faye; Herrero, Ashlea; Fazeli, Pariya; White-Williams, Connie

    Clinical Ladder Programs or Clinical Advancement Programs (CAPs) are an essential component of staff nurse professional development, satisfaction, and retention. There is a need for more evidence regarding developing CAPs. CAP initially launched in 2004. Nurses accomplished tasks in four main areas: clinical, education, leadership, and research, which reflected and incorporated the 14 Forces of Magnetism. In February 2012, the newly revised program was launched and renamed Professional Nursing Development Program. The new program was based on the 5 Magnet® model components, the Synergy Professional Practice Model, and a point system which enabled nurses to utilize activities in many areas, thereby allowing them to capitalize on their strengths. The purpose of this article is to discuss the development, revision, implementation, and lessons learned in creating and revising CAP.

  11. Survey of Beaver-related Restoration Practices in Rangeland Streams of the Western USA

    Science.gov (United States)

    Pilliod, David S.; Rohde, Ashley T.; Charnley, Susan; Davee, Rachael R.; Dunham, Jason B.; Gosnell, Hannah; Grant, Gordon E.; Hausner, Mark B.; Huntington, Justin L.; Nash, Caroline

    2018-01-01

    Poor condition of many streams and concerns about future droughts in the arid and semi-arid western USA have motivated novel restoration strategies aimed at accelerating recovery and increasing water resources. Translocation of beavers into formerly occupied habitats, restoration activities encouraging beaver recolonization, and instream structures mimicking the effects of beaver dams are restoration alternatives that have recently gained popularity because of their potential socioeconomic and ecological benefits. However, beaver dams and dam-like structures also harbor a history of social conflict. Hence, we identified a need to assess the use of beaver-related restoration projects in western rangelands to increase awareness and accountability, and identify gaps in scientific knowledge. We inventoried 97 projects implemented by 32 organizations, most in the last 10 years. We found that beaver-related stream restoration projects undertaken mostly involved the relocation of nuisance beavers. The most common goal was to store water, either with beaver dams or artificial structures. Beavers were often moved without regard to genetics, disease, or potential conflicts with nearby landowners. Few projects included post-implementation monitoring or planned for longer term issues, such as what happens when beavers abandon a site or when beaver dams or structures breach. Human dimensions were rarely considered and water rights and other issues were mostly unresolved or addressed through ad-hoc agreements. We conclude that the practice and implementation of beaver-related restoration has outpaced research on its efficacy and best practices. Further scientific research is necessary, especially research that informs the establishment of clear guidelines for best practices.

  12. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    Astrophysics and cosmology are rich with data. The advent of wide-area digital cameras on large aperture telescopes has led to ever more ambitious surveys of the sky. Data volumes of entire surveys a decade ago can now be acquired in a single night and real-time analysis is often desired. Thus, modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing with label and measurement noise. We argue that this makes astronomy a great domain for computer science research, as it pushes the boundaries of data analysis. In the following, we will present this exciting application area for data scientists. We will focus on exemplary results, discuss main challenges...

  13. Working with and Visualizing Big Data Efficiently with Python for the DARPA XDATA Program

    Science.gov (United States)

    2017-08-01

    program focused on computational techniques and software tools for analyzing large volumes of data, both semi-structured (e.g. tabular, relational... effort performed was to support the DARPA XDATA program by developing computational techniques and software tools for analyzing large volumes of data... serve up a mixture of CSV files, SQL databases, and MongoDB databases using a common interface. In addition, any client computation done on Blaze

  14. Echinococcus multilocularis Detection in Live Eurasian Beavers (Castor fiber) Using a Combination of Laparoscopy and Abdominal Ultrasound under Field Conditions.

    Directory of Open Access Journals (Sweden)

    Róisín Campbell-Palmer

    Echinococcus multilocularis is an important pathogenic zoonotic parasite of health concern, though absent in the United Kingdom. Eurasian beavers (Castor fiber) may act as a rare intermediate host, and so unscreened wild-caught individuals may pose a potential risk of introducing this parasite to disease-free countries through translocation programs. There is currently no single definitive ante-mortem diagnostic test in intermediate hosts. An effective non-lethal diagnostic, feasible under field conditions, would be helpful to minimise parasite establishment risk, where indiscriminate culling is to be avoided. This study screened live beavers (captive, n = 18, or wild-trapped in Scotland, n = 12) and beaver cadavers (wild Scotland, n = 4; Bavaria, n = 11) for the presence of E. multilocularis. Ultrasonography in combination with minimally invasive surgical examination of the abdomen by laparoscopy was viable under field conditions for real-time evaluation in beavers. Laparoscopy alone does not allow the operator to visualize the parenchyma of organs such as the liver, or inside the lumen of the gastrointestinal tract, hence the advantage of its combination with abdominal ultrasonography. All live beavers and Scottish cadavers were largely unremarkable in their haematology and serum biochemistry, with no values suspicious for liver pathology or potentially indicative of E. multilocularis infection. This correlated well with ultrasound, laparoscopy, and immunoblotting, which were unremarkable in these individuals. Two wild Bavarian individuals were suspected E. multilocularis positive at post-mortem, through the presence of hepatic cysts. Sensitivity and specificity of a combination of laparoscopy and abdominal ultrasonography in the detection of parasitic liver cyst lesions was 100% in the subset of cadavers (95% confidence intervals 34.24-100% and 86.7-100%, respectively). For abdominal ultrasonography alone, sensitivity was only 50% (95% CI 9

  15. The Big Sky inside

    Science.gov (United States)

    Adams, Earle; Ward, Tony J.; Vanek, Diana; Marra, Nancy; Hester, Carolyn; Knuth, Randy; Spangler, Todd; Jones, David; Henthorn, Melissa; Hammill, Brock; Smith, Paul; Salisbury, Rob; Reckin, Gene; Boulafentis, Johna

    2009-01-01

    The University of Montana (UM)-Missoula has implemented a problem-based program in which students perform scientific research focused on indoor air pollution. The Air Toxics Under the Big Sky program (Jones et al. 2007; Adams et al. 2008; Ward et al. 2008) provides a community-based framework for understanding the complex relationship between poor…

  16. Big Bang! An Evaluation of NASA's Space School Musical Program for Elementary and Middle School Learners

    Science.gov (United States)

    Haden, C.; Styers, M.; Asplund, S.

    2015-12-01

    Music and the performing arts can be a powerful way to engage students in learning about science. Research suggests that content-rich songs enhance student understanding of science concepts by helping students develop content-based vocabulary, by providing examples and explanations of concepts, and by connecting to personal and situational interest in a topic. Building on the role of music in engaging students in learning, and on best practices in out-of-school time learning, the NASA Discovery and New Frontiers program in association with Jet Propulsion Laboratory, Marshall Space Flight Center, and KidTribe developed Space School Musical. Space School Musical consists of a set of nine songs and 36 educational activities to teach elementary and middle school learners about the solar system and space science through an engaging storyline and the opportunity for active learning. In 2014, NASA's Jet Propulsion Laboratory contracted with Magnolia Consulting, LLC to conduct an evaluation of Space School Musical. Evaluators used a mixed methods approach to address evaluation questions related to educator professional development experiences, program implementation and perceptions, and impacts on participating students. Measures included a professional development feedback survey, facilitator follow-up survey, facilitator interviews, and a student survey. Evaluation results showed that educators were able to use the program in a variety of contexts and in different ways to best meet their instructional needs. They noted that the program worked well for diverse learners and helped to build excitement for science through engaging all learners in the musical. Students and educators reported positive personal and academic benefits to participating students. We present findings from the evaluation and lessons learned about integration of the arts into STEM education.

  17. Net Zero Pilot Program Lights the Path to Big Savings in Guam

    Energy Technology Data Exchange (ETDEWEB)

    PNNL

    2016-11-03

    Case study describes how the Army Reserve 9th Mission Support Command (MSC) reduced lighting energy consumption by 62% for a total savings of 125,000 kWh and more than $50,000 per year by replacing over 400 fluorescent troffers with 36 W LED troffers. This project was part of the Army Reserve Net Zero Pilot Program, initiated in 2013, to reduce energy and water consumption, waste generation, and utility costs.

  18. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    Directory of Open Access Journals (Sweden)

    Mohd Usama

    2017-11-01

    Full Text Available At present, big data is very popular because it has proved to be successful in many fields such as social media, E-commerce transactions, etc. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte or larger-sized datasets having different structures with high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open-source framework that is used to process large amounts of data in an inexpensive and efficient way, and job scheduling is a key factor for achieving high performance in big data processing. This paper gives an overview of big data and highlights the problems and challenges in big data. It then highlights the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and various parameters that affect the performance of job scheduling algorithms in big data, such as JobTracker, TaskTracker, NameNode, DataNode, etc. The primary purpose of this paper is to present a comparative study of job scheduling algorithms along with their experimental results in the Hadoop environment. In addition, this paper describes the advantages, disadvantages, features, and drawbacks of various Hadoop job schedulers such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, Resource Aware, etc., and provides a comparative study among these schedulers.
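
    The scheduling trade-off at the heart of such comparisons can be illustrated outside Hadoop with a toy model: FIFO runs jobs to completion in arrival order, while an idealized fair scheduler (processor sharing) splits capacity evenly among all pending jobs. The sketch below is an invented illustration of that difference, not a benchmark of the schedulers tested in the paper.

    # Toy illustration (not Hadoop) of why scheduler choice matters:
    # completion times of short jobs stuck behind a long one under FIFO
    # versus an idealized fair (processor-sharing) scheduler.

    def fifo(jobs):
        """jobs: list of service times, in arrival order (all arrive at t=0)."""
        t, finish = 0.0, []
        for j in jobs:
            t += j
            finish.append(t)
        return finish

    def fair(jobs):
        """Idealized fair sharing: capacity is split evenly among unfinished jobs."""
        remaining = list(jobs)
        finish = [0.0] * len(jobs)
        t = 0.0
        active = set(range(len(jobs)))
        while active:
            # time until the smallest remaining job finishes at the current share
            step = min(remaining[i] for i in active) * len(active)
            t += step
            for i in list(active):
                remaining[i] -= step / len(active)
                if remaining[i] <= 1e-9:
                    finish[i] = t
                    active.remove(i)
        return finish

    jobs = [100, 1, 1, 1]                       # one long job, three short ones
    print("FIFO finish times:", fifo(jobs))     # short jobs wait for the long one
    print("Fair finish times:", fair(jobs))     # short jobs finish early

    With one long job ahead of three short ones, FIFO finishes the short jobs at times 101-103, while fair sharing finishes them at time 4 at the cost of delaying the long job until time 103.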

  19. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seeds of new, valuable operational insights for private companies and public organizations. While the optimistic pronouncements are many, research on Big Data in the public sector...

  20. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  1. SELECTIVE FORAGING ON WOODY SPECIES BY THE BEAVER CASTOR FIBER, AND ITS IMPACT ON A RIPARIAN WILLOW FOREST

    NARCIS (Netherlands)

    NOLET, BA; HOEKSTRA, A; OTTENHEIM, MM

    1994-01-01

    Beavers were re-introduced in the Biesbosch, The Netherlands, a wood dominated by willows Salix spp. Conservationists expected that herbivory by beavers would enhance succession to a mixed broad-leaved forest. Willows formed the staple food of the beavers, but they removed only 1.4% of the standing

  2. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  3. [Advances in early childhood development: from neurons to big scale programs].

    Science.gov (United States)

    Pérez-Escamilla, Rafael; Rizzoli-Córdoba, Antonio; Alonso-Cuevas, Aranzazú; Reyes-Morales, Hortensia

    Early childhood development (ECD) is the basis of countries' economic and social development and their ability to meet the Sustainable Development Goals (SDGs). Gestation and the first three years of life are critical for children to have adequate physical, psychosocial, emotional and cognitive development for the rest of their lives. Nurturing care and protection of children during gestation and early childhood are necessary for the development of the billions of neurons and trillions of synapses that underpin development. ECD requires access to good nutrition and health services from gestation, responsive caregiving according to the child's developmental stage, social protection and child welfare, and early stimulation and learning opportunities. Six actions are recommended to improve national ECD programs: expand political will and funding; create a supportive, evidence-based policy environment; build capacity through inter-sectoral coordination; ensure fair and transparent governance of programs and services; increase support for multidisciplinary research; and promote the development of leaders. Mexico has made significant progress under the leadership of the Health Ministry, but still faces significant challenges. The recent creation of a national inter-sectoral framework to enable ECD with support of international organizations and the participation of civil society organizations can help overcome these challenges. Copyright © 2017 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  4. 75 FR 16728 - Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest...

    Science.gov (United States)

    2010-04-02

    Forest Service Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest... disclose the effects of managing forest vegetation in a manner that increases resiliency of the Beaver... the project area by managing for early development (post disturbance), mid development closed, mid...

  5. Comeback of the beaver Castor fiber: An overview of old and new conservation problems

    NARCIS (Netherlands)

    Nolet, B.A.; Rosell, F.

    1998-01-01

    Due to over-hunting, c. 1200 Eurasian beavers Castor fiber survived in eight relict populations in Europe and Asia at the beginning of the 20th century. Following hunting restrictions and translocation programmes in 15 countries, the Eurasian beaver became re-established over much of its former

  6. Red fox, Vulpes vulpes, kills a European beaver, Castor fiber, kit

    OpenAIRE

    Kile, Nils B.; Nakken, Petter J.; Rosell, Frank; Espeland, Sigurd

    1996-01-01

    We observed an adult Red Fox (Vulpes vulpes) attack, kill, and partially consume a 2-month-old female European Beaver (Castor fiber) kit near its lodge in Norway. The inner organs were consumed first. One adult beaver apparently attempted to frighten the fox away by tail-slapping.

  7. Habitat characteristics at den sites of the Point Arena mountain beaver (Aplodontia rufa nigra)

    Science.gov (United States)

    William J. Zielinski; John E. Hunter; Robin Hamlin; Keith M. Slauson; M. J. Mazurek

    2010-01-01

    The Point Arena mountain beaver (Aplodontia rufa nigra) is a federally listed endangered species, but has been the subject of few studies. Mountain beavers use burrows that include a single subterranean den. Foremost among the information needs for this subspecies is a description of the above-ground habitat features associated with dens. Using...

  8. 76 FR 13344 - Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest...

    Science.gov (United States)

    2011-03-11

    Forest Service Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest... Environmental Impact Statement for the Beaver Creek Landscape Management Project in the Federal Register (75 FR...)... the Beaver Creek Landscape Management Project was published in the Federal Register on October 15, 2010 (75 FR...

  9. Development and viability of a translocated beaver Castor fiber population in the Netherlands

    NARCIS (Netherlands)

    Nolet, B.A.; Baveco, J.M.

    1996-01-01

    We monitored survival, reproduction and emigration of a translocated beaver Castor fiber population in the Netherlands for five years and used a stochastic model to assess its viability. Between 1988 and 1991, 42 beavers were released in the Biesbosch National Park. The mortality was initially high

  10. Factors affecting scent-marking behaviour in Eurasian beaver (Castor fiber)

    NARCIS (Netherlands)

    Rosell, F.; Nolet, B.A.

    1997-01-01

    We tested the hypothesis that a main function of territory marking in Eurasian beaver (Castor fiber) is defense of the territory. The results showed that: (1) beaver colonies with close neighbors scent-mark more often than isolated ones; (2) the number of scent markings increased significantly with

  11. Big Data: Are Biomedical and Health Informatics Training Programs Ready? Contribution of the IMIA Working Group for Health and Medical Informatics Education.

    Science.gov (United States)

    Otero, P; Hersh, W; Jai Ganesh, A U

    2014-08-15

    The growing volume and diversity of health and biomedical data indicate that the era of Big Data has arrived for healthcare. This has many implications for informatics, not only in terms of implementing and evaluating information systems, but also for the work and training of informatics researchers and professionals. This article addresses the question: What do biomedical and health informaticians working in analytics and Big Data need to know? We hypothesize a set of skills that we hope will be discussed among academic and other informaticians. The set of skills includes: Programming - especially with data-oriented tools, such as SQL and statistical programming languages; Statistics - working knowledge to apply tools and techniques; Domain knowledge - depending on one's area of work, bioscience or health care; and Communication - being able to understand needs of people and organizations, and articulate results back to them. Biomedical and health informatics educational programs must introduce concepts of analytics, Big Data, and the underlying skills to use and apply them into their curricula. The development of new coursework should focus on those who will become experts, with training aiming to provide skills in "deep analytical talent" as well as those who need knowledge to support such individuals.
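
    As a concrete, self-contained illustration of the skill pairing the authors describe (programming with data-oriented tools such as SQL, plus working statistics), the following uses only the Python standard library; the table and values are invented.

    # Invented illustration of the skill mix described above: SQL for data
    # access plus basic statistics, using only the Python standard library.
    import sqlite3
    import statistics

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE lab_results (patient_id INTEGER, glucose REAL)")
    conn.executemany(
        "INSERT INTO lab_results VALUES (?, ?)",
        [(1, 5.4), (2, 6.1), (3, 7.8), (4, 5.0), (5, 9.2)],
    )

    # SQL does the selection; Python does the statistics.
    values = [row[0] for row in conn.execute("SELECT glucose FROM lab_results")]
    print("mean:", round(statistics.mean(values), 2),
          "stdev:", round(statistics.stdev(values), 2))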

  12. Environmental assessment for an experimental skunk removal program to increase duck production on Big Stone National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The primary goal of Big Stone National Wildlife Refuge is duck production; specifically, waterfowl maintenance, preservation and enhancement of diversity of...

  13. Beaver Evidence - Historical Range of Beaver in the State of California, with an emphasis on areas within the range of coho salmon and steelhead trout

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project examines historical, archaeological, and geological evidence to re-evaluate the existing management paradigm that beaver are non-native to most of...

  14. New finding of Trichinella britovi in a European beaver (Castor fiber) in Latvia.

    Science.gov (United States)

    Segliņa, Zanda; Bakasejevs, Eduards; Deksne, Gunita; Spuņģis, Voldemārs; Kurjušina, Muza

    2015-08-01

    We report the first finding of Trichinella britovi in a European beaver. In Latvia, the beaver is a common game animal and frequently used in the human diet. A high prevalence of Trichinella infections in Latvia is present in the most common hosts: carnivores and omnivores. In total, 182 European beaver muscle samples were tested for Trichinella larvae according to the reference method of European Communities Commission Regulation (EC) No. 2075/2005 (2005). Trichinella britovi larvae were detected in one animal (prevalence 0.5%; intensity 5.92 larvae per gram of muscle). This finding suggests that the consumption of European beaver meat can be a risk to human health. Further studies are needed in order to determine whether the present observation represents an isolated individual case or a low prevalence of Trichinella infection in beavers.

  15. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  16. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Jeppesen, Jacob; Vaarst Andersen, Kristina; Lauto, Giancarlo

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions and locations, having a diverse knowledge set and capable of tackling more and more complex problems. This poses the question of whether Big Egos continue to dominate in this rising paradigm of big science. Using a dataset consisting of full bibliometric coverage from a Large Scale Research Facility, we utilize a stochastic actor-oriented model (SAOM) to analyze both network-endogenous mechanisms and individual agency driving the collaboration network, and further whether being a Big Ego in Big Science translates to increasing performance. Our findings suggest that the selection of collaborators is not based...

  17. Not Just for Big Dogs: the NSF Career Program from AN Undergraduate College Perspective

    Science.gov (United States)

    Harpp, K. S.

    2011-12-01

    Relatively few NSF CAREER grants are awarded to faculty at undergraduate colleges, leading to a perception that the program is geared for major research institutions. The goal of this presentation is to dispel this misconception by describing a CAREER grant at a small, liberal arts institution. Because high quality instruction is the primary mission of undergraduate colleges, the career development plan for this proposal was designed to use research as a teaching tool. Instead of distinct sets of objectives for the research and education components, the proposal's research and teaching plans were integrated across the curriculum to maximize opportunities for undergraduate engagement. The driving philosophy was that students learn science by doing it. The proposal plan therefore created opportunities for students to be involved in hands-on, research-driven projects from their first through senior years. The other guiding principle was that students become engaged in science when they experience its real life applications. Stage 1 of the project provided mechanisms to draw students into science in two ways. The first was development of an inquiry-based curriculum for introductory classes, emphasizing practical applications and hands-on learning. The goal was to energize, generate confidence, and provide momentum for early science students to pursue advanced courses. The second mechanism was the development of a science outreach program for area K-9 schools, designed and implemented by undergraduates, an alternative path for students to discover science. Stages 2 and 3 consisted of increasingly advanced project-based courses, with in-depth training in research skills. The courses were designed along chemical, geological, and environmental themes, to capture the most student interest. The students planned their projects within a set of constraints designed to lead them to fundamental concepts and centered on questions of importance to the local community, thereby

  18. Reintroduction of the European beaver (Castor fiber L. into Serbia and return of its parasite: The case of Stichorchis subtriquetrus

    Directory of Open Access Journals (Sweden)

    Ćirović D.

    2009-01-01

    Full Text Available After becoming extinct in the second half of the 20th century, the European beaver (Castor fiber L., 1758) was successfully reintroduced from Bavaria into Serbia during 2004-2005. In the necropsy of an adult female beaver (found dead in December of 2007), we discovered some parasites identified as Stichorchis subtriquetrus in the colon and peritoneal area. This is the first occurrence of this specific parasite of beavers in Serbia. Decoding of a subcutaneously implanted microchip confirmed that our specimen was one of the released beavers. We therefore conclude that the parasite in question was reintroduced into Serbia with the beavers originating from Bavaria.

  19. Big Buildings Meet Big Data

    National Research Council Canada - National Science Library

    Paul Ehrlich

    2013-01-01

    Big data comprises government and business servers that are collecting and analyzing massive amounts of data on everything from weather to web browsing, shopping habits to emails, on to phone calls...

  20. Hydrological and geomorphological consequences of beavers activity in the Struga Czechowska valley (Tuchola Pinewood Forest, Poland)

    Science.gov (United States)

    Brykała, Dariusz; Gierszewski, Piotr; Błaszkiewicz, Mirosław; Kordowski, Jarosław; Tyszkowski, Sebastian; Słowiński, Michał; Kaszubski, Michał; Brauer, Achim

    2016-04-01

    In recent years, following the reintroduction of beavers (Castor fiber) to the Polish environment, intense beaver activity has been observed on the Struga Czechowska river (Tuchola Pinewood Forest, Poland), especially along the outlet from Lake Głęboczek. It is expressed in the transformation of the relief of the valley bottom and its slopes. Small ponds created by beavers function as local sediment traps. Periodically the dams were destroyed, which led to rapid water drainage; the effects of such events were observed in the period between December 2014 and May 2015. An inventory of beaver dams along the Struga Czechowska river, made in 2015, shows that dams were distributed on average every 50 m. There were 30 dams on three sections of the river; only 6 were built in 2015, while the remaining ones were older and abandoned, although one-third of them still dammed the stream. The average water damming by beaver dams was 0.2 m, with a maximum of 0.6 m. The width of the beaver dams almost always reached about 3 m, and their average height of up to 0.8 m was identical to the bankfull depth. The cascade arrangement of the beaver dams alternately shapes the functioning of erosional and accumulation sections of the watercourse. Analysis of the hydrograph of Struga Czechowska water levels shows that since December 2014 there were nine rapid drainages of beaver ponds located above the Trzechowskie paleolake. Damaged dams were rebuilt very quickly, and water was again stored in the ponds; restoration took on average 10 hours and at most 3 days. Rapid outflows from beaver ponds resulted in intensive bottom and lateral erosion of the stream channel and the creation of soil falls on the valley slopes below destroyed dams. Products of erosion were deposited along the watercourse over a distance of 200 meters, and then in the stream channel in the form of sandy bars. Especially intensive accumulation occurred on the flat surface of the paleolake. Maximum

  1. Modeling intrinsic potential for beaver (Castor canadensis) habitat to inform restoration and climate change adaptation.

    Science.gov (United States)

    Dittbrenner, Benjamin J; Pollock, Michael M; Schilling, Jason W; Olden, Julian D; Lawler, Joshua J; Torgersen, Christian E

    2018-01-01

    Through their dam-building activities and subsequent water storage, beaver have the potential to restore riparian ecosystems and offset some of the predicted effects of climate change by modulating streamflow. Thus, it is not surprising that reintroducing beaver to watersheds from which they have been extirpated is an often-used restoration and climate-adaptation strategy. Identifying sites for reintroduction, however, requires detailed information about habitat factors-information that is not often available at broad spatial scales. Here we explore the potential for beaver relocation throughout the Snohomish River Basin in Washington, USA with a model that identifies some of the basic building blocks of beaver habitat suitability and does so by relying solely on remotely sensed data. More specifically, we developed a generalized intrinsic potential model that draws on remotely sensed measures of stream gradient, stream width, and valley width to identify where beaver could become established if suitable vegetation were to be present. Thus, the model serves as a preliminary screening tool that can be applied over relatively large extents. We applied the model to 5,019 stream km and assessed the ability of the model to correctly predict beaver habitat by surveying for beavers in 352 stream reaches. To further assess the potential for relocation, we assessed land ownership, use, and land cover in the landscape surrounding stream reaches with varying levels of intrinsic potential. Model results showed that 33% of streams had moderate or high intrinsic potential for beaver habitat. We found that no site that was classified as having low intrinsic potential had any sign of beavers and that beaver were absent from nearly three quarters of potentially suitable sites, indicating that there are factors preventing the local population from occupying these areas. Of the riparian areas around streams with high intrinsic potential for beaver, 38% are on public lands and 17% are
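
    The model in this record combines three remotely sensed variables into a screening classification. A minimal sketch of that style of rule-based screening follows; the thresholds are invented placeholders, not the calibrated values of the published model.

    # Rule-based screening sketch in the spirit of the intrinsic potential
    # model described above. The three inputs match the record (stream
    # gradient, stream width, valley width); the thresholds are invented.
    def intrinsic_potential(gradient, stream_width_m, valley_width_m):
        if gradient > 0.08 or stream_width_m > 25:    # too steep or too wide to dam
            return "low"
        if gradient < 0.02 and valley_width_m > 50:   # gentle, wide valley bottom
            return "high"
        return "moderate"

    reaches = [
        {"id": "A", "gradient": 0.01, "stream_width_m": 6,  "valley_width_m": 120},
        {"id": "B", "gradient": 0.05, "stream_width_m": 10, "valley_width_m": 30},
        {"id": "C", "gradient": 0.12, "stream_width_m": 4,  "valley_width_m": 15},
    ]
    for r in reaches:
        print(r["id"], intrinsic_potential(r["gradient"], r["stream_width_m"], r["valley_width_m"]))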

  2. Modeling intrinsic potential for beaver (Castor canadensis) habitat to inform restoration and climate change adaptation

    Science.gov (United States)

    Dittbrenner, Benjamin J.; Pollock, Michael M.; Schilling, Jason W.; Olden, Julian D.; Lawler, Joshua J.; Torgersen, Christian E.

    2018-01-01

    Through their dam-building activities and subsequent water storage, beaver have the potential to restore riparian ecosystems and offset some of the predicted effects of climate change by modulating streamflow. Thus, it is not surprising that reintroducing beaver to watersheds from which they have been extirpated is an often-used restoration and climate-adaptation strategy. Identifying sites for reintroduction, however, requires detailed information about habitat factors—information that is not often available at broad spatial scales. Here we explore the potential for beaver relocation throughout the Snohomish River Basin in Washington, USA with a model that identifies some of the basic building blocks of beaver habitat suitability and does so by relying solely on remotely sensed data. More specifically, we developed a generalized intrinsic potential model that draws on remotely sensed measures of stream gradient, stream width, and valley width to identify where beaver could become established if suitable vegetation were to be present. Thus, the model serves as a preliminary screening tool that can be applied over relatively large extents. We applied the model to 5,019 stream km and assessed the ability of the model to correctly predict beaver habitat by surveying for beavers in 352 stream reaches. To further assess the potential for relocation, we assessed land ownership, use, and land cover in the landscape surrounding stream reaches with varying levels of intrinsic potential. Model results showed that 33% of streams had moderate or high intrinsic potential for beaver habitat. We found that no site that was classified as having low intrinsic potential had any sign of beavers and that beaver were absent from nearly three quarters of potentially suitable sites, indicating that there are factors preventing the local population from occupying these areas. Of the riparian areas around streams with high intrinsic potential for beaver, 38% are on public lands and 17

  3. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
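
    BigDansing's own programming interface is not reproduced in this record, but the kind of declarative data quality rule such systems evaluate can be illustrated on a single machine: a functional dependency zip -> city, with violating tuple pairs enumerated, the same pairwise computation the abstract flags as costly at scale. Data and column names are invented.

    # Generic single-machine illustration (not the BigDansing API) of a
    # functional dependency check zip -> city: report pairs of tuples
    # that share a zip code but disagree on the city.
    from itertools import combinations

    rows = [
        {"id": 1, "zip": "10001", "city": "New York"},
        {"id": 2, "zip": "10001", "city": "NYC"},       # violates zip -> city
        {"id": 3, "zip": "60601", "city": "Chicago"},
    ]

    violations = [
        (a["id"], b["id"])
        for a, b in combinations(rows, 2)
        if a["zip"] == b["zip"] and a["city"] != b["city"]
    ]
    print("violating tuple pairs:", violations)   # [(1, 2)]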

  4. Putting Big Data to Work: Community Colleges Use Detailed Reports to Design Smarter Workforce Training and Education Programs

    Science.gov (United States)

    Woods, Bob

    2013-01-01

    In this article, Bob Woods reports that "Big data" is all the rage on college campuses, and it makes sense that administrators would use what they know to boost student outcomes. Woods points out that community colleges around the country are using the data: (1) to guide the systematic expansion of its curriculum, providing targeted…

  5. Spatial and Temporal Variability of Channel Retention in a Lowland Temperate Forest Stream Settled by European Beaver (Castor fiber

    Directory of Open Access Journals (Sweden)

    Mateusz Grygoruk

    2014-09-01

    Full Text Available Beaver ponds remain a challenge for forest management in those countries where expansion of beaver (Castor fiber) is observed. Despite the undoubted economic losses generated in forests by beaver, their influence on the hydrology of forest streams, especially in terms of increasing channel retention (the amount of water stored in the river channel), is considered a positive aspect of their activity. In our study, we compared water storage capacities of a lowland forest stream settled by beaver in order to unravel the possible temporal variability of the beaver's influence on channel retention. We compared the distribution, total damming height, volumes, and areas of beaver ponds in the valley of Krzemianka (Northeast Poland) in the years 2006 (when high construction activity of beaver was observed) and 2013 (when the activity of beaver had decreased significantly). The study revealed a significant decrease in the channel retention of beaver ponds, from over 15,000 m3 in 2006 to 7000 m3 in 2013. The total damming height of the cascade of beaver ponds decreased from 6.6 to 5.6 m. Abandoned beaver ponds that transformed into wetlands, where lost channel retention was replaced by soil and groundwater retention, were more constant over time and less vulnerable to external disturbance as means of water storage than channel retention. We concluded that abandoned beaver ponds played an active role in increasing the channel retention of the river analyzed for approximately 5 years. We also concluded that if the construction activity of beaver were used as a tool (ecosystem service) for increasing the channel retention of the river valley, the permanent presence of beaver in the riparian zone of forest streams would have to be assured.

  6. Simulation modeling to understand how selective foraging by beaver can drive the structure and function of a willow community

    Science.gov (United States)

    Peinetti, H.R.; Baker, B.W.; Coughenour, M.B.

    2009-01-01

    Beaver-willow (Castor-Salix) communities are a unique and vital component of healthy wetlands throughout the Holarctic region. Beaver selectively forage willow to provide fresh food, stored winter food, and construction material. The effects of this complex foraging behavior on the structure and function of willow communities are poorly understood. Simulation modeling may help ecologists understand these complex interactions. In this study, a modified version of the SAVANNA ecosystem model was developed to better understand how beaver foraging affects the structure and function of a willow community in a simulated riparian ecosystem in Rocky Mountain National Park, Colorado (RMNP). The model represents willow in terms of plant and stem dynamics and beaver foraging in terms of the quantity and quality of stems cut to meet the energetic and life history requirements of beaver. Given a site where all stems were equally available, the model suggested a simulated beaver family of 2 adults, 2 yearlings, and 2 kits required a minimum of 4 ha of willow (containing about 10 stems m-2) to persist in a steady-state condition. Beaver created a willow community where the annual net primary productivity (ANPP) was 2 times higher and plant architecture was more diverse than in the willow community without beaver. Beaver foraging created a plant architecture dominated by medium-size willow plants, which likely explains how beaver can increase ANPP. Long-term simulations suggested that woody biomass stabilized at similar values even though availability differed greatly at the initial condition. Simulations also suggested that willow ANPP increased across a range of beaver densities until beaver became food limited. Thus, selective foraging by beaver increased productivity, decreased biomass, and increased structural heterogeneity in a simulated willow community.

  7. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  8. Population dynamics of two beaver species in Finland inferred from citizen‐science census data

    National Research Council Canada - National Science Library

    Brommer, J. E; Alakoski, R; Selonen, V; Kauhala, K

    2017-01-01

    ...‐census open population N‐mixture model, which was recently developed to handle such challenging census data, to describe the dynamics in the Finnish population sizes of the reintroduced native Eurasian beaver ( Castor fiber...

  9. Beaver-mediated methane emission: The effects of population growth in Eurasia and the Americas.

    Science.gov (United States)

    Whitfield, Colin J; Baulch, Helen M; Chun, Kwok P; Westbrook, Cherie J

    2015-02-01

    Globally, greenhouse gas budgets are dominated by natural sources, and aquatic ecosystems are a prominent source of methane (CH4) to the atmosphere. Beaver (Castor canadensis and Castor fiber) populations have experienced human-driven change, and CH4 emissions associated with their habitat remain uncertain. This study reports the effect of near extinction and recovery of beavers globally on aquatic CH4 emissions and habitat. Resurgence of native beaver populations and their introduction in other regions accounts for emission of 0.18-0.80 Tg CH4 year-1 (year 2000). This flux is approximately 200 times larger than emissions from the same systems (ponds and flowing waters that became ponds) circa 1900. Beaver population recovery was estimated to have led to the creation of 9500-42,000 km2 of ponded water, and increased riparian interface length of >200,000 km. Continued range expansion and population growth in South America and Europe could further increase CH4 emissions.
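
    A back-of-envelope conversion (our own arithmetic on the reported figures, not a result from the paper) gives the per-area flux implied by the emission and pond-area ranges, pairing low ends with low ends and high ends with high; the abstract does not state that the endpoints of the two ranges correspond.

    # Back-of-envelope check on the reported figures: implied CH4 flux per
    # unit pond area. The low-low and high-high pairings are our assumption.
    TG_TO_G = 1e12      # 1 teragram = 1e12 grams
    KM2_TO_M2 = 1e6

    for emission_tg, area_km2 in [(0.18, 9500), (0.80, 42000)]:
        flux = emission_tg * TG_TO_G / (area_km2 * KM2_TO_M2)   # g CH4 m^-2 yr^-1
        print(f"{emission_tg} Tg over {area_km2} km^2 -> {flux:.0f} g CH4 m^-2 yr^-1")

    Both pairings work out to roughly 19 g CH4 m-2 yr-1, a rough internal consistency check on the reported ranges.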

  10. Big Dreams

    Science.gov (United States)

    Benson, Michael T.

    2015-01-01

    The Keen Johnson Building is symbolic of Eastern Kentucky University's historic role as a School of Opportunity. It is a place that has inspired generations of students, many from disadvantaged backgrounds, to dream big dreams. The construction of the Keen Johnson Building was inspired by a desire to create a student union facility that would not…

  11. Modeling the Capacity of Riverscapes to Support Dam-Building Beaver

    Science.gov (United States)

    Macfarlane, W.; Wheaton, J. M.

    2012-12-01

    Beaver (Castor canadensis) dam-building activities lead to a cascade of aquatic and riparian effects that increase the complexity of streams. As a result, beaver are increasingly being used as a critical component of passive stream and riparian restoration strategies. We developed the spatially explicit Beaver Assessment and Restoration Tool (BRAT) to assess the capacity of the landscape in and around streams and rivers to support dam-building activity by beaver. Capacity was assessed with readily available nationwide GIS datasets covering key habitat capacity indicators: water availability, relative abundance of preferred food/building materials, and stream power. Beaver capacity was further refined by: (1) ungulate grazing capacity; (2) proximity to human conflicts (e.g., irrigation diversions, settlements); (3) conservation/management objectives (endangered fish habitat); and (4) projected benefits related to beaver re-introductions (e.g., repair of incisions). Fuzzy inference systems were used to assess the relative importance of these inputs, which allowed explicit incorporation of the uncertainty resulting from the categorical ambiguity of inputs into the capacity model. Results indicate that beaver capacity varies widely within the study area, but follows predictable spatial patterns that correspond to distinct River Styles and landscape units. We present a case study application and verification/validation data from the Escalante River Watershed in southern Utah, and show how the models can be used to help resource managers develop and implement restoration and conservation strategies employing beaver that will have the greatest potential to yield increases in biodiversity and ecosystem services.
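
    The fuzzy inference step can be sketched in a few lines: inputs are mapped through membership functions and combined with a fuzzy AND (minimum). The membership functions and the single rule below are invented for illustration; they are not BRAT's published rule set.

    # Minimal fuzzy-inference sketch in the spirit of the record's approach.
    # Membership functions and the rule are invented, not BRAT's.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def dam_capacity(veg_suitability, stream_power_w):
        """Fuzzy AND (min) of 'vegetation is good' and 'stream power is low'."""
        veg_good = tri(veg_suitability, 0.3, 1.0, 1.7)   # peaks at fully suitable
        power_low = tri(stream_power_w, -1, 0, 200)      # fades out by ~200 W
        return min(veg_good, power_low)                  # degree of support, 0..1

    print(dam_capacity(veg_suitability=0.9, stream_power_w=50))    # high support
    print(dam_capacity(veg_suitability=0.9, stream_power_w=180))   # marginal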

  12. Channel aggradation by beaver dams on a small agricultural stream in Eastern Nebraska

    Science.gov (United States)

    M.C. McCullough; J.L. Harper; D.E. Eisenhauer; M.G. Dosskey

    2004-01-01

    We assessed the effect of beaver dams on the channel gradation of an incised stream in an agricultural area of eastern Nebraska. A topographic survey was conducted of a reach of Little Muddy Creek where beaver are known to have been building dams for twelve years. Results indicate that over this time period the thalweg elevation has aggraded an average of 0.65 m by...

  13. Meta-analysis of environmental effects of beaver in relation to artificial dams

    Science.gov (United States)

    Ecke, Frauke; Levanoni, Oded; Audet, Joachim; Carlson, Peter; Eklöf, Karin; Hartman, Göran; McKie, Brendan; Ledesma, José; Segersten, Joel; Truchy, Amélie; Futter, Martyn

    2017-11-01

    Globally, artificial river impoundment, nutrient enrichment and biodiversity loss impair freshwater ecosystem integrity. Concurrently, beavers, ecosystem engineers recognized for their ability to construct dams and create ponds, are colonizing sites across the Holarctic after widespread extirpation in the 19th century, including areas outside their historical range. This has the potential to profoundly alter hydrology, hydrochemistry and aquatic ecology in both newly colonized and recolonized areas. To further our knowledge of the effects of beaver dams on aquatic environments, we extracted 1366 effect sizes from 89 studies on the impoundment of streams and lakes. Effects were assessed for 16 factors related to hydrogeomorphology, biogeochemistry, ecosystem functioning and biodiversity. Beaver dams affected concentrations of organic carbon in water, mercury in water and biota, sediment conditions and hydrological properties. There were no overall adverse effects caused by beaver dams or ponds on salmonid fish. Age was an important determinant of effect magnitude. While young ponds were a source of phosphorus, there was a tendency for phosphorus retention in older systems. Young ponds were a source of methylmercury in water, but old ponds were not. To provide additional context, we also evaluated similarities and differences between environmental effects of beaver-constructed and artificial dams (767 effect sizes from 75 studies). Both are comparable in terms of effects on, for example, biodiversity, but have contrasting effects on nutrient retention and mercury. These results are important for assessing the role of beavers in enhancing and/or degrading ecological integrity in changing Holarctic freshwater systems.
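
    The abstract does not name the effect-size metric used, so the sketch below illustrates one common choice in ecological meta-analysis, the log response ratio lnRR = ln(mean_treatment / mean_control), with invented values.

    # Illustrative effect-size computation (the paper's actual metric is
    # not specified in the abstract). Numbers are invented.
    import math

    def log_response_ratio(mean_treatment, mean_control):
        return math.log(mean_treatment / mean_control)

    # e.g., dissolved organic carbon (mg/L) below vs. above a beaver pond
    print(round(log_response_ratio(mean_treatment=12.0, mean_control=8.0), 3))  # 0.405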

  14. Population genetic structure in natural and reintroduced beaver (Castor fiber populations in Central Europe

    Directory of Open Access Journals (Sweden)

    Kautenburger, R.

    2008-12-01

    Full Text Available Castor fiber Linnaeus, 1758 is the only indigenous species of the genus Castor in Europe and Asia. Due to extensive hunting until the beginning of the 20th century, the distribution of the formerly widespread Eurasian beaver was dramatically reduced. Only a few populations remained, and these were in isolated locations, such as the region of the German Elbe River. The loss of genetic diversity in small or captive populations through genetic drift and inbreeding is a severe conservation problem. However, reintroduced beaver populations from several regions in Europe have shown high viability and are growing fast today. In the present study we analysed the population genetic structure of a natural and two reintroduced beaver populations in Germany and Austria. Furthermore, we studied the genetic differentiation between two beaver species, C. fiber and the American beaver (C. canadensis), using RAPD (Random Amplified Polymorphic DNA) as a genetic marker. The reintroduced beaver populations of different origins and the autochthonous population of the Elbe River showed a similarly low genetic heterogeneity. There was an overall high genetic similarity in the species C. fiber, and no evidence was found for a clear subspecific structure in the populations studied.

  15. The hydrological modeling in terms of determining the potential European beaver effect

    Science.gov (United States)

    Szostak, Marta; Jagodzińska, Jadwiga

    2017-06-01

    The objective of the paper was a hydrological analysis aimed at categorizing the main watercourses (based on coupled catchments) and marking areas covered by the potential impact of the occurrence and activities of the European beaver Castor fiber. In the analysed area, the Forest District Głogów Małopolski, there is a population of about 200 beavers. Damage inflicted by beavers was detected on 33.0 ha of the Forest District, while on 13.9 ha the damage was small (below 10%). Monitoring of the beavers' behaviour and analysis of their influence on the hydrology of the area became an important element of using geoinformation tools in the management of forest areas. The ArcHydro module of ArcGIS (Esri) was applied as an integrated set of tools for hydrographic analysis and modelling. The steps of the procedure are hydrologic analyses such as: delineating the river network on the DTM, filling sinks, mapping flow direction, mapping flow accumulation, defining and segmenting streams, delineating elementary catchments, delineating coupled catchments, placing dams where beavers occur, and localizing areas with a visible impact of damming. The results of the study include maps prepared for the Forest District: a map of the main rivers and their basins, categories of watercourses, and compartments particularly threatened by beaver foraging.
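
    The flow-direction step in the workflow above can be sketched without GIS software: the widely used D8 rule assigns each DEM cell to its steepest-descent neighbour. The toy DEM below is invented, and this plain-numpy sketch is not the ArcHydro implementation.

    # Toy illustration of the flow-direction step: the D8 rule sends each
    # DEM cell toward its steepest downslope neighbour.
    import numpy as np

    dem = np.array([[5.0, 4.0, 3.0],
                    [4.0, 3.0, 2.0],
                    [3.0, 2.0, 1.0]])   # tiny DEM sloping to the lower right

    def d8_direction(dem, r, c):
        """Return (dr, dc) toward the steepest downslope neighbour, or None (pit)."""
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
                    continue
                dist = (dr * dr + dc * dc) ** 0.5
                drop = (dem[r, c] - dem[rr, cc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
        return best

    print(d8_direction(dem, 0, 0))   # (1, 1): drains diagonally toward the outlet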

  16. Big Data and Big Science

    CERN Document Server

    Di Meglio, Alberto

    2014-04-14

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  17. Use of linear and areal habitat models to establish and distribute beaver Castor fiber harvest quotas in Norway

    Directory of Open Access Journals (Sweden)

    Howard Parker

    2013-12-01

    Full Text Available In Norway the Eurasian beaver Castor fiber harvest is quota-regulated. Once the annual quota for each municipality has been determined, it is distributed to landowner-organized beaver management units. Municipal wildlife managers can choose between two distribution models: the traditional "areal model", whereby each management unit receives its portion of the municipal quota based on the relative area of beaver habitat it contains, or the more recently developed "linear model", based on the relative length of beaver-utilized shoreline it contains. The linear model was developed in an attempt to increase the precision of the quota distribution process and is based on the fact that beaver occupy landscapes in a linear fashion along strips of shoreline rather than exploiting extensive areas. The assumption was that the linear model would provide a more precise and just method of distributing the municipal quota among landowners. Here we test the hypothesis that the length of beaver-utilized shoreline is a better predictor of beaver colony density than the area of beaver habitat on 13 beaver management units of typical size (794-2200 hectares) in Bø Township, Norway, during 2 years. As hypothesized, the number of beaver-occupied sites on management units correlated significantly (p ≤ 0.001) with the length of beaver-utilized shoreline, but not with the area of beaver habitat. Therefore municipalities should employ the linear model when a precise distribution of quotas is necessary. The density of Eurasian beaver colonies at the landscape scale (>100 km2) in south-central Scandinavia averages approximately 1 occupied site per 4 km2. This figure can be employed by municipal wildlife managers to estimate the colony density in their townships, and to calculate municipal quotas, when more precise census information is lacking.
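
    Both distribution models reduce to proportional allocation and differ only in the weighting variable (shoreline length versus habitat area). A minimal sketch, with invented unit names and numbers:

    # Proportional quota allocation under the two models described above.
    # Unit names and figures are invented for illustration.
    def distribute_quota(municipal_quota, weights):
        total = sum(weights.values())
        return {unit: municipal_quota * w / total for unit, w in weights.items()}

    units_shoreline_km = {"unit A": 12.0, "unit B": 3.0, "unit C": 5.0}    # linear model
    units_habitat_ha   = {"unit A": 900,  "unit B": 1400, "unit C": 700}   # areal model

    print("linear:", distribute_quota(40, units_shoreline_km))
    print("areal: ", distribute_quota(40, units_habitat_ha))

    With these invented figures, unit A holds 12 of the 20 km of beaver-utilized shoreline and so receives 24 of the 40 animals under the linear model, but only 12 under the areal model, which illustrates why the two models can allocate quite differently.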

  18. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating; what is dark matter and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  19. Beaver ponds' impact on fluvial processes (Beskid Niski Mts., SE Poland).

    Science.gov (United States)

    Giriat, Dorota; Gorczyca, Elżbieta; Sobucki, Mateusz

    2016-02-15

    Beaver (Castor sp.) can change the riverine environment through dam-building and other activities. The European beaver (Castor fiber) was extirpated in Poland by the nineteenth century, but populations are again present as a result of reintroductions that began in 1974. The goal of this paper is to assess the impact of beaver activity on montane fluvial system development by identifying and analysing changes in channel and valley morphology following expansion of beaver into a 7.5 km-long headwater reach of the upper Wisłoka River in southeast Poland. We document the distribution of beaver in the reach, the change in river profile, sedimentation type and storage in beaver ponds, and assess how beaver dams and ponds have altered channel and valley bottom morphology. The upper Wisłoka River fluvial system underwent a series of anthropogenic disturbances during the last few centuries. The rapid spread of C. fiber in the upper Wisłoka River valley was promoted by the valley's morphology, including a low-gradient channel and silty-sand deposits in the valley bottom. At the time of our survey (2011), beaver ponds occupied 17% of the length of the study reach channel. Two types of beaver dams were noted: in-channel dams and valley-wide dams. The primary effect of dams, investigated in an intensively studied 300-m long subreach (Radocyna Pond), was a change in the longitudinal profile from smooth to stepped, a local reduction of the water surface slope, and an increase in the variability of both the thalweg profile and surface water depths. We estimate the current rate of sedimentation in beaver ponds to be about 14 cm per year. A three-stage scheme of fluvial processes in the longitudinal and transverse profile of the river channel is proposed. C. fiber reintroduction may be considered as another important stage of the upper Wisłoka fluvial system development. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Do beaver dams reduce habitat connectivity and salmon productivity in expansive river floodplains?

    Directory of Open Access Journals (Sweden)

    Rachel L. Malison

    2016-09-01

    Full Text Available Beaver have expanded in their native habitats throughout the northern hemisphere in recent decades following reductions in trapping and reintroduction efforts. Beaver have the potential to strongly influence salmon populations in the side channels of large alluvial rivers by building dams that create pond complexes. Pond habitat may improve salmon productivity, or the presence of dams may reduce productivity if dams limit habitat connectivity and inhibit fish passage. Our intent in this paper is to contrast the habitat use and production of juvenile salmon on expansive floodplains of two geomorphically similar salmon rivers: the Kol River in Kamchatka, Russia (no beavers) and the Kwethluk River in Alaska (abundant beavers), and thereby provide a case study on how beavers may influence salmonids in large floodplain rivers. We examined important rearing habitats in each floodplain, including springbrooks, beaver ponds, beaver-influenced springbrooks, and shallow shorelines of the river channel. Juvenile coho salmon dominated fish assemblages in all habitats in both rivers, but other species were present. Salmon density was similar in all habitat types in the Kol, but in the Kwethluk coho and Chinook densities were 3-12× lower in mid- and late-successional beaver ponds than in springbrook and main channel habitats. In the Kol, coho condition (length:weight ratio) was similar among habitats, but Chinook condition was highest in orthofluvial springbrooks. In the Kwethluk, Chinook condition was similar among habitats, but coho condition was lowest in the main channel versus other habitats (0.89 vs. 0.99-1.10). Densities of juvenile salmon were extremely low in beaver ponds located behind numerous dams in the orthofluvial zone of the Kwethluk River floodplain, whereas juvenile salmon were abundant in habitats throughout the entire floodplain in the Kol River. If beavers were not present on the Kwethluk, floodplain habitats would be fully interconnected

  1. Trends in Rocky Mountain amphibians and the role of beaver as a keystone species

    Science.gov (United States)

    Hossack, Blake R.; Gould, William R.; Patla, Debra A.; Muths, Erin L.; Daley, Rob; Legg, Kristin; Corn, P. Stephen

    2015-01-01

    Despite prevalent awareness of global amphibian declines, there is still little information on trends for many widespread species. To inform land managers of trends on protected landscapes and identify potential conservation strategies, we collected occurrence data for five wetland-breeding amphibian species in four national parks in the U.S. Rocky Mountains during 2002–2011. We used explicit dynamics models to estimate variation in annual occupancy, extinction, and colonization of wetlands according to summer drought and several biophysical characteristics (e.g., wetland size, elevation), including the influence of North American beaver (Castor canadensis). We found more declines in occupancy than increases, especially in Yellowstone and Grand Teton national parks (NP), where three of four species declined since 2002. However, most species in Rocky Mountain NP were too rare to include in our analysis, which likely reflects significant historical declines. Although beaver were uncommon, their creation or modification of wetlands was associated with higher colonization rates for 4 of 5 amphibian species, producing a 34% increase in occupancy in beaver-influenced wetlands compared to wetlands without beaver influence. Also, colonization rates and occupancy of boreal toads (Anaxyrus boreas) and Columbia spotted frogs (Rana luteiventris) were ⩾2 times higher in beaver-influenced wetlands. These strong relationships suggest management for beaver that fosters amphibian recovery could counter declines in some areas. Our data reinforce reports of widespread declines of formerly and currently common species, even in areas assumed to be protected from most forms of human disturbance, and demonstrate the close ecological association between beaver and wetland-dependent species.
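
    Explicit-dynamics occupancy models of the kind used here track wetland occupancy through annual colonization (gamma) and extinction (epsilon) probabilities, following the standard recursion psi[t+1] = psi[t]*(1 - epsilon) + (1 - psi[t])*gamma, whose equilibrium is gamma / (gamma + epsilon). The rates below are invented to mimic the reported pattern of higher colonization at beaver-influenced wetlands; they are not the study's estimates.

    # Standard dynamic-occupancy recursion and its equilibrium.
    # Rates are invented for illustration.
    def equilibrium_occupancy(gamma, eps):
        return gamma / (gamma + eps)

    def project(psi0, gamma, eps, years):
        psi = psi0
        for _ in range(years):
            psi = psi * (1 - eps) + (1 - psi) * gamma
        return psi

    print(equilibrium_occupancy(gamma=0.30, eps=0.10))   # beaver-influenced: 0.75
    print(equilibrium_occupancy(gamma=0.10, eps=0.10))   # no beaver influence: 0.50
    print(round(project(psi0=0.2, gamma=0.30, eps=0.10, years=10), 3))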

  2. Endoparasites of the European beaver (Castor fiber L. 1758 in north-eastern Poland

    Directory of Open Access Journals (Sweden)

    Demiaszkiewicz Aleksander W.

    2014-06-01

    Full Text Available Parasitological examinations after necropsies of 48 European beavers from the Podlaskie and Warmińsko-Mazurskie provinces were performed between April 2011 and November 2012. All helminths were isolated from the contents of the gastro-intestinal tract and their species were determined. In addition, blood and faecal samples were examined. All beavers were infected with six species of parasites. Stichorchis subtriquetrus trematodes were found in 93.7% of animals. They were localized mainly in the caecum, less often in the colon, and single juvenile parasites were found in the small intestine. The intensity of infection ranged from two to 893 parasites. Travassosius rufus nematodes (10-4336 specimens) were present in the stomach of 68.7% of the beavers. In the small intestine of four (8.3%) beavers, two to six specimens of Psilotrema castoris were found. This is the first record of this species in Poland and the third record of its discovery in the world. Furthermore, in the small intestine of one beaver, two Trichostrongylus capricola nematodes were detected. In the liver of one beaver, pathological changes caused by the hydatid cestode Echinococcus granulosus occurred. Inflammatory changes of the gastric mucosa caused by Travassosius rufus, and of the caecum caused by Stichorchis subtriquetrus, were observed. Coproscopy was performed with the use of the Baermann, flotation, and decantation methods. All results of the Baermann method were negative. Examinations with the flotation and decantation methods confirmed the necropsy findings. Using the flotation method, single oocysts of Eimeria sprehni were detected in one beaver. A blood test conducted by the Kingston and Morton method did not reveal the presence of protozoa or microfilariae.

  3. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  4. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
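
    To make the sort-plus-bit-array idea concrete, here is a much-simplified, single-machine sketch of an inequality join (all pairs with r.x < s.x and r.y > s.y); the function name and the count array standing in for IEJoin's compressed bit-array are illustrative assumptions, not the dissertation's published algorithm:

        from itertools import groupby

        def inequality_join_count(rows):
            """Count pairs (r, s) with r.x < s.x and r.y > s.y; rows = [(x, y), ...]."""
            y_values = sorted({y for _, y in rows})
            rank = {y: i for i, y in enumerate(y_values)}   # index rows by y for the array
            seen = [0] * len(y_values)                      # seen[i]: swept rows having the i-th smallest y

            matches = 0
            # Sweep in ascending x; grouping ties keeps the r.x < s.x comparison strict.
            for _, group in groupby(sorted(rows), key=lambda r: r[0]):
                group = list(group)
                for _, y in group:
                    matches += sum(seen[rank[y] + 1:])      # swept rows: smaller x, larger y
                for _, y in group:                          # mark the group as swept only afterwards
                    seen[rank[y]] += 1
            return matches

        print(inequality_join_count([(1, 5), (2, 3), (3, 4)]))  # -> 2

    The published IEJoin instead scans permutation arrays with hardware-friendly bit operations rather than the suffix sums above, which is where its reported order-of-magnitude speedups come from.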

  5. A SIMULINK environment for flight dynamics and control analysis: Application to the DHC-2 Beaver. Part 1: Implementation of a model library in SIMULINK. Part 2: Nonlinear analysis of the Beaver autopilot

    Science.gov (United States)

    Rauw, Marc O.

    1993-01-01

    The design of advanced Automatic Aircraft Control Systems (AACS's) can be improved upon considerably if the designer can access all models and tools required for control system design and analysis through a graphical user interface, from within one software environment. This MSc-thesis presents the first step in the development of such an environment, which is currently being done at the Section for Stability and Control of Delft University of Technology, Faculty of Aerospace Engineering. The environment is implemented within the commercially available software package MATLAB/SIMULINK. The report consists of two parts. Part 1 gives a detailed description of the AACS design environment. The heart of this environment is formed by the SIMULINK implementation of a nonlinear aircraft model in block-diagram format. The model has been worked out for the old laboratory aircraft of the Faculty, the DeHavilland DHC-2 'Beaver', but due to its modular structure, it can easily be adapted for other aircraft. Part 1 also describes MATLAB programs which can be applied for finding steady-state trimmed-flight conditions and for linearization of the aircraft model, and it shows how the built-in simulation routines of SIMULINK have been used for open-loop analysis of the aircraft dynamics. Apart from the implementation of the models and tools, a thorough treatment of the theoretical backgrounds is presented. Part 2 of this report presents a part of an autopilot design process for the 'Beaver' aircraft, which clearly demonstrates the power and flexibility of the AACS design environment from part 1. Evaluations of all longitudinal and lateral control laws by means of nonlinear simulations are treated in detail. A floppy disk containing all relevant MATLAB programs and SIMULINK models is provided as a supplement.
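
    As a language-neutral illustration of the trim-and-linearize step described above, the sketch below numerically linearizes a generic nonlinear state-space model x' = f(x, u) about a trim point using central differences; the toy dynamics and trim values are invented stand-ins, not the 'Beaver' model or the thesis's MATLAB tools:

        import numpy as np

        def linearize(f, x0, u0, eps=1e-6):
            """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
            n, m = len(x0), len(u0)
            A, B = np.zeros((n, n)), np.zeros((n, m))
            for j in range(n):
                dx = np.zeros(n)
                dx[j] = eps
                A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
            for j in range(m):
                du = np.zeros(m)
                du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
            return A, B

        def f(x, u):
            """Toy point-mass longitudinal dynamics: states (airspeed V, path angle gamma)."""
            V, gamma = x
            thrust, = u
            return np.array([thrust - 0.02 * V**2 - 9.81 * np.sin(gamma),
                             (0.001 * V**2 - 9.81 * np.cos(gamma)) / max(V, 1e-3)])

        # The chosen trim point satisfies f(x0, u0) = 0, so A and B describe small deviations.
        A, B = linearize(f, x0=np.array([np.sqrt(9810.0), 0.0]), u0=np.array([196.2]))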

  6. Big queues

    CERN Document Server

    Ganesh, Ayalvadi; Wischik, Damon

    2004-01-01

    Big Queues aims to give a simple and elegant account of how large deviations theory can be applied to queueing problems. Large deviations theory is a collection of powerful results and general techniques for studying rare events, and has been applied to queueing problems in a variety of ways. The strengths of large deviations theory are these: it is powerful enough that one can answer many questions which are hard to answer otherwise, and it is general enough that one can draw broad conclusions without relying on special case calculations.
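
    As a pointer to the machinery involved, the generic large deviations statement underlying such queueing estimates takes the standard Cramér form (textbook material, not this book's specific notation): for i.i.d. increments with partial sums S_n and rate function I,

        \Pr\left[ S_n / n \in A \right] \asymp \exp\Big( -n \inf_{x \in A} I(x) \Big),

    and for a stable single-server queue the stationary queue-length tail then typically decays exponentially,

        \lim_{b \to \infty} \frac{1}{b} \log \Pr[Q > b] = -\theta^{*},

    with the decay rate \theta^{*} determined by the arrival and service processes.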

  7. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  8. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that they become difficult to process using conventional data-handling applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. Larger data sets are needed, relative to smaller ones, to spot business trends, anticipate diseases, predict conflict, and so on. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. This paper surveys the Hadoop architecture, the different tools used for big data, and its security issues.

  9. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  10. Evaluating landowner-based beaver relocation as a tool to restore salmon habitat

    Directory of Open Access Journals (Sweden)

    V.M. Petro

    2015-01-01

    Full Text Available Relocating American beavers (Castor canadensis) from unwanted sites to desirable sites (i.e., where damage exceeds stakeholder capacity) has been posited as a method to enhance in-stream habitat for salmonids in the Pacific Northwest region of the US; however, no studies have evaluated this method. From September–December 2011, we trapped and relocated 38 nuisance beavers using guidelines available to Oregon landowners. Release sites were selected from models that identified high values of beaver dam habitat suitability and where dams would increase intrinsic potential of coho salmon (Oncorhynchus kisutch). Mean distance moved from release sites within 16 weeks post-release was 3.3 ± 0.2 (SE) stream km (max 29.2 km). Mean survival rate for relocated beavers was 0.47 ± 0.12 (95% CI: 0.26–0.69) for 16 weeks post-release, while the probabilities of an individual dying due to predation or disease/illness during the same period were 0.26 (95% CI: 0.09–0.43) and 0.16 (95% CI: 0.01–0.30), respectively. Dam construction was limited and ephemeral due to winter high flows, providing no in-stream habitat for coho. We conclude beaver relocation options available to landowners in Oregon may not be an effective option for stream restoration in coastal forestlands due to infrequent dam occurrence and short dam longevity.

  11. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seeds of new, valuable operational insights for private companies and public organizations. While the optimistic pronouncements are many, research on Big Data in the public sector has so far been limited. This article examines how the public health sector can reuse and exploit an ever-growing volume of data while taking public values into account. The article builds on a case study of the use of large volumes of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)use of data in new contexts involves a multifaceted weighing not only of economic rationales and quality considerations, but also of control over sensitive personal data and ethical implications for the citizen. In the DAMD case, data are on the one hand used "in the service of the good cause" to...

  12. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  13. Development of a reliable method for determining sex for a primitive rodent, the Point Arena mountain beaver (Aplodontia rufa nigra)

    Science.gov (United States)

    Kristine L. Pilgrim; William J. Zielinski; Fredrick V. Schlexer; Michael K. Schwartz

    2012-01-01

    The mountain beaver (Aplodontia rufa) is a primitive species of rodent, often considered a living fossil. The Point Arena mountain beaver (Aplodontia rufa nigra) is an endangered subspecies that occurs in a very restricted range in northern California. Efforts to recover this taxon have been limited by the lack of knowledge on their demography, particularly sex and age...

  14. Geology of the central Mineral Mountains, Beaver County, Utah

    Energy Technology Data Exchange (ETDEWEB)

    Sibbett, B.S.; Nielson, D.L.

    1980-03-01

    The Mineral Mountains are located in Beaver and Millard Counties, southwestern Utah. The range is a horst located in the transition zone between the Basin and Range and Colorado Plateau geologic provinces. A multiple-phase Tertiary pluton forms most of the range, with Paleozoic rocks exposed on the north and south and Precambrian metamorphic rocks on the west in the Roosevelt Hot Springs KGRA (Known Geothermal Resource Area). Precambrian banded gneiss and Cambrian carbonate rocks have been intruded by foliated granodioritic to monzonitic rocks of uncertain age. The Tertiary pluton consists of six major phases of quartz monzonitic to leucocratic granitic rocks, two diorite stocks, and several more mafic units that form dikes. During uplift of the mountain block, overlying rocks and the upper part of the pluton were partially removed by denudation faulting to the west. The interplay of these low-angle faults and younger northerly trending Basin and Range faults is responsible for the structural control of the Roosevelt Hot Springs geothermal system. The structural complexity of the Roosevelt Hot Springs KGRA is unique within the range, although the same tectonic style continues throughout the range. During the Quaternary, rhyolite volcanism was active in the central part of the range and basaltic volcanism occurred in the northern portion of the map area. The heat source for the geothermal system is probably related to the Quaternary rhyolite volcanic activity.

  15. Coupled stream and population dynamics: Modeling the role beaver (Castor canadensis) play in generating juvenile steelhead (Oncorhynchus mykiss) habitat

    Science.gov (United States)

    Jordan, C.; Bouwes, N.; Wheaton, J. M.; Pollock, M.

    2013-12-01

    Over the past several centuries, the population of North American Beaver has been dramatically reduced through fur trapping. As a result, the geomorphic impacts that long-term beaver occupancy and activity can have on fluvial systems have been lost, both from the landscape and from our collective memory, such that physical and biological models of floodplain system function neither consider nor have the capacity to incorporate the role beaver can play in structuring the dynamics of streams. Concomitant with the decline in beaver populations was an increasing pressure on streams and floodplains through human activity, placing numerous species of stream-rearing fishes in peril, most notably the ESA listing of trout and salmon populations across the entirety of the Western US. The rehabilitation of stream systems is seen as one of the primary means by which population and ecosystem recovery can be achieved, yet the methods of stream rehabilitation are applied almost exclusively with the expected outcome of a static idealized stream planform, occasionally with an acknowledgement of restoring processes rather than form, and only rarely with the goal of a beaver-dominated riverscape. We have constructed an individual-based model of trout and beaver populations that allows the exploration of fish population dynamics as a function of stream habitat quality and quantity. We based the simulation tool on Bridge Creek (John Day River basin, Oregon), where we have implemented a large-scale restoration experiment using wooden posts to provide beavers with stable platforms for dam building and to simulate the dams themselves. Extensive monitoring captured geomorphic and riparian changes, as well as fish and beaver population responses; information we use to parameterize the model as to the geomorphic and fish response to dam-building beavers. In the simulation environment, stream habitat quality and quantity can be manipulated directly through rehabilitation actions and indirectly
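
    In the spirit of the coupled model described above, a minimal sketch of the dam-to-habitat-to-fish feedback loop looks like the following (aggregated rather than individual-based, and every parameter is an invented placeholder, not a Bridge Creek value):

        import random

        def simulate(years=20, dams=5, fish=200, seed=1):
            """Toy feedback: beaver dams add rearing habitat; habitat caps fish survival."""
            rng = random.Random(seed)
            for _ in range(years):
                dams = max(0, dams + rng.randint(-2, 3))      # dams built vs. blown out by floods
                capacity = 100 + 40 * dams                    # habitat quantity rises with dams
                survival = min(0.9, capacity / max(fish, 1))  # density-dependent juvenile survival
                fish = int(fish * survival) + 30              # survivors plus fixed recruitment
            return dams, fish

        print(simulate())

    In the actual simulation tool, individual fish and beavers carry state and habitat is spatially explicit; the sketch only conveys the direction of the coupling.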

  16. Impact of beaver dams on abundance and distribution of anadromous salmonids in two lowland streams in Lithuania.

    Directory of Open Access Journals (Sweden)

    Tomas Virbickas

    Full Text Available European beaver dams impeded movements of anadromous salmonids, as established by fishing surveys, fish tagging, and redd counts in two lowland streams in Lithuania. Significant differences in abundances of other lithophilic fish species and in the evenness of representation by species in the community were detected upstream and downstream of the beaver dams. Sea trout parr marked with RFID tags passed through several successive beaver dams in the upstream direction, but no tagged fish were detected above the uppermost dam. An increase in abundances of salmonid parr in the stream between the beaver dams and a decrease below the dams were recorded in November, at the time of spawning of Atlantic salmon and sea trout, but no significant changes were detected in the sections upstream of the dams. After construction of several additional beaver dams in the downstream sections of the studied streams, abundance of Atlantic salmon parr downstream of the dams decreased considerably in comparison with that estimated before construction.

  17. The hydrological modeling in terms of determining the potential European beaver effect

    Directory of Open Access Journals (Sweden)

    Szostak Marta

    2017-06-01

    Full Text Available The objective of the paper was a hydrological analysis aimed at categorizing main watercourses (based on coupled catchments) and marking areas potentially affected by the occurrence and activities of the European beaver (Castor fiber). In the analysed area, the Głogów Małopolski Forest District, there is a population of about 200 beavers. Damage inflicted by beavers was detected on 33.0 ha of the Forest District, while on 13.9 ha the damage was small (below 10%). The monitoring of the beavers' behaviour and the analysis of their influence on the hydrology of the area became an important element of using geoinformation tools in the management of forest areas.

  18. Beaver-mediated lateral hydrologic connectivity, fluvial carbon and nutrient flux, and aquatic ecosystem metabolism

    Science.gov (United States)

    Wegener, Pam; Covino, Tim; Wohl, Ellen

    2017-06-01

    River networks that drain mountain landscapes alternate between narrow and wide valley segments. Within the wide segments, beaver activity can facilitate the development and maintenance of complex, multithread planform. Because the narrow segments have limited ability to retain water, carbon, and nutrients, the wide, multithread segments are likely important locations of retention. We evaluated hydrologic dynamics, nutrient flux, and aquatic ecosystem metabolism along two adjacent segments of a river network in the Rocky Mountains, Colorado: (1) a wide, multithread segment with beaver activity; and, (2) an adjacent (directly upstream) narrow, single-thread segment without beaver activity. We used a mass balance approach to determine the water, carbon, and nutrient source-sink behavior of each river segment across a range of flows. While the single-thread segment was consistently a source of water, carbon, and nitrogen, the beaver impacted multithread segment exhibited variable source-sink dynamics as a function of flow. Specifically, the multithread segment was a sink for water, carbon, and nutrients during high flows, and subsequently became a source as flows decreased. Shifts in river-floodplain hydrologic connectivity across flows related to higher and more variable aquatic ecosystem metabolism rates along the multithread relative to the single-thread segment. Our data suggest that beaver activity in wide valleys can create a physically complex hydrologic environment that can enhance hydrologic and biogeochemical buffering, and promote high rates of aquatic ecosystem metabolism. Given the widespread removal of beaver, determining the cumulative effects of these changes is a critical next step in restoring function in altered river networks.
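
    The mass balance bookkeeping behind such source-sink classification is simple in outline (a generic formulation, assumed here rather than the authors' exact equations): for a river segment with inflow and outflow discharges Q and solute concentrations C,

        \Delta M = \sum_{\mathrm{inflows}} Q_{\mathrm{in}} C_{\mathrm{in}} - \sum_{\mathrm{outflows}} Q_{\mathrm{out}} C_{\mathrm{out}},

    so \Delta M > 0 marks the segment as a sink (net retention) and \Delta M < 0 as a source over the measurement interval.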

  19. Bacterial and Archaeal Diversity in the Gastrointestinal Tract of the North American Beaver (Castor canadensis).

    Directory of Open Access Journals (Sweden)

    Robert J Gruninger

    Full Text Available The North American Beaver (Castor canadensis) is the second largest living rodent and an iconic symbol of Canada. The beaver is a semi-aquatic browser whose diet consists of lignocellulose from a variety of plants. The beaver is a hindgut fermenter and has an enlarged caecum that houses a complex microbiome. There have been few studies examining the microbial diversity in the gastrointestinal tract of hindgut-fermenting herbivores. To examine the bacterial and archaeal communities inhabiting the gastrointestinal tract of the beaver, the microbiome of the caecum and faeces was examined using culture-independent methods. DNA from the microbial community of the caecum and faeces of 4 adult beavers was extracted, and the 16S rRNA gene was sequenced using either bacterial or archaeal specific primers. A total of 1447 and 1435 unique bacterial OTUs were sequenced from the caecum and faeces, respectively. On average, the majority of OTUs within the caecum were classified as Bacteroidetes (49.2%) and Firmicutes (47.6%). The faeces were also dominated by OTUs from Bacteroidetes (36.8%) and Firmicutes (58.9%). The composition of the bacterial community was not significantly different among animals. The composition of the caecal and faecal microbiomes differed, but this difference is due to changes in the abundance of closely related OTUs, not to major differences in the taxonomic composition of the communities. Within these communities, known degraders of lignocellulose were identified. In contrast to the bacterial microbiome, the archaeal community was dominated by a single species of methanogen, Methanosphaera stadtmanae. The data presented here provide the first insight into the microbial community within the hindgut of the beaver.

  20. Oxygen isotopic determination of climatic variation using phosphate from beaver bone, tooth enamel, and dentine

    Science.gov (United States)

    Stuart-Williams, Hilary Le Q.; Schwarcz, Henry P.

    1997-06-01

    The δ18O of Canadian beaver (Castor canadensis) teeth should reflect variations in the isotopic composition of the water in which the beavers live, as their incisors grow rapidly and continuously. We observe seasonal variations in phosphate δ18O using samples of enamel taken along the length of single teeth. In the spring the δ18O of the enamel being deposited gradually declines, reflecting a retarded input of δ18O-depleted winter water. After mid-year, enamel δ18O is higher than average (as represented by the δ18O of bone phosphate from the same animal) and passes through a maximum in late summer or early fall. Overall, the amplitude of seasonal excursions in enamel δ18O (4‰) is much smaller than the expected summer-winter range in the δ18O of meteoric water (>10‰). This is because hydrologic mixing processes, gradual admixing of environmental water with beaver body water, long-term plant growth, and oxygen inputs of relatively constant value (particularly atmospheric oxygen) tend to even out summer-winter differences in the δ18O of oxygen inputs to the beaver. The δ18O of bone from adult beavers was uniform at 11.9 ± 0.5‰ over the study area. Analyses of a Sangamon-age giant beaver (Castoroides ohioensis) incisor from Hopwood Farm, Illinois, show a slightly larger 5.5‰ seasonal cycle of δ18O with an average enamel δ18O of 18‰. This suggests that average temperatures were warmer during the Sangamon than today and that seasonal temperature differences and/or relative humidity variations were larger.
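
    For reference, the delta notation used throughout is the standard isotope-ratio convention (a textbook definition, not specific to this study), expressed against a reference standard such as VSMOW:

        \delta^{18}\mathrm{O} = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000\text{‰}, \qquad R = {}^{18}\mathrm{O} / {}^{16}\mathrm{O}.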

  1. Using ecosystem engineers as tools in habitat restoration and rewilding: beaver and wetlands.

    Science.gov (United States)

    Law, Alan; Gaywood, Martin J; Jones, Kevin C; Ramsay, Paul; Willby, Nigel J

    2017-12-15

    Potential for habitat restoration is increasingly used as an argument for reintroducing ecosystem engineers. Beaver have well-known effects on hydromorphology through dam construction, but their scope to restore wetland biodiversity in areas degraded by agriculture is largely inferred. Our study presents the first formal monitoring of a planned beaver-assisted restoration, focussing on changes in vegetation over 12 years within an agriculturally-degraded fen following beaver release, based on repeated sampling of fixed plots. Effects are compared to ungrazed exclosures which allowed the wider influence of waterlogging to be separated from disturbance through tree felling and herbivory. After 12 years of beaver presence mean plant species richness had increased on average by 46% per plot, whilst the cumulative number of species recorded increased on average by 148%. Heterogeneity, measured by dissimilarity of plot composition, increased on average by 71%. Plants associated with high moisture and light conditions increased significantly in coverage, whereas species indicative of high nitrogen decreased. Areas exposed to both grazing and waterlogging generally showed the most pronounced change in composition, with effects of grazing seemingly additive, but secondary, to those of waterlogging. Our study illustrates that a well-known ecosystem engineer, the beaver, can with time transform agricultural land into a comparatively species-rich and heterogeneous wetland environment, thus meeting common restoration objectives. This offers a passive but innovative solution to the problems of wetland habitat loss that complements the value of beavers for water or sediment storage and flow attenuation. The role of larger herbivores has been significantly overlooked in our understanding of freshwater ecosystem function; the use of such species may yet emerge as the missing ingredient in successful restoration. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  2. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  3. The demographic response of bank-dwelling beavers to flow regulation: A comparison on the Green and Yampa rivers

    Science.gov (United States)

    Breck, S.W.; Wilson, K.R.; Andersen, D.C.

    2001-01-01

    We assessed the effects of flow regulation on the demography of beavers (Castor canadensis) by comparing the density, home-range size, and body size of bank-dwelling beavers on two sixth-order alluvial river systems, the flow-regulated Green River and the free-flowing Yampa River, from 1997 to 2000. Flow regulation on the Green River has altered fluvial geomorphic processes, influencing the availability of willow and cottonwood, which, in turn, has influenced the demography of beavers. Beaver density was higher on the Green River (0.5–0.6 colonies per kilometre of river) than on the Yampa River (0.35 colonies per kilometre of river). Adult and subadult beavers on the Green River were in better condition, as indicated by larger body mass and tail size. There was no detectable difference in home-range size, though there were areas on the Yampa River that no beavers used. We attribute the improved habitat quality on the Green River to a greater availability of willow. We suggest that the sandy flats and sandbars that form during base flows and the ice cover that forms over winter on the Yampa River increase the energy expended by beavers to obtain food, increase predation risk, and thus lower the effective availability of woody forage.

  4. A global review on the influence of beavers (Castor fiber, Castor canadensis) on river and floodplain dynamics

    Science.gov (United States)

    Larsen, Annegret; Lane, Stuart; Larsen, Joshua

    2017-04-01

    Beavers (Castor fiber, Castor canadensis) have the ability to actively engineer their habitat, which they can do most effectively in lower order streams and their floodplains. Hence, this engineering has the potential to alter the hydrology, geomorphology, biogeochemistry, and ecology of river systems and the feedbacks between them. The beaver is thus often referred to as an 'ecosystem engineer', a status reflected in its recognition as a key species when restoring ecosystems. This capacity to engineer low order streams also shapes a range of positive and negative perceptions of their influence. On the one hand they may be perceived as capable of undermining existing river engineering schemes and the land use of associated floodplains; on the other hand beavers may provide an alternative to traditional 'hard' engineering, potentially improving river restoration success. The aim of this review is to summarize research to date on the impacts of beavers on stream and floodplain hydrology, geomorphology, water quality and ecology, and the feedbacks between them. Our review shows that: (1) research has been focused heavily on North American streams, with far less research outside this North American context; (2) there is a tendency to investigate beaver impacts from the perspective of individual disciplines, to the detriment of considering broader process feedbacks, notably at the interface of hydro-geomorphology and riparian ecology; (3) it remains unclear to what extent beavers genuinely engineered streams prior to human impact, pointing to the need for longer term (millennium scale) studies on how beavers have changed river-floodplain systems. Crucially, we conclude that the investigation of the effects of beavers on streams and floodplains, especially in the longer term, and their use for river restoration can only be understood through the thorough investigation of antecedent hydro-geomorphic conditions which takes account of the ways in which beavers and humans

  5. Seasonal foraging responses of beavers to sodium-enhanced foods: An experimental assessment with field feeding trials

    Science.gov (United States)

    Strules, Jennifer; DeStefano, Stephen

    2015-01-01

    Salt drive is a seasonal phenomenon common to several classes of wild herbivores. Coincident with shifts in nutrient quality when plants resume growth in the spring, sodium is secondarily lost as surplus potassium is excreted. The beaver (Castor canadensis) is an herbivore whose dietary niche closely follows that of other herbivores that are subject to salt drive, but no published studies to date have assessed the likelihood of its occurrence. To determine whether beavers experience seasonal salt drive, we designed a field experiment to measure the foraging responses of beavers to sodium-enhanced foods. We used sodium-treated (salted) and control (no salt) food items (aspen [Populus tremuloides] and pine [Pinus spp.] sticks) during monthly feeding trials at beaver-occupied wetlands. If a conventional ontogeny of salt drive were operant, we expected to observe greater use of sodium-treated food items by beavers in May and June. Further, if water lilies (Nymphaea spp. and Nuphar spp.) supply beavers with sodium to meet dietary requirements, as is widely speculated, we expected foraging responses to sodium-treated food items at wetlands where water lilies were absent to be greater than at wetlands where water lily was present. Aspen was selected by beavers in significantly greater amounts than pine. There was no difference between the mean percent consumed of salted and control aspen sticks by beavers at lily and non-lily wetlands, and no differences in temporal consumption associated with salted or control pine sticks at either wetland type. Salted pine was consumed in greater amounts than unsalted pine. We propose that the gastrointestinal or renal physiology of beavers may preclude solute loss, thereby preventing salt drive.

  6. Landscape consequences of natural gas extraction in Beaver and Butler Counties, Pennsylvania, 2004-2010

    Science.gov (United States)

    Roig-Silva, Coral M.; Slonecker, E. Terry; Milheim, Lesley E.; Malizia, Alexander R.

    2013-01-01

    Increased demands for cleaner burning energy, coupled with the relatively recent technological advances in accessing unconventional hydrocarbon-rich geologic formations, have led to an intense effort to find and extract natural gas from various underground sources around the country. One of these sources, the Marcellus Shale, located in the Allegheny Plateau, is currently undergoing extensive drilling and production. The technology used to extract gas in the Marcellus Shale is known as hydraulic fracturing and has garnered much attention because of its use of large amounts of fresh water, its use of proprietary fluids for the hydraulic-fracturing process, its potential to release contaminants into the environment, and its potential effect on water resources. Nonetheless, development of natural gas extraction wells in the Marcellus Shale is only part of the overall natural gas story in this area of Pennsylvania. Conventional natural gas wells, which sometimes use the same technique, are commonly located in the same general area as the Marcellus Shale and are frequently developed in clusters across the landscape. The combined effects of these two natural gas extraction methods create potentially serious patterns of disturbance on the landscape. This document quantifies the landscape changes and consequences of natural gas extraction for Beaver County and Butler County in Pennsylvania between 2004 and 2010. Patterns of landscape disturbance related to natural gas extraction activities were collected and digitized using National Agriculture Imagery Program (NAIP) imagery for 2004, 2005/2006, 2008, and 2010. The disturbance patterns were then used to measure changes in land cover and land use using the National Land Cover Database (NLCD) of 2001. A series of landscape metrics is also used to quantify these changes and is included in this publication.

  7. 77 FR 62147 - Approval and Promulgation of Air Quality Implementation Plans; Pennsylvania; Pittsburgh-Beaver...

    Science.gov (United States)

    2012-10-12

    ... the 1997 annual PM 2.5 NAAQS based upon air quality monitoring data for calendar years 2001-2003 (70... determinations regarding the Pittsburgh- Beaver Valley fine particulate matter (PM 2.5 ) nonattainment area... attained the 1997 annual PM 2.5 National Ambient Air Quality Standard (NAAQS). This determination of...

  8. 77 FR 34297 - Approval and Promulgation of Air Quality Implementation Plans; Pennsylvania; Pittsburgh-Beaver...

    Science.gov (United States)

    2012-06-11

    ... PM 2.5 NAAQS based upon air quality monitoring data for calendar years 2001-2003 (70 FR 944). These... make two determinations regarding the Pittsburgh-Beaver Valley fine particulate matter (PM 2.5... to determine that the Area has attained the 1997 annual PM 2.5 National Ambient Air Quality Standard...

  9. Estimating abundance and survival in the endangered Point Arena Mountain beaver using noninvasive genetic methods

    Science.gov (United States)

    William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz

    2013-01-01

    The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...

  10. Reproductive characteristics of the Point Arena mountain beaver (Aplodontia rufa nigra)

    Science.gov (United States)

    William Zielinski; M. J. Mazurek

    2016-01-01

    Little is known about the ecology and life history of the federally endangered Point Arena mountain beaver (Aplodontia rufa nigra). The distribution of this primitive burrowing rodent is disjunct from the balance of the species’ range and occurs in a unique maritime environment of coastal grasslands and forests. Fundamental to protecting this taxon...

  11. 75 FR 77826 - White River National Forest; Eagle County, CO; Beaver Creek Mountain Improvements

    Science.gov (United States)

    2010-12-14

    ... to provide a world class venue for Alpine ski events--a key goal of the MDP. DATES: Comments... focuses on the actions necessary for Beaver Creek to host Alpine ski racing events. However, some elements... and guest service needs that are not specifically related to Alpine ski racing. These projects were...

  12. Infectious diseases as main causes of mortality to beavers Castor fiber after translocation to the Netherlands

    NARCIS (Netherlands)

    Nolet, B.A.; Broekhuizen, S.; Dorrestein, G.M.; Rienks, K.M.

    1997-01-01

    Between 1988 and 1994, 58 beavers were translocated from the Elbe region (Germany) to the Netherlands. In 43 animals, radio-transmitters were implanted with a pulse interval which was dependent on body temperature; subsequently, 22 of the released animals were found dead and the cause of death was

  13. Influence of flooding, freezing, and American beaver herbivory on survival of planted oak seedlings

    Science.gov (United States)

    Johnathan T. Reeves; Andrew W. Ezell; John D. Hodges; Emily B. Schultz; Andrew B. Self

    2016-01-01

    Good seedlings, proper planting, and competition control normally result in successful hardwood planting. However, other factors can have serious impact on planting success, such as the impact of flooding, freezing, and the American beaver (Castor canadensis). In 2014, three planting stocks of Nuttall oak (Quercus nuttallii) and Shumard oak (

  14. Simulated winter browsing may lead to induced susceptibility of willows to beavers in spring

    NARCIS (Netherlands)

    Veraart, A.J.; Nolet, B.A.; Rosell, F.; De Vries, Peter

    2006-01-01

    Browsing may lead to an induced resistance or susceptibility of the plant to the herbivore. We tested the effect of winter browsing by Eurasian beavers (Castor fiber L., 1758) on food quality of holme willows (Salix dasyclados Wimm.) in and after the following growth season. Shrubs were pruned in

  15. Aeromagnetic map of the Wet Beaver Roadless Area, Yavapai and Coconino counties, Arizona

    Science.gov (United States)

    Martin, R.A.

    1986-01-01

    The Wet Beaver Roadless Area includes 9,890 acres (15.4 mi²) of the Coconino National Forest and is in T. 15 N., Rs. 6, 7, and 8 E., Yavapai and Coconino Counties, central Arizona. Camp Verde, the nearest major population center, is about 13 mi southwest of the roadless area.

  16. Hunting Plan : Big Stone National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Big Stone National Wildlife Refuge Hunting Plan provides guidance for the management of hunting on the refuge. Hunting program objectives include providing a...

  17. De Novo Genome and Transcriptome Assembly of the Canadian Beaver (Castor canadensis).

    Science.gov (United States)

    Lok, Si; Paton, Tara A; Wang, Zhuozhi; Kaur, Gaganjot; Walker, Susan; Yuen, Ryan K C; Sung, Wilson W L; Whitney, Joseph; Buchanan, Janet A; Trost, Brett; Singh, Naina; Apresto, Beverly; Chen, Nan; Coole, Matthew; Dawson, Travis J; Ho, Karen; Hu, Zhizhou; Pullenayegum, Sanjeev; Samler, Kozue; Shipstone, Arun; Tsoi, Fiona; Wang, Ting; Pereira, Sergio L; Rostami, Pirooz; Ryan, Carol Ann; Tong, Amy Hin Yan; Ng, Karen; Sundaravadanam, Yogi; Simpson, Jared T; Lim, Burton K; Engstrom, Mark D; Dutton, Christopher J; Kerr, Kevin C R; Franke, Maria; Rapley, William; Wintle, Richard F; Scherer, Stephen W

    2017-02-09

    The Canadian beaver (Castor canadensis) is the largest indigenous rodent in North America. We report a draft annotated assembly of the beaver genome, the first for a large rodent and the first mammalian genome assembled directly from uncorrected and moderate coverage (<30×) long reads generated by single-molecule sequencing. The genome size is 2.7 Gb estimated by k-mer analysis. We assembled the beaver genome using the new Canu assembler optimized for noisy reads. The resulting assembly was refined using Pilon supported by short reads (80×) and checked for accuracy by congruency against an independent short read assembly. We scaffolded the assembly using the exon-gene models derived from 9805 full-length open reading frames (FL-ORFs) constructed from the beaver leukocyte and muscle transcriptomes. The final assembly comprised 22,515 contigs with an N50 of 278,680 bp and an N50-scaffold of 317,558 bp. Maximum contig and scaffold lengths were 3.3 and 4.2 Mb, respectively, with a combined scaffold length representing 92% of the estimated genome size. The completeness and accuracy of the scaffold assembly was demonstrated by the precise exon placement for 91.1% of the 9805 assembled FL-ORFs and 83.1% of the BUSCO (Benchmarking Universal Single-Copy Orthologs) gene set used to assess the quality of genome assemblies. Well-represented were genes involved in dentition and enamel deposition, defining characteristics of rodents with which the beaver is well-endowed. The study provides insights for genome assembly and an important genomics resource for Castoridae and rodent evolutionary biology. Copyright © 2017 Lok et al.

  18. De Novo Genome and Transcriptome Assembly of the Canadian Beaver (Castor canadensis)

    Directory of Open Access Journals (Sweden)

    Si Lok

    2017-02-01

    Full Text Available The Canadian beaver (Castor canadensis) is the largest indigenous rodent in North America. We report a draft annotated assembly of the beaver genome, the first for a large rodent and the first mammalian genome assembled directly from uncorrected and moderate coverage (<30×) long reads generated by single-molecule sequencing. The genome size is 2.7 Gb estimated by k-mer analysis. We assembled the beaver genome using the new Canu assembler optimized for noisy reads. The resulting assembly was refined using Pilon supported by short reads (80×) and checked for accuracy by congruency against an independent short read assembly. We scaffolded the assembly using the exon–gene models derived from 9805 full-length open reading frames (FL-ORFs) constructed from the beaver leukocyte and muscle transcriptomes. The final assembly comprised 22,515 contigs with an N50 of 278,680 bp and an N50-scaffold of 317,558 bp. Maximum contig and scaffold lengths were 3.3 and 4.2 Mb, respectively, with a combined scaffold length representing 92% of the estimated genome size. The completeness and accuracy of the scaffold assembly was demonstrated by the precise exon placement for 91.1% of the 9805 assembled FL-ORFs and 83.1% of the BUSCO (Benchmarking Universal Single-Copy Orthologs) gene set used to assess the quality of genome assemblies. Well-represented were genes involved in dentition and enamel deposition, defining characteristics of rodents with which the beaver is well-endowed. The study provides insights for genome assembly and an important genomics resource for Castoridae and rodent evolutionary biology.
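
    Both records lean on contig and scaffold N50 values; since the metric carries much of the claim, here is the standard way it is computed (the contig lengths below are made up for illustration):

        def n50(lengths):
            """Smallest length L such that contigs of length >= L hold half the total bases."""
            half = sum(lengths) / 2
            running = 0
            for length in sorted(lengths, reverse=True):
                running += length
                if running >= half:
                    return length

        print(n50([3_300_000, 278_680, 100_000, 50_000]))  # -> 3300000 for these toy lengths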

  19. Big Brothers/Big Sisters: A Study of Volunteer Recruitment and Screening.

    Science.gov (United States)

    Roaf, Phoebe A.; And Others

    Since 1988, Public/Private Ventures of Philadelphia (Pennsylvania) has been conducting a series of studies of mentoring programs for at-risk youth. As part of this effort, the recruitment and screening procedures used by Big Brother/Big Sister (BB/BS) agencies were studied in eight cities. Recruitment for the high-profile BB/BS agencies is not as…

  20. Beaver dams, hydrological thresholds, and controlled floods as a management tool in a desert riverine ecosystem, Bill Williams River, Arizona

    Science.gov (United States)

    Andersen, D.C.; Shafroth, P.B.

    2010-01-01

    Beaver convert lotic stream habitat to lentic through dam construction, and the process is reversed when a flood or other event causes dam failure. We investigated both processes on a regulated Sonoran Desert stream, using an average current velocity criterion to distinguish lentic from lotic habitat; beaver dam and flood attributes control the probability of major dam damage at low (attenuated) flood magnitude. We conclude that environmental flows prescribed to sustain desert riparian forest will also reduce beaver-created lentic habitat in a non-linear manner determined by both beaver dam and flood attributes. Consideration of both desirable and undesirable consequences of ecological engineering by beaver is important when optimizing environmental flows to meet ecological and socioeconomic goals. © 2010 John Wiley & Sons, Ltd.

  1. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  2. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. However, more recently, healthcare researchers are exposing the potential harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to more improved healthcare outcomes than Big Data methods. In sum, Big Data may cause more problems for the healthcare industry than it solves, and, in short, when it comes to the use of data in healthcare, "size isn't everything."

  3. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  4. Five Big Ideas

    Science.gov (United States)

    Morgan, Debbie

    2012-01-01

    Designing quality continuing professional development (CPD) for those teaching mathematics in primary schools is a challenge. If the CPD is to be built on the scaffold of five big ideas in mathematics, what might be these five big ideas? Might it just be a case of, if you tell me your five big ideas, then I'll tell you mine? Here, there is…

  5. Big Data Analytics

    Indian Academy of Sciences (India)

    But analysing massive amounts of data available in the Internet has the potential of impinging on our privacy. Inappropriate analysis of big data can lead to misleading conclusions. In this article, we explain what is big data, how it is analysed, and give some case studies illustrating the potentials and pitfalls of big data analytics ...

  6. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  7. Using occupancy models to accommodate uncertainty in the interpretation of aerial photograph data: status of beaver in Central Oregon, USA

    Science.gov (United States)

    Pearl, Christopher A.; Adams, Michael J.; Haggerty, Patricia K.; Urban, Leslie

    2015-01-01

    Beavers (Castor canadensis) influence habitat for many species and pose challenges in developed landscapes. They are increasingly viewed as a cost-efficient means of riparian habitat restoration and water storage. Still, information on their status is rare, particularly in western North America. We used aerial photography to evaluate changes in beaver occupancy between 1942–1968 and 2009 in upper portions of 2 large watersheds in Oregon, USA. We used multiple observers and occupancy modeling to account for bias related to photo quality, observers, and imperfect detection of beaver impoundments. Our analysis suggested a slightly higher rate of beaver occupancy in the upper Deschutes than the upper Klamath basin. We found weak evidence for beaver increases in the west and declines in eastern parts of the study area. Our study presents a method for dealing with observer variation in photo interpretation and provides the first assessment of the extent of beaver influence in 2 basins with major water-use challenges. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
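
    For orientation, the single-season kernel of such occupancy models (the standard MacKenzie-style formulation; the multi-observer, two-era models used here elaborate on it) handles imperfect detection as follows. For a site surveyed T times with detections y_t, occupancy probability \psi, and per-survey detection probability p:

        \Pr[\text{history with at least one detection}] = \psi \prod_{t=1}^{T} p^{y_t} (1-p)^{1-y_t},

        \Pr[\text{all-zero history}] = \psi \prod_{t=1}^{T} (1-p) + (1 - \psi).

    The all-zero case mixes "occupied but missed" with "truly unoccupied", which is exactly the ambiguity that repeat interpretation by multiple observers is meant to resolve.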

  8. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  9. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  10. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis, and visualization. However, technological resources, people, and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by three dimensions: technology, people, and processes. Hence, this article discusses these dimensions: the technological dimension, related to the storage, analytics, and visualization of big data; the human dimension, covering the people aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.

  11. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
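
    The MapReduce schema the paper describes can be illustrated in a few lines of single-process Python (the chunks below stand in for HDFS blocks; Hadoop's contribution is distributing exactly this pattern, with fault tolerance, across a cluster):

        from collections import Counter
        from functools import reduce

        def map_phase(chunk):
            """Map: emit (word, count) pairs, pre-aggregated per chunk like a combiner."""
            return Counter(chunk.split())

        def reduce_phase(acc, partial):
            """Reduce: merge partial counts by key."""
            acc.update(partial)   # Counter.update adds counts rather than replacing them
            return acc

        chunks = ["big data big queues", "networking for big data"]  # stand-ins for HDFS blocks
        totals = reduce(reduce_phase, map(map_phase, chunks), Counter())
        print(totals.most_common(3))

    Spark keeps the same programming model but holds intermediate results in memory across stages, which is the main source of the speedups noted for the more recent solutions.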

  12. Simulating the effects of a beaver dam on regional groundwater flow through a wetland

    Directory of Open Access Journals (Sweden)

    Kathleen Feiner

    2015-09-01

    New hydrological insights for the region: The construction of a beaver dam resulted in minimal changes to regional groundwater flow paths at this site, which is attributed to a clay unit underlying the peat, disconnecting this wetland from regional groundwater flow. However, groundwater discharge from the wetland pond increased by 90%. Simulating a scenario with the numerical model in which the wetland is connected to regional groundwater flow results in a much larger impact on flow paths. In the absence of the clay layer, the simulated construction of a beaver dam causes a 70% increase in groundwater discharge from the wetland pond and increases the surface area of both the capture zone and the discharge zone by 30% and 80%, respectively.
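
    The abstract does not name the code used, but numerical models of this kind conventionally solve the saturated groundwater flow equation (a standard formulation, assumed here), with the dam entering as a raised head boundary and the clay unit as a low-conductivity layer:

        \frac{\partial}{\partial x}\left(K_x \frac{\partial h}{\partial x}\right) + \frac{\partial}{\partial y}\left(K_y \frac{\partial h}{\partial y}\right) + \frac{\partial}{\partial z}\left(K_z \frac{\partial h}{\partial z}\right) + W = 0,

    where h is hydraulic head, K_{x,y,z} are hydraulic conductivities, and W represents sources and sinks at steady state.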

  13. Beavers indicate metal pollution away from industrial centers in northeastern Poland.

    Science.gov (United States)

    Giżejewska, Aleksandra; Spodniewska, Anna; Barski, Dariusz; Fattebert, Julien

    2015-03-01

    Heavy metals are persistent environmental contaminants, and wild animals are increasingly exposed to the harmful effects of compounds of anthropogenic origin, even in areas distant from industrial centers. We used atomic absorption spectrometry to determine levels of cadmium (Cd), lead (Pb), copper (Cu), and zinc (Zn) in the liver and kidney of wild Eurasian beavers (Castor fiber) in Poland. Cd concentrations in liver (0.21 ± 0.44 μg/g) and in kidney (2.81 ± 4.52 μg/g) were lower in juvenile than in adult beavers. Pb concentrations in liver (0.08 ± 0.03 μg/g) and kidney (0.08 ± 0.03 μg/g) were similar among all individuals, while both Cu and Zn levels were higher in liver (Cu 9.2 ± 4.5 μg/g; Zn 35.7 ± 3.5 μg/g) than in kidney (Cu 3.7 ± 1.1 μg/g; Zn 21.5 ± 2.7 μg/g). Cu levels also differed between juveniles and adults. We reviewed the literature reporting metal concentrations in beavers. Our results indicate metal contamination in beavers away from important industrial emission sources and suggest the natural environment should be regularly monitored to ensure metal levels remain below recommended legal values.

  14. Big fundamental groups: generalizing homotopy and big homotopy

    OpenAIRE

    Penrod, Keith

    2014-01-01

    The concept of big homotopy theory was introduced by J. Cannon and G. Conner using big intervals of arbitrarily large cardinality to detect big loops. We find, for each space, a canonical cardinal that is sufficient to detect all big loops and all big homotopies in the space.

  15. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply did not exist. Large volumes of data allow organizations to tap in real time the full potential of all the internal and external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  16. Matrix Big Brunch

    OpenAIRE

    Bedford, J; Papageorgakis, C.; Rodriguez-Gomez, D.; Ward, J.

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  17. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervantes-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  18. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is the use of huge quantities of data to make better predictions by identifying patterns in the data, rather than by trying to understand the underlying causes in more detail. It highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  19. Considerations regarding the occurrence of the Eurasian Beaver (Castor fiber Linnaeus 1758) in the Danube Delta (Romania)

    Directory of Open Access Journals (Sweden)

    ALEXE Vasile

    2012-09-01

    Full Text Available Under its original Romanian name, breb, the Eurasian beaver (Castor fiber), extinct in Romania for almost two centuries and reintroduced in some areas of the country, is at present better known by the name of its North American relative, the beaver. In recent decades the species has been reintroduced into old habitats from which it had disappeared, mainly under the effect of human pressure. Since 1998, reintroduction actions have taken place in many areas of Romania, the closest one to the Danube Delta being the lower course of the Ialomita River. Until 2011, no epigraphic or palaeozoological evidence of the presence of this mammal in the present-day delta had been found, except along the Lower Danube up to Isaccea and near the Dobrogea Plateau in the Murighiol area; the last palaeontological evidence dates from the early medieval period. Until now, the present-day delta was considered unsuitable territory for the Eurasian beaver because of its strongly fluctuating water levels. In April 2011, however, the spontaneous appearance of the Eurasian beaver near Maliuc was proved by a specimen killed by poachers, and in July 2011 a beaver injured in a collision with a boat was found and scientifically investigated. Future observations will have to document whether this mammal is extending its habitat here or remains an erratic occurrence. Should spontaneous colonization succeed, its consequences and effects on the environment in general and on biodiversity in particular will need to be monitored.

  20. The Use of Acceleration to Code for Animal Behaviours; A Case Study in Free-Ranging Eurasian Beavers Castor fiber.

    Directory of Open Access Journals (Sweden)

    Patricia M Graf

    Full Text Available Recent technological innovations have led to the development of miniature, accelerometer-containing electronic loggers which can be attached to free-living animals. Accelerometers provide information on both body posture and dynamism which can be used as descriptors to define behaviour. We deployed tri-axial accelerometer loggers on 12 free-ranging Eurasian beavers Castor fiber in the county of Telemark, Norway, and on four captive beavers (two Eurasian beavers and two North American beavers, C. canadensis) to corroborate acceleration signals with observed behaviours. By using random forests for classifying behavioural patterns of beavers from accelerometry data, we were able to distinguish seven behaviours: standing, walking, swimming, feeding, grooming, diving and sleeping. We show how to apply the use of acceleration to determine behaviour, and emphasise the ease with which this non-invasive method can be implemented. Furthermore, we discuss the strengths and weaknesses of this approach and of implementing accelerometry on animals, illustrating limitations, suggestions and solutions. Ultimately, this approach may also serve as a template facilitating studies on other animals with similar locomotor modes and deliver new insights into hitherto unknown aspects of behavioural ecology.
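
    A minimal sketch of the general approach, window summary features fed to a random forest, might look as follows in Python with scikit-learn; the sampling rate, window length and feature choices are assumptions for illustration, not the authors' actual pipeline:

    ```python
    # Illustrative sketch: classify behaviours from tri-axial acceleration
    # with a random forest. Not the authors' pipeline; sampling rate, window
    # length and features (per-axis mean and standard deviation) are assumed.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def window_features(xyz, fs=10, win_s=2):
        """Summarise raw (n, 3) acceleration into per-window features:
        per-axis mean (posture) and standard deviation (dynamism)."""
        step = fs * win_s
        feats = []
        for i in range(0, len(xyz) - step + 1, step):
            w = xyz[i:i + step]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        return np.array(feats)

    # Hypothetical labelled data: random acceleration and one of the seven
    # behaviours per window (real data would come from the loggers).
    rng = np.random.default_rng(0)
    X = window_features(rng.normal(size=(2000, 3)))
    y = rng.integers(0, 7, size=len(X))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```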

  1. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

    Full Text Available These days more and more people talk about Big Data, Hadoop, NoSQL and so on, but very few technical people have the expertise and knowledge needed to work with those concepts and technologies. The present paper explains one of the concepts that stands behind two of those keywords: the MapReduce concept. The MapReduce model is what makes Big Data and Hadoop so powerful, fast, and diverse for business process optimization. MapReduce is a programming model, with an accompanying implementation, built to process and generate large data sets. In addition, the paper presents the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications. The concepts and technologies behind big data let organizations reach a variety of objectives; like other new information technologies, the most important objective of big data technology is to bring dramatic cost reduction.
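
    As a toy illustration of the MapReduce model described above (plain Python, not tied to any particular framework), the three phases look like this:

    ```python
    # Toy, single-process illustration of the MapReduce model: map emits
    # (key, value) pairs, a shuffle groups them by key, and reduce folds
    # each group into a result. Frameworks distribute these same phases.
    from collections import defaultdict

    def map_phase(record):
        for word in record.split():
            yield (word.lower(), 1)

    def shuffle(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        return key, sum(values)

    records = ["Big Data and Hadoop", "Hadoop runs MapReduce", "Big clusters"]
    pairs = (kv for rec in records for kv in map_phase(rec))
    result = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
    print(result)  # e.g. {'big': 2, 'data': 1, 'hadoop': 2, ...}
    ```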

  2. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of record-keeping, and IT-competent employees and customers, which make a leading position possible, but only if companies ready themselves for the next big data wave.

  3. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers...... and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...... inquiry, such as video ethnography, ethnovideo, performance documentation, anthropology and multimodal interaction analysis. That is why we put forward, half-jokingly at first, a Big Video manifesto to spur innovation in the Digital Humanities....

  4. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  5. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  6. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  7. Practical aspects of registering the transformation of a river valley by beavers using terrestrial laser scanning

    Science.gov (United States)

    Tyszkowski, Sebastian; Błaszkiewicz, Mirosław; Brykała, Dariusz; Gierszewski, Piotr; Kaczmarek, Halina; Kordowski, Jarosław; Słowiński, Michał

    2016-04-01

    Activity of beavers (Castor fiber) often significantly affects the environment in which they live. The most commonly observed effect of their presence is the construction of beaver dams and the formation of a pond upstream. However, in the case of a sudden dam break and beaver pond drainage, the valley below the dam may also undergo remodelling. The nature and magnitude of these changes depend on the quantity of water and its energy as well as on the geological structure of the valley. The effects of such events can be riverbank erosion and the deposition of the displaced erosion products in the form of sandbars or fans. The material can also accumulate in local depressions or be delivered to water bodies. Such events may occur multiple times in the same area, and to assess their impact on the environment it is important to quantify the displaced material. The study of such transformations was performed in the small valley of the Struga Czechowska river (Tuchola Pinewood Forest, Poland). The valley is cut mainly into sands and gravels, and its steep banks are overgrown with bushes and trees. The assessment of changes in morphology was based on a beaver pond drainage event in 2015. The study uses measurements from terrestrial laser scanning (Riegl VZ-4000 scanner) performed before and after the event. Each of the two models obtained for comparison was made up of more than 20 measurement stations. Point clouds were joined by Multi-Station Adjustment without placing any reference objects in the terrain. During the measurements, attention was paid to changes in the morphology of both the riverbed and the surrounding valley. The paper presents examples of the recorded changes as well as the measurement procedure. Moreover, aspects of fieldwork and issues related to post-processing, such as merging and filtering of point clouds and detection of changes, are also presented. This study is a contribution to the Virtual Institute of
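
    A generic sketch of the change-detection step, gridding each survey's point cloud to a DEM and differencing the surfaces, is shown below; this is not the authors' Riegl/Multi-Station Adjustment workflow, and the cell size and level-of-detection threshold are assumptions:

    ```python
    # Generic topographic change detection between two surveys: grid each
    # point cloud (x, y, z) to a mean-elevation DEM on a shared grid, then
    # difference the surfaces. Cell size and level of detection are assumed.
    import numpy as np

    def rasterize(points, bounds, cell=0.5):
        """Grid an (n, 3) point cloud to a mean-elevation DEM; cells with
        no points become NaN. `bounds` = (x0, y0, x1, y1), shared by both
        surveys so the DEMs are directly comparable."""
        x0, y0, x1, y1 = bounds
        nx = int(np.ceil((x1 - x0) / cell))
        ny = int(np.ceil((y1 - y0) / cell))
        ix = np.clip(((points[:, 0] - x0) / cell).astype(int), 0, nx - 1)
        iy = np.clip(((points[:, 1] - y0) / cell).astype(int), 0, ny - 1)
        z_sum = np.zeros((ny, nx))
        n = np.zeros((ny, nx))
        np.add.at(z_sum, (iy, ix), points[:, 2])
        np.add.at(n, (iy, ix), 1)
        with np.errstate(invalid="ignore"):
            return np.where(n > 0, z_sum / n, np.nan)

    def volume_change(dem_before, dem_after, cell=0.5, lod=0.05):
        """Net volume change (m^3), ignoring |dz| below the level of
        detection `lod` (m) to suppress survey noise."""
        dz = dem_after - dem_before
        dz = np.where(np.abs(dz) < lod, 0.0, dz)
        return np.nansum(dz) * cell * cell
    ```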

  8. Depositional Model for a Beaver Pond Complex in a Piedmont Stream Valley

    Science.gov (United States)

    Rachide, S. P.; Diemer, J. A.

    2016-12-01

    The North American beaver (Castor canadensis) often constructs closely spaced ponds on suitable sections of streams. Over time, avulsive processes form an intricate network of shallow anabranching streams that connect a multitude of ponds, referred to here as a beaver pond complex (BPC). According to Polvi and Wohl (2012), BPCs evolve through four stages. Their model, resulting from the study of an alpine stream dominated by meltwater, focused on changes in valley planform, increased channel complexity, and the effects of floodplain sediment storage during the growth of a BPC. A revised depositional model for a multi-generation BPC is presented here, based on data collected at Mill Creek, in the Piedmont of North Carolina. The field data, comprising geolocated maps, cores and probes, were collected between March 2011 and October 2014 and were supplemented by 12 aerial images of the field area taken from January 1993 to October 2014. The revised BPC depositional model comprises several stages of development that include initial beaver occupation, followed by an interval of fast-paced pond creation and increasing channel complexity, a stage of slowing growth of the BPC, followed by cycles of dam breaching and repair, and an abandonment phase. An additional stage may include later reoccupation of an abandoned BPC to produce a multi-generation BPC. The areas affected by, and durations of, these stages are likely dictated by several variables including: geologic controls, local climate, hydrologic regime, beaver population size, and resource availability. Rare large precipitation events that led to the breaching of dams were likely the most impactful variable in the evolution of the Mill Creek BPC. Field observations indicate that breaching events affected the Mill Creek BPC more frequently than could be inferred from the 12 aerial images of the site. Therefore, aerial imagery should be used in conjunction with field-based investigations in order to more accurately

  9. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that has become impossible to imagine our society without. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are often mentioned in relation to big data entail? As an introduction to

  10. BigDog

    Science.gov (United States)

    Playter, R.; Buehler, M.; Raibert, M.

    2006-05-01

    BigDog's goal is to be the world's most advanced quadruped robot for outdoor applications. BigDog is aimed at the mission of a mechanical mule - a category with few competitors to date: power autonomous quadrupeds capable of carrying significant payloads, operating outdoors, with static and dynamic mobility, and fully integrated sensing. BigDog is about 1 m tall, 1 m long and 0.3 m wide, and weighs about 90 kg. BigDog has demonstrated walking and trotting gaits, as well as standing up and sitting down. Since its creation in the fall of 2004, BigDog has logged tens of hours of walking, climbing and running time. It has walked up and down 25 & 35 degree inclines and trotted at speeds up to 1.8 m/s. BigDog has walked at 0.7 m/s over loose rock beds and carried over 50 kg of payload. We are currently working to expand BigDog's rough terrain mobility through the creation of robust locomotion strategies and terrain sensing capabilities.

  11. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  12. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon comes a need for new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. This growth creates a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle against the volume, velocity, and variety of data generated continuously, much of it created by the Internet of Things (cameras, satellites, cars, GPS navigation, etc.). The challenge is to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years, and it is increasingly recognized in the business world and in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. It also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.

  13. Assessment of conservation easements, total phosphorus, and total suspended solids in West Fork Beaver Creek, Minnesota, 1999-2012

    Science.gov (United States)

    Christensen, Victoria G.; Kieta, Kristen A.

    2014-01-01

    This study examined conservation easements and their effectiveness at reducing phosphorus and solids transport to streams. The U.S. Geological Survey cooperated with the Minnesota Board of Water and Soil Resources and worked collaboratively with the Hawk Creek Watershed Project to examine the West Fork Beaver Creek Basin in Renville County, which has the largest number of Reinvest In Minnesota land retirement contracts in the State (as of 2013). Among all conservation easement programs, a total of 24,218 acres of agricultural land were retired throughout Renville County, and 2,718 acres were retired in the West Fork Beaver Creek Basin from 1987 through 2012. Total land retirement increased steadily from 1987 until 2000. In 2000, land retirement increased sharply because of the Minnesota River Conservation Reserve Enhancement Program, then leveled off when the program ended in 2002. Streamflow data were collected during 1999 through 2011, and total phosphorus and total suspended solids data were collected during 1999 through 2012. During this period, the highest peak streamflow of 1,320 cubic feet per second was in March 2010. Total phosphorus and total suspended solids are constituents that tend to increase with increases in streamflow. Annual flow-weighted mean total-phosphorus concentrations ranged from 0.140 to 0.759 milligrams per liter, and annual flow-weighted mean total suspended solids concentrations ranged from 21.3 to 217 milligrams per liter. Annual flow-weighted mean total phosphorus and total suspended solids concentrations decreased steadily during the first 4 years of water-quality sample collection. A downward trend in flow-weighted mean total-phosphorus concentrations was significant from 1999 through 2008; however, flow-weighted total-phosphorus concentrations increased substantially in 2009, and the total phosphorus trend was no longer significant. The high annual flow-weighted mean concentrations for total phosphorus and total suspended solids
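
    For reference, a flow-weighted mean concentration weights each sample's concentration by the streamflow at the time of sampling, FWM = Σ(cᵢqᵢ)/Σ(qᵢ); a minimal sketch with hypothetical numbers (not data from the study):

    ```python
    # Flow-weighted mean concentration: FWM = sum(c_i * q_i) / sum(q_i),
    # weighting each sample's concentration c_i by the streamflow q_i at
    # the time of sampling. Values below are hypothetical.
    def flow_weighted_mean(concs, flows):
        return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

    tp = [0.20, 0.45, 0.75, 0.30]    # total phosphorus samples, mg/L
    q  = [15.0, 120.0, 640.0, 40.0]  # streamflow at sampling, cubic feet per second

    print(round(flow_weighted_mean(tp, q), 3))  # 0.674, dominated by high flows
    ```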

  14. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  15. Big Data ethics

    Directory of Open Access Journals (Sweden)

    Andrej Zwitter

    2014-11-01

    Full Text Available The speed of development in Big Data and associated phenomena, such as social media, has surpassed the capacity of the average consumer to understand his or her actions and their knock-on effects. We are moving towards changes in how ethics has to be perceived: away from individual decisions with specific and knowable outcomes, towards actions by many unaware that they may have taken actions with unintended consequences for anyone. Responses will require a rethinking of ethical choices, the lack thereof and how this will guide scientists, governments, and corporate agencies in handling Big Data. This essay elaborates on the ways Big Data impacts on ethical conceptions.

  16. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview...... of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big...... data....

  17. Big Data Provenance: Challenges, State of the Art and Opportunities.

    Science.gov (United States)

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
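
    As a conceptual sketch of recording provenance while workflow steps run (a toy decorator, not the workflow systems the authors use), consider:

    ```python
    # Toy decorator that records a provenance entry for each workflow step
    # (inputs, output, timing), mimicking what workflow engines automate.
    import functools
    import json
    import time
    import uuid

    PROVENANCE = []  # in-memory log; real systems persist and query this

    def provenance(step):
        @functools.wraps(step)
        def wrapper(*args, **kwargs):
            record = {"id": str(uuid.uuid4()), "step": step.__name__,
                      "inputs": repr((args, kwargs)), "start": time.time()}
            result = step(*args, **kwargs)
            record.update(end=time.time(), output=repr(result))
            PROVENANCE.append(record)
            return result
        return wrapper

    @provenance
    def clean(values):
        return [v for v in values if v is not None]

    @provenance
    def total(values):
        return sum(values)

    total(clean([1, 2, None, 3]))
    print(json.dumps(PROVENANCE, indent=2))  # lineage: clean -> total
    ```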

  18. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  19. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  20. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  1. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.
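
    For reference, the FRW models mentioned above rest on the standard line element and Friedmann equation (textbook forms, not quoted from the thesis); the big bang singularity corresponds to the scale factor a(t) → 0 at a finite time in the past:

    ```latex
    % Standard FRW line element and Friedmann equation (textbook forms):
    ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - kr^2} + r^2 \, d\Omega^2 \right],
    \qquad
    \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}
    ```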

  2. Sharing big biomedical data.

    Science.gov (United States)

    Toga, Arthur W; Dinov, Ivo D

    The promise of Big Biomedical Data may be offset by the enormous challenges in handling, analyzing, and sharing it. In this paper, we provide a framework for developing practical and reasonable data sharing policies that incorporate the sociological, financial, technical and scientific requirements of a sustainable Big Data dependent scientific community. Many biomedical and healthcare studies may be significantly impacted by using large, heterogeneous and incongruent datasets; however there are significant technical, social, regulatory, and institutional barriers that need to be overcome to ensure the power of Big Data overcomes these detrimental factors. Pragmatic policies that demand extensive sharing of data, promotion of data fusion, provenance, interoperability and balance security and protection of personal information are critical for the long term impact of translational Big Data analytics.

  3. Big Creek Pit Tags

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The BCPITTAGS database is used to store data from an Oncorhynchus mykiss (steelhead/rainbow trout) population dynamics study in Big Creek, a coastal stream along the...

  4. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by extracting value from huge data sets as fast and as optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner, using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  5. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Klinkby Madsen, Anders; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  6. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data......’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD......) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  7. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution...... and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim...... that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data....

  8. Morphology of an Early Oligocene beaver Propalaeocastor irtyshensis and the status of the genus Propalaeocastor

    Directory of Open Access Journals (Sweden)

    Lüzhou Li

    2017-05-01

    Full Text Available The Early to Late Oligocene Propalaeocastor is the earliest known beaver genus from Eurasia. Although many species of this genus have been described, these species are defined based on very fragmentary specimens. Propalaeocastor irtyshensis from the Early Oligocene Irtysh River Formation in northwestern Xinjiang, China, is one of the earliest-known members of Propalaeocastor. This species is defined from a single maxillary fragment. We revise the diagnosis of P. irtyshensis and the genus Propalaeocastor, based on newly discovered specimens from the Irtysh River Formation. The dental morphology of P. irtyshensis is very similar to that of other early castorids. The caudal palatine foramen of P. irtyshensis is situated in the maxillary-palatine suture, a feature generally accepted as a diagnostic character for castorids. On the other hand, P. irtyshensis has two upper premolars, a rudimentarily developed sciuromorph-like zygomatic plate, and a relatively large protrogomorph-like infraorbital foramen. Some previous researchers suggested that Propalaeocastor is a junior synonym of Steneofiber, while others took it as a valid genus. Our morphological comparison and phylogenetic analysis suggest that Propalaeocastor differs from Steneofiber and is a valid genus. We also suggest that Agnotocastor aubekerovi, A. coloradensis, A. galushai, A. readingi, Oligotheriomys primus, and “Steneofiber aff. dehmi” should be referred to Propalaeocastor. Propalaeocastor is the earliest and most basal beaver. The place of origin of Propalaeocastor and castorids is uncertain. The Early Oligocene radiation of castorids was probably propelled by the global climate change during the Eocene-Oligocene transition.

  9. Sharing big biomedical data

    OpenAIRE

    Toga, Arthur W.; Ivo D Dinov

    2015-01-01

    Background The promise of Big Biomedical Data may be offset by the enormous challenges in handling, analyzing, and sharing it. In this paper, we provide a framework for developing practical and reasonable data sharing policies that incorporate the sociological, financial, technical and scientific requirements of a sustainable Big Data dependent scientific community. Findings Many biomedical and healthcare studies may be significantly impacted by using large, heterogeneous and incongruent data...

  10. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and analyze exhaustively the state of the art of Big Data; as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics of Big Data management, so that everything concerning the central topic of the research may be understood. The methodology consisted of reviewing the state of the art of Big Data and presenting its current situation; surveying Big Data technologies; presenting some of the NoSQL databases, which are the ones that allow data in unstructured formats to be processed; and showing the data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research is a first step into the Big Data environment.

  11. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  12. Big data need big theory too

    Science.gov (United States)

    Dougherty, Edward R.; Highfield, Roger R.

    2016-01-01

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue ‘Multiscale modelling at the physics–chemistry–biology interface’. PMID:27698035

  13. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2017-12-13

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  14. A Proposed Concentration Curriculum Design for Big Data Analytics for Information Systems Students

    Science.gov (United States)

    Molluzzo, John C.; Lawler, James P.

    2015-01-01

    Big Data is becoming a critical component of the Information Systems curriculum. Educators are enhancing gradually the concentration curriculum for Big Data in schools of computer science and information systems. This paper proposes a creative curriculum design for Big Data Analytics for a program at a major metropolitan university. The design…

  15. Small geographic range but not panmictic: how forests structure the endangered Point Arena mountain beaver (Aplodontia rufa nigra)

    Science.gov (United States)

    William J. Zielinski; Fredrick V. Schlexer; Sean A. Parks; Kristine L. Pilgrim; Michael K. Schwartz

    2012-01-01

    The landscape genetics framework is typically applied to broad regions that occupy only small portions of a species' range. Rarely is the entire range of a taxon the subject of study. We examined the landscape genetic structure of the endangered Point Arena mountain beaver (Aplodontia rufa nigra), whose isolated geographic range is found in a...

  16. A range-wide occupancy estimate and habitat model for the endangered Point Arena mountain beaver (Aplodontia rufa nigra)

    Science.gov (United States)

    William J. Zielinski; Fredrick V. Schlexer; Jeffrey R. Dunk; Matthew J. Lau; James J. Graham

    2015-01-01

    The mountain beaver (Aplodontia rufa) is notably the most primitive North American rodent with a restricted distribution in the Pacific Northwest based on its physiological limits to heat stress and water needs. The Point Arena subspecies (A. r. nigra) is federally listed as endangered and is 1 of 2 subspecies that have extremely...

  17. The High Cost of Big-Time Football

    Science.gov (United States)

    Weiner, Jay

    1973-01-01

    From facilities to travel to operations, the cost of intercollegiate football is causing questioning on individual campuses and even in the NCAA of the purposes and even necessity of big-time programs. (Editor)

  18. Cropland Management Plan : Big Stone National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The cropland program on Big Stone NWR will be accomplished each year with a combination of: (1) force account farming of permanent farm units where there is no...

  19. Survey of Cyber Crime in Big Data

    Science.gov (United States)

    Rajeswari, C.; Soni, Krishna; Tandon, Rajat

    2017-11-01

    Big data involves performing computation and database operations over very large amounts of data drawn automatically from the data owner's business. Since a critical strategic promise of big data is access to information from numerous and varied domains, security and privacy will play an important part in big data research and technology. The limits of standard IT security practices are well known: attackers can use software deployment channels to introduce malicious software into applications and operating systems, a serious and growing threat that is difficult to counter, and whose impact spreads even faster with big data. A central question is therefore whether current security and privacy technology is adequate to provide controlled assurance for very large numbers of direct accesses; for effective use of large-scale information, access from one domain to the data of another domain must be properly authorized. For years, trustworthy systems development has produced a rich set of proven security concepts designed largely to deal with determined adversaries, but this work has largely been dismissed as "needless overkill" by vendors. This survey discusses the essentials of how big data can take advantage of this mature security and privacy technology, and examines the remaining research challenges.

  20. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at CBS......) have developed a research-based capability mapping tool, entitled DataProfit, which the public business consultants can use to upgrade their tool kit to enable data-driven growth in manufacturing organizations. Benefit: The DataProfit model/tool comprises insights of an extensive research project...

  1. Big Data and Peacebuilding

    Directory of Open Access Journals (Sweden)

    Sanjana Hattotuwa

    2013-11-01

    Full Text Available Any peace process is an exercise in the negotiation of big data. From centuries-old communal hagiography to the reams of official texts, media coverage and social media updates, peace negotiations generate data. Peacebuilding and peacekeeping today are informed by, and often respond and contribute to, big data. This is no easy task. As recently as a few years ago, before the term big data embraced the virtual on the web, what informed peace process design and implementation was in the physical domain – from contested borders and resources to background information in the form of text. The move from analogue, face-to-face negotiations to online, asynchronous, web-mediated negotiations – which can still include real-world meetings – has profound implications for how peace is strengthened in fragile democracies.

  2. Big Data Refinement

    Directory of Open Access Journals (Sweden)

    Eerke A. Boiten

    2016-06-01

    Full Text Available "Big data" has become a major area of research and associated funding, as well as a focus of utopian thinking. In the still-growing research community, one of the favourite optimistic analogies for data processing is that of the oil refinery, extracting the essence out of the raw data. Pessimists look for their imagery to the other end of the petrol cycle, and talk about the "data exhausts" of our society. Obviously, the refinement community knows how to do "refining". This paper explores the extent to which notions of refinement and data in the formal methods community relate to the core concepts in "big data". In particular, can the data refinement paradigm be used to explain aspects of big data processing?

  3. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  4. Mitochondrial genomes reveal slow rates of molecular evolution and the timing of speciation in beavers (Castor), one of the largest rodent species.

    Directory of Open Access Journals (Sweden)

    Susanne Horn

    Full Text Available BACKGROUND: Beavers are one of the largest and ecologically most distinct rodent species. Little is known about their evolution and even their closest phylogenetic relatives have not yet been identified with certainty. Similarly, little is known about the timing of divergence events within the genus Castor. METHODOLOGY/PRINCIPAL FINDINGS: We sequenced complete mitochondrial genomes from both extant beaver species and used these sequences to place beavers in the phylogenetic tree of rodents and date their divergence from other rodents as well as the divergence events within the genus Castor. Our analyses support the phylogenetic position of beavers as a sister lineage to the scaly tailed squirrel Anomalurus within the mouse related clade. Molecular dating places the divergence time of the lineages leading to beavers and Anomalurus as early as around 54 million years ago (mya. The living beaver species, Castor canadensis from North America and Castor fiber from Eurasia, although similar in appearance, appear to have diverged from a common ancestor more than seven mya. This result is consistent with the hypothesis that a migration of Castor from Eurasia to North America as early as 7.5 mya could have initiated their speciation. We date the common ancestor of the extant Eurasian beaver relict populations to around 210,000 years ago, much earlier than previously thought. Finally, the substitution rate of Castor mitochondrial DNA is considerably lower than that of other rodents. We found evidence that this is correlated with the longer life span of beavers compared to other rodents. CONCLUSIONS/SIGNIFICANCE: A phylogenetic analysis of mitochondrial genome sequences suggests a sister-group relationship between Castor and Anomalurus, and allows molecular dating of species divergence in congruence with paleontological data. The implementation of a relaxed molecular clock enabled us to estimate mitochondrial substitution rates and to evaluate the effect

  5. [Introduction of species and microevolution: the European beaver, raccoon dog, and American mink].

    Science.gov (United States)

    Korablev, N P; Korablev, M P; Korablev, P N

    2011-01-01

    Nine skull samples of the beaver Castor fiber, six samples of the raccoon dog Nyctereutes procyonoides, and six samples of the American mink Neovison vison were studied using phenetic and craniometric methods. Analysis of the phenofund structure suggests that in all of the studied species the emergence of novel character variations does not lead to their fixation with a significant frequency. Considerable morphological variability emerges in the contact zone of different autochtonous populations, of wild and breeding forms, as well as in geographically and reproductively isolated small groups of individuals. Morphological differences of introduced animals fit into the conception of species polymorphism and are smoothed over when separate colonies merge into metapopulations, which does not lead to the emergence of novel stable taxa.

  6. Municipal waterborne giardiasis: an epidemiologic investigation. Beavers implicated as a possible reservoir.

    Science.gov (United States)

    Dykes, A C; Juranek, D D; Lorenz, R A; Sinclair, S; Jakubowski, W; Davies, R

    1980-02-01

    In March 1976, 128 persons in Camas, Washington, had laboratory-confirmed giardiasis. A questionnaire survey of 498 Camas residents revealed that 3.8% had clinical giardiasis, while none of 318 residents in a control town were ill. No associations between illness and sex, pet ownership, travel, time spent in wilderness areas, public gatherings, or food preference were found. Giardia cysts were recovered from raw water entering the city water treatment system via two streams and also from two storage reservoirs containing chlorinated and filtered stream water. Failure to remove Giardia cysts was attributed to the water plants' inadequate flocculation, coagulation, and sedimentation combined with deterioration of the filter media. Investigation of the watershed revealed no signs of human fecal contamination. Animal trapping in the watershed area yielded three beavers (Castor canadensis) infected with Giardia that were infective for specific pathogen-free beagle pups.

  7. Characterizing the morphological complexity of North American beaver (Castor canadensis) habitats using ground-based LiDAR

    Science.gov (United States)

    Welsh, S. B.; Wheaton, J. M.; Bouwes, N. W.; Pollock, M. M.; Demeurichy, K. G.

    2009-12-01

    Beavers (Castor canadensis) are frequently referred to as ‘ecosystem engineers’, in part because of the profound influence their dams and associated networks of dens, side-channels and pools have on habitat heterogeneity and the complexity of the environments they occupy. Recently, beaver have been incorporated into stream restoration efforts to help reconnect incised streams to their floodplains and improve physical habitat for fish and other species of concern. Although these ecosystem engineers produce rich and complex habitats, they do not provide as-built drawings of their work, and the dynamic habitats they construct are very difficult to survey and characterize for monitoring their effectiveness. Traditional ground-based topographic, vegetation and habitat surveys are often inadequate to properly characterize such habitats or detect change. Similarly, traditional remotely sensed data may lack the resolution and accuracy to detect important changes through time. To better understand the character and dynamics of these complex habitats created by beaver, we will present data and new techniques for describing their structure using a hybrid ground-based and remote-sensing technology: ground-based LiDAR (a.k.a. terrestrial laser scanning, TLS). Specifically, we are seeking to use these data to explore the feedbacks between key components - beaver, riparian vegetation, channel complexity - and their collective influence on salmonid habitat. The combination of high-resolution and high-accuracy 3D point clouds from TLS data provides new opportunities for characterizing physical habitat and detecting changes with repeat surveys. However, TLS also presents significant methodological challenges in how we manage and analyze data, which may be 2 to 5 orders of magnitude greater in size than traditional ground-based or remotely sensed data sets. Preliminary data and analyses from algorithms under on-going development will be presented. The data is from the first year
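
    To make the change-detection idea concrete, the sketch below grids two (hypothetical) point clouds into ground surfaces and differences them, the standard 'DEM of difference' pattern; real TLS workflows add registration, filtering, and occlusion handling, and the file names here are placeholders:

        # Sketch of DEM-of-difference change detection between two TLS surveys
        # (illustrative only; inputs are hypothetical whitespace-separated
        # x y z files).
        import numpy as np

        def grid_min_z(points, cell, x0, y0, nx, ny):
            """Rasterize an N x 3 point cloud to a ground surface by taking
            the minimum z in each cell; empty cells become NaN."""
            dem = np.full((ny, nx), np.nan)
            ix = ((points[:, 0] - x0) / cell).astype(int)
            iy = ((points[:, 1] - y0) / cell).astype(int)
            ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
            for i, j, z in zip(ix[ok], iy[ok], points[ok, 2]):
                if np.isnan(dem[j, i]) or z < dem[j, i]:
                    dem[j, i] = z
            return dem

        survey_a = np.loadtxt("tls_year1.xyz")   # hypothetical input files
        survey_b = np.loadtxt("tls_year2.xyz")
        dem_a = grid_min_z(survey_a, 0.1, 0.0, 0.0, 500, 500)
        dem_b = grid_min_z(survey_b, 0.1, 0.0, 0.0, 500, 500)
        dod = dem_b - dem_a      # positive = deposition, negative = erosion
        print(np.nanmean(dod))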

  8. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  9. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil ... (contact: arkady@ll.mit.edu). The chapter focuses on cryptography for keeping big data secure; it does not address the privacy implications of big data collection and processing.

  10. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  11. A Big Bang Lab

    Science.gov (United States)

    Scheider, Walter

    2005-01-01

    The February 2005 issue of The Science Teacher (TST) reminded everyone that by learning how scientists study stars, students gain an understanding of how science measures things that cannot be set up in a lab, either because they are too big, too far away, or happened in the very distant past. The authors of "How Far are the Stars?" show how the…

  12. Big data in history

    CERN Document Server

    Manning, Patrick

    2013-01-01

    Big Data in History introduces the project to create a world-historical archive, tracing the last four centuries of historical dynamics and change. Chapters address the archive's overall plan, how to interpret the past through a global archive, the missions of gathering records, linking local data into global patterns, and exploring the results.

  13. Small Places, Big Stakes

    DEFF Research Database (Denmark)

    Garsten, Christina; Sörbom, Adrienne

    left of much of ‘what is really going on', and ‘what people are really up to.' Meetings, however, as organized and ritualized social events, may provide the ethnographer with a loupe through which key tenets of larger social groups and organizations, and big issues, may be carefully observed. In formal...

  14. Big Data and Cycling

    NARCIS (Netherlands)

    Romanillos, Gustavo; Zaltz Austwick, Martin; Ettema, Dick; De Kruijf, Joost

    2016-01-01

    Big Data has begun to create significant impacts in urban and transport planning. This paper covers the explosion in data-driven research on cycling, most of which has occurred in the last ten years. We review the techniques, objectives and findings of a growing number of studies we have classified

  15. The Big Bang

    CERN Multimedia

    Moods, Patrick

    2006-01-01

    How did the Universe begin? The favoured theory is that everything - space, time, matter - came into existence at the same moment, around 13.7 thousand million years ago. This event was scornfully referred to as the "Big Bang" by Sir Fred Hoyle, who did not believe in it and maintained that the Universe had always existed.

  16. Big Data ethics

    NARCIS (Netherlands)

    Zwitter, Andrej

    2014-01-01

    The speed of development in Big Data and associated phenomena, such as social media, has surpassed the capacity of the average consumer to understand his or her actions and their knock-on effects. We are moving towards changes in how ethics has to be perceived: away from individual decisions with

  17. Governing Big Data

    Directory of Open Access Journals (Sweden)

    Andrej J. Zwitter

    2014-04-01

    Full Text Available 2.5 quintillion bytes of data are created every day through pictures, messages, GPS data, etc. "Big Data" is seen simultaneously as the new Philosopher's Stone and Pandora's box: a source of great knowledge and power, but equally, the root of serious problems.

  18. Distribution and patterns of spread of recolonising Eurasian beavers (Castor fiber Linnaeus 1758) in fragmented habitat, Agdenes peninsula, Norway

    Directory of Open Access Journals (Sweden)

    Duncan Halley

    2013-02-01

    Full Text Available The Agdenes peninsula, Sør-Trøndelag, Norway, 1060 km2, is a heavily dissected mountainous landscape with numerous small watersheds, mainly of steep gradient, flowing separately into the sea or to fjords. Suitable habitat for permanent beaver occupation occurs mainly as isolated patches within these watersheds. Eurasian beavers were directly reintroduced to the area in 1926 and 1928. The last known individual of this population died in 1961. In 1968-69, two pairs and a young animal were reintroduced on the Ingdalselva watershed. The current population is descended from these animals and, probably from the later 1990s, from immigrants from the adjacent Orkla river system. In 2010-11 the area was surveyed and 24 beaver family group home ranges were located, 20 of which were currently active and 4 abandoned; the population size was estimated at about 80 individuals within family territories, plus in any year a number of dispersing individuals. Eighteen of the active territories were located on just four watersheds: Ingdalselva and three immediately adjacent to it. The remaining two territories were isolated on different watersheds distant from any other known group, reachable only by multiple crossings between watersheds and/or considerable movements through salt water. Signs of vagrant individuals were found widely, including on a number of watersheds not occupied by any family group, though containing suitable habitat for permanent colonisation. Known data on the date of establishment of each family group are given, and the pattern of recolonisation to date is discussed. An isolated population of beavers on a section of the Orkla river system, first noted in 1933, has been attributed to spread from the first study area reintroductions. However, there are grounds to suspect that this population may have had a different origin. Genetic studies would be useful to elucidate this point.

  19. High rates of energy expenditure and water flux in free-ranging Point Reyes mountain beavers Aplodontia rufa phaea

    Science.gov (United States)

    Crocker, D.E.; Kofahl, N.; Fellers, G.D.; Gates, N.B.; Houser, D.S.

    2007-01-01

    We measured water flux and energy expenditure in free-ranging Point Reyes mountain beavers Aplodontia rufa phaea by using the doubly labeled water method. Previous laboratory investigations have suggested weak urinary concentrating ability, high rates of water flux, and low basal metabolic rates in this species. However, free-ranging measurements from hygric mammals are rare, and it is not known how these features interact in the environment. Rates of water flux (210 ± 32 mL d-1) and field metabolic rates (1,488 ± 486 kJ d-1) were 159% and 265%, respectively, of values predicted by allometric equations for similar-sized herbivores. Mountain beavers can likely meet their water needs through metabolic water production and preformed water in food and thus remain in water balance without access to free water. Arginine-vasopressin levels were strongly correlated with rates of water flux and plasma urea : creatinine ratios, suggesting an important role for this hormone in regulating urinary water loss in mountain beavers. High field metabolic rates may result from cool burrow temperatures that are well below lower critical temperatures measured in previous laboratory studies and suggest that thermoregulation costs may strongly influence field energetics and water flux in semifossorial mammals. © 2007 by The University of Chicago. All rights reserved.
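
    The percent-of-predicted comparison is simple arithmetic once an allometric equation is chosen; the coefficients below are illustrative placeholders, not the equations used in the study, so the printed value is illustrative rather than the reported 265%:

        # Percent-of-predicted field metabolic rate (FMR), assuming a generic
        # herbivore allometry FMR = a * mass**b with placeholder coefficients.
        def percent_of_predicted(observed, mass_g, a, b):
            predicted = a * mass_g ** b
            return 100.0 * observed / predicted

        # Hypothetical: an 800 g mountain beaver with an observed FMR of 1488 kJ/d
        print(round(percent_of_predicted(1488.0, 800.0, a=5.95, b=0.727)))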

  20. Revision of Hemiquedius Casey (Staphylinidae, Staphylininae) and a review of beetles dependent on beavers and muskrats in North America

    Directory of Open Access Journals (Sweden)

    Adam Brunke

    2017-09-01

    Full Text Available Based on newly discovered characters of the male genitalia, external morphology, and an accumulation of ecological data, we revise the single member of the genus Hemiquedius. Two new species, H. infinitus Brunke & Smetana, sp. n. and H. castoris Brunke & Smetana, sp. n., from eastern North America are described, and H. ferox (LeConte), restricted to peninsular Florida, is re-described. Hemiquedius castoris is strongly associated with the microhabitats provided by the nest materials of the North American beaver and muskrat. A key to the three species of Hemiquedius is provided and diagnostic characters are illustrated. We also review the beetles known to be obligate associates of beavers and muskrats, and discuss the potential role of these keystone vertebrates in beetle evolution and distribution. Based on nest-associated beetles and their closest living relatives, beaver and muskrat lodges may extend distributions northward by moderating winters, promote sympatric speciation, and act as refugia against extinction of lineages on a broader timescale. Further research into these potential impacts by ecologists and evolutionary biologists is encouraged.

  1. Hydrologic Change during the Colonial Era of the United States: Beavers and the Energy Cost of Impoundments (Invited)

    Science.gov (United States)

    Green, M. B.; Bain, D. J.; Arrigo, J. S.; Duncan, J. M.; Kumar, S.; Parolari, A.; Salant, N.; Vorosmarty, C. J.; Aloysius, N. R.; Bray, E. N.; Ruffing, C. M.; Witherell, B. B.

    2009-12-01

    Europeans colonized North America in the early 17th century with intentions ranging from long-term inhabitation to quick extraction of resources for economic gain in Europe. Whatever the intentions, the colonists relied on the landscape for resources, resulting in dramatic change to the forest and to fur-bearing mammal populations. We demonstrate that the initial exploitation of North American forests and furs caused a substantial decrease in mean water residence time (τ) between 1600 and 1800 A.D. That loss, a regional decline from 51 to 41 days, contrasts with the conventional wisdom that humans tend to diminish variability in water resources by increasing storage capacity and thus increasing τ. The loss of τ resulted from the over-hunting of beaver for the hat market in Europe. Analysis suggests that colonial era demographics and economics did not allow human resource allocation to impoundment construction on a level matching the historic beaver effort. However, τ appears to have increased regionally during the 19th century, suggesting that humans eventually began replacing the water storage lost with the beaver. The analysis highlights the energy cost of impounding water, which is likely to continue to be an important factor given the increasing need for stable water resources and finite energy resources.
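
    As a point of reference for the quantities involved (a simplified reading, assuming roughly constant discharge Q; the study's accounting is more detailed), mean residence time relates storage S to throughflow, so the reported decline implies a proportional loss of stored water:

        \tau = \frac{S}{Q}, \qquad \frac{\Delta S}{S} \approx \frac{\Delta \tau}{\tau} = \frac{41 - 51}{51} \approx -20\%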

  2. Development of a Big Data Analysis Service Interface Application (Pengembangan Aplikasi Antarmuka Layanan Big Data Analysis)

    Directory of Open Access Journals (Sweden)

    Gede Karya

    2017-11-01

    Full Text Available In the 2016 Competitive Grants Research program (Hibah Bersaing Dikti), we successfully developed models, infrastructure, and modules for a Hadoop-based big data analysis application, together with a virtual private network (VPN) that allows the infrastructure to be integrated with and accessed from outside the FTIS Computer Laboratory. The infrastructure and analysis modules are now to be offered as services to small and medium enterprises (SMEs) in Indonesia. This research aims to develop a big data analysis service interface application integrated with the Hadoop cluster. The research began with finding appropriate methods and techniques for scheduling jobs, for invoking the ready-made Java Map-Reduce (MR) application modules, and for tunneling input/output and constructing the metadata of service requests (inputs) and service outputs. These methods and techniques were then developed into a web-based service application, as well as an executable module that runs in a Java/J2EE-based programming environment and can access the Hadoop cluster in the FTIS Computer Laboratory. The resulting application can be accessed by the public through the site http://bigdata.unpar.ac.id. Based on the test results, the application functions well in accordance with the specifications and can be used to perform big data analysis. Keywords: web-based service, big data analysis, Hadoop, J2EE
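
    The core of such a service interface is scheduling and invoking pre-built Map-Reduce JARs on the cluster. The sketch below is a minimal, hypothetical illustration of that pattern using the standard 'hadoop jar' launcher; the JAR name, class name, and HDFS paths are placeholders, not details from the project:

        # Minimal sketch of a service-side job launcher (Python). The real
        # application is Java/J2EE-based and adds scheduling, tunneling of
        # input/output, and metadata handling; names below are hypothetical.
        import subprocess

        def submit_mr_job(jar, main_class, hdfs_in, hdfs_out):
            """Run a ready-made Map-Reduce module via 'hadoop jar'."""
            cmd = ["hadoop", "jar", jar, main_class, hdfs_in, hdfs_out]
            return subprocess.run(cmd).returncode

        rc = submit_mr_job("analysis-modules.jar", "WordCount",
                           "/user/ukm/input", "/user/ukm/output")
        print("job finished with exit code", rc)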

  3. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  4. Finding the big bang

    CERN Document Server

    Page, Lyman A; Partridge, R Bruce

    2009-01-01

    Cosmology, the study of the universe as a whole, has become a precise physical science, the foundation of which is our understanding of the cosmic microwave background radiation (CMBR) left from the big bang. The story of the discovery and exploration of the CMBR in the 1960s is recalled for the first time in this collection of 44 essays by eminent scientists who pioneered the work. Two introductory chapters put the essays in context, explaining the general ideas behind the expanding universe and fossil remnants from the early stages of the expanding universe. The last chapter describes how the confusion of ideas and measurements in the 1960s grew into the present tight network of tests that demonstrate the accuracy of the big bang theory. This book is valuable to anyone interested in how science is done, and what it has taught us about the large-scale nature of the physical universe.

  5. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects of the international development agenda to algorithms that synthesize large-scale data, (3) novel ways of rationalizing knowledge claims that underlie development policies, and (4) shifts in professional and organizational identities of those concerned with producing and processing data for development. Our discussion…

  6. Getting started with Greenplum for big data analytics

    CERN Document Server

    Gollapudi, Sunila

    2013-01-01

    Standard tutorial-based approach. "Getting Started with Greenplum for Big Data Analytics" is great for data scientists and data analysts with a basic knowledge of data warehousing and business intelligence platforms who are new to Big Data and who are looking to get a good grounding in how to use the Greenplum platform. It's assumed that you will have some experience with database design and programming as well as be familiar with analytics tools like R and Weka.

  7. Big Data in News Dissemination (Big Data i nyhedsformidling)

    OpenAIRE

    Schjelde, Emil Kristian Kjølhede; Rosendahl, Rasmus

    2016-01-01

    This thesis revolves around the role of big data in scientific and journalistic knowledge production. We take a perspective on knowledge production through an analysis of discourses in contemporary discussions of epistemology in the general sciences and journalism. Our empirical material here consists of a mixture of research articles, books and internet articles. The main objective of this analysis is, through the theoretical works of Ernesto Laclau and Chantal Mouffe, to outline…

  8. Big Bang 5

    CERN Document Server

    Apolin, Martin

    2007-01-01

    Physics should be understandable and fun! Every chapter in Big Bang therefore begins with a motivating overview and guiding questions, and then proceeds from fundamentals to applications, from the simple to the complex. Throughout, the language remains simple, rooted in everyday life, and literary in style. Volume 5 (RG) covers the fundamentals (systems of measurement, orders of magnitude) and mechanics (translation, rotation, force, conservation laws).

  9. Big Bang 8

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! Every chapter in Big Bang therefore begins with a motivating overview and guiding questions, and then proceeds from fundamentals to applications, from the simple to the complex. Throughout, the language remains simple, rooted in everyday life, and literary in style. Volume 8 accessibly presents the theory of relativity, nuclear and particle physics (and their applications in cosmology and astrophysics), nanotechnology, and bionics.

  10. Big Bang 6

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! Every chapter in Big Bang therefore begins with a motivating overview and guiding questions, and then proceeds from fundamentals to applications, from the simple to the complex. Throughout, the language remains simple, rooted in everyday life, and literary in style. Volume 6 (RG) covers gravitation, oscillations and waves, thermodynamics, and an introduction to electricity, using everyday examples and cross-connections to other disciplines.

  11. Big Bang 7

    CERN Document Server

    Apolin, Martin

    2008-01-01

    Physics should be understandable and fun! Every chapter in Big Bang therefore begins with a motivating overview and guiding questions, and then proceeds from fundamentals to applications, from the simple to the complex. Throughout, the language remains simple, rooted in everyday life, and literary in style. In addition to an introduction, Volume 7 covers many current topics in quantum mechanics (e.g., 'beaming'/quantum teleportation) and electrodynamics (e.g., electrosmog), as well as climate change and chaos theory.

  12. Big Bang Circus

    Science.gov (United States)

    Ambrosini, C.

    2011-06-01

    Big Bang Circus is an opera I composed in 2001, which was premiered at the Venice Biennale Contemporary Music Festival in 2002. A chamber group, four singers and a ringmaster stage the story of the Universe, confronting and interweaving two threads: how early man imagined it and how scientists described it. Surprisingly enough, fantasy, myths and scientific explanations often end up using the same images, metaphors and sometimes even words: a strong tension, a drumskin starting to vibrate, a shout…

  13. Instream investigations in the Beaver Creek Watershed in West Tennessee, 1991-95

    Science.gov (United States)

    Byl, T.D.; Carney, K.A.

    1996-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Tennessee Department of Agriculture, began a long-term scientific investigation in 1989 to evaluate the effect of agricultural activities on water quality and the effectiveness of agricultural best management practices in the Beaver Creek watershed, West Tennessee. In 1993, as a part of this study, the USGS, in cooperation with the Natural Resources Conservation Service, Shelby County Soil Conservation District, and the Tennessee Soybean Promotion Board, began an evaluation of the physical, chemical, biological and hydrological factors that affect water quality in streams and wetlands, and of instream resource-management systems to treat agricultural nonpoint-source runoff and improve water quality. The purpose of this report is to present the results of three studies of stream and wetland investigations and a study on the transport of aldicarb from an agricultural field in the Beaver Creek watershed. A natural bottomland hardwood wetland and an artificially constructed wetland were evaluated as instream resource-management systems. These two studies showed that wetlands are an effective way to improve the quality of agricultural nonpoint-source runoff. The wetlands reduced concentrations and loads of suspended sediments, nutrients, and pesticides in the streams. A third paper documents the influence of riparian vegetation on the biological structure and water quality of a small stream draining an agricultural field. A comparison of the upper reach lined with herbaceous plants and the lower reach with mature woody vegetation showed more stable biological community structure and water-quality characteristics in the woody reach than in the herbaceous reach. The water-quality characteristics monitored were pH, temperature, dissolved oxygen, and specific conductance. The herbaceous reach had a greater diversity and abundance of organisms during spring and early summer, but the abundance dropped by approximately

  14. Characterisation of Beaver Habitat Parameters That Promote the Use of Culverts as Dam Construction Sites: Can We Limit the Damage to Forest Roads?

    Directory of Open Access Journals (Sweden)

    Geneviève Tremblay

    2017-12-01

    Full Text Available The use of forest roads as foundations for dam construction by beavers is a recurrent problem in the management of forest road networks. In order to limit the damage to forest roads, our goal was to calculate the probability of beaver dam installation on culverts, according to surrounding habitat parameters, which could allow for improvement in the spatial design of new roads that minimise conflicts with beavers. Comparisons of culverts with (n = 77) and without (n = 51) dams in northwestern Quebec showed that catchment surface, cumulative length of all local streams within a 2-km radius, and road embankment height had a negative effect on the probability of dam construction on culverts, while flow level and culvert diameter ratio had a positive effect. Nevertheless, predicted probabilities of dam construction on culverts generally exceeded 50%, even on sites that were less favourable to beavers. We suggest that it would be more reasonable to take their probable subsequent presence into account at the earliest steps of road conception. Installing mitigation measures such as pre-dams during road construction would probably reduce the occurrence of conflicts with beavers and thus reduce the maintenance costs of forest roads.
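
    A model of this kind is commonly fit as a logistic regression of dam presence/absence on the habitat covariates. The sketch below illustrates the pattern with fabricated placeholder data, not the study's data or necessarily its exact model specification:

        # Logistic-regression sketch of P(dam built on culvert | habitat).
        # All values below are fabricated placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # columns: catchment area (km2), cumulative stream length in 2 km (km),
        # embankment height (m), flow level index, culvert diameter ratio
        X = np.array([[12.0, 8.5, 1.2, 2, 0.9],
                      [45.0, 20.1, 3.5, 1, 0.4],
                      [5.5, 4.2, 0.8, 3, 1.1],
                      [30.2, 15.0, 2.9, 1, 0.5]])
        y = np.array([1, 0, 1, 0])  # 1 = dam built on the culvert

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba([[10.0, 7.0, 1.0, 2, 0.8]])[0, 1])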

  15. Big data science: A literature review of nursing research exemplars.

    Science.gov (United States)

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015 was conducted. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There is a need to increase the visibility of big data and data science research conducted by nurse scientists, to further examine the use of the state of the science in data analytics, and to continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and the preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Disaggregating asthma: Big investigation versus big data.

    Science.gov (United States)

    Belgrave, Danielle; Henderson, John; Simpson, Angela; Buchan, Iain; Bishop, Christopher; Custovic, Adnan

    2017-02-01

    We are facing a major challenge in bridging the gap between identifying subtypes of asthma to understand causal mechanisms and translating this knowledge into personalized prevention and management strategies. In recent years, "big data" has been sold as a panacea for generating hypotheses and driving new frontiers of health care; the idea that the data must and will speak for themselves is fast becoming a new dogma. One of the dangers of ready accessibility of health care data and computational tools for data analysis is that the process of data mining can become uncoupled from the scientific process of clinical interpretation, understanding the provenance of the data, and external validation. Although advances in computational methods can be valuable for using unexpected structure in data to generate hypotheses, there remains a need for testing hypotheses and interpreting results with scientific rigor. We argue for combining data- and hypothesis-driven methods in a careful synergy, and the importance of carefully characterized birth and patient cohorts with genetic, phenotypic, biological, and molecular data in this process cannot be overemphasized. The main challenge on the road ahead is to harness bigger health care data in ways that produce meaningful clinical interpretation and to translate this into better diagnoses and properly personalized prevention and treatment plans. There is a pressing need for cross-disciplinary research with an integrative approach to data science, whereby basic scientists, clinicians, data analysts, and epidemiologists work together to understand the heterogeneity of asthma. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    We discuss big data in Section II, followed by a description of the analytic environment D4M in Section III. We then describe the types of sampling methods and how signal reconstruction steps are used to perform these operations. Big Data analytics is often characterized by analytics applied to datasets that strain available computational resources. (Vijay Gadepally, Taylor Herr, Luke Johnson, Lauren Milechin, Maja Milosavljevic, Benjamin A. Miller; Lincoln Laboratory)
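
    As general background on stream sampling (not the specific operations in the report), reservoir sampling is the standard way to draw a uniform k-item sample from a dataset too large to hold in memory:

        # Reservoir sampling (Algorithm R): a uniform k-item sample from a
        # stream of unknown length, in one pass and O(k) memory.
        import random

        def reservoir_sample(stream, k):
            reservoir = []
            for i, item in enumerate(stream):
                if i < k:
                    reservoir.append(item)
                else:
                    j = random.randint(0, i)   # inclusive bounds
                    if j < k:
                        reservoir[j] = item
            return reservoir

        print(reservoir_sample(range(10**6), 5))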

  18. Supplement Analysis for the Transmission System Vegetation Management Program (DOE/EIS-0285/SA-113-1) Updates 9/27/02 SA-113 - Big Eddy-Ostrander Transmission Corridor

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, Kenneth [Bonneville Power Administration (BPA), Portland, OR (United States)

    2002-12-02

    To perform remedial vegetation management for keeping vegetation a safe distance away from electric power facilities and controlling noxious weeds within a section of BPA's Big Eddy-Ostrander Transmission Corridor. During a site review conducted in late fall of 2001, the inspector observed various species of hardwood trees that had resprouted from stumps. The new vegetative growth encroached on the required “Minimum Safe Distance” between the top of vegetation and the conductor cables. The management action is necessary to reduce the current and potential future hazards that tall-growing vegetation poses to transmission conductors. In addition, BPA will include weed control as part of its remedial vegetation management action. Noxious weeds occur within the corridor. Under a 1999 Executive Order, all federal agencies are required to detect and control noxious weeds. In addition, BPA is required under the 1990 amendment to the Noxious Weed Act (7 USC 2801-2814) to manage undesirable plants on federal land. BPA also has responsibility to manage noxious weeds under the Transmission System Vegetation Management Program Final Environmental Impact Statement (FEIS). State statutes and regulations also mandate action by BPA and the USFS to control noxious weeds. The Oregon Department of Agriculture (ODA) has requested that agencies aggressively control these weeds before additional spread occurs.

  19. From the bush to the big smoke--development of a hybrid urban community based medical education program in the Northern Territory, Australia.

    Science.gov (United States)

    Morgan, S; Smedts, A; Campbell, N; Sager, R; Lowe, M; Strasser, S

    2009-01-01

    The Northern Territory (NT) of Australia is a unique setting for training medical students. This learning environment is characterised by Aboriginal health and an emphasis on rural and remote primary care practice. For over a decade the NT Clinical School (NTCS) of Flinders University has been teaching undergraduate medical students in the NT. Community based medical education (CBME) has been demonstrated to be an effective method of learning medicine, particularly in rural settings, and as a result it is rapidly gaining popularity in Australia and other countries. The NTCS adopted this model some years ago with the implementation of its Rural Clinical School; however, urban models of CBME are much less well developed than those in rural areas. There is considerable pressure to better incorporate CBME into the medical student teaching environment, particularly because of the projected massive increase in student numbers over the next few years. To date, the community setting of urban Darwin, the NT capital city, has not been well utilised for medical student training. In 2008, the NTCS enrolled its first cohort of students in a new hybrid CBME program based in urban Darwin. This report describes the process and challenges involved in development of the program, including justification for a hybrid model and the adaptation of a rural model to an urban setting. Relationships were established and formalised with key partners and stakeholders, including GPs and general practices, Aboriginal medical services, community based healthcare providers, and other general practice and community organisations. Other significant issues included curriculum development and review, development of learning materials and the establishment of robust evaluation methods. Development of the CBME model in Darwin posed a number of key challenges. Although the experience of past rural programs was useful, a number of distinct differences were evident in the urban setting. Change leadership and inter

  20. Relating hyporheic fluxes, residence times, and redox-sensitive biogeochemical processes upstream of beaver dams

    Science.gov (United States)

    Briggs, Martin A.; Lautz, Laura; Hare, Danielle K.

    2013-01-01

    Small dams enhance the development of patchy microenvironments along stream corridors by trapping sediment and creating complex streambed morphologies. This patchiness drives intricate hyporheic flux patterns that govern the exchange of O2 and redox-sensitive solutes between the water column and the stream bed. We used multiple tracer techniques, naturally occurring and injected, to evaluate hyporheic flow dynamics and associated biogeochemical cycling and microbial reactivity around 2 beaver dams in Wyoming (USA). High-resolution fiber-optic distributed temperature sensing was used to collect temperature data over 9 vertical streambed profiles and to generate comprehensive vertical flux maps using 1-dimensional (1-D) heat-transport modeling. Coincident with these locations, vertical profiles of hyporheic water were collected every week and analyzed for dissolved O2, pH, dissolved organic C, and several conservative and redox-sensitive solutes. In addition, hyporheic and net stream aerobic microbial reactivity were analyzed with a constant-rate injection of the biologically sensitive resazurin (Raz) smart tracer. The combined results revealed a heterogeneous system with rates of downwelling hyporheic flow organized by morphologic unit and tightly coupled to the redox conditions of the subsurface. Principal component analysis was used to summarize the variability of all redox-sensitive species, and results indicated that hyporheic water varied from oxic-stream-like to anoxic-reduced in direct response to the hydrodynamic conditions and associated residence times. The anaerobic transition threshold predicted by the mean O2 Damköhler number
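
    For context, 1-D heat-transport modeling of this kind typically rests on the vertical conduction-advection equation (a common formulation; the paper's exact parameterization is not given here), in which downward flux q damps and lags the diurnal temperature signal with depth:

        \frac{\partial T}{\partial t} = \kappa_e \frac{\partial^2 T}{\partial z^2} - \frac{q\,\rho_w c_w}{\rho c}\,\frac{\partial T}{\partial z}

    Here T is temperature, z is depth, κe is the effective thermal diffusivity of the saturated sediment, and ρw cw and ρc are the volumetric heat capacities of water and the bulk sediment; fitting observed amplitude ratios and phase lags between sensor depths yields the vertical flux q.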

  1. Re-evaluation of Sinocastor (Rodentia: Castoridae) with implications on the origin of modern beavers.

    Directory of Open Access Journals (Sweden)

    Natalia Rybczynski

    Full Text Available The extant beaver, Castor, has played an important role shaping landscapes and ecosystems in Eurasia and North America, yet the origins and early evolution of this lineage remain poorly understood. Here we use a geometric morphometric approach to help re-evaluate the phylogenetic affinities of a fossil skull from the Late Miocene of China. This specimen was originally considered Sinocastor, and later transferred to Castor. The aim of this study was to determine whether this form is an early member of Castor, or if it represents a lineage outside of Castor. The specimen was compared to 38 specimens of modern Castor (both C. canadensis and C. fiber), as well as fossil specimens of C. fiber (Pleistocene), C. californicus (Pliocene), and the early castorid Steneofiber eseri (early Miocene). The results show that the specimen falls outside the Castor morphospace and that, compared to Castor, Sinocastor possesses a: (1) narrower post-orbital constriction, (2) anteroposteriorly shortened basioccipital depression, (3) shortened incisive foramen, (4) more posteriorly located palatine foramen, (5) longer rostrum, and (6) longer braincase. The specimen also shows a much shallower basioccipital depression than what is seen in living Castor, as well as prominently rooted molars. We conclude that Sinocastor is a valid genus. Given the prevalence of apparently primitive traits, Sinocastor might be a near relative of the lineage that gave rise to Castor, implying a possible Asiatic origin for Castor.

  2. Management by assertion: beavers and songbirds at Lake Skinner (Riverside County, California).

    Science.gov (United States)

    Longcore, Travis; Rich, Catherine; Müller-Schwarze, Dietland

    2007-04-01

    Management of ecological reserve lands should rely on the best available science to achieve the goal of biodiversity conservation. "Adaptive Resource Management" is the current template to ensure that management decisions are reasoned and that decisions increase understanding of the system being managed. In systems with little human disturbance, certain management decisions are clear; steps to protect native species usually include the removal of invasive species. In highly modified systems, however, appropriate management steps to conserve biodiversity are not as readily evident. Managers must, more than ever, rely upon the development and testing of hypotheses to make rational management decisions. We present a case study of modern reserve management wherein beavers (Castor canadensis) were suspected of destroying habitat for endangered songbirds (least Bell's vireo, Vireo bellii pusillus, and southwestern willow flycatcher, Empidonax traillii extimus) and for promoting the invasion of an exotic plant (tamarisk, Tamarix spp.) at an artificial reservoir in southern California. This case study documents the consequences of failing to follow the process of Adaptive Resource Management. Managers made decisions that were unsupported by the scientific literature, and actions taken were likely counterproductive. The opportunity to increase knowledge of the ecosystem was lost. Uninformed management decisions, essentially "management by assertion," undermine the long-term prospects for biodiversity conservation.

  3. Outcome-based education--the ostrich, the peacock and the beaver.

    Science.gov (United States)

    Harden, Ronald M

    2007-09-01

    Significant progress has been made with the move to outcome-based education (OBE) in medicine, and learning outcomes are on today's agenda. Learning outcomes have been specified in a number of areas, and frameworks or models for communicating and presenting learning outcomes have been described. OBE has, however, two requirements. The first is to make the learning outcomes explicit; the second is the use of the specified outcomes as a basis for decisions about the curriculum. It is the second requirement that is often ignored. Three patterns of behaviour have been identified - the 'ostriches' who ignore the move to OBE believing it to be a passing fad or irrelevance, the 'peacocks' who display, sometimes ostentatiously, a specified set of outcomes but stop there, and the 'beavers' who, having prepared their set of learning outcomes, use this as a basis for curriculum-related decisions. An OBE implementation inventory is described that allows schools to assess their level of adoption of an OBE approach in their institution. Schools can use this to rate their level of OBE adoption on a five-point scale on nine dimensions - a statement of learning outcomes, communication with staff/students about the outcomes, the educational strategies adopted, the learning opportunities available, the course content, student progression through the course, assessment of students, the educational environment and student selection. A profile for OBE implementation can be prepared for the institution.

  4. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and to explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  5. How Big is Earth?

    Science.gov (United States)

    Thurber, Bonnie B.

    2015-08-01

    How Big is Earth celebrates the Year of Light. Using only the sunlight striking the Earth and a wooden dowel, students meet each other and then measure the circumference of the Earth. Eratosthenes did it over 2,000 years ago. In Cosmos, Carl Sagan shared the process by which Eratosthenes measured the angle of the shadow cast at local noon when sunlight strikes a stick positioned perpendicular to the ground. By comparing his measurement to another made a distance away, Eratosthenes was able to calculate the circumference of the Earth. How Big is Earth provides an online learning environment where students do science the same way Eratosthenes did. A notable project in which this was done was The Eratosthenes Project, conducted in 2005 as part of the World Year of Physics; in fact, we will be drawing on the teacher's guide developed by that project. How Big is Earth? expands on the Eratosthenes project through the online learning environment of the iCollaboratory, www.icollaboratory.org, where teachers and students from Sweden, China, Nepal, Russia, Morocco, and the United States collaborate, share data, and reflect on their learning of science and astronomy. They share their information and discuss and brainstorm solutions in a discussion forum. There is an ongoing database of student measurements and another database to collect data on both teacher and student learning from surveys, discussions, and self-reflection done online. We will share our research about the kinds of learning that take place only in global collaborations. The entrance address for the iCollaboratory is http://www.icollaboratory.org.
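
    The computation the students carry out is compact enough to show directly; the sketch below uses illustrative numbers close to the classical Alexandria-Syene values, not measurements from the project:

        # Eratosthenes' calculation: two noon shadow angles plus the
        # north-south distance between the sites give the circumference.
        import math

        def shadow_angle_deg(stick_height, shadow_length):
            """Sun angle from vertical, from a stick and its shadow."""
            return math.degrees(math.atan2(shadow_length, stick_height))

        def circumference_km(angle_a_deg, angle_b_deg, distance_km):
            """The angle difference is the arc between the two sites, so
            circumference = distance * 360 / delta_angle."""
            delta = abs(angle_a_deg - angle_b_deg)
            return distance_km * 360.0 / delta

        a = shadow_angle_deg(1.0, 0.126)   # ~7.2 degrees for a 1 m stick
        print(round(circumference_km(a, 0.0, 800.0)))  # -> ~40,000 km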

  6. Privacy and Big Data

    CERN Document Server

    Craig, Terence

    2011-01-01

    Much of what constitutes Big Data is information about us. Through our online activities, we leave an easy-to-follow trail of digital footprints that reveal who we are, what we buy, where we go, and much more. This eye-opening book explores the raging privacy debate over the use of personal data, with one undeniable conclusion: once data's been collected, we have absolutely no control over who uses it or how it is used. Personal data is the hottest commodity on the market today-truly more valuable than gold. We are the asset that every company, industry, non-profit, and government wants. Pri

  7. Big Data Challenges

    Directory of Open Access Journals (Sweden)

    Alexandru Adrian TOLE

    2013-10-01

    Full Text Available The amount of data traveling across the internet today is not only large but also complex. Companies, institutions, healthcare systems and others accumulate piles of data that are then used to create reports and to ensure continuity of the services they offer. The processing behind the results that these entities request represents a challenge for software developers and for the companies that provide IT infrastructure. The challenge is how to manipulate an impressive volume of data that has to be securely delivered through the internet and reach its destination intact. This paper treats the challenges that Big Data creates.

  8. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  9. Orexin receptor expression in the hypothalamic-pituitary-adrenal and hypothalamic-pituitary-gonadal axes of free-living European beavers (Castor fiber L.) in different periods of the reproductive cycle.

    Science.gov (United States)

    Czerwinska, Joanna; Chojnowska, Katarzyna; Kaminski, Tadeusz; Bogacka, Iwona; Smolinska, Nina; Kaminska, Barbara

    2017-01-01

    Orexins are hypothalamic neuropeptides acting via two G protein-coupled receptors in mammals: orexin receptor 1 (OX1R) and orexin receptor 2 (OX2R). In European beavers, which are seasonally breeding animals, the presence and functions of orexins and their receptors remain unknown. Our study aimed to determine the expression of OXR mRNAs and the localization of OXR proteins in the hypothalamic-pituitary-adrenal/gonadal (HPA/HPG) axes of free-living beavers. The expression of OXR genes (OX1R, OX2R) and proteins was found in all analysed tissues during three periods of the beavers' reproductive cycle (April, July, November). The expression of OXR mRNAs in the beaver HPA axis varied seasonally. The presence of OXRs in the HPA and HPG axes suggests that the expression of these receptors is associated with sex-specific changes in beavers' reproductive activity and their environmental adaptations. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  11. Fish survey, fishing duration, and other data from net and otter trawls from the BIG VALLEY as part of Outer Continental Shelf Environmental Assessment Program (OCSEAP) from 20 May 1976 to 30 June 1976 (NODC Accession 7601547)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Fish survey, fishing duration, and other data were collected from net and otter trawls from the BIG VALLEY from 20 May 1976 to 30 June 1976. Data were collected by...

  12. Benthic organism and other data from otter trawls from the Gulf of Alaska from the BIG VALLEY as part of the Outer Continental Shelf Environmental Assessment Program (OCSEAP) from 17 June 1976 to 18 March 1977 (NODC Accession 7700849)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Benthic organism and other data were collected from otter trawls in the Gulf of Alaska from the BIG VALLEY by University of Alaska; Institute of Marine Science...

  13. Ethical Aspects of Big Data (Ethische aspecten van big data)

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only given rise to challenging technical questions; it is also accompanied by a range of new ethical and moral issues. To handle big data responsibly, these issues must be thought through as well, because poor use of data can have adverse consequences for

  14. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013 CERN launches a scientific tourist trail through the Pays de Gex and the Canton of Geneva known as the Passport to the Big Bang. Poster and Programme.

  15. Think Big, Bigger ... and Smaller

    Science.gov (United States)

    Nisbett, Richard E.

    2010-01-01

    One important principle of social psychology, writes Nisbett, is that some big-seeming interventions have little or no effect. This article discusses a number of cases from the field of education that confirm this principle. For example, Head Start seems like a big intervention, but research has indicated that its effects on academic achievement…

  16. BIG SKY CARBON SEQUESTRATION PARTNERSHIP

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2005-01-31

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, comprises research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership in Phase I fall into four areas: evaluation of carbon sources and sequestration sinks that will be used to determine the location of pilot demonstrations in Phase II; development of a GIS-based reporting framework that links with national networks; design of an integrated suite of monitoring, measuring, and verification technologies and assessment frameworks; and initiation of a comprehensive education and outreach program. The groundwork is in place to provide an assessment of storage capabilities for CO2 utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that would complement the ongoing DOE research. Efforts are underway to showcase the architecture of the GIS framework and initial results for sources and sinks. The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other western DOE partnerships. The Partnership recognizes the critical importance of measurement, monitoring, and verification technologies to support not only carbon trading but all policies and programs that DOE and other agencies may want to pursue in support of GHG mitigation. The efforts in developing and implementing MMV technologies for geological sequestration reflect this concern. Research is

  17. Hydrogeological framework, numerical simulation of groundwater flow, and effects of projected water use and drought for the Beaver-North Canadian River alluvial aquifer, northwestern Oklahoma

    Science.gov (United States)

    Ryter, Derek W.; Correll, Jessica S.

    2016-01-14

    This report describes a study of the hydrology, hydrogeological framework, numerical groundwater-flow models, and results of simulations of the effects of water use and drought for the Beaver-North Canadian River alluvial aquifer, northwestern Oklahoma. The purpose of the study was to provide analyses, including estimating equal-proportionate-share (EPS) groundwater-pumping rates and the effects of projected water use and droughts, pertinent to water management of the Beaver-North Canadian River alluvial aquifer for the Oklahoma Water Resources Board.

  18. [Population-genetic structure of beaver (Castor fiber L., 1758) communities and estimation of effective reproductive size Ne of an elementary population].

    Science.gov (United States)

    2004-07-01

    The absence of panmixia at all hierarchical levels of European beaver communities, down to individual families, implies a complex organization of the population-genetic structure of the species, in particular a large intergroup component of gene diversity in the populations. Testing this assumption by analysis of 39 allozyme loci in communities of reintroduced beavers from the Vyatka River basin (Kirov oblast) has shown that only the beaver colonies exhibit high intergroup gene diversity (Gst = 0.32), whereas this parameter is much lower when estimated among beaver groups from individual Vyatka River tributaries and among localities of one of the tributaries (0.07 and 0.11, respectively). Data suggesting genetic heterogeneity among individual settlements within colonies were also obtained. The factors affecting the structure of beaver communities of the lower hierarchical ranks are considered: common origin, founder effect, selection, gene drift, assortative mating, and the social and behavioral features of the species. The conclusion is drawn that the founder effect could have been the primary factor of population differentiation only at the time of their formation. The heterogeneity among colonies and among settlements is maintained largely by the isolation of colonies from one another. The strong intraspecific competition for food resources, which is behaviorally implemented in the species at the level of minimal structural units (individual settlements), creates a profound and unique population-genetic subdivision of the species. These results substantiate the suggestion that an elementary population (micropopulation) of the European beaver is a colony, i.e., a set of related settlements of different types. Based on ecological and genetic parameters, the effective reproductive size Ne of the minimal beaver population was estimated to be three animals. This extremely low value of effective reproductive population size largely explains the high tolerance of European beaver
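
    For reference, the intergroup component reported here is Nei's gene diversity statistic (standard formulation assumed, since the abstract does not spell it out):

        G_{ST} = \frac{H_T - H_S}{H_T}

    where H_T is the total expected heterozygosity and H_S the mean expected heterozygosity within groups; Gst = 0.32 among colonies thus means that roughly a third of the total gene diversity lies between colonies.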

  19. Linkage analysis of quantitative refraction and refractive errors in the Beaver Dam Eye Study.

    Science.gov (United States)

    Klein, Alison P; Duggal, Priya; Lee, Kristine E; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E; Klein, Barbara E K

    2011-07-13

    Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, linkage analysis of spherical equivalent, myopia, and hyperopia in the Beaver Dam Eye Study was performed. Nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia, and hyperopia in 834 sibling pairs within 486 extended pedigrees were performed. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10^-4), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10^-3 and 1.48 × 10^-3, respectively) and to 7p15 of refraction (empiric P = 9.43 × 10^-4). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10^-3), a region previously linked to high myopia. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11.

  20. The influence of hydrologic connectivity on ecosystem metabolism and nitrate uptake in an active beaver meadow

    Science.gov (United States)

    Wegener, P.; Covino, T. P.; Wohl, E.; Kampf, S. K.; Lacy, S.

    2015-12-01

    Wetlands have been widely demonstrated to provide important watershed services, such as the sequestration of carbon (C) and removal of nitrate (NO3-) from through-flowing water. Hydrologic connectivity (degree of water and associated material exchange) between floodplain water bodies (e.g., side channels, ponds) and the main channel influences rates of C accumulation and NO3- uptake, and the degree to which wetlands contribute to enhanced water quality at the catchment scale. However, environmental engineers have largely ignored the role of hydrologic connectivity in providing essential ecosystem services, and constructed wetlands are commonly built using compacted clay and berms that result in less groundwater and surface water exchange than observed in natural wetlands. In a study of an active beaver meadow (multithreaded, riparian wetland) in Rocky Mountain National Park, CO, we show how shifts in hydrology (connectivity, residence times, flow paths) from late spring snowmelt (high connectivity) to autumn/winter baseflow (low connectivity) influence ecosystem metabolism metrics (e.g., gross primary production, ecosystem respiration, and net ecosystem productivity) and NO3- uptake rates. We use a combination of mixing analyses, tracer tests, and hydrometric methods to evaluate shifts in surface and subsurface hydrologic connections between floodplain water bodies from snowmelt to baseflow. In the main channel and three floodplain water bodies, we quantify metabolism metrics and NO3- uptake kinetics across shifting flow regimes. Results from our research indicate that NO3- uptake and metabolism dynamics respond to changing levels of hydrologic connectivity to the main channel, emphasizing the importance of incorporating connectivity in wetland mitigation practices that seek to enhance water quality at the catchment scale.

  1. The Last Big Bang

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, Austin D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meade, Roger Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-13

    As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.

  2. Big Data Aesthetics

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2016-01-01

    This article discusses artistic practices and artifacts that are occupied with exploring data through visualization and sonification strategies as well as with translating data into materially solid formats and embodied processes. By means of these examples the overall aim of the article is to critically question how and whether such artistic practices can eventually lead to the experience and production of knowledge that could not otherwise be obtained via more traditional ways of data representation. The article, thus, addresses both the problems and possibilities entailed in extending the use of large data sets – or Big Data – into the sphere of art and the aesthetic. Central to the discussion here is the analysis of how different structuring principles of data and the discourses that surround these principles shape our perception of data. This discussion involves considerations on various...

  3. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurement data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption that bigger is always better, and the many legacy decisions and rules that ultimately govern how audiences are ‘made’ in commercial measurement companies. As such, the paper extends the discussions of a previous empirical study (Lai 2016) on how media organizations imagine their audiences (Ang 1991; Napoli 2010; Webster 2014). This study evolved around industry stakeholders resisting and negotiating changes, as they are happening, in media consumption dynamics and measurement standards, which inevitably reconceptualize future institutionally effective audiences (Ettema & Whitney 1994). With digital...

  4. Urban Big Data and Sustainable Development Goals: Challenges and Opportunities

    Directory of Open Access Journals (Sweden)

    Ali Kharrazi

    2016-12-01

    Full Text Available Cities are perhaps one of the most challenging and yet enabling arenas for sustainable development goals. The Sustainable Development Goals (SDGs) emphasize the need to monitor each goal through objective targets and indicators based on common denominators in the ability of countries to collect and maintain relevant standardized data. While this approach is aimed at harmonizing the SDGs at the national level, it presents unique challenges and opportunities for the development of innovative urban-level metrics through big data innovations. In this article, we make the case for advancing more innovative targets and indicators relevant to the SDGs through the emergence of urban big data. We believe that urban policy-makers are faced with unique opportunities to develop, experiment, and advance big data practices relevant to sustainable development. This can be achieved by situating the application of big data innovations through developing mayoral institutions for the governance of urban big data, advancing the culture and common skill sets for applying urban big data, and investing in specialized research and education programs.

  5. Big Data: present and future

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2014-05-01

    Full Text Available The paper explains the importance of the Big Data concept, a concept that even now, after years of development, is for most companies just a cool keyword. The paper also describes the current state of big data development, what it can do today, and what it is likely to enable in the near future. It focuses on explaining to non-technical readers and to technical specialists outside the database field what big data essentially is, presents the three most important V's as well as the newer ones, surveys the most important solutions used by companies such as Google and Amazon, and offers some interesting observations on the subject.

  6. Big Data and Ambulatory Care

    Science.gov (United States)

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2015-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an overview of the legal framework governing health information, dispels misconceptions about privacy regulations, and highlights how ambulatory care providers in particular can maximize the utility of big data to improve care. PMID:25401945

  7. Big Data Mining: Tools & Algorithms

    Directory of Open Access Journals (Sweden)

    Adeel Shiraz Hashmi

    2016-03-01

    Full Text Available We are now in the Big Data era, and there is a growing demand for tools which can process and analyze it. Big data analytics deals with extracting valuable information from complex data that can’t be handled by traditional data mining tools. This paper surveys the available tools which can handle large volumes of data as well as evolving data streams. The data mining tools and algorithms which can handle big data have also been summarized, and one of the tools has been used for mining of large datasets using distributed algorithms.

  8. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
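
    For orientation, programmatic BigQuery access from Python typically goes through the official google-cloud-bigquery client, as in the minimal sketch below. The project id is a placeholder, the table is a public sample dataset, and the snippet is a hedged sketch rather than an example from the book.

```python
# Minimal sketch: run a SQL query with the google-cloud-bigquery client.
# "my-project" is a hypothetical project id; the table is a public sample.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 10
"""
for row in client.query(query).result():  # blocks until the query job finishes
    print(row.word, row.total)
```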

  9. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes excessive energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
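
    To make the approach concrete, here is a minimal sketch (not the authors' implementation) of a genetic algorithm that assigns analytics jobs to cluster nodes so as to minimize makespan; the hard-coded runtimes stand in for the output of a performance-estimation module, and all parameters are illustrative.

```python
# Toy GA for job scheduling: chromosome = node assignment per job,
# fitness = makespan (load of the busiest node), lower is better.
import random

JOB_RUNTIMES = [5, 3, 8, 2, 7, 4, 6, 1]  # predicted runtime of each job
N_NODES = 3

def makespan(assignment):
    loads = [0] * N_NODES
    for job, node in enumerate(assignment):
        loads[node] += JOB_RUNTIMES[job]
    return max(loads)

def evolve(pop_size=40, generations=100, mutation_rate=0.1):
    pop = [[random.randrange(N_NODES) for _ in JOB_RUNTIMES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)              # rank by fitness
        pop = pop[:pop_size // 2]           # truncation selection
        while len(pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # point mutation
                child[random.randrange(len(child))] = random.randrange(N_NODES)
            pop.append(child)
    return min(pop, key=makespan)

best = evolve()
print(best, "makespan:", makespan(best))
```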

  10. Light and Electron Microscopy of the European Beaver (Castor fiber) Stomach Reveal Unique Morphological Features with Possible General Biological Significance

    Science.gov (United States)

    Petryński, Wojciech; Palkowska, Katarzyna; Prusik, Magdalena; Targońska, Krystyna; Giżejewski, Zygmunt; Przybylska-Gornowicz, Barbara

    2014-01-01

    Anatomical, histological, and ultrastructural studies of the European beaver stomach revealed several unique morphological features. The prominent attribute of its gross morphology was the cardiogastric gland (CGG), located near the oesophageal entrance. Light microscopy showed that the CGG was formed by invaginations of the mucosa into the submucosa, which contained densely packed proper gastric glands comprised primarily of parietal and chief cells. Mucous neck cells represented <0.1% of cells in the CGG gastric glands and 22-32% of cells in the proper gastric glands of the mucosa lining the stomach lumen. These data suggest that chief cells in the CGG develop from undifferentiated cells that migrate through the gastric gland neck rather than from mucous neck cells. Classical chief cell formation (i.e., arising from mucous neck cells) occurred in the mucosa lining the stomach lumen, however. The muscularis around the CGG consisted primarily of skeletal muscle tissue. The cardiac region was rudimentary while the fundus/corpus and pyloric regions were equally developed. Another unusual feature of the beaver stomach was the presence of specific mucus with a thickness up to 950 µm (in frozen, unfixed sections) that coated the mucosa. Our observations suggest that the formation of this mucus is complex and includes the secretory granule accumulation in the cytoplasm of pit cells, the granule aggregation inside cells, and the incorporation of degenerating cells into the mucus. PMID:24727802

  11. Concentration of lead, cadmium, and mercury in tissues of European beaver (Castor fiber from the north-eastern Poland

    Directory of Open Access Journals (Sweden)

    Giżejewska Aleksandra

    2014-03-01

    Full Text Available The aim of the study was to determine the concentrations of lead (Pb), cadmium (Cd), and mercury (Hg) in the liver, kidneys, and muscles of European beavers (Castor fiber) and thus to evaluate the degree of heavy metal contamination in the Warmia and Mazury region in Poland. The study was conducted on free-living beavers captured in the region of Warmia and Mazury during autumn 2011. Concentrations of the elements were determined by atomic absorption spectrometry. The presence of the metals was detected in all individual tissue samples. Mean Pb and Hg concentrations were relatively low. However, a high mean Cd level was demonstrated, especially in the kidneys (7.933 mg/kg) and liver (0.880 mg/kg). Despite the fact that the region of Warmia and Mazury is considered to be “ecologically clean”, the conducted studies indicate that systematic monitoring for the presence of heavy metals is necessary not only in industrialised but also in agricultural regions, as well as in natural ecosystems.

  12. Light and electron microscopy of the European beaver (Castor fiber stomach reveal unique morphological features with possible general biological significance.

    Directory of Open Access Journals (Sweden)

    Natalia Ziółkowska

    Full Text Available Anatomical, histological, and ultrastructural studies of the European beaver stomach revealed several unique morphological features. The prominent attribute of its gross morphology was the cardiogastric gland (CGG), located near the oesophageal entrance. Light microscopy showed that the CGG was formed by invaginations of the mucosa into the submucosa, which contained densely packed proper gastric glands comprised primarily of parietal and chief cells. Mucous neck cells represented <0.1% of cells in the CGG gastric glands and 22-32% of cells in the proper gastric glands of the mucosa lining the stomach lumen. These data suggest that chief cells in the CGG develop from undifferentiated cells that migrate through the gastric gland neck rather than from mucous neck cells. Classical chief cell formation (i.e., arising from mucous neck cells) occurred in the mucosa lining the stomach lumen, however. The muscularis around the CGG consisted primarily of skeletal muscle tissue. The cardiac region was rudimentary while the fundus/corpus and pyloric regions were equally developed. Another unusual feature of the beaver stomach was the presence of specific mucus with a thickness up to 950 µm (in frozen, unfixed sections) that coated the mucosa. Our observations suggest that the formation of this mucus is complex and includes the secretory granule accumulation in the cytoplasm of pit cells, the granule aggregation inside cells, and the incorporation of degenerating cells into the mucus.

  13. Age-related changes in somatic condition and reproduction in the Eurasian beaver: Resource history influences onset of reproductive senescence.

    Directory of Open Access Journals (Sweden)

    Ruairidh D Campbell

    Full Text Available Using 15 years of data from a stable population of wild Eurasian beavers (Castor fiber), we examine how annual and lifetime access to food resources affect individual age-related changes in reproduction and somatic condition. We found an age-related decline in annual maternal reproductive output, after a peak at age 5-6. Rainfall, an established negative proxy of annual resource availability for beavers, was consistently associated with lower reproductive output for females of all ages. In contrast, breeding territory quality, as a measure of local resource history over reproductive lifetimes, caused differences in individual patterns of reproductive senescence; animals from lower quality territories senesced when younger. Litter size was unrelated to maternal age, although adult body weight increased with age. In terms of resource effects, in poorer years but not in better years, older mothers produced larger offspring than did younger mothers, giving support to the constraint theory. Overall, our findings exemplify state-dependent life-history strategies, supporting an effect of resources on reproductive senescence, where cumulative differences in resource access, and not just reproductive strategy, mediate long-term reproductive trade-offs, consistent with the disposable soma and reproductive restraint theories. We propose that flexible life-history schedules could play a role in the dynamics of populations exhibiting reproductive skew, with earlier breeding opportunities leading to an earlier senescence schedule through resource dependent mechanisms.

  14. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things, and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  15. The influence of fall-spawning coho salmon (Oncorhynchus kisutch) on growth and production of juvenile coho salmon rearing in beaver ponds on the Copper River Delta, Alaska.

    Science.gov (United States)

    Dirk W. Lang; Gordon H. Reeves; James D. Hall; Mark S. Wipfli

    2006-01-01

    This study examined the influence of fall-spawning coho salmon (Oncorhynchus kisutch) on the density, growth rate, body condition, and survival to outmigration of juvenile coho salmon on the Copper River Delta, Alaska, USA. During the fall of 1999 and 2000, fish rearing in beaver ponds that received spawning salmon were compared with fish from...

  16. The significance of the European beaver (Castor fiber) activity for the process of renaturalization of river valleys in the era of increasing

    Directory of Open Access Journals (Sweden)

    Kusztal Piotr

    2017-03-01

    Full Text Available Changes in the environment caused by beaver activity bring numerous benefits: they increase biodiversity, contribute to improving the cleanliness of watercourses, improve local water relations, and restore the natural landscape of river valleys.

  17. Big data for bipolar disorder

    National Research Council Canada - National Science Library

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-01-01

    .... The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events...

  18. Big Data and Perioperative Nursing.

    Science.gov (United States)

    Westra, Bonnie L; Peterson, Jessica J

    2016-10-01

    Big data are large volumes of digital data that can be collected from disparate sources and are challenging to analyze. These data are often described with the five "Vs": volume, velocity, variety, veracity, and value. Perioperative nurses contribute to big data through documentation in the electronic health record during routine surgical care, and these data have implications for clinical decision making, administrative decisions, quality improvement, and big data science. This article explores methods to improve the quality of perioperative nursing data and provides examples of how these data can be combined with broader nursing data for quality improvement. We also discuss a national action plan for nursing knowledge and big data science and how perioperative nurses can engage in collaborative actions to transform health care. Standardized perioperative nursing data has the potential to affect care far beyond the original patient. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  19. Big Lake Dam Inspection Report

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This report summarizes an inspection of the Big Lake Dam that was done in September of 1983. The inspection did not reveal any conditions that constitute and...

  20. BIG SKY CARBON SEQUESTRATION PARTNERSHIP

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2004-06-30

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, is comprised of research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership fall into four areas: evaluation of sources and carbon sequestration sinks; development of a GIS-based reporting framework; design of an integrated suite of monitoring, measuring, and verification technologies; and initiation of a comprehensive education and outreach program. At the first two Partnership meetings the groundwork was put in place to provide an assessment of capture and storage capabilities for CO{sub 2} utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that would complement the ongoing DOE research. During the third quarter, planning efforts are underway for the next Partnership meeting, which will showcase the architecture of the GIS framework and initial results for sources and sinks, and discuss the methods and analysis underway for assessing geological and terrestrial sequestration potentials. The meeting will conclude with an ASME workshop (see attached agenda). The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other western DOE partnerships. Efforts are also being made to find funding to include Wyoming in the coverage areas for both geological and terrestrial sinks and sources. The Partnership recognizes the critical importance of measurement

  1. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  2. Big Data and Ambulatory Care

    OpenAIRE

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2014-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an ov...

  3. BIG SKY CARBON SEQUESTRATION PARTNERSHIP

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2004-06-01

    The Big Sky Partnership, led by Montana State University, is comprised of research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts during the second performance period fall into four areas: evaluation of sources and carbon sequestration sinks; development of a GIS-based reporting framework; design of an integrated suite of monitoring, measuring, and verification technologies; and initiation of a comprehensive education and outreach program. At the first two Partnership meetings the groundwork was put in place to provide an assessment of capture and storage capabilities for CO{sub 2} utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that would complement the ongoing DOE research. The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other western DOE partnerships. Efforts are also being made to find funding to include Wyoming in the coverage areas for both geological and terrestrial sinks and sources. The Partnership recognizes the critical importance of measurement, monitoring, and verification technologies to support not only carbon trading but all policies and programs that DOE and other agencies may want to pursue in support of GHG mitigation. The efforts begun in developing and implementing MMV technologies for geological sequestration reflect this concern. Research is also underway to identify and validate best management practices for

  4. Big Sky Carbon Sequestration Partnership

    Energy Technology Data Exchange (ETDEWEB)

    Susan Capalbo

    2005-12-31

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, is comprised of research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership in Phase I are organized into four areas: (1) Evaluation of sources and carbon sequestration sinks that will be used to determine the location of pilot demonstrations in Phase II; (2) Development of a GIS-based reporting framework that links with national networks; (3) Design of an integrated suite of monitoring, measuring, and verification technologies, market-based opportunities for carbon management, and an economic/risk assessment framework (referred to below as the Advanced Concepts component of the Phase I efforts); and (4) Initiation of a comprehensive education and outreach program. As a result of the Phase I activities, the groundwork is in place to provide an assessment of storage capabilities for CO{sub 2} utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that complements the ongoing DOE research agenda in Carbon Sequestration. The geology of the Big Sky Carbon Sequestration Partnership Region is favorable for the potential sequestration of enormous volumes of CO{sub 2}. The United States Geological Survey (USGS 1995) identified 10 geologic provinces and 111 plays in the region. These provinces and plays include both sedimentary rock types characteristic of oil, gas, and coal production as well as large areas of mafic volcanic rocks. Of the 10 provinces and 111 plays, 1 province and 4 plays are located within Idaho. The remaining 9 provinces and 107 plays are dominated by sedimentary rocks and located in the states of Montana and Wyoming. The potential sequestration capacity of the 9 sedimentary provinces within the region ranges from 25,000 to almost 900,000 million metric tons of CO{sub 2}. Overall every sedimentary formation investigated

  5. Powering Big Data for Nursing Through Partnership.

    Science.gov (United States)

    Harper, Ellen M; Parkerson, Sara

    2015-01-01

    The Big Data Principles Workgroup (Workgroup) was established with support of the Healthcare Information and Management Systems Society. Building on the Triple Aim challenge, the Workgroup sought to identify Big Data principles, barriers, and challenges to nurse-sensitive data inclusion into Big Data sets. The product of this pioneering partnership Workgroup was the "Guiding Principles for Big Data in Nursing-Using Big Data to Improve the Quality of Care and Outcomes."

  6. Stalin’s Big-Fleet Program

    Science.gov (United States)

    2004-01-01

    of the nuclear submarine Kursk with its entire crew—the Russian navy still remains nuclear and the second most powerful in the world. It overreached...Soviet Battleship Construction], Marine-Rundschau 71 (1974), pp. 461–79; S. Breyer, “Sowjetischer Schlachtschiffbau,” Marine-Rundschau 72 (1975

  7. Geochemistry of groundwater in the Beaver and Camas Creek drainage basins, eastern Idaho

    Science.gov (United States)

    Rattray, Gordon W.; Ginsbach, Michael L.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, is studying the fate and transport of waste solutes in the eastern Snake River Plain (ESRP) aquifer at the Idaho National Laboratory (INL) in eastern Idaho. This effort requires an understanding of the natural and anthropogenic geochemistry of groundwater at the INL and of the important physical and chemical processes controlling the geochemistry. In this study, the USGS applied geochemical modeling to investigate the geochemistry of groundwater in the Beaver and Camas Creek drainage basins, which provide groundwater recharge to the ESRP aquifer underlying the northeastern part of the INL. Data used in this study include petrology and mineralogy from 2 sediment and 3 rock samples, and water-quality analyses from 4 surface-water and 18 groundwater samples. The mineralogy of the sediment and rock samples was analyzed with X-ray diffraction, and the mineralogy and petrology of the rock samples were examined in thin sections. The water samples were analyzed for field parameters, major ions, silica, nutrients, dissolved organic carbon, trace elements, tritium, and the stable isotope ratios of hydrogen, oxygen, carbon, sulfur, and nitrogen. Groundwater geochemistry was influenced by reactions with rocks of the geologic terranes—carbonate rocks, rhyolite, basalt, evaporite deposits, and sediment comprised of all of these rocks. Agricultural practices near and south of Dubois and application of road anti-icing liquids on U.S. Interstate Highway 15 were likely sources of nitrate, chloride, calcium, and magnesium to groundwater. Groundwater geochemistry was successfully modeled in the alluvial aquifer in Camas Meadows and the ESRP fractured basalt aquifer using the geochemical modeling code PHREEQC. The primary geochemical processes appear to be precipitation or dissolution of calcite and dissolution of silicate minerals. Dissolution of evaporite minerals, associated with Pleistocene Lake
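
    For context on the modeling step, codes such as PHREEQC judge whether a mineral like calcite is dissolving or precipitating from its saturation index, SI = log10(IAP/Ksp). The sketch below is a stripped-down illustration of that calculation and is not part of the study; the ion activities are invented round numbers, and log Ksp ≈ -8.48 is the commonly tabulated value for calcite at 25 °C.

```python
# Saturation index for calcite: SI = log10(IAP) - log10(Ksp).
# SI > 0 suggests precipitation; SI < 0 suggests dissolution.
import math

def saturation_index(activity_ca, activity_co3, log_ksp=-8.48):
    iap = activity_ca * activity_co3          # ion activity product {Ca2+}{CO3 2-}
    return math.log10(iap) - log_ksp

si = saturation_index(2e-3, 5e-6)             # illustrative activities
state = "supersaturated" if si > 0 else "undersaturated"
print(f"SI = {si:.2f} ({state} with respect to calcite)")
```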

  8. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
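
    The spurious-correlation problem is easy to demonstrate numerically: hold the sample size fixed, grow the pool of genuinely independent predictors, and the largest observed sample correlation with an unrelated response keeps climbing. A minimal simulation (illustrative only, not from the article):

```python
# With n fixed and p growing, max |sample correlation| rises by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # sample size
y = rng.standard_normal(n)                    # response, independent of everything
for p in (10, 100, 1000, 10000):              # number of candidate predictors
    X = rng.standard_normal((n, p))
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    print(f"p = {p:>5}: max |r| = {np.abs(r).max():.2f}")
```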

  9. Seasonal differences in the testicular transcriptome profile of free-living European beavers (Castor fiber L.) determined by the RNA-Seq method.

    Directory of Open Access Journals (Sweden)

    Iwona Bogacka

    Full Text Available The European beaver (Castor fiber L.) is an important free-living rodent that inhabits Eurasian temperate forests. Beavers are often referred to as ecosystem engineers because they create or change existing habitats, enhance biodiversity and prepare the environment for diverse plant and animal species. Beavers are protected in most European Union countries, but their genomic background remains unknown. In this study, gene expression patterns in beaver testes and the variations in genetic expression in breeding and non-breeding seasons were determined by high-throughput transcriptome sequencing. Paired-end sequencing on the Illumina HiSeq 2000 sequencer produced a total of 373.06 million high-quality reads. De novo assembly of contigs yielded 130,741 unigenes with an average length of 1,369.3 nt, an N50 value of 1,734, and an average GC content of 46.51%. A comprehensive analysis of the testicular transcriptome revealed more than 26,000 highly expressed unigenes which exhibited the highest homology with Rattus norvegicus and Ictidomys tridecemlineatus genomes. More than 8,000 highly expressed genes were found to be involved in fundamental biological processes, cellular components or molecular pathways. The study also revealed 42 genes whose regulation differed between breeding and non-breeding seasons. During the non-breeding period, the expression of 37 genes was up-regulated, and the expression of 5 genes was down-regulated relative to the breeding season. The identified genes encode molecules which are involved in signal transduction, DNA repair, stress responses, inflammatory processes, metabolism and steroidogenesis. Our results pave the way for further research into season-dependent variations in beaver testes.
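
    Two of the assembly statistics quoted above have simple definitions worth spelling out: N50 is the contig length at which contigs of that length or longer contain half of all assembled bases, and GC content is the fraction of G and C bases. A toy sketch of both (illustrative sequences, not the study's 130,741 unigenes):

```python
# N50 and GC content for a set of assembled contigs (toy sequences).
def n50(lengths):
    lengths = sorted(lengths, reverse=True)
    half, running = sum(lengths) / 2, 0
    for length in lengths:
        running += length
        if running >= half:
            return length        # half of all bases sit in contigs >= this long

def gc_content(seqs):
    bases = "".join(seqs).upper()
    return 100.0 * sum(bases.count(b) for b in "GC") / len(bases)

contigs = ["ATGCGC", "ATGCGCGTATCG", "AT", "GGGCCCATATAT"]
print("N50:", n50(len(c) for c in contigs))
print("GC%:", round(gc_content(contigs), 1))
```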

  10. Learning big data with Amazon Elastic MapReduce

    CERN Document Server

    Singh, Amarkant

    2014-01-01

    This book is aimed at developers and system administrators who want to learn about Big Data analysis using Amazon Elastic MapReduce. Basic Java programming knowledge is required. You should be comfortable with using command-line tools. Prior knowledge of AWS, API, and CLI tools is not assumed. Also, no exposure to Hadoop and MapReduce is expected.
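
    As a reminder of the model the book builds on, the sketch below runs MapReduce's three phases (map, shuffle, reduce) on one machine for the classic word count; Elastic MapReduce distributes this same pattern across a Hadoop cluster. The input lines are illustrative, and this is a conceptual sketch rather than an example from the book.

```python
# Single-machine MapReduce word count: map emits (word, 1) pairs,
# shuffle groups values by key, reduce sums each group.
from collections import defaultdict

def map_phase(line):
    for word in line.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    return word, sum(counts)

lines = ["big data on big clusters", "data beats clusters"]
shuffled = defaultdict(list)
for line in lines:                                  # map + shuffle
    for word, count in map_phase(line):
        shuffled[word].append(count)
results = [reduce_phase(w, c) for w, c in shuffled.items()]  # reduce
print(sorted(results))
```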

  11. Collaborative Approaches Needed to Close the Big Data Skills Gap

    Directory of Open Access Journals (Sweden)

    Steven Miller

    2014-04-01

    Full Text Available The big data and analytics talent discussion has largely focused on a single role – the data scientist. However, the need is much broader than data scientists. Data has become a strategic business asset. Every professional occupation must adapt to this new mindset. Universities in partnership with industry must move quickly to ensure that the graduates they produce have the required skills for the age of big data. Existing curricula should be reviewed and adapted to ensure relevance. New curricula and degree programs are needed to meet the needs of industry.

  12. Envisioning the future of 'big data' biomedicine.

    Science.gov (United States)

    Bui, Alex A T; Van Horn, John Darrell

    2017-05-01

    Through the increasing availability of more efficient data collection procedures, biomedical scientists are now confronting ever larger sets of data, often finding themselves struggling to process and interpret what they have gathered, even as still more data continue to accumulate. This torrent of biomedical information necessitates creative thinking about how the data are being generated, how they might be best managed, analyzed, and eventually how they can be transformed into further scientific understanding for improving patient care. Recognizing this as a major challenge, the National Institutes of Health (NIH) has spearheaded the "Big Data to Knowledge" (BD2K) program - the agency's most ambitious biomedical informatics effort ever undertaken to date. In this commentary, we describe how the NIH has taken on "big data" science head-on, how a consortium of leading research centers is developing the means for handling large-scale data, and how such activities are being marshalled for the training of a new generation of biomedical data scientists. All in all, the NIH BD2K program seeks to position data science at the heart of 21st century biomedical research. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Digital data sets that describe aquifer characteristics of the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to Canton Lake in northwestern Oklahoma

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital hydraulic conductivity values for the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to...

  14. Digital data sets that describe aquifer characteristics of the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to Canton Lake in northwestern Oklahoma

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital water-level elevation contours for the Quaternary alluvial and terrace deposits along the Beaver-North Canadian River from the...

  15. Digital data sets that describe aquifer characteristics of the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to Canton Lake in northwestern Oklahoma

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital aquifer boundaries for the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to Canton Lake in...

  16. Digital data sets that describe aquifer characteristics of the alluvial and terrace deposits along the Beaver-North Canadian River from the panhandle to Canton Lake in northwestern Oklahoma

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of digital polygons of a constant recharge value for the alluvial and terrace deposits along the Beaver-North Canadian River from the...

  17. Considerations on Geospatial Big Data

    Science.gov (United States)

    LIU, Zhen; GUO, Huadong; WANG, Changlin

    2016-11-01

    Geospatial data, as a significant portion of big data, has recently gained the full attention of researchers. However, few researchers focus on the evolution of geospatial data and its scientific research methodologies. When entering into the big data era, fully understanding the changing research paradigm associated with geospatial data will definitely benefit future research on big data. In this paper, we look deep into these issues by examining the components and features of geospatial big data, reviewing relevant scientific research methodologies, and examining the evolving pattern of geospatial data in the scope of the four ‘science paradigms’. This paper proposes that geospatial big data has significantly shifted the scientific research methodology from ‘hypothesis to data’ to ‘data to questions’ and it is important to explore the generality of growing geospatial data ‘from bottom to top’. Particularly, four research areas that mostly reflect data-driven geospatial research are proposed: spatial correlation, spatial analytics, spatial visualization, and scientific knowledge discovery. It is also pointed out that privacy and quality issues of geospatial data may require more attention in the future. Also, some challenges and thoughts are raised for future discussion.

  18. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  19. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  20. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  1. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  2. A practical guide to big data research in psychology.

    Science.gov (United States)

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
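
    As a compact illustration of the two tutorial methods named above, latent Dirichlet allocation topic modeling and support vector machine classification, here is a scikit-learn sketch; the four-document corpus and its labels are toy stand-ins, not the article's tutorial data.

```python
# LDA topic mixtures used as features for an SVM classifier (toy corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

docs = ["happy joyful day outside", "sad lonely rainy night",
        "joyful sunny happy walk", "lonely sad gloomy rain"]
labels = [1, 0, 1, 0]                         # e.g., positive vs. negative mood

counts = CountVectorizer().fit_transform(docs)            # bag-of-words counts
topics = LatentDirichletAllocation(n_components=2,
                                   random_state=0).fit_transform(counts)
clf = SVC(kernel="linear").fit(topics, labels)            # classify by topic mix
print(clf.predict(topics))
```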

  3. The Rise of Big Data in Neurorehabilitation.

    Science.gov (United States)

    Faroqi-Shah, Yasmeen

    2016-02-01

    In some fields, Big Data has been instrumental in analyzing, predicting, and influencing human behavior. However, Big Data approaches have so far been less central in speech-language pathology. This article introduces the concept of Big Data and provides examples of Big Data initiatives pertaining to adult neurorehabilitation. It also discusses the potential theoretical and clinical contributions that Big Data can make. The article also recognizes some impediments in building and using Big Data for scientific and clinical inquiry. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  4. Big Sky Carbon Sequestration Partnership

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2005-11-01

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, is comprised of research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership in Phase I fall into four areas: evaluation of sources and carbon sequestration sinks that will be used to determine the location of pilot demonstrations in Phase II; development of a GIS-based reporting framework that links with national networks; design of an integrated suite of monitoring, measuring, and verification technologies and assessment frameworks; and initiation of a comprehensive education and outreach program. The groundwork is in place to provide an assessment of storage capabilities for CO{sub 2} utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that would complement the ongoing DOE research agenda in Carbon Sequestration. The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other DOE regional partnerships. The Partnership recognizes the critical importance of measurement, monitoring, and verification technologies to support not only carbon trading but all policies and programs that DOE and other agencies may want to pursue in support of GHG mitigation. The efforts in developing and implementing MMV technologies for geological sequestration reflect this concern. Research is also underway to identify and validate best management practices for soil C in the

  5. Topics on distance correlation, feature screening and lifetime expectancy with application to Beaver Dam eye study data

    Science.gov (United States)

    Kong, Jing

    This thesis comprises four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, together with lifestyle factors that are also seen to run in families, in a subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure based on distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated into a novel feature screening procedure that uses distance correlation as a stopping criterion. The approach is further applied to two real examples, namely the famous small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of lifetime expectancy. We propose a general framework of backward multiple imputation for estimating the conditional lifetime expectancy function and the variance of the estimator in the right-censoring setting, and we prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where expected human lifetimes are modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors, and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the famous Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
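
    For readers new to the statistic, sample distance correlation (Székely, Rizzo, and Bakirov) is zero in the limit only under independence and detects nonlinear dependence that Pearson correlation misses. A minimal numpy sketch on toy data (not the Beaver Dam data):

```python
# Sample distance correlation from double-centered pairwise distance matrices.
import numpy as np

def _centered_distances(v):
    d = np.abs(v[:, None] - v[None, :])       # pairwise |xi - xj|
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_correlation(x, y):
    a = _centered_distances(np.asarray(x, dtype=float))
    b = _centered_distances(np.asarray(y, dtype=float))
    dcov2 = (a * b).mean()                    # squared sample distance covariance
    return np.sqrt(dcov2 / np.sqrt((a * a).mean() * (b * b).mean()))

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
print(distance_correlation(x, x ** 2))        # nonlinear dependence: clearly > 0
print(distance_correlation(x, rng.standard_normal(200)))  # independent: near 0
```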

  6. Analysis of Neogene deformation between Beaver, Utah and Barstow, California: Suggestions for altering the extensional paradigm

    Science.gov (United States)

    Anderson, R. Ernest; Beard, Sue; Mankinen, Edward A.; Hillhouse, John W.

    2013-01-01

    For more than two decades, the paradigm of large-magnitude (~250 km), northwest-directed (~N70°W) Neogene extensional lengthening between the Colorado Plateau and Sierra Nevada at the approximate latitude of Las Vegas has remained largely unchallenged, as has the notion that the strain integrates with coeval strains in adjacent regions and with plate-boundary strain. The paradigm depends on poorly constrained interconnectedness of extreme-case lengthening estimated at scattered localities within the region. Here we evaluate the soundness of the inferred strain interconnectedness over an area reaching 600 km southwest from Beaver, Utah, to Barstow, California, and conclude that lengthening is overestimated in most areas and, even if the estimates are valid, lengthening is not interconnected in a way that allows for published versions of province-wide summations. We summarize Neogene strike slip in 13 areas distributed from central Utah to Lake Mead. In general, left-sense shear and associated structures define a broad zone of translation approximately parallel to the eastern boundary of the Basin and Range against the Colorado Plateau, a zone we refer to as the Hingeline shear zone. Areas of steep-axis rotation (ranging to 2500 km{sup 2}) record N-S shortening rather than unevenly distributed lengthening. In most cases, the rotational shortening and extension-parallel folds and thrusts are coupled to, or absorb, strike slip, thus providing valuable insight into how the discontinuous strike-slip faults are simply parts of a broad zone of continuous strain. The discontinuous nature of strike slip and the complex mixture of extensional, contractional, and steep-axis rotational structures in the Hingeline shear zone are similar to those in the Walker Lane belt in the west part of the Basin and Range, and, together, the two record southward displacement of the central and northern Basin and Range relative to the adjacent Colorado Plateau. Understanding this province

  7. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  8. Spatial big data for disaster management

    Science.gov (United States)

    Shalini, R.; Jayapratha, K.; Ayeshabanu, S.; Chemmalar Selvi, G.

    2017-11-01

    Big data is a concept of data sets so large and complex that traditional data processing application programs are inadequate to manage them. Big data is now a widely known domain in research, academia, and industry, used to store vast amounts of information in a single centralized repository. Its challenges include capture, allocation, analysis, data curation, visualization, distribution, interchange, delegation, querying, updating, and information protection. In this digital world, storing data and retrieving information are enormous tasks for large organizations, and data can be lost in distributed storage; for this reason, organizations consolidate all company-related data into one large database, known as big data. Remote sensing is the science of acquiring information to identify objects or analyze an area from a distance; with sensors, objects can be found easily, and geographic information is generated from satellite and sensor data. This paper therefore analyzes the architectures used for remote sensing in big data, how those architectures differ from each other, and how they relate to our studies. It describes how disasters occur and computes results over a data set, applying classification and clustering techniques to a seismic data set to assess earthquake disasters. The classical data mining algorithms used for classification are k-nearest neighbors, naive Bayes, and decision table, and for clustering are hierarchical, density-based, and simple k-means, using the XLMiner and WEKA tools. The paper also helps to predict over the spatial data set by applying the XLMiner and WEKA tools and
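
    As a concrete illustration of the classification-and-clustering pipeline named above, here is a hedged scikit-learn sketch; the feature matrix and labels are hypothetical stand-ins for the paper's seismic data set, and the paper itself used WEKA and XLMiner rather than Python.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import train_test_split

        # Hypothetical seismic features: [magnitude, depth_km, lat, lon]; labels: damaging quake or not.
        rng = np.random.default_rng(0)
        X = rng.random((200, 4))                 # stand-in for a real seismic data set
        y = (X[:, 0] > 0.5).astype(int)          # stand-in labels

        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # k-means grouping

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = GaussianNB().fit(X_tr, y_tr)       # naive Bayes, one of the classifiers the paper uses
        print("accuracy:", clf.score(X_te, y_te))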

  9. Big Book of Windows Hacks

    CERN Document Server

    Gralla, Preston

    2008-01-01

    Bigger, better, and broader in scope, the Big Book of Windows Hacks gives you everything you need to get the most out of your Windows Vista or XP system, including its related applications and the hardware it runs on or connects to. Whether you want to tweak Vista's Aero interface, build customized sidebar gadgets and run them from a USB key, or hack the "unhackable" screensavers, you'll find quick and ingenious ways to bend these recalcitrant operating systems to your will. The Big Book of Windows Hacks focuses on Vista, the new bad boy on Microsoft's block, with hacks and workarounds that

  10. Worse than a big rip?

    Energy Technology Data Exchange (ETDEWEB)

    Bouhmadi-Lopez, Mariam [Centro Multidisciplinar de Astrofisica, CENTRA, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais 1, 1049-001 Lisbon (Portugal); Departamento de Fisica, Universidade da Beira Interior, R. Marques d'Avila e Bolama, 6201-001 Covilha (Portugal); Institute of Cosmology and Gravitation, University of Portsmouth, Mercantile House, Hampshire Terrace, Portsmouth, PO1 2EG (United Kingdom)], E-mail: mariam.bouhmadi@fisica.ist.utl.pt; Gonzalez-Diaz, Pedro F. [Colina de los Chopos, Centro de Fisica 'Miguel A. Catalan', Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)], E-mail: p.gonzalezdiaz@imaff.cfmac.csic.es; Martin-Moruno, Prado [Colina de los Chopos, Centro de Fisica 'Miguel A. Catalan', Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)], E-mail: pra@imaff.cfmac.csic.es

    2008-01-17

    We show that a generalised phantom Chaplygin gas can present a future singularity in a finite future cosmic time. Unlike the big rip singularity, this singularity happens for a finite scale factor, but like the big rip singularity, it would also take place at a finite future cosmic time. In addition, we define a dual of the generalised phantom Chaplygin gas which satisfies the null energy condition. Then, in a Randall-Sundrum 1 brane-world scenario, we show that the same kind of singularity at a finite scale factor arises for a brane filled with a dual of the generalised phantom Chaplygin gas.

  11. [Big Data- challenges and risks].

    Science.gov (United States)

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  12. Little Science to Big Science: Big Scientists to Little Scientists?

    Science.gov (United States)

    Simonton, Dean Keith

    2010-01-01

    This article presents the author's response to Hisham B. Ghassib's essay entitled "Where Does Creativity Fit into a Productivist Industrial Model of Knowledge Production?" Professor Ghassib's (2010) essay presents a provocative portrait of how the little science of the Babylonians, Greeks, and Arabs became the Big Science of the modern industrial…

  13. Big Cities, Big Problems: Reason for the Elderly to Move?

    NARCIS (Netherlands)

    Fokkema, T.; de Jong-Gierveld, J.; Nijkamp, P.

    1996-01-01

    In many European countries, data on geographical patterns of internal elderly migration show that the elderly (55+) are more likely to leave than to move to the big cities. Besides emphasising the attractive features of the destination areas (pull factors), it is often assumed that this negative

  14. Post-fire debris-flow hazard assessment of the area burned by the 2013 Beaver Creek Fire near Hailey, central Idaho

    Science.gov (United States)

    Skinner, Kenneth D.

    2013-01-01

    A preliminary hazard assessment was developed for debris-flow hazards in the 465 square-kilometer (115,000 acres) area burned by the 2013 Beaver Creek fire near Hailey in central Idaho. The burn area covers all or part of six watersheds and selected basins draining to the Big Wood River and is at risk of substantial post-fire erosion, such as that caused by debris flows. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the Intermountain Region in Western United States were used to estimate the probability of debris-flow occurrence, potential volume of debris flows, and the combined debris-flow hazard ranking along the drainage network within the burn area and to estimate the same for analyzed drainage basins within the burn area. Input data for the empirical models included topographic parameters, soil characteristics, burn severity, and rainfall totals and intensities for a (1) 2-year-recurrence, 1-hour-duration rainfall, referred to as a 2-year storm (13 mm); (2) 10-year-recurrence, 1-hour-duration rainfall, referred to as a 10-year storm (19 mm); and (3) 25-year-recurrence, 1-hour-duration rainfall, referred to as a 25-year storm (22 mm). Estimated debris-flow probabilities for drainage basins upstream of 130 selected basin outlets ranged from less than 1 to 78 percent with the probabilities increasing with each increase in storm magnitude. Probabilities were high in three of the six watersheds. For the 25-year storm, probabilities were greater than 60 percent for 11 basin outlets and ranged from 50 to 60 percent for an additional 12 basin outlets. Probability estimates for stream segments within the drainage network can vary within a basin. For the 25-year storm, probabilities for stream segments within 33 basins were higher than the basin outlet, emphasizing the importance of evaluating the drainage network as well as basin outlets. Estimated debris-flow volumes for the three modeled storms range
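
    The USGS empirical models referenced here take a logistic form, P = e^x / (1 + e^x), where x is a linear combination of basin and rainfall predictors. The sketch below shows that general form only; the coefficient values and predictor names are placeholders, not the fitted values from this assessment.

        import math

        def debris_flow_probability(gradient, burn_frac, clay_pct, liquid_limit, rain_intensity,
                                    coef=(-0.7, 0.03, 3.0, 0.3, -0.4, 0.07)):
            # Logistic form P = e^x / (1 + e^x) used in USGS post-fire hazard models.
            # Coefficients here are illustrative placeholders, not the report's fitted values.
            b0, b1, b2, b3, b4, b5 = coef
            x = (b0 + b1 * gradient + b2 * burn_frac + b3 * clay_pct
                 + b4 * liquid_limit + b5 * rain_intensity)
            return math.exp(x) / (1.0 + math.exp(x))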

  15. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  16. Big Data: Implications for Health System Pharmacy

    OpenAIRE

    Stokes, Laura B.; Rogers, Joseph W.; Hertig, John B.; Weber, Robert J.

    2016-01-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this artic...

  17. Big Data Analytics and Its Applications

    OpenAIRE

    Memon, Mashooque Ahmed; Soomro, Safeeullah; Jumani, Awais Khan; Kartio, Muneer Ahmed

    2017-01-01

    The term, Big Data, has been authored to refer to the extensive heave of data that can't be managed by traditional data handling methods or techniques. The field of Big Data plays an indispensable role in various fields, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care stocks. Big data analytics is the method for looking at big data to reveal hidden patterns, incomprehensible relationship and other important data that can be uti...

  18. Big sagebrush seed bank densities following wildfires

    Science.gov (United States)

    Big sagebrush (Artemisia spp.) is a critical shrub for many wildlife species including sage grouse (Centrocercus urophasianus), mule deer (Odocoileus hemionus), and pygmy rabbit (Brachylagus idahoensis). Big sagebrush is killed by wildfires, and big sagebrush seed is generally short-lived and does not s...

  19. Judging Big Deals: Challenges, Outcomes, and Advice

    Science.gov (United States)

    Glasser, Sarah

    2013-01-01

    This article reports the results of an analysis of five Big Deal electronic journal packages to which Hofstra University's Axinn Library subscribes. COUNTER usage reports were used to judge the value of each Big Deal. Limitations of usage statistics are also discussed. In the end, the author concludes that four of the five Big Deals are good deals…

  20. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  1. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  2. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data

  3. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  4. A survey of big data research

    OpenAIRE

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang,Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions.

  5. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us about its consequences. This special

  6. A survey of big data research.

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions.

  7. Big data and urban governance

    NARCIS (Netherlands)

    Taylor, L.; Richter, C.; Gupta, J.; Pfeffer, K.; Verrest, H.; Ros-Tonen, M.

    2015-01-01

    This chapter examines the ways in which big data is involved in the rise of smart cities. Mobile phones, sensors and online applications produce streams of data which are used to regulate and plan the city, often in real time, but which presents challenges as to how the city’s functions are seen and

  8. The Big European Bubble Chamber

    CERN Multimedia

    1977-01-01

    The 3.70 metre Big European Bubble Chamber (BEBC), dismantled on 9 August 1984. During operation it was one of the biggest detectors in the world, producing direct visual recordings of particle tracks. 6.3 million photos of interactions were taken with the chamber in the course of its existence.

  9. Banking Wyoming big sagebrush seeds

    Science.gov (United States)

    Robert P. Karrfalt; Nancy Shaw

    2013-01-01

    Five commercially produced seed lots of Wyoming big sagebrush (Artemisia tridentata Nutt. var. wyomingensis (Beetle & Young) S.L. Welsh [Asteraceae]) were stored under various conditions for 5 y. Purity, moisture content as measured by equilibrium relative humidity, and storage temperature were all important factors to successful seed storage. Our results indicate...

  10. The Case for "Big History."

    Science.gov (United States)

    Christian, David

    1991-01-01

    Urges an approach to the teaching of history that takes the largest possible perspective, crossing time as well as space. Discusses the problems and advantages of such an approach. Describes a course on "big" history that begins with time, creation myths, and astronomy, and moves on to paleontology and evolution. (DK)

  11. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  12. True Randomness from Big Data

    Science.gov (United States)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as those in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle sources of this size, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
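
    For contrast with the paper's extractor for big sources, the classical von Neumann extractor is the simplest way to turn independent but biased bits into uniform ones. The sketch below shows that baseline technique only; it is not the authors' construction.

        def von_neumann_extract(bits):
            # Classical von Neumann extractor: maps bit pairs 01 -> 0, 10 -> 1, discards 00/11.
            # Assumes independent, identically biased input bits; not the paper's method.
            out = []
            for b1, b2 in zip(bits[::2], bits[1::2]):
                if b1 != b2:
                    out.append(b1)
            return out

        print(von_neumann_extract([1, 0, 1, 1, 0, 1, 0, 0]))  # -> [1, 0]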

  13. Big Data for personalized healthcare

    NARCIS (Netherlands)

    Siemons, Liseth; Sieverink, Floor; Vollenbroek, Wouter; van de Wijngaert, Lidwien; Braakman-Jansen, Annemarie; van Gemert-Pijnen, Lisette

    2016-01-01

    Big Data, often defined according to the 5V model (volume, velocity, variety, veracity and value), is seen as the key towards personalized healthcare. However, it also confronts us with new technological and ethical challenges that require more sophisticated data management tools and data analysis

  14. Big data en gelijke behandeling

    NARCIS (Netherlands)

    Lammerant, Hans; de Hert, Paul; Blok, P.H.; Blok, P.H.

    2017-01-01

    In this chapter we first review the main basic concepts concerning equal treatment and discrimination (section 6.2). We then look at the Dutch and European legal framework on non-discrimination (sections 6.3-6.5) and how those rules should be applied to big

  15. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    Full Text Available In recent years, dealing with large volumes of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand, and thereby to generate market advantages. Thus, companies that turn to Big Data have a competitive advantage over other firms. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects of the Big Data concept and the principles for building, organizing, and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also covers graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  16. BIG SKY CARBON SEQUESTRATION PARTNERSHIP

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Capalbo

    2004-10-31

    The Big Sky Carbon Sequestration Partnership, led by Montana State University, comprises research institutions, public entities and private sector organizations, and the Confederated Salish and Kootenai Tribes and the Nez Perce Tribe. Efforts under this Partnership fall into four areas: evaluation of sources and carbon sequestration sinks; development of a GIS-based reporting framework; designing an integrated suite of monitoring, measuring, and verification technologies; and initiating a comprehensive education and outreach program. At the first two Partnership meetings the groundwork was put in place to provide an assessment of capture and storage capabilities for CO2 utilizing the resources found in the Partnership region (both geological and terrestrial sinks) that would complement the ongoing DOE research. During the third quarter, planning efforts are underway for the next Partnership meeting, which will showcase the architecture of the GIS framework and initial results for sources and sinks, and discuss the methods and analysis underway for assessing geological and terrestrial sequestration potentials. The meeting will conclude with an ASME workshop. The region has a diverse array of geological formations that could provide storage options for carbon in one or more of its three states. Likewise, initial estimates of terrestrial sinks indicate a vast potential for increasing and maintaining soil C on forested, agricultural, and reclaimed lands. Both options include the potential for offsetting economic benefits to industry and society. Steps have been taken to assure that the GIS-based framework is consistent among types of sinks within the Big Sky Partnership area and with the efforts of other western DOE partnerships. Efforts are also being made to find funding to include Wyoming in the coverage areas for both geological and terrestrial sinks and sources. The Partnership recognizes the critical importance of measurement, monitoring, and verification

  17. Leptin plasma concentrations, leptin gene expression, and protein localization in the hypothalamic-pituitary-gonadal and hypothalamic-pituitary-adrenal axes of the European beaver (Castor fiber).

    Science.gov (United States)

    Chojnowska, Katarzyna; Czerwinska, Joanna; Kaminski, Tadeusz; Kaminska, Barbara; Kurzynska, Aleksandra; Bogacka, Iwona

    2017-01-01

    The European beaver (Castor fiber) is the largest seasonal free-living rodent in Eurasia. Since the physiology and endocrine system of this species remains unknown, the present study aimed to determine plasma leptin concentrations and the expression of the leptin gene and protein in the hypothalamic-pituitary-gonadal and hypothalamic-pituitary-adrenal (HPG and HPA) axes of beavers during breeding (April), postbreeding (July), and prebreeding (November) seasons. Leptin plasma concentrations did not change in females, whereas in males, leptin plasma concentrations were higher in July than those in April. The presence of leptin mRNA and protein was found in all examined tissues. In females, leptin mRNA expression in the hypothalamus, pituitary, ovaries, and myometrium was markedly higher in July than that in April. In males, leptin mRNA levels varied across the examined tissues of the HPG and HPA. Leptin synthesis increased in the hypothalamus during breeding and postbreeding seasons, but seasonal changes were not observed in the pituitary. In turn, testicular leptin levels were higher during breeding and prebreeding stages. Seasonal differences in the concentrations of leptin mRNA were also observed in the adrenal cortex. In males, leptin mRNA levels were higher in November than those in April or July. In females, leptin synthesis increased in the adrenal cortex during pregnancy relative to other seasons. This is the first ever study to demonstrate seasonal differences in leptin expression in beaver tissues, and our results could suggest that leptin is involved in the regulation of the HPG and HPA axes during various stages of the reproductive cycle in beavers. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Proximate weather patterns and spring green-up phenology effect Eurasian beaver (Castor fiber) body mass and reproductive success: the implications of climate change and topography.

    Science.gov (United States)

    Campbell, Ruairidh D; Newman, Chris; Macdonald, David W; Rosell, Frank

    2013-04-01

    Low spring temperatures have been found to benefit mobile herbivores by reducing the rate of spring flush, whereas high rainfall increases forage availability. Cold winters prove detrimental by increasing herbivore thermoregulatory burdens. Here we examine the effects of temperature and rainfall variability on a temperate sedentary herbivore, the Eurasian beaver, Castor fiber, in terms of inter-annual variation in mean body weight and per-territory offspring production. Data pertain to 198 individuals, over 11 years, using capture-mark-recapture. We use plant growth (tree cores) and fAPAR (a satellite-derived plant productivity index) to examine potential mechanisms through which weather conditions affect the availability and the seasonal phenology of beaver forage. Juvenile body weights were lighter after colder winters, whereas warmer spring temperatures were associated with lighter adult body weights, mediated by enhanced green-up phenology rates. Counter-intuitively, we observed a negative association between rainfall and body weight in juveniles and adults, and also with reproductive success. Alder (Alnus incana; n = 68) growth rings (the principal beaver food in the study area) exhibited a positive relationship with rainfall for trees growing at elevations >2 m above water level, but a negative relationship for trees growing closer to the water. Under climate change, the interactions between weather variables, plant phenology, and topography that shape forage growth are instructive, and consequently warrant examination when developing conservation management strategies for populations of medium to large herbivores. © 2012 Blackwell Publishing Ltd.

  19. Suspended-sediment and turbidity responses to sediment and turbidity reduction projects in the Beaver Kill, Stony Clove Creek, and Warner Creek watersheds, New York, 2010–14

    Science.gov (United States)

    Siemion, Jason; McHale, Michael R.; Davis, Wae Danyelle

    2016-12-05

    Suspended-sediment concentrations (SSCs) and turbidity were monitored within the Beaver Kill, Stony Clove Creek, and Warner Creek tributaries to the upper Esopus Creek in New York, the main source of water to the Ashokan Reservoir, from October 1, 2010, through September 30, 2014. The purpose of the monitoring was to determine the effects of suspended-sediment and turbidity reduction projects (STRPs) on SSC and turbidity in two of the three streams; no STRPs were constructed in the Beaver Kill watershed. During the study period, four STRPs were completed in the Stony Clove Creek and Warner Creek watersheds. Daily mean SSCs decreased significantly for a given streamflow after the STRPs were completed. The most substantial decreases in daily mean SSCs were measured at the highest streamflows. Background SSCs, as measured in water samples collected in upstream reference stream reaches, in all three streams in this study were less than 5 milligrams per liter during low and high streamflows. Longitudinal stream sampling identified stream reaches with failing hillslopes in contact with the stream channel as the primary sediment sources in the Beaver Kill and Stony Clove Creek watersheds.
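
    A standard way to compare suspended-sediment concentration at a given streamflow before and after mitigation work is a log-log sediment rating curve, log10(SSC) = a + b·log10(Q). The sketch below uses that generic technique with made-up numbers; it is not the study's exact analysis.

        import numpy as np

        def fit_rating_curve(discharge, ssc):
            # Fit log10(SSC) = a + b * log10(Q); returns (a, b).
            # A generic sediment rating curve, not the USGS study's fitted model.
            b, a = np.polyfit(np.log10(discharge), np.log10(ssc), 1)
            return a, b

        # Hypothetical pre- and post-project samples (Q in m^3/s, SSC in mg/L).
        q_pre,  ssc_pre  = [1.2, 3.4, 8.0, 15.0], [12, 60, 240, 800]
        q_post, ssc_post = [1.1, 3.6, 7.5, 14.0], [10, 35, 110, 300]
        a0, b0 = fit_rating_curve(q_pre, ssc_pre)
        a1, b1 = fit_rating_curve(q_post, ssc_post)
        print("pre:", a0, b0, "post:", a1, b1)  # a lower intercept suggests less SSC per unit flow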

  20. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species ofWild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  1. Modelos de Beaver, Ohlson y Altman: ¿Son realmente capaces de predecir la bancarrota en el sector empresarial costarricense? (Models of Beaver, Ohlson and Altman: are really able to predict the bankruptcy in the Costa Rican business sector?

    Directory of Open Access Journals (Sweden)

    José Alonso Vargas Charpentier

    2014-12-01

    Full Text Available This article analyzes the application of models for predicting corporate bankruptcy in the Costa Rican business sector. The models were applied to a group of companies that entered financial intervention or bankruptcy proceedings before the Bankruptcy Court of the Courts of Justice of San José, in order to determine whether the models were able to predict the bankruptcy. Among the main findings: the Altman model rated four of the five companies analyzed as being in the red zone in the year they declared bankruptcy; the Ohlson model, using its O1 or O3 equation, classified all five companies as bankrupt in the year the bankruptcy occurred; and the Beaver model rated the final year as the one with the worst indicators in three cases, unlike the other models, which did not indicate that the bankruptcy year had the worst indicators.
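
    For reference, the textbook 1968 Altman Z-score for publicly traded manufacturers, with its conventional distress ("red zone"), grey, and safe cutoffs, can be sketched as follows. The paper applies variants such as the EM Score, so treat this as the classic form rather than the authors' exact specification.

        def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
            # Classic 1968 Altman Z-score: inputs are working capital, retained earnings,
            # and EBIT over total assets; market value of equity over total liabilities;
            # and sales over total assets.
            z = 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta
            if z < 1.81:
                zone = "distress (red zone)"
            elif z < 2.99:
                zone = "grey"
            else:
                zone = "safe"
            return z, zone

        print(altman_z(0.1, 0.05, 0.03, 0.4, 0.9))  # hypothetical ratios for one firm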

  2. Big advance towards the LHC upgrade

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    The LHC is currently the world’s most powerful accelerator. With its technical achievements it has already set world records. However, big science looks very far ahead in time and is already preparing for the LHC’s magnet upgrade, which should involve a 10-fold increase in collision rates toward the end of the next decade. The new magnet technology involves the use of an advanced superconducting material that has just started to show its potential.   The first Long Quadrupole Shell (LQS01) model during assembly at Fermilab. The first important step in the qualification of the new technology for use in the LHC was achieved at the beginning of December when the US LHC Accelerator Research Program (LARP) – a consortium of Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and the SLAC National Accelerator Laboratory funded by the US Department of Energy (DOE) in 2003 – successfully tested the first long focussing magnet th

  3. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  4. Perspectives on Big Data and Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Elena Geanina ULARU

    2012-12-01

    Full Text Available Nowadays companies are starting to realize the importance of using more data in order to support decisions about their strategies. It has been said, and proved through case studies, that "more data usually beats better algorithms". With this in mind, companies have started to realize that they can choose to invest in processing larger sets of data rather than in expensive algorithms. A large quantity of data is better used as a whole because of the correlations that emerge at scale, correlations that can never be found if the data are analyzed in separate or smaller sets. A larger amount of data gives better output, but working with it can become a challenge due to processing limitations. This article intends to define the concept of Big Data and stress the importance of Big Data Analytics.

  5. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  6. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  7. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  8. BigSUR

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  9. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  10. Big data and ophthalmic research.

    Science.gov (United States)

    Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B

    2016-01-01

    Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research-including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. George and the big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2012-01-01

    George has problems. He has twin baby sisters at home who demand his parents’ attention. His beloved pig Freddy has been exiled to a farm, where he’s miserable. And worst of all, his best friend, Annie, has made a new friend whom she seems to like more than George. So George jumps at the chance to help Eric with his plans to run a big experiment in Switzerland that seeks to explore the earliest moment of the universe. But there is a conspiracy afoot, and a group of evildoers is planning to sabotage the experiment. Can George repair his friendship with Annie and piece together the clues before Eric’s experiment is destroyed forever? This engaging adventure features essays by Professor Stephen Hawking and other eminent physicists about the origins of the universe and ends with a twenty-page graphic novel that explains how the Big Bang happened—in reverse!

  12. Big Data for Precision Medicine

    Directory of Open Access Journals (Sweden)

    Daniel Richard Leff

    2015-09-01

    Full Text Available This article focuses on the potential impact of big data analysis to improve health, prevent and detect disease at an earlier stage, and personalize interventions. The role that big data analytics may have in interrogating the patient electronic health record toward improved clinical decision support is discussed. We examine developments in pharmacogenetics that have increased our appreciation of the reasons why patients respond differently to chemotherapy. We also assess the expansion of online health communications and the way in which this data may be capitalized on in order to detect public health threats and control or contain epidemics. Finally, we describe how a new generation of wearable and implantable body sensors may improve wellbeing, streamline management of chronic diseases, and improve the quality of surgical implants.

  13. Big Data hvor N=1

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind

    2017-01-01

    Research on the use of 'big data' in health has only just begun, and may in time become a great help in organizing a more personal and holistic healthcare effort for patients with multiple chronic conditions. Personal health technology, briefly presented in this chapter, holds great potential for carrying out 'big data' analyses for the individual person, that is, where N=1. There are major technological challenges in building the technologies and methods to collect and handle personal data that can be shared across settings in a standardized, responsible, robust, secure and non

  14. Big Data in Transport Geography

    DEFF Research Database (Denmark)

    Reinau, Kristian Hegner; Agerholm, Niels; Lahrmann, Harry Spaabæk

    The emergence of new tracking technologies and Big Data has caused a transformation of the transport geography field in recent years. One new data type, which is starting to play a significant role in public transport, is smart card data. Despite the growing focus on smart card data, there is a need for studies that explicitly compare the quality of this new type of data to traditional data sources. With the current focus on Big Data in the transport field, public transport planners are increasingly looking towards smart card data to analyze and optimize flows of passengers. However, in many cases not all public transport passengers in a city, region, or country with a smart card system actually use the system, and in such cases it is important to know what biases smart card data has in relation to giving a complete view of passenger flows. This paper therefore analyses the quality and biases

  15. The big wheels of ATLAS

    CERN Multimedia

    2006-01-01

    The ATLAS cavern is filling up at an impressive rate. The installation of the first of the big wheels of the muon spectrometer, a thin gap chamber (TGC) wheel, was completed in September. The muon spectrometer will include four big moving wheels at each end, each measuring 25 metres in diameter. Of the eight wheels in total, six will be composed of thin gap chambers for the muon trigger system and the other two will consist of monitored drift tubes (MDTs) to measure the position of the muons (see Bulletin No. 13/2006). The installation of the 688 muon chambers in the barrel is progressing well, with three-quarters of them already installed between the coils of the toroid magnet.

  16. Relationship Between Big Five Personality Traits, Emotional Intelligence and Self-esteem Among College Students

    OpenAIRE

    Fauzia Nazir, Anam Azam, Muhammad Rafiq, Sobia Nazir, Sophia Nazir, Shazia Tasleem

    2015-01-01

    The current research study was on the "Relationship between Big Five Personality Traits, Emotional Intelligence and Self-esteem among College Students". This work is based on a cross-sectional survey research design. A convenience sample of 170 female students in the third and fourth years of the degree program at Government College Kotla Arab Ali Khan, Gujrat, Pakistan, was used. The study variables were measured using the Big Five Inventory Scale by Goldberg (1993), the Emotional Intell

  17. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    OpenAIRE

    Cheung, Mike W.-L.; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computati...

  18. Big Data in Cancer Genomics

    OpenAIRE

    Chin, Suet Feung; Maia, AT; Jacinta-Fernandes, A; Sammut, Stephen John

    2017-01-01

    Advances in genomic technologies in the last decade have revolutionised the field of medicine, especially in cancer, by producing a large amount of genetic information, often referred to as Big Data. The identification of genetic predisposition changes, prognostic signatures, and cancer driver genes, which when mutated can act as genetic biomarkers for both targeted treatments and disease monitoring, has greatly advanced our understanding of cancer. However, there are still many challenges, s...

  19. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  20. Big data: the management revolution.

    Science.gov (United States)

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  1. Invisible Roles of Doctoral Program Specialists

    Science.gov (United States)

    Bachman, Eva Burns; Grady, Marilyn L.

    2016-01-01

    The purpose of this study was to investigate the roles of doctoral program specialists in Big Ten universities. Face-to-face interviews with 20 doctoral program specialists employed in institutions in the Big Ten were conducted. Participants were asked to describe their roles within their work place. The doctoral program specialists reported their…

  2. Repair, Evaluation, Maintenance, and Rehabilitation Research Program. High-Resolution Seismic Reflection Investigations at Beaver Dam, Arkansas

    Science.gov (United States)

    1989-07-01

    Commander and Director of WES during the preparation of this report was LTC Jack R. Stephens, EN. Technical Director was Dr. Robert W. Whalin.

  3. Microbial community diversity patterns are related to physical and chemical differences among temperate lakes near Beaver Island, MI.

    Science.gov (United States)

    Hengy, Miranda H; Horton, Dean J; Uzarski, Donald G; Learman, Deric R

    2017-01-01

    Lakes are dynamic and complex ecosystems that can be influenced by physical, chemical, and biological processes. Additionally, individual lakes are often chemically and physically distinct, even within the same geographic region. Here we show that differences in physicochemical conditions among freshwater lakes located on (and around) the same island, as well as within the water column of each lake, are significantly related to aquatic microbial community diversity. Water samples were collected over time from the surface and bottom-water within four freshwater lakes located around Beaver Island, MI within the Laurentian Great Lakes region. Three of the sampled lakes experienced seasonal lake mixing events, impacting O2, pH, temperature, or a combination of the three. Microbial community alpha and beta diversity were assessed and individual microbial taxa were identified via high-throughput sequencing of the 16S rRNA gene. Results demonstrated that physical and chemical variability (temperature, dissolved oxygen, and pH) was significantly related to divergence in the beta diversity of surface and bottom-water microbial communities. Despite its correlation to microbial community structure in unconstrained analyses, constrained analyses demonstrated that dissolved organic carbon (DOC) concentration was not strongly related to microbial community structure among or within lakes. Additionally, several taxa were correlated (either positively or negatively) with environmental variables, which could be related to aerobic and anaerobic metabolisms. This study highlights the measurable relationships between environmental conditions and microbial communities within freshwater temperate lakes around the same island.
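
    Alpha and beta diversity of the kind reported here are typically computed from taxon count tables. A minimal sketch follows, using the Shannon index for alpha diversity and Bray-Curtis dissimilarity for beta diversity, with hypothetical OTU counts; the study's own 16S rRNA pipeline is not specified here.

        import numpy as np
        from scipy.spatial.distance import braycurtis

        def shannon_alpha(counts):
            # Shannon alpha diversity H' = -sum(p_i * ln p_i) from taxon counts.
            p = np.asarray(counts, dtype=float)
            p = p[p > 0] / p.sum()
            return -(p * np.log(p)).sum()

        # Hypothetical OTU count vectors for surface vs. bottom water of one lake.
        surface = [120, 30, 5, 0, 45]
        bottom = [10, 80, 60, 25, 0]
        print("alpha (surface):", shannon_alpha(surface))
        print("beta (Bray-Curtis):", braycurtis(surface, bottom))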

  4. The international management of big scientific research programs. The example of particle physics; La gestion internationale des grands programmes de recherche scientifique l'exemple de la physique des particules

    Energy Technology Data Exchange (ETDEWEB)

    Feltesse, J. [CEA Saclay, Dept. d'Astrophysique, de Physique des Particules, de Physique Nucleaire et de l'Instrumentation Associee, 91 - Gif-sur-Yvette (France); Comite de Directives Scientifique du CERN (France)

    2004-07-01

    High energy physics is a basic research domain with a well-established European and international cooperation. Cooperation can be of different types depending on the size of the facilities involved (accelerators), on their financing, and on the type of experiments that use these facilities. CERN, the European center for nuclear research, created in October 1954, is the best example of such cooperation. This article first examines the legal and scientific structure of CERN and the mode of organization of big experiments. It then presents the role of international committees in establishing a common scientific policy in Europe and in the rest of the world. Finally, the possible future evolution of CERN towards a worldwide project is evoked. (J.S.)

  5. Final Scientific/Technical Report to the U.S. Department of Energy on NOVA's Einstein's Big Idea (Project title: E=mc², A Two-Hour Television Program on NOVA)

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Susanne

    2007-05-07

    A woman in the early 1700s who became one of Europe’s leading interpreters of mathematics and a poor bookbinder who became one of the giants of nineteenth-century science are just two of the pioneers whose stories NOVA explored in Einstein’s Big Idea. This two-hour documentary premiered on PBS in October 2005 and is based on the best-selling book by David Bodanis, E=mc²: A Biography of the World’s Most Famous Equation. The film and book chronicle the scientific challenges and discoveries leading up to Einstein’s startling conclusion that mass and energy are one, related by the formula E = mc².

  6. The Design of Intelligent Repair Welding Mechanism and Relative Control System of Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available Effective repair of worn big gears has a large influence on ensuring production safety and enhancing economic benefits. A kind of intelligent repair welding method was put forward, aimed mainly at the constraints of high production cost, long production cycle, and high-intensity manual repair welding work associated with big gears. A big gear repair welding mechanism was designed in this paper. The working principle and part selection of the big gear repair welding mechanism are introduced. The three-dimensional model of the mechanism was constructed with the Pro/E three-dimensional design software. Three-dimensional motions are realized by motors driving ball screws. Because the gear flank follows an involute, the complicated curved motion across the gear surface can be decomposed into oriented linear motions; in this way, repair welding of the worn gear area can be realized. In the design of the big gear repair welding control system, Siemens S7-200 series hardware was chosen, with Siemens STEP7 programming software as the system design tool. The entire repair welding process was simulated experimentally. The design provides a practical and feasible method for the intelligent repair welding of big worn gears.
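
    As a worked illustration of the involute property the paper relies on, the sketch below samples points along the involute of a base circle so that a motion controller could approximate the flank with short linear moves. This is a geometric illustration only; the radius, step count, and function names are assumptions, not values from the paper.

```python
import numpy as np

def involute_points(base_radius, max_roll_angle=1.0, n_points=50):
    """Sample the involute of a circle: x = r(cos t + t sin t), y = r(sin t - t cos t).
    A curved tool path along a gear flank can be discretized into linear moves
    between consecutive sampled points."""
    t = np.linspace(0.0, max_roll_angle, n_points)  # roll angle in radians
    x = base_radius * (np.cos(t) + t * np.sin(t))
    y = base_radius * (np.sin(t) - t * np.cos(t))
    return np.column_stack([x, y])

path = involute_points(base_radius=120.0)   # hypothetical 120 mm base circle
segments = np.diff(path, axis=0)            # linear moves for the ball screws
print(len(segments), "segments, longest:", np.linalg.norm(segments, axis=1).max())
```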

  7. EAARL-B Topography-Big Thicket National Preserve: Big Sandy Creek Corridor Unit, Texas, 2014

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A first-surface topography Digital Elevation Model (DEM) mosaic for the Big Sandy Creek Corridor Unit of Big Thicket National Preserve in Texas was produced from...

  8. EAARL-B Topography-Big Thicket National Preserve: Big Sandy Creek Unit, Texas, 2014

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A bare-earth topography digital elevation model (DEM) mosaic for the Big Sandy Creek Unit of Big Thicket National Preserve in Texas, was produced from remotely...

  9. EAARL-B Topography-Big Thicket National Preserve: Big Sandy Creek Unit, Texas, 2014

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A first-surface topography digital elevation model (DEM) mosaic for the Big Sandy Creek Unit of Big Thicket National Preserve in Texas, was produced from remotely...

  10. EAARL-B Topography-Big Thicket National Preserve: Big Sandy Creek Corridor Unit, Texas, 2014

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A bare-earth topography Digital Elevation Model (DEM) mosaic for the Big Sandy Creek Corridor Unit of Big Thicket National Preserve in Texas was produced from...

  11. Medical big data: promise and challenges

    OpenAIRE

    Choong Ho Lee; Hyung-Jin Yoon

    2017-01-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct fr...

  12. Multi-Objective Big Bang–Big Crunch Optimization Algorithm For Recursive Digital Filter Design

    OpenAIRE

    Ms. Rashmi Singh; Dr. H. K. Verma

    2012-01-01

    The paper presents the design of a recursive second-order Butterworth low-pass digital filter which optimizes both the magnitude and group delay simultaneously under the Multi-Objective Big Bang-Big Crunch Optimization algorithm. The multi-objective problem of magnitude and group delay is solved using the Multi-Objective BB-BC Optimization algorithm, which operates on a complex, continuous search space and optimizes by statistically determining the abilities of the Big Bang phase and the Big Crunch phase. Her...
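
    For readers unfamiliar with the heuristic, a minimal single-objective sketch of the Big Bang-Big Crunch loop follows; the paper itself uses a multi-objective variant over magnitude and group delay, and all names and parameter values here are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def big_bang_big_crunch(fitness, dim, n_candidates=50, n_iters=100,
                        lower=-1.0, upper=1.0, seed=0):
    """Minimal single-objective Big Bang-Big Crunch minimizer (illustrative)."""
    rng = np.random.default_rng(seed)
    center = rng.uniform(lower, upper, dim)          # initial center of mass
    for it in range(1, n_iters + 1):
        # Big Bang phase: scatter candidates around the center; the spread
        # shrinks as iterations progress, narrowing the search.
        spread = (upper - lower) / it
        pop = np.clip(center + spread * rng.standard_normal((n_candidates, dim)),
                      lower, upper)
        # Big Crunch phase: contract to a fitness-weighted center of mass.
        f = np.array([fitness(x) for x in pop])
        w = 1.0 / (f - f.min() + 1e-12)              # lower cost -> larger weight
        center = (w[:, None] * pop).sum(axis=0) / w.sum()
    return center

best = big_bang_big_crunch(lambda x: float(np.sum(x ** 2)), dim=4)
print(best)   # approaches the origin for the sphere test function
```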

  13. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data by introducing some novel optimization algorithms and codes capable of working in the big data setting, as well as introducing some applications in big data optimization for interested academics and practitioners, to the benefit of society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyze large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  14. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
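
    Propensity score analysis, mentioned above, can be illustrated with a minimal matching sketch; the synthetic data, function names, and the 1-nearest-neighbour design below are assumptions for illustration, not the methods of any study cited here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def ps_matched_effect(X, treated, outcome):
    """Estimate a treatment effect by 1-nearest-neighbour propensity-score matching."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t = np.flatnonzero(treated == 1)
    c = np.flatnonzero(treated == 0)
    # Match each treated unit to the control with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t].reshape(-1, 1))
    return float(np.mean(outcome[t] - outcome[c[idx.ravel()]]))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
treated = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # confounded assignment
outcome = 2.0 * treated + X[:, 0] + rng.normal(size=500)    # true effect = 2
print(ps_matched_effect(X, treated, outcome))               # roughly 2
```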

  15. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    Gadepally, Vijay; Kepner, Jeremy; Reuther, Albert. MIT Lincoln Laboratory, Lexington, MA, USA. Distribution A: Public Release, July 27, 2015. Figure 1.2: A standard big data pipeline consists of five steps to go from raw...

  16. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over big data with R packages. It would be helpful if readers have basic knowledge of R.

  17. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  18. Organizational Design Challenges Resulting From Big Data

    Directory of Open Access Journals (Sweden)

    Jay R. Galbraith

    2014-04-01

    Full Text Available Business firms and other types of organizations are feverishly exploring ways of taking advantage of the big data phenomenon. This article discusses firms that are at the leading edge of developing a big data analytics capability. Firms that are currently enjoying the most success in this area are able to use big data not only to improve their existing businesses but to create new businesses as well. Putting a strategic emphasis on big data requires adding an analytics capability to the existing organization. This transformation process results in power shifting to analytics experts and in decisions being made in real time.

  19. The Big Bang Theory on TV: How to make a Big Bang with the Public

    Science.gov (United States)

    Prady, Bill

    2011-04-01

    Is it possible for a television sitcom to accurately portray scientists? Probably not, but with some effort it can accurately portray science. Since its debut in 2007, The Big Bang Theory on CBS has striven to include accurate and current references to physics, astrophysics and other disciplines. This attention to detail (which means that Big Bang is the first television comedy to employ a physicist as a consultant) is an obsession of its co-creator and executive producer, Bill Prady. Prady, whose twenty-six-year career in television has taken him from Jim Henson's Muppets to this current project, began his working life as a computer programmer. His frustration with how inaccurately science and technology are generally depicted in film and television led him to ask if it was possible to be both correct and funny. Using clips from the show as examples, we will engage in a discussion of the depiction of science on this program and in popular entertainment in general.

  20. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  1. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  2. Big Data and surveillance, part 1: Definitions and discussions concerning Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Prompted by a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions, and the various questions that relate to big data. In this first part I will try to set out the main points of Big Data theory and

  3. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  4. Big Deployables in Small Satellites

    OpenAIRE

    Davis, Bruce; Francis, William; Goff, Jonathan; Cross, Michael; Copel, Daniel

    2014-01-01

    The concept of utilizing small satellites to perform big mission objectives has grown from a distant idea to a demonstrated reality. One of the challenges in using small-satellite platforms for high-value missions is the packaging of long and large surface-area devices such as antennae, solar arrays and sensor positioning booms. One possible enabling technology is the slit-tube, or a deployable “tape-measure” boom which can be flattened and rolled into a coil achieving a high volumetric packa...

  5. Fitting ERGMs on big networks.

    Science.gov (United States)

    An, Weihua

    2016-09-01

    The exponential random graph model (ERGM) has become a valuable tool for modeling social networks. In particular, ERGM provides great flexibility to account for both covariate effects on tie formation and endogenous network formation processes. However, there are both conceptual and computational issues in fitting ERGMs on big networks. This paper describes a framework and a series of methods (based on existing algorithms) to address these issues. It also outlines the advantages and disadvantages of the methods and the conditions to which they are most applicable. Selected methods are illustrated through examples.

  6. Big Data and Intelligence: Applications, Human Capital, and Education

    Directory of Open Access Journals (Sweden)

    Michael Landon-Murray

    2016-06-01

    Full Text Available The potential for big data to contribute to the US intelligence mission goes beyond bulk collection, social media and counterterrorism. Applications will speak to a range of issues of major concern to intelligence agencies, from military operations to climate change to cyber security. There are challenges too: procurement lags, data stovepiping, separating signal from noise, sources and methods, a range of normative issues, and, central to managing these challenges, human capital. These potential applications and challenges are discussed and a closer look at what data scientists do in the Intelligence Community (IC) is offered. Effectively filling the ranks of the IC’s data science workforce will depend on the provision of well-trained data scientists from the higher education system. Program offerings at America’s top fifty universities will thus be surveyed (just a few years ago there were reportedly no degrees in data science). One Master’s program that has melded data science with intelligence is examined, as well as a university big data research center focused on security and intelligence. This discussion goes a long way to clarify the prospective uses of data science in intelligence while probing perhaps the key challenge to optimal application of big data in the IC.

  7. Challenges and potential solutions for big data implementations in developing countries.

    Science.gov (United States)

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. The objective of this review is to describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar using the following keywords: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program, including the exponential growth of data, special infrastructure needs, the need for a trained workforce, the need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  8. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Full Text Available Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
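
    The paper demonstrates its procedures in R; purely as an illustration of the split/analyze/meta-analyze idea, the sketch below splits a large dataset, estimates a regression slope per split, and pools the estimates with fixed-effect (inverse-variance) weighting. All names and the synthetic data are assumptions, not the authors' code.

```python
import numpy as np

def split_analyze_meta(x, y, n_splits=10):
    """Split the data, estimate an OLS slope per split, then pool the split
    estimates with a fixed-effect (inverse-variance) meta-analysis."""
    estimates, variances = [], []
    for xs, ys in zip(np.array_split(x, n_splits), np.array_split(y, n_splits)):
        n = len(xs)
        sxx = np.var(xs, ddof=1) * (n - 1)                # sum of squares of x
        b = np.cov(xs, ys, ddof=1)[0, 1] * (n - 1) / sxx  # OLS slope
        a = ys.mean() - b * xs.mean()
        resid = ys - (a + b * xs)
        estimates.append(b)
        variances.append(np.var(resid, ddof=2) / sxx)     # Var(b) = sigma^2 / Sxx
    w = 1.0 / np.asarray(variances)
    pooled = float(np.sum(w * np.asarray(estimates)) / w.sum())
    return pooled, float(np.sqrt(1.0 / w.sum()))          # pooled slope and its SE

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)
print(split_analyze_meta(x, y))   # pooled slope close to 0.5
```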

  9. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    Science.gov (United States)

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.

  10. Big Challenges and Big Opportunities: The Power of "Big Ideas" to Change Curriculum and the Culture of Teacher Planning

    Science.gov (United States)

    Hurst, Chris

    2014-01-01

    Mathematical knowledge of pre-service teachers is currently "under the microscope" and the subject of research. This paper proposes a different approach to teacher content knowledge based on the "big ideas" of mathematics and the connections that exist within and between them. It is suggested that these "big ideas"…

  11. Optimizing Hadoop Performance for Big Data Analytics in Smart Grid

    Directory of Open Access Journals (Sweden)

    Mukhtaj Khan

    2017-01-01

    Full Text Available The rapid deployment of Phasor Measurement Units (PMUs) in power systems globally is leading to Big Data challenges. New high performance computing techniques are now required to process an ever-increasing volume of data from PMUs. To that end, the Hadoop framework, an open source implementation of the MapReduce computing model, is gaining momentum for Big Data analytics in smart grid applications. However, Hadoop has over 190 configuration parameters, which can have a significant impact on the performance of the Hadoop framework. This paper presents an Enhanced Parallel Detrended Fluctuation Analysis (EPDFA) algorithm for scalable analytics on massive volumes of PMU data. The novel EPDFA algorithm builds on an enhanced Hadoop platform whose configuration parameters are optimized by Gene Expression Programming. Experimental results show that the EPDFA is 29 times faster than the sequential DFA in processing PMU data and 1.87 times faster than a parallel DFA that utilizes the default Hadoop configuration settings.
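
    The sequential detrended fluctuation analysis (DFA) that EPDFA parallelizes can be sketched in a few lines; the implementation below is a generic textbook version under assumed window sizes, not the paper's Hadoop code.

```python
import numpy as np

def dfa_alpha(signal, window_sizes):
    """Sequential detrended fluctuation analysis: returns the scaling exponent
    alpha from a log-log fit of the fluctuation function F(n) against n."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segs = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        coeffs = np.polyfit(t, segs.T, 1)           # one linear fit per window
        trends = np.outer(coeffs[0], t) + coeffs[1][:, None]
        fluctuations.append(np.sqrt(np.mean((segs - trends) ** 2)))
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return float(alpha)

rng = np.random.default_rng(2)
print(dfa_alpha(rng.normal(size=10_000), [16, 32, 64, 128, 256]))  # ~0.5 for white noise
```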

  12. Potential role of beavers (Castor fiber in contamination of water in the Masurian Lake District (north-eastern Poland with protozoan parasites Cryptosporidium spp. and Giardia duodenalis

    Directory of Open Access Journals (Sweden)

    Sroka Jacek

    2015-06-01

    Full Text Available The purpose of this study was to assess the possible influence of beavers on the contamination of lake water with zoonotic parasites Giardia duodenalis and Cryptosporidium spp., with respect to the risk to human health. A total of 79 water samples were taken around the habitats of beavers from 14 localities situated in the recreational Masurian Lake District (north-eastern Poland). Water was sampled in the spring and autumn seasons, at different distances from beavers’ lodges (0-2, 10, 30, and 50 m). The samples were examined for the presence of (oo)cysts of zoonotic protozoa Giardia duodenalis and Cryptosporidium spp. by direct fluorescence assay (DFA) and by nested and real time PCR. By DFA, the presence of Giardia cysts was found in 36 samples (45.6%) and the presence of Cryptosporidium oocysts in 26 samples (32.9%). Numbers of Giardia cysts, Cryptosporidium oocysts, and summarised (oo)cysts of both parasites showed a significant variation depending on locality. The numbers of Giardia cysts significantly decreased with the distance from beavers’ lodges while the numbers of Cryptosporidium oocysts did not show such dependence. The amount of Giardia cysts in samples collected in spring was approximately 3 times higher than in autumn. Conversely, a larger number of Cryptosporidium oocysts were detected in samples collected in autumn than in spring. By PCR, Giardia DNA was found in 38 samples (48.1%) whereas DNA of Cryptosporidium was found in only 7 samples (8.9%). Eleven Giardia isolates were subjected to phylogenetic analysis by restriction fragment length polymorphism PCR or sequencing, which evidenced their belonging to zoonotic assemblages: A (3 isolates) and B (8 isolates). In conclusion, water in the vicinity of beavers’ lodges in the tested region was markedly contaminated with (oo)cysts of Giardia duodenalis and Cryptosporidium spp., which confirms the potential role of beavers as a reservoir of these parasites and indicates a need for

  13. Baryon symmetric big bang cosmology

    Science.gov (United States)

    Stecker, F. W.

    1978-01-01

    Both the quantum theory and Einstein's theory of special relativity lead to the supposition that matter and antimatter were produced in equal quantities during the big bang. It is noted that local matter/antimatter asymmetries may be reconciled with universal symmetry by assuming (1) a slight imbalance of matter over antimatter in the early universe, annihilation, and a subsequent remainder of matter; (2) localized regions of excess for one or the other type of matter as an initial condition; and (3) an extremely dense, high temperature state with zero net baryon number; i.e., matter/antimatter symmetry. Attention is given to the third assumption, which is the simplest and the most in keeping with current knowledge of the cosmos, especially as pertains to the universality of the 3 K background radiation. Mechanisms of galaxy formation are discussed, whereby matter and antimatter might have collided and annihilated each other, or have coexisted (and continue to coexist) at vast distances. It is pointed out that baryon symmetric big bang cosmology could probably be proved if an antinucleus could be detected in cosmic radiation.

  14. Georges et le big bang

    CERN Document Server

    Hawking, Lucy; Parsons, Gary

    2011-01-01

    Georges and Annie, his best friend, are about to witness one of the most important scientific experiments of all time: exploring the first instants of the Universe, the Big Bang! Thanks to Cosmos, their super-computer, and to the Large Hadron Collider built by Eric, Annie's father, they will at last be able to answer the essential question: why do we exist? But Georges and Annie discover that a diabolical plot is being hatched. Worse, all of scientific research is in peril! Swept into incredible adventures, Georges will travel to the far reaches of the galaxy to save his friends... A gripping plunge into the heart of the Big Bang, featuring the very latest theories of Stephen Hawking and today's greatest scientists.

  15. Big data in oncologic imaging.

    Science.gov (United States)

    Regge, Daniele; Mazzetti, Simone; Giannini, Valentina; Bracco, Christian; Stasi, Michele

    2017-06-01

    Cancer is a complex disease and unfortunately understanding how the components of the cancer system work does not help understand the behavior of the system as a whole. In the words of the Greek philosopher Aristotle, "the whole is greater than the sum of parts." To date, thanks to improved information technology infrastructures, it is possible to store data from each single cancer patient, including clinical data, medical images, laboratory tests, and pathological and genomic information. Indeed, medical archive storage constitutes approximately one-third of total global storage demand, and a large part of the data are in the form of medical images. The opportunity now is to draw insight on the whole to the benefit of each individual patient. In the oncologic patient, big data analysis is at its beginning, but several useful applications can be envisaged, including the development of imaging biomarkers to predict disease outcome, assessing the risk of X-ray dose exposure or of renal damage following the administration of contrast agents, and tracking and optimizing patient workflow. The aim of this review is to present current evidence of how big data derived from medical images may impact on the diagnostic pathway of the oncologic patient.

  16. The BigBOSS spectrograph

    Science.gov (United States)

    Jelinsky, Patrick; Bebek, Chris; Besuner, Robert; Carton, Pierre-Henri; Edelstein, Jerry; Lampton, Michael; Levi, Michael E.; Poppett, Claire; Prieto, Eric; Schlegel, David; Sholl, Michael

    2012-09-01

    BigBOSS is a proposed ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a 14,000 square degree galaxy and quasi-stellar object redshift survey. It consists of a 5,000-fiber-positioner focal plane feeding the spectrographs. The optical fibers are separated into ten 500-fiber slit heads at the entrance of ten identical spectrographs in a thermally insulated room. Each of the ten spectrographs has a spectral resolution (λ/Δλ) between 1500 and 4000 over a wavelength range from 360 - 980 nm. Each spectrograph uses two dichroic beam splitters to separate the spectrograph into three arms. It uses volume phase holographic (VPH) gratings for high efficiency and compactness. Each arm uses a 4096x4096 15 μm pixel charge coupled device (CCD) for the detector. We describe the requirements and current design of the BigBOSS spectrograph. Design trades (e.g. refractive versus reflective) and manufacturability are also discussed.

  17. Long-term rise of the Water Table in the Northeast US: Climate Variability, Land-Use Change, or Angry Beavers?

    Science.gov (United States)

    Boutt, D. F.

    2011-12-01

    The scientific evidence that humans are directly influencing the Earth's natural climate is increasingly compelling. Numerous studies suggest that climate change will lead to changes in the seasonality of surface water availability, thereby increasing the need for groundwater development to offset those shortages. Research suggests that the Northeast region of the U.S. is experiencing significant changes to its natural climate and hydrologic systems. Previous analysis of a long-term regional compilation of the water table response to the last 60 years of climate variability in New England documented a wide range of variability. The investigation evaluated the physical mechanisms, natural variability, and response of aquifers in New England using 100 long-term groundwater monitoring stations with 20 or more years of data coupled with 67 stream gages, 75 precipitation stations, and 43 temperature stations. Groundwater trends were calculated as normalized anomalies and analyzed with respect to regionally compiled precipitation, temperature, and streamflow anomalies to understand the sensitivity of the aquifer systems to change. Interestingly, trend and regression analyses demonstrate statistically significant increases in water levels over at least the past thirty years at most (80 out of 100) well sites. In this contribution we investigate the causal mechanisms behind the observed groundwater level trends using site-by-site land-use change assessments, cluster analysis, and spatial analysis of beaver populations (a possible proxy for beaver activity). Regionally, average annual precipitation has been slightly increasing since 1900, with 95% of the stations having statistically significant positive trends. Despite this, no correlation is observed between the magnitude of the annual precipitation trends and the magnitude of the groundwater level changes. Land-use change throughout the region has primarily taken
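
    The normalized-anomaly and trend calculations described here are simple to reproduce in outline; the sketch below, with invented station data, shows one plausible reading of "normalized anomalies" (standard scores) and a least-squares trend test, not the authors' actual workflow.

```python
import numpy as np

def normalized_anomalies(values):
    """Express a station record as standard scores: (x - mean) / std."""
    x = np.asarray(values, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def linear_trend(years, values):
    """Least-squares trend (units per year) and its t-statistic."""
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    slope, intercept = np.polyfit(years, values, 1)
    resid = values - (slope * years + intercept)
    se = np.sqrt(np.var(resid, ddof=2) / np.sum((years - years.mean()) ** 2))
    return float(slope), float(slope / se)

years = np.arange(1980, 2011)   # hypothetical 31-year well record
levels = 0.02 * (years - 1980) + np.random.default_rng(3).normal(0, 0.1, years.size)
print(linear_trend(years, normalized_anomalies(levels)))  # positive, significant trend
```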

  18. An embedding for the big bang

    Science.gov (United States)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  19. In Search of the Big Bubble

    Science.gov (United States)

    Simoson, Andrew; Wentzky, Bethany

    2011-01-01

    Freely rising air bubbles in water sometimes assume the shape of a spherical cap, a shape also known as the "big bubble". Is it possible to find some objective function involving a combination of a bubble's attributes for which the big bubble is the optimal shape? Following the basic idea of the definite integral, we define a bubble's surface as…

  20. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  1. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe the results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to ‘create’ an integration platform for big data analysis for smart farming and to develop a showcase. This includes both

  2. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  3. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

    Recently the Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  4. Big Red: A Development Environment for Bigraphs

    DEFF Research Database (Denmark)

    Faithfull, Alexander John; Perrone, Gian David; Hildebrandt, Thomas

    2013-01-01

    We present Big Red, a visual editor for bigraphs and bigraphical reactive systems, based upon Eclipse. The editor integrates with several existing bigraph tools to permit simulation and model-checking of bigraphical models. We give a brief introduction to the bigraphs formalism, and show how these concepts manifest within the tool using a small motivating example bigraphical model developed in Big Red.

  5. Big history and the future of humanity

    NARCIS (Netherlands)

    Spier, F.

    2011-01-01

    Big History and the Future of Humanity presents a theoretical approach that makes "big history" – the placing of the human past within the history of life, the Earth, and the Universe – accessible to general readers while revealing insights into what the future may hold for humanity.

  6. Big Science and Long-tail Science

    CERN Multimedia

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the Atlas detector of the Large Hadron Collider. This is a wow experience – although I knew it was big, I hadn't realised how big.

  7. The DBMS - your Big Data Sommelier

    NARCIS (Netherlands)

    Kargın, Y.; Kersten, M.; Manegold, S.; Pirk, H.

    2015-01-01

    When addressing the problem of "big" data volume, preparation costs are one of the key challenges: the high costs of loading, aggregating and indexing data lead to a long data-to-insight time. In addition to being a nuisance to the end-user, this latency prevents real-time analytics on "big" data.

  8. Kansen voor Big data – WPA Vertrouwen

    NARCIS (Netherlands)

    Broek, T.A. van den; Roosendaal, A.P.C.; Veenstra, A.F.E. van; Nunen, A.M. van

    2014-01-01

    Big data is expected to become a driver for economic growth, but this can only be achieved when services based on (big) data are accepted by citizens and consumers. In a recent policy brief, the Cabinet Office mentions trust as one of the three pillars (the others being transparency and control) for

  9. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  10. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws...... on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports...... is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  11. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  12. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data, such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  13. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  14. Forget the hype or reality. Big data presents new opportunities in Earth Science.

    Science.gov (United States)

    Lee, T. J.

    2015-12-01

    Earth science is arguably one of the most mature science disciplines, one that constantly acquires, curates, and utilizes a large volume of data with diverse variety. We dealt with big data before big data became a buzzword. For example, while developing the EOS program in the 1980s, NASA built the EOS Data and Information System (EOSDIS) to manage the vast amount of data acquired by the EOS fleet of satellites. EOSDIS continues to be a shining example of modern science data systems two decades on. With the explosion of the internet, the use of social media, and the provision of sensors everywhere, the big data era has brought new challenges. First, Google developed its search algorithm and a distributed data management system. The open source communities quickly followed up and developed the Hadoop file system to facilitate MapReduce workloads. The internet continues to generate tens of petabytes of data every day. There is a significant shortage of algorithms and knowledgeable manpower to mine the data. In response, the federal government created big data programs that fund research and development projects and training programs to tackle these new challenges. Meanwhile, compared to the internet data explosion, the Earth science big data problem has become quite small. Nevertheless, the big data era presents an opportunity for Earth science to evolve. We learned about MapReduce algorithms, in-memory data mining, machine learning, graph analysis, and semantic web technologies. How do we apply these new technologies to our discipline and bring the hype to Earth? In this talk, I will discuss how we might apply some of the big data technologies to our discipline and solve many of our challenging problems. More importantly, I will propose a new Earth science data system architecture to enable new types of scientific inquiry.

  15. The big war over brackets.

    Science.gov (United States)

    Alvarez, R O

    1994-01-01

    The Third Preparatory Committee Meeting for the International Conference on Population and Development (ICPD), PrepCom III, was held at UN headquarters in New York on April 4-22, 1994. It was the last big preparatory meeting leading to the ICPD to be held in Cairo, Egypt, in September 1994. The author attended the second week of meetings as the official delegate of the Institute for Social Studies and Action. Debates mostly focused upon reproductive health and rights, sexual health and rights, family planning, contraception, condom use, fertility regulation, pregnancy termination, and safe motherhood. The Vatican and its allies' preoccupation with discussing language which may imply abortion caused sustainable development, population, consumption patterns, internal and international migration, economic strategies, and budgetary allocations to be discussed less extensively than they should have been. The author describes points of controversy, the power of women at the meetings, and afterthoughts on the meetings.

  16. Was the Big Bang hot?

    Science.gov (United States)

    Wright, E. L.

    1983-01-01

    Techniques for verifying the spectrum defined by Woody and Richards (WR, 1981), which serves as a base for dust-distorted models of the 3 K background, are discussed. WR detected a sharp deviation from the Planck curve in the 3 K background. The absolute intensity of the background may be determined by the frequency dependence of the dipole anisotropy of the background or the frequency dependence effect in galactic clusters. Both methods involve the Doppler shift; analytical formulae are defined for characterization of the dipole anisotropy. The measurement of the 30-300 GHz spectra of cold galactic dust may reveal the presence of significant amounts of needle-shaped grains, which would in turn support a theory of a cold Big Bang.
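
    Both methods rest on the first-order Doppler dipole of a blackbody; in standard notation (not necessarily the paper's own), the temperature and specific-intensity dipoles are

```latex
T(\theta) = T_0\left(1 + \frac{v}{c}\cos\theta\right),
\qquad
\Delta I_\nu(\theta) = \left.\frac{\partial B_\nu}{\partial T}\right|_{T_0} T_0\,\frac{v}{c}\cos\theta .
```

    Because the frequency dependence of the derivative of the Planck function B_ν with respect to T is fixed once T_0 is fixed, measuring the dipole amplitude at several frequencies constrains the absolute temperature of the background.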

  17. Intelligent search in Big Data

    Science.gov (United States)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

    An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case when information objects are represented in the form of relational databases (RDBs), structurally marked by their schemas. The source of information for constructing an ontology and, later on, for organizing the search are texts in natural language, treated as semi-structured data. For the RDBs, these are the comments on the names of tables and their attributes. A formal definition of the RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil production subject domain ontology, and a linguistic thesaurus of the subject domain language are built. A technique for the automatic generation of SQL queries for subject domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Operation of the system showed good relevance for the majority of queries.
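
    The query-generation step can be pictured with a toy concept-to-schema mapping; the table, column, and concept names below are hypothetical stand-ins, since the paper's actual ontology and the TATNEFT schemas are not reproduced here.

```python
# Hypothetical mapping from ontology concepts to relational schema elements;
# the paper's real ontology and database schemas are not shown here.
CONCEPT_MAP = {
    "well": ("wells", "well_id"),
    "oil production rate": ("production", "oil_rate"),
    "measurement date": ("production", "measured_on"),
}

def build_query(select_concepts, filter_concept, value):
    """Translate natural-language concept names into a parameterized SQL query."""
    cols = [f"{t}.{c}" for t, c in (CONCEPT_MAP[s] for s in select_concepts)]
    tables = sorted({CONCEPT_MAP[s][0] for s in select_concepts + [filter_concept]})
    ft, fc = CONCEPT_MAP[filter_concept]
    sql = (f"SELECT {', '.join(cols)} FROM {' NATURAL JOIN '.join(tables)} "
           f"WHERE {ft}.{fc} = %s")
    return sql, (value,)

print(build_query(["well", "oil production rate"], "measurement date", "2017-01-01"))
```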

  18. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprising many jobs) became the unit of petascale data processing on the Grid. Splitting a large data processing task into jobs enables fine-granularity checkpointing, analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through the automatic re-sending of dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...

  19. Matrix sketching for big data reduction (Conference Presentation)

    Science.gov (United States)

    Ezekiel, Soundararajan; Giansiracusa, Michael

    2017-05-01

    In recent years, the concept of Big Data has become a more prominent issue as the volume of data, as well as the velocity at which it is produced, exponentially increases. By 2020 the amount of data being stored is estimated to be 44 zettabytes, and currently over 31 terabytes of data are being generated every second. Algorithms and applications must be able to effectively scale to the volume of data being generated. One such application designed to work effectively and efficiently with Big Data is IBM's Skylark. Part of DARPA's XDATA program, an open-source catalog of tools to deal with Big Data, Skylark, or Sketching-based Matrix Computations for Machine Learning, is a library of functions designed to reduce the complexity of large-scale matrix problems that also implements kernel-based machine learning tasks. Sketching reduces the dimensionality of matrices through randomization and compresses matrices while preserving key properties, speeding up computations. Matrix sketches can be used to find accurate solutions to computations in less time, or can summarize data by identifying important rows and columns. In this paper, we investigate the effectiveness of sketched matrix computations using IBM's Skylark versus non-sketched computations. We judge effectiveness based on two factors: computational complexity and validity of outputs. Initial results from testing with smaller matrices are promising, showing that Skylark achieves a considerable reduction ratio while still accurately performing matrix computations.
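
    The sketching idea itself, independent of Skylark's API (which is not shown here), can be illustrated with a plain Gaussian sketch for overdetermined least squares; the sizes, seeds, and function names below are illustrative assumptions.

```python
import numpy as np

def sketched_least_squares(A, b, sketch_rows, seed=0):
    """Solve min ||Ax - b|| approximately: draw a Gaussian sketch S (k x m)
    and solve the much smaller problem min ||(SA)x - Sb|| instead."""
    m = A.shape[0]
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((20_000, 50))                  # tall, skinny system
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(20_000)
x_hat = sketched_least_squares(A, b, sketch_rows=500)  # 40x fewer rows
print(np.linalg.norm(x_hat - x_true))                  # small approximation error
```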

  20. Next Generation Workload Management and Analysis System for Big Data

    Energy Technology Data Exchange (ETDEWEB)

    De, Kaushik [Univ. of Texas, Arlington, TX (United States)]

    2017-04-24

    We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no-cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC) and the AMS experiment on the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team; for the sake of clarity the work of the next generation team will be referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.

  1. Long-term changes in phytoplankton in a humic lake in response to the water level rising: the effects of beaver engineering on a freshwater ecosystem

    Directory of Open Access Journals (Sweden)

    Pęczuła W.

    2013-08-01

    Full Text Available Although water level changes are supposed to be a key factor affecting the functioning of lake ecosystems, knowledge on this topic is scarce, particularly for humic lakes. This paper presents the results of 18 years’ research on a small humic lake exposed to hydrological change (rising of the water level), which was induced by spontaneous colonization of the lake by the European beaver (Castor fiber L.). We put forward a hypothesis that this change will be reflected in the quantity and structure of summer phytoplankton due to expected changes in the water chemistry. We noted a statistically significant decrease in total phosphorus and calcium concentrations, electrolytic conductivity, and Secchi disc transparency, and an increase in water color. The phytoplankton structure changed, with cyanoprocaryota and greens decreasing and flagellates increasing. The alteration was observed in a lake which had previously been drained by ditches, so beaver damming appeared to cause the return of the lake to its original endorheic conditions as well as to a water chemistry and phytoplankton structure more typical of undisturbed humic lakes.

  2. Interannual and long-term changes in the trophic state of a multibasin lake: Effects of morphology, climate, winter aeration, and beaver activity

    Science.gov (United States)

    Robertson, Dale; Rose, William; Reneau, Paul C.

    2016-01-01

    Little St. Germain Lake (LSG), a relatively pristine multibasin lake in Wisconsin, USA, was examined to determine how morphologic (internal), climatic (external), anthropogenic (winter aeration), and natural (beaver activity) factors affect the trophic state (phosphorus, P; chlorophyll, CHL; and Secchi depth, SD) of each of its basins. Basins intercepting the main flow and external P sources had highest P and CHL and shallowest SD. Internal loading in shallow, polymictic basins caused P and CHL to increase and SD to decrease as summer progressed. Winter aeration used to eliminate winterkill increased summer internal P loading and decreased water quality, while reductions in upstream beaver impoundments had little effect on water quality. Variations in air temperature and precipitation affected each basin differently. Warmer air temperatures increased productivity throughout the lake and decreased clarity in less eutrophic basins. Increased precipitation increased P in the basins intercepting the main flow but had little effect on the isolated deep West Bay. These relations are used to project effects of future climatic changes on LSG and other temperate lakes.

  3. What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets

    Directory of Open Access Journals (Sweden)

    Rob Kitchin

    2016-02-01

    Full Text Available Big Data has been variously defined in the literature. In the main, definitions suggest that Big Data possess a suite of key traits: volume, velocity and variety (the 3Vs), but also exhaustivity, resolution, indexicality, relationality, extensionality and scalability. However, these definitions lack ontological clarity, with the term acting as an amorphous, catch-all label for a wide selection of data. In this paper, we consider the question ‘what makes Big Data, Big Data?’, applying Kitchin’s taxonomy of seven Big Data traits to 26 datasets drawn from seven domains, each of which is considered in the literature to constitute Big Data. The results demonstrate that only a handful of datasets possess all seven traits, and some lack volume and/or variety. Instead, there are multiple forms of Big Data. Our analysis reveals that the key definitional boundary markers are the traits of velocity and exhaustivity. We contend that Big Data as an analytical category needs to be unpacked, with the genus of Big Data further delineated and its various species identified. It is only through such ontological work that we will gain conceptual clarity about what constitutes Big Data, formulate how best to make sense of it, and identify how it might be best used to make sense of the world.

  4. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?", "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should application scientists be expected to do on their own?"

  5. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today are often too large and heterogeneous and change too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves (e.g., in social networks), and digitalization keeps advancing. Several new trends towards new data sources and innovative data analysis are currently appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data, and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and to organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  6. Interoperability Outlook in the Big Data Future

    Science.gov (United States)

    Kuo, K. S.; Ramachandran, R.

    2015-12-01

    The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file formats by NASA's Earth Observing System Data Information System (EOSDIS) doubtlessly propelled the interoperability of NASA Earth science data to unprecedented heights in the 1990s. However, we obviously still feel wanting two decades later. We believe the inadequate interoperability we experience is a result of the current practice that data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and become searchable. Data therefore cannot be efficiently filtered. Any extensive study thus requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its own preferences in data management practices as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences of these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to more efficiently analyze large volumes and great varieties of data. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis services right where the data reside. "Analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center

  7. Big data and the electronic health record.

    Science.gov (United States)

    Peters, Steve G; Buntrock, James D

    2014-01-01

    The electronic medical record has evolved from a digital representation of individual patient results and documents to information of large scale and complexity. Big Data refers to new technologies providing management and processing capabilities, targeting massive and disparate data sets. For an individual patient, techniques such as Natural Language Processing allow the integration and analysis of textual reports with structured results. For groups of patients, Big Data offers the promise of large-scale analysis of outcomes, patterns, temporal trends, and correlations. The evolution of Big Data analytics moves us from description and reporting to forecasting, predictive modeling, and decision optimization.
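
    As a toy illustration of the text-plus-structured integration mentioned above, the sketch below pulls a numeric finding out of a free-text report and joins it with structured results; the note wording, regular expression, and field names are invented for the example and do not reflect any particular EHR schema.

    ```python
    # Toy example: merge a value extracted from free text with structured results.
    # The note text, regex, and field names are illustrative, not a real schema.
    import re

    note = "Echo today. LVEF estimated at 55%. No pericardial effusion."
    structured = {"patient_id": "p001", "creatinine_mg_dl": 1.1}

    match = re.search(r"LVEF\D{0,15}(\d{1,2})\s*%", note)
    if match:
        structured["lvef_percent"] = int(match.group(1))

    print(structured)
    # {'patient_id': 'p001', 'creatinine_mg_dl': 1.1, 'lvef_percent': 55}
    ```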

  8. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  9. Ethics and Epistemology of Big Data.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian

    2017-12-01

    In this Symposium on the Ethics and Epistemology of Big Data, we present four perspectives on the ways in which the rapid growth in size of research databanks (i.e., their shift into the realm of "big data") has changed their moral, socio-political, and epistemic status. While there is clearly something different about "big data" databanks, we encourage readers to place the arguments presented in this Symposium in the context of longstanding debates about the ethics, politics, and epistemology of biobank, database, genetic, and epidemiological research.

  10. Smart Information Management in Health Big Data.

    Science.gov (United States)

    Muteba A, Eustache

    2017-01-01

    The smart information management system (SIMS) is concerned with the organization of anonymized patient records in big data and their extraction in order to provide needed real-time intelligence. The purpose of the present study is to highlight the design and implementation of the smart information management system. We emphasize, on the one hand, the organization of big data in flat files that simulate a NoSQL database and, on the other hand, the extraction of information based on a lookup table and a cache mechanism. In health big data, the SIMS aims at the identification of new therapies and approaches to delivering care.
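
    A minimal sketch of the lookup-table-plus-cache idea described above, assuming records are stored one JSON object per line in a flat file; the file name and field names are illustrative, not the system's actual design.

    ```python
    # Sketch: flat-file record store with a lookup table (id -> byte offset)
    # and a cache, in the spirit of the SIMS described above. Illustrative only.
    import json
    from functools import lru_cache

    DATA_FILE = "patients.jsonl"  # one anonymized JSON record per line (assumed)

    def build_lookup_table(path):
        """Map record id to byte offset so a record can be fetched without a scan."""
        offsets, pos = {}, 0
        with open(path, "rb") as f:
            for line in iter(f.readline, b""):
                offsets[json.loads(line)["id"]] = pos
                pos = f.tell()
        return offsets

    LOOKUP = build_lookup_table(DATA_FILE)

    @lru_cache(maxsize=4096)  # cache mechanism: hot records stay in memory
    def fetch(record_id):
        """Return the record as a dict; treat the cached result as read-only."""
        with open(DATA_FILE, "rb") as f:
            f.seek(LOOKUP[record_id])
            return json.loads(f.readline())
    ```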

  11. BIG DATA, BIG CONSEQUENCES? EEN VERKENNING NAAR PRIVACY EN BIG DATA GEBRUIK BINNEN DE OPSPORING, VERVOLGING EN RECHTSPRAAK

    OpenAIRE

    Lodder, A.R.; van der Meulen, N.S.; Wisman, T.H.A.; Meij, Lisette; Zwinkels, C.M.M.

    2014-01-01

    In this exploratory study, the privacy aspects of Big Data analysis within the domain of Security and Justice are examined. Applications within the judiciary are discussed, such as the prediction of rulings and the use of Big Data in court cases. With regard to criminal investigation, topics include predictive policing and internet-based investigation. Following an exposition of the privacy norms and the possible applications, six starting points for Big Data applications are proposed: ...

  12. Big questions, big science: meeting the challenges of global ecology.

    Science.gov (United States)

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment and larger data sets than can be collected by a single investigator's or a single group's lab, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under pressure from external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware, and engaged in the advocacy and governance of large ecological projects.

  13. 76 FR 7837 - Big Rivers Electric Corporation; Notice of Filing

    Science.gov (United States)

    2011-02-11

    Federal Energy Regulatory Commission: Big Rivers Electric Corporation; Notice of Filing. Take notice that on February 4, 2011, Big Rivers Electric Corporation (Big Rivers) filed a notice of cancellation of its Second... December 1, 2010, the date that Big Rivers integrated its transmission facilities with the Midwest...

  14. NOAA Big Data Partnership RFI

    Science.gov (United States)

    de la Beaujardiere, J.

    2014-12-01

    In February 2014, the US National Oceanic and Atmospheric Administration (NOAA) issued a Big Data Request for Information (RFI) from industry and other organizations (e.g., non-profits, research laboratories, and universities) to assess capability and interest in establishing partnerships to position a copy of NOAA's vast data holdings in the Cloud, co-located with easy and affordable access to analytical capabilities. This RFI was motivated by a number of concerns. First, NOAA's data facilities do not necessarily have sufficient network infrastructure to transmit all available observations and numerical model outputs to all potential users, or sufficient infrastructure to support simultaneous computation by many users. Second, the available data are distributed across multiple services and data facilities, making it difficult to find and integrate data for cross-domain analysis and decision-making. Third, large datasets require users to have substantial network, storage, and computing capabilities of their own in order to fully interact with and exploit the latent value of the data. Finally, there may be commercial opportunities for value-added products and services derived from our data. Putting a working copy of data in the Cloud outside of NOAA's internal networks and infrastructures should reduce demands and risks on our systems, and should enable users to interact with multiple datasets and create new lines of business (much like the industries built on government-furnished weather or GPS data). The NOAA Big Data RFI therefore solicited information on technical and business approaches regarding possible partnership(s) that -- at no net cost to the government and minimum impact on existing data facilities -- would unleash the commercial potential of its environmental observations and model outputs. NOAA would retain the master archival copy of its data. Commercial partners would not be permitted to charge fees for access to the NOAA data they receive, but

  15. Big Bend National Park: Acoustical Monitoring 2010

    Science.gov (United States)

    2013-06-01

    During the summer of 2010 (September-October 2010), the Volpe Center collected baseline acoustical data at Big Bend National Park (BIBE) at four sites deployed for approximately 30 days each. The baseline data collected during this period will be...

  16. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.
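
    For context, the "big bounce" result is commonly summarized by the effective Friedmann equation of loop quantum cosmology; the form below is the standard later effective-dynamics statement, not a formula quoted from this paper.

    ```latex
    % Effective LQC Friedmann equation: the Hubble rate H vanishes at \rho = \rho_c,
    % so contraction turns into expansion (a bounce) instead of a singularity.
    H^{2} \;=\; \frac{8\pi G}{3}\,\rho \left( 1 - \frac{\rho}{\rho_{c}} \right),
    \qquad \rho_{c} \sim \rho_{\mathrm{Planck}}
    ```

    When the matter density reaches the critical density, H = 0 and the universe bounces; for densities far below critical the classical Friedmann equation is recovered.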

  17. Statistical Challenges in Modeling Big Brain Signals

    KAUST Repository

    Yu, Zhaoxia

    2017-11-01

    Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.

  18. Statistical Challenges in Modeling Big Brain Signals

    OpenAIRE

    Yu, Zhaoxia; Pluta, Dustin; Shen, Tong; Chen, Chuansheng; Xue, Gui; Ombao, Hernando

    2017-01-01

    Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.

  19. Fisicos argentinos reproduciran el Big Bang

    CERN Multimedia

    De Ambrosio, Martin

    2008-01-01

    Two groups of Argentine physicists, from the Universities of La Plata and Buenos Aires, are working on a series of experiments that will recreate the conditions of the big explosion at the origin of the universe. (1 page)

  20. 2015 OLC Lidar DEM: Big Wood, ID

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Quantum Spatial has collected Light Detection and Ranging (LiDAR) data for the Oregon LiDAR Consortium (OLC) Big Wood 2015 study area. This study area is located in...

  1. Big data: survey, technologies, opportunities, and challenges

    National Research Council Canada - National Science Library

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range...

  2. Cosmic relics from the big bang

    Energy Technology Data Exchange (ETDEWEB)

    Hall, L.J.

    1988-12-01

    A brief introduction to the big bang picture of the early universe is given. Dark matter is discussed, particularly its implications for elementary particle physics. A classification scheme for dark matter relics is given. 21 refs., 11 figs., 1 tab.

  3. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  4. Applications of big data to smart cities

    National Research Council Canada - National Science Library

    Al Nuaimi, Eiman; Al Neyadi, Hind; Mohamed, Nader; Al-Jaroodi, Jameela

    2015-01-01

    Many governments are considering adopting the smart city concept in their cities and implementing big data applications that support smart city components to reach the required level of sustainability...

  5. Scaling big data with Hadoop and Solr

    CERN Document Server

    Karambelkar, Hrishikesh Vijay

    2015-01-01

    This book is aimed at developers, designers, and architects who would like to build big data enterprise search solutions for their customers or organizations. No prior knowledge of Apache Hadoop and Apache Solr/Lucene technologies is required.

  6. ARC Code TI: BigView

    Data.gov (United States)

    National Aeronautics and Space Administration — BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running linux. Additionally, it can work in a multi-screen environment...

  7. Big data en handschriften van Christiaan Huygens

    NARCIS (Netherlands)

    Damen, J.C.M.

    2013-01-01

    In the eighth installment of a series of combined reviews (digitaal & speciaal), Jos Damen discusses a study of big data from library catalogues and a catalogue of the work of the Dutch genius Christiaan Huygens.

  8. Big data business models: Challenges and opportunities

    Directory of Open Access Journals (Sweden)

    Ralph Schroeder

    2016-12-01

    Full Text Available This paper, based on 28 interviews from a range of business leaders and practitioners, examines the current state of big data use in business, as well as the main opportunities and challenges presented by big data. It begins with an account of the current landscape and what is meant by big data. Next, it draws distinctions between the ways organisations use data and provides a taxonomy of big data business models. We observe a variety of different business models, depending not only on sector, but also on whether the main advantages derive from analytics capabilities or from having ready access to valuable data sources. Some major challenges emerge from this account, including data quality and protectiveness about sharing data. The conclusion discusses these challenges, and points to the tensions and differing perceptions about how data should be governed as between business practitioners, the promoters of open data, and the wider public.

  9. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Full Text Available Farming is undergoing a digital revolution. Our review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. “app”? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and allows for a focus on the material consequences of big data in society.

  10. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of the most promising Big Data usage domains are connected with distinct player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the business ecosystem's major player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and by players’ responsive strategies. System dynamics is used to visualize the relationships in the provided model.

  11. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, understood as the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data spans data mining, data analysis, and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future, and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and the implementation of intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in the effective management of dynamic and large-scale data, and the efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have made it possible to manage data monitoring, data processing, and decision making in a realistic and effective way. Considering the big size of data, the variety of data, and frequent chan...

  12. Cincinnati Big Area Additive Manufacturing (BAAM)

    Energy Technology Data Exchange (ETDEWEB)

    Duty, Chad E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Love, Lonnie J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-03-04

    Oak Ridge National Laboratory (ORNL) worked with Cincinnati Incorporated (CI) to demonstrate Big Area Additive Manufacturing which increases the speed of the additive manufacturing (AM) process by over 1000X, increases the size of parts by over 10X and shows a cost reduction of over 100X. ORNL worked with CI to transition the Big Area Additive Manufacturing (BAAM) technology from a proof-of-principle (TRL 2-3) demonstration to a prototype product stage (TRL 7-8).

  13. COBE looks back to the Big Bang

    Science.gov (United States)

    Mather, John C.

    1993-01-01

    An overview is presented of NASA-Goddard's Cosmic Background Explorer (COBE), the first NASA satellite designed to observe the primeval explosion of the universe. The spacecraft carries three extremely sensitive IR and microwave instruments designed to measure the faint residual radiation from the Big Bang and to search for the formation of the first galaxies. COBE's far IR absolute spectrophotometer has shown that the Big Bang radiation has a blackbody spectrum, proving that there was no large energy release after the explosion.

  14. Big data in the new media environment.

    Science.gov (United States)

    O'Donnell, Matthew Brook; Falk, Emily B; Konrath, Sara

    2014-02-01

    Bentley et al. argue for the social scientific contextualization of "big data" by proposing a four-quadrant model. We suggest extensions of the east-west (i.e., socially motivated versus independently motivated) decision-making dimension in light of findings from social psychology and neuroscience. We outline a method that leverages linguistic tools to connect insights across fields that address the individuals underlying big-data media streams.

  15. The NOAA Big Data Project

    Science.gov (United States)

    de la Beaujardiere, J.

    2015-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) is a Big Data producer, generating tens of terabytes per day from hundreds of sensors on satellites, radars, aircraft, ships, and buoys, and from numerical models. These data are of critical importance and value for NOAA's mission to understand and predict changes in climate, weather, oceans, and coasts. In order to facilitate extracting additional value from this information, NOAA has established Cooperative Research and Development Agreements (CRADAs) with five Infrastructure-as-a-Service (IaaS) providers — Amazon, Google, IBM, Microsoft, Open Cloud Consortium — to determine whether hosting NOAA data in publicly-accessible Clouds alongside on-demand computational capability stimulates the creation of new value-added products and services and lines of business based on the data, and if the revenue generated by these new applications can support the costs of data transmission and hosting. Each IaaS provider is the anchor of a "Data Alliance" which organizations or entrepreneurs can join to develop and test new business or research avenues. This presentation will report on progress and lessons learned during the first 6 months of the 3-year CRADAs.

  16. Big-bang nucleosynthesis revisited

    Science.gov (United States)

    Olive, Keith A.; Schramm, David N.; Steigman, Gary; Walker, Terry P.

    1989-01-01

    The homogeneous big-bang nucleosynthesis yields of D, He-3, He-4, and Li-7 are computed taking into account recent measurements of the neutron mean-life as well as updates of several nuclear reaction rates which primarily affect the production of Li-7. The extraction of primordial abundances from observation and the likelihood that the primordial mass fraction of He-4, Y_p, is ≤ 0.24 are discussed. Using the primordial abundances of D + He-3 and Li-7 we limit the baryon-to-photon ratio (η₁₀, in units of 10⁻¹⁰) to 2.6 ≤ η₁₀ ≤ 4.3, which we use to argue that baryons contribute between 0.02 and 0.11 to the critical energy density of the universe. An upper limit to Y_p of 0.24 constrains the number of light neutrinos to N_ν ≤ 3.4, in excellent agreement with the LEP and SLC collider results. We turn this argument around to show that the collider limit of 3 neutrino species can be used to bound the primordial abundance of He-4: 0.235 ≤ Y_p ≤ 0.245.
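
    The step from the quoted range of η₁₀ to the baryon fraction of the critical density uses the standard conversion between the baryon-to-photon ratio and the baryon density parameter; the numerical factor below is the commonly used modern value and is not stated in the abstract itself.

    ```latex
    % Baryon-to-photon ratio vs. baryon density parameter, h = H_0 / (100 km/s/Mpc):
    \Omega_{b} h^{2} \;\simeq\; \frac{\eta_{10}}{274},
    \qquad 2.6 \le \eta_{10} \le 4.3
    \;\Longrightarrow\; 0.009 \,\lesssim\, \Omega_{b} h^{2} \,\lesssim\, 0.016
    ```

    For the range of Hubble constants entertained at the time (h roughly 0.4 to 1.0), this works out to a baryon contribution of order 0.01 to 0.1 of the critical density, in line with the 0.02-0.11 quoted above.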

  17. Neutrinos and Big Bang Nucleosynthesis

    Directory of Open Access Journals (Sweden)

    Gary Steigman

    2012-01-01

    Full Text Available According to the standard models of particle physics and cosmology, there should be a background of cosmic neutrinos in the present Universe, similar to the cosmic microwave photon background. The weakness of the weak interactions renders this neutrino background undetectable with current technology. The cosmic neutrino background can, however, be probed indirectly through its cosmological effects on big bang nucleosynthesis (BBN) and the cosmic microwave background (CMB) radiation. In this BBN review, focused on neutrinos and more generally on dark radiation, the BBN constraints on the number of “equivalent neutrinos” (dark radiation), on the baryon asymmetry (baryon density), and on a possible lepton asymmetry (neutrino degeneracy) are reviewed and updated. The BBN constraints on dark radiation and on the baryon density following from considerations of the primordial abundances of deuterium and helium-4 are in excellent agreement with the complementary results from the CMB, providing a suggestive, but currently inconclusive, hint of the presence of dark radiation, and they constrain any lepton asymmetry. For all the cases considered here there is a “lithium problem”: the BBN-predicted lithium abundance exceeds the observationally inferred primordial value by a factor of ~3.

  18. "Big Science" exhibition at Balexert

    CERN Multimedia

    2008-01-01

    CERN is going out to meet those members of the general public who were unable to attend the recent Open Day. The Laboratory will be taking its "Big Science" exhibition from the Globe of Science and Innovation to the Balexert shopping centre from 19 to 31 May 2008. The exhibition, which shows the LHC and its experiments through the eyes of a photographer, features around thirty spectacular photographs measuring 4.5 metres high and 2.5 metres wide. Welcomed and guided around the exhibition by CERN volunteers, shoppers at Balexert will also have the opportunity to discover LHC components on display and watch films. "Fun with Physics" workshops will be held at certain times of the day. Main hall of the Balexert shopping centre, ground floor, from 9.00 a.m. to 7.00 p.m. Monday to Friday and from 10 a.m. to 6 p.m. on the two Saturdays. Call for volunteers All members of the CERN personnel are invited to enrol as volunteers to help welcom...

  19. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  20. Big Data: Survey, Technologies, Opportunities, and Challenges

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  1. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  2. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  3. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  4. Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course

    Science.gov (United States)

    Asamoah, Daniel Adomako; Sharda, Ramesh; Hassan Zadeh, Amir; Kalgotra, Pankush

    2017-01-01

    In this article, we present an experiential perspective on how a big data analytics course was designed and delivered to students at a major Midwestern university. In reference to the "MSIS 2006 Model Curriculum," we designed this course as a level 2 course, with prerequisites in databases, computer programming, statistics, and data…

  5. Mechanisms and Magnitude of Cenozoic Crustal Extension in the Vicinity of Lake Mead, Nevada and the Beaver Dam Mountains, Utah: Geochemical, Geochronological,Thermochronological and Geophysical Constraints

    Science.gov (United States)

    Almeida, Rafael V.

    The central Basin and Range Province of Nevada and Utah was one of the first areas in which the existence of widespread low-angle normal faults, or detachments, was recognized. The magnitude of associated crustal extension is estimated by some to be large, in places increasing original line lengths by as much as a factor of four. However, rock mechanics experiments and seismological data cast doubt on whether these structures slipped at low inclination in the manner generally assumed. In this dissertation, I review the evidence for the presence of detachment faults in the Lake Mead and Beaver Dam Mountains areas and place constraints on the amount of extension that has occurred there since the Miocene. Chapter 1 deals with the source-provenance relationship between Miocene breccias cropping out close to Las Vegas, Nevada, and their interpreted source at Gold Butte, currently located 65 km to the east. Geochemical, geochronological and thermochronological data provide support for that long-accepted correlation, though with unexpected mismatches requiring modification of the original hypothesis. In Chapter 2, the same data are used to propose a refinement of the timing of ~1.45 Ga anorogenic magmatism and the distribution of Proterozoic crustal boundaries. Chapter 3 uses geophysical methods to address the subsurface geometry of faults along the west flank of the Beaver Dam Mountains of southwestern Utah. The data suggest that the range is bounded by steeply inclined normal faults rather than a regional-scale detachment fault. Footwall folding formerly ascribed to Miocene deformation is reinterpreted as an expression of Cretaceous crustal shortening. Fission track data presented in Chapter 4 are consistent with mid-Miocene exhumation adjacent to high-angle normal faults. They also reveal a protracted history dating back to Pennsylvanian-Permian time, with implications for the interpretation of other basement-cored uplifts in the region. A key finding of this

  6. Methow River Studies, Washington: abundance estimates from Beaver Creek and the Chewuch River screw trap, methodology testing in the Whitefish Island side channel, and survival and detection estimates from hatchery fish releases, 2013

    Science.gov (United States)

    Martens, Kyle D.; Fish, Teresa M.; Watson, Grace A.; Connolly, Patrick J.

    2014-01-01

    Salmon and steelhead populations have been severely depleted in the Columbia River from factors such as the presence of tributary dams, unscreened irrigation diversions, and habitat degradation from logging, mining, grazing, and others (Raymond, 1988). The U.S. Geological Survey (USGS) has been funded by the Bureau of Reclamation (Reclamation) to provide evaluation of on-going Reclamation funded efforts to recover Endangered Species Act (ESA) listed anadromous salmonid populations in the Methow River watershed, a watershed of the Columbia River in the Upper Columbia River Basin, in north-central Washington State (fig. 1). This monitoring and evaluation program was funded to document Reclamation’s effort to partially fulfill the 2008 Federal Columbia River Power System Biological Opinion (BiOp) (National Oceanographic and Atmospheric Administration, Fisheries Division 2003). This Biological Opinion includes Reasonable and Prudent Alternatives (RPA) to protect listed salmon and steelhead across their life cycle. Species of concern in the Methow River include Upper Columbia River (UCR) spring Chinook salmon (Oncorhynchus tshawytscha), UCR summer steelhead (O. mykiss), and bull trout (Salvelinus confluentus), which are all listed as threatened or endangered under the ESA. The work done by the USGS since 2004 has encompassed three phases of work. The first phase started in 2004 and continued through 2012. This first phase involved the evaluation of stream colonization and fish production in Beaver Creek following the modification of several water diversions (2000–2006) that were acting as barriers to upstream fish movement. Products to date from this work include: Ruttenburg (2007), Connolly and others (2008), Martens and Connolly (2008), Connolly (2010), Connolly and others (2010), Martens and Connolly (2010), Benjamin and others (2012), Romine and others (2013a), Weigel and others (2013a, 2013b, 2013c), and Martens and others (2014). The second phase, initiated in

  7. Boosting Big National Lab Data

    Energy Technology Data Exchange (ETDEWEB)

    Kleese van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-02-21

    Introduction: Big data. Love it or hate it, solving the world’s most intractable problems requires the ability to make sense of huge and complex sets of data and to do it quickly. Speeding up the process – from hours to minutes or from weeks to days – is key to our success. One major source of such big data is physical experiments. As many will know, these physical experiments are commonly used to solve challenges in fields such as energy security, manufacturing, medicine, pharmacology, environmental protection and national security. Experiments use different instruments and sensor types to research, for example, the validity of new drugs, the root causes of diseases, more efficient energy sources, new materials for everyday goods, effective methods for environmental cleanup, the optimal ingredient composition for chocolate, or how to preserve valuable antiques. This is done by experimentally determining the structure, properties and processes that govern biological systems, chemical processes and materials. The speed and quality at which we can acquire new insights from experiments directly influences the rate of scientific progress, industrial innovation and competitiveness. And gaining new groundbreaking insights, faster, is key to the economic success of our nations. Recent years have seen incredible advances in sensor technologies, from house-sized detector systems in large experiments such as the Large Hadron Collider and the ‘Eye of Gaia’ billion-pixel camera detector to high-throughput genome sequencing. These developments have led to an exponential increase in data volumes, rates and variety produced by instruments used for experimental work. This increase is coinciding with a need to analyze the experimental results at the time they are collected. This speed is required to optimize the data taking and quality, and also to enable new adaptive experiments, where the sample is manipulated as it is observed, e.g. a substance is injected into a

  8. Big things start in small ways.

    Science.gov (United States)

    Rawlings, N

    1990-12-01

    This statement from the President of the 31st December Women's Movement in Ghana was part of a larger text presented at the World NGO Conference in Tokyo, July 1-4, 1990. The women's movement in Ghana strives to achieve equal opportunity, social justice, and sustainable development against social discrimination for women. Planning and development have focused on women in socioeconomic development. Specific projects at the core of creating positive conditions for socioeconomic growth, raising the standard of living, and expanding the economy involve food and cash-crop production, food processing, food preparation, and small-scale industrial activities such as ceramics and crafts. Income supplementation helps parents to send children to school instead of to work. Daycare centers operating near workplaces benefit mothers in terms of providing a vacation, adult literacy programs, and family counseling sessions. The Movement actively mobilizes women to have children vaccinated. Access to credit for women and utilization of technology enrich life for women and reduce backbreaking labor. The Movement is building wells in rural areas to reduce parasitic infection and create easy access to a water supply. 252 projects have been completed and 100 are in progress. The Movement provides a development model for integrating the resources of government, NGOs, and members of the community. The self-confidence of women has assured the success of projects. The Sasakawa Foundation has contributed technology and Japanese volunteers to improve the cultivation of food crops and, by example, express humble, respectful, hardworking, and happy models of big things starting in small ways.

  9. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  10. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    From the Federal Register Online via the Government Publishing Office. DEPARTMENT OF THE INTERIOR, Fish and Wildlife Service. Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN. AGENCY: Fish and Wildlife Service, Interior. ACTION: Notice of availability; request for comments...

  11. Five Big, Big Five Issues: Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  12. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  13. [Big data from clinical routine].

    Science.gov (United States)

    Mansmann, U

    2018-02-16

    Over the past 100 years, evidence-based medicine has undergone several fundamental changes. Through the field of physiology, medical doctors were introduced to the natural sciences. Since the late 1940s, randomized and epidemiological studies have come to provide the evidence for medical practice, which led to the emergence of clinical epidemiology as a new field in the medical sciences. Within the past few years, big data has become the driving force behind the vision of having a comprehensive set of health-related data which tracks individual healthcare histories and, consequently, those of large populations. The aim of this article is to discuss the implications of data-driven medicine and to examine how it can find a place within clinical care. The EU-wide discussion on the development of data-driven medicine is presented. The following features and suggested actions were identified: harmonizing data formats, data processing and analysis, data exchange, related legal frameworks, and ethical challenges. For the effective development of data-driven medicine, pilot projects need to be conducted to allow for open and transparent discussion on the advantages and challenges. The Federal Ministry of Education and Research ("Bundesministerium für Bildung und Forschung," BMBF) Arthromark project is an important example. Another example is the Medical Informatics Initiative of the BMBF. The digital revolution affects clinical practice. Data can be generated and stored in quantities that are almost unimaginable. It is possible to take advantage of this for the development of a learning healthcare system if the principles of medical evidence generation are integrated into innovative IT infrastructures and processes.

  14. Reactive Programming in Java

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Reactive programming is gaining a lot of excitement. Many libraries, tools, and frameworks are beginning to make use of reactive libraries. Besides, applications dealing with big data or high-frequency data can benefit from this programming paradigm. Come to this presentation to learn what reactive programming is, what kinds of problems it solves, and how it solves them. We will take an example-oriented approach to learning the programming model and the abstraction.
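
    The talk concerns Java libraries, but the core idea, push-based streams with composable operators, fits in a few lines of any language. Here is a deliberately simplified sketch in Python; the Observable class and its operator names are invented for illustration and are not the API of any real reactive library.

    ```python
    # Minimal push-style stream with composable operators, illustrating the
    # reactive idea: observers react to values as they arrive. Not a real API.
    class Observable:
        def __init__(self, source):
            self.source = source                # any iterable acting as the stream

        def map(self, fn):
            return Observable(fn(x) for x in self.source)

        def filter(self, pred):
            return Observable(x for x in self.source if pred(x))

        def subscribe(self, on_next):
            for x in self.source:               # values are pushed to the observer
                on_next(x)

    # E.g., a high-frequency feed: keep large ticks, transform, then react.
    Observable(range(10)).filter(lambda x: x > 5).map(lambda x: x * x).subscribe(print)
    ```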

  15. Development of Multiple Big Data Analytics Platforms with Rapid Response

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2017-01-01

    Full Text Available The crucial problem in integrating multiple platforms is how to adapt to their individual computing features so as to execute assignments most efficiently and gain the best outcome. This paper introduces new approaches to big data platforms, RHadoop and SparkR, and integrates them to form a high-performance big data analytics system with multiple platforms, as part of business intelligence (BI), to carry out rapid data retrieval and analytics with R programming. This paper aims to develop an optimization for job scheduling using the MSHEFT algorithm and to implement optimized platform selection based on computing features, improving system throughput significantly. In addition, users simply give R commands rather than running Java or Scala programs to perform data retrieval and analytics on the proposed platforms. As a result, according to the performance index calculated for various methods, the optimized platform selection can reduce the execution time for data retrieval and analytics significantly, and scheduling optimization further increases system efficiency.
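
    MSHEFT is not spelled out in the abstract, but it is a variant of HEFT-style (Heterogeneous Earliest Finish Time) list scheduling, whose core idea, assigning each job to the platform with the earliest estimated finish time, can be sketched as follows; the platform rates and job sizes are illustrative, and this is not the published algorithm.

    ```python
    # Illustrative earliest-finish-time list scheduling across heterogeneous
    # platforms, in the spirit of HEFT/MSHEFT. All figures are made up.
    platforms = {"RHadoop": 1.0, "SparkR": 2.5}    # relative processing rates
    ready_at = {name: 0.0 for name in platforms}   # time each platform frees up

    def schedule(jobs):
        """jobs: list of (job_id, work_units); larger jobs are placed first."""
        plan = []
        for job_id, work in sorted(jobs, key=lambda j: -j[1]):
            # pick the platform with the earliest estimated finish time
            best = min(platforms, key=lambda p: ready_at[p] + work / platforms[p])
            ready_at[best] += work / platforms[best]
            plan.append((job_id, best, ready_at[best]))
        return plan

    print(schedule([("q1", 10), ("q2", 4), ("q3", 25)]))
    ```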

  16. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  17. Design and development of a medical big data processing system based on Hadoop.

    Science.gov (United States)

    Yao, Qin; Tian, Yu; Li, Peng-Fei; Tian, Li-Li; Qian, Yang-Ming; Li, Jing-Song

    2015-03-01

    Secondary use of medical big data is increasingly popular in healthcare services and clinical research. Understanding the logic behind medical big data reveals tendencies in hospital information technology and is of great significance for hospital information systems that are designing and expanding their services. Big data has four characteristics (Volume, Variety, Velocity and Value, the 4 Vs) that make traditional systems incapable of processing these data on standalone machines. Apache Hadoop MapReduce is a promising software framework for developing applications that process vast amounts of data in parallel on large clusters of commodity hardware in a reliable, fault-tolerant manner. With the Hadoop framework and MapReduce application program interface (API), we can more easily develop our own MapReduce applications to run on a Hadoop framework that can scale up from a single node to thousands of machines. This paper investigates a practical case of a Hadoop-based medical big data processing system. We developed this system to intelligently process medical big data and uncover features of hospital information system user behaviors. The paper studies user behaviors regarding the various data produced by different hospital information systems in daily work. We also built a five-node Hadoop cluster to execute distributed MapReduce algorithms. Our distributed algorithms show promise in facilitating efficient processing of medical big data in healthcare services and clinical research compared with single nodes. Additionally, with medical big data analytics, we can design our hospital information systems to be much more intelligent and easier to use by making personalized recommendations.
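
    As a concrete illustration of the MapReduce pattern such a system builds on, below is a minimal Hadoop Streaming-style mapper/reducer pair in Python that counts events per user from tab-separated log lines; the log layout and field positions are assumptions for the example, not the paper's actual schema.

    ```python
    #!/usr/bin/env python3
    # mapper.py: emit "user_id<TAB>1" for every input line.
    # Assumed layout: timestamp <TAB> user_id <TAB> event ...
    import sys

    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            print(f"{fields[1]}\t1")
    ```

    ```python
    #!/usr/bin/env python3
    # reducer.py: sum counts per user; Hadoop sorts mapper output by key first,
    # so equal keys arrive as contiguous runs and groupby suffices.
    import sys
    from itertools import groupby

    def keyed(lines):
        for line in lines:
            key, _, value = line.rstrip("\n").partition("\t")
            yield key, int(value)

    for user, group in groupby(keyed(sys.stdin), key=lambda kv: kv[0]):
        print(f"{user}\t{sum(v for _, v in group)}")
    ```

    A pair like this would typically be launched with the Hadoop Streaming jar, along the lines of: hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input logs/ -output counts/ (paths illustrative).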

  18. Reconnaissance of ground-water quality at selected wells in the Beaver Creek watershed, Shelby, Fayette, Tipton, and Haywood counties, West Tennessee, July to August 1992

    Science.gov (United States)

    Fielder, A.M.; Roman-Mas, A. J.; Bennett, M.W.

    1994-01-01

    A reconnaissance of water-quality conditions of the water-table aquifer in the Beaver Creek watershed and other rural areas of Shelby, Fayette, Tipton, and Haywood Counties, Tennessee, was conducted during July and August 1992. The reconnaissance was conducted by the U.S. Geological Survey, in cooperation with the Tennessee Department of Agriculture and the University of Tennessee Agricultural Extension Service. The report presents data on selected water-quality constituents and properties of water samples collected from 398 domestic wells located primarily in rural areas. Nitrate concentrations exceeded 10 milligrams per liter in water from 73 of the 398 wells. Fecal coliform and fecal streptococci bacteria were detected in water from 21 and 118 wells, respectively.

  19. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to understand the phenomenon better, starting from a clearly stated definition. This book aims to be a starting point for executives who want (and need) to keep pace with the technological breakthroughs introduced by new analytical techniques and piles of data. Common myths about big data are dispelled, and a series of different strategic approaches is provided. By browsing the book, it is possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data are discussed, some of them general - such as ethics, privacy, and ownership - while others concern more specific business situations (e.g., initial public offering, growth st...

  20. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In everyday terms, the current era in information technology can be called the era of Big Data. The fields of science, engineering, and technology are producing data at an exponential rate, amounting to exabytes of data every day. Big data helps us explore and reinvent many areas, not limited to education, health, and law. The primary purpose of this paper is to provide an in-depth analysis of healthcare using big data and analytics. The main point of emphasis is that big data is being stored all the time but is typically used only to look back at history; the time has come to emphasize analysis of these data to improve medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader scope using Hadoop, which is just the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also covers the strengths and drawbacks of the approach compared with the conventional techniques available.

  1. Big Data access and infrastructure for modern biology: case studies in data repository utility.

    Science.gov (United States)

    Boles, Nathan C; Stone, Tyler; Bergeron, Charles; Kiehl, Thomas R

    2017-01-01

    Big Data is no longer solely the purview of big organizations with big resources. Today's routine tools and experimental methods can generate large slices of data. For example, high-throughput sequencing can quickly interrogate biological systems for the expression levels of thousands of different RNAs, examine epigenetic marks throughout the genome, and detect differences in the genomes of individuals. Multichannel electrophysiology platforms produce gigabytes of data in just a few minutes of recording. Imaging systems generate videos capturing biological behaviors over the course of days. Thus, any researcher now has access to a veritable wealth of data. However, the ability of any given researcher to utilize that data is limited by her/his own resources and skills for downloading, storing, and analyzing the data. In this paper, we examine the necessary resources required to engage Big Data, survey the state of modern data analysis pipelines, present a few data repository case studies, and touch on current institutions and programs supporting the work that relies on Big Data. © 2016 New York Academy of Sciences.
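
    As a small, hedged illustration of the downloading-and-storing resource problem raised above, the snippet below streams a repository file to disk in fixed-size chunks while computing a checksum on the fly, so the transfer never has to fit in memory and the copy can be verified against a checksum the repository publishes. The URL in the usage comment is a placeholder, not a real endpoint.

      import hashlib
      import urllib.request

      def fetch_large_file(url, dest, chunk_bytes=1 << 20):
          """Stream url to dest in 1 MiB chunks; return the SHA-256 hex
          digest so the local copy can be checked against the checksum
          published by the data repository."""
          sha = hashlib.sha256()
          with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
              while True:
                  block = response.read(chunk_bytes)
                  if not block:
                      break
                  out.write(block)
                  sha.update(block)
          return sha.hexdigest()

      # Placeholder URL for illustration only:
      # fetch_large_file("https://repository.example.org/runs/sample.fastq.gz",
      #                  "sample.fastq.gz")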

  2. One Second After the Big Bang

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    A new experiment called PTOLEMY (Princeton Tritium Observatory for Light, Early-Universe, Massive-Neutrino Yield) is under development at the Princeton Plasma Physics Laboratory with the goal of challenging one of the most fundamental predictions of the Big Bang – the present-day existence of relic neutrinos produced less than one second after the Big Bang. Using a gigantic graphene surface to hold 100 grams of a single-atomic layer of tritium, low noise antennas that sense the radio waves of individual electrons undergoing cyclotron motion, and a massive array of cryogenic sensors that sit at the transition between normal and superconducting states, the PTOLEMY project has the potential to challenge one of the most fundamental predictions of the Big Bang, to potentially uncover new interactions and properties of the neutrinos, and to search for the existence of a species of light dark matter known as sterile neutrinos.

  3. Social networks, big data and transport planning

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz Sanchez, T.; Lidon Mars Aicart, M. del; Arroyo Lopez, M.R.; Serna Nocedal, A.

    2016-07-01

    The characteristics of the people each individual is related or tied to affect his or her activity-travel behavior. That influence is especially associated with social and recreational activities, which are increasingly important. Collecting high-quality data about those social networks is very difficult, because respondents are asked about their general social life, which is more demanding to remember than specific facts. On the other hand, there are currently several potential sources of transport data, characterized by the huge amount of information available, the velocity with which it is obtained, and the variety of formats in which it is presented. This sort of information is commonly known as Big Data. In this paper we identify potential sources of social-network-related big data that can be used in Transport Planning. Then, a review of current applications in Transport Planning is presented. Finally, some future prospects of using social-network-related big data are highlighted. (Author)

  4. Latency critical big data computing in finance

    Directory of Open Access Journals (Sweden)

    Xinhui Tian

    2015-12-01

    Full Text Available Analytics based on big data computing can benefit today's banking and financial organizations in many respects and provide much valuable information for achieving more intelligent trading, which can help them gain a great competitive advantage. However, the large scale of the data and the latency-critical analytics requirements in finance pose a great challenge for current system architectures. In this paper, we first analyze the challenges brought by latency-critical big data computing in finance, then discuss how to handle these challenges from a multi-level system perspective. We also survey current research on low latency at different system levels. The discussion and conclusions in the paper can be useful to banking and financial organizations with critical latency requirements for big data analytics.

  5. Big data in food safety: An overview.

    Science.gov (United States)

    Marvin, Hans J P; Janssen, Esmée M; Bouzembrak, Yamine; Hendriksen, Peter J M; Staats, Martijn

    2017-07-24

    Technology is now being developed that is able to handle vast amounts of structured and unstructured data from diverse sources and origins. These technologies are often referred to as big data, and they open new areas of research and applications that will have an increasing impact on all sectors of our society. In this paper we assessed the extent to which big data is being applied in the food safety domain and identified several promising trends. In several parts of the world, governments stimulate the publication on the internet of all data generated in publicly funded research projects. This policy opens new opportunities for stakeholders dealing with food safety to address issues that could not be addressed before. The application of mobile phones as detection devices for food safety and the use of social media as an early warning of food safety problems are a few examples of the new developments that are possible due to big data.

  6. Big Data Analytics for Genomic Medicine.

    Science.gov (United States)

    He, Karen Y; Ge, Dongliang; He, Max M

    2017-02-15

    Genomic medicine attempts to build individualized strategies for diagnostic or therapeutic decision-making by utilizing patients' genomic information. Big Data analytics uncovers hidden patterns, unknown correlations, and other insights by examining large-scale, diverse data sets. While the integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure present challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identifying clinically actionable genetic variants for individualized diagnosis and therapy. In this paper, we review the challenges of manipulating large-scale next-generation sequencing (NGS) data and diverse clinical data derived from EHRs for genomic medicine. We introduce possible solutions for different challenges in manipulating, managing, and analyzing genomic and clinical data to implement genomic medicine. Additionally, we present a practical Big Data toolset for identifying clinically actionable genetic variants using high-throughput NGS data and EHRs.
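
    As a toy sketch of the final step described above (flagging clinically actionable variants in NGS output), the code below scans a VCF file for passing variants whose gene annotation falls in a curated panel. The GENE INFO tag and the panel contents are assumptions made for illustration; real pipelines rely on full annotation tools rather than this kind of string parsing.

      def actionable_variants(vcf_path, gene_panel):
          """Yield (chrom, pos, ref, alt, gene) for passing VCF records
          whose assumed 'GENE=' INFO tag is in gene_panel."""
          with open(vcf_path) as vcf:
              for line in vcf:
                  if line.startswith("#"):        # skip header lines
                      continue
                  fields = line.rstrip("\n").split("\t")
                  if len(fields) < 8:
                      continue
                  chrom, pos, _, ref, alt, _, filt, info = fields[:8]
                  if filt not in ("PASS", "."):   # keep only passing calls
                      continue
                  tags = dict(kv.split("=", 1)
                              for kv in info.split(";") if "=" in kv)
                  gene = tags.get("GENE")
                  if gene in gene_panel:
                      yield chrom, pos, ref, alt, gene

      # Hypothetical usage with an invented panel:
      # for hit in actionable_variants("patient.vcf", {"BRCA1", "BRCA2", "TP53"}):
      #     print(hit)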

  7. A Maturity Analysis of Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Radu BONCEA

    2017-01-01

    Full Text Available In recent years Big Data technologies have been developed at a faster pace due to increasing demand from applications that generate and process vast amounts of data. Cloud Computing and the Internet of Things are the main drivers for developing enterprise solutions that support Business Intelligence, which in turn creates new opportunities and new business models. An enterprise can now collect data about its internal processes, process this data to gain new insights and business value, and make better decisions. This is the reason why Big Data is now seen as a vital component in any enterprise architecture. In this article the maturity of several Big Data technologies is put under analysis. For each technology, several aspects are considered, such as development status, market usage, licensing policies, availability of certifications, adoption, and support for cloud computing and the enterprise.

  8. Geologic map of Big Bend National Park, Texas

    Science.gov (United States)

    Turner, Kenzie J.; Berry, Margaret E.; Page, William R.; Lehman, Thomas M.; Bohannon, Robert G.; Scott, Robert B.; Miggins, Daniel P.; Budahn, James R.; Cooper, Roger W.; Drenth, Benjamin J.; Anderson, Eric D.; Williams, Van S.

    2011-01-01

    The purpose of this map is to provide the National Park Service and the public with an updated digital geologic map of Big Bend National Park (BBNP). The geologic map report of Maxwell and others (1967) provides a fully comprehensive account of the important volcanic, structural, geomorphological, and paleontological features that define BBNP. However, the map is on a geographically distorted planimetric base and lacks topography, which has caused difficulty in conducting GIS-based data analyses and georeferencing the many geologic features investigated and depicted on the map. In addition, the map is outdated, excluding significant data from numerous studies that have been carried out since its publication more than 40 years ago. This report includes a modern digital geologic map that can be utilized with standard GIS applications to aid BBNP researchers in geologic data analysis, natural resource and ecosystem management, monitoring, assessment, inventory activities, and educational and recreational uses. The digital map incorporates new data, many revisions, and greater detail than the original map. Although some geologic issues remain unresolved for BBNP, the updated map serves as a foundation for addressing those issues. Funding for the Big Bend National Park geologic map was provided by the United States Geological Survey (USGS) National Cooperative Geologic Mapping Program and the National Park Service. The Big Bend mapping project was administered by staff in the USGS Geology and Environmental Change Science Center, Denver, Colo. Members of the USGS Mineral and Environmental Resources Science Center completed investigations in parallel with the geologic mapping project. Results of these investigations addressed some significant current issues in BBNP and the U.S.-Mexico border region, including contaminants and human health, ecosystems, and water resources. Funding for the high-resolution aeromagnetic survey in BBNP, and associated data analyses and

  9. Big data for risk analysis: The future of safe railways

    Energy Technology Data Exchange (ETDEWEB)

    Figueres Esteban, M.

    2016-07-01

    New technology brings ever more data to support decision-making for intelligent transport systems. Big Data is no longer a futuristic challenge; it is happening right now: modern railway systems have countless sources of data providing a massive quantity of diverse information on every aspect of operations, such as train position and speed, brake applications, passenger numbers, status of the signaling system, or reported incidents. The traditional approaches to safety management on the railways have relied on static data sources to populate traditional safety tools such as bow-tie models and fault trees. The Big Data Risk Analysis (BDRA) program for railways at the University of Huddersfield is investigating how the many Big Data sources from the railway can be combined in a meaningful way to provide a better understanding of the GB railway systems and the environment within which they operate. Moving to BDRA is not simply a matter of scaling up existing analysis techniques. BDRA has to coordinate and combine a wide range of sources with different types of data and accuracy, and that is not straightforward. BDRA is structured around three components: data, ontology and visualisation. Each of these components is critical to support the overall framework. This paper describes how these three components are used to extract safety knowledge from two textual data sources by means of ontologies. This is part of the ongoing BDRA research that is looking at integrating many large and varied data sources to support railway safety and decision-makers. (Author)
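
    The ontology component mentioned above can be illustrated with a deliberately tiny sketch: matching free-text incident reports against a hand-made vocabulary of safety concepts. The concept names, phrase lists, and example report are all invented here; BDRA's actual ontologies are far richer.

      # Toy ontology: each safety concept maps to surface phrases that
      # signal it in free-text incident reports (invented examples).
      SAFETY_ONTOLOGY = {
          "SPAD": ["signal passed at danger", "spad"],
          "BrakeFault": ["brake failure", "brake fault", "braking defect"],
          "TrackDefect": ["broken rail", "track defect"],
      }

      def tag_report(text):
          """Return the sorted set of ontology concepts whose phrases
          occur in the report text (case-insensitive substring match)."""
          text = text.lower()
          return sorted({concept
                         for concept, phrases in SAFETY_ONTOLOGY.items()
                         for phrase in phrases if phrase in text})

      print(tag_report("Driver reported a SPAD after a suspected brake fault."))
      # -> ['BrakeFault', 'SPAD']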

  10. A Critical Axiology for Big Data Studies

    Directory of Open Access Journals (Sweden)

    Saif Shahin

    2016-01-01

    Full Text Available Big Data has had a major impact on journalism and communication studies, while also generating a number of social concerns ranging from mass surveillance to the legitimization of prejudices such as racism. In this article, an agenda for critical Big Data research is developed, discussing what the purpose of such research should be, which pitfalls to guard against, and the possibility of adapting Big Data methods to carry out empirical research from a critical standpoint. Such a research program will not only allow critical scholarship to meaningfully challenge Big Data as a hegemonic tool, but will also allow scholars to use Big Data resources to address a range of social problems in previously impossible ways. The article calls for methodological innovation that combines emerging Big Data techniques with critical, qualitative research methods, such as ethnography and discourse analysis, in ways that let them complement each other.

  11. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  12. How quantum is the big bang?

    Science.gov (United States)

    Bojowald, Martin

    2008-06-06

    When quantum gravity is used to discuss the big bang singularity, the most important, though rarely addressed, question is what role genuine quantum degrees of freedom play. Here, complete effective equations are derived for isotropic models with an interacting scalar to all orders in the expansions involved. The resulting coupling terms show that quantum fluctuations do not affect the bounce much. Quantum correlations, however, do have an important role and could even eliminate the bounce. How quantum gravity regularizes the big bang depends crucially on properties of the quantum state.

  13. Coinductive Big-Step Semantics for Concurrency

    Directory of Open Access Journals (Sweden)

    Tarmo Uustalu

    2013-12-01

    Full Text Available In a paper presented at SOS 2010, we developed a framework for big-step semantics for interactive input-output in combination with divergence, based on coinductive and mixed inductive-coinductive notions of resumptions, evaluation and termination-sensitive weak bisimilarity. In contrast to standard inductively defined big-step semantics, this framework handles divergence properly; in particular, runs that produce some observable effects and then diverge are not "lost". Here we scale this approach to shared-variable concurrency on a simple example language. We develop the metatheory of our semantics in a constructive logic.

  14. From big data to smart data

    CERN Document Server

    Iafrate, Fernando

    2015-01-01

    A pragmatic approach to Big Data that takes the reader on a journey between Big Data (what it is) and Smart Data (what it is for). Today's decision making can be reached via information (related to the data), knowledge (related to people and processes), and timing (the capacity to decide, act and react at the right time). The huge increase in the volume of data traffic and in its formats (unstructured data such as blogs, logs, and video), generated by the "digitalization" of our world, radically modifies our relationship to space (in motion) and time, dimension and by capillarity, the enterpr

  15. Research in Big Data Warehousing using Hadoop

    Directory of Open Access Journals (Sweden)

    Abderrazak Sebaa

    2017-05-01

    Full Text Available Traditional data warehouses have played a key role in decision support systems until the recent past. However, the rapid growth of the data generated by current applications places new requirements on data warehousing systems: the volume and format of collected datasets, the variety of data sources, the integration of unstructured data, and powerful analytical processing. In the age of Big Data, it is important to keep this pace and adapt existing warehouse systems to overcome the new issues and challenges. In this paper, we focus on data warehousing over big data. We discuss the limitations of traditional systems, and we present alternative technologies and related future work for data warehousing.

  16. Some notes on the big trip

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Diaz, Pedro F. [Colina de los Chopos, Centro de Fisica ' Miguel A. Catalan' , Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)]. E-mail: pedrogonzalez@mi.madritel.es

    2006-03-30

    The big trip is a cosmological process thought to occur in the future by which the entire universe would be engulfed inside a gigantic wormhole and might travel through it along space and time. In this Letter we discuss different arguments that have been raised against the viability of that process, reaching the conclusions that the process can actually occur by accretion of phantom energy onto the wormholes and that it is stable and might occur in the global context of a multiverse model. We finally argue that the big trip does not contradict any holographic bounds on entropy and information.

  17. Revised Uncertainties in Big Bang Nucleosynthesis

    OpenAIRE

    Foley, Michael; Sasankan, Nishanth; Kusakabe, Motohiko; Mathews, Grant J.

    2017-01-01

    Big Bang Nucleosynthesis (BBN) explores the first few minutes of nuclei formation after the Big Bang. We present updates that result in new constraints at the 2σ level for the abundances of the four primary light nuclides - D, 3He, 4He, and 7Li - in BBN. A modified standard BBN code was used in a Monte Carlo analysis of the nucleosynthesis uncertainty as a function of the baryon-to-photon ratio. Reaction rates were updated to those of NACRE, REACLIB, and R-matrix calculations. The results...
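
    The Monte Carlo step described in the abstract can be sketched in a few lines: draw each uncertain reaction rate from a log-normal distribution, propagate the draws through the abundance calculation, and read the 2σ spread off the resulting sample. The abundance function below is a toy placeholder for a full BBN network code, and every number in it is illustrative only.

      import random
      import statistics

      def toy_deuterium_abundance(eta, rate_factor):
          # Placeholder for a full BBN reaction-network integration: a
          # rough power-law dependence of D/H on the baryon-to-photon
          # ratio eta, scaled by a multiplicative reaction-rate factor.
          return 2.5e-5 * (6.1e-10 / eta) ** 1.6 * rate_factor

      def mc_two_sigma(eta, log_rate_sigma=0.05, n_samples=20000):
          # Log-normal sampling is conventional for nuclear reaction-rate
          # uncertainties; propagate each draw and report mean and 2-sigma.
          draws = [toy_deuterium_abundance(
                       eta, random.lognormvariate(0.0, log_rate_sigma))
                   for _ in range(n_samples)]
          mean = statistics.fmean(draws)
          return mean, 2.0 * statistics.stdev(draws, mean)

      # print(mc_two_sigma(6.1e-10))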

  18. Privacy Challenges of Genomic Big Data.

    Science.gov (United States)

    Shen, Hong; Ma, Jian

    2017-01-01

    With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently with low cost. However, such massive amount of personal genomic data creates tremendous challenge for privacy, especially given the emergence of direct-to-consumer (DTC) industry that provides genetic testing services. Here we review the recent development in genomic big data and its implications on privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  19. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. Mathematical results along these lines have appeared in the Journal of Astrobiology and Acta Astronautica, but they will not be repeated in this paper in order not to make it too long. Possibly a whole new book about GBMs will be written by the author. Mass Extinctions of the geological past are one more topic that may be cast in the language of a decreasing GBM over a short time lapse, since Mass Extinctions are sudden lows in the number of living species. In this paper, we give formulae for the decreasing GBMs of Mass Extinctions, like the K-Pg one of 64 million years ago. Finally, we note that the Big History Equation is just the extension of the Drake Equation to 13.8 billion years of cosmic
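
    Since the central mathematical tool of the abstract is the Geometric Brownian Motion, a short simulation makes it tangible. The sketch below samples one GBM path N(t) = N0 * exp((mu - sigma^2/2) * t + sigma * W(t)), with a negative drift standing in for a mass extinction; every parameter value is an arbitrary illustration, not one of the paper's fitted values.

      import math
      import random

      def gbm_path(n0, mu, sigma, t_max, steps):
          """Sample one Geometric Brownian Motion path
          N(t) = n0 * exp((mu - sigma**2 / 2) * t + sigma * W(t))
          by exact log-normal increments over equal time steps."""
          dt = t_max / steps
          n, path = n0, [n0]
          for _ in range(steps):
              dw = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
              n *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw)
              path.append(n)
          return path

      # A decreasing GBM (negative drift) as a toy mass-extinction model:
      # print(gbm_path(n0=1.0, mu=-2.0, sigma=0.5, t_max=1.0, steps=10))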

  20. Big Data Analytics in Chemical Engineering.

    Science.gov (United States)

    Chiang, Leo; Lu, Bo; Castillo, Ivan

    2017-06-07

    Big data analytics is the journey to turn data into insights for more informed business and operational decisions. As the chemical engineering community is collecting more data (volume) from different sources (variety), this journey becomes more challenging in terms of using the right data and the right tools (analytics) to make the right decisions in real time (velocity). This article highlights recent big data advancements in five industries, including chemicals, energy, semiconductors, pharmaceuticals, and food, and then discusses technical, platform, and culture challenges. To reach the next milestone in multiplying successes to the enterprise level, government, academia, and industry need to collaboratively focus on workforce development and innovation.