WorldWideScience

Sample records for big water availability

  1. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

Because of advances in information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have emerged and are now commonly applied in many fields. However, academic studies have only recently begun to pay attention to Big Data applications in water resources; as a result, water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying Big Data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical framework; here we define "Water Big Data" and explain its tridimensional properties: the time dimension, the spatial dimension, and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data, and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospect of applying Big Data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data-Driven Decision) will be utilized more in water resources management in the future.
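
The record above names MapReduce as one of the key technologies. As a purely illustrative sketch (not code from the paper), the snippet below mimics a MapReduce-style map and reduce phase in Python to tally hypothetical "Water Big Data" records by the hydrology/ecology/socio-economic classification the authors describe; all field names and values are invented.

    from collections import defaultdict

    # Hypothetical "Water Big Data" records (field names invented for illustration).
    records = [
        {"category": "hydrology", "station": "A01", "discharge_cms": 12.4},
        {"category": "hydrology", "station": "A02", "discharge_cms": 7.9},
        {"category": "ecology", "station": "A01", "chlorophyll_ugL": 3.1},
        {"category": "socio-economic", "region": "R1", "water_use_m3": 5.2e6},
    ]

    def map_phase(record):
        # Emit (key, value) pairs keyed by the Water Big Data category.
        yield record["category"], 1

    def reduce_phase(pairs):
        # Count records per category, standing in for heavier analytics.
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        return dict(counts)

    pairs = (kv for rec in records for kv in map_phase(rec))
    print(reduce_phase(pairs))  # {'hydrology': 2, 'ecology': 1, 'socio-economic': 1}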

  2. Sense Things in the Big Deep Water Bring the Big Deep Water to Computers so People can understand the Deep Water all the Time without getting wet

    Science.gov (United States)

    Pelz, M.; Heesemann, M.; Scherwath, M.; Owens, D.; Hoeberechts, M.; Moran, K.

    2015-12-01

    Senses help us learn stuff about the world. We put sense things in, over, and under the water to help people understand water, ice, rocks, life and changes over time out there in the big water. Sense things are like our eyes and ears. We can use them to look up and down, right and left all of the time. We can also use them on top of or near the water to see wind and waves. As the water gets deep, we can use our sense things to see many a layer of different water that make up the big water. On the big water we watch ice grow and then go away again. We think our sense things will help us know if this is different from normal, because it could be bad for people soon if it is not normal. Our sense things let us hear big water animals talking low (but sometimes high). We can also see animals that live at the bottom of the big water and we take lots of pictures of them. Lots of the animals we see are soft and small or hard and small, but sometimes the really big ones are seen too. We also use our sense things on the bottom and sometimes feel the ground shaking. Sometimes, we get little pockets of bad smelling air going up, too. In other areas of the bottom, we feel hot hot water coming out of the rock making new rocks and we watch some animals even make houses and food out of the hot hot water that turns to rock as it cools. To take care of the sense things we use and control water cars and smaller water cars that can dive deep in the water away from the bigger water car. We like to put new things in the water and take things out of the water that need to be fixed at least once a year. Sense things are very cool because you can use the sense things with your computer too. We share everything for free on our computers, which your computer talks to and gets pictures and sounds for you. Sharing the facts from the sense things is the best part about having the sense things because we can get many new ideas about understanding the big water from anyone with a computer!

  3. Leveraging Big Data Tools and Technologies: Addressing the Challenges of the Water Quality Sector

    Directory of Open Access Journals (Sweden)

    Juan Manuel Ponce Romero

    2017-11-01

Full Text Available The water utility sector is subject to stringent legislation, seeking to address both the evolution of practices within the chemical/pharmaceutical industry and the safeguarding of environmental protection, and is informed by stakeholder views. Growing public environmental awareness is balanced by fair apportionment of liability within the sector. This highly complex and dynamic context poses challenges for water utilities seeking to manage the diverse chemicals arising from disparate sources reaching Wastewater Treatment Plants, including residential, commercial, and industrial points of origin, and diffuse sources including agricultural and hard-surface run-off. Effluents contain broad ranges of organic and inorganic compounds, herbicides, pesticides, phosphorus, pharmaceuticals, and chemicals of emerging concern. These potential pollutants can be in dissolved form, or arise in association with organic matter, the associated risks posing significant environmental challenges. This paper examines how the adoption of new Big Data tools and computational technologies can offer great advantage to the water utility sector in addressing this challenge. Big Data approaches facilitate improved understanding and insight into these challenges by industry, regulator, and public alike. We discuss how Big Data approaches can be used to improve the outputs of tools currently in use by the water industry, such as SAGIS (Source Apportionment GIS system), helping to reveal new relationships between chemicals, the environment, and human health, and in turn provide better understanding of contaminants in wastewater (origin, pathways, and persistence). We highlight how the sector can draw upon Big Data tools to add value to legacy datasets, such as the Chemicals Investigation Programme in the UK, combined with contemporary data sources, extending the lifespan of data, focusing monitoring strategies, and helping users adapt and plan more efficiently. Despite

  4. When Big Ice Turns Into Water It Matters For Houses, Stores And Schools All Over

    Science.gov (United States)

    Bell, R. E.

    2017-12-01

When ice in my glass turns to water it is not bad but when the big ice at the top and bottom of the world turns into water it is not good. This new water makes many houses, stores and schools wet. It is really bad when the wind is strong and the rain is hard. New old ice water gets all over the place. We can not get to work or school or home. We go to the big ice at the top and bottom of the world to see if it will turn to water soon and make more houses wet. We fly over the big ice to see how it is doing. Most of the big ice sits on rock. Around the edge of the big sitting on rock ice is really low ice that rides on top of the water. This really low ice slows down the big rock ice turning into water. If the really low ice cracks up and turns into little pieces of ice, the big rock ice will make more houses wet. We look to see if there is new water in the cracks. Water in the cracks is bad as it hurts the big rock ice. Water in the cracks on the really low ice will turn the low ice into many little pieces of ice. Then the big rock ice will turn to water. That is why water in cracks is bad for the houses, schools and businesses. If water moves off the really low ice, it does not stay in the cracks. This is better for the really low ice. This is better for the big rock ice. We took pictures of the really low ice and saw water leaving. The water was not staying in the cracks. Water leaving the really low ice might be good for houses, schools and stores.

  5. Resources available for autism research in the big data era: a systematic review

    Directory of Open Access Journals (Sweden)

    Reem Al-jawahiri

    2017-01-01

Full Text Available Recently, there has been a move encouraged by many stakeholders towards generating big, open data in many areas of research. One area where big, open data is particularly valuable is in research relating to complex heterogeneous disorders such as Autism Spectrum Disorder (ASD). The inconsistencies of findings and the great heterogeneity of ASD necessitate the use of big and open data to tackle important challenges such as understanding and defining the heterogeneity and potential subtypes of ASD. To this end, a number of initiatives have been established that aim to develop big and/or open data resources for autism research. In order to provide a useful data reference for autism researchers, a systematic search for ASD data resources was conducted using the Scopus database, the Google search engine, and the pages on ‘recommended repositories’ by key journals, and the findings were translated into a comprehensive list focused on ASD data. The aim of this review is to systematically search for all available ASD data resources providing the following data types: phenotypic, neuroimaging, human brain connectivity matrices, human brain statistical maps, biospecimens, and ASD participant recruitment. A total of 33 resources were found containing different types of data from varying numbers of participants. A description of the data available from each data resource, and links to each resource, are provided. Moreover, key implications are addressed and underrepresented areas of data are identified.

  6. The technopolitics of big infrastructure and the Chinese water machine

    Directory of Open Access Journals (Sweden)

    Britt Crow-Miller

    2017-06-01

Full Text Available Despite widespread recognition of the problems caused by relying on engineering approaches to water management issues, since 2000 China has raised its commitment to a concrete-heavy approach to water management. While, historically, China’s embrace of modernist water management could be understood as part of a broader set of ideas about controlling nature, in the post-reform era this philosophical view has merged with a technocratic vision of national development. In the past two decades, a Chinese Water Machine has coalesced: the institutional embodiment of China’s commitment to large infrastructure. The technocratic vision of the political and economic elite at the helm of this Machine has been manifest in the form of some of the world’s largest water infrastructure projects, including the Three Gorges Dam and the South-North Water Transfer Project, and in the exporting of China’s vision of concrete-heavy development beyond its own borders. This paper argues that China’s approach to water management is best described as a techno-political regime that extends well beyond infrastructure, and is fundamentally shaped by both past choices and current political-economic conditions. Emerging from this regime, the Chinese Water Machine is one of the forces driving the (re)turn to big water infrastructure globally.

  7. Water resources in the Big Lost River Basin, south-central Idaho

    Science.gov (United States)

    Crosthwaite, E.G.; Thomas, C.A.; Dyer, K.L.

    1970-01-01

The Big Lost River basin occupies about 1,400 square miles in south-central Idaho and drains to the Snake River Plain. The economy in the area is based on irrigation agriculture and stockraising. The basin is underlain by a diverse assemblage of rocks which range in age from Precambrian to Holocene. The assemblage is divided into five groups on the basis of their hydrologic characteristics: carbonate rocks, noncarbonate rocks, cemented alluvial deposits, unconsolidated alluvial deposits, and basalt. The principal aquifer is unconsolidated alluvial fill that is several thousand feet thick in the main valley. The carbonate rocks are the major bedrock aquifer. They absorb a significant amount of precipitation and, in places, are very permeable as evidenced by large springs discharging from or near exposures of carbonate rocks. Only the alluvium, carbonate rock and locally the basalt yield significant amounts of water. A total of about 67,000 acres is irrigated with water diverted from the Big Lost River. The annual flow of the river is highly variable and water-supply deficiencies are common. About 1 out of every 2 years is considered a drought year. In the period 1955-68, about 175 irrigation wells were drilled to provide a supplemental water supply to land irrigated from the canal system and to irrigate an additional 8,500 acres of new land. Average annual precipitation ranged from 8 inches on the valley floor to about 50 inches at some higher elevations during the base period 1944-68. The estimated water yield of the Big Lost River basin averaged 650 cfs (cubic feet per second) for the base period. Of this amount, 150 cfs was transpired by crops, 75 cfs left the basin as streamflow, and 425 cfs left as ground-water flow. A map of precipitation and estimated values of evapotranspiration were used to construct a water-yield map. A distinctive feature of the Big Lost River basin is the large interchange of water from surface streams into the ground and from the
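
The water-budget figures quoted above can be checked with simple arithmetic; the sketch below only restates the numbers from the abstract (in cubic feet per second) and confirms they close.

    # Values quoted in the abstract, in cubic feet per second (cfs), 1944-68 base period.
    water_yield = 650        # estimated average water yield of the basin
    crop_et = 150            # transpired by crops
    streamflow_out = 75      # left the basin as streamflow
    groundwater_out = 425    # left the basin as ground-water flow

    total = crop_et + streamflow_out + groundwater_out
    assert total == water_yield
    print(f"Budget closes: {total} cfs accounted for out of {water_yield} cfs")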

  8. Water quality and trend analysis of Colorado--Big Thompson system reservoirs and related conveyances, 1969 through 2000

    Science.gov (United States)

    Stevens, Michael R.

    2003-01-01

This report summarizes and assesses: water-quality and field-measurement profile data collected by the U.S. Geological Survey and stored in the U.S. Geological Survey National Water Information System; time-series trends of chemical constituents and physical properties; trends in oxygen deficits in the hypolimnion of the reservoirs in the late summer season, by the seasonal Kendall trend test method; nutrient limitation and trophic status indicators; and water-quality data in terms of Colorado water-quality standards. Water quality was generally acceptable for primary uses throughout the Colorado-Big Thompson system over the site periods of record, which are all within the span of 1969 to 2000. Dissolved solids and nutrient concentrations were low and typical of a forested/mountainous/crystalline bedrock hydrologic setting. Most of the more toxic trace elements were rarely detected or were found in low concentrations, due at least in part to a relative lack of ore-mineral deposits within the drainage areas of the Colorado-Big Thompson Project. Constituent concentrations consistently met water-quality standard thresholds set by the State of Colorado. Trophic-State Index values indicated that mesotrophic conditions generally prevailed at reservoirs, based on available Secchi depth, total phosphorus concentrations, and chlorophyll-a concentrations. Based on plots of time-series values and concentrations and seasonal Kendall nonparametric trends testing, dissolved solids and most major ions are decreasing at most sites. Many of the nutrient data did not meet the minimum criteria for time-series testing; but for those that did, nutrient concentrations were generally stable (no statistical trend) or decreasing (ammonia plus organic nitrogen and total phosphorus). Iron and manganese concentrations were stable or decreasing at most sites that met testing criteria. Chlorophyll-a data were only collected for 11 years but generally indicated quasi-stable or d
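
For readers unfamiliar with the seasonal Kendall trend test mentioned above, the following is a minimal, hedged sketch of the basic idea (a Mann-Kendall S statistic computed per season and summed, normal approximation, no tie or serial-correlation corrections); the monthly data are synthetic, not the Colorado-Big Thompson record.

    import numpy as np

    def seasonal_kendall(values, seasons):
        """Seasonal Kendall S and Z (no tie or serial-correlation corrections)."""
        values, seasons = np.asarray(values, float), np.asarray(seasons)
        S_total, var_total = 0.0, 0.0
        for s in np.unique(seasons):
            x = values[seasons == s]
            n = len(x)
            if n < 2:
                continue
            # Mann-Kendall S statistic for this season
            S = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            S_total += S
            var_total += n * (n - 1) * (2 * n + 5) / 18.0
        if S_total > 0:
            Z = (S_total - 1) / np.sqrt(var_total)
        elif S_total < 0:
            Z = (S_total + 1) / np.sqrt(var_total)
        else:
            Z = 0.0
        return S_total, Z

    # Example: 10 years of synthetic monthly data with a slight downward drift.
    rng = np.random.default_rng(0)
    months = np.tile(np.arange(12), 10)
    conc = 5.0 - 0.02 * np.arange(120) + rng.normal(0, 0.5, 120)
    print(seasonal_kendall(conc, months))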

  9. Results from the Big Spring basin water quality monitoring and demonstration projects, Iowa, USA

    Science.gov (United States)

    Rowden, R.D.; Liu, H.; Libra, R.D.

    2001-01-01

Agricultural practices, hydrology, and water quality of the 267-km² Big Spring groundwater drainage basin in Clayton County, Iowa, have been monitored since 1981. Land use is agricultural; nitrate-nitrogen (nitrate-N) and herbicides are the resulting contaminants in groundwater and surface water. Ordovician Galena Group carbonate rocks comprise the main aquifer in the basin. Recharge to this karstic aquifer is by infiltration, augmented by sinkhole-captured runoff. Groundwater is discharged at Big Spring, where quantity and quality of the discharge are monitored. Monitoring has shown a threefold increase in groundwater nitrate-N concentrations from the 1960s to the early 1980s. The nitrate-N discharged from the basin typically is equivalent to over one-third of the nitrogen fertilizer applied, with larger losses during wetter years. Atrazine is present in groundwater all year; however, contaminant concentrations in the groundwater respond directly to recharge events, and unique chemical signatures of infiltration versus runoff recharge are detectable in the discharge from Big Spring. Education and demonstration efforts have reduced nitrogen fertilizer application rates by one-third since 1981. Relating declines in nitrate and pesticide concentrations to inputs of nitrogen fertilizer and pesticides at Big Spring is problematic. Annual recharge has varied five-fold during monitoring, overshadowing any water-quality improvements resulting from incrementally decreased inputs. © Springer-Verlag 2001.
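
As a hedged illustration of the mass-balance bookkeeping behind statements like "the nitrate-N discharged is equivalent to over one-third of the fertilizer applied", the sketch below computes a load from concentration and discharge; every number in it is invented and is not Big Spring data.

    import numpy as np

    # Invented quarterly means: spring discharge (m3/s) and nitrate-N concentration (mg/L).
    discharge_m3s = np.array([4.5, 6.0, 9.2, 7.1])
    nitrate_mgL = np.array([8.0, 9.5, 10.2, 8.8])
    seconds_per_quarter = 365.25 / 4 * 86400

    # load (kg) = concentration (mg/L = g/m3) * discharge (m3/s) * time (s) / 1000
    load_kg = (nitrate_mgL * discharge_m3s * seconds_per_quarter / 1000).sum()

    fertilizer_applied_kg = 5.0e6  # hypothetical basin-wide nitrogen application
    print(f"Nitrate-N export: {load_kg:.3e} kg "
          f"({100 * load_kg / fertilizer_applied_kg:.0f}% of applied N)")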

  10. Ground-water appraisal in northwestern Big Stone County, west-central Minnesota

    Science.gov (United States)

    Soukup, W.G.

    1980-01-01

    The development of ground water for irrigation in northwestern Big Stone County has not kept up with development in other irrigable areas of the State. This is due, in part, to the absence of extensive surficial aquifers and the difficulty in locating buried aquifers.

  11. Forecasting in an integrated surface water-ground water system: The Big Cypress Basin, South Florida

    Science.gov (United States)

    Butts, M. B.; Feng, K.; Klinting, A.; Stewart, K.; Nath, A.; Manning, P.; Hazlett, T.; Jacobsen, T.

    2009-04-01

The South Florida Water Management District (SFWMD) manages and protects the state's water resources on behalf of 7.5 million South Floridians and is the lead agency in restoring America's Everglades - the largest environmental restoration project in US history. Many of the projects to restore and protect the Everglades ecosystem are part of the Comprehensive Everglades Restoration Plan (CERP). The region has a unique hydrological regime, with close connection between surface water and groundwater, and a complex managed drainage network with many structures. Added to the physical complexity are the conflicting needs of the ecosystem for protection and restoration, versus the substantial urban development with the accompanying water supply, water quality and flood control issues. In this paper a novel forecasting and real-time modelling system is presented for the Big Cypress Basin. The Big Cypress Basin includes 272 km of primary canals and 46 water control structures throughout the area that provide limited levels of flood protection, as well as water supply and environmental quality management. This system is linked to the South Florida Water Management District's extensive real-time (SCADA) data monitoring and collection system. Novel aspects of this system include the use of a fully distributed and integrated modeling approach and a new filter-based updating approach for accurately forecasting river levels. Because of the interaction between surface water and groundwater, a fully integrated forecast modeling approach is required. Indeed, in the results for Tropical Storm Fay in 2008, the groundwater levels show an extremely rapid response to heavy rainfall. Analysis of this storm also shows that updating levels in the river system can have a direct impact on groundwater levels.
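
The paper's filter-based updating scheme is not specified in the abstract; the sketch below shows one generic, hypothetical form such an update can take, propagating the latest observed model error into the forecast horizon with a constant gain and a decay factor. The gain, decay, and stage values are assumptions for illustration only.

    def update_forecast(model_forecast, last_observation, last_model_value,
                        gain=0.6, decay=0.8):
        """Correct forecast stages by propagating the latest model error forward."""
        error = last_observation - last_model_value
        corrected = []
        for lead, stage in enumerate(model_forecast, start=1):
            corrected.append(stage + gain * (decay ** lead) * error)
        return corrected

    # Simulated stages (m) for the next four time steps, plus the latest observation.
    model_forecast = [2.10, 2.25, 2.40, 2.35]
    print(update_forecast(model_forecast, last_observation=2.30, last_model_value=2.05))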

  12. Numerical simulation of groundwater and surface-water interactions in the Big River Management Area, central Rhode Island

    Science.gov (United States)

    Masterson, John P.; Granato, Gregory E.

    2013-01-01

    The Rhode Island Water Resources Board is considering use of groundwater resources from the Big River Management Area in central Rhode Island because increasing water demands in Rhode Island may exceed the capacity of current sources. Previous water-resources investigations in this glacially derived, valley-fill aquifer system have focused primarily on the effects of potential groundwater-pumping scenarios on streamflow depletion; however, the effects of groundwater withdrawals on wetlands have not been assessed, and such assessments are a requirement of the State’s permitting process to develop a water supply in this area. A need for an assessment of the potential effects of pumping on wetlands in the Big River Management Area led to a cooperative agreement in 2008 between the Rhode Island Water Resources Board, the U.S. Geological Survey, and the University of Rhode Island. This partnership was formed with the goal of developing methods for characterizing wetland vegetation, soil type, and hydrologic conditions, and monitoring and modeling water levels for pre- and post-water-supply development to assess potential effects of groundwater withdrawals on wetlands. This report describes the hydrogeology of the area and the numerical simulations that were used to analyze the interaction between groundwater and surface water in response to simulated groundwater withdrawals. The results of this analysis suggest that, given the hydrogeologic conditions in the Big River Management Area, a standard 5-day aquifer test may not be sufficient to determine the effects of pumping on water levels in nearby wetlands. Model simulations showed water levels beneath Reynolds Swamp declined by about 0.1 foot after 5 days of continuous pumping, but continued to decline by an additional 4 to 6 feet as pumping times were increased from a 5-day simulation period to a simulation period representative of long-term average monthly conditions. This continued decline in water levels with
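
A hedged, generic illustration of why a 5-day test can understate long-term drawdown is the Theis solution sketched below; the transmissivity, storativity, pumping rate, and distance are invented and are not the calibrated values of the Big River model.

    import numpy as np
    from scipy.special import exp1  # exponential integral; the Theis well function W(u)

    def theis_drawdown(Q, T, S, r, t):
        """Drawdown (ft) for pumping rate Q (ft3/d), transmissivity T (ft2/d),
        storativity S (-), distance r (ft), and time t (d)."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Invented parameters: roughly a 1,000 gal/min well, water-table-like storativity.
    Q, T, S, r = 192_000.0, 10_000.0, 0.15, 2_000.0
    for t in (5, 30, 180, 365):
        print(f"t = {t:3d} d  drawdown = {theis_drawdown(Q, T, S, r, t):.2f} ft")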

  13. Ecological Health and Water Quality Assessments in Big Creek Lake, AL

    Science.gov (United States)

    Childs, L. M.; Frey, J. W.; Jones, J. B.; Maki, A. E.; Brozen, M. W.; Malik, S.; Allain, M.; Mitchell, B.; Batina, M.; Brooks, A. O.

    2008-12-01

Big Creek Lake (aka J.B. Converse Reservoir) serves as the water supply for the majority of residents in Mobile County, Alabama. The area surrounding the reservoir serves as a gopher tortoise mitigation bank and is protected from further development; however, impacts from previous disasters and construction have greatly affected the Big Creek Lake area. The Escatawpa Watershed drains into the lake, and of the seven drainage streams, three have received a 303(d) (impaired water bodies) designation in the past. In the adjacent ecosystem, the forest is experiencing major stress from drought and pine bark beetle infestations. Various agencies are using control methods such as pesticide treatment to eradicate the beetles. There are many concerns about these control methods and the run-off into the ecosystem. In addition to pesticide control methods, the Highway 98 construction projects cross the north area of the lake. The community has expressed concern about both direct and indirect impacts of these construction projects on the lake. This project addresses concerns about water quality, increasing drought in the Southeastern U.S., forest health as it relates to vegetation stress, and state and federal needs for improved assessment methods supported by remotely sensed data to determine coastal forest susceptibility to pine bark beetles. Landsat TM, ASTER, MODIS, and EO-1/ALI imagery was employed to compute the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Moisture Index (NDMI), as well as to detect concentrations of suspended solids, chlorophyll, and water turbidity. This study utilizes NASA Earth Observation Systems to determine how environmental conditions and human activity relate to pine tree stress and the onset of pine beetle invasion, as well as relate current water quality data to community concerns and gain a better understanding of human impacts upon water resources.
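
The two indices named above are simple band ratios; the sketch below computes them with NumPy on small hypothetical reflectance arrays (this is not the project's processing chain).

    import numpy as np

    # Hypothetical reflectances (already scaled to 0-1) for a 2x2 pixel window.
    red = np.array([[0.08, 0.10], [0.25, 0.30]])
    nir = np.array([[0.45, 0.50], [0.28, 0.26]])
    swir = np.array([[0.20, 0.22], [0.33, 0.35]])

    ndvi = (nir - red) / (nir + red)    # vegetation greenness
    ndmi = (nir - swir) / (nir + swir)  # canopy moisture

    print("NDVI:\n", ndvi)
    print("NDMI:\n", ndmi)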

  14. How changes in top water bother big turning packs of up-going wet air

    Science.gov (United States)

    Wood, K.

    2017-12-01

    Big turning packs of up-going wet air form near areas of warm water at the top of big bodies of water. After these turning packs form, they usually get stronger if the top water stays warm. If the top water becomes less warm, the turning packs usually get less strong. Other things can change how strong a turning pack gets, like how wet the air around it is and if that air moves faster higher up than lower down. When these turning packs hit land, their rain and winds can hurt people and the stuff they own, especially if the turning pack is really strong. But it's hard to know how much stronger or less strong it will become before it hits land. Warm top water gives a turning pack of up-going wet air a lot of power, but cool top water doesn't, so we need to know how warm the top water is. Because I can't go into every turning pack myself, flying computers in outer space tell me what the top water is doing. I look at the top water near turning packs that get strong and see how it's different from the top water near those that get less strong. Top water that changes from warm to cool in a small area bothers a turning pack of up-going wet air, which then gets less strong. If we see these top water changes ahead of time, that might help us know what a turning pack will do before it gets close to land.

  15. Recharge Area, Base-Flow and Quick-Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter County, Missouri, 2000-04

    Science.gov (United States)

    Imes, Jeffrey L.; Plummer, Niel; Kleeschulte, Michael J.; Schumacher, John G.

    2007-01-01

Exploration for lead deposits has occurred in a mature karst area of southeast Missouri that is highly valued for its scenic beauty and recreational opportunities. The area contains the two largest springs in Missouri (Big Spring and Greer Spring), both of which flow into federally designated scenic rivers. Concerns about potential mining effects on the area's ground water and aquatic biota prompted an investigation of Big Spring. Water-level measurements made during 2000 helped define the recharge areas of Big Spring, Greer Spring, Mammoth Spring, and Boze Mill Spring. The data indicate two distinct potentiometric surfaces. The shallow potentiometric surface, where the depth-to-water is less than about 250 feet, tends to mimic topographic features and is strongly controlled by streams. The deep potentiometric surface, where the depth-to-water is greater than about 250 feet, represents ground-water hydraulic heads within the more mature karst areas. A highly permeable zone extends about 20 miles west of Big Spring toward the upper Hurricane Creek Basin. Deeper flowing water in the Big Spring recharge area is directed toward this permeable zone. The estimated sizes of the spring recharge areas are 426 square miles for Big Spring, 352 square miles for Greer Spring, 290 square miles for Mammoth Spring, and 54 square miles for Boze Mill Spring. A discharge accumulation curve using Big Spring daily mean discharge data shows no substantial change in the discharge pattern of Big Spring during the period of record (water years 1922 through 2004). The extended periods when the spring flow deviated from the trend line can be attributed to prolonged departures from normal precipitation. The maximum possible instantaneous flow from Big Spring has not been adequately defined because of backwater effects from the Current River during high-flow conditions. Physical constraints within the spring conduit system may restrict its maximum flow. The largest discharge measured at Big Spring
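
A discharge accumulation curve of the kind described above is just the cumulative sum of daily mean discharge compared against a straight trend line; the hedged sketch below uses synthetic data, not the Big Spring record.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic 20-year daily discharge record (cfs) with a seasonal cycle and noise.
    rng = np.random.default_rng(1)
    days = np.arange(365 * 20)
    daily_q = 470 + 120 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 60, days.size)

    cumulative = np.cumsum(daily_q)
    trend = np.polyval(np.polyfit(days, cumulative, 1), days)

    plt.plot(days, cumulative, label="accumulated discharge")
    plt.plot(days, trend, "--", label="linear trend")
    plt.xlabel("days since start of record")
    plt.ylabel("cumulative discharge (cfs-days)")
    plt.legend()
    plt.show()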

  16. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formation before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  17. 33 CFR 117.677 - Big Sunflower River.

    Science.gov (United States)

    2010-07-01

Title 33 (Navigation and Navigable Waters), Coast Guard, Department of Homeland Security, Bridges, Drawbridge Operation Regulations, Specific Requirements, Mississippi, § 117.677 Big Sunflower River (2010-07-01 edition). The draw of...

  18. Design and Implementation of a Training Course on Big Data Use in Water Management

    Directory of Open Access Journals (Sweden)

    Petra Koudelova

    2017-09-01

Full Text Available Big Data has great potential to be applied to research in the field of geosciences. Motivated by the opportunity provided by the Data Integration and Analysis System (DIAS) of Japan, we organized an intensive two-week course that aims to educate participants on Big Data and its exploitation to solve water management problems. When developing and implementing the Program, we identified two main challenges: (1) assuring that the training has a lasting effect and (2) developing an interdisciplinary curriculum suitable for participants of diverse professional backgrounds. To address these challenges, we introduced several distinctive features. The Program was based on experiential learning – the participants were required to solve real problems and worked in international and multidisciplinary teams. The lectures were strictly relevant to the case-study problems. Significant time was devoted to hands-on exercises, and participants received immediate feedback on individual assignments to ensure skills development. Our evaluation of the two occasions of the Program in 2015 and 2016 indicates significant positive outcomes. The successful completion of the individual assignments confirmed that the participants gained key skills related to the usage of DIAS and other tools. The final solutions to the case-study problems showed that the participants were able to integrate and apply the obtained knowledge, indicating that the Program’s format and curriculum were effective. We found that participants used DIAS in subsequent studies and work, thus suggesting that the Program had long-lasting effects. Our experience indicates that despite time constraints, short courses can effectively encourage researchers and practitioners to explore opportunities provided by Big Data.

  19. Nationwide water availability data for energy-water modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zemlick, Katie M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klise, Geoffrey Taylor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

The purpose of this effort is to explore where the availability of water could be a limiting factor in the siting of new electric power generation. To support this analysis, water availability is mapped at the county level for the conterminous United States (3109 counties). Five water sources are individually considered, including unappropriated surface water, unappropriated groundwater, appropriated water (western U.S. only), municipal wastewater and brackish groundwater. Also mapped is projected growth in non-thermoelectric consumptive water demand to 2035. Finally, the water availability metrics are accompanied by estimated costs associated with utilizing that particular supply of water. Ultimately, these data sets are being developed for use in the National Renewable Energy Laboratory's (NREL) Regional Energy Deployment System (ReEDS) model, designed to investigate the likely deployment of new energy installations in the U.S., subject to a number of constraints, particularly water.
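
As a hedged sketch of how county-level availability metrics like those described above might be organized and combined, the snippet below uses placeholder column names and numbers; it is not the Sandia/NREL dataset or schema.

    import pandas as pd

    counties = pd.DataFrame({
        "fips": ["35001", "48201"],
        "unappropriated_surface_af": [1200.0, 0.0],
        "unappropriated_ground_af": [800.0, 150.0],
        "appropriated_af": [300.0, 0.0],            # western U.S. only
        "municipal_wastewater_af": [450.0, 2200.0],
        "brackish_ground_af": [5000.0, 900.0],
        "cost_per_af_usd": [90.0, 310.0],           # placeholder cost of the marginal supply
    })

    # Sum the five supply columns (all end in "_af") into a total availability metric.
    source_cols = [c for c in counties.columns if c.endswith("_af")]
    counties["total_available_af"] = counties[source_cols].sum(axis=1)
    print(counties[["fips", "total_available_af", "cost_per_af_usd"]])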

  20. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension that is related to storage, analytics and visualization of big data; the human aspects of big data; and, in addition, the process management dimension, which addresses the aspects of big data management from both a technological and a business perspective.

  1. Water availability, water quality water governance: the future ahead

    Science.gov (United States)

    Tundisi, J. G.; Matsumura-Tundisi, T.; Ciminelli, V. S.; Barbosa, F. A.

    2015-04-01

The major challenge for achieving a sustainable future for water resources and water security is the integration of water availability, water quality and water governance. Water is unevenly distributed on Planet Earth and these disparities are the cause of several economic, ecological and social differences among the societies of many countries and regions. As a consequence of human misuse, growth of urbanization and soil degradation, water quality is deteriorating continuously. Key components for the maintenance of water quantity and water quality are the vegetation cover of watersheds, reduction of demand, and new water governance that includes integrated management, predictive evaluation of impacts, and ecosystem services. Future research needs are discussed.

  2. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply did not exist. Large volumes of data allow organizations to tap, in real time, the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and the society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  3. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
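
For programmatic access, a hedged usage sketch with the third-party pyBigWig package (not one of the UCSC utilities named above) is shown below; the file name and coordinates are placeholders.

    import pyBigWig

    bw = pyBigWig.open("coverage.bw")        # placeholder file name
    print(bw.chroms())                       # chromosome names and sizes stored in the index
    print(bw.stats("chr1", 0, 1_000_000))    # mean signal over a region (uses zoom levels)
    print(bw.values("chr1", 1000, 1010))     # base-resolution values for a small window
    bw.close()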

  4. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  5. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.

  6. On the visualization of water-related big data: extracting insights from drought proxies' datasets

    Science.gov (United States)

    Diaz, Vitali; Corzo, Gerald; van Lanen, Henny A. J.; Solomatine, Dimitri

    2017-04-01

Big data is a growing area of science from which hydroinformatics can benefit greatly. There have been a number of important developments in the area of data science aimed at analysis of large datasets. Such datasets related to water include measurements, simulations, reanalysis, scenario analyses and proxies. By convention, information contained in these databases is referenced to a specific time and space (i.e., longitude/latitude). This work is motivated by the need to extract insights from large water-related datasets, i.e., transforming large amounts of data into useful information that helps to better understand water-related phenomena, particularly drought. In this context, data visualization, part of data science, involves techniques to create and to communicate data by encoding it as visual graphical objects. They may help to better understand data and detect trends. Based on existing methods of data analysis and visualization, this work aims to develop tools for visualizing large water-related datasets. These tools were developed by taking advantage of existing data visualization libraries to produce a group of graphs that includes both polar area diagrams (PADs) and radar charts (RDs). In both graphs, time steps are represented by the polar angles and the percentages of area in drought by the radii. For illustration, three large datasets of drought proxies are chosen to identify trends, drought-prone areas, and the spatio-temporal variability of drought in a set of case studies. The datasets are (1) SPI-TS2p1 (1901-2002, 11.7 GB), (2) SPI-PRECL0p5 (1948-2016, 7.91 GB) and (3) SPEI-baseV2.3 (1901-2013, 15.3 GB). All of them are on a monthly basis and with a spatial resolution of 0.5 degrees. The first two were retrieved from the repository of the International Research Institute for Climate and Society (IRI). They are included in the Analyses Standardized Precipitation Index (SPI) project (iridl.ldeo.columbia.edu/SOURCES/.IRI/.Analyses/.SPI/). The third dataset was
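
A minimal sketch of the polar area diagram described above (time steps as polar angles, percentage of area in drought as the radius) can be produced with Matplotlib; the values below are synthetic, not from the SPI/SPEI datasets.

    import numpy as np
    import matplotlib.pyplot as plt

    months = ["J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D"]
    area_in_drought = np.array([5, 8, 12, 20, 35, 42, 40, 30, 22, 15, 10, 6])  # percent

    theta = np.linspace(0, 2 * np.pi, len(months), endpoint=False)
    width = 2 * np.pi / len(months)

    ax = plt.subplot(projection="polar")
    ax.bar(theta, area_in_drought, width=width, align="edge", alpha=0.7)
    ax.set_xticks(theta)
    ax.set_xticklabels(months)
    ax.set_title("Area in drought (%) by month (synthetic)")
    plt.show()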

  7. Big Data and Heath Impacts of Drinking Water Quality Violation

    Science.gov (United States)

    Allaire, M.; Zheng, S.; Lall, U.

    2017-12-01

Health impacts of drinking water quality violations are only understood at a coarse level in the United States. This limits identification of threats to water security in communities across the country. Substantial under-reporting is suspected due to requirements at U.S. public health institutes that waterborne illnesses be confirmed by health providers. In the era of 'big data', emerging information sources could offer insight into waterborne disease trends. In this study, we explore the use of fine-resolution sales data for over-the-counter medicine to estimate the health impacts of drinking water quality violations. We also demonstrate how unreported water quality issues can be detected by observing market behavior. We match a panel of supermarket sales data for the U.S. at the weekly level with geocoded violations data from 2006-2015. We estimate the change in anti-diarrheal medicine sales due to drinking water violations using a fixed effects model. We find that water quality violations have considerable effects on medicine sales. Sales nearly double due to Tier 1 violations, which pose an immediate health risk, and sales increase 15.1 percent due to violations related to microorganisms. Furthermore, our estimate of diarrheal illness cases associated with water quality violations indicates that the Centers for Disease Control and Prevention (CDC) reporting system may only capture about one percent of diarrheal cases due to impaired water. Incorporating medicine sales data could offer national public health institutes a game-changing way to improve monitoring of disease outbreaks. Since many disease cases are not formally diagnosed by health providers, consumption information could provide additional information to remedy under-reporting issues and improve water security in communities across the United States.
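
The abstract describes a fixed effects model on a county-week panel; a hedged sketch of that kind of two-way fixed effects regression is shown below using statsmodels, with invented variable names and a toy panel rather than the study's data.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy county-week panel with invented values.
    panel = pd.DataFrame({
        "county": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "week": [1, 2, 3, 1, 2, 3, 1, 2, 3],
        "log_sales": [4.1, 4.0, 4.9, 3.7, 3.8, 3.8, 5.0, 5.6, 5.1],
        "tier1_violation": [0, 0, 1, 0, 0, 0, 0, 1, 0],
    })

    # County and week fixed effects enter as categorical dummies.
    model = smf.ols("log_sales ~ tier1_violation + C(county) + C(week)", data=panel).fit()
    print(model.params["tier1_violation"])  # approximate proportional change in sales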

  8. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  9. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

Unique insights to implement big data analytics and reap big returns to your bottom line. Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  10. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

Full Text Available Given the importance that the term Big Data has acquired, this research sought to study and analyze exhaustively the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data, and finally it sought to identify the most relevant characteristics in the management of Big Data, so that everything concerning the central topic of the research can be understood. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; describing Big Data technologies; presenting some of the NoSQL databases, which are those that allow processing data in unstructured formats; and showing their data models and analysis technologies, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables were manipulated, and exploratory, because this research begins to explore the Big Data landscape.

  11. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include temperature increases having little direct influence on regeneration, because of the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  12. Water Quality in Big Cypress National Preserve and Everglades National Park - Trends and Spatial Characteristics of Selected Constituents

    Science.gov (United States)

    Miller, Ronald L.; McPherson, Benjamin F.; Sobczak, Robert; Clark, Christine

    2004-01-01

Seasonal changes in water levels and flows in Big Cypress National Preserve (BICY) and Everglades National Park (EVER) affect water quality. As water levels and flows decline during the dry season, physical, geochemical and biological processes increase the breakdown of organic materials and the build-up of organic waste, nutrients, and other constituents in the remaining surface water. For example, concentrations of total phosphorus in the marsh are less than 0.01 milligram per liter (mg/L) during much of the year. Concentrations can rise briefly above this value during the dry season and occasionally exceed 0.1 mg/L under drought conditions. Long-term changes in water levels, flows, water management, and upstream land use also affect water quality in BICY and EVER, based on analysis of available data (1959-2000). During the 1980's and early 1990's, specific conductance and concentrations of chloride increased in Taylor Slough and Shark River Slough. Chloride concentrations more than doubled from 1960 to 1990, primarily due to greater canal transport of high dissolved solids into the sloughs. Some apparent long-term trends in sulfate and total phosphorus were likely attributable, at least in part, to high percentages of less-than and zero values and to changes in reporting levels over the period of record. High values in nutrient concentrations were evident during dry periods of the 1980's and were attributable to increased canal inflows of nutrient-rich water, to increased nutrient releases from the breakdown of organic bottom sediment, or to increased build-up of nutrient waste from concentrations of aquatic biota and wildlife in remaining ponds. Long-term changes in water quality over the period of record are less pronounced in the western Everglades and the Big Cypress Swamp; however, short-term seasonal and drought-related changes are evident. Water quality varies spatially across the region because of natural variations in geology, hydrology, and vegetation

  13. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  14. Fuel and heavy water availability

    International Nuclear Information System (INIS)

    1980-01-01

    The general guidelines for the Working Group's evaluation of the availability of nuclear fuel and heavy water were set at the Organizing Conference of the International Nuclear Fuel Cycle Evaluation (INFCE), which was held in Washington, United States of America, 19-21 October 1977. The agreed technical and economic scope for the evaluation was to: (1) Estimate needs for nuclear energy and correlated needs for uranium and heavy water according to different fuel cycle strategies; (2) Review uranium availability with specific regard to: Assessment of resources and production capacities; policies and incentives for encouraging exploration and production including joint ventures; marketing policies and/or guarantees of sales for companies investing in exploration and production; marketing policies and/or guarantees of supply for utilities; technical development of exploration, mining and milling methods; (3) Review heavy water availability; (4) Review thorium availability; (5) Consider special needs of developing countries. The illustrations of availability and requirements developed in this report do provide a useful framework for considering future options and alternatives for the development of nuclear power

  15. Big Impacts and Transient Oceans on Titan

    Science.gov (United States)

    Zahnle, K. J.; Korycansky, D. G.; Nixon, C. A.

    2014-01-01

We have studied the thermal consequences of very big impacts on Titan [1]. Titan's thick atmosphere and volatile-rich surface cause it to respond to big impacts in a somewhat Earth-like manner. Here we construct a simple globally-averaged model that tracks the flow of energy through the environment in the weeks, years, and millennia after a big comet strikes Titan. The model Titan is endowed with 1.4 bars of N2 and 0.07 bars of CH4, methane lakes, a water ice crust, and enough methane underground to saturate the regolith to the surface. We assume that half of the impact energy is immediately available to the atmosphere and surface while the other half is buried at the site of the crater and is unavailable on time scales of interest. The atmosphere and surface are treated as isothermal. We make the simplifying assumptions that the crust is everywhere as methane-saturated as it was at the Huygens landing site, that the concentration of methane in the regolith is the same as it is at the surface, and that the crust is made of water ice. Heat flow into and out of the crust is approximated by step-functions. If the impact is great enough, ice melts. The meltwater oceans cool to the atmosphere conductively through an ice lid while, at their base, they melt their way into the interior, driven down in part by Rayleigh-Taylor instabilities between the dense water and the warm ice. Topography, CO2, and hydrocarbons other than methane are ignored. Methane and ethane clathrate hydrates are discussed quantitatively but not fully incorporated into the model.
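
As a rough, heavily hedged order-of-magnitude sketch in the spirit of a globally averaged energy budget (not the model of [1]), the snippet below asks how much a large comet impact could warm Titan's atmosphere if half the kinetic energy were available; the impactor size, speed, and density are assumptions, and neglecting heat uptake by the surface and latent heats makes the result an upper bound.

    import numpy as np

    # Approximate Titan constants.
    R_titan = 2.575e6     # m
    g = 1.35              # m/s2
    P_surface = 1.47e5    # Pa (~1.4-1.5 bar)
    cp_N2 = 1040.0        # J/(kg K)

    atm_mass = P_surface * 4 * np.pi * R_titan**2 / g  # globally integrated column mass

    # Hypothetical impactor: 50-km comet at 10.5 km/s, bulk density 500 kg/m3.
    d, v, rho = 5.0e4, 1.05e4, 500.0
    E_kinetic = 0.5 * rho * (np.pi / 6) * d**3 * v**2
    E_available = 0.5 * E_kinetic  # half assumed buried at the crater site

    dT = E_available / (atm_mass * cp_N2)  # neglects surface heat uptake and latent heats
    print(f"Atmospheric mass ~{atm_mass:.2e} kg, nominal global-mean warming ~{dT:.0f} K")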

  16. Ground-Water Availability in the United States

    Science.gov (United States)

    Reilly, Thomas E.; Dennehy, Kevin F.; Alley, William M.; Cunningham, William L.

    2008-01-01

    Ground water is among the Nation's most important natural resources. It provides half our drinking water and is essential to the vitality of agriculture and industry, as well as to the health of rivers, wetlands, and estuaries throughout the country. Large-scale development of ground-water resources with accompanying declines in ground-water levels and other effects of pumping has led to concerns about the future availability of ground water to meet domestic, agricultural, industrial, and environmental needs. The challenges in determining ground-water availability are many. This report examines what is known about the Nation's ground-water availability and outlines a program of study by the U.S. Geological Survey Ground-Water Resources Program to improve our understanding of ground-water availability in major aquifers across the Nation. The approach is designed to provide useful regional information for State and local agencies who manage ground-water resources, while providing the building blocks for a national assessment. The report is written for a wide audience interested or involved in the management, protection, and sustainable use of the Nation's water resources.

  17. Water Resources Availability in Kabul, Afghanistan

    Science.gov (United States)

    Akbari, A. M.; Chornack, M. P.; Coplen, T. B.; Emerson, D. G.; Litke, D. W.; Mack, T. J.; Plummer, N.; Verdin, J. P.; Verstraeten, I. M.

    2008-12-01

The availability of water resources is vital to the rebuilding of Kabul, Afghanistan. In recent years, droughts and increased water use for drinking water and agriculture have resulted in widespread drying of wells. Increasing numbers of returning refugees, rapid population growth, and potential climate change have led to heightened concerns for future water availability. The U.S. Geological Survey, with support from the U.S. Agency for International Development, began collaboration with the Afghanistan Geological Survey and Ministry of Energy and Water on water-resource investigations in the Kabul Basin in 2004. This has led to the compilation of historic and recent water-resources data, creation of monitoring networks, and analyses of geologic, geophysical, and remotely sensed data. The study presented herein provides an assessment of ground-water availability through the use of multidisciplinary hydrogeologic data analysis. Data elements include population density, climate, snowpack, geology, mineralogy, surface water, ground water, water quality, isotopic information, and water use. Data were integrated through the use of conceptual ground-water-flow model analysis and provide information necessary to make improved water-resource planning and management decisions in the Kabul Basin. Ground water is currently obtained from a shallow, less than 100-m thick, highly productive aquifer. CFC, tritium, and stable hydrogen and oxygen isotopic analyses indicate that most water in the shallow aquifer appears to be recharged post-1970 by snowmelt-supplied river leakage and secondarily by late winter precipitation. Analyses indicate that increasing withdrawals are likely to result in declining water levels and may cause more than 50 percent of shallow supply wells to become dry or inoperative, particularly in urbanized areas. The water quality in the shallow aquifer is degraded in urban areas by poor sanitation, and water availability concerns may be compounded by poor well

  18. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.
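
    As an illustration of the similarity-based gap filling described above, the following minimal Python sketch (an assumption-laden toy, not the REACH-across tool; the similarity matrix, toxicity values and the choice of k are invented) predicts a missing property for one substance as the similarity-weighted average of its most similar data-rich analogues:

        import numpy as np

        def read_across_predict(similarity, known_values, target_idx, k=3):
            """Predict a property for the substance at target_idx as the
            similarity-weighted mean of its k most similar neighbours with data.

            similarity   : (n, n) array of pairwise similarities in [0, 1]
            known_values : length-n array; np.nan where no test data exist
            """
            sims = similarity[target_idx].copy()
            sims[target_idx] = -np.inf                 # exclude the target itself
            sims[np.isnan(known_values)] = -np.inf     # exclude data-poor substances
            neighbours = np.argsort(sims)[-k:]         # k most similar analogues
            weights = similarity[target_idx, neighbours]
            return float(np.average(known_values[neighbours], weights=weights))

        # Toy example: 4 substances, substance 3 lacks test data.
        sim = np.array([[1.0, 0.9, 0.2, 0.8],
                        [0.9, 1.0, 0.3, 0.7],
                        [0.2, 0.3, 1.0, 0.1],
                        [0.8, 0.7, 0.1, 1.0]])
        toxicity = np.array([0.5, 0.6, 2.0, np.nan])
        print(read_across_predict(sim, toxicity, target_idx=3, k=2))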

  19. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

    Full Text Available The evolution of information systems and the growth in the use of the Internet and social networks has caused an explosion in the amount of available data relevant to the activities of the companies. Therefore, the treatment of these available data is vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored considering nine sectors of activity, such as financial, retail, healthcare, transports, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that despite the potential for using the big data to grow in the previously identified areas, there are still some challenges that need to be considered and mitigated, namely the privacy of information, the existence of qualified human resources to work with Big Data and the promotion of a data-driven organizational culture.

  20. Water availability pollution and control

    International Nuclear Information System (INIS)

    Qureshi, K.A.

    2001-01-01

    Water has played a very important role in the development of human society. Resources of water have shaped the development of people and nations. Management of water gave the birth to innovations and technologies. Our complex metropolitan civilization and advanced technologies have generated new demands for water. Its importance to society and government has never diminished. The growing concern over resources availability and a rapid spread of water pollution, the link between water supply and water quality have become more apparent. The global management of water demands economy in use, restricted chemical and sanitation emissions, population control, discouragement of urbanization and water pollution awareness can greatly assist in averting the water holocaust that the world is expecting to face in the years to come. The scientific community in Pakistan is required to diagnose these problems in a systematic way to give advance warning of expected water scarcity, water pollution, water related land degradation, urban growth and population to assure the water cycle integrity of our world. (author)

  1. Lunchtime School Water Availability and Water Consumption Among California Adolescents.

    Science.gov (United States)

    Bogart, Laura M; Babey, Susan H; Patel, Anisha I; Wang, Pan; Schuster, Mark A

    2016-01-01

    To examine the potential impact of California SB 1413, which required school districts to provide free, fresh drinking water during mealtimes in food service areas by July 1, 2011, on greater water consumption among California adolescents. Data were drawn from the 2012 and 2013 state-representative California Health Interview Survey. A total of 2,665 adolescents aged 12-17 years were interviewed regarding their water consumption and availability of free water during lunchtime at their school. Three-fourths reported that their school provided free water at lunchtime, mainly via fountains. In a multivariate model that controlled for age, gender, income, race/ethnicity, body mass index, and school type, adolescents in schools that provided free water consumed significantly more water than adolescents who reported that water was not available, bivariate (standard error) = .67 (.28), p = .02. School water access did not significantly vary across the 2 years. Lunchtime school water availability was related to water consumption, but a quarter of adolescents reported that their school did not provide free water at lunch. Future research should explore what supports and inducements might facilitate provision of drinking water during school mealtimes. Copyright © 2016 Society for Adolescent Health and Medicine. All rights reserved.

  2. 25 CFR 137.2 - Availability of water.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Availability of water. 137.2 Section 137.2 Indians BUREAU... COSTS, SAN CARLOS INDIAN IRRIGATION PROJECT, ARIZONA § 137.2 Availability of water. Pursuant to section... notice to announce when water is actually available for lands in private ownership under the project and...

  3. Regional Responses to Constrained Water Availability

    Science.gov (United States)

    Cui, Y.; Calvin, K. V.; Hejazi, M. I.; Clarke, L.; Kim, S. H.; Patel, P.

    2017-12-01

    There have been many concerns about water as a constraint to agricultural production, electricity generation, and many other human activities in the coming decades. Nevertheless, how different countries/economies would respond to such constraints has not been explored. Here, we examine the responding mechanism of binding water availability constraints at the water basin level and across a wide range of socioeconomic, climate and energy technology scenarios. Specifically, we look at the change in water withdrawals between energy, land-use and other sectors within an integrated framework, by using the Global Change Assessment Model (GCAM) that also endogenizes water use and allocation decisions based on costs. We find that, when water is taken into account as part of the production decision-making, countries/basins in general fall into three different categories, depending on the change of water withdrawals and water re-allocation between sectors. First, water is not a constraining factor for most of the basins. Second, advancements in water-saving technologies of the electricity generation cooling systems are sufficient of reducing water withdrawals to meet binding water availability constraints, such as in China and the EU-15. Third, water-saving in the electricity sector alone is not sufficient and thus cannot make up the lowered water availability from the binding case; for example, many basins in Pakistan, Middle East and India have to largely reduce irrigated water withdrawals by either switching to rain-fed agriculture or reducing production. The dominant responding strategy for individual countries/basins is quite robust across the range of alternate scenarios that we test. The relative size of water withdrawals between energy and agriculture sectors is one of the most important factors that affect the dominant mechanism.

  4. Review of 'plant available water' aspects of water use efficiency ...

    African Journals Online (AJOL)

    Review of 'plant available water' aspects of water use efficiency under ... model relating the water supply from a layered soil profile to water demand; the ... and management strategies to combat excessive water losses by deep drainage.

  5. 78 FR 19261 - Environmental Impacts Statements; Notice of Availability

    Science.gov (United States)

    2013-03-29

    ... Availability Responsible Agency: Office of Federal Activities, General Information (202) 564-7146 or http://www..., King Coal Highway Delbarton to Belo Project and Buffalo Mountain Surface Mine Clean Water Act Section..., USFS, MT, Jack Rabbit to Big Sky Meadow Village 161 kV Transmission Line Upgrade, Review Period Ends...

  6. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  7. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Big data refers to data that exceed the storage capacity and processing power of conventional systems. The term is used for data sets so large or complex that traditional data-handling tools cannot cope with their size. What counts as big data is a constantly moving target, ranging from a few dozen terabytes to many petabytes, because the amount of data produced by people, for example on social networking sites, grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It encompasses the rapid growth and evolution of both structured and unstructured data. Big data denotes a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex and of massive scale. Such data are difficult to work with using most relational database management systems and desktop statistics or visualization packages, and instead typically require massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to capture, organize and resolve the various types of data. In this paper we describe applications, problems and tools of big data and give an overview of the field.
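
    The "massively parallel software" mentioned above is usually organized around the map-reduce pattern. The sketch below is a deliberately small Python stand-in for that pattern (it is not Hadoop; the record list and the four-way split are invented for illustration): chunks are mapped to partial word counts in parallel worker processes and then reduced into one total.

        from collections import Counter
        from functools import reduce
        from multiprocessing import Pool

        def map_phase(chunk):
            # Map: emit (word, count) pairs for one chunk of records.
            return Counter(word for line in chunk for word in line.split())

        def reduce_phase(a, b):
            # Reduce: merge partial counts from two workers.
            a.update(b)
            return a

        if __name__ == "__main__":
            records = ["big data tools", "big data frameworks", "data analytics"] * 1000
            chunks = [records[i::4] for i in range(4)]      # naive split across 4 workers
            with Pool(4) as pool:
                partials = pool.map(map_phase, chunks)
            totals = reduce(reduce_phase, partials, Counter())
            print(totals.most_common(3))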

  8. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge, the value of the information. The information value is defined not only by extracting value from huge data sets, as fast and optimally as possible, but also by extracting value from uncertain and inaccurate data, in an innovative manner using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big data concept, its various classification criteria and architecture, as well as its impact on worldwide processes.

  9. Water Availability as a Measure of Cellulose Hydrolysis Efficiency

    DEFF Research Database (Denmark)

    Hsieh, Chia-Wen

    of sugars, salts, and surfactants impact the water relaxation time. Systems with high concentrations of sugars and salts tend to have low water availability, as these form strong interactions with water to keep their solubility, leaving less water available for hydrolysis. Thus, cellulase performance decreases. However, the addition of surfactants such as polyethylene glycol (PEG) increases the water mobility, leading to higher water availability, and ultimately higher glucose production. More specifically, the higher water availability boosts the activity of processive cellulases. Thus, water availability is vital for efficient hydrolysis, especially at high dry matter content where water availability is low. At high dry matter content, cellulase activity changes water interactions with biomass, affecting the water mobility. While swelling and fiber loosening also take place during hydrolysis...

  10. 46 CFR 76.10-3 - Water availability.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 3 2010-10-01 2010-10-01 false Water availability. 76.10-3 Section 76.10-3 Shipping... Fire Main System, Details § 76.10-3 Water availability. (a) On all vessels on an international voyage, regardless of the date of construction, water pressure from the firemain protecting enclosed spaces shall be...

  11. Differences in diet and activity pattern between two groups of Alouatta palliata associated with the availability of big trees and fruit of top food taxa.

    Science.gov (United States)

    Dunn, Jacob C; Cristóbal-Azkarate, Jurgi; Veà, Joaquím J

    2009-08-01

    The threat that forest fragmentation and habitat loss presents for several Alouatta taxa requires us to determine the key elements that may promote the persistence of howler monkeys in forest fragments and to evaluate how changes in the availability of these elements may affect their future conservation prospects. In this study we analyzed the relationship between the availability of both big trees of top food taxa (BTTFT) (diameter at breast height > 60) and fruit of top food taxa (FrTFT) in the home ranges of two groups of Alouatta palliata mexicana occupying different forest fragments in Los Tuxtlas, Mexico, and their diet and activity pattern. Both study groups preferred big trees for feeding and the group with lower availability of BTTFT in their home range fed from more, smaller food sources. Furthermore, both study groups also increased the number of food sources when their consumption of fruit decreased, and the group with lower availability of FrTFT in their home range fed from more food sources. The increase in the number of food sources used under such conditions, in turn, set up a process of higher foraging effort and lower rest. In summary, our results support other studies that suggest that the availability of big trees and fruit may be two important elements influencing the persistence of howler monkeys in forest fragments.

  12. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is, and will increasingly be, used as a tool for everything that happens both online and offline. Online use is, of course, a real habit, and Big Data is found in this medium, offering many advantages and real help for consumers. In this paper we discuss Big Data as an asset in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit discussed in this paper is presented in the cloud section.

  13. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  14. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations.

  15. Big data analytics for the Future Circular Collider reliability and availability studies

    Science.gov (United States)

    Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter

    2017-10-01

    Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
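
    For readers unfamiliar with the quantities such an analytics environment has to produce, the following minimal Python sketch (illustrative only; the fault log, the exponential survival model and the helper names are assumptions, not the CERN toolchain) derives MTBF, MTTR, steady-state availability and a simple reliability function from monitored up/down intervals:

        import numpy as np

        def reliability_summary(uptimes_h, downtimes_h):
            """Crude component summary from monitored fault intervals.

            uptimes_h   : observed times-between-failures in hours
            downtimes_h : observed repair/recovery times in hours
            """
            mtbf = float(np.mean(uptimes_h))          # mean time between failures
            mttr = float(np.mean(downtimes_h))        # mean time to repair
            availability = mtbf / (mtbf + mttr)       # steady-state availability

            def reliability(t):
                # Exponential (constant failure rate) survival model.
                return float(np.exp(-t / mtbf))

            return mtbf, mttr, availability, reliability

        # Toy fault log for one subsystem (hours).
        up = [120.0, 340.0, 95.0, 410.0, 260.0]
        down = [1.5, 0.5, 4.0, 2.0, 1.0]
        mtbf, mttr, A, R = reliability_summary(up, down)
        print(f"MTBF={mtbf:.0f} h, MTTR={mttr:.1f} h, A={A:.4f}, R(24 h)={R(24):.3f}")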

  16. Water on Mars - Volatile history and resource availability

    Science.gov (United States)

    Jakosky, Bruce M.

    1990-01-01

    An attempt is made to define the available deposits of water in the near-surface region of Mars which will be available to human exploration missions. The Martian seasonal water cycle is reviewed, and geochemical and geological constraints on the availability of water are examined. It is concluded that the only sure source of water in amounts significant as a resource are in the polar ice deposits.

  17. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article analyses ‘Big Data’, which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the ‘Big Data’ issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The analysis classifies the ‘Big Data’ issue as a form of information barrier. As such, the issue can be solved correctly and encourages the development of scientific and computational methods.

  18. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges, which ultimately seems to be of greater importance than the sheer data volume.

  19. Water Availability and Management of Water Resources

    Science.gov (United States)

    One of the most pressing national and global issues is the availability of freshwater due to global climate change, energy scarcity issues and the increase in world population and accompanying economic growth. Estimates of water supplies and flows through the world's hydrologic c...

  20. Water Availability in a Warming World

    Science.gov (United States)

    Aminzade, Jennifer

    As climate warms during the 21st century, the resultant changes in water availability are a vital issue for society, perhaps even more important than the magnitude of warming itself. Yet our climate models disagree in their forecasts of water availability, limiting our ability to plan accordingly. This thesis investigates future water availability projections from Coupled Ocean-Atmosphere General Circulation Models (GCMs), primarily using two water availability measures: soil moisture and the Supply Demand Drought Index (SDDI). Chapter One introduces methods of measuring water availability and explores some of the fundamental differences between soil moisture, SDDI and the Palmer Drought Severity Index (PDSI). SDDI and PDSI tend to predict more severe future drought conditions than soil moisture; 21st century projections of SDDI show conditions rivaling North American historic mega-droughts. We compare multiple potential evapotranspiration (EP) methods in New York using input from the GISS Model ER GCM and local station data from Rochester, NY, and find that they compare favorably with local pan evaporation measurements. We calculate SDDI and PDSI values using various EP methods, and show that changes in future projections are largest when using EP methods most sensitive to global warming, not necessarily methods producing EP values with the largest magnitudes. Chapter Two explores the characteristics and biases of the five GCMs and their 20th and 21st century climate projections. We compare atmospheric variables that drive water availability changes globally, zonally, and geographically among models. All models show increases in both dry and wet extremes for SDDI and soil moisture, but increases are largest for extreme drying conditions using SDDI. The percentage of gridboxes that agree on the sign of change of soil moisture and SDDI between models is very low, but does increase in the 21st century. Still, differences between models are smaller than differences
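
    As a rough illustration of the supply-demand idea behind indices such as SDDI, the sketch below computes a simplified stand-in (not the published SDDI or PDSI formulations): the standardized anomaly of precipitation minus potential evapotranspiration relative to a baseline period. The synthetic series and the 30-year baseline are invented for illustration.

        import numpy as np

        def simple_supply_demand_index(precip_mm, pot_evap_mm, baseline_years=30):
            """Standardized anomaly of (supply - demand); negative values mean drying.

            precip_mm, pot_evap_mm : annual series of equal length (mm/year)
            """
            deficit = np.asarray(precip_mm) - np.asarray(pot_evap_mm)   # P - EP
            base = deficit[:baseline_years]                             # reference period
            return (deficit - base.mean()) / base.std(ddof=1)

        # Illustrative series: rising potential evapotranspiration under warming.
        years = np.arange(1971, 2101)
        rng = np.random.default_rng(0)
        p = 800 + rng.normal(0, 60, years.size)               # roughly stationary precipitation
        ep = 700 + 1.5 * (years - years[0]) + rng.normal(0, 40, years.size)
        index = simple_supply_demand_index(p, ep)
        print(f"mean index 2071-2100: {index[-30:].mean():.2f}")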

  1. Advanced Modeling in Excel: from Water Jets to Big Bang

    Science.gov (United States)

    Ignatova, Olga; Chyzhyk, D.; Willis, C.; Kazachkov, A.

    2006-12-01

    An international students' project is presented, focused on the application of Open Office and Excel spreadsheets to the modeling of projectile-motion type dynamical systems. Varying the parameters of plotted and animated families of jets flowing at different angles out of holes in the wall of a water-filled reservoir [1,2] revealed unexpected peculiarities of the envelopes, vertices, intersections and landing points of the virtual trajectories. Comparisons with real-life systems and rigorous calculations were performed to prove the predictions of the computer experiments. By the same technique, the kinematics of fireworks was analyzed. On this basis a two-dimensional 'firework' computer model of the Big Bang was designed and studied, and its relevance and limitations checked. 1. R. Ehrlich, Turning the World Inside Out (Princeton University Press, Princeton, NJ, 1990), pp. 98-100. 2. A. Kazachkov, Yu. Bogdan, N. Makarovsky, N. Nedbailo, A Bucketful of Physics, in R. Pinto, S. Surinach (eds), International Conference Physics Teacher Education Beyond 2000: Selected Contributions (Elsevier Editions, Paris, 2001), pp. 563-564. Sponsored by Courtney Willis.
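
    The water-jet part of such a spreadsheet model rests on Torricelli's law combined with free fall. The minimal Python sketch below (an analogue of the spreadsheet calculation, not the authors' workbook; tank height and hole positions are invented) reproduces the classic result that the landing distance is R = 2*sqrt(h*(H-h)), maximal for a hole at half the water level.

        import numpy as np

        G = 9.81  # m/s^2

        def jet_landing_distance(hole_height, water_level):
            """Horizontal range of a jet from a small hole in a tank wall.

            Efflux speed from Torricelli's law: v = sqrt(2 g (H - h)); the jet then
            falls freely from height h, so the range equals 2 * sqrt(h * (H - h)).
            """
            h, H = hole_height, water_level
            v = np.sqrt(2 * G * (H - h))       # horizontal efflux speed
            t_fall = np.sqrt(2 * h / G)        # time to reach the ground
            return v * t_fall

        H = 1.0                                # tank filled to 1 m
        for h in (0.1, 0.25, 0.5, 0.75, 0.9):
            print(f"hole at {h:.2f} m -> lands at {jet_landing_distance(h, H):.3f} m")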

  2. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving an increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries on relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasms, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  3. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. However, more recently, healthcare researchers are exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review the current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to more improved healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  4. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  5. Water availability and management for food security

    Science.gov (United States)

    Food security is directly linked to water security for food production. Water availability for crop production will be dependent upon precipitation or irrigation, soil water holding capacity, and crop water demand. The linkages among these components in rainfed agricultural systems shows the impact ...

  6. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements for data management. If the term "big data" is defined as data collections that are too large to move then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  7. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's, so often mentioned in relation to big data, entail? By way of introduction to

  8. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  9. Review of 'plant available water' aspects of water use efficiency ...

    African Journals Online (AJOL)

    ... enhanced understanding of the system, thereby enabling the formulation of a quantitative model relating the water supply from a layered soil profile to water demand; the formulation of logical quantitative definitions for crop-ecotope specific upper and lower limits of available water; the identification of the harmful rootzone ...

  10. Proposing water balance method for water availability estimation in Indonesian regional spatial planning

    Science.gov (United States)

    Juniati, A. T.; Sutjiningsih, D.; Soeryantono, H.; Kusratmoko, E.

    2018-01-01

    The water availability (WA) of a region is one of the important considerations both in the formulation of spatial plans and in the evaluation of the effectiveness of actual land use in providing sustainable water resources. Information on land-water needs vis-a-vis their availability in a region determines whether there is a surplus or a deficit, informing effective land use. How to calculate water availability has been described in the Guideline in Determining the Carrying Capacity of the Environment in Regional Spatial Planning. However, the method of determining the supply and demand of water in these guidelines is debatable, since WA is determined using the rational method. The rational method was developed as the basis for storm-drain design practice and is essentially a peak-discharge calculation method. This paper reviews the literature on water availability estimation methods descriptively and presents arguments that the water balance method is a more fundamental and appropriate tool for estimating water availability. A better water availability estimation method would improve the practice of preparing Regional Spatial Plan (RSP) formulations as well as the evaluation of land use capacity to provide sustainable water resources.
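
    The contrast drawn above can be made concrete with a small sketch. The Python functions below (illustrative only; the runoff coefficient, rainfall intensity, evapotranspiration and catchment values are invented) show why the two methods answer different questions: the rational method returns a design peak discharge in m^3/s, while a simple annual balance returns an available-water volume.

        def rational_peak_flow(c, intensity_mm_per_h, area_km2):
            """Rational method: a *peak discharge* estimate for storm-drain design,
            Q = C * i * A, returned in m^3/s (i in mm/h, A in km^2)."""
            return c * (intensity_mm_per_h / 1000 / 3600) * (area_km2 * 1e6)

        def annual_water_balance(precip_mm, actual_et_mm, storage_change_mm, area_km2):
            """Simple annual balance: available water = (P - AET - dS) * A,
            returned as a volume in million m^3 per year."""
            runoff_mm = precip_mm - actual_et_mm - storage_change_mm
            return runoff_mm / 1000 * (area_km2 * 1e6) / 1e6

        area = 250.0  # km^2, illustrative catchment
        print(f"rational peak flow : {rational_peak_flow(0.4, 50.0, area):.1f} m^3/s")
        print(f"water balance yield: {annual_water_balance(1800.0, 1100.0, 20.0, area):.1f} Mm^3/yr")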

  11. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  12. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of the most promising Big Data usage domains are connected with distinct player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the major business ecosystem player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and through players' responsive strategies. System dynamics is used to visualize the relationships in the proposed model.

  13. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

    Full Text Available The objective of this paper is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": that huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  14. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In everyday terms we call the current era the Modern Era, which in the field of Information Technology can also be called the era of Big Data. Our daily lives in today's world are advancing rapidly, never quenching our thirst for more. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of the healthcare domain using big data and analytics. While big data is being stored all the time, mainly allowing us to look back at history, the emphasis now should be on analysis to improve medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader scope using Hadoop, which is just the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to the conventional techniques available.

  15. The Information Panopticon in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Martin Berner

    2014-04-01

    Full Text Available Taking advantage of big data opportunities is challenging for traditional organizations. In this article, we take a panoptic view of big data – obtaining information from more sources and making it visible to all organizational levels. We suggest that big data requires the transformation from command and control hierarchies to post-bureaucratic organizational structures wherein employees at all levels can be empowered while simultaneously being controlled. We derive propositions that show how to best exploit big data technologies in organizations.

  16. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  17. Compounding Impacts of Human-Induced Water Stress and Climate Change on Water Availability

    Science.gov (United States)

    Mehran, Ali; AghaKouchak, Amir; Nakhjiri, Navid; Stewardson, Michael J.; Peel, Murray C.; Phillips, Thomas J.; Wada, Yoshihide; Ravalico, Jakin K.

    2017-01-01

    The terrestrial phase of the water cycle can be seriously impacted by water management and human water use behavior (e.g., reservoir operation, and irrigation withdrawals). Here we outline a method for assessing water availability in a changing climate, while explicitly considering anthropogenic water demand scenarios and water supply infrastructure designed to cope with climatic extremes. The framework brings a top-down and bottom-up approach to provide localized water assessment based on local water supply infrastructure and projected water demands. When our framework is applied to southeastern Australia we find that, for some combinations of climatic change and water demand, the region could experience water stress similar or worse than the epic Millennium Drought. We show considering only the influence of future climate on water supply, and neglecting future changes in water demand and water storage augmentation might lead to opposing perspectives on future water availability. While human water use can significantly exacerbate climate change impacts on water availability, if managed well, it allows societies to react and adapt to a changing climate. The methodology we present offers a unique avenue for linking climatic and hydrologic processes to water resource supply and demand management and other human interactions.

  18. Frameworks for Assessing Human Influence on Water Availability

    Science.gov (United States)

    AghaKouchak, A.; Mehran, A.; Mazdiyasni, O.; Ashraf, B.

    2016-12-01

    The water cycle is tightly coupled with water management and human water use behavior. Human activities and water use behavior can intensify the effects of a meteorological drought (a notion referred to as Anthropogenic Drought). In this presentation, we provide a general definition of anthropogenic drought. We then briefly review two different methods for assessing human influence on water availability: (1) a data-driven multivariate approach that links the information on inflow and surface reservoir storage to water demand; (2) A model-based framework that brings a top-down and bottom-up approach to provide localized water assessment based on local available infrastructure and projected water demands. Finally, we will show how the proposed methods can be used for water management scenario analysis (e.g., local water availability based on different human water demands scenarios). This presentation is primarily based on Mehran et al (Mehran A., Mazdiyasni O., AghaKouchak A., 2015, A Hybrid Framework for Assessing Socioeconomic Drought: Linking Climate Variability, Local Resilience, and Demand, Journal of Geophysical Research, 120 (15), 7520-7533, doi: 10.1002/2015JD023147.) and AghaKouchak et al (AghaKouchak A., Feldman D., Hoerling M., Huxman T., Lund J., 2015, Recognize Anthropogenic Drought, Nature, 524 (7566), 409-4011, doi:10.1038/524409a).

  19. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Substrate water availability and seed water content on niger germination

    Directory of Open Access Journals (Sweden)

    Carla Regina Baptista Gordin

    2015-09-01

    Full Text Available Niger is an oleaginous species whose cultivation has been spreading, but there is not much information on the adverse conditions during its seedling establishment. This study aimed at evaluating the effects of substrate water availability and seed water content on niger germination. Seeds were moistened using the humid atmosphere method for 0, 24, 48 and 72 hours, obtaining water contents of 7.0 %, 12.8 %, 16.8 % and 32.2 %. Then, they were sown in substrate moistened with PEG 6000 solutions with different osmotic potentials: 0.0 MPa (control), -0.1 MPa, -0.2 MPa, -0.3 MPa and -0.4 MPa. A completely randomized design, in a 4 x 5 factorial scheme (water content x osmotic potential), with four replications of 50 seeds, was used. First count and germination percentage, germination speed index and mean time, shoot and root length and seedling dry weight were evaluated. The reduction in the substrate osmotic potential decreases niger seed germination and seedling growth, regardless of water content, but more evidently at seed water contents below 32.2 % and 12.8 %, respectively.

  1. Water quality and amphibian health in the Big Bend region of the Rio Grande Basin

    Science.gov (United States)

    Sharma, Bibek; Hu, F.; Carr, J.A.; Patino, Reynaldo

    2011-01-01

    Male and female Rio Grande leopard frogs (Rana berlandieri) were collected in May 2005 from the main stem and tributaries of the Rio Grande in the Big Bend region of Texas. Frogs were examined for (1) incidence of testicular ovarian follicles in males; (2) thyroid epithelial cell height, a potential index of exposure to thyroid-disrupting contaminants; and (3) incidence of liver melanomacrophage aggregates, a general index of exposure to contaminants. Standard parameters of surface water quality and concentrations of selected elements, including heavy metals, were determined at each frog collection site. Heavy metals also were measured in whole-frog composite extracts. Water cadmium concentrations in most sites and chloride concentrations in the main stem exceeded federal criteria for freshwater aquatic life. Mercury was detected in frogs from the two collection sites in Terlingua Creek. There was a seventeen percent incidence of testicular ovarian follicles in male frogs. Mean thyroid epithelial cell height was greater in frogs from one of the Terlingua Creek sites (Terlingua Abajo). No differences were observed in the incidence of hepatic macrophage aggregates among sites. In conclusion, although potential cause-effect relationships between indices of habitat quality and amphibian health could not be established, the results of this study raise concerns about the general quality of the aquatic habitat and the potential long-term consequences to the aquatic biota of the Big Bend region. The presence of ovarian follicles in male frogs is noteworthy but further study is necessary to determine whether this phenomenon is natural or anthropogenically induced.

  2. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    Full Text Available The term Big Data was coined to refer to the vast volumes of data that cannot be managed by traditional data handling methods or techniques. Big Data plays an indispensable role in various fields, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care, stocks. Big data analytics is the method of examining big data to reveal hidden patterns, unknown correlations and other important information that can be used to make better decisions. There has been a perpetually expanding interest in big data because of its fast growth and because it covers many areas of application. The open-source Apache Hadoop technology, written in Java and running on the Linux operating system, was used. The primary contribution of this work is to present an effective and free solution for big data applications in a distributed environment, with its advantages and an indication of its ease of use. There thus emerges a need for an analytical review of new developments in big data technology. Healthcare is one of the world's greatest concerns. Big data in healthcare refers to electronic health data sets related to patient health and well-being. Data in the healthcare area is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.

  3. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, abrasion and breakage of big gears occur often, affecting normal production and causing unnecessary economic loss. An intelligent test method for worn big gears is put forward, mainly aimed at the constraints of high production cost, long production cycles and high-intensity manual repair welding. The measurement equations were transformed for involute spur gears: the original polar coordinate equations were converted into rectangular coordinate equations. The measurement principle for big gear abrasion is introduced, a detection principle diagram is given, and the method for realizing the detection route is described. An OADM12 laser sensor was selected, and detection of the worn area of the big gear was realized by the detection mechanism. Measured data from unworn and worn gears were loaded into a calculation program written in Visual Basic, from which the abrasion quantity of the big gear can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
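
    The polar-to-rectangular transformation mentioned above can be illustrated with the standard involute relations r = r_b / cos(alpha) and phi = tan(alpha) - alpha. The Python sketch below (not the paper's Visual Basic program; the base radius, pressure angle and the mock "measured" point are invented) converts one nominal flank point to rectangular coordinates and estimates wear as the radial deviation of a measured point:

        import numpy as np

        def involute_polar(base_radius, pressure_angle_rad):
            """Polar form of an involute tooth flank for a given roll (pressure) angle."""
            r = base_radius / np.cos(pressure_angle_rad)
            phi = np.tan(pressure_angle_rad) - pressure_angle_rad   # involute function inv(alpha)
            return r, phi

        def involute_rectangular(base_radius, pressure_angle_rad):
            """Same flank point transformed to rectangular (x, y) coordinates."""
            r, phi = involute_polar(base_radius, pressure_angle_rad)
            return r * np.cos(phi), r * np.sin(phi)

        def wear_depth(nominal_xy, measured_xy):
            """Radial deviation between nominal and laser-measured profile points."""
            return np.hypot(*nominal_xy) - np.hypot(*measured_xy)

        rb = 150.0                                   # base-circle radius, mm (illustrative)
        alpha = np.radians(20.0)
        xy_nom = involute_rectangular(rb, alpha)
        xy_meas = (xy_nom[0] - 0.12, xy_nom[1] - 0.05)   # pretend sensor reading on a worn flank
        print(f"nominal: ({xy_nom[0]:.3f}, {xy_nom[1]:.3f}) mm, wear ~ {wear_depth(xy_nom, xy_meas):.3f} mm")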

  4. Urban water metabolism efficiency assessment: integrated analysis of available and virtual water.

    Science.gov (United States)

    Huang, Chu-Long; Vause, Jonathan; Ma, Hwong-Wen; Yu, Chang-Ping

    2013-05-01

    Resolving the complex environmental problems of water pollution and shortage which occur during urbanization requires the systematic assessment of urban water metabolism efficiency (WME). While previous research has tended to focus on either available or virtual water metabolism, here we argue that the systematic problems arising during urbanization require an integrated assessment of available and virtual WME, using an indicator system based on material flow analysis (MFA) results. Future research should focus on the following areas: 1) analysis of available and virtual water flow patterns and processes through urban districts in different urbanization phases in years with varying amounts of rainfall, and their environmental effects; 2) based on the optimization of social, economic and environmental benefits, establishment of an indicator system for urban WME assessment using MFA results; 3) integrated assessment of available and virtual WME in districts with different urbanization levels, to facilitate study of the interactions between the natural and social water cycles; 4) analysis of mechanisms driving differences in WME between districts with different urbanization levels, and the selection of dominant social and economic driving indicators, especially those impacting water resource consumption. Combinations of these driving indicators could then be used to design efficient water resource metabolism solutions, and integrated management policies for reduced water consumption. Copyright © 2013 Elsevier B.V. All rights reserved.
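
    Although the indicator system itself is only outlined above, MFA-based efficiency indicators of this kind are typically simple ratios over the accounted flows. The sketch below (purely illustrative; the flow totals, GDP figure and indicator choices are assumptions, not the authors' indicator set) computes three such ratios from available and virtual water inflows:

        def water_metabolism_indicators(available_in_m3, virtual_in_m3, recycled_m3, gdp_usd):
            """Illustrative MFA-style indicators for an urban district:
            total water metabolized, recycling share, and economic water productivity."""
            total = available_in_m3 + virtual_in_m3          # all water entering the district
            recycling_rate = recycled_m3 / total             # share of flows reused internally
            productivity = gdp_usd / total                   # USD per m^3 metabolized
            return total, recycling_rate, productivity

        total, rr, wp = water_metabolism_indicators(
            available_in_m3=5.0e8, virtual_in_m3=3.0e8, recycled_m3=0.6e8, gdp_usd=4.0e10)
        print(f"total = {total:.2e} m^3, recycling = {rr:.1%}, productivity = {wp:.1f} USD/m^3")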

  5. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  6. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  7. Topical and working papers on heavy water requirements and availability

    International Nuclear Information System (INIS)

    The documents included in this report are: Heavy water requirements and availability; technological infrastructure for heavy water plants; heavy water plant siting; hydrogen and methane availability; economics of heavy water production; monothermal, water fed heavy water process based on the ammonia/hydrogen isotopic exchange; production strategies to meet demand projections; hydrogen availability; deuterium sources; the independent UHDE heavy water process

  8. Big Data as a Source for Official Statistics

    Directory of Open Access Journals (Sweden)

    Daas Piet J.H.

    2015-06-01

    Full Text Available More and more data are being produced by an increasing number of electronic devices physically surrounding us and on the internet. The large amount of data and the high frequency at which they are produced have resulted in the introduction of the term ‘Big Data’. Because these data reflect many different aspects of our daily lives and because of their abundance and availability, Big Data sources are very interesting from an official statistics point of view. This article discusses the exploration of both opportunities and challenges for official statistics associated with the application of Big Data. Experiences gained with analyses of large amounts of Dutch traffic loop detection records and Dutch social media messages are described to illustrate the topics characteristic of the statistical analysis and use of Big Data.

  9. Future Availability of Water Supply from Karstic Springs under Probable Climate Change. The case of Aravissos, Central Macedonia, Greece.

    Science.gov (United States)

    Vafeiadis, M.; Spachos, Th.; Zampetoglou, K.; Soupilas, Th.

    2012-04-01

    The test site of Aravissos is located about 70 km west (W-NW) of Thessaloniki, on the southern flanks of Mount Païko in the northern part of Central Macedonia. The karstic Aravissos springs supply 40% of the total volume needed for the water supply of Thessaloniki, Greece. As the water is of excellent quality, it is fed directly into the distribution network without any prior treatment. The availability of this source is therefore of high importance for the sustainable water supply of this area of almost 1,000,000 inhabitants. The water system of Aravissos is developed in a karstic limestone of Late Cretaceous age that covers almost the entire western part of the large anticline of Païko Mountain. The climate in this area and in the water consumption area, Thessaloniki, is typically Mediterranean, with mild and humid winters and hot and dry summers. The total annual number of rainy days is around 110. The production of the Aravissos springs depends mostly on the annual precipitation. As the feeding catchment and the karst aquifer are not well defined, a practical empirical balance model that contains only well-known relevant terms is applied to simulate the operation of the springs under normal water extraction for water supply at present. The estimation of future weather conditions is based on GCM and RCM simulation data and the extension of trend lines of the actual data. The future evolution of the availability of adequate water quantities from the springs is finally estimated from the balance model and the simulated future climatic data. This study has been realised within the project CC-WaterS, funded by the SEE program of the European Regional Development Fund (http://www.ccwaters.eu/).
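
    The abstract does not give the balance equations, so the following is a minimal sketch, under stated assumptions, of the kind of lumped monthly storage balance such an empirical model might use: storage is increased by effective recharge (a fixed coefficient times precipitation over an assumed catchment area) and depleted by pumping and by spring outflow treated as a linear reservoir. The catchment area, coefficients, and input series are hypothetical, not the CC-WaterS model.

```python
# Hypothetical lumped monthly balance for a karst spring (illustration only).

def simulate_spring(precip_mm, pumping_m3, area_km2=250.0,
                    recharge_coef=0.45, k_outflow=0.08, storage0_m3=4.0e8):
    """Return monthly spring discharge (m^3) for given precipitation and pumping.

    precip_mm     : list of monthly precipitation depths (mm)
    pumping_m3    : list of monthly water-supply abstractions (m^3)
    recharge_coef : assumed fraction of precipitation that recharges the aquifer
    k_outflow     : assumed linear-reservoir coefficient (1/month)
    """
    storage = storage0_m3
    discharge = []
    for p, q_pump in zip(precip_mm, pumping_m3):
        recharge = recharge_coef * (p / 1000.0) * area_km2 * 1.0e6  # mm -> m^3
        q_spring = k_outflow * storage                              # linear reservoir
        storage = max(storage + recharge - q_spring - q_pump, 0.0)
        discharge.append(q_spring)
    return discharge

# Example: a dry-ish year with a constant abstraction of 3 Mm^3 per month.
print(simulate_spring(precip_mm=[90, 70, 60, 40, 20, 5, 2, 4, 25, 60, 80, 95],
                      pumping_m3=[3.0e6] * 12))
```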

  10. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  11. Will Organization Design Be Affected By Big Data?

    Directory of Open Access Journals (Sweden)

    Giles Slinger

    2014-12-01

    Full Text Available Computing power and analytical methods allow us to create, collate, and analyze more data than ever before. When datasets are unusually large in volume, velocity, and variety, they are referred to as “big data.” Some observers have suggested that in order to cope with big data (a) organizational structures will need to change and (b) the processes used to design organizations will be different. In this article, we differentiate big data from relatively slow-moving, linked people data. We argue that big data will change organizational structures as organizations pursue the opportunities presented by big data. The processes by which organizations are designed, however, will be relatively unaffected by big data. Instead, organization design processes will be more affected by the complex links found in people data.

  12. Big data analytics to aid developing livable communities.

    Science.gov (United States)

    2015-12-31

    In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...

  13. Surface-water quality and suspended-sediment quantity and quality within the Big River Basin, southeastern Missouri, 2011-13

    Science.gov (United States)

    Barr, Miya N.

    2016-01-28

    Missouri was the leading producer of lead in the United States—as well as the world—for more than a century. One of the lead sources is known as the Old Lead Belt, located in southeast Missouri. The primary ore mineral in the region is galena, which can be found both in surface deposits and underground as deep as 200 feet. More than 8.5 million tons of lead were produced from the Old Lead Belt before operations ceased in 1972. Although active lead mining has ended, the effects of mining activities still remain in the form of large mine waste piles on the landscape, typically near tributaries and the main stem of the Big River, which drains the Old Lead Belt. Six large mine waste piles, encompassing more than 2,800 acres, exist within the Big River Basin. These six mine waste piles have been an available source of trace element-rich suspended sediments transported by natural erosional processes downstream into the Big River.

  14. Predicting and mapping soil available water capacity in Korea

    Directory of Open Access Journals (Sweden)

    Suk Young Hong

    2013-04-01

    Full Text Available Knowledge of the spatial distribution of soil available water capacity at a regional or national extent is essential, as soil water capacity is a component of the water and energy balances in the terrestrial ecosystem. It controls the evapotranspiration rate and has a major impact on climate. This paper demonstrates a protocol for mapping soil available water capacity in South Korea at a fine scale using data available from surveys. The procedures combined digital soil mapping technology with the available soil map of 1:25,000. We used the modal profile data from the Taxonomical Classification of Korean Soils. The data consist of profile descriptions along with physical and chemical analyses for the modal profiles of the 380 soil series. However, not all soil samples have measured bulk density and water content at −10 and −1500 kPa; these therefore need to be predicted using pedotransfer functions. Furthermore, water content at −10 kPa was measured using ground samples, so a correction factor is derived to take into account the effect of bulk density. Results showed that Andisols have the highest mean water storage capacity, followed by Entisols and Inceptisols with loamy texture; the lowest water retention occurs in Entisols dominated by sandy materials. Profile available water capacity to a depth of 1 m was calculated and mapped for Korea. The western part of the country shows higher available water capacity than the eastern part, which is mountainous and has shallower soils. The highest water storage capacity soils are the Ultisols and Alfisols (mean of 206 and 205 mm, respectively). Validation of the maps showed promising results. The map produced can be used as an indication of the soil physical quality of Korean soils.
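
    As a rough illustration of the profile calculation described above (not the authors' exact procedure), the sketch below sums horizon-level available water, taken as the difference between volumetric water content at −10 kPa and −1500 kPa, over the upper 1 m of a profile; the horizon data are placeholders and any bulk-density correction is assumed to have been applied already.

```python
# Illustrative profile available-water-capacity (AWC) calculation, assuming the
# water contents at -10 and -1500 kPa are volumetric fractions per horizon.

def profile_awc_mm(horizons, max_depth_cm=100.0):
    """Sum available water (mm) over horizons down to max_depth_cm.

    horizons: list of (top_cm, bottom_cm, theta_10kPa, theta_1500kPa) tuples.
    """
    awc = 0.0
    for top, bottom, theta_fc, theta_wp in horizons:
        thickness = max(min(bottom, max_depth_cm) - top, 0.0)   # cm counted within 1 m
        awc += (theta_fc - theta_wp) * thickness * 10.0         # fraction * cm -> mm
    return awc

example_profile = [
    (0, 20, 0.32, 0.14),    # A horizon
    (20, 55, 0.29, 0.13),   # B horizon
    (55, 120, 0.24, 0.11),  # C horizon, truncated at 100 cm
]
print(round(profile_awc_mm(example_profile), 1), "mm of available water to 1 m")
```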

  15. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as hypothesis-generating rather than hypothesis-testing approaches. Big data analysis focuses on the temporal stability of associations rather than on causal relationships, and assumptions about underlying probability distributions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and they share the inherent limitations of observational studies, namely the inability to test causality owing to residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
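
    As one concrete example of the propensity score analysis mentioned above (a generic illustration, not tied to any study in the review), the sketch below estimates propensity scores with logistic regression and uses inverse-probability weighting to compare outcomes between treated and untreated patients; the simulated data, covariate names, and effect size are all assumptions.

```python
# Generic propensity-score / inverse-probability-weighting sketch (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)
egfr = rng.normal(70, 15, n)
X = np.column_stack([age, egfr])

# Treatment assignment depends on the confounders (age, eGFR).
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 60) - 0.02 * (egfr - 70))))
treated = rng.binomial(1, p_treat)

# Outcome with a true treatment effect of -0.5 plus confounding and noise.
outcome = 2.0 + 0.04 * age - 0.03 * egfr - 0.5 * treated + rng.normal(0, 1, n)

# 1) estimate propensity scores, 2) build stabilized weights, 3) weighted contrast
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated == 1, treated.mean() / ps, (1 - treated.mean()) / (1 - ps))
effect = (np.average(outcome[treated == 1], weights=w[treated == 1])
          - np.average(outcome[treated == 0], weights=w[treated == 0]))
print("IPW-estimated treatment effect:", round(effect, 3))  # should be near -0.5
```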

  16. Ground-Water System in the Chimacum Creek Basin and Surface Water/Ground Water Interaction in Chimacum and Tarboo Creeks and the Big and Little Quilcene Rivers, Eastern Jefferson County, Washington

    Science.gov (United States)

    Simonds, F. William; Longpre, Claire I.; Justin, Greg B.

    2004-01-01

    throughout most of the year and the lower reaches have little or no gains. The Big Quilcene River generally gains water from the shallow ground-water system after it emerges from a bedrock canyon and loses water from the town of Quilcene to the mouth of the river in Quilcene Bay. The Little Quilcene River generally loses water to the shallow ground-water system, although two localized areas were found to have gaining conditions. The Big Quilcene and Little Quilcene Rivers incur significant losses on the alluvial plain at the head of Quilcene Bay. Each of the creeks examined had a unique pattern of gaining and losing reaches, owing to the hydraulic conductivity of the streambed material and the relative altitude of the surrounding water table. Although the magnitudes of gains and losses varied seasonally, the spatial distribution did not vary greatly, suggesting that patterns of gains and losses in surface-water systems depend greatly on the geology underlying the streambed.
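
    The reach-by-reach gains and losses described above are typically computed from seepage-run measurements as downstream discharge minus upstream discharge, adjusted for tributary inflows and diversions; the short sketch below shows that arithmetic with made-up discharge values and is not the USGS data set itself.

```python
# Simple seepage-run arithmetic: positive values indicate gaining reaches,
# negative values losing reaches. All discharges (cfs) are illustrative.

def reach_gain(q_upstream, q_downstream, tributary_inflow=0.0, diversion=0.0):
    """Net streamflow gain (+) or loss (-) over a reach, in the input units."""
    return q_downstream - q_upstream - tributary_inflow + diversion

reaches = [
    # (name, Q_up, Q_down, tributary inflow, diversion)
    ("upper alluvial reach", 12.4, 14.1, 0.6, 0.0),
    ("reach across alluvial plain", 14.1, 11.8, 0.0, 0.5),
]
for name, q_up, q_down, trib, div in reaches:
    print(f"{name}: {reach_gain(q_up, q_down, trib, div):+.1f} cfs")
```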

  17. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  18. Modeling regeneration responses of big sagebrush (Artemisia tridentata) to abiotic conditions

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Ecosystems dominated by big sagebrush, Artemisia tridentata Nuttall (Asteraceae), which are the most widespread ecosystems in semiarid western North America, have been affected by land use practices and invasive species. Loss of big sagebrush and the decline of associated species, such as greater sage-grouse, are a concern to land managers and conservationists. However, big sagebrush regeneration remains difficult to achieve by restoration and reclamation efforts and there is no regeneration simulation model available. We present here the first process-based, daily time-step, simulation model to predict yearly big sagebrush regeneration including relevant germination and seedling responses to abiotic factors. We estimated values, uncertainty, and importance of 27 model parameters using a total of 1435 site-years of observation. Our model explained 74% of variability of number of years with successful regeneration at 46 sites. It also achieved 60% overall accuracy predicting yearly regeneration success/failure. Our results identify specific future research needed to improve our understanding of big sagebrush regeneration, including data at the subspecies level and improved parameter estimates for start of seed dispersal, modified wet thermal-time model of germination, and soil water potential influences. We found that relationships between big sagebrush regeneration and climate conditions were site specific, varying across the distribution of big sagebrush. This indicates that statistical models based on climate are unsuitable for understanding range-wide regeneration patterns or for assessing the potential consequences of changing climate on sagebrush regeneration and underscores the value of this process-based model. We used our model to predict potential regeneration across the range of sagebrush ecosystems in the western United States, which confirmed that seedling survival is a limiting factor, whereas germination is not. Our results also suggested that modeled
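
    The paper's process-based model is not reproduced here, but the "wet thermal-time" idea it mentions can be illustrated with a toy daily loop that accumulates degree-days only on days when soil water potential is above a wetness threshold and flags germination once an assumed thermal-time requirement is met. The base temperature, threshold, and requirement below are placeholders, not the published parameter estimates.

```python
# Toy wet thermal-time germination accumulator (placeholder parameters).

def germination_day(daily_temp_c, daily_swp_mpa,
                    t_base=0.0, swp_threshold=-1.0, requirement_cday=60.0):
    """Return the first day index on which accumulated wet thermal time meets
    the requirement, or None if germination is not predicted.

    daily_temp_c  : mean soil temperature per day (deg C)
    daily_swp_mpa : soil water potential per day (MPa; more negative = drier)
    """
    accumulated = 0.0
    for day, (temp, swp) in enumerate(zip(daily_temp_c, daily_swp_mpa)):
        if swp >= swp_threshold and temp > t_base:   # "wet" and warm enough
            accumulated += temp - t_base             # degree-days accrue
        if accumulated >= requirement_cday:
            return day
    return None

temps = [2, 4, 6, 8, 10, 12, 12, 11, 9, 8, 10, 12, 14, 15]
swps = [-0.2, -0.3, -0.4, -0.6, -0.8, -1.5, -1.8, -0.5, -0.4, -0.3, -0.5, -0.7, -0.9, -1.2]
print("Predicted germination on day:", germination_day(temps, swps))
```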

  19. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  20. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  1. Kazakhstan's Environment-Health system, a Big Data challenge

    Science.gov (United States)

    Vitolo, Claudia; Bella Gazdiyeva, Bella; Tucker, Allan; Russell, Andrew; Ali, Maged; Althonayan, Abraham

    2016-04-01

    Kazakhstan has witnessed a remarkable economic development in the past 15 years, becoming an upper-middle-income country. However it is still widely regarded as a developing nation, partially because of its population's low life expectancy, which is 5 years below the average in similar economies. The environment is in a rather fragile state, affected by soil, water, and air pollution, radioactive contamination, and climate change. However, Kazakhstan's government is moving towards clean energy and environmental protection and calling on scientists to help prioritise investments. The British Council-funded "Kazakhstan's Environment-Health Risk Analysis (KEHRA)" project is one of the recently launched initiatives to support a healthier future for Kazakhstan. The underlying hypothesis of this research is that the above-mentioned factors (air/water/soil pollution, etc.) affecting public health almost certainly do not act independently but rather trigger and exacerbate each other. Exploring the environment-health links in a multi-dimensional framework is a typical Big Data problem, in which the volume and variety of the data needed poses technical as well as scientific challenges. In Kazakhstan, the complexities related to managing and analysing Big Data are worsened by a number of obstacles at the data acquisition step: most of the data is not in digital form, spatial and temporal attributes are often ambiguous, and the re-use and re-purposing of the information is subject to restrictive licenses and other mechanisms of control. In this work, we document the first steps taken towards building an understanding of the complex environment-health system in Kazakhstan, using interactive visualisation tools to identify and compare hot-spots of pollution and poor health outcomes, and Big Data and web technologies to collect, manage and explore available information. In the future, the knowledge acquired will be modelled to develop evidence-based recommendation systems for decision makers in

  2. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  3. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  4. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  5. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  6. Predicting and mapping soil available water capacity in Korea.

    Science.gov (United States)

    Hong, Suk Young; Minasny, Budiman; Han, Kyung Hwa; Kim, Yihyun; Lee, Kyungdo

    2013-01-01

    Knowledge of the spatial distribution of soil available water capacity at a regional or national extent is essential, as soil water capacity is a component of the water and energy balances in the terrestrial ecosystem. It controls the evapotranspiration rate and has a major impact on climate. This paper demonstrates a protocol for mapping soil available water capacity in South Korea at a fine scale using data available from surveys. The procedures combined digital soil mapping technology with the available soil map of 1:25,000. We used the modal profile data from the Taxonomical Classification of Korean Soils. The data consist of profile descriptions along with physical and chemical analyses for the modal profiles of the 380 soil series. However, not all soil samples have measured bulk density and water content at -10 and -1500 kPa; these therefore need to be predicted using pedotransfer functions. Furthermore, water content at -10 kPa was measured using ground samples, so a correction factor is derived to take into account the effect of bulk density. Results showed that Andisols have the highest mean water storage capacity, followed by Entisols and Inceptisols with loamy texture; the lowest water retention occurs in Entisols dominated by sandy materials. Profile available water capacity to a depth of 1 m was calculated and mapped for Korea. The western part of the country shows higher available water capacity than the eastern part, which is mountainous and has shallower soils. The highest water storage capacity soils are the Ultisols and Alfisols (mean of 206 and 205 mm, respectively). Validation of the maps showed promising results. The map produced can be used as an indication of the soil physical quality of Korean soils.

  7. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
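
    The Sca-PBDA protocol itself is not specified in the abstract, so the sketch below only illustrates the general idea of privacy-preserving additive aggregation: each sensor perturbs its reading with a pseudorandom mask derived from a seed shared with the sink, cluster heads sum the masked values, and the sink removes the masks to recover the exact total without seeing individual readings. The seed table, modulus, and message structure are all assumptions, not the published scheme.

```python
# Illustrative additive-masking aggregation (not the published Sca-PBDA scheme).
import random

MOD = 2 ** 32  # work modulo a fixed group size so masks cancel exactly

def masked_reading(node_id, reading, epoch, seed_table):
    """Sensor side: add a pseudorandom mask known only to this node and the sink."""
    mask = random.Random(seed_table[node_id] * 1_000_003 + epoch).randrange(MOD)
    return (reading + mask) % MOD

def cluster_aggregate(masked_values):
    """Cluster-head side: sum masked values without learning any single reading."""
    return sum(masked_values) % MOD

def sink_recover(total_masked, node_ids, epoch, seed_table):
    """Sink side: regenerate and subtract all masks to obtain the true sum."""
    masks = sum(random.Random(seed_table[n] * 1_000_003 + epoch).randrange(MOD)
                for n in node_ids)
    return (total_masked - masks) % MOD

seed_table = {n: 1000 + n for n in range(5)}    # pre-shared per-node seeds
readings = {0: 21, 1: 19, 2: 25, 3: 23, 4: 22}  # e.g. temperature readings
epoch = 7

masked = [masked_reading(n, r, epoch, seed_table) for n, r in readings.items()]
total = cluster_aggregate(masked)
print("recovered sum:", sink_recover(total, list(readings), epoch, seed_table))  # 110
```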

  8. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Full Text Available Farming is undergoing a digital revolution. Our existing review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. “app”? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and it allows for a focus on the material consequences of big data in society.

  9. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Summary. Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  10. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  11. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  12. Big Data in the Aerospace Industry

    Directory of Open Access Journals (Sweden)

    Victor Emmanuell BADEA

    2018-01-01

    Full Text Available This paper presents approaches to the analysis of large data volumes, Big Data, and the information that the beneficiaries of such analysis can interpret. Aerospace companies understand the challenges of Big Data better than the rest of the industries. Also, in this paper we describe a novel analytical system that enables query processing and predictive analytics over streams of large aviation data.

  13. Research on the Impact of Big Data on Logistics

    Directory of Open Access Journals (Sweden)

    Wang Yaxing

    2017-01-01

    Full Text Available In the context of big data development, large amounts of data are generated by logistics enterprises, especially in areas of logistics such as transportation, warehousing, and distribution. Based on an analysis of the characteristics of big data, this paper studies the impact of big data on logistics and the mechanism of that impact, and gives reasonable suggestions. By building a logistics data center using big data technology, hidden value information behind the data can be dug out, from which logistics enterprises can benefit.

  14. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that it becomes difficult to process using traditional data-processing applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate diseases, prevent conflict, and so on, we require bigger data sets compared with smaller ones. Big data is hard to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. In this paper there is an overview of Hadoop architecture, the different tools used for big data, and its security issues.

  15. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  16. Environmental assessment for the Strategic Petroleum Reserve Big Hill facility storage of commercial crude oil project, Jefferson County, Texas

    International Nuclear Information System (INIS)

    1999-03-01

    The Big Hill SPR facility located in Jefferson County, Texas, has been a permitted operating crude oil storage site since 1986 with benign environmental impacts. However, Congress has not authorized crude oil purchases for the SPR since 1990, and six storage caverns at Big Hill are underutilized, with 70 million barrels of available storage capacity. On February 17, 1999, the Secretary of Energy offered the 70 million barrels of available storage at Big Hill for commercial use. Interested commercial users would enter into storage contracts with DOE, and DOE would receive crude oil in lieu of dollars as rental fees. The site could potentially begin to receive commercial oil in May 1999. This Environmental Assessment identified environmental changes that potentially would affect water usage, power usage, and air emissions. However, as the assessment indicates, these changes would not affect the environment to a major degree, and no long-term, short-term, cumulative or irreversible impacts have been identified

  17. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  18. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    Full Text Available We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  19. Water and land availability for energy farming. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Schooley, F.A.; Mara, S.J.; Mendel, D.A.; Meagher, P.C.; So, E.C.

    1979-10-01

    The physical and economic availability of land and water resources for energy farming were determined. Ten water subbasins possessing favorable land and water availabilities were ranked according to their overall potential for biomass production. The study results clearly identify the Southeast as a favorable area for biomass farming. The Northwest and North-Central United States should also be considered on the basis of their highly favorable environmental characteristics. Both high and low estimates of water availability for 1985 and 2000 in each of 99 subbasins were prepared. Subbasins in which surface water consumption was more than 50% of surface water supply were eliminated from the land availability analysis, leaving 71 subbasins to be examined. The amount of acreage potentially available for biomass production in these subbasins was determined through a comparison of estimated average annual net returns developed for conventional agriculture and forestry with net returns for several biomass production options. In addition to a computerized method of ranking subbasins according to their overall potential for biomass production, a methodology for evaluating future energy farm locations was developed. This methodology included a general area selection procedure as well as specific site analysis recommendations. Thirty-five general factors and a five-step site-specific analysis procedure are described.
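
    The report's screening step, eliminating subbasins where surface-water consumption exceeds 50% of supply and then ranking the remainder, can be illustrated with the toy calculation below; the subbasin names, supplies, consumptions, and net-return figures are invented for the example and are not values from the study.

```python
# Toy version of the screen-and-rank procedure described above (invented data).

subbasins = [
    # name, surface-water supply, consumption (same units), and the hypothetical
    # advantage of biomass net returns over conventional use ($/acre)
    ("Southeast A", 14.0, 3.5, 42.0),
    ("North-Central B", 9.0, 2.1, 18.0),
    ("Southwest C", 6.0, 4.8, 65.0),   # screened out: consumption > 50% of supply
    ("Northwest D", 20.0, 5.5, 27.0),
]

eligible = [s for s in subbasins if s[2] / s[1] <= 0.5]
ranked = sorted(eligible, key=lambda s: s[3], reverse=True)

for name, supply, use, advantage in ranked:
    print(f"{name}: water use {use / supply:.0%} of supply, "
          f"biomass return advantage ${advantage:.0f}/acre")
```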

  20. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  1. A Maturity Analysis of Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Radu BONCEA

    2017-01-01

    Full Text Available In recent years Big Data technologies have been developed at a faster pace due to increasing demand from applications that generate and process vast amounts of data. The Cloud Computing and the Internet of Things are the main drivers for developing enterprise solutions that support Business Intelligence, which in turn creates new opportunities and new business models. An enterprise can now collect data about its internal processes, process these data to gain new insights and business value, and make better decisions. And this is the reason why Big Data is now seen as a vital component in any enterprise architecture. In this article the maturity of several Big Data technologies is put under analysis. For each technology there are several aspects considered, such as development status, market usage, licensing policies, availability of certifications, adoption, and support for cloud computing and the enterprise.

  2. Translating Big Data into Smart Data for Veterinary Epidemiology

    Directory of Open Access Journals (Sweden)

    Kimberly VanderWaal

    2017-07-01

    Full Text Available The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing “big” data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having “big data” to create “smart data,” with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  3. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  4. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  5. Geohydrology of Big Bear Valley, California: phase 1--geologic framework, recharge, and preliminary assessment of the source and age of groundwater

    Science.gov (United States)

    Flint, Lorraine E.; Brandt, Justin; Christensen, Allen H.; Flint, Alan L.; Hevesi, Joseph A.; Jachens, Robert; Kulongoski, Justin T.; Martin, Peter; Sneed, Michelle

    2012-01-01

    Big Bear Valley. The INFILv3 model was modified for this study to include a perched zone beneath the root zone to better simulate lateral seepage and recharge in the shallow subsurface in mountainous terrain. The climate input used in the INFILv3 model was developed by using daily climate data from 84 National Climatic Data Center stations and published Parameter Regression on Independent Slopes Model (PRISM) average monthly precipitation maps to match the drier average monthly precipitation measured in the Baldwin Lake drainage basin. This model resulted in a good representation of localized rain-shadow effects and calibrated well to measured lake volumes at Big Bear and Baldwin Lakes. The simulated average annual recharge was about 5,480 acre-ft/yr in the Big Bear study area, with about 2,800 acre-ft/yr in the Big Bear Lake surface-water drainage basin and about 2,680 acre-ft/yr in the Baldwin Lake surface-water drainage basin. One spring and eight wells were sampled and analyzed for chemical and isotopic data in 2005 and 2006 to determine if isotopic techniques could be used to assess the sources and ages of groundwater in the Big Bear Valley. This approach showed that the predominant source of recharge to the Big Bear Valley is winter precipitation falling on the surrounding mountains. The tritium and uncorrected carbon-14 ages of samples collected from wells for this study indicated that the groundwater basin contains water of different ages, ranging from modern to about 17,200 years old. The results of these investigations provide an understanding of the lateral and vertical extent of the groundwater basin, the spatial distribution of groundwater recharge, the processes responsible for the recharge, and the source and age of groundwater in the groundwater basin. Although the studies do not provide an understanding of the detailed water-bearing properties necessary to determine the groundwater availability of the basin, they do provide a framework for the future
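
    For context on the "uncorrected carbon-14 ages" mentioned above (a general formula, not the report's own calculation), a conventional radiocarbon age can be computed from the fraction of modern carbon using the Libby mean life of about 8,033 years, as sketched below; the sample value is hypothetical.

```python
# Conventional (uncorrected) radiocarbon age from fraction of modern carbon.
import math

LIBBY_MEAN_LIFE_YR = 8033.0  # = 5568-yr Libby half-life / ln(2)

def radiocarbon_age(fraction_modern):
    """Return the conventional 14C age in years before present."""
    return -LIBBY_MEAN_LIFE_YR * math.log(fraction_modern)

# A hypothetical groundwater sample retaining ~12% modern carbon:
print(round(radiocarbon_age(0.12)), "years BP")  # roughly 17,000 years
```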

  6. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  7. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  8. The nice people who live up in the cold place above you put lots of money into sense things to look into the big deep water and see weird-ass things

    Science.gov (United States)

    Pelz, M.; Scherwath, M.; Hoeberechts, M.

    2017-12-01

    There is lots of stuff in the very big water we want to look at. But because our bodies are soft and can't hold air good, we use computer senses to help us look at all the stuff down there instead.It's actually really good thinking because we don't have to get wet and we can use computer senses under the water all the time, even when the air is cold and it sucks to be outside. We can also go really deep which is cool because weird-ass stuff is down there and we would get pressed too small if we tried to go in person. The sense things idea also save us lots of money because we only have to use other people's water cars once a year to make sure our sense things are working all the time and that we can still see stuff right. Our sense things are made of power lines that go out into the big water and come back to our work-house so if we don't want to keep looking at the same thing, we can tell the sense things to do it different from our house using the lines. This is pretty good because we can change our minds a lot and still get good ideas about what is happening in the big deep water where the weird-ass stuff is.Our head-guys give us money for this thing because we think it will let us know if the ground will shake and kill us before it starts shaking. This is kind of important because we can get out of the way and we can take our good stuff with us too if we know early that it will start shaking and making big-ass waves. Head-guys like to make people feel safe and we are good at helping with that, we think.But we made sure our sense thing can be used for more than just being ready to run away if the ground moves (even though this is a good use). There are also lots of weird-ass and weird-front animals in the big water. Some are not good looking at all, but they do cool stuff with their bodies or they are really good for eating and that makes them really interesting so we look at them too.Last but not least, we use our sense things up in the really cold big water

  9. Big Data and Health Economics: Opportunities, Challenges and Risks

    Directory of Open Access Journals (Sweden)

    Diego Bodas-Sagi

    2018-03-01

    Full Text Available Big Data offers opportunities in many fields. Healthcare is not an exception. In this paper we summarize the possibilities of Big Data and Big Data technologies to offer useful information to policy makers. In a world with tight public budgets and ageing populations, we feel it is necessary to save costs in any production process. The use of outcomes from Big Data could in the future be a way to improve decisions at a lower cost than today. In addition to listing the advantages of properly using data and technologies from Big Data, we also show some challenges and risks that analysts could face. We also present a hypothetical example of the use of administrative records with health information for both diagnoses and patients.

  10. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    Science.gov (United States)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a big word nowadays. As data generation capabilities become more demanding and scalable, data acquisition and storage become a crucial issue. Cloud storage is a widely used platform; the technology will become crucial to executives handling data powered by analytics. Nowadays the trend towards “big data-as-a-service” is talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost. On the other hand, researchers are working to solve security and other real-time problems of big data migration on cloud-based platforms. This article is especially focused on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration and the possibility of doing big data analytics on a cloud platform is in demand for a new era of growth. This article also gives information about the technologies and techniques available for migration of big data to the cloud.

  11. Hydrodynamic Modeling of Santa Marta's Big Marsh

    International Nuclear Information System (INIS)

    Saldarriaga, Juan

    1991-01-01

    The ecological degradation of Santa Marta's Big Marsh and its neighboring areas has motivated diagnostic and design studies by several state and private entities. One of the efforts recommended by international advisors was to develop an ecological model that would allow management of the water body and economic testing of alternative solutions to those ecological problems. The first part of a model of this type is in turn a model that simulates the movement of the water inside the marsh, that is to say, a hydrodynamic model. Colciencias entrusted its development to the civil engineering department. This article contains a general explanation of the hydrodynamic model being developed by a group of professors. The ecological causes and antecedents are described, along with the parts that make up the Santa Marta Big Marsh complex. The modeling of the marsh is presented and the type of hydrodynamic model used is explained in qualitative form.
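
    The article describes the hydrodynamic model only qualitatively; as a generic illustration of the kind of calculation such a model performs (not the model developed for the marsh), the sketch below advances the depth-averaged continuity equation, dh/dt = -(d(hu)/dx + d(hv)/dy), one explicit time step on a small grid with assumed depths and velocities. A full model would also solve the momentum equations and handle boundaries and wetting/drying.

```python
# One explicit continuity-equation update on a toy 2D grid (illustration only).
import numpy as np

def continuity_step(h, u, v, dx, dy, dt):
    """Advance water depth h one time step from depth-averaged velocities u, v."""
    qx, qy = h * u, h * v                        # unit-width discharges
    dqx_dx = np.gradient(qx, dx, axis=1)         # d(hu)/dx
    dqy_dy = np.gradient(qy, dy, axis=0)         # d(hv)/dy
    return h - dt * (dqx_dx + dqy_dy)

# 5 x 5 grid, 100 m spacing, gentle eastward flow that slows downstream,
# so depths increase slightly where the flow converges.
h = np.full((5, 5), 1.5)                           # water depth (m)
u = np.tile(np.linspace(0.20, 0.10, 5), (5, 1))    # eastward velocity (m/s)
v = np.zeros((5, 5))                               # no northward flow
h_new = continuity_step(h, u, v, dx=100.0, dy=100.0, dt=60.0)
print(h_new.round(3))
```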

  12. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    Full Text Available In recent years, dealing with large amounts of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, and to provide a real image of supply and demand and thereby generate market advantages. So, the companies that turn to Big Data have a competitive advantage over other firms. Looking from the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects regarding the Big Data concept and the principles for building, organizing and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. Also, the article refers to the graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  13. The Big Five Personality Factors and Application Fields

    Directory of Open Access Journals (Sweden)

    Agnė Matuliauskaitė

    2011-07-01

    Full Text Available The Big Five factors are used in many research fields. A literature survey showed that personality trait theory has been used to study and explain relations with different variables. The article gives a brief description of methods that can help identify the Big Five factors and considers a model for applying them in personnel selection. The paper looks at scientific research assessing relations between the Big Five factors and different variables such as job performance, academic performance, student knowledge management and evaluation. Article in Lithuanian.

  14. Big Data Analytics in the Management of Business

    Directory of Open Access Journals (Sweden)

    Jelonek Dorota

    2017-01-01

    Full Text Available Data, information and knowledge have always played a critical role in business. The amount of various data that can be collected and stored is increasing, therefore companies need new solutions for data processing and analysis. The paper presents considerations on the concept of Big Data. The aim of the paper is to demonstrate that Big Data analytics is an effective support for managing a company. It also indicates the areas and activities where the use of Big Data analytics can bring the greatest benefits to companies.

  15. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Forecast Model of Urban Stagnant Water Based on Logistic Regression

    Directory of Open Access Journals (Sweden)

    Liu Pan

    2017-01-01

    Full Text Available With the development of information technology, the construction of water resource information systems has been carried out gradually. Against the background of big data, water information work needs to move from quantitative description to qualitative insight. Analyzing correlations in the data and exploring its deeper value are the keys to water informatics research. Building on research into water big data and the traditional data warehouse architecture, we try to find the connections between different data sources. Based on the temporal and spatial correlation between stagnant water and rainfall, we use spatial interpolation to integrate stagnant-water and rainfall data coming from different sources and different sensors, and then use logistic regression to model the relationship between them.
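
    A minimal sketch of the final modeling step described above, assuming a binary "stagnant water observed" label and synthetic rainfall features (the paper's actual variables and data are not reproduced here):

        # Hedged sketch: logistic regression linking rainfall features to a
        # binary "stagnant water observed" label. Feature names and data are
        # invented for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 500
        rainfall_1h = rng.gamma(2.0, 5.0, n)                       # mm in the last hour (synthetic)
        rainfall_24h = rainfall_1h * 3 + rng.gamma(2.0, 10.0, n)   # mm in the last day (synthetic)
        # Synthetic label: heavier rain -> higher chance of ponding.
        p = 1 / (1 + np.exp(-(0.08 * rainfall_1h + 0.02 * rainfall_24h - 3)))
        stagnant = rng.random(n) < p

        X = np.column_stack([rainfall_1h, rainfall_24h])
        X_tr, X_te, y_tr, y_te = train_test_split(X, stagnant, random_state=0)

        model = LogisticRegression().fit(X_tr, y_tr)
        print("held-out accuracy:", model.score(X_te, y_te))
        print("probability of ponding for 20 mm/h, 60 mm/day:",
              model.predict_proba([[20.0, 60.0]])[0, 1])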

  17. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  18. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

    Full Text Available Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with log(n × p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without necessarily being big, for example, secondary use of Electronic Medical Records (EMR) data.
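
    Reading the criterion as a base-10 logarithm of the product n × p (an assumption, since the abstract does not state the base), the definition can be checked mechanically; the example dataset sizes below are hypothetical.

        import math

        def is_big_data(n_individuals: int, p_variables: int, threshold: float = 7.0) -> bool:
            """Literature-driven criterion: log10(n * p) >= 7 (base 10 assumed)."""
            return math.log10(n_individuals * p_variables) >= threshold

        # Hypothetical examples
        print(is_big_data(10_000, 50))        # log10(5e5)  ~ 5.7  -> False
        print(is_big_data(500_000, 20_000))   # log10(1e10) = 10.0 -> True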

  19. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  20. Big Canyon Creek Ecological Restoration Strategy.

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Lynn; Richardson, Shannon

    2007-10-01

    He-yey, Nez Perce for steelhead or rainbow trout (Oncorhynchus mykiss), are a culturally and ecologically significant resource within the Big Canyon Creek watershed; they are also part of the federally listed Snake River Basin Steelhead DPS. The majority of the Big Canyon Creek drainage is considered critical habitat for that DPS as well as for the federally listed Snake River fall chinook (Oncorhynchus tshawytscha) ESU. The Nez Perce Soil and Water Conservation District (District) and the Nez Perce Tribe Department of Fisheries Resources Management-Watershed (Tribe), in an effort to support the continued existence of these and other aquatic species, have developed this document to direct funding toward priority restoration projects in priority areas for the Big Canyon Creek watershed. In order to achieve this, the District and the Tribe: (1) Developed a working group and technical team composed of managers from a variety of stakeholders within the basin; (2) Established geographically distinct sub-watershed areas called Assessment Units (AUs); (3) Created a prioritization framework for the AUs and prioritized them; and (4) Developed treatment strategies to utilize within the prioritized AUs. Assessment Units were delineated by significant shifts in sampled juvenile O. mykiss (steelhead/rainbow trout) densities, which were found to fall at fish passage barriers. The prioritization framework considered four aspects critical to determining the relative importance of performing restoration in a certain area: density of critical fish species, physical condition of the AU, water quantity, and water quality. It was established, through vigorous data analysis within these four areas, that the geographic priority areas for restoration within the Big Canyon Creek watershed are Big Canyon Creek from stream km 45.5 to the headwaters, Little Canyon from km 15 to 30, the mainstem corridors of Big Canyon (mouth to 7km) and Little Canyon (mouth to 7km). The District and the Tribe

  1. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  2. Technology Evaluation for the Big Spring Water Treatment System at the Y-12 National Security Complex, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Bechtel Jacobs Company LLC

    2002-01-01

    The Y-12 National Security Complex (Y-12 Complex) is an active manufacturing and developmental engineering facility that is located on the U.S. Department of Energy (DOE) Oak Ridge Reservation. Building 9201-2 was one of the first process buildings constructed at the Y-12 Complex. Construction involved relocating and straightening of the Upper East Fork Poplar Creek (UEFPC) channel, adding large quantities of fill material to level areas along the creek, and pumping of concrete into sinkholes and solution cavities present within the limestone bedrock. Flow from a large natural spring designated as ''Big Spring'' on the original 1943 Stone and Webster Building 9201-2 Field Sketch FS6003 was captured and directed to UEFPC through a drainpipe designated Outfall 51. The building was used from 1953 to 1955 for pilot plant operations for an industrial process that involved the use of large quantities of elemental mercury. Past operations at the Y-12 Complex led to the release of mercury to the environment. Significant environmental media at the site were contaminated by accidental releases of mercury from the building process facilities piping and sumps associated with Y-12 Complex mercury handling facilities. Releases to the soil surrounding the buildings have resulted in significant levels of mercury in these areas of contamination, which is ultimately transported to UEFPC, its streambed, and off-site. Bechtel Jacobs Company LLC (BJC) is the DOE-Oak Ridge Operations prime contractor responsible for conducting environmental restoration activities at the Y-12 Complex. In order to mitigate the mercury being released to UEFPC, the Big Spring Water Treatment System will be designed and constructed as a Comprehensive Environmental Response, Compensation, and Liability Act action. This facility will treat the combined flow from Big Spring feeding Outfall 51 and the inflow now being processed at the East End Mercury Treatment System (EEMTS). Both discharge to UEFPC adjacent to

  3. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the Big Bang–Big Crunch Optimization (BBBCO) technique is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other approaches are also implemented: Genetic Algorithm (GA)-based, Biogeography-Based Optimization (BBO)-based, and recursive approaches. Experimental results show that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
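
    A hedged illustration of the Big Bang–Big Crunch loop itself, with a stand-in objective rather than the paper's fuzzy 2-partition entropy (which would replace the toy function below as the quantity maximized over the membership-function parameters):

        # Minimal Big Bang-Big Crunch (BBBC) optimizer for maximizing f over a box.
        import numpy as np

        def bbbc_maximize(f, lower, upper, pop_size=40, iters=60, seed=0):
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            center = rng.uniform(lower, upper)          # initial centre of mass
            best_x, best_f = center, f(center)
            for k in range(1, iters + 1):
                # Big Bang: scatter candidates around the current centre,
                # with a spread that shrinks as 1/k.
                spread = (upper - lower) / k
                pop = np.clip(center + rng.normal(0, 1, (pop_size, lower.size)) * spread,
                              lower, upper)
                fitness = np.array([f(x) for x in pop])
                i = int(np.argmax(fitness))
                if fitness[i] > best_f:
                    best_x, best_f = pop[i], fitness[i]
                # Big Crunch: fitness-weighted centre of mass (weights kept positive).
                w = fitness - fitness.min() + 1e-12
                center = (w[:, None] * pop).sum(axis=0) / w.sum()
            return best_x, best_f

        # Stand-in objective with a known maximum at (1, 2).
        obj = lambda x: -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)
        print(bbbc_maximize(obj, lower=[-5, -5], upper=[5, 5]))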

  5. Assessment of water availability in Chindwinn catchment

    International Nuclear Information System (INIS)

    Phyu Oo Khin; Ohn Gyaw

    2001-01-01

    A study of the water balance over the Chindwinn catchment has been carried out using three decades of available climatological and hydrological data (from 1967 onwards). The study was based on monthly, annual and normal values. Actual evapotranspiration (AET) was computed using both the Penman (1963) and Hargreaves (1985) methods. Some reliable pan-evaporation records at the stations were also used to estimate actual evaporation with a pan coefficient of 0.7. The values of actual evapotranspiration estimated by the Hargreaves method were lower than those estimated by Penman, but mostly followed the same trend. Soil moisture deficiency generally occurs between November and April, with a few cases occurring in August, September and October. Overall, however, the availability of water in the catchment is quite promising. The residual from the water balance estimation may be taken as soil moisture in the catchment, neglecting some losses from the catchment. (author)
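
    For the Hargreaves (1985) method cited above, a minimal sketch using the commonly quoted form ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), where Ra is extraterrestrial radiation expressed as an equivalent depth of water; the input values are illustrative, not catchment data.

        import math

        def hargreaves_et0(t_min_c: float, t_max_c: float, ra_mm_per_day: float) -> float:
            """Reference evapotranspiration (mm/day) by the Hargreaves (1985) equation.

            ra_mm_per_day is extraterrestrial radiation already converted to its
            evaporation equivalent (mm/day); tables or FAO-56 formulas provide it
            from latitude and day of year.
            """
            t_mean = (t_min_c + t_max_c) / 2.0
            return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max_c - t_min_c)

        # Hypothetical monsoon-season day (illustrative numbers only).
        print(round(hargreaves_et0(t_min_c=24.0, t_max_c=33.0, ra_mm_per_day=15.5), 2), "mm/day")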

  6. 78 FR 27233 - Clean Water Act: Availability of List Decisions

    Science.gov (United States)

    2013-05-09

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9811-4] Clean Water Act: Availability of List Decisions AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of availability. SUMMARY: This notice announces the availability of EPA's action identifying water quality limited segments and associated...

  7. The Ethics of Big Data and Nursing Science.

    Science.gov (United States)

    Milton, Constance L

    2017-10-01

    Big data is a scientific, social, and technological trend referring to the process and size of datasets available for analysis. Ethical implications arise as healthcare disciplines, including nursing, struggle over questions of informed consent, privacy, ownership of data, and its possible use in epistemology. The author offers straight-thinking possibilities for the use of big data in nursing science.

  8. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.

  9. Availability of Water in the Kabul Basin, Afghanistan

    Science.gov (United States)

    Mack, Thomas J.; Chornack, Michael P.; Coplen, T.B.; Plummer, Niel; Rezai, M.T.; Verstraeten, Ingrid M.

    2010-01-01

    The availability of water resources is vital to the social and economic well being and rebuilding of Afghanistan. Kabul City currently (2010) has a population of nearly 4 million and is growing rapidly as a result of periods of relative security and the return of refugees. Population growth and recent droughts have placed new stresses on the city's limited water resources and have caused many wells to become contaminated, dry, or inoperable in recent years. The projected vulnerability of Central and West Asia to climate change (Cruz and others, 2007; Milly and others, 2005) and observations of diminishing glaciers in Afghanistan (Molnia, 2009) have heightened concerns for future water availability in the Kabul Basin of Afghanistan.

  10. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  11. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    Full Text Available BACKGROUND: Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on possible similarities between the immunoglobulin and βγ-crystallin folds, we explore the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved regions covering three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus raises the strong possibility of classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.

  12. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  13. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2014-04-01

    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in land-use planning. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated with the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that obtained by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content determined with the WP4-T psychrometer was significantly lower than the estimate of the permanent wilting point. We conclude that the estimation of the available water capacity is markedly influenced by the estimation methods, which has to be taken into consideration because of the practical importance of this parameter.
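
    The definition in the abstract reduces to a difference in water content integrated over the rooting depth; a small sketch of that arithmetic with illustrative (not measured) values:

        def plant_available_water_mm(theta_fc: float, theta_wp: float, root_depth_mm: float) -> float:
            """Plant-available water capacity (mm) = (field capacity - wilting point) x rooting depth.

            theta_fc and theta_wp are volumetric water contents (m3/m3).
            """
            if theta_fc <= theta_wp:
                raise ValueError("field capacity must exceed wilting point")
            return (theta_fc - theta_wp) * root_depth_mm

        # Illustrative values: theta_fc = 0.32, theta_wp = 0.18, 600 mm rooting depth.
        print(plant_available_water_mm(0.32, 0.18, 600.0), "mm")  # 84.0 mm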

  14. Benefits, Challenges and Tools of Big Data Management

    Directory of Open Access Journals (Sweden)

    Fernando L. F. Almeida

    2017-10-01

    Full Text Available Big Data is one of the most prominent fields of knowledge and research and has had a strong impact on the digital transformation of organizations in recent years. The main goal of Big Data is to improve work processes through the analysis and interpretation of large amounts of data. Knowing how Big Data works, and its benefits, challenges and tools, is essential for business success. Our study performs a systematic review of the Big Data field adopting a mind map approach, which allows us to easily and visually identify its main elements and dependencies. The findings identified and mapped a total of 12 main branches of benefits, challenges and tools, and a total of 52 sub-branches within the main areas of the model.

  15. Analyzing Big Data with the Hybrid Interval Regression Methods

    Directory of Open Access Journals (Sweden)

    Chia-Hui Huang

    2014-01-01

    Full Text Available Big data is a current trend with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to be effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.
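
    The authors' smooth SVM and hybrid interval-regression method are not reproduced here; as a stand-in that only illustrates the two knobs the abstract alludes to (the soft-margin penalty C and the width of the insensitive band), a standard epsilon-insensitive support vector regression can be sketched as follows:

        # Hedged sketch: standard soft-margin support vector regression, used here
        # only to illustrate the margin/penalty trade-off; it is NOT the authors'
        # smooth SVM (SSVM) or their hybrid interval-regression method.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        X = np.sort(rng.uniform(0, 10, 200))[:, None]
        y = np.sin(X).ravel() + rng.normal(0, 0.2, 200)   # noisy synthetic target

        # C controls the soft-margin penalty; epsilon the width of the tube
        # inside which errors are ignored (a rough analogue of the "gray zone").
        model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
        print("fraction of points kept as support vectors:",
              len(model.support_) / len(X))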

  16. Investigation of Great Basin big sagebrush and black greasewood as biogeochemical indicators of uranium mineralization. Final report. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Diebold, F.E.; McGrath, S.

    1982-11-01

    The effects of varying phosphate concentrations in natural aqueous systems on the uptake of uranium by big sagebrush (Artemisia tridentata subsp. tridentata) and black greasewood (Sarcobatus vermiculatus (Hook) Torr.) were investigated. Two separate growth experiments with five drip-flow hydroponic units were used, and plant seedlings were grown for 60 days in solutions of varying phosphate and uranium concentrations. Successful growth experiments were obtained only for big sagebrush; black greasewood did not sustain sufficient growth. The phosphate concentration of the water did affect the uptake of uranium by the big sagebrush, and this effect is most pronounced at higher concentrations of uranium in the water. The ratio of the concentration of uranium in the plant to that in the water was observed to decrease with increasing uranium concentration in solution. This is indicative of an absorption barrier in the plants. The field data show that big sagebrush responds to uranium concentrations in the soil water and not the groundwater. The implication of these results is that the use of big sagebrush as a biogeochemical indicator of uranium is not recommended. Since the concentration of phosphate must also be known in the water supplying the uranium to the plant, one should analyze this natural aqueous phase as a hydrochemical indicator rather than the big sagebrush.

  17. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  18. Inflated granularity: Spatial “Big Data” and geodemographics

    Directory of Open Access Journals (Sweden)

    Craig M Dalton

    2015-08-01

    Full Text Available Data analytics, particularly the current rhetoric around “Big Data”, tend to be presented as new and innovative, emerging ahistorically to revolutionize modern life. In this article, we situate one branch of Big Data analytics, spatial Big Data, through a historical predecessor, geodemographic analysis, to help develop a critical approach to current data analytics. Spatial Big Data promises an epistemic break in marketing, a leap from targeting geodemographic areas to targeting individuals. Yet it inherits characteristics and problems from geodemographics, including a justification through the market, and a process of commodification through the black-boxing of technology. As researchers develop sustained critiques of data analytics and its effects on everyday life, we must do so with a grounding in the cultural and historical contexts from which data technologies emerged. This article and others (Barnes and Wilson, 2014) develop a historically situated, critical approach to spatial Big Data. This history illustrates connections to the critical issues of surveillance, redlining, and the production of consumer subjects and geographies. The shared histories and structural logics of spatial Big Data and geodemographics create the space for a continued critique of data analyses’ role in society.

  19. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

    Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete, excavations of major sites throughout the region. Although there are considerable advantages to such large–scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large–scale excavations in the southeast in light of traditional views within the discipline that the region has contributed little to the ‘big questions’ of American archaeology. Recently published analyses of decades old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre–existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  20. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of registration, and IT-competent employees and customers that make a leading position possible, but only if companies get ready for the next big data wave.

  1. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.

  2. Flood-inundation maps for Big Creek from the McGinnis Ferry Road bridge to the confluence of Hog Wallow Creek, Alpharetta and Roswell, Georgia

    Science.gov (United States)

    Musser, Jonathan W.

    2015-08-20

    Digital flood-inundation maps for a 12.4-mile reach of Big Creek that extends from 260 feet above the McGinnis Ferry Road bridge to the U.S. Geological Survey (USGS) streamgage at Big Creek below Hog Wallow Creek at Roswell, Georgia (02335757), were developed by the USGS in cooperation with the cities of Alpharetta and Roswell, Georgia. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgage at Big Creek near Alpharetta, Georgia (02335700). Real-time stage information from this USGS streamgage may be obtained at http://waterdata.usgs.gov/ and can be used in conjunction with these maps to estimate near real-time areas of inundation. The National Weather Service (NWS) is incorporating results from this study into the Advanced Hydrologic Prediction Service (AHPS) flood-warning system (http://water.weather.gov/ahps/). The NWS forecasts flood hydrographs for many streams where the USGS operates streamgages and provides flow data. The forecasted peak-stage information for the USGS streamgage at Big Creek near Alpharetta (02335700), available through the AHPS Web site, may be used in conjunction with the maps developed for this study to show predicted areas of flood inundation.

  3. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

    Full Text Available These days, more and more people talk about Big Data, Hadoop, NoSQL and so on, but few technical people have the expertise and knowledge needed to work with those concepts and technologies. The present paper explains one of the concepts behind two of those keywords: the MapReduce concept. The MapReduce model is what makes Big Data and Hadoop so powerful, fast, and versatile for business process optimization. MapReduce is a programming model, with an accompanying implementation, for processing and generating large data sets. In addition, the paper presents the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications. The concepts and technologies behind big data let organizations reach a variety of objectives. Like other new information technologies, one of the main objectives of big data technology is to bring dramatic cost reductions.
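
    A plain in-memory simulation of the map-shuffle-reduce pattern the paper describes (not Hadoop itself), using the classic word count:

        # In-memory simulation of MapReduce: map emits (key, value) pairs,
        # the shuffle groups them by key, and reduce aggregates each group.
        from collections import defaultdict

        def map_phase(document: str):
            for word in document.lower().split():
                yield word, 1                      # emit (word, 1) for every occurrence

        def shuffle(pairs):
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)          # group values by key
            return groups

        def reduce_phase(groups):
            return {key: sum(values) for key, values in groups.items()}

        documents = ["big data needs big clusters", "hadoop runs mapreduce on big clusters"]
        pairs = (pair for doc in documents for pair in map_phase(doc))
        print(reduce_phase(shuffle(pairs)))        # e.g. {'big': 3, 'clusters': 2, ...}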

  4. Big data business models: Challenges and opportunities

    Directory of Open Access Journals (Sweden)

    Ralph Schroeder

    2016-12-01

    Full Text Available This paper, based on 28 interviews from a range of business leaders and practitioners, examines the current state of big data use in business, as well as the main opportunities and challenges presented by big data. It begins with an account of the current landscape and what is meant by big data. Next, it draws distinctions between the ways organisations use data and provides a taxonomy of big data business models. We observe a variety of different business models, depending not only on sector, but also on whether the main advantages derive from analytics capabilities or from having ready access to valuable data sources. Some major challenges emerge from this account, including data quality and protectiveness about sharing data. The conclusion discusses these challenges, and points to the tensions and differing perceptions about how data should be governed as between business practitioners, the promoters of open data, and the wider public.

  5. Water availability and trachoma.

    Science.gov (United States)

    West, S; Lynch, M; Turner, V; Munoz, B; Rapoza, P; Mmbaga, B B; Taylor, H R

    1989-01-01

    As part of an epidemiological survey of risk factors for trachoma in 20 villages in the United Republic of Tanzania, we investigated the relationship of village water pumps, distance to water source, and quantity of household water to the risk of inflammatory trachoma. We also evaluated whether there was an association between the cleanliness of children's faces and these water variables. No association was found between the presence of a village water supply and the prevalence of trachoma. However, the risk of trachoma in the household increased with the distance to a water source--although there was no association with the estimated daily amount of water brought into the house. Likewise, children were more likely to have unclean faces if they lived more than 30 minutes from a water source, but whether they had clean faces was not associated with the daily quantity of water brought into the household. The effect of the distance to water supply on trachoma may well reflect the value placed on water within the family, and this determines the priority for its use for hygiene purposes. The results of the study suggest that changing the access to water per se may be insufficient to alter the prevalence of trachoma without also a concomitant effort to change the perception of how water should be utilized in the home.

  6. A survey on platforms for big data analytics.

    Science.gov (United States)

    Singh, Dilpreet; Reddy, Chandan K

    The primary purpose of this paper is to provide an in-depth analysis of the different platforms available for performing big data analytics. The paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed, along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid readers in making an informed decision about the right choice of platform depending on their computational needs. Using a star-ratings table, a rigorous qualitative comparison between the different platforms is also discussed for each of the six characteristics that are critical for big data analytics algorithms. In order to provide more insight into the effectiveness of each platform in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on the various platforms are also described in the form of pseudocode.
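
    Since the survey illustrates each platform with pseudocode for k-means, a compact single-node NumPy version of Lloyd's algorithm is sketched below as a baseline; the assignment and centroid-update steps are exactly the parts that distributed platforms parallelize.

        # Plain single-node k-means (Lloyd's algorithm) on synthetic 2-D data.
        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # assignment step: nearest centroid for every point
                dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # update step: mean of the points in each cluster
                new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                          else centroids[j] for j in range(k)])
                if np.allclose(new_centroids, centroids):
                    break
                centroids = new_centroids
            return centroids, labels

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in (0, 3, 6)])
        centroids, labels = kmeans(X, k=3)
        print(np.round(centroids, 2))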

  7. Elevated levels of plasma Big endothelin-1 and its relation to hypertension and skin lesions in individuals exposed to arsenic

    International Nuclear Information System (INIS)

    Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia; Karim, Md. Rezaul; Rahman, Mashiur; Agarwal, Smita; Hossain, Shakhawoat; Aziz, Abdul; Al Mamun, Abdullah; Sheikh, Afzal; Haque, Abedul; Hossain, M. Tofazzal; Hossain, Mostaque; Haris, Parvez I.; Ikemura, Noriaki; Inoue, Kiyoshi; Miyataka, Hideki; Himeno, Seiichiro; Hossain, Khaled

    2012-01-01

    Chronic arsenic (As) exposure affects the endothelial system, causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1), is a more accurate indicator of the degree of activation of the endothelial system. The effect of As exposure on plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As-exposed individuals in Bangladesh. A total of 304 study subjects from As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increases in Big ET-1 levels were observed with increasing concentrations of As in drinking water, hair and nails. Further, both before and after adjusting for different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin-lesion groups than in their normotensive and lesion-free counterparts, respectively, among the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels, indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic

  8. Elevated levels of plasma Big endothelin-1 and its relation to hypertension and skin lesions in individuals exposed to arsenic

    Energy Technology Data Exchange (ETDEWEB)

    Hossain, Ekhtear; Islam, Khairul; Yeasmin, Fouzia [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh); Karim, Md. Rezaul [Department of Applied Nutrition and Food Technology, Islamic University, Kushtia-7003 (Bangladesh); Rahman, Mashiur; Agarwal, Smita; Hossain, Shakhawoat; Aziz, Abdul; Al Mamun, Abdullah; Sheikh, Afzal; Haque, Abedul; Hossain, M. Tofazzal [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh); Hossain, Mostaque [Department of Medicine, Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders (BIRDEM), Dhaka (Bangladesh); Haris, Parvez I. [Faculty of Health and Life Sciences, De Montfort University, Leicester, LE1 9BH (United Kingdom); Ikemura, Noriaki; Inoue, Kiyoshi; Miyataka, Hideki; Himeno, Seiichiro [Laboratory of Molecular Nutrition and Toxicology, Faculty of Pharmaceutical Sciences, Tokushima Bunri University, Tokushima 770–8514 (Japan); Hossain, Khaled, E-mail: khossain69@yahoo.com [Department of Biochemistry and Molecular Biology, Rajshahi University, Rajshahi-6205 (Bangladesh)

    2012-03-01

    Chronic arsenic (As) exposure affects the endothelial system, causing several diseases. Big endothelin-1 (Big ET-1), the biological precursor of endothelin-1 (ET-1), is a more accurate indicator of the degree of activation of the endothelial system. The effect of As exposure on plasma Big ET-1 levels and its physiological implications have not yet been documented. We evaluated plasma Big ET-1 levels and their relation to hypertension and skin lesions in As-exposed individuals in Bangladesh. A total of 304 study subjects from As-endemic and non-endemic areas in Bangladesh were recruited for this study. As concentrations in water, hair and nails were measured by Inductively Coupled Plasma Mass Spectroscopy (ICP-MS). The plasma Big ET-1 levels were measured using a one-step sandwich enzyme immunoassay kit. Significant increases in Big ET-1 levels were observed with increasing concentrations of As in drinking water, hair and nails. Further, both before and after adjusting for different covariates, plasma Big ET-1 levels were found to be significantly associated with the water, hair and nail As concentrations of the study subjects. Big ET-1 levels were also higher in the higher exposure groups compared to the lowest (reference) group. Interestingly, we observed that Big ET-1 levels were significantly higher in the hypertensive and skin-lesion groups than in their normotensive and lesion-free counterparts, respectively, among the study subjects in As-endemic areas. Thus, this study demonstrated a novel dose–response relationship between As exposure and plasma Big ET-1 levels, indicating the possible involvement of plasma Big ET-1 levels in As-induced hypertension and skin lesions. -- Highlights: ► Plasma Big ET-1 is an indicator of endothelial damage. ► Plasma Big ET-1 level increases dose-dependently in arsenic exposed individuals. ► Study subjects in arsenic-endemic areas with hypertension have elevated Big ET-1 levels. ► Study subjects with arsenic

  9. Central America : Big Data in Action for Development

    OpenAIRE

    World Bank

    2014-01-01

    This report stemmed from a World Bank pilot activity to explore the potential of big data to address development challenges in Central American countries. As part of this activity, we collected and analyzed a number of examples of leveraging big data for development. Because of the growing interest in this topic, this report makes those examples available to a broader audience, as well as the

  10. Assessing the effects of adaptation measures on optimal water resources allocation under varied water availability conditions

    Science.gov (United States)

    Liu, Dedi; Guo, Shenglian; Shao, Quanxi; Liu, Pan; Xiong, Lihua; Wang, Le; Hong, Xingjun; Xu, Yao; Wang, Zhaoli

    2018-01-01

    Human activities and climate change have altered the spatial and temporal distribution of water availability which is a principal prerequisite for allocation of different water resources. In order to quantify the impacts of climate change and human activities on water availability and optimal allocation of water resources, hydrological models and optimal water resource allocation models should be integrated. Given that increasing human water demand and varying water availability conditions necessitate adaptation measures, we propose a framework to assess the effects of these measures on optimal allocation of water resources. The proposed model and framework were applied to a case study of the middle and lower reaches of the Hanjiang River Basin in China. Two representative concentration pathway (RCP) scenarios (RCP2.6 and RCP4.5) were employed to project future climate, and the Variable Infiltration Capacity (VIC) hydrological model was used to simulate the variability of flows under historical (1956-2011) and future (2012-2099) conditions. The water availability determined by simulating flow with the VIC hydrological model was used to establish the optimal water resources allocation model. The allocation results were derived under an extremely dry year (with an annual average water flow frequency of 95%), a very dry year (with an annual average water flow frequency of 90%), a dry year (with an annual average water flow frequency of 75%), and a normal year (with an annual average water flow frequency of 50%) during historical and future periods. The results show that the total available water resources in the study area and the inflow of the Danjiangkou Reservoir will increase in the future. However, the uneven distribution of water availability will cause water shortage problems, especially in the boundary areas. The effects of adaptation measures, including water saving, and dynamic control of flood limiting water levels (FLWLs) for reservoir operation, were
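
    The allocation step described above is, at its simplest, a constrained optimization; the following heavily simplified, hypothetical sketch (not the authors' model) allocates a fixed available volume among three users by linear programming, maximizing a weighted benefit subject to supply and demand limits. All numbers are illustrative.

        # Toy optimal water allocation: maximize weighted benefits subject to the
        # total available volume and each user's demand cap.
        from scipy.optimize import linprog

        benefit_per_unit = [3.0, 2.0, 1.5]      # e.g. municipal, industrial, agricultural
        demand_cap       = [40.0, 60.0, 120.0]  # maximum useful allocation per user
        available_water  = 150.0                # volume available in the planning period

        # linprog minimizes, so negate the benefits to maximize them.
        res = linprog(
            c=[-b for b in benefit_per_unit],
            A_ub=[[1.0, 1.0, 1.0]], b_ub=[available_water],   # total supply constraint
            bounds=[(0.0, cap) for cap in demand_cap],
            method="highs",
        )
        print("allocations:", [round(x, 1) for x in res.x])
        print("total benefit:", round(-res.fun, 1))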

  11. Designing Cloud Infrastructure for Big Data in E-government

    Directory of Open Access Journals (Sweden)

    Jelena Šuh

    2015-03-01

    Full Text Available The development of new information services and technologies, especially in the domains of mobile communications, the Internet of Things, and social media, has led to the appearance of large quantities of unstructured data. Pervasive computing also affects e-government systems, where big data emerges and cannot be processed and analyzed in a traditional manner due to its complexity, heterogeneity and size. The subject of this paper is the design of a cloud infrastructure for big data storage and processing in e-government. The goal is to analyze the potential of cloud computing for big data infrastructure and to propose a model for effectively storing, processing and analyzing big data in e-government. The paper provides an overview of current relevant concepts related to cloud infrastructure design that should provide support for big data. The second part of the paper gives a model of the cloud infrastructure based on the concepts of software-defined networks and multi-tenancy. The final goal is to support projects in the field of big data in e-government.

  12. Translating Big Data into Smart Data for Veterinary Epidemiology.

    Science.gov (United States)

    VanderWaal, Kimberly; Morrison, Robert B; Neuhauser, Claudia; Vilalta, Carles; Perez, Andres M

    2017-01-01

    The increasing availability and complexity of data has led to new opportunities and challenges in veterinary epidemiology around how to translate abundant, diverse, and rapidly growing "big" data into meaningful insights for animal health. Big data analytics are used to understand health risks and minimize the impact of adverse animal health issues through identifying high-risk populations, combining data or processes acting at multiple scales through epidemiological modeling approaches, and harnessing high velocity data to monitor animal health trends and detect emerging health threats. The advent of big data requires the incorporation of new skills into veterinary epidemiology training, including, for example, machine learning and coding, to prepare a new generation of scientists and practitioners to engage with big data. Establishing pipelines to analyze big data in near real-time is the next step for progressing from simply having "big data" to create "smart data," with the objective of improving understanding of health risks, effectiveness of management and policy decisions, and ultimately preventing or at least minimizing the impact of adverse animal health issues.

  13. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  14. Think big: learning contexts, algorithms and data science

    Directory of Open Access Journals (Sweden)

    Baldassarre Michele

    2016-12-01

    Full Text Available Due to the increasing growth of available data in recent years, all areas of research, as well as the management of institutions and organisations, specifically schools and universities, feel the need to give meaning to this abundance of data. This article, after a brief reference to the definition of big data, focuses attention and reflection on their types in order to extend their characterisation. One of the keys to making the use of Big Data feasible in operational contexts is to give it a theoretical basis to which to refer. The Data, Information, Knowledge and Wisdom (DIKW) model correlates these four aspects, culminating in Data Science, which in many ways could revolutionise the established pattern of scientific investigation. Learning Analytics applications on online learning platforms can be tools for evaluating the quality of teaching. And that is where some problems arise: it becomes necessary to handle the available data with care. Finally, a criterion for deciding whether it makes sense to plan an analysis based on Big Data is to consider its interpretability and relevance in relation to both institutional and personal processes.

  15. Integrating R and Hadoop for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-06-01

    Full Text Available Analyzing and working with big data can be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources can produce more relevant and timely statistics than traditional sources. One of the software tools successfully and widely used for the storage and processing of big data sets on clusters of commodity hardware is Hadoop. The Hadoop framework contains libraries, a distributed file system (HDFS), a resource-management platform, and an implementation of the MapReduce programming model for large-scale data processing. In this paper we investigate the possibilities of integrating Hadoop with R, which is a popular software environment for statistical computing and data visualization. We present three ways of integrating them: R with Streaming, Rhipe and RHadoop, and we emphasize the advantages and disadvantages of each solution.
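
    The first integration route, R with Hadoop Streaming, relies only on scripts that read key-value lines on stdin and write them to stdout. For consistency with the other sketches in this section, the mapper/reducer pair below is written in Python rather than R, but the streaming mechanism is the same; the file names and input format are assumptions.

        # mapper.py / reducer.py style sketch for Hadoop Streaming: any executable
        # that reads lines from stdin and writes "key<TAB>value" lines can be used,
        # which is why R scripts fit in the same slots.
        import sys

        def mapper():
            # Assumed CSV input "region,value,..."; emit "region<TAB>value".
            for line in sys.stdin:
                region, value = line.rstrip("\n").split(",")[:2]
                print(f"{region}\t{value}")

        def reducer():
            # Hadoop sorts mapper output by key, so equal keys arrive together;
            # here we average the values per region.
            current, total, count = None, 0.0, 0
            for line in sys.stdin:
                key, value = line.rstrip("\n").split("\t")
                if key != current:
                    if current is not None:
                        print(f"{current}\t{total / count}")
                    current, total, count = key, 0.0, 0
                total += float(value)
                count += 1
            if current is not None:
                print(f"{current}\t{total / count}")

        if __name__ == "__main__":
            # In a real job the two functions live in separate files, launched e.g. with
            # the standard streaming flags:
            #   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
            #          -mapper mapper.py -reducer reducer.py
            mapper() if "--map" in sys.argv else reducer()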

  16. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  17. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions, based on the fact that we identify patterns in the data rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  18. Climate change and water availability for vulnerable agriculture

    Science.gov (United States)

    Dalezios, Nicolas; Tarquis, Ana Maria

    2017-04-01

    Climatic projections for the Mediterranean basin indicate that the area will suffer a decrease in water resources due to climate change. The key climatic trends identified for the Mediterranean region are a continuous temperature increase, further drying with decreasing precipitation, and the accentuation of climate extremes, such as droughts, heat waves and/or forest fires, which are expected to have a profound effect on agriculture. Indeed, the impact of climate variability on agricultural production is important at local, regional, national, as well as global scales. Agriculture of any kind is strongly influenced by the availability of water. Climate change will modify rainfall, evaporation, runoff, and soil moisture storage patterns. Changes in total seasonal precipitation or in its pattern of variability are both important. Similarly, with higher temperatures, the water-holding capacity of the atmosphere and evaporation into the atmosphere increase, and this favors increased climate variability, with more intense precipitation and more droughts. As a result, crop yields are affected by variations in climatic factors, such as air temperature and precipitation, and by the frequency and severity of the above-mentioned extreme events. The aim of this work is to briefly present the main effects of climate change and variability on water resources with respect to water availability for vulnerable agriculture, namely in the Mediterranean region. Results of studies undertaken in Greece on precipitation patterns and drought assessment using historical data records are presented. Based on precipitation frequency analysis, evidence of precipitation reductions is shown. Drought is assessed through an agricultural drought index, namely the Vegetation Health Index (VHI), in Thessaly, a drought-prone region in central Greece. The results justify the importance of water availability for vulnerable agriculture and the need for drought monitoring in the Mediterranean basin as part of

  19. SDN Low Latency for Medical Big Data Using Wavelets

    Directory of Open Access Journals (Sweden)

    Fadia Shah

    2017-06-01

    Full Text Available The new era is the age of 5G. The network has moved from simple internet connections towards advanced LTE connections and transmission, and information and communication technology has reshaped telecommunication. Among the many types of big data, Medical Big Data is one of the most sensitive forms. Wavelets are a technical tool for reducing the size of such data so that it remains available to the user for longer, and they also support low-latency, high-speed data transmission over the network. The key concern is that Medical Big Data should be accurate and reliable enough that the recommended treatment is the appropriate one. This paper proposes a scheme that supports data availability without losing crucial information, compressing the medical data with wavelets and making the data available over the wireless network through a supportive SDN architecture. Such a scheme favors the efficient use of technology for the benefit of human beings in support of medical treatment.
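
    As a rough illustration of the wavelet-based size reduction the abstract relies on, the sketch below decomposes a signal, discards the smallest coefficients, and reconstructs it. It assumes the PyWavelets package (pywt); the wavelet family, decomposition level, keep ratio, and the synthetic trace are illustrative choices, not the paper's actual scheme or parameters.

        import numpy as np
        import pywt  # PyWavelets, assumed available


        def wavelet_compress(signal, wavelet="db4", level=4, keep=0.10):
            """Keep only the largest `keep` fraction of wavelet coefficients."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            magnitudes = np.concatenate([np.abs(c) for c in coeffs])
            threshold = np.quantile(magnitudes, 1.0 - keep)
            thresholded = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]
            return pywt.waverec(thresholded, wavelet)


        # A synthetic trace stands in for a medical signal (illustrative only).
        t = np.linspace(0.0, 1.0, 1024)
        signal = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
        reconstructed = wavelet_compress(signal)[: t.size]
        print("mean squared reconstruction error:", float(np.mean((signal - reconstructed) ** 2)))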

  20. Water availability and trachoma.

    OpenAIRE

    West, S.; Lynch, M.; Turner, V.; Munoz, B.; Rapoza, P.; Mmbaga, B. B.; Taylor, H. R.

    1989-01-01

    As part of an epidemiological survey of risk factors for trachoma in 20 villages in the United Republic of Tanzania, we investigated the relationship of village water pumps, distance to water source, and quantity of household water to the risk of inflammatory trachoma. We also evaluated whether there was an association between the cleanliness of children's faces and these water variables. No association was found between the presence of a village water supply and the prevalence of trachoma. H...

  1. NDE Big Data Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NDE data has become "Big Data", and is overwhelming the abilities of NDE technicians and commercially available tools to deal with it. In the current state of the...

  2. Assessing water availability over peninsular Malaysia using public domain satellite data products

    International Nuclear Information System (INIS)

    Ali, M I; Hashim, M; Zin, H S M

    2014-01-01

    Water availability monitoring is an essential task for water resource sustainability and security. In this paper, the assessment of a satellite remote sensing technique for determining water availability is reported. Water-balance analysis is used to compute spatio-temporal water availability with two main inputs: precipitation and the actual evapotranspiration rate (AET), fully derived from public-domain satellite products of the Tropical Rainfall Measurement Mission (TRMM) and MODIS, respectively. Both satellite products were first calibrated against corresponding local precipitation and AET samples. Multi-temporal data sets acquired in 2000-2010 were used in this study. The results indicated strong agreement of monthly water availability with the basin flow rate (r² = 0.5, p < 0.001). Similar agreement was also noted between the estimated annual average water availability and in-situ measurements. It is therefore concluded that the method devised in this study provides a new alternative for water availability mapping over large areas, and hence offers a timely and cost-effective method, apart from providing comprehensive spatio-temporal patterns, crucial in water resource planning to ensure water security
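
    The core computation described here is a grid-cell water balance, availability = precipitation − AET, applied month by month after calibrating each satellite product against local samples. A minimal sketch of that calculation is given below; the array shapes, the linear calibration form, and the synthetic numbers are assumptions for illustration, not values from the study.

        import numpy as np


        def monthly_water_availability(trmm_precip, modis_aet,
                                       precip_gain=1.0, precip_offset=0.0,
                                       aet_gain=1.0, aet_offset=0.0):
            """Water balance per grid cell: calibrated precipitation minus calibrated AET.

            Inputs are (months, rows, cols) arrays in mm/month; the gain/offset terms
            stand in for the calibration against local samples (placeholder values).
            """
            p = precip_gain * trmm_precip + precip_offset
            aet = aet_gain * modis_aet + aet_offset
            return p - aet  # positive = surplus, negative = deficit


        # Tiny synthetic example: 12 months over a 3 x 3 grid.
        rng = np.random.default_rng(0)
        precip = rng.uniform(100.0, 300.0, size=(12, 3, 3))
        aet = rng.uniform(80.0, 200.0, size=(12, 3, 3))
        availability = monthly_water_availability(precip, aet)
        print("annual availability (mm):")
        print(availability.sum(axis=0).round(1))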

  3. Construction of a groundwater-flow model for the Big Sioux Aquifer using airborne electromagnetic methods, Sioux Falls, South Dakota

    Science.gov (United States)

    Valder, Joshua F.; Delzer, Gregory C.; Carter, Janet M.; Smith, Bruce D.; Smith, David V.

    2016-09-28

    The city of Sioux Falls is the fastest growing community in South Dakota. In response to this continued growth and planning for future development, Sioux Falls requires a sustainable supply of municipal water. Planning and managing sustainable groundwater supplies requires a thorough understanding of local groundwater resources. The Big Sioux aquifer consists of glacial outwash sands and gravels and is hydraulically connected to the Big Sioux River, which provided about 90 percent of the city’s source-water production in 2015. Managing sustainable groundwater supplies also requires an understanding of groundwater availability. An effective mechanism to inform water management decisions is the development and utilization of a groundwater-flow model. A groundwater-flow model provides a quantitative framework for synthesizing field information and conceptualizing hydrogeologic processes. These groundwater-flow models can support decision making processes by mapping and characterizing the aquifer. Accordingly, the city of Sioux Falls partnered with the U.S. Geological Survey to construct a groundwater-flow model. Model inputs will include data from advanced geophysical techniques, specifically airborne electromagnetic methods.

  4. Implementing the “Big Data” Concept in Official Statistics

    Directory of Open Access Journals (Sweden)

    О. V.

    2017-02-01

    Full Text Available Big data is a huge resource that needs to be used at all levels of economic planning. The article is devoted to the study of the development of the concept of “Big Data” in the world and its impact on the transformation of statistical simulation of economic processes. Statistics at the current stage should take into account the complex system of international economic relations, which functions in the conditions of globalization and brings new forms of economic development in small open economies. Statistical science should take into account such phenomena as the gig economy, the common economy, institutional factors, etc. The concept of “Big Data” and open data are analyzed, and problems of implementing “Big Data” in official statistics are shown. Ways of implementing “Big Data” in the official statistics of Ukraine through active use of the technological opportunities of mobile operators, navigation systems, surveillance cameras, social networks, etc. are presented. The possibilities of using “Big Data” in different sectors of the economy, including at the level of individual companies, are shown. The problems of storing large volumes of data are highlighted. The study shows that “Big Data” is a huge resource that should be used across the Ukrainian economy.

  5. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the capacity of traditional systems to handle it. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  6. Mercury concentrations and distribution in soil, water, mine waste leachates, and air in and around mercury mines in the Big Bend region, Texas, USA

    Science.gov (United States)

    Gray, John E.; Theodorakos, Peter M.; Fey, David L.; Krabbenhoft, David P.

    2015-01-01

    Samples of soil, water, mine waste leachates, soil gas, and air were collected from areas mined for mercury (Hg) and baseline sites in the Big Bend area, Texas, to evaluate potential Hg contamination in the region. Soil samples collected within 300 m of an inactive Hg mine contained elevated Hg concentrations (3.8–11 µg/g), which were considerably higher than Hg in soil collected from baseline sites (0.03–0.05 µg/g) distal (as much as 24 km) from mines. Only three soil samples collected within 300 m of the mine exceeded the probable effect concentration for Hg of 1.06 µg/g, above which harmful effects are likely to be observed in sediment-dwelling organisms. Concentrations of Hg in mine water runoff (7.9–14 ng/L) were generally higher than those found in springs and wells (0.05–3.1 ng/L), baseline streams (1.1–9.7 ng/L), and sources of drinking water (0.63–9.1 ng/L) collected in the Big Bend region. Concentrations of Hg in all water samples collected in this study were considerably below the 2,000 ng/L drinking water Hg guideline and the 770 ng/L guideline recommended by the U.S. Environmental Protection Agency (USEPA) to protect aquatic wildlife from chronic effects of Hg. Concentrations of Hg in water leachates obtained from leaching of mine wastes varied widely; although Hg concentrations in the mine wastes were elevated, persistent wind in southwest Texas disperses Hg in the air within a few meters of the ground surface.

  7. Water-scarcity patterns : spatiotemporal interdependencies between water use and water availability in a semi-arid river basin

    NARCIS (Netherlands)

    van Oel, P.R.

    2009-01-01

    This thesis addresses the interdependencies between water use and water availability and describes a model that has been developed to improve understanding of the processes that drive changes and variations in the spatial and temporal distribution of water resources in a semi-arid river basin. These

  8. A Brief Review on Leading Big Data Models

    Directory of Open Access Journals (Sweden)

    Sugam Sharma

    2014-11-01

    Full Text Available Today, science is passing through an era of transformation, where the inundation of data, dubbed the data deluge, is influencing the decision-making process. Science is driven by data and is being termed data science. In this internet age, the volume of data has grown up to petabytes, and this large, complex, structured or unstructured, and heterogeneous data in the form of “Big Data” has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new-age data is processed. Confidence in the data engineering of existing data processing systems is gradually fading, whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. Also, we take the potential of Big Data into consideration and try to reshape the original operational-oriented definition of “Big Science” (Furner, 2003) into a new data-driven definition and rephrase it as “The science that deals with Big Data is Big Science.”

  9. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go into somewhat more depth on the theme, the definitions and the various questions associated with big data. In this first part I will try to set this out with regard to Big Data theory and

  10. RETRAN operational transient analysis of the Big Rock Point plant boiling water reactor

    International Nuclear Information System (INIS)

    Sawtelle, G.R.; Atchison, J.D.; Farman, R.F.; VandeWalle, D.J.; Bazydlo, H.G.

    1983-01-01

    Energy Incorporated used the RETRAN computer code to model and calculate nine Consumers Power Company Big Rock Point Nuclear Power Plant transients. RETRAN, a best-estimate, one-dimensional, homogeneous-flow thermal-equilibrium code, is applicable to FSAR Chapter 15 transients for Conditions I through IV. The BWR analyses were performed in accordance with USNRC Standard Review Plan criteria and in response to the USNRC Systematic Evaluation Program. The RETRAN Big Rock Point model was verified by comparison to plant startup test data. This paper discusses the unique modeling techniques used in RETRAN to model this steam-drum-type BWR. Transient analysis results are also presented

  11. The BIG Data Center: from deposition to integration to translation.

    Science.gov (United States)

    2017-01-04

    Biological data are generated at unprecedentedly exponential rates, posing considerable challenges in big data deposition, integration and translation. The BIG Data Center, established at Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, provides a suite of database resources, including (i) Genome Sequence Archive, a data repository specialized for archiving raw sequence reads, (ii) Gene Expression Nebulas, a data portal of gene expression profiles based entirely on RNA-Seq data, (iii) Genome Variation Map, a comprehensive collection of genome variations for featured species, (iv) Genome Warehouse, a centralized resource housing genome-scale data with particular focus on economically important animals and plants, (v) Methylation Bank, an integrated database of whole-genome single-base resolution methylomes and (vi) Science Wikis, a central access point for biological wikis developed for community annotations. The BIG Data Center is dedicated to constructing and maintaining biological databases through big data integration and value-added curation, conducting basic research to translate big data into big knowledge and providing freely open access to a variety of data resources in support of worldwide research activities in both academia and industry. All of these resources are publicly available and can be found at http://bigd.big.ac.cn. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  13. Mapping water availability, projected use and cost in the western United States

    Science.gov (United States)

    Tidwell, Vincent C.; Moreland, Barbara D.; Zemlick, Katie M.; Roberts, Barry L.; Passell, Howard D.; Jensen, Daniel; Forsgren, Christopher; Sehlke, Gerald; Cook, Margaret A.; King, Carey W.; Larsen, Sara

    2014-05-01

    New demands for water can be satisfied through a variety of source options. In some basins surface and/or groundwater may be available through permitting with the state water management agency (termed unappropriated water), alternatively water might be purchased and transferred out of its current use to another (termed appropriated water), or non-traditional water sources can be captured and treated (e.g., wastewater). The relative availability and cost of each source are key factors in the development decision. Unfortunately, these measures are location dependent with no consistent or comparable set of data available for evaluating competing water sources. With the help of western water managers, water availability was mapped for over 1200 watersheds throughout the western US. Five water sources were individually examined, including unappropriated surface water, unappropriated groundwater, appropriated water, municipal wastewater and brackish groundwater. Also mapped was projected change in consumptive water use from 2010 to 2030. Associated costs to acquire, convey and treat the water, as necessary, for each of the five sources were estimated. These metrics were developed to support regional water planning and policy analysis with initial application to electric transmission planning in the western US.
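
    A planning question implied by these metrics is, for each watershed, which of the five sources can meet a projected increase in demand at the lowest cost. The sketch below shows that selection step on a toy table; the column names, volumes, and costs are invented for illustration and do not reproduce the study's dataset.

        import pandas as pd

        # Invented rows; the study covers ~1,200 watersheds and five sources with
        # estimated costs to acquire, convey and treat each source.
        supply = pd.DataFrame({
            "watershed": ["A", "A", "A", "B", "B"],
            "source": ["unappropriated_surface", "wastewater", "brackish_groundwater",
                       "appropriated", "brackish_groundwater"],
            "available_af_yr": [1200.0, 800.0, 5000.0, 300.0, 2500.0],
            "cost_usd_per_af": [150.0, 450.0, 900.0, 600.0, 850.0],
        })
        projected_new_demand_af_yr = {"A": 1000.0, "B": 400.0}


        def cheapest_adequate_source(df, demand):
            """Per watershed, pick the least-cost source able to cover the new demand."""
            picks = []
            for basin, need in demand.items():
                options = df[(df["watershed"] == basin) & (df["available_af_yr"] >= need)]
                if not options.empty:
                    picks.append(options.nsmallest(1, "cost_usd_per_af"))
            return pd.concat(picks, ignore_index=True)


        print(cheapest_adequate_source(supply, projected_new_demand_af_yr))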

  14. 77 FR 54909 - Clean Water Act: Availability of List Decisions

    Science.gov (United States)

    2012-09-06

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9724-6] Clean Water Act: Availability of List Decisions... notice announces EPA's decision to identify certain water quality limited waters and the associated pollutant to be listed pursuant to the Clean Water Act Section 303(d)(2) on New York's list of impaired...

  15. Reduction of Turbidity of Water Using Locally Available Natural Coagulants

    Science.gov (United States)

    Asrafuzzaman, Md.; Fakhruddin, A. N. M.; Hossain, Md. Alamgir

    2011-01-01

    Turbidity poses a great problem in water treatment. Moringa oleifera, Cicer arietinum, and Dolichos lablab were used as locally available natural coagulants in this study to reduce the turbidity of synthetic water. The tests were carried out using artificial turbid water with a conventional jar test apparatus. Optimum mixing intensity and duration were determined. Dosing with water-soluble extracts of Moringa oleifera, Cicer arietinum, and Dolichos lablab reduced turbidity from 100 nephelometric turbidity units (NTU) to 5.9, 3.9, and 11.1 NTU, respectively, and to 5, 3.3, and 9.5 NTU, respectively, after dosing and filtration. Natural coagulants worked better with highly turbid water compared with water of medium or low turbidity. The highest turbidity reduction efficiency (95.89%) was found with Cicer arietinum. Total coliform reductions of about 89 to 96% were also found with natural coagulant treatment of turbid water. Locally available natural coagulants thus offer suitable, easier, and environmentally friendly options for water treatment. PMID:23724307

  16. A peek into the future of radiology using big data applications

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2017-01-01

    Full Text Available Big data refers to the extremely large amounts of data that are available in the radiology department. Big data is identified by four Vs – Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data to gain new knowledge and insights. Big data analytics consists of 6Cs – Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be made for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, a solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multiplanar reconstruction (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources, on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of the dataset can substantially reduce system requirements such as system memory and server load and provide prompt results. However, a word of caution: “big data” should not become “dump data” due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the

  17. Multilayer geospatial analysis of water availability for shale resources development in Mexico

    Science.gov (United States)

    Galdeano, C.; Cook, M. A.; Webber, M. E.

    2017-08-01

    Mexico’s government enacted an energy reform in 2013 that aims to foster competitiveness and private investment throughout the energy sector value chain. As part of this reform, it is expected that extraction of oil and gas via hydraulic fracturing will increase in five shale basins (e.g. Burgos, Sabinas, Tampico, Tuxpan, and Veracruz). Because hydraulic fracturing is a water-intensive activity, it is relevant to assess the potential water availability for this activity in Mexico. This research aims to quantify the water availability for hydraulic fracturing in Mexico and identify its spatial distribution across the five shale basins. The methodology consists of a multilayer geospatial analysis that overlays the water availability in the watersheds and aquifers with the different types of shale resource areas (e.g. oil and associated gas, wet gas and condensate, and dry gas) in the five shale basins. The aquifers and watersheds in Mexico are classified into four zones depending on average annual water availability. Three scenarios were examined based on different levels of impact on watersheds and aquifers from hydraulic fracturing. For the most conservative scenario analyzed, the results showed that the annually available water overlying the shale areas could support the extraction of between 8.15 and 70.42 quadrillion British thermal units (Quads) of energy over the typical 20-30 year lifetime of hydraulically fractured wells, with an average across estimates of around 18.05 Quads. However, geographic variation in water availability could represent a challenge for extracting the shale reserves. Most of the available water is located closer to the Gulf of Mexico, but the areas with the larger recoverable shale reserves coincide with lower water availability in northern Mexico. New water management techniques (such as recycling and re-use), more efficient fracturing methods, shifts in usage patterns, or other water sources need
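
    The multilayer geospatial analysis amounts to intersecting water-availability polygons (watersheds and aquifers, classified by annual availability) with shale-resource polygons. A minimal sketch of such an overlay using GeoPandas is shown below; the toy rectangles, attribute names, and CRS are assumptions for illustration, not the study's data or schema.

        import geopandas as gpd
        from shapely.geometry import box

        # Toy rectangles stand in for watershed/aquifer and shale-basin polygons;
        # attribute names and values are invented for illustration.
        water_units = gpd.GeoDataFrame(
            {"unit": ["watershed_1", "watershed_2"], "avail_hm3_yr": [120.0, 15.0]},
            geometry=[box(0, 0, 2, 2), box(2, 0, 4, 2)], crs="EPSG:3857")
        shale_plays = gpd.GeoDataFrame(
            {"basin": ["Burgos"], "play": ["wet_gas"]},
            geometry=[box(1, 0, 3, 2)], crs="EPSG:3857")

        # Multilayer overlay: which shale acreage coincides with which water units.
        overlap = gpd.overlay(shale_plays, water_units, how="intersection")
        overlap["overlap_area"] = overlap.geometry.area  # in CRS units (toy values here)
        print(overlap[["basin", "play", "unit", "avail_hm3_yr", "overlap_area"]])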

  18. The use of big data in transfusion medicine.

    Science.gov (United States)

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  19. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  20. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  1. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move and how is it analyzed? All these questions will be answered during the talk.

  2. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second after the big bang), and the fate of the Universe are all discussed. (U.K.)

  3. Big Data and reality

    Directory of Open Access Journals (Sweden)

    Ryan Shaw

    2015-11-01

    Full Text Available DNA sequencers, Twitter, MRIs, Facebook, particle accelerators, Google Books, radio telescopes, Tumblr: what do these things have in common? According to the evangelists of “data science,” all of these are instruments for observing reality at unprecedentedly large scales and fine granularities. This perspective ignores the social reality of these very different technological systems, ignoring how they are made, how they work, and what they mean in favor of an exclusive focus on what they generate: Big Data. But no data, big or small, can be interpreted without an understanding of the process that generated them. Statistical data science is applicable to systems that have been designed as scientific instruments, but is likely to lead to confusion when applied to systems that have not. In those cases, a historical inquiry is preferable.

  4. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  5. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  6. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  7. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unattainable. Big data is generally characterized by factors such as volume, velocity and variety; these three factors distinguish it from traditional data use. The possibilities for utilizing this technology are vast. Big data technology has touch points in differ...

  8. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  9. Urban Big Data and Sustainable Development Goals: Challenges and Opportunities

    Directory of Open Access Journals (Sweden)

    Ali Kharrazi

    2016-12-01

    Full Text Available Cities are perhaps one of the most challenging and yet enabling arenas for sustainable development goals. The Sustainable Development Goals (SDGs) emphasize the need to monitor each goal through objective targets and indicators based on common denominators in the ability of countries to collect and maintain relevant standardized data. While this approach is aimed at harmonizing the SDGs at the national level, it presents unique challenges and opportunities for the development of innovative urban-level metrics through big data innovations. In this article, we make the case for advancing more innovative targets and indicators relevant to the SDGs through the emergence of urban big data. We believe that urban policy-makers are faced with unique opportunities to develop, experiment with, and advance big data practices relevant to sustainable development. This can be achieved by situating the application of big data innovations through developing mayoral institutions for the governance of urban big data, advancing the culture and common skill sets for applying urban big data, and investing in specialized research and education programs.

  10. Military Simulation Big Data: Background, State of the Art, and Challenges

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Full Text Available Big data technology has undergone rapid development and attained great success in the business field. Military simulation (MS) is another application domain producing massive datasets created by high-resolution models and large-scale simulations. It is used to study complicated problems such as weapon systems acquisition, combat analysis, and military training. This paper first reviews several large-scale military simulations producing big data (MS big data) for a variety of usages and summarizes the main characteristics of the resulting data. We then look at the technical details involving the generation, collection, processing, and analysis of MS big data. Two frameworks are also surveyed to trace the development of the underlying software platform. Finally, we identify some key challenges and propose a framework as a basis for future work, which considers both simulation and big data management at the same time based on layered and service-oriented architectures. The objective of this review is to help interested researchers learn the key points of MS big data and provide references for tackling the big data problem and performing further research.

  11. Big Rock Point: 35 years of electrical generation

    International Nuclear Information System (INIS)

    Petrosky, T.D.

    1998-01-01

    On September 27, 1962, the 75 MWe boiling water reactor of the Big Rock Point Nuclear Power Station, designed and built by General Electric, went critical for the first time. The US Atomic Energy Commission (AEC) and the plant operator, Consumers Power, had also designed the plant as a research reactor. The first studies were devoted to fuel behavior, higher burnup, and materials research. The reactor was also used for medical technology: Co-60 radiation sources were produced for the treatment of more than 120,000 cancer patients. After the accident at the Three Mile Island-2 nuclear generating unit in 1979, Big Rock Point went through an extensive backfitting phase. Personnel from numerous other American nuclear power plants were trained at the simulator of Big Rock Point. The plant was decommissioned permanently on August 29, 1997 after more than 35 years of operation and a cumulative electric power production of 13,291 GWh. A period of five to seven years is estimated for decommissioning and demolition work up to the 'green field' stage. (orig.) [de]

  12. Occurrence and transport of nitrogen in the Big Sunflower River, northwestern Mississippi, October 2009-June 2011

    Science.gov (United States)

    Barlow, Jeannie R.B.; Coupe, Richard H.

    2014-01-01

    The Big Sunflower River Basin, located within the Yazoo River Basin, is subject to large annual inputs of nitrogen from agriculture, atmospheric deposition, and point sources. Understanding how nutrients are transported in, and downstream from, the Big Sunflower River is key to quantifying their eutrophying effects on the Gulf. Recent results from two Spatially Referenced Regressions on Watershed attributes (SPARROW models), which include the Big Sunflower River, indicate minimal losses of nitrogen in stream reaches typical of the main channels of major river systems. If SPARROW assumptions of relatively conservative transport of nitrogen are correct and surface-water losses through the bed of the Big Sunflower River are negligible, then options for managing nutrient loads to the Gulf of Mexico may be limited. Simply put, if every pound of nitrogen entering the Delta is eventually delivered to the Gulf, then the only effective nutrient management option in the Delta is to reduce inputs. If, on the other hand, it can be shown that processes within river channels of the Mississippi Delta act to reduce the mass of nitrogen in transport, other hydrologic approaches may be designed to further limit nitrogen transport. Direct validation of existing SPARROW models for the Delta is a first step in assessing the assumptions underlying those models. In order to characterize spatial and temporal variability of nitrogen in the Big Sunflower River Basin, water samples were collected at four U.S. Geological Survey gaging stations located on the Big Sunflower River between October 1, 2009, and June 30, 2011. Nitrogen concentrations were generally highest at each site during the spring of the 2010 water year and the fall and winter of the 2011 water year. Additionally, the dominant form of nitrogen varied between sites. For example, in samples collected from the most upstream site (Clarksdale), the concentration of organic nitrogen was generally higher than the concentrations of

  13. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  14. The Role of Social Responsibility in Big Business Practices

    Directory of Open Access Journals (Sweden)

    V A Gurinov

    2010-06-01

    Full Text Available The study of corporate social responsibility has become especially relevant in national scholarship in the context of the development of big business able to assume significant social responsibilities. The article focuses on the nature and specificity of the social responsibility of big business in Russia. The levels of social responsibility and the arrangements for implementing social programmes are also highlighted.

  15. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Book chapter on cryptography for big data security, prepared for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Authors include Ariel Hamlin and Nabil... (contact: arkady@ll.mit.edu). Chapter 1: Introduction.

  16. Water-quality effects on phytoplankton species and density and trophic state indices at Big Base and Little Base Lakes, Little Rock Air Force Base, Arkansas, June through August, 2015

    Science.gov (United States)

    Driver, Lucas; Justus, Billy

    2016-01-01

    Big Base and Little Base Lakes are located on Little Rock Air Force Base, Arkansas, and their close proximity to a dense residential population and an active military/aircraft installation makes the lakes vulnerable to water-quality degradation. The U.S. Geological Survey (USGS) conducted a study from June through August 2015 to investigate the effects of water quality on phytoplankton species and density and trophic state in Big Base and Little Base Lakes, with particular regard to nutrient concentrations. Nutrient concentrations, trophic-state indices, and the large proportion of phytoplankton biovolume composed of cyanobacteria indicate that eutrophic conditions were prevalent in Big Base and Little Base Lakes, particularly in August 2015. Cyanobacteria densities and biovolumes measured in this study likely pose a low to moderate risk of adverse algal toxicity, and the high proportion of filamentous cyanobacteria in the lakes, in relation to other algal groups, is important from a fisheries standpoint because these algae are a poor food source for many aquatic taxa. In both lakes, total nitrogen to total phosphorus (N:P) ratios declined over the sampling period as total phosphorus concentrations increased relative to nitrogen concentrations. The N:P ratios in the August samples (20:1 and 15:1 in Big Base and Little Base Lakes, respectively) and other indications of eutrophic conditions are of concern and suggest that exposure of the two lakes to additional nutrients could cause unfavorable dissolved-oxygen conditions and increase the risk of cyanobacteria blooms and associated cyanotoxin issues.
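
    The N:P figures quoted above are simple ratios of total nitrogen to total phosphorus concentrations. The small sketch below shows the arithmetic with hypothetical concentrations chosen only to reproduce the reported 20:1 and 15:1 values; they are not the measured data from the study.

        def n_to_p_ratio(total_n_mg_per_l, total_p_mg_per_l):
            """Mass-based ratio of total nitrogen to total phosphorus."""
            return total_n_mg_per_l / total_p_mg_per_l


        # Hypothetical August concentrations (mg/L) chosen only to reproduce the
        # reported ratios; they are not the measured values from the study.
        august_samples = {"Big Base Lake": (1.00, 0.05), "Little Base Lake": (0.75, 0.05)}
        for lake, (tn, tp) in august_samples.items():
            print(f"{lake}: N:P = {n_to_p_ratio(tn, tp):.0f}:1")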

  17. Influence of free water availability on a desert carnivore and herbivore.

    Science.gov (United States)

    Kluever, Bryan M; Gese, Eric M; Dempsey, Steven J

    2017-04-01

    Anthropogenic manipulation of finite resources on the landscape to benefit individual species or communities is commonly employed by conservation and management agencies. One such action in arid regions is the construction and maintenance of water developments (i.e., wildlife guzzlers) adding free water on the landscape to buttress local populations, influence animal movements, or affect distributions of certain species of interest. Despite their prevalence, the utility of wildlife guzzlers remains largely untested. We employed a before-after control-impact (BACI) design over a 4-year period on the US Army Dugway Proving Ground, Utah, USA, to determine whether water availability at wildlife guzzlers influenced relative abundance of black-tailed jackrabbits Lepus californicus and relative use of areas near that resource by coyotes Canis latrans , and whether coyote visitations to guzzlers would decrease following elimination of water. Eliminating water availability at guzzlers did not influence jackrabbit relative abundance. Coyote relative use was impacted by water availability, with elimination of water reducing use in areas associated with our treatment, but not with areas associated with our control. Visitations of radio-collared coyotes to guzzlers declined nearly 3-fold following elimination of water. Our study provides the first evidence of a potential direct effect of water sources on a mammalian carnivore in an arid environment, but the ecological relevance of our finding is debatable. Future investigations aimed at determining water effects on terrestrial mammals could expand on our findings by incorporating manipulations of water availability, obtaining absolute estimates of population parameters and vital rates and incorporating fine-scale spatiotemporal data.
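
    The study's BACI logic, where the treatment signal is the interaction between period (before/after water removal) and site type (impact/control), can be illustrated with a toy regression. The sketch below uses pandas and statsmodels on synthetic visit counts; the numbers and the simple OLS model are illustrative assumptions, not the study's actual data or analysis.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic visit counts only; the study's real responses and models differ.
        baci = pd.DataFrame({
            "visits": [9, 11, 10, 2, 3, 2, 10, 9, 11, 10, 9, 12],
            "period": ["before"] * 3 + ["after"] * 3 + ["before"] * 3 + ["after"] * 3,
            "site":   ["impact"] * 6 + ["control"] * 6,
        })

        # In a BACI layout the evidence for a treatment effect is the
        # period-by-site interaction, not either main effect alone.
        model = smf.ols("visits ~ C(period) * C(site)", data=baci).fit()
        print(model.summary().tables[1])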

  18. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  19. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to describe the amount of data. This growth creates a situation in which the classic systems for the collection, storage, processing, and visualization of data are losing the battle against the large volume, speed, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  20. Free Release Standards Utilized at Big Rock Point

    International Nuclear Information System (INIS)

    Robert P. Wills

    2000-01-01

    The decommissioning of Consumers Energy's Big Rock Point (BRP) site involves decommissioning its 75-MW boiling water reactor and all of the associated facilities. Consumers Energy is committed to restoring the site to greenfield conditions. This commitment means that when the decommissioning is complete, all former structures will have been removed, and the site will be available for future use without radiological restrictions. BRP's radiation protection management staff determined that the typical methods used to comply with U.S. Nuclear Regulatory Commission (NRC) regulations for analyzing volumetric material for radionuclides would not fulfill the demands of a facility undergoing decommissioning. The challenge at hand is to comply with regulatory requirements and put into production a large-scale bulk release program. This report describes Consumers Energy's planned approach to the regulatory aspects of free release

  1. Availability and quality of water related to western energy

    International Nuclear Information System (INIS)

    Hudson, H.H.

    1981-01-01

    Much of the nation's energy resources is contained in seven states of the western United States. Arizona, New Mexico, Colorado, Utah, Wyoming, Montana, and North Dakota contain 40% of the nation's coal and 90% of its uranium and shale oil. Although rich in energy resources, these states are chronically deficient in water. Coal mining and subsequent land reclamation require relatively small amounts of water. Plans that require large quantities of water to transport and convert the coal to energy include the operation of coal-slurry pipelines, thermal-electric power generation, and coal gasification. Production of oil from shale by conventional mining techniques may require about three or four unit volumes of water for each unit volume of shale oil produced. Nearly half of this water would be needed to reestablish vegetation on waste material. In-situ extraction of oil would require substantially less water. Extracting and processing uranium require relatively small amounts of water. There may be problems of the quality of local groundwater where solution mining is practiced and where uranium ore is removed from water-saturated rocks that are then exposed to oxidation. Estimates of amounts of water required to support the development of western energy resources are highly variable and depend on the conversion technology, the level of anticipated development, and the quality of the water required by any given use or process. Conservative estimates exceed 2000 cu hm/year by the year 2000. Although water supplies in the amounts anticipated as being needed for energy development are available within the seven states, their availability locally may depend on satisfying environmental objections, modifying legal and institutional arrangements that presently control water distribution and use, and constructing additional reservoirs and distribution systems

  2. Passport to the Big Bang moves across the road

    CERN Document Server

    Corinne Pralavorio

    2015-01-01

    The ATLAS platform of the Passport to the Big Bang circuit has been relocated in front of the CERN Reception.   The ATLAS platform of the Passport to the Big Bang, outside the CERN Reception building. The Passport to the Big Bang platform of the ATLAS Experiment has been moved in front of the CERN Reception to make it more visible and accessible. It had to be dismantled and moved from its previous location in the garden of the Globe of Science and Innovation due to the major refurbishment work in progress on the Globe, and is now fully operational in its new location on the other side of the road, in the Main Reception car-park. The Passport to the Big Bang circuit, inaugurated in 2013, comprises ten platforms installed in front of ten CERN sites and aims to help local residents and visitors to the region understand CERN's research. Dedicated Passport to the Big Bang flyers, containing all necessary information and riddles for you to solve, are available at the CERN Rec...

  3. Nursing Knowledge: Big Data Science-Implications for Nurse Leaders.

    Science.gov (United States)

    Westra, Bonnie L; Clancy, Thomas R; Sensmeier, Joyce; Warren, Judith J; Weaver, Charlotte; Delaney, Connie W

    2015-01-01

    The integration of Big Data from electronic health records and other information systems within and across health care enterprises provides an opportunity to develop actionable predictive models that can increase the confidence of nursing leaders' decisions to improve patient outcomes and safety and control costs. As health care shifts to the community, mobile health applications add to the Big Data available. There is an evolving national action plan that includes nursing data in Big Data science, spearheaded by the University of Minnesota School of Nursing. For the past 3 years, diverse stakeholders from practice, industry, education, research, and professional organizations have collaborated through the "Nursing Knowledge: Big Data Science" conferences to create and act on recommendations for inclusion of nursing data, integrated with patient-generated, interprofessional, and contextual data. It is critical for nursing leaders to understand the value of Big Data science and the ways to standardize data and workflow processes to take advantage of newer cutting edge analytics to support analytic methods to control costs and improve patient quality and safety.

  4. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  5. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  6. Global Water Availability and Requirements for Future Food Production

    NARCIS (Netherlands)

    Gerten, D.; Heinke, J.; Hoff, H.; Biemans, H.; Fader, M.; Waha, K.

    2011-01-01

    This study compares, spatially explicitly and at global scale, per capita water availability and water requirements for food production presently (1971-2000) and in the future given climate and population change (2070-99). A vegetation and hydrology model Lund-Potsdam-Jena managed Land (LPJmL) was

  7. 78 FR 20912 - Clean Water Act: Availability of List Decisions

    Science.gov (United States)

    2013-04-08

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9798-8] Clean Water Act: Availability of List Decisions.... SUMMARY: The Clean Water Act requires that States periodically submit, and EPA approve or disapprove... are not stringent enough to attain or maintain State water quality standards and for which total...

  8. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  9. Effects of geothermal energy utilization on stream biota and water quality at The Geysers, California. Final report. [Big Sulphur, Little Sulphur, Squaw, and Pieta Creeks

    Energy Technology Data Exchange (ETDEWEB)

    LeGore, R.S.

    1975-01-01

    The discussion is presented under the following section headings: biological studies, including fish, insects, and microbiology; stream hydrology; stream water quality, including methods and results; the contribution of tributaries to Big Sulphur Creek, including methods, results, and tributary characterization; standing water at wellheads; steam condensate quality; accidental discharges; trout spawning bed quality; major conclusions; list of references; and appendices. It is concluded that present operational practices at Geysers geothermal field do not harm the biological resources in adjacent streams. The only effects of geothermal development observed during the study were related to operational accidents. (JGB)

  10. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant on global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedback of terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  11. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative ...

  12. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  13. Review and classification of indicators of green water availability and scarcity

    NARCIS (Netherlands)

    Schyns, Joseph Franciscus; Hoekstra, Arjen Ysbert; Booij, Martijn J.

    2015-01-01

    Research on water scarcity has mainly focussed on blue water (ground- and surface water), but green water (soil moisture returning to the atmosphere through evaporation) is also scarce, because its availability is limited and there are competing demands for green water. Crop production, grazing

  14. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  15. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  16. TELECOM BIG DATA FOR URBAN TRANSPORT ANALYSIS – A CASE STUDY OF SPLIT-DALMATIA COUNTY IN CROATIA

    Directory of Open Access Journals (Sweden)

    M. Baučić

    2017-09-01

    Full Text Available Today, big data has become widely available and the new technologies are being developed for big data storage architecture and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms lead to new insights and operational improvements of transport. Based on the telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia is carried out as a part of the “IPA Adriatic CBC//N.0086/INTERMODAL” project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes the further use of big data used in the study.

  17. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  18. A Big Data Decision-making Mechanism for Food Supply Chain

    Directory of Open Access Journals (Sweden)

    Ji Guojun

    2017-01-01

    Full Text Available Many companies have captured and analyzed huge volumes of data to improve the decision-making mechanisms of their supply chains. This paper presents a big data harvest model that uses big data as inputs to make more informed decisions in the food supply chain. By introducing a Bayesian network method, this paper integrates sample data and finds cause-and-effect relationships in the data to predict market demand. Then a deduction graph model that translates food demand into processes and divides processes into tasks and assets is presented, together with an example of how big data in the food supply chain can be combined with the Bayesian network and deduction graph model to guide production decisions. Our conclusions indicate that this decision-making mechanism has vast potential for extracting value from big data.
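
    The record above describes combining sample data in a Bayesian network to predict market demand. As a minimal sketch of that idea (not the paper's actual model; the variables, categories and observations below are hypothetical), one can estimate a conditional probability table for demand from historical records and query it:

      # Sketch: estimate P(demand | season, promotion) from historical records
      # and return the most likely demand level. Data are hypothetical.
      from collections import Counter, defaultdict

      records = [                                   # (season, promotion, demand)
          ("summer", "yes", "high"), ("summer", "no", "medium"),
          ("summer", "yes", "high"), ("winter", "no", "low"),
          ("winter", "yes", "medium"), ("winter", "no", "low"),
      ]

      cpt = defaultdict(Counter)                    # conditional probability table
      for season, promo, demand in records:
          cpt[(season, promo)][demand] += 1

      def predict_demand(season, promo):
          """Return the demand level with the highest conditional probability."""
          counts = cpt[(season, promo)]
          total = sum(counts.values())
          probs = {level: n / total for level, n in counts.items()}
          return max(probs, key=probs.get), probs

      print(predict_demand("summer", "yes"))        # ('high', {'high': 1.0})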

  19. Možnosti využitia Big Data pre Competitive Inteligence (Possibilities of Using Big Data for Competitive Intelligence)

    OpenAIRE

    Verníček, Marek

    2016-01-01

    The main purpose of this thesis is to investigate the use of Big Data for the methods and procedures of Competitive Intelligence. Among the goals of the work is a toolkit for small and large businesses intended to support them throughout the whole Big Data process. Another goal is to design an effective solution for processing Big Data to gain a competitive advantage in business. The theoretical part of the work reviews the scientific literature available in the Czech Republic a...

  20. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  1. Impacts of fresh and aged biochars on plant available water and water use efficiency

    Science.gov (United States)

    The ability of soils to hold sufficient plant available water (PAW) between rainfall events is critical to crop productivity. Most studies indicate that biochar amendments decrease soil bulk density and increase soil water retention. However, limited knowledge exists regarding biochars ability to in...

  2. Big Data for Precision Medicine

    Directory of Open Access Journals (Sweden)

    Daniel Richard Leff

    2015-09-01

    Full Text Available This article focuses on the potential impact of big data analysis to improve health, prevent and detect disease at an earlier stage, and personalize interventions. The role that big data analytics may have in interrogating the patient electronic health record toward improved clinical decision support is discussed. We examine developments in pharmacogenetics that have increased our appreciation of the reasons why patients respond differently to chemotherapy. We also assess the expansion of online health communications and the way in which this data may be capitalized on in order to detect public health threats and control or contain epidemics. Finally, we describe how a new generation of wearable and implantable body sensors may improve wellbeing, streamline management of chronic diseases, and improve the quality of surgical implants.

  3. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  4. A Grey Theory Based Approach to Big Data Risk Management Using FMEA

    Directory of Open Access Journals (Sweden)

    Maisa Mendonça Silva

    2016-01-01

    Full Text Available Big data is the term used to denote enormous sets of data that differ from other classic databases in four main ways: huge volume, high velocity, much greater variety, and big value. In general, data are stored in a distributed fashion and on computing nodes as a result of which big data may be more susceptible to attacks by hackers. This paper presents a risk model for big data, which comprises Failure Mode and Effects Analysis (FMEA) and Grey Theory, more precisely grey relational analysis. This approach has several advantages: it provides a structured approach in order to incorporate the impact of big data risk factors; it facilitates the assessment of risk by breaking down the overall risk to big data; and finally its efficient evaluation criteria can help enterprises reduce the risks associated with big data. In order to illustrate the applicability of our proposal in practice, a numerical example, with realistic data based on expert knowledge, was developed. The numerical example analyzes four dimensions, that is, managing identification and access, registering the device and application, managing the infrastructure, and data governance, and 20 failure modes concerning the vulnerabilities of big data. The results show that the most important aspect of risk to big data relates to data governance.
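
    The grey relational analysis step mentioned in this record can be illustrated with a small, self-contained sketch. The failure-mode scores below are hypothetical and stand in for the expert judgments used in the paper:

      # Grey relational analysis (GRA) for ranking FMEA failure modes.
      # Rows are failure modes; columns are severity, occurrence, detection
      # on a 1-10 scale (hypothetical values, higher = riskier).
      import numpy as np

      scores = np.array([
          [8, 3, 6],
          [5, 7, 4],
          [9, 6, 7],
          [4, 2, 3],
      ], dtype=float)

      # Normalize each criterion to [0, 1]
      norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))

      # Distance of each entry to the worst-risk reference series (all ones)
      delta = np.abs(1.0 - norm)
      rho = 0.5                                     # distinguishing coefficient
      xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

      # Grey relational grade per failure mode; a higher grade means the mode
      # is closer to the worst-risk reference, i.e. riskier
      grade = xi.mean(axis=1)
      print("risk ranking (riskiest first):", np.argsort(grade)[::-1], grade.round(3))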

  5. Hydrogeology, geochemistry, and quality of water of The Basin and Oak Spring areas of the Chisos Mountains, Big Bend National Park, Texas

    Science.gov (United States)

    Baker, E.T.; Buszka, P.M.

    1993-01-01

    Test drilling near two sewage lagoons in The Basin area of the Chisos Mountains, Big Bend National Park, Texas, has shown that the alluvium and colluvium on which the lagoons are located is not saturated in the immediate vicinity of the lagoons. A shallow aquifer, therefore, does not exist in this critical area at and near the lagoons. Should seepage outflow from the lagoons occur, the effluent from the lagoons might eventually be incorporated into shallow ground water moving westward in the direction of Oak Spring. Under these conditions such water could reach the spring. Test borings that bottomed in bedrock below the alluvial and colluvial fill material are dry, indicating that no substantial leakage from the lagoons was detected. Therefore, no contaminant plume was identified. Fill material in The Basin does not contain water everywhere in its extensive outcropping area and supplies only a small quantity of ground water to Window Pouroff, which is the only natural surface outlet of The Basin.

  6. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  7. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  8. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  9. 'Big data' in mental health research: current status and emerging possibilities.

    Science.gov (United States)

    Stewart, Robert; Davis, Katrina

    2016-08-01

    'Big data' are accumulating in a multitude of domains and offer novel opportunities for research. The role of these resources in mental health investigations remains relatively unexplored, although a number of datasets are in use and supporting a range of projects. We sought to review big data resources and their use in mental health research to characterise applications to date and consider directions for innovation in future. A narrative review. Clear disparities were evident in geographic regions covered and in the disorders and interventions receiving most attention. We discuss the strengths and weaknesses of the use of different types of data and the challenges of big data in general. Current research output from big data is still predominantly determined by the information and resources available and there is a need to reverse the situation so that big data platforms are more driven by the needs of clinical services and service users.

  10. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Full Text Available Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
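
    The split/analyze/meta-analyze procedure described above is demonstrated in R in the original study; a language-neutral sketch of the same idea, on simulated data, splits a large sample into shards, estimates a correlation in each shard, and pools the estimates with a fixed-effect (inverse-variance weighted) average:

      # Sketch of split/analyze/meta-analyze on simulated data (the original
      # study uses R; this is only an illustration of the general idea).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 1_000_000
      x = rng.normal(size=n)
      y = 0.3 * x + rng.normal(size=n)           # true correlation about 0.29

      k = 100                                    # split: number of shards
      estimates, variances = [], []
      for xs, ys in zip(np.array_split(x, k), np.array_split(y, k)):
          r = np.corrcoef(xs, ys)[0, 1]          # analyze: per-shard correlation
          z = np.arctanh(r)                      # Fisher z transform
          estimates.append(z)
          variances.append(1.0 / (len(xs) - 3))  # sampling variance of z

      # Meta-analyze: fixed-effect (inverse-variance weighted) pooled estimate
      w = 1.0 / np.array(variances)
      z_pooled = np.sum(w * np.array(estimates)) / np.sum(w)
      print("pooled correlation:", np.tanh(z_pooled))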

  11. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  12. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  13. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  14. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of literature aims to determine the scope of Big Data analytics in healthcare including its applications and challenges in its adoption in healthcare. It also intends to identify the strategies to overcome the challenges. A systematic search of the articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. The articles on Big Data analytics in healthcare published in English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from the internal sources within the hospitals or clinics as well external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is most widely used Big Data analytical technique for healthcare and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application for clinical decision support; optimization of clinical operations and reduction of cost of care (5) major challenge in adoption of Big Data analytics is non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of

  15. Potential Impacts of Food Production on Freshwater Availability Considering Water Sources

    Directory of Open Access Journals (Sweden)

    Shinjiro Yano

    2016-04-01

    Full Text Available We quantify the potential impacts of global food production on freshwater availability (water scarcity footprint; WSF) by applying the water unavailability factor (fwua) as a characterization factor and a global water resource model based on life cycle impact assessment (LCIA). Each water source, including rainfall, surface water, and groundwater, has a distinct fwua that is estimated based on the renewability rate of each geographical water cycle. The aggregated consumptive water use level for food production (water footprint inventory; WI) was found to be 4344 km³/year, and the calculated global total WSF was 18,031 km³ H2Oeq/year, when considering the difference in water sources. According to the fwua concept, which is based on the land area required to obtain a unit volume of water from each source, the calculated annual impact can also be represented as 98.5 × 10⁶ km². This value implies that current agricultural activities require a land area that is over six times larger than the global total cropland. We also present the net import of the WI and WSF, highlighting the importance of quantitative assessments for utilizing global water resources to achieve sustainable water use globally.
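
    The characterization step described in this record multiplies the consumptive water use from each source (WI) by that source's water unavailability factor (fwua) and sums the results into the water scarcity footprint (WSF). The per-source numbers below are hypothetical placeholders, not the study's factors:

      # WSF = sum over sources of (consumptive use x fwua); the values are
      # hypothetical placeholders chosen only to match the aggregate WI above.
      water_use_km3 = {"rainfall": 3000.0, "surface": 1000.0, "groundwater": 344.0}
      fwua = {"rainfall": 2.0, "surface": 6.5, "groundwater": 12.0}   # km3 H2Oeq per km3

      wsf = sum(water_use_km3[src] * fwua[src] for src in water_use_km3)
      print(f"WSF = {wsf:.0f} km3 H2Oeq/year")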

  16. BIG: a Grid Portal for Biomedical Data and Images

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2004-06-01

    Full Text Available Modern management of biomedical systems involves the use of many distributed resources, such as high performance computational resources to analyze biomedical data, mass storage systems to store them, medical instruments (microscopes, tomographs, etc., advanced visualization and rendering tools. Grids offer the computational power, security and availability needed by such novel applications. This paper presents BIG (Biomedical Imaging Grid, a Web-based Grid portal for management of biomedical information (data and images in a distributed environment. BIG is an interactive environment that deals with complex user's requests, regarding the acquisition of biomedical data, the "processing" and "delivering" of biomedical images, using the power and security of Computational Grids.

  17. Social networks, big data and transport planning

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz Sanchez, T.; Lidon Mars Aicart, M. del; Arroyo Lopez, M.R.; Serna Nocedal, A.

    2016-07-01

    The characteristics of people who are related or tied to each individual affects her activitytravel behavior. That influence is especially associated to social and recreational activities, which are increasingly important. Collecting high quality data from those social networks is very difficult, because respondents are asked about their general social life, which is most demanding to remember that specific facts. On the other hand, currently there are different potential sources of transport data, which is characterized by the huge amount of information available, the velocity with it is obtained and the variety of format in which is presented. This sort of information is commonly known as Big Data. In this paper we identify potential sources of social network related big data that can be used in Transport Planning. Then, a review of current applications in Transport Planning is presented. Finally, some future prospects of using social network related big data are highlighted. (Author)

  18. Climate change impacts on snow water availability in the Euphrates-Tigris basin

    Directory of Open Access Journals (Sweden)

    M. Özdoğan

    2011-09-01

    Full Text Available This study investigates the effects of projected climate change on snow water availability in the Euphrates-Tigris basin using the Variable Infiltration Capacity (VIC) macro-scale hydrologic model and a set of regional climate-change outputs from 13 global circulation models (GCMs) forced with two greenhouse gas emission scenarios for two time periods in the 21st century (2050 and 2090). The hydrologic model produces a reasonable simulation of seasonal and spatial variation in snow cover and associated snow water equivalent (SWE) in the mountainous areas of the basin, although its performance is poorer at marginal snow cover sites. While there is great variation across GCM outputs influencing snow water availability, the majority of models and scenarios suggest a significant decline (between 10 and 60 percent) in available snow water, particularly under the high-impact A2 climate change scenario and later in the 21st century. The changes in SWE are more stable when multi-model ensemble GCM outputs are used to minimize inter-model variability, suggesting a consistent and significant decrease in snow-covered areas and associated water availability in the headwaters of the Euphrates-Tigris basin. Detailed analysis of future climatic conditions points to the combined effects of reduced precipitation and increased temperatures as primary drivers of reduced snowpack. Results also indicate a more rapid decline in snow cover in the lower elevation zones than the higher areas in a changing climate, but these findings also contain a larger uncertainty. The simulated changes in snow water availability have important implications for the future of water resources and associated hydropower generation and land-use management and planning in a region already ripe for interstate water conflict. While the changes in the frequency and intensity of snow-bearing circulation systems or the interannual variability related to climate were not considered, the simulated

  19. Big Data: an exploration of research, technologies and application cases

    Directory of Open Access Journals (Sweden)

    Emilcy J. Hernández-Leal

    2017-05-01

    Full Text Available Big Data has become a worldwide trend and, although it still lacks a consensual scientific or academic definition, the market that surrounds it and the associated research areas keep growing every day. This paper reports a systematic review of the literature on Big Data, covering the state of the art of techniques and technologies associated with Big Data, including capture, processing, analysis and data visualization. The characteristics, strengths, weaknesses and opportunities of some applications and Big Data models, which mainly support modeling, analysis, and data mining, are explored. Likewise, some of the future trends for the development of Big Data are introduced through the basic aspects, scope, and importance of each one. The methodology used for the exploration involves two strategies: the first corresponds to a scientometric analysis and the second to a categorization of documents through a web tool that supports the literature review process. As a result, a summary and conclusions about the subject are generated and possible scenarios for future research in the field emerge.

  20. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  1. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
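
    A crude stand-in for the kind of tight-significance selection discussed above (not Autometrics itself) is backward elimination from a general initial model at a strict p-value threshold, sketched here on simulated data with statsmodels:

      # Backward elimination at a tight significance level: start from a general
      # model with many candidate regressors and drop the least significant one
      # until all remaining p-values pass the threshold. Illustration only.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, p = 2000, 40
      X = rng.normal(size=(n, p))
      y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=n)   # only 2 real effects

      alpha = 0.001                       # tight significance level for big samples
      cols = list(range(p))
      while True:
          model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
          pvals = model.pvalues[1:]       # skip the constant
          worst = int(np.argmax(pvals))
          if pvals[worst] < alpha:
              break
          cols.pop(worst)                 # drop the least significant regressor

      print("retained regressors:", cols)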

  2. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. The existing definitions of the term “big data” are analyzed. The article proposes and describes the elements of a generalized formal model of big data. The peculiarities of applying the proposed model components are analyzed, and the fundamental differences between Big Data technology and business analytics are described. Big Data is supported by the distributed file system Google File System ...

  3. BIG GEO DATA MANAGEMENT: AN EXPLORATION WITH SOCIAL MEDIA AND TELECOMMUNICATIONS OPEN DATA

    Directory of Open Access Journals (Sweden)

    C. Arias Munoz

    2016-06-01

    Full Text Available The term Big Data has been recently used to define big, highly varied, complex data sets, which are created and updated at a high speed and require faster processing, namely, a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed) made public by governments, agencies, private enterprises and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets: Firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but they are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN could offer different functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
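
    As a small illustration of the NoSQL option mentioned at the end of this record, geotagged records can be indexed and queried by proximity in MongoDB; the connection string, database and field names below are hypothetical and assume a local server is running:

      # Store and query geotagged social-media records with a 2dsphere index.
      # Connection string, database and field names are hypothetical.
      from pymongo import MongoClient, GEOSPHERE

      client = MongoClient("mongodb://localhost:27017")
      posts = client["big_geo_demo"]["posts"]

      posts.create_index([("location", GEOSPHERE)])   # geospatial index
      posts.insert_one({
          "text": "Traffic jam near the cathedral",
          "location": {"type": "Point", "coordinates": [9.1919, 45.4642]},  # lon, lat
      })

      # Find posts within 500 m of a point of interest
      nearby = posts.find({
          "location": {
              "$near": {
                  "$geometry": {"type": "Point", "coordinates": [9.19, 45.46]},
                  "$maxDistance": 500,
              }
          }
      })
      for doc in nearby:
          print(doc["text"])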

  4. Open Source Tools for Assessment of Global Water Availability, Demands, and Scarcity

    Science.gov (United States)

    Li, X.; Vernon, C. R.; Hejazi, M. I.; Link, R. P.; Liu, Y.; Feng, L.; Huang, Z.; Liu, L.

    2017-12-01

    Water availability and water demands are essential factors for estimating water scarcity conditions. To reproduce historical observations and to quantify future changes in water availability and water demand, two open source tools have been developed by the JGCRI (Joint Global Change Research Institute): Xanthos and GCAM-STWD. Xanthos is a gridded global hydrologic model, designed to quantify and analyze water availability in 235 river basins. Xanthos uses a runoff generation and a river routing modules to simulate both historical and future estimates of total runoff and streamflows on a monthly time step at a spatial resolution of 0.5 degrees. GCAM-STWD is a spatiotemporal water disaggregation model used with the Global Change Assessment Model (GCAM) to spatially downscale global water demands for six major enduse sectors (irrigation, domestic, electricity generation, mining, and manufacturing) from the region scale to the scale of 0.5 degrees. GCAM-STWD then temporally downscales the gridded annual global water demands to monthly results. These two tools, written in Python, can be integrated to assess global, regional or basin-scale water scarcity or water stress. Both of the tools are extensible to ensure flexibility and promote contribution from researchers that utilize GCAM and study global water use and supply.
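
    The downscaling idea behind GCAM-STWD can be illustrated generically (this is not the tool's actual code): a regional annual demand is spread over grid cells with a spatial proxy such as population share, and then over months with a seasonal profile; the weights below are invented:

      # Generic spatial and temporal downscaling of one regional annual demand.
      # Proxy weights and seasonal profile are hypothetical.
      import numpy as np

      annual_demand_km3 = 12.0                       # one region, one sector, one year
      pop_share = np.array([0.05, 0.20, 0.50, 0.25]) # spatial proxy for 4 grid cells
      monthly_profile = np.array(                    # seasonal weights, sum to 1
          [0.05, 0.05, 0.07, 0.09, 0.11, 0.12, 0.12, 0.11, 0.09, 0.08, 0.06, 0.05])

      gridded_annual = annual_demand_km3 * pop_share               # shape (4,)
      gridded_monthly = np.outer(gridded_annual, monthly_profile)  # shape (4, 12)

      assert np.isclose(gridded_monthly.sum(), annual_demand_km3)
      print(gridded_monthly.round(3))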

  5. 77 FR 15368 - Clean Water Act; Availability of List Decisions

    Science.gov (United States)

    2012-03-15

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9646-9] Clean Water Act; Availability of List Decisions AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of Availability and Request for Public Comment. SUMMARY: This action announces the availability of the Environmental Protection Agency's (EPA...

  6. Global assessment of predictability of water availability: A bivariate probabilistic Budyko analysis

    Science.gov (United States)

    Wang, Weiguang; Fu, Jianyu

    2018-02-01

    Estimating continental water availability is of great importance for water resources management, in terms of maintaining ecosystem integrity and sustaining societal development. To more accurately quantify the predictability of water availability, a bivariate probabilistic Budyko approach was developed on the basis of the univariate probabilistic Budyko framework, using a copula-based joint distribution model to account for the dependence between the parameter ω of Wang-Tang's equation and the Normalized Difference Vegetation Index (NDVI), and was applied globally. The results indicate that the predictive performance for global water availability is conditional on the climatic conditions. In comparison with the simple univariate distribution, the bivariate one produces a lower interquartile range on the same global dataset, especially in regions with higher NDVI values, highlighting the importance of developing the joint distribution by taking into account the dependence structure of the parameter ω and NDVI, which can provide a more accurate probabilistic evaluation of water availability.
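
    The dependence-modelling step described above can be sketched with a Gaussian copula that draws correlated samples of the Budyko parameter ω and NDVI instead of treating them as independent; the marginal distributions and correlation value below are invented for illustration:

      # Gaussian copula sketch: correlated samples of omega and NDVI.
      # Marginals and the correlation value are invented, not the study's fit.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rho = 0.6                                    # assumed omega-NDVI correlation
      cov = np.array([[1.0, rho], [rho, 1.0]])

      # 1) correlated standard normals -> 2) uniforms -> 3) chosen marginals
      z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
      u = stats.norm.cdf(z)
      omega = stats.gamma(a=4.0, scale=0.5).ppf(u[:, 0])   # omega > 0
      ndvi = stats.beta(a=2.0, b=2.0).ppf(u[:, 1])         # NDVI in (0, 1)

      print("sample correlation:", np.corrcoef(omega, ndvi)[0, 1].round(2))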

  7. TELECOM BIG DATA FOR URBAN TRANSPORT ANALYSIS – A CASE STUDY OF SPLIT-DALMATIA COUNTY IN CROATIA

    OpenAIRE

    M. Baučić; N. Jajac; M. Bućan

    2017-01-01

    Today, big data has become widely available and the new technologies are being developed for big data storage architecture and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms lead to new insights and operational improvements of transport. Based on the telecom customer data, the...

  8. Entering the ‘big data’ era in medicinal chemistry: molecular promiscuity analysis revisited

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-01-01

    The ‘big data’ concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate. PMID:28670471

  9. 78 FR 45925 - Clean Water Act: Availability of List Decisions

    Science.gov (United States)

    2013-07-30

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9840-5] Clean Water Act: Availability of List Decisions AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of availability. SUMMARY: This notice announces the availability of EPA's Responsiveness Summary Concerning EPA's May 9, 2013 Public Notice of...

  10. Modeling and Analysis in Marine Big Data: Advances and Challenges

    Directory of Open Access Journals (Sweden)

    Dongmei Huang

    2015-01-01

    Full Text Available It is widely recognized that big data has gathered tremendous attention from academic research institutes, governments, and enterprises in all aspects of information sciences. With the growing diversity of marine data acquisition techniques, marine data have grown exponentially in the last decade, forming marine big data. As an innovation, marine big data is a double-edged sword. On the one hand, there are many potential and highly useful values hidden in the huge volume of marine data, which is widely used in marine-related fields, such as tsunami and red-tide warning, prevention, and forecasting, disaster inversion, and visualization modeling after disasters. There is no doubt that future competition in marine sciences and technologies will converge on the exploration of marine data. On the other hand, marine big data also brings about many new challenges in data management, such as the difficulties in data capture, storage, analysis, and applications, as well as data quality control and data security. To highlight theoretical methodologies and practical applications of marine big data, this paper presents a broad view of marine big data and its management, surveys key methods and models, introduces an engineering example that demonstrates the management architecture, and discusses the existing challenges.

  11. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  12. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  13. The big data phenomenon: The business and public impact

    Directory of Open Access Journals (Sweden)

    Chroneos-Krasavac Biljana

    2016-01-01

    Full Text Available The subject of the research in this paper is the emergence of big data phenomenon and application of big data technologies for business' needs with the specific emphasis on marketing and trade. The purpose of the research is to make a comprehensive overview of different discussions about the characteristics, application possibilities, achievements, constraints and the future of big data development. Based on the relevant literature, the concept of big data is presented and the potential of large impact of big data on business activities is discussed. One of the key findings indicates that the most prominent change that big data brings to the business arena is the appearance of new business models, as well as revisions of the existing ones. Substantial part of the paper is devoted to the marketing and marketing research which are under the strong impact of big data. The most exciting outcomes of the research in this domain concerns the new abilities in profiling the customers. In addition to the vast amount of structured data which are used in marketing for a long period, big data initiatives suggest the inclusion of semi-structured and unstructured data, opening up the room for substantial improvements in customer profile analysis. Considering the usage of information communication technologies (ICT as a prerequisite for big data project success, the concept of Networked Readiness Index (NRI is presented and the position of Serbia and regional countries in NRI framework is analyzed. The main outcome of the analysis points out that Serbia, with its NRI score took the lowest position in the region, excluding Albania. Also, Serbia is lagging behind the appropriate EU mean values regarding all observed composite indicators - pillars. Further on, this analysis reveals the domains of ICT usage in Serbia, which could be focused for an improvement and where incentives can be made. These domains are: political and regulatory environment, business and

  14. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  15. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  16. Big data science: A literature review of nursing research exemplars.

    Science.gov (United States)

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Big Social Network Data and Sustainable Economic Development

    Directory of Open Access Journals (Sweden)

    Umit Can

    2017-11-01

    Full Text Available New information technologies have led to the rapid and effective growth of social networks. The amount of data produced by social networks has increased the value of the big data concept, which is one of the popular current phenomena. The immediate or unpredictable effects of a wide array of economic activities on large masses and the reactions to them can be measured by using social media platforms and big data methods. Thus, it would be extremely beneficial to analyze the harmful environmental and social impacts that are caused by unsustainable business applications. As social networks and big data are popular realms currently, their efficient use would be an important factor in sustainable economic development. Accurate analysis of people’s consumption habits and economic tendencies would provide significant advantages to companies. Moreover, unknown consumption factors that affect the economic preferences of individuals can be discovered and economic efficiency can be increased. This study shows that the numerous solution opportunities that are provided by social networks and big data have become significant tools in dynamic policy creation by companies and states, in solving problems related to women’s rights, the environment, and health.

  18. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  19. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of the differences in their core concepts. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  20. Urban Big Data and the Development of City Intelligence

    Directory of Open Access Journals (Sweden)

    Yunhe Pan

    2016-06-01

    Full Text Available This study provides a definition for urban big data while exploring its features and its applications in China's city intelligence. The differences between city intelligence in China and the "smart city" concept in other countries are compared to highlight and contrast the unique definition and model of China's city intelligence presented in this paper. Furthermore, this paper examines the role of urban big data in city intelligence by showing that it not only serves as the cornerstone of this trend but also plays a core role in the diffusion of city intelligence technology and serves as an inexhaustible resource for the sustained development of city intelligence. This study also points out the challenges of shaping and developing China's urban big data. Considering the supporting and core role that urban big data plays in city intelligence, the study then expounds on the key points of urban big data, including infrastructure support, urban governance, public services, and economic and industrial development. Finally, this study points out the utility of city intelligence as an ideal policy tool for advancing the goals of China's urban development. In conclusion, it is imperative that China make full use of its unique advantages in both subjective and objective conditions, including the nation's current state of development and resources, its geographical advantages, and good human relations, to promote the development of city intelligence through the proper application of urban big data.

  1. A Study of the Application of Big Data in a Rural Comprehensive Information Service

    Directory of Open Access Journals (Sweden)

    Leifeng Guo

    2015-05-01

    Full Text Available Big data has attracted extensive interest due to its potential tremendous social and scientific value. Researchers are also trying to extract potential value from agriculture big data. This paper presents a study of information services based on big data from the perspective of a rural comprehensive information service. First, we introduce the background of the rural comprehensive information service, and then we present in detail the National Rural Comprehensive Information Service Platform (NRCISP, which is supported by the national science and technology support program. Next, we discuss big data in the NRCISP according to data characteristics, data sources, and data processing. Finally, we discuss a service model and services based on big data in the NRCISP.

  2. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  3. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem definition to address science challenges.

  4. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers from a multitude of institutions...

  5. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  6. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features drive paradigm changes in statistical and computational methods as well as in computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity; they can lead to wrong statistical inferences and consequently wrong scientific conclusions.
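
    As a concrete illustration of the spurious-correlation point above, the following sketch (synthetic data, not taken from the article) shows that with a small sample and thousands of unrelated predictors, the largest absolute sample correlation with a pure-noise response is still sizeable.

      import numpy as np

      # Synthetic example of spurious correlation under high dimensionality:
      # n = 50 observations, p = 5000 predictors that are independent of y.
      rng = np.random.default_rng(0)
      n, p = 50, 5000
      X = rng.standard_normal((n, p))
      y = rng.standard_normal(n)

      # Pearson correlation of y with every column of X.
      Xc = (X - X.mean(axis=0)) / X.std(axis=0)
      yc = (y - y.mean()) / y.std()
      corr = Xc.T @ yc / n

      # Typically around 0.5-0.6 even though no predictor is related to y.
      print(f"largest |correlation|: {np.abs(corr).max():.2f}")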

  7. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpora of text. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  8. The Big Data Tools Impact on Development of Simulation-Concerned Academic Disciplines

    Directory of Open Access Journals (Sweden)

    A. A. Sukhobokov

    2015-01-01

    Full Text Available The article gives a definition of Big Data on the basis of the 5Vs (Volume, Variety, Velocity, Veracity, Value) and shows examples of tasks that require Big Data tools across a diversity of areas, namely health, education, financial services, industry, agriculture, logistics, retail, information technology, telecommunications and others. An overview of Big Data tools is delivered, including open source products and the IBM Bluemix and SAP HANA platforms. Examples of architectures of corporate data processing and management systems using Big Data tools are shown for big Internet companies and for enterprises in traditional industries. Within the overview, a classification of Big Data tools is proposed that fills gaps in previously developed classifications. The new classification contains 19 classes and embraces several hundred existing and emerging products. The rise and use of Big Data tools, in addition to solving practical problems, affects the development of scientific disciplines concerned with the simulation of technical, natural or socio-economic systems and the solution of practical problems based on the developed models. New schools arise in these disciplines. These new schools address tasks peculiar to each discipline, but for systems with a much bigger number of internal elements and connections between them. The characteristics of the problems to be solved by these new schools do not always meet the criteria for Big Data. It is suggested that Big Data itself be identified as a part of the theory of sorting and searching algorithms. In other disciplines the new schools are named by analogy with Big Data: Big Calculation in numerical methods, Big Simulation in simulation modeling, Big Management in the management of socio-economic systems, and Big Optimal Control in optimal control theory. The paper shows examples of tasks and methods to be developed within these new schools. The observed tendency is not limited to the considered disciplines: there are

  9. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    , modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing......, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  10. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  11. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  12. The emerging role of Big Data in key development issues: Opportunities, challenges, and concerns

    Directory of Open Access Journals (Sweden)

    Nir Kshetri

    2014-12-01

    Full Text Available This paper presents a review of academic literature, policy documents from government organizations and international agencies, and reports from industries and popular media on the trends in Big Data utilization in key development issues and its worthwhileness, usefulness, and relevance. By looking at Big Data deployment in a number of key economic sectors, it seeks to provide a better understanding of the opportunities and challenges of using it for addressing key issues facing the developing world. It reviews the uses of Big Data in agriculture and farming activities in developing countries to assess the capabilities required at various levels to benefit from Big Data. It also provides insights into how the current digital divide is associated with and facilitated by the pattern of Big Data diffusion and its effective use in key development areas. It also discusses the lessons that developing countries can learn from the utilization of Big Data in big corporations as well as in other activities in industrialized countries.

  13. The Challenges of Data Quality and Data Quality Assessment in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Li Cai

    2015-05-01

    Full Text Available High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. Currently, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, this paper analyzes the data characteristics of the big data environment, presents quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, this paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. The research results enrich the theoretical scope of big data and lay a solid foundation for the future by establishing an assessment model and studying evaluation algorithms.
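
    The hierarchical framework above (quality dimensions, characteristics and indexes) can be made concrete with a small sketch; the column names, rules and index definitions below are illustrative assumptions, not the paper's framework.

      import pandas as pd

      # Toy dataset; the rules below (non-missing cells, a range check, a
      # 30-day freshness window) stand in for simple quality indexes.
      df = pd.DataFrame({
          "station_id": ["A1", "A2", None, "A4"],
          "flow_m3s": [12.3, -1.0, 8.7, 15.2],          # -1.0 violates the range rule
          "timestamp": pd.to_datetime(
              ["2015-05-01", "2015-05-01", "2015-05-02", "2014-01-01"]),
      })

      completeness = 1 - df.isna().mean().mean()         # share of non-missing cells
      validity = (df["flow_m3s"] >= 0).mean()            # share passing the range rule
      reference = pd.Timestamp("2015-05-02")
      timeliness = ((reference - df["timestamp"]) <= pd.Timedelta(days=30)).mean()

      print(f"completeness={completeness:.2f} validity={validity:.2f} timeliness={timeliness:.2f}")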

  14. Influence of Big Data on Manufacturing Industry and Strategies of Enterprises: A Literature Review

    Directory of Open Access Journals (Sweden)

    Zhao Zhao

    2017-01-01

    Full Text Available Along with the rapid development of information technologies such as cloud computing, the mobile internet and the internet of things, and the wider adoption of IT, all kinds of data are generated and accumulated rapidly in various ways; the big data era is coming, in which enterprises face both opportunities and unprecedented challenges. In the manufacturing industry, various processes, from decision making to operation and from design to marketing, are being influenced by big data. According to the nature and features of big data, this paper analyzes and extends a classical model of organizational change, Leavitt's model, in order to explore ways for enterprises to cope with the challenges and seize the development opportunities of the big data era. Then, using the extended Leavitt's model, the opportunities and challenges deriving from big data are reviewed, and approaches to making use of and coping with big data are generalized from five perspectives: task, structure, people, technology and environment.

  15. Water availability and demand in the development regions of South Africa

    Directory of Open Access Journals (Sweden)

    A. B. de Villiers

    1988-03-01

    Full Text Available The availability of water data in the development regions is at present insufficient. This is because water supply and demand are calculated for the physical drainage regions (watersheds), while the development regions do not correspond with the drainage regions. The necessary calculations can therefore not currently be made. This paper addresses this problem.

  16. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
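
    One of the combinations suggested above, blending a direct survey estimate with a synthetic prediction built from a Big Data covariate, can be sketched as a simple area-level composite estimator; all numbers and the variance values below are invented for illustration and this is not the article's model.

      import numpy as np

      # Direct survey estimates per area (e.g. poverty rates) and their
      # sampling variances, plus an auxiliary "big data" covariate
      # (e.g. a mobility index); all values are made up.
      direct = np.array([0.21, 0.35, 0.28])
      var_dir = np.array([0.004, 0.010, 0.002])
      x_bigdata = np.array([0.8, 1.9, 1.1])

      # Synthetic part: a simple linear regression of the direct estimates on x.
      beta = np.polyfit(x_bigdata, direct, deg=1)
      synthetic = np.polyval(beta, x_bigdata)

      sigma2_v = 0.003                            # assumed between-area model variance
      gamma = sigma2_v / (sigma2_v + var_dir)     # shrinkage weight per area
      composite = gamma * direct + (1 - gamma) * synthetic

      print(np.round(composite, 3))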

  17. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  18. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. Finally, the methodology and working of a system that will use this data are briefly described.

  19. Towards in silico prognosis using big data

    Directory of Open Access Journals (Sweden)

    Ohs Nicholas

    2016-09-01

    Full Text Available Clinical diagnosis and prognosis usually rely on few or even single measurements despite clinical big data being available. This limits the exploration of complex diseases such as adolescent idiopathic scoliosis (AIS), where the associated low bone mass remains unexplained. Observed low physical activity and increased RANKL/OPG, however, both indicate a mechanobiological cause. To deepen disease understanding, we propose an in silico prognosis approach using clinical big data, i.e. medical images, serum markers, questionnaires and lifestyle data from mobile monitoring devices, and explore the role of inadequate physical activity in a first AIS prototype. It employs a cellular automaton (CA) to represent the medical image, micro-finite element analysis to calculate loading, and a Boolean network to integrate the other biomarkers. Medical images of the distal tibia, physical activity scores, and vitamin D and PTH levels were integrated as measured clinically, while the time development of bone density and RANKL/OPG was observed. Simulation of an AIS patient with normal physical activity and patient-specific vitamin D and PTH levels showed minor changes in bone density, whereas the simulation of the same AIS patient with reduced physical activity led to low density. Both showed unchanged RANKL/OPG and considerable cortical resorption. We conclude that our integrative in silico approach allows us to account for a variety of clinical big data to study complex diseases.

  20. Scientific information in support of water resource management of the Big River area, Rhode Island

    Science.gov (United States)

    Armstrong, David S.; Masterson, John P.; Robinson, Keith W.; Crawley, Kathleen M.

    2015-01-01

    The Rhode Island Water Resources Board (RIWRB) is concerned that the demand for water may exceed the available public water supply in central and southern Rhode Island. Although water is often assumed to be plentiful in Rhode Island because of abundant rainfall, an adequate supply of water is not always available everywhere in the state during dry periods. Concerns that water demand may exceed supply are greatest during the summer, when lower water levels and increased drought potential combine with seasonal increases in peak water demand (Rhode Island Water Resources Board, 2012). High summer water demands are due to increases in outdoor water use, such as lawn watering and agricultural irrigation, and to increased summer population in coastal areas. Water-supply concerns are particularly acute in central and southern Rhode Island, where groundwater is the primary source of drinking water.

  1. Design of Intelligent Manufacturing Big Data Cloud Service Platform

    Directory of Open Access Journals (Sweden)

    Cai Danlin

    2018-01-01

    Full Text Available With the coming of intelligent manufacturing, the technology and application of industrial big data will become widespread. The productivity, competitiveness and innovation of manufacturing industries will be improved through the integrated innovation of big data technology and industry. In addition, products, production processes, management, services, and new forms and models will become more intelligent. They will support the transformation and upgrading of the manufacturing industry and the construction of an open, shared and collaborative ecological environment for the intelligent manufacturing industry.

  2. Surveillance, Snowden, and Big Data: Capacities, consequences, critique

    Directory of Open Access Journals (Sweden)

    David Lyon

    2014-07-01

    Full Text Available The Snowden revelations about National Security Agency surveillance, starting in 2013, along with the ambiguous complicity of internet companies and the international controversies that followed provide a perfect segue into contemporary conundrums of surveillance and Big Data. Attention has shifted from late C20th information technologies and networks to a C21st focus on data, currently crystallized in “Big Data.” Big Data intensifies certain surveillance trends associated with information technology and networks, and is thus implicated in fresh but fluid configurations. This is considered in three main ways: One, the capacities of Big Data (including metadata intensify surveillance by expanding interconnected datasets and analytical tools. Existing dynamics of influence, risk-management, and control increase their speed and scope through new techniques, especially predictive analytics. Two, while Big Data appears to be about size, qualitative change in surveillance practices is also perceptible, accenting consequences. Important trends persist – the control motif, faith in technology, public-private synergies, and user-involvement – but the future-orientation increasingly severs surveillance from history and memory and the quest for pattern-discovery is used to justify unprecedented access to data. Three, the ethical turn becomes more urgent as a mode of critique. Modernity's predilection for certain definitions of privacy betrays the subjects of surveillance who, so far from conforming to the abstract, disembodied image of both computing and legal practices, are engaged and embodied users-in-relation whose activities both fuel and foreclose surveillance.

  3. Farewell to a Big and Rich Nuclear Power Club?

    International Nuclear Information System (INIS)

    Takeda, A.

    2001-01-01

    For the last few decades of the 20th century, we have seen a large number of big nuclear power plants being built and operated in a few rich countries like the United States, France, Germany, the United Kingdom, and Japan. They have standardized the 1000 MWe-type light water reactors, which have an actual generating capacity of more than 1100 MW. (author)

  4. Big data technologies in e-learning

    Directory of Open Access Journals (Sweden)

    Gyulara A. Mamedova

    2017-01-01

    Full Text Available Recently, e-learning around the world has been developing rapidly, and the main problem is to provide students with quality educational information on time. This task cannot be solved without analyzing the large flow of information entering the information environment of e-learning from the participants in the educational process: students, lecturers, administration, etc. This environment contains a large number of different types of data, both structured and unstructured, whose processing is difficult to implement with traditional statistical methods. The aim of the study is to show that the development and implementation of successful e-learning systems requires new technologies that allow large data streams to be stored and processed. Storing big data requires a large amount of disk space. It is shown that clustered NAS (network-attached storage) technology solves this problem efficiently, allowing educational institutions to store their information on NAS servers and share it over the Internet. To process and personalize big data in the e-learning environment, the technologies MapReduce, Hadoop, NoSQL and others are proposed, and the article gives examples of their use in a cloud environment. These technologies allow e-learning to achieve flexibility, scalability, availability, quality of service, security, confidentiality and ease of use of educational information. Another important problem of e-learning is the identification of new, sometimes hidden, interconnections in big data, i.e. new knowledge (data mining), which can be used to improve the educational process and its management. To classify electronic educational resources, identify groups of students with similar psychological, behavioural and intellectual characteristics, and develop individualized educational programmes, methods of big data analysis are proposed. The article shows that at
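
    Since the abstract singles out MapReduce and Hadoop, here is a toy, in-memory illustration of the map/reduce pattern on invented e-learning access logs (a real deployment would run this on Hadoop or Spark; the log format is made up).

      from collections import defaultdict
      from itertools import chain

      logs = [
          "2017-03-01 student42 opened lecture_3",
          "2017-03-01 student17 opened lecture_3",
          "2017-03-02 student42 submitted quiz_1",
      ]

      def map_phase(record):
          # Emit (student_id, 1) for every access event.
          _date, student, _action, _resource = record.split()
          yield (student, 1)

      def reduce_phase(pairs):
          # Sum the emitted counts per student.
          counts = defaultdict(int)
          for key, value in pairs:
              counts[key] += value
          return dict(counts)

      print(reduce_phase(chain.from_iterable(map_phase(r) for r in logs)))
      # {'student42': 2, 'student17': 1}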

  5. Availability of water resources in the rio Bermudez micro-basin. Central Region of Costa Rica

    International Nuclear Information System (INIS)

    Hernando Echevarria, L.; Orozco Montoya, R.

    2015-01-01

    The Rio Bermudez micro-basin makes up part of the principal hydrological resource area in the Central Region of Costa Rica. For this reason a study was done to determine the availability of hydrological resources in said micro-basin to identify areas with potential water availability problems. A monthly water balance was calculated using land use, geomorphology and climate parameters. From these water balance studies, the amount of available water was calculated and classified into four categories, however, in this micro-basin, only three categories were identified: high, medium and moderate water availability. No areas were identified with low water availability, indicating availability is sufficient; however, there is increasing demand on water resources because over half of the micro-basin area is classified as having moderate water availability. (Author)

  6. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter, and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support in the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  8. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  9. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
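
    For readers unfamiliar with the workflow the book covers, a minimal query through the google-cloud-bigquery Python client looks roughly like the sketch below; the project, dataset and table names are placeholders (not from the book), and credentials must already be configured.

      # Requires `pip install google-cloud-bigquery` and application-default
      # credentials. The table referenced below is a hypothetical placeholder.
      from google.cloud import bigquery

      client = bigquery.Client()

      sql = """
          SELECT station_id, AVG(discharge) AS mean_discharge
          FROM `my_project.hydrology.daily_flows`
          GROUP BY station_id
          ORDER BY mean_discharge DESC
          LIMIT 10
      """

      for row in client.query(sql).result():      # blocks until the job finishes
          print(row.station_id, row.mean_discharge)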

  10. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  11. Detection of Ground Water Availability at Buhias Island, Sitaro Regency

    Directory of Open Access Journals (Sweden)

    Zetly E Tamod

    2016-08-01

    Full Text Available The study aims to detect groundwater availability at Buhias Island, Siau Timur Selatan District, Sitaro Regency. The research used a survey method with a geoelectrical instrument, in which subsurface rock resistivity is measured as a geophysical exploration technique using the Wenner-Schlumberger configuration. The resistivity method works by injecting a current into the earth's surface and measuring the resulting potential difference. The study consists of 4 survey tracks, and for each track a model of the subsurface soil layers was constructed. The exploration results were processed using the RES2DINV software to interpret the soil layers from their resistivity values (2D). Interpretation of tracks 1 to 4 concluded that a groundwater layer is present, but that the groundwater is predominantly saline (brackish). The survey lines lie in a basin grading into lowland dominated by mangrove swamp vegetation; this location is the junction between runoff from rainfall falling on the hills and sea water. The bedrock consists of rock layers formed from marine sediments that carry mineral salts.
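
    The measurement principle described above (inject a current, measure the potential difference, scale by an array-dependent geometric factor) reduces to the apparent-resistivity calculation sketched below; the example numbers are illustrative, not the study's readings.

      import math

      def wenner_k(a):
          """Geometric factor for a Wenner array with electrode spacing a (m)."""
          return 2.0 * math.pi * a

      def schlumberger_k(half_ab, half_mn):
          """Geometric factor for a Schlumberger array (AB/2 and MN/2 in metres)."""
          return math.pi * (half_ab**2 - half_mn**2) / (2.0 * half_mn)

      def apparent_resistivity(k, dv, i):
          """rho_a = K * dV / I, in ohm-metres."""
          return k * dv / i

      # Example reading: AB/2 = 25 m, MN/2 = 2.5 m, dV = 18 mV, I = 150 mA.
      k = schlumberger_k(25.0, 2.5)
      print(round(apparent_resistivity(k, 0.018, 0.150), 1), "ohm-m")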

  12. Big Five Measurement via Q-Sort

    Directory of Open Access Journals (Sweden)

    Chris D. Fluckinger

    2014-08-01

    Full Text Available Socially desirable responding presents a difficult challenge in measuring personality. I tested whether a partially ipsative measure (a normatively scored Q-sort containing traditional Big Five items) would produce personality scores indicative of less socially desirable responding compared with Likert-based measures. Across both instructions to respond honestly and the context of applying for a job, the Q-sort produced lower mean scores, lower intercorrelations between dimensions, and validity in predicting supervisor performance ratings similar to that of Likert scales. In addition, the Q-sort produced a more orthogonal structure (though not fully orthogonal) when modeled at the latent level. These results indicate that the Q-sort method did constrain socially desirable responding. Researchers and practitioners should consider Big Five measurement via Q-sort for contexts in which high socially desirable responding is expected.

  13. Research in Big Data Warehousing using Hadoop

    Directory of Open Access Journals (Sweden)

    Abderrazak Sebaa

    2017-05-01

    Full Text Available Traditional data warehouses have played a key role in decision support systems until the recent past. However, the rapid growth of data generated by current applications requires new data warehousing systems able to handle the volume and format of collected datasets, the variety of data sources, the integration of unstructured data, and powerful analytical processing. In the age of Big Data, it is important to keep pace and adapt existing warehouse systems to overcome the new issues and challenges. In this paper, we focus on data warehousing over big data. We discuss the limitations of traditional warehouses, present alternative technologies, and outline related future work for data warehousing.

  14. WaterOnto: Ontology of Context-Aware Grid-Based Riverine Water Management System

    Directory of Open Access Journals (Sweden)

    Muhammad Hussain Mughal

    2017-06-01

    Full Text Available The management of riverine water remains a big challenge, because the volatility of water flow makes it hard to determine the exact time and quantity of water flowing in rivers and available for daily use. Volatile flows caused by varied water sources and irregular flow patterns generate different kinds of management challenges, and the distribution of water in an irrigation network affects the communities that depend on it either way: in the monsoon season, river-belt communities face a high risk of flooding, while communities living far from the river suffer drought. Contemplating this situation, we have developed an ontology for context-aware information representation in a riverine water management system, supporting visualization and proactive planning for complex real-time situations. The purpose of WaterOnto is to improve river water management and enable efficient use of this precious natural resource. It would also help to save the extra water currently discharged to the sea and to non-irrigated areas, and to identify the magnitude and location of water leakage. We conceptualized the stakeholders and relevant entities and developed a taxonomy of irrigation system concepts in a machine-processable structure. Weaving these hierarchies together, we developed a detailed conceptualization of river flow that helps manage the flow of water and enables the extraction of dangerous situations.
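
    A machine-processable taxonomy of this kind can be sketched with RDF triples, for example using rdflib; the class and property names below are illustrative placeholders, not WaterOnto's actual vocabulary.

      # Requires `pip install rdflib`. Class and property names are invented.
      from rdflib import Graph, Namespace, RDF, RDFS, Literal

      WO = Namespace("http://example.org/wateronto#")
      g = Graph()
      g.bind("wo", WO)

      # A tiny class hierarchy: canals and rivers are kinds of watercourse.
      for cls in (WO.Watercourse, WO.River, WO.Canal, WO.GaugingStation):
          g.add((cls, RDF.type, RDFS.Class))
      g.add((WO.River, RDFS.subClassOf, WO.Watercourse))
      g.add((WO.Canal, RDFS.subClassOf, WO.Watercourse))

      # An instance with a context attribute (observed flow).
      g.add((WO.StationA, RDF.type, WO.GaugingStation))
      g.add((WO.StationA, WO.observedFlow, Literal(5400.0)))

      print(g.serialize(format="turtle"))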

  15. Big sized players on the European Union’s financial advisory market

    Directory of Open Access Journals (Sweden)

    Nicolae, C.

    2013-06-01

    Full Text Available The paper presents the activity and the objectives of “The Big Four” Group of Financial Advisory Firms. The “Big Four” are the four largest international professional services networks in accountancy and professional services, offering audit, assurance, tax, consulting, advisory, actuarial, corporate finance and legal services. They handle the vast majority of audits for publicly traded companies as well as many private companies, creating an oligopoly in auditing large companies. It is reported that the Big Four audit all but one of the companies that constitute the FTSE 100, and 240 of the companies in the FTSE 250, an index of the leading mid-cap listing companies.

  16. IDENTIFYING AND ANALYZING THE TRANSIENT AND PERMANENT BARRIERS FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    SARFRAZ NAWAZ BROHI

    2016-12-01

    Full Text Available Auspiciously, big data analytics has made it possible to generate value from immense amounts of raw data. Organizations are able to gain valuable insights which assist them in effective decision making and in providing quality of service by establishing innovative strategies to recognize, examine and address customers’ preferences. However, organizations are reluctant to adopt big data solutions due to several barriers such as data storage and transfer, scalability, data quality, data complexity, timeliness, security, privacy, trust, data ownership, and transparency. Beyond the discussion of big data opportunities, in this paper we present the findings of our in-depth review process, which focused on identifying and analyzing the transient and permanent barriers to adopting big data. Although the transient barriers can be eliminated in the near future with the advent of innovative technical contributions, it is challenging to eliminate the permanent barriers entirely, though their impact can be repeatedly reduced with the efficient and effective use of technology, standards, policies, and procedures.

  17. Big Data, data integrity, and the fracturing of the control zone

    Directory of Open Access Journals (Sweden)

    Carl Lagoze

    2014-11-01

    Full Text Available Despite all the attention to Big Data and the claims that it represents a “paradigm shift” in science, we lack understanding of which qualities of Big Data may contribute to this revolutionary impact. In this paper, we look beyond the quantitative aspects of Big Data (i.e. lots of data) and examine it from a sociotechnical perspective. We argue that a key factor that distinguishes “Big Data” from “lots of data” lies in changes to the traditional, well-established “control zones” that facilitated clear provenance of scientific data, thereby ensuring data integrity and providing the foundation for credible science. The breakdown of these control zones is a consequence of the manner in which our network technology and culture enable and encourage open, anonymous sharing of information, participation regardless of expertise, and collaboration across geographic, disciplinary, and institutional barriers. We are left with the conundrum: how to reap the benefits of Big Data while re-creating a trust fabric and an accountable chain of responsibility that make credible science possible.

  18. 75 FR 52735 - Clean Water Act Section 303(d): Availability of List Decisions

    Science.gov (United States)

    2010-08-27

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9189-7] Clean Water Act Section 303(d): Availability of List...: This notice announces the availability of EPA's decision identifying 12 water quality limited waterbodies and associated pollutants in South Dakota to be listed pursuant to the Clean Water Act Section 303...

  19. 76 FR 20664 - Clean Water Act Section 303(d): Availability of List Decisions

    Science.gov (United States)

    2011-04-13

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9294-5] Clean Water Act Section 303(d): Availability of List... notice announces the availability of EPA's action identifying water quality limited segments and associated pollutants in Louisiana to be listed pursuant to Clean Water Act Section 303(d), and request for...

  20. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the redshifts of quasars, the microwave background radiation, aspects of the general theory of relativity such as the change of the gravitational constant with time, and quantum theory considerations. It is argued that these considerations show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  1. Hydrogeochemical and stream sediment reconnaissance basic data for Big Delta Quadrangle, Alaska

    International Nuclear Information System (INIS)

    1981-01-01

    Field and laboratory data are presented for 1380 water samples from the Big Delta Quadrangle, Alaska. The samples were collected by Los Alamos Scientific Laboratory; laboratory analysis and data reporting were performed by the Uranium Resource Evaluation Project at Oak Ridge, Tennessee

  2. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is increasingly growing, and traditional methods cannot manage it efficiently. In biomedical computation, the continuous challenges are: management, analysis, and storage of the biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques. It also allows a quick access to data using the NoSQL database. Thus, big data technologies include new frameworks to process medical data in a manner similar to biomedical images. It becomes very important to develop methods and/or architectures based on big data technologies, for a complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work methods in the field of big data software, using Hadoop and Spark frameworks. These provide an optimal and efficient architecture for biomedical image analysis. This paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithm for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier, and are adaptable in all of the steps from conception. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big

  3. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  4. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health record data. We underline the challenging issues of big data privacy and security. Regarding big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.

  5. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  6. 76 FR 74057 - Clean Water Act Section 303(d): Availability of List Decisions

    Science.gov (United States)

    2011-11-30

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9498-4] Clean Water Act Section 303(d): Availability of List Decisions AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of availability. SUMMARY: This notice announces the availability of EPA's action identifying water quality limited segments and...

  7. 75 FR 68783 - Clean Water Act Section 303(d): Availability of List Decisions

    Science.gov (United States)

    2010-11-09

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9223-5] Clean Water Act Section 303(d): Availability of List Decisions AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of availability. SUMMARY: This action announces the availability of EPA decisions identifying water quality limited segments and...

  8. Education Policy Research in the Big Data Era: Methodological Frontiers, Misconceptions, and Challenges

    Science.gov (United States)

    Wang, Yinying

    2017-01-01

    Despite abundant data and the increasing data availability brought by technological advances, there have been very few education policy studies that have capitalized on big data, which is characterized by large volume, wide variety, and high velocity. Drawing on the recent progress of using big data in public policy and computational social science…

  9. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; we therefore compare and contrast the two geometries throughout.
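
    For context, the FRW prediction mentioned above rests on the standard Friedmann equations; the lines below restate the textbook argument in LaTeX and are not quoted from the thesis.

      % Friedmann equations for an FRW spacetime (standard textbook form).
      \[
        \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{k}{a^{2}},
        \qquad
        \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right).
      \]
      % For ordinary matter, \rho + 3p > 0, so \ddot a < 0: the scale factor a(t)
      % is concave. Combined with \dot a > 0 today, a(t) must reach zero at a
      % finite time in the past -- the big bang singularity of the FRW models.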

  10. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data......’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD......) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  11. The role of reservoir storage in large-scale surface water availability analysis for Europe

    Science.gov (United States)

    Garrote, L. M.; Granados, A.; Martin-Carrasco, F.; Iglesias, A.

    2017-12-01

    A regional assessment of current and future water availability in Europe is presented in this study. The assessment was made using the Water Availability and Adaptation Policy Analysis (WAAPA) model. The model was built on the river network derived from the Hydro1K digital elevation maps, including all major river basins of Europe. Reservoir storage volume was taken from the World Register of Dams of ICOLD, including all dams with storage capacity over 5 hm3. Potential Water Availability is defined as the maximum amount of water that could be supplied at a certain point of the river network to satisfy a regular demand under pre-specified reliability requirements. Water availability is the combined result of hydrological processes, which determine streamflow in natural conditions, and human intervention, which determines the available hydraulic infrastructure to manage water and establishes water supply conditions through operating rules. The WAAPA algorithm estimates the maximum demand that can be supplied at every node of the river network accounting for the regulation capacity of reservoirs under different management scenarios. The model was run for a set of hydrologic scenarios taken from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), where the PCRGLOBWB hydrological model was forced with results from five global climate models. Model results allow the estimation of potential water stress by comparing water availability to projections of water abstractions along the river network under different management alternatives. The set of sensitivity analyses performed showed the effect of policy alternatives on water availability and highlighted the large uncertainties linked to hydrological and anthropological processes.
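
    The core calculation described, finding the maximum demand that can be met at a given reliability from a regulated reservoir, can be sketched as a simulation plus bisection; the inflow series, capacity and reliability target below are synthetic, and this is not the WAAPA code.

      import numpy as np

      rng = np.random.default_rng(7)
      inflows = rng.gamma(shape=2.0, scale=30.0, size=600)   # hm3/month, synthetic
      capacity = 500.0                                       # hm3, synthetic

      def reliability(demand, inflows, capacity):
          """Fraction of months in which the full demand is supplied."""
          storage, ok = capacity, 0
          for q in inflows:
              water = storage + q
              supplied = min(demand, water)
              ok += supplied >= demand
              storage = min(water - supplied, capacity)      # spill any excess
          return ok / len(inflows)

      def max_demand(inflows, capacity, target=0.98, tol=0.01):
          lo, hi = 0.0, float(np.mean(inflows))              # cannot exceed mean inflow
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if reliability(mid, inflows, capacity) >= target else (lo, mid)
          return lo

      print(f"max demand at 98% monthly reliability: {max_demand(inflows, capacity):.1f} hm3/month")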

  12. Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response

    Directory of Open Access Journals (Sweden)

    Femke Mulder

    2016-08-01

    Full Text Available The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people and often the original ‘crowd’ are excluded from the information flow, and from the interpretation process of crowdsourced crisis knowledge, as used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data, by explaining the process of in and exclusion during data making, towards more responsive humanitarian relief.

  13. Natural radioactivity in bottled mineral water available in Australia

    International Nuclear Information System (INIS)

    Cooper, M.B.; Ralph, B.J.; Wilks, M.J.

    1981-08-01

    The levels of naturally-occurring radioactive elements in bottled mineral water commercially available in Australia have been assessed. The survey concentrated upon 226Ra, 228Ra and 210Pb, radionuclides which have a high toxicity in drinking water. Detectable levels of 226Ra were found to range from 0.02 Bq/l to 0.32 Bq/l in locally-bottled water and from 0.02 Bq/l to 0.44 Bq/l in imported brands. 210Pb levels were found to be generally very low. The 228Ra content of bottled water is expected to have a distribution similar to that of 226Ra; concentrations of 228Ra in excess of 0.7 Bq/l were measured in a number of samples. The radiological health implications of the consumption of bottled mineral water are discussed with reference to existing drinking water standards and also in terms of radiation exposure and the increased risk to health. It was concluded that, although some brands of water contain radioactivity in excess of the drinking-water limits recommended by Australian and overseas authorities, the annual radiation dose to an individual will be below the dose-equivalent limits recommended by the International Commission on Radiological Protection for life-long exposure. The increased risk of radiation-induced fatal disease due to the consumption of bottled mineral water is estimated to be less than 10^-5 and is therefore negligible.
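
    The dose reasoning in the abstract can be reproduced as a back-of-envelope calculation; the daily intake and the ingestion dose coefficient below are assumed illustrative values (of the order of published adult coefficients), not figures taken from the report.

      # Back-of-envelope sketch; intake rate and dose coefficient are assumptions.
      concentration_bq_per_l = 0.44      # highest 226Ra concentration quoted above
      intake_l_per_year = 1.0 * 365      # assumed 1 litre of bottled water per day
      dose_coeff_sv_per_bq = 2.8e-7      # assumed adult ingestion coefficient for 226Ra

      annual_dose_sv = concentration_bq_per_l * intake_l_per_year * dose_coeff_sv_per_bq
      print(f"annual committed dose ~ {annual_dose_sv * 1e6:.0f} microsievert")
      # Well below the roughly 1 mSv/year order of magnitude used for public
      # exposure limits, consistent with the abstract's conclusion.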

  14. Big data in psychology: Introduction to the special issue.

    Science.gov (United States)

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Hydrologic modeling for monitoring water availability in Eastern and Southern Africa

    Science.gov (United States)

    McNally, A.; Harrison, L.; Shukla, S.; Pricope, N. G.; Peters-Lidard, C. D.

    2017-12-01

    Severe droughts in 2015, 2016 and 2017 in Ethiopia, Southern Africa, and Somalia have negatively impacted agriculture and municipal water supplies resulting in food and water insecurity. Information from remotely sensed data and field reports indicated that the Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation (FLDAS) accurately tracked both the anomalously low soil moisture, evapotranspiration and runoff conditions. This work presents efforts to more precisely monitor how the water balance responds to water availability deficits (i.e. drought) as estimated by the FLDAS with CHIRPS precipitation, MERRA-2 meteorological forcing and the Noah33 land surface model.Preliminary results indicate that FLDAS streamflow estimates are well correlated with observed streamflow where irrigation and other channel modifications are not present; FLDAS evapotranspiration (ET) is well correlated with ET from the Operational Simplified Surface Energy Balance model (SSEBop) in Eastern and Southern Africa. We then use these results to monitor availability, and explore trends in water supply and demand.

  16. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
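
    The propensity-score adjustment mentioned above can be sketched with synthetic data; the covariates, the logistic-regression propensity model, and the inverse-probability-weighting estimator below are illustrative assumptions rather than the methods of any particular study.

```python
# Sketch: propensity scores via logistic regression and an inverse-probability-
# weighted (IPW) estimate of a treatment effect. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(60, 10, n)                       # confounder
severity = rng.normal(0, 1, n)                    # confounder
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 0.05 * age + 1.5 * severity + rng.normal(0, 1, n)

X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]   # propensity scores

w = treated / ps + (1 - treated) / (1 - ps)       # inverse-probability weights
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"IPW-estimated treatment effect: {ate:.2f} (true effect is 2.0)")
```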

  17. Big Data in Drug Discovery.

    Science.gov (United States)

    Brown, Nathan; Cambruzzi, Jean; Cox, Peter J; Davies, Mark; Dunbar, James; Plumbley, Dean; Sellwood, Matthew A; Sim, Aaron; Williams-Jones, Bryn I; Zwierzyna, Magdalena; Sheppard, David W

    2018-01-01

    Interpretation of Big Data in the drug discovery community should enhance project timelines and reduce clinical attrition through improved early decision making. The issues we encounter start with the sheer volume of data and how we first ingest it before building an infrastructure to house it to make use of the data in an efficient and productive way. There are many problems associated with the data itself including general reproducibility, but often, it is the context surrounding an experiment that is critical to success. Help, in the form of artificial intelligence (AI), is required to understand and translate the context. On the back of natural language processing pipelines, AI is also used to prospectively generate new hypotheses by linking data together. We explain Big Data from the context of biology, chemistry and clinical trials, showcasing some of the impressive public domain sources and initiatives now available for interrogation. © 2018 Elsevier B.V. All rights reserved.

  18. Codevelopment in personality : the interplay between big five traits, self esteem, and satisfaction in couples and families

    OpenAIRE

    Weidmann, Rebekka

    2016-01-01

    Big Five traits and self-esteem play a crucial role in explaining satisfaction in couples. Moreover, no clear answer exists whether similarity in Big Five traits and self-esteem predict couple satisfaction. Further, little evidence exists showing whether relationship satisfaction predicts Big Five traits and self-esteem. These personality constructs have rarely been studied conjointly and no research is available to give some indication of how family members impact each other in Big Five trai...

  19. A peek into the future of radiology using big data applications.

    Science.gov (United States)

    Kharat, Amit T; Singhal, Shubham

    2017-01-01

    Big data is an extremely large amount of data which is available in the radiology department. Big data is identified by four Vs - Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs - Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be done for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multi planar reconstructions (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of dataset can substantially reduce the system requirements such as system memory and server load, and provide prompt results. However, a word of caution: "big data" should not become "dump data" due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized and

  20. A peek into the future of radiology using big data applications

    Science.gov (United States)

    Kharat, Amit T.; Singhal, Shubham

    2017-01-01

    Big data is an extremely large amount of data which is available in the radiology department. Big data is identified by four Vs – Volume, Velocity, Variety, and Veracity. By applying different algorithmic tools and converting raw data to transformed data in such large datasets, there is a possibility of understanding and using radiology data for gaining new knowledge and insights. Big data analytics consists of 6Cs – Connection, Cloud, Cyber, Content, Community, and Customization. The global technological prowess and per-capita capacity to save digital information has roughly doubled every 40 months since the 1980s. By using big data, the planning and implementation of radiological procedures in radiology departments can be given a great boost. Potential applications of big data in the future are scheduling of scans, creating patient-specific personalized scanning protocols, radiologist decision support, emergency reporting, virtual quality assurance for the radiologist, etc. Targeted use of big data applications can be done for images by supporting the analytic process. Screening software tools designed on big data can be used to highlight a region of interest, such as subtle changes in parenchymal density, solitary pulmonary nodule, or focal hepatic lesions, by plotting its multidimensional anatomy. Following this, we can run more complex applications such as three-dimensional multi planar reconstructions (MPR), volumetric rendering (VR), and curved planar reconstruction, which consume higher system resources on targeted data subsets rather than querying the complete cross-sectional imaging dataset. This pre-emptive selection of dataset can substantially reduce the system requirements such as system memory and server load, and provide prompt results. However, a word of caution: “big data” should not become “dump data” due to inadequate and poor analysis and non-structured, improperly stored data. In the near future, big data can ring in the era of personalized

  1. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  2. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  3. Fuzzy VIKOR approach for selection of big data analyst in procurement management

    Directory of Open Access Journals (Sweden)

    Surajit Bag

    2016-07-01

    Full Text Available Background: Big data and predictive analysis have been hailed as the fourth paradigm of science. Big data and analytics are critical to the future of business sustainability. The demand for data scientists is increasing with the dynamic nature of businesses, thus making it indispensable to manage big data, derive meaningful results and interpret management decisions. Objectives: The purpose of this study was to provide a brief conceptual review of big data and analytics and further illustrate the use of a multicriteria decision-making technique in selecting the right skilled candidate for big data and analytics in procurement management. Method: It is important for firms to select and recruit the right data analyst, both in terms of skill sets and scope of analysis. Such a problem is complex and calls for multicriteria decision-making, which deals with both qualitative and quantitative factors. In the current study, an application of the Fuzzy VIsekriterijumska optimizacija i KOmpromisno Resenje (VIKOR) method was used to solve the big data analyst selection problem. Results: From this study, it was identified that Technical knowledge (C1), Intellectual curiosity (C4), and Business acumen (C5) are the strongest influential criteria and must be present in the candidate for the big data and analytics job. Conclusion: Fuzzy VIKOR is the perfect technique for this kind of multiple-criteria decision-making problem. This study will assist human resource managers and procurement managers in selecting the right workforce for big data analytics.
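
    For readers unfamiliar with VIKOR, the sketch below ranks candidates with the crisp (non-fuzzy) form of the method; the scores, weights, and candidate set are illustrative assumptions and do not reproduce the fuzzy ratings used in the study.

```python
# Minimal crisp VIKOR sketch for ranking candidates on benefit criteria.
# Scores, weights, and candidates are illustrative assumptions only.
import numpy as np

scores = np.array([        # rows: candidates, columns: criteria C1..C5
    [8, 6, 7, 9, 8],
    [7, 8, 6, 7, 9],
    [9, 5, 8, 6, 7],
], dtype=float)
weights = np.array([0.3, 0.15, 0.15, 0.2, 0.2])    # must sum to 1
v = 0.5                                            # weight of the group-utility strategy

f_best, f_worst = scores.max(axis=0), scores.min(axis=0)
norm = (f_best - scores) / (f_best - f_worst)      # normalized distance to the ideal
S = (weights * norm).sum(axis=1)                   # group utility
R = (weights * norm).max(axis=1)                   # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

for i in np.argsort(Q):                            # lower Q = better compromise solution
    print(f"candidate {i + 1}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")
```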

  4. Pengembangan Aplikasi Antarmuka Layanan Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Gede Karya

    2017-11-01

    Full Text Available In the 2016 Higher Competitive Grants Research (Hibah Bersaing Dikti), we successfully developed models, infrastructure and application modules for Hadoop-based big data analysis. We also developed a virtual private network (VPN) that allows integration with, and access to, this infrastructure from outside the FTIS Computer Laboratory. The analysis infrastructure and application modules are intended to be offered as services to small and medium enterprises (SMEs) in Indonesia. This research aims to develop a big data analysis service interface application integrated with the Hadoop cluster. The research begins with finding appropriate methods and techniques for scheduling jobs, for calling ready-made Java Map-Reduce (MR) application modules, and for tunneling input/output and constructing the meta-data of service requests (input) and service output. These methods and techniques are then developed into a web-based service application, as well as an executable module that runs in a Java and J2EE-based programming environment and can access the Hadoop cluster in the FTIS Computer Lab. The resulting application can be accessed by the public through the site http://bigdata.unpar.ac.id. Based on the test results, the application has functioned well in accordance with the specifications and can be used to perform big data analysis. Keywords: web based service, big data analysis, Hadoop, J2EE
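
    As a minimal illustration of the kind of job invocation such a service layer wraps, the sketch below submits a prebuilt Map-Reduce jar with the standard hadoop jar command; the jar name, main class, and HDFS paths are hypothetical, and this is not the project's actual J2EE interface.

```python
# Minimal sketch of submitting a prebuilt Map-Reduce jar from a service layer.
# Jar, main class, and HDFS paths are hypothetical placeholders.
import subprocess

def submit_job(jar, main_class, hdfs_in, hdfs_out):
    """Run a Hadoop Map-Reduce job via the hadoop CLI and return its exit code."""
    cmd = ["hadoop", "jar", jar, main_class, hdfs_in, hdfs_out]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    submit_job("analysis.jar", "edu.example.WordCount",
               "/user/demo/input", "/user/demo/output")
```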

  5. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  6. Review of Big Gods: How Religion Transformed Cooperation and Conflict

    Directory of Open Access Journals (Sweden)

    Thomas Joseph Coleman III

    2014-05-01

    Full Text Available 'Big Gods: How Religion Transformed Cooperation and Conflict' presents an empirically grounded rational reconstruction detailing the role that belief in “big gods” (i.e., omniscient, omnipresent, and omnipotent gods) has played in the formation of society from a cultural-evolutionary perspective. Ara Norenzayan’s primary thesis is neatly summed up in the title of the book: religion has historically served—and perhaps still serves—as a building block and maintenance system in societies around the world.

  7. Climate Change Impacts on Water Availability and Use in the Limpopo River Basin

    Directory of Open Access Journals (Sweden)

    Tingju Zhu

    2012-01-01

    Full Text Available This paper analyzes the effects of climate change on water availability and use in the Limpopo River Basin of Southern Africa, using a linked modeling system consisting of a semi-distributed global hydrological model and the Water Simulation Module (WSM) of the International Model for Policy Analysis of Agricultural Commodities and Trade (IMPACT). Although the WSM simulates all major water use sectors, the focus of this study is to evaluate the implications of climate change on irrigation water supply in the catchments of the Limpopo River Basin within the four riparian countries: Botswana, Mozambique, South Africa, and Zimbabwe. The analysis found that water resources of the Limpopo River Basin are already stressed under today’s climate conditions. Projected water infrastructure and management interventions are expected to improve the situation by 2050 if current climate conditions continue into the future. However, under the climate change scenarios studied here, water supply availability is expected to worsen considerably by 2050. Assessing hydrological impacts of climate change is crucial given that expansion of irrigated areas has been postulated as a key adaptation strategy for Sub-Saharan Africa. Such expansion will need to take into account future changes in water availability in African river basins.

  8. Telecom Big Data for Urban Transport Analysis - a Case Study of Split-Dalmatia County in Croatia

    Science.gov (United States)

    Baučić, M.; Jajac, N.; Bućan, M.

    2017-09-01

    Today, big data has become widely available and the new technologies are being developed for big data storage architecture and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms lead to new insights and operational improvements of transport. Based on the telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia is carried out as a part of the "IPA Adriatic CBC//N.0086/INTERMODAL" project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes the further use of big data used in the study.

  9. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  10. Estimation of water retention and availability in soils of Rio Grande do Sul

    OpenAIRE

    Reichert,José Miguel; Albuquerque,Jackson Adriano; Kaiser,Douglas Rodrigo; Reinert,Dalvan José; Urach,Felipe Lavarda; Carlesso,Reimar

    2009-01-01

    Dispersed information on water retention and availability in soils may be compiled in databases to generate pedotransfer functions. The objectives of this study were: to generate pedotransfer functions to estimate soil water retention based on easily measurable soil properties; to evaluate the efficiency of existing pedotransfer functions for different geographical regions for the estimation of water retention in soils of Rio Grande do Sul (RS); and to estimate plant-available water capacity ...
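
    A pedotransfer function of the kind described above can be sketched as a simple least-squares fit; the predictors (clay, sand, organic matter) and the synthetic data below are assumptions for illustration, not the functions derived in the study.

```python
# Sketch: fit a linear pedotransfer function predicting volumetric water content
# from easily measured soil properties. The data below are synthetic.
import numpy as np

# columns: clay (%), sand (%), organic matter (%)
X = np.array([[35, 20, 2.1],
              [18, 55, 1.2],
              [48, 10, 3.0],
              [25, 40, 1.8],
              [60,  5, 2.5]], dtype=float)
y = np.array([0.31, 0.18, 0.38, 0.24, 0.42])    # measured water content (cm3/cm3)

A = np.column_stack([np.ones(len(X)), X])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit

new_soil = np.array([1.0, 30.0, 35.0, 2.0])     # intercept term, clay, sand, OM
pred = new_soil @ coef
print(f"predicted water content: {pred:.3f} cm3/cm3")
```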

  11. Methods for Quantifying Shallow-Water Habitat Availability in the Missouri River

    Energy Technology Data Exchange (ETDEWEB)

    Hanrahan, Timothy P.; Larson, Kyle B.

    2012-04-09

    As part of regulatory requirements for shallow-water habitat (SWH) restoration, the U.S. Army Corps of Engineers (USACE) completes periodic estimates of the quantity of SWH available throughout the lower 752 mi of the Missouri River. To date, these estimates have been made by various methods that consider only the water depth criterion for SWH. The USACE has completed estimates of SWH availability based on both depth and velocity criteria at four river bends (hereafter called reference bends), encompassing approximately 8 river miles within the lower 752 mi of the Missouri River. These estimates were made from the results of hydraulic modeling of water depth and velocity throughout each bend. Hydraulic modeling of additional river bends is not expected to be completed for deriving estimates of available SWH. Instead, future estimates of SWH will be based on the water depth criterion. The objective of this project, conducted by the Pacific Northwest National Laboratory for the USACE Omaha District, was to develop geographic information system methods for estimating the quantity of available SWH based on water depth only. Knowing that only a limited amount of water depth and channel geometry data would be available for all the remaining bends within the lower 752 mi of the Missouri River, the intent was to determine what information, if any, from the four reference bends could be used to develop methods for estimating SWH at the remaining bends. Specifically, we examined the relationship between cross-section channel morphology and relative differences between SWH estimates based on combined depth and velocity criteria and the depth-only criterion to determine if a correction factor could be applied to estimates of SWH based on the depth-only criterion. In developing these methods, we also explored the applicability of two commonly used geographic information system interpolation methods (TIN and ANUDEM) for estimating SWH using four different elevation data
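
    The depth-only versus depth-and-velocity comparison can be illustrated on a gridded surface; the thresholds, cell size, and synthetic grids below are assumptions and are not the USACE criteria or data.

```python
# Sketch: estimate shallow-water habitat (SWH) area from gridded depth and velocity.
# Thresholds (depth <= 1.5 m, velocity <= 0.6 m/s) and the 5 m cells are assumed.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 4.0, size=(200, 300))      # water depth (m), synthetic grid
velocity = rng.uniform(0.0, 1.5, size=(200, 300))   # velocity (m/s), synthetic grid
cell_area = 5.0 * 5.0                                # 5 m grid cells -> m^2 per cell

depth_only = depth <= 1.5
combined = depth_only & (velocity <= 0.6)

a_depth = depth_only.sum() * cell_area
a_combined = combined.sum() * cell_area
print(f"depth-only SWH: {a_depth:.0f} m^2, depth+velocity SWH: {a_combined:.0f} m^2")
print(f"correction factor (combined / depth-only): {a_combined / a_depth:.2f}")
```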

  12. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and progress education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education as well as critically explore the perils of applying a data driven approach to education. Despite the claimed value of the...

  13. Astronomy in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Yanxia Zhang

    2015-05-01

    Full Text Available The fields of Astrostatistics and Astroinformatics are vital for dealing with the big data issues now faced by astronomy. Like other disciplines in the big data era, astronomy has many V characteristics. In this paper, we list the different data mining algorithms used in astronomy, along with data mining software and tools related to astronomical applications. We present SDSS, a project often referred to by other astronomical projects, as the most successful sky survey in the history of astronomy and describe the factors influencing its success. We also discuss the success of Astrostatistics and Astroinformatics organizations and the conferences and summer schools on these issues that are held annually. All the above indicates that astronomers and scientists from other areas are ready to face the challenges and opportunities provided by massive data volume.

  14. Analysis on Big Data Problems and Technique Supports of Archives Informatization

    Directory of Open Access Journals (Sweden)

    Du Xiaoyan

    2017-06-01

    Full Text Available [Purpose/significance] Archives informatization management faces the practical challenges of rapidly increasing data volumes and increasingly diverse and complex data types and structures. [Method/process] Based on the essential attributes of archives, this paper analyzes the big data characteristics of digital archives in their storage and utilization, and examines how new big data techniques support archives informatization and how they apply to the storage and utilization of digital archives and to knowledge discovery. [Result/conclusion] Modern big data processing technology not only provides support for archives informatization management, but also promotes the development of its theory and practice.

  15. Big Data Analytics Platforms analyze from startups to traditional database players

    Directory of Open Access Journals (Sweden)

    Ionut TARANU

    2015-07-01

    Full Text Available Big data analytics enables organizations to analyze a mix of structured, semi-structured and unstructured data in search of valuable business information and insights. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits. With so many emerging trends around big data and analytics, IT organizations need to create conditions that will allow analysts and data scientists to experiment. "You need a way to evaluate, prototype and eventually integrate some of these technologies into the business," says Chris Curran[1]. In this paper we review the top 10 big data analytics platforms and compare their key features.

  16. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurements data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and is as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  17. Perceptions about availability and adequacy of drinking water in a large California school district.

    Science.gov (United States)

    Patel, Anisha I; Bogart, Laura M; Uyeda, Kimberly E; Rabin, Alexa; Schuster, Mark A

    2010-03-01

    Concerns about the influence of sugar-sweetened beverage consumption on obesity have led experts to recommend that water be freely available in schools. We explored perceptions about the adequacy of drinking water provision in a large California school district to develop policies and programs to encourage student water consumption. From March to September 2007, we used semistructured interviews to ask 26 California key stakeholders - including school administrators and staff, health and nutrition agency representatives, and families - about school drinking water accessibility; attitudes about, facilitators of, and barriers to drinking water provision; and ideas for increasing water consumption. Interviews were analyzed to determine common themes. Although stakeholders said that water was available from school drinking fountains, they expressed concerns about the appeal, taste, appearance, and safety of fountain water and worried about the affordability and environmental effect of bottled water sold in schools. Stakeholders supported efforts to improve free drinking water availability in schools, but perceived barriers (eg, cost) and mistaken beliefs that regulations and beverage contracts prohibit serving free water may prevent schools from doing so. Some schools provide water through cold-filtered water dispensers and self-serve water coolers. This is the first study to explore stakeholder perceptions about the adequacy of drinking water in US schools. Although limited in scope, our study suggests that water available in at least some schools may be inadequate. Collaborative efforts among schools, communities, and policy makers are needed to improve school drinking water provision.

  18. Big Data and Consumer Participation in Privacy Contracts: Deciding who Decides on Privacy

    Directory of Open Access Journals (Sweden)

    Michiel Rhoen

    2015-02-01

    Full Text Available Big data puts data protection to the test. Consumers granting permission to process their personal data are increasingly opening up their personal lives, thanks to the “datafication” of everyday life, indefinite data retention and the increasing sophistication of algorithms for analysis. The privacy implications of big data call for serious consideration of consumers’ opportunities to participate in decision-making processes about their contracts. If these opportunities are insufficient, the resulting rules may represent special interests rather than consumers’ needs. This may undermine the legitimacy of big data applications. This article argues that providing sufficient consumer participation in privacy matters requires choosing the best available decision-making mechanism. Is a consumer to negotiate his own privacy terms in the market, will lawmakers step in on his behalf, or is he to seek protection through courts? Furthermore, is this a matter of national law or European law? These choices will affect the opportunities for achieving different policy goals associated with the possible benefits of the “big data revolution”.

  19. Flood-inundation maps for a 12.5-mile reach of Big Papillion Creek at Omaha, Nebraska

    Science.gov (United States)

    Strauch, Kellan R.; Dietsch, Benjamin J.; Anderson, Kayla J.

    2016-03-22

    Digital flood-inundation maps for a 12.5-mile reach of the Big Papillion Creek from 0.6 mile upstream from the State Street Bridge to the 72nd Street Bridge in Omaha, Nebraska, were created by the U.S. Geological Survey (USGS) in cooperation with the Papio-Missouri River Natural Resources District. The flood-inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgage on the Big Papillion Creek at Fort Street at Omaha, Nebraska (station 06610732). Near-real-time stages at this streamgage may be obtained on the Internet from the USGS National Water Information System at http://waterdata.usgs.gov/ or the National Weather Service Advanced Hydrologic Prediction Service at http://water.weather.gov/ahps/, which also forecasts flood hydrographs at this site.
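
    The stage-to-inundation idea can be sketched by subtracting ground elevation from a water-surface elevation tied to a gage stage; the synthetic DEM, gage datum, and stage below are assumptions, and the published USGS maps are derived from hydraulic modeling rather than this simple planar approach.

```python
# Sketch: approximate inundation-depth grid from a gage stage and a DEM.
# DEM values, gage datum, and stage are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
dem = 280.0 + rng.uniform(0.0, 6.0, size=(100, 100))   # ground elevation (m), synthetic
gage_datum = 281.0                                      # elevation of gage zero (m), assumed
stage = 2.4                                             # observed stage (m), assumed

water_surface = gage_datum + stage
depth = np.clip(water_surface - dem, 0.0, None)         # negative depths -> dry cells

flooded_fraction = (depth > 0).mean()
print(f"flooded fraction of grid: {flooded_fraction:.1%}, max depth {depth.max():.2f} m")
```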

  20. The potential impacts of biomass feedstock production on water resource availability.

    Science.gov (United States)

    Stone, K C; Hunt, P G; Cantrell, K B; Ro, K S

    2010-03-01

    Biofuels are a major topic of global interest and technology development. Whereas bioenergy crop production is highly dependent on water, bioenergy development requires effective allocation and management of water. The objectives of this investigation were to assess the bioenergy production relative to the impacts on water resource related factors: (1) climate and weather impact on water supplies for biomass production; (2) water use for major bioenergy crop production; and (3) potential alternatives to improve water supplies for bioenergy. Shifts to alternative bioenergy crops with greater water demand may produce unintended consequences for both water resources and energy feedstocks. Sugarcane and corn require 458 and 2036 m³ water/m³ ethanol produced, respectively. The water requirements for corn grain production to meet the US-DOE Billion-Ton Vision may increase approximately 6-fold from 8.6 to 50.1 km³. Furthermore, climate change is impacting water resources throughout the world. In the western US, runoff from snowmelt is occurring earlier, altering the timing of water availability. Weather extremes, both drought and flooding, have occurred more frequently over the last 30 years than the previous 100 years. All of these weather events impact bioenergy crop production. These events may be partially mitigated by alternative water management systems that offer potential for more effective water use and conservation. A few potential alternatives include controlled drainage and new next-generation livestock waste treatment systems. Controlled drainage can increase water available to plants and simultaneously improve water quality. New livestock waste treatment systems offer the potential to utilize treated wastewater to produce bioenergy crops. New technologies for cellulosic biomass conversion via thermochemical conversion offer the potential for using more diverse feedstocks with dramatically reduced water requirements. The development of bioenergy
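
    A small worked example using only the water intensities quoted above (458 and 2036 m³ of water per m³ of ethanol); the ethanol production volumes are hypothetical inputs.

```python
# Worked sketch using the water intensities quoted above:
# sugarcane ~458 and corn ~2036 m^3 of water per m^3 of ethanol.
# The ethanol production volumes are hypothetical inputs.
WATER_INTENSITY = {"sugarcane": 458.0, "corn": 2036.0}   # m^3 water / m^3 ethanol

def water_demand_km3(crop, ethanol_million_m3):
    """Water required (km^3) to produce the given ethanol volume (million m^3)."""
    m3_water = WATER_INTENSITY[crop] * ethanol_million_m3 * 1e6
    return m3_water / 1e9                                 # 1 km^3 = 1e9 m^3

for crop in ("sugarcane", "corn"):
    print(crop, f"{water_demand_km3(crop, 20.0):.1f} km^3 for 20 million m^3 ethanol")
```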

  1. Big Data Analyses in Health and Opportunities for Research in Radiology.

    Science.gov (United States)

    Aphinyanaphongs, Yindalon

    2017-02-01

    This article reviews examples of big data analyses in health care with a focus on radiology. We review the defining characteristics of big data, the use of natural language processing, traditional and novel data sources, and large clinical data repositories available for research. This article aims to invoke novel research ideas through a combination of examples of analyses and domain knowledge.

  2. Advances in mobile cloud computing and big data in the 5G era

    CERN Document Server

    Mastorakis, George; Dobre, Ciprian

    2017-01-01

    This book reports on the latest advances on the theories, practices, standards and strategies that are related to the modern technology paradigms, the Mobile Cloud computing (MCC) and Big Data, as the pillars and their association with the emerging 5G mobile networks. The book includes 15 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of Big Data and Mobile Cloud Computing, from basic concepts to advanced findings, reporting the state-of-the-art on Big Data management. It demonstrates and discusses methods and practices to improve multi-source Big Data manipulation techniques, as well as the integration of resources availability through the 3As (Anywhere, Anything, Anytime) paradigm, using the 5G access technologies.

  3. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  4. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  5. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  6. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is subject of many publications. Some pay attention to the many possibilities of big data, others warn us for their consequences. This special

  7. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the ‘Big Data

  8. Mercury concentrations and distribution in soil, water, mine waste leachates, and air in and around mercury mines in the Big Bend region, Texas, USA.

    Science.gov (United States)

    Gray, John E; Theodorakos, Peter M; Fey, David L; Krabbenhoft, David P

    2015-02-01

    Samples of soil, water, mine waste leachates, soil gas, and air were collected from areas mined for mercury (Hg) and baseline sites in the Big Bend area, Texas, to evaluate potential Hg contamination in the region. Soil samples collected within 300 m of an inactive Hg mine contained elevated Hg concentrations (3.8-11 µg/g), which were considerably higher than Hg in soil collected from baseline sites (0.03-0.05 µg/g) distal (as much as 24 km) from mines. Only three soil samples collected within 300 m of the mine exceeded the probable effect concentration for Hg of 1.06 µg/g, above which harmful effects are likely to be observed in sediment-dwelling organisms. Concentrations of Hg in mine water runoff (7.9-14 ng/L) were generally higher than those found in springs and wells (0.05-3.1 ng/L), baseline streams (1.1-9.7 ng/L), and sources of drinking water (0.63-9.1 ng/L) collected in the Big Bend region. Concentrations of Hg in all water samples collected in this study were considerably below the 2,000 ng/L drinking water Hg guideline and the 770 ng/L guideline recommended by the U.S. Environmental Protection Agency (USEPA) to protect aquatic wildlife from chronic effects of Hg. Concentrations of Hg in water leachates obtained from leaching of mine wastes varied widely from <0.001 to 760 µg of Hg in leachate/g of sample leached, but only one leachate exceeded the USEPA Hg industrial soil screening level of 31 µg/g. Concentrations of Hg in soil gas collected at mined sites (690-82,000 ng/m³) were highly elevated compared to soil gas collected from baseline sites (1.2-77 ng/m³). However, air collected from mined areas at a height of 2 m above the ground surface contained concentrations of Hg (4.9-64 ng/m³) that were considerably lower than Hg in soil gas from the mined areas. Although concentrations of Hg emitted from mine-contaminated soils and mine wastes were elevated, persistent wind in southwest Texas disperses Hg in the air within a few meters of the

  9. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.
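
    The idea of semi-automated technique selection can be illustrated with a tiny rule-based stand-in; the rules below are generic assumptions and are not the Analytics Ontology or the SCALATION framework themselves.

```python
# Generic illustration of semi-automated technique selection from dataset traits.
# This is a stand-in for the ontology-driven inferencing described above.
def suggest_technique(target_type, n_rows, n_features, labeled=True):
    """Return a coarse modeling-technique suggestion with a short rationale."""
    if not labeled:
        return "clustering", "no labels available, so use unsupervised learning"
    if target_type == "categorical":
        return "classification", "labeled data with a categorical target"
    if target_type == "numeric" and n_features > n_rows:
        return "regularized regression", "more features than observations"
    if target_type == "numeric":
        return "regression", "labeled data with a numeric target"
    return "exploratory analysis", "target type not recognized"

technique, why = suggest_technique("numeric", n_rows=500, n_features=1200)
print(f"suggested technique: {technique} ({why})")
```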

  10. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  11. The role of water availability in controlling coupled vegetation-atmosphere dynamics

    Science.gov (United States)

    Scanlon, Todd Michael

    This work examines how water availability affects vegetation structure and vegetation-atmosphere exchange of water, carbon, and energy for a savanna ecosystem. The study site is the Kalahari Transect (KT), in southern Africa, which follows a north-south decline in mean annual rainfall from ~1600 mm/yr to ~250 mm/yr between the latitudes 12°-26°S. Eddy covariance (EC) flux measurements taken over a time frame of 1-9 days at four sites along the transect during the wet (growing) season revealed that the ecosystem water use efficiency for the sites, defined as the ratio of net carbon flux to evapotranspiration, decreased with increasing mean annual rainfall. EC data were used to parameterize a large eddy simulation model, which was applied over a heterogeneous remotely-sensed surface. Water availability for the vegetation was found to affect the relative controls (structural vs. meteorological) on the spatial distribution of vegetation fluxes. When the spatial distribution of vapor pressure deficit, D, was most predictable (i.e. non water-limiting conditions) it was unimportant in shaping the distribution of the vegetation fluxes, while at times when D was least predictable (i.e. water-limiting conditions) it was most important. This observation is explained by the relative degree of vegetation-atmosphere coupling and the complexity of the non-local effects on D, both of which are dependent upon water availability. Based upon the differing ways in which trees and grass respond to interannual variability in rainfall, a new method was developed to estimate fractional tree, grass, and bare soil cover from a synthesis of satellite and ground-based data. This method was applied to the KT where it was found that tree fractional cover declines with mean annual rainfall, while grass fractional cover peaks near the middle of the gradient. A soil moisture model applied to this data indicated a shift from nutrient- to water-limitation from the mesic to arid portions of
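
    The water use efficiency definition above (the ratio of net carbon flux to evapotranspiration) can be written as a short sketch; the half-hourly flux values are synthetic stand-ins for eddy covariance output.

```python
# Sketch: ecosystem water use efficiency (WUE) as the ratio of net carbon flux to
# evapotranspiration, following the definition above. Flux values are synthetic.
import numpy as np

nee = np.array([-8.2, -7.5, -6.9, -9.1, -5.4])   # net carbon flux (umol CO2 m-2 s-1)
et = np.array([2.1, 1.9, 1.7, 2.4, 1.5])         # evapotranspiration (mmol H2O m-2 s-1)

# Daytime carbon uptake is negative by flux-sign convention; take its magnitude.
wue = np.abs(nee).sum() / et.sum()               # umol CO2 per mmol H2O
print(f"ecosystem WUE: {wue:.2f} umol CO2 / mmol H2O")
```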

  12. Simulated big sagebrush regeneration supports predicted changes at the trailing and leading edges of distribution shifts

    Science.gov (United States)

    Schlaepfer, Daniel R.; Taylor, Kyle A.; Pennington, Victoria E.; Nelson, Kellen N.; Martin, Trace E.; Rottler, Caitlin M.; Lauenroth, William K.; Bradford, John B.

    2015-01-01

    Many semi-arid plant communities in western North America are dominated by big sagebrush. These ecosystems are being reduced in extent and quality due to economic development, invasive species, and climate change. These pervasive modifications have generated concern about the long-term viability of sagebrush habitat and sagebrush-obligate wildlife species (notably greater sage-grouse), highlighting the need for better understanding of the future big sagebrush distribution, particularly at the species' range margins. These leading and trailing edges of potential climate-driven sagebrush distribution shifts are likely to be areas most sensitive to climate change. We used a process-based regeneration model for big sagebrush, which simulates potential germination and seedling survival in response to climatic and edaphic conditions and tested expectations about current and future regeneration responses at trailing and leading edges that were previously identified using traditional species distribution models. Our results confirmed expectations of increased probability of regeneration at the leading edge and decreased probability of regeneration at the trailing edge below current levels. Our simulations indicated that soil water dynamics at the leading edge became more similar to the typical seasonal ecohydrological conditions observed within the current range of big sagebrush ecosystems. At the trailing edge, an increased winter and spring dryness represented a departure from conditions typically supportive of big sagebrush. Our results highlighted that minimum and maximum daily temperatures as well as soil water recharge and summer dry periods are important constraints for big sagebrush regeneration. Overall, our results confirmed previous predictions, i.e., we see consistent changes in areas identified as trailing and leading edges; however, we also identified potential local refugia within the trailing edge, mostly at sites at higher elevation. Decreasing

  13. Environmental effects of the Big Rapids dam remnant removal, Big Rapids, Michigan, 2000-02

    Science.gov (United States)

    Healy, Denis F.; Rheaume, Stephen J.; Simpson, J. Alan

    2003-01-01

    The U.S. Geological Survey (USGS), in cooperation with the city of Big Rapids, investigated the environmental effects of removal of a dam-foundation remnant and downstream cofferdam from the Muskegon River in Big Rapids, Mich. The USGS applied a multidiscipline approach, which determined the water quality, sediment character, and stream habitat before and after dam removal. Continuous water-quality data and discrete water-quality samples were collected, the movement of suspended and bed sediment were measured, changes in stream habitat were assessed, and streambed elevations were surveyed. Analyses of water upstream and downstream from the dam showed that the dam-foundation remnant did not affect water quality. Dissolved-oxygen concentrations downstream from the dam remnant were depressed for a short period (days) during the beginning of the dam removal, in part because of that removal effort. Sediment transport from July 2000 through March 2002 was 13,800 cubic yards more at the downstream site than the upstream site. This increase in sediment represents the remobilized sediment upstream from the dam, bank erosion when the impoundment was lowered, and contributions from small tributaries between the sites. Five habitat reaches were monitored before and after dam-remnant removal. The reaches consisted of a reference reach (A), upstream from the effects of the impoundment; the impoundment (B); and three sites below the impoundment where habitat changes were expected (C, D, and E, in downstream order). Stream-habitat assessment reaches varied in their responses to the dam-remnant removal. Reference reach A was not affected. In impoundment reach B, Great Lakes and Environmental Assessment Section (GLEAS) Procedure 51 ratings went from fair to excellent. For the three downstream reaches, reach C underwent slight habitat degradation, but ratings remained good; reach D underwent slight habitat degradation with ratings changing from excellent to good; and, in an area

  14. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  15. Significance of Supply Logistics in Big Cities

    Directory of Open Access Journals (Sweden)

    Mario Šafran

    2012-10-01

    Full Text Available The paper considers the concept and importance of supply logistics as an element in improving storage, supply and transport of goods in big cities. There is always room for improvements in this segment of economic activities, and therefore continuous optimisation of the cargo flows from the manufacturer to the end user is important. Due to complex requirements in the cargo supply and the "spoiled" end users, modern cities represent great difficulties and a big challenge for the supply organisers. The consumers' needs in big cities have developed over the recent years in such a way that they require supply of goods several times a day at precisely determined times (orders are received by e-mail, and the information transfer is therefore instantaneous). In order to successfully meet the consumers' needs in advanced economic systems, advanced methods of goods supply have been developed and improved, such as "just in time", "door-to-door", and "desk-to-desk". Regular operation of these systems requires supply logistics which includes the total throughput of materials, from receiving the raw materials or reproduction material to the delivery of final products to the end users.

  16. Big data and information management: modeling the context decisional supported by sistemography

    Directory of Open Access Journals (Sweden)

    William Barbosa Vianna

    2016-04-01

    Full Text Available Introduction: The study is justified by the scarcity of studies in the field of information science that address the phenomenon of big data from the perspective of information management. Objective: The objective is to identify and represent the general elements of the decision-making process in the context of big data. Methodology: This is an exploratory study of a theoretical and deductive nature. Results: The study identified the main elements involved in decision-making in a big data environment and produced their sistemographic representation. Conclusions: It was possible to develop a representation which will allow further development of computer simulation.

  17. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  18. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in the presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  19. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, as well as the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  20. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only the state-of-the-art research direction of both big data analysis and data visualization, but also the core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multi-source, multi-granularity, multi-modal and complex association. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis; it is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. The exploratory visual analysis method still needs a major breakthrough.

  1. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider the scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining the confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine and by bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  2. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher scores in empathy and on the Big Five factors, with the exception of the factor Neuroticism. They found associations between empathy and Openness, Agreeableness, Conscientiousness and Extraversion. In our data, women show significantly higher scores in both empathy and the Big Five...

  3. Ground water security and drought in Africa: linking availability, access, and demand.

    Science.gov (United States)

    Calow, Roger C; Macdonald, Alan M; Nicol, Alan L; Robins, Nick S

    2010-01-01

    Drought in Africa has been extensively researched, particularly from meteorological, agricultural, and food security perspectives. However, the impact of drought on water security, particularly ground water dependent rural water supplies, has received much less attention. Policy responses have concentrated on food needs, and it has often been difficult to mobilize resources for water interventions, despite evidence that access to safe water is a serious and interrelated concern. Studies carried out in Ghana, Malawi, South Africa, and Ethiopia highlight how rural livelihoods are affected by seasonal stress and longer-term drought. Declining access to food and water is a common and interrelated problem. Although ground water plays a vital role in buffering the effects of rainfall variability, water shortages and difficulties in accessing water that is available can affect domestic and productive water uses, with knock-on effects on food consumption and production. Total depletion of available ground water resources is rarely the main concern. A more common scenario is a spiral of water insecurity as shallow water sources fail, additional demands are put on remaining sources, and mechanical failures increase. These problems can be planned for within normal development programs. Water security mapping can help identify vulnerable areas, and changes to monitoring systems can ensure early detection of problems. Above all, increasing the coverage of ground water-based rural water supplies, and ensuring that the design and siting of water points is informed by an understanding of hydrogeological conditions and user demand, can significantly increase the resilience of rural communities to climate variability.

  4. An overview of big data and data science education at South African universities

    Directory of Open Access Journals (Sweden)

    Eduan Kotzé

    2016-02-01

    Full Text Available Man and machine are generating data electronically at an astronomical speed and in such a way that society is experiencing cognitive challenges to analyse this data meaningfully. Big data firms, such as Google and Facebook, identified this problem several years ago and are continuously developing new technologies or improving existing technologies in order to facilitate the cognitive analysis process of these large data sets. The purpose of this article is to contribute to our theoretical understanding of the role that big data might play in creating new training opportunities for South African universities. The article investigates emerging literature on the characteristics and main components of big data, together with the Hadoop application stack as an example of big data technology. Due to the rapid development of big data technology, a paradigm shift of human resources is required to analyse these data sets; therefore, this study examines the state of big data teaching at South African universities. This article also provides an overview of possible big data sources for South African universities, as well as relevant big data skills that data scientists need. The study also investigates existing academic programs in South Africa, where the focus is on teaching advanced database systems. The study found that big data and data science topics are introduced to students on a postgraduate level, but that the scope is very limited. This article contributes by proposing important theoretical topics that could be introduced as part of the existing academic programs. More research is required, however, to expand these programs in order to meet the growing demand for data scientists with big data skills.

  5. Visualization of radiation dose big data acquired by monitoring posts

    International Nuclear Information System (INIS)

    Hashimoto, Takeyuki; Jumonji, Hiromichi

    2014-01-01

    Currently, in Fukushima Prefecture, 3625 radiation dose monitoring posts are available, and radiation data are acquired every 10 minutes. However, effective visualization of such an enormous amount of data has not been sufficiently performed. In this study, we extract meaningful information from the big data to achieve an effective visualization. By comparing the physical attenuation with the observed radiation dose changes, we can predict the trend of environmental attenuation. We visualize the influence of the environment by plotting the results on a map. As a result, differences in the increase or decrease depending on the location appeared. Under the influence of snow cover, a phenomenon in which the radiation dose is reduced in winter was also seen. We consider that these results will be useful for decontamination policies and for estimating the amount of snow as a water resource. (author)
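
    A minimal sketch of the comparison described above, not the authors' implementation: it contrasts the dose decline expected from radioactive decay alone with observed monitoring-post readings to estimate how much extra attenuation the environment (run-off, decontamination, snow cover) contributed. The assumption that the ambient dose is dominated by Cs-134 and Cs-137, the 1:1 initial split and the example readings are all illustrative.

```python
# Illustrative sketch (not the authors' implementation): compare the dose decline
# expected from physical decay alone with an observed reading to estimate the extra
# "environmental" attenuation. The nuclide mix and initial split are assumptions.

HALF_LIFE_Y = {"Cs-134": 2.065, "Cs-137": 30.17}   # half-lives in years
INITIAL_FRACTION = {"Cs-134": 0.5, "Cs-137": 0.5}  # assumed dose-rate split at t = 0

def physical_decay_factor(t_years: float) -> float:
    """Fraction of the initial dose rate remaining after t years if only decay acted."""
    return sum(f * 0.5 ** (t_years / HALF_LIFE_Y[n])
               for n, f in INITIAL_FRACTION.items())

def environmental_attenuation(dose_t0: float, dose_t: float, t_years: float) -> float:
    """Ratio of observed to physically expected dose; < 1 means the environment
    removed dose faster than radioactive decay alone."""
    expected = dose_t0 * physical_decay_factor(t_years)
    return dose_t / expected

# Example: a post read 2.0 uSv/h initially and 0.9 uSv/h three years later.
print(round(environmental_attenuation(2.0, 0.9, 3.0), 2))
```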

  6. Growth is required for perception of water availability to pattern root branches in plants.

    Science.gov (United States)

    Robbins, Neil E; Dinneny, José R

    2018-01-23

    Water availability is a potent regulator of plant development and induces root branching through a process termed hydropatterning. Hydropatterning enables roots to position lateral branches toward regions of high water availability, such as wet soil or agar media, while preventing their emergence where water is less available, such as in air. The mechanism by which roots perceive the spatial distribution of water during hydropatterning is unknown. Using primary roots of Zea mays (maize) we reveal that developmental competence for hydropatterning is limited to the growth zone of the root tip. Past work has shown that growth generates gradients in water potential across an organ when asymmetries exist in the distribution of available water. Using mathematical modeling, we predict that substantial growth-sustained water potential gradients are also generated in the hydropatterning competent zone and that such biophysical cues inform the patterning of lateral roots. Using diverse chemical and environmental treatments we experimentally demonstrate that growth is necessary for normal hydropatterning of lateral roots. Transcriptomic characterization of the local response of tissues to a moist surface or air revealed extensive regulation of signaling and physiological pathways, some of which we show are growth-dependent. Our work supports a "sense-by-growth" mechanism governing hydropatterning, by which water availability cues are rendered interpretable through growth-sustained water movement. Copyright © 2018 the Author(s). Published by PNAS.

  7. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    Science.gov (United States)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.

  8. Big Bayou Creek and Little Bayou Creek Watershed Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Kszos, L.A.; Peterson, M.J.; Ryon; Smith, J.G.

    1999-03-01

    Biological monitoring of Little Bayou and Big Bayou creeks, which border the Paducah Site, has been conducted since 1987. Biological monitoring was conducted by the University of Kentucky from 1987 to 1991 and by staff of the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) from 1991 through March 1999. In March 1998, renewed Kentucky Pollutant Discharge Elimination System (KPDES) permits were issued to the US Department of Energy (DOE) and US Enrichment Corporation. The renewed DOE permit requires that a watershed monitoring program be developed for the Paducah Site within 90 days of the effective date of the renewed permit. This plan outlines the sampling and analysis that will be conducted for the watershed monitoring program. The objectives of the watershed monitoring are to (1) determine whether discharges from the Paducah Site and the Solid Waste Management Units (SWMUs) associated with the Paducah Site are adversely affecting instream fauna, (2) assess the ecological health of Little Bayou and Big Bayou creeks, (3) assess the degree to which abatement actions ecologically benefit Big Bayou Creek and Little Bayou Creek, (4) provide guidance for remediation, (5) provide an evaluation of changes in potential human health concerns, and (6) provide data which could be used to assess the impact of inadvertent spills or fish kills. The cleanup is expected to result in these watersheds [Big Bayou and Little Bayou creeks] achieving compliance with the applicable water quality criteria.

  9. Big Data Challenges

    Directory of Open Access Journals (Sweden)

    Alexandru Adrian TOLE

    2013-10-01

    Full Text Available The amount of data traveling across the internet today is not only large, but complex as well. Companies, institutions, the healthcare system, etc., all use piles of data which are further used for creating reports in order to ensure continuity of the services that they offer. The process behind the results that these entities request represents a challenge for software developers and for companies that provide IT infrastructure. The challenge is how to manipulate an impressive volume of data that has to be securely delivered through the internet and reach its destination intact. This paper treats the challenges that Big Data creates.

  10. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. The selected four domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that, despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.

  11. Does Implementation of Big Data Analytics Improve Firms’ Market Value? Investors’ Reaction in Stock Market

    Directory of Open Access Journals (Sweden)

    Hansol Lee

    2017-06-01

    Full Text Available Recently, due to the development of social media, multimedia, and the Internet of Things (IoT, various types of data have increased. As the existing data analytics tools cannot cover this huge volume of data, big data analytics becomes one of the emerging technologies for business today. Considering that big data analytics is an up-to-date term, in the present study, we investigated the impact of implementing big data analytics in the short-term perspective. We used an event study methodology to investigate the changes in stock price caused by announcements on big data analytics solution investment. A total of 54 investment announcements of firms publicly traded in NASDAQ and NYSE from 2010 to 2015 were collected. Our results empirically demonstrate that announcement of firms’ investment on big data solution leads to positive stock market reactions. In addition, we also found that investments on small vendors’ solution with industry-oriented functions tend to result in higher abnormal returns than those on big vendors’ solution with general functions. Finally, our results also suggest that stock market investors highly evaluate big data analytics investments of big firms as compared to those of small firms.
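
    As a rough illustration of the event-study logic mentioned above (not the authors' code), the sketch below fits a market model on a pre-announcement estimation window and sums abnormal returns around the announcement day; the window lengths and the synthetic return series are assumptions.

```python
import numpy as np

def event_study_car(stock_ret, market_ret, event_idx, est_win=120, event_win=3):
    """Cumulative abnormal return (CAR) around an announcement.

    stock_ret, market_ret : 1-D arrays of daily returns, aligned in time
    event_idx             : index of the announcement day
    est_win               : length of the estimation window ending before the event
    event_win             : days on each side of the event included in the CAR
    """
    # 1. Fit the market model r_stock = alpha + beta * r_market on the estimation window.
    est = slice(event_idx - est_win - event_win, event_idx - event_win)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)

    # 2. Abnormal return = actual return minus the return the market model predicts.
    ev = slice(event_idx - event_win, event_idx + event_win + 1)
    abnormal = stock_ret[ev] - (alpha + beta * market_ret[ev])

    # 3. Sum the abnormal returns over the event window.
    return abnormal.sum()

# Toy usage with synthetic data (for illustration only).
rng = np.random.default_rng(0)
mkt = rng.normal(0, 0.01, 300)
stk = 0.0002 + 1.1 * mkt + rng.normal(0, 0.01, 300)
stk[250] += 0.03  # pretend the announcement moved the stock
print(round(event_study_car(stk, mkt, 250), 4))
```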

  12. The Design of Intelligent Repair Welding Mechanism and Relative Control System of Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available Effective repair of worn big gears has a large influence on ensuring safe production and enhancing economic benefits. A kind of intelligent repair welding method was put forward, mainly aimed at the restrictions of high production cost, long production cycle and high-intensity manual repair welding work for big gears. A big gear repair welding mechanism was designed in this paper. The working principle and part selection of the big gear repair welding mechanism are introduced. The three-dimensional model of the big gear repair welding mechanism was constructed with the Pro/E three-dimensional design software. Three-dimensional motions can be realized by motors controlling ball screws. According to the involute gear feature, the complicated curved motion on the gear surface can be transformed into linear motion by orientation. In this way, repair welding on the worn gear area can be realized. In the design of the big gear repair welding mechanism control system, Siemens S7-200 series hardware was chosen, and Siemens STEP7 programming software was chosen as the system design tool. The entire repair welding process was verified by simulation experiments. This provides a practical and feasible method for the intelligent repair welding of big worn gears.

  13. Water availability change in central Belgium for the late 21st century

    Science.gov (United States)

    Tabari, Hossein; Taye, Meron Teferi; Willems, Patrick

    2015-08-01

    We investigate the potential impact of climate change on water availability in central Belgium. Two water balance components, precipitation and potential evapotranspiration, are initially projected for the late 21st century (2071-2100) based on 30 Coupled Model Intercomparison Project phase 5 (CMIP5) models relative to a baseline period of 1961-1990, assuming forcing by four representative concentration pathway emission scenarios (RCP2.6, RCP4.5, RCP6.0, RCP8.5). The future available water is then estimated as the difference between the precipitation and potential evapotranspiration projections. The number of wet days and the mean monthly precipitation for the summer season are projected to decrease in most of the scenarios, while the projections show an increase in those variables for the winter months. Potential evapotranspiration is expected to increase during both winter and summer seasons. The results show a decrease in water availability for summer and an increase for winter, suggesting drier summers and wetter winters for the late 21st century in central Belgium.
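
    A toy sketch of the water-balance bookkeeping described above (not the authors' CMIP5 processing chain): available water is taken as precipitation minus potential evapotranspiration, and the change is the scenario value minus the baseline value. All monthly numbers below are invented placeholders.

```python
# Toy sketch of the water-availability bookkeeping (not the authors' CMIP5 workflow).
# Available water is taken as precipitation minus potential evapotranspiration (P - PET);
# the monthly values below are invented placeholders in mm/month.

baseline = {"P":   [75, 60, 70, 55, 65, 70, 75, 70, 65, 70, 75, 80],
            "PET": [15, 20, 35, 55, 80, 95, 100, 90, 60, 35, 20, 15]}
scenario = {"P":   [85, 70, 75, 55, 60, 60, 60, 55, 60, 75, 85, 90],
            "PET": [17, 22, 40, 62, 90, 108, 115, 102, 68, 40, 22, 17]}

def available_water(climate):
    """Monthly available water as P - PET (mm/month)."""
    return [p - pet for p, pet in zip(climate["P"], climate["PET"])]

change = [s - b for s, b in zip(available_water(scenario), available_water(baseline))]
summer_change = sum(change[5:8]) / 3                      # June-August mean change
winter_change = (change[11] + change[0] + change[1]) / 3  # December-February mean change
print(f"summer: {summer_change:+.1f} mm/month, winter: {winter_change:+.1f} mm/month")
```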

  14. Data Management and Preservation Planning for Big Science

    Directory of Open Access Journals (Sweden)

    Juan Bicarregui

    2013-06-01

    Full Text Available 'Big Science' - that is, science which involves large collaborations with dedicated facilities, large data volumes and multinational investments - is often seen as different when it comes to data management and preservation planning. Big Science handles its data differently from other disciplines and has data management problems that are qualitatively different from other disciplines. In part, these differences arise from the quantities of data involved, but possibly more importantly from the cultural, organisational and technical distinctiveness of these academic cultures. Consequently, the data management systems are typically and rationally bespoke, but this means that the planning for data management and preservation (DMP) must also be bespoke. These differences are such that 'just read and implement the OAIS specification' is reasonable Data Management and Preservation (DMP) advice, but this bald prescription can and should be usefully supported by a methodological 'toolkit', including overviews, case studies and costing models to provide guidance on developing best practice in DMP policy and infrastructure for these projects, as well as considering OAIS validation, audit and cost modelling. In this paper, we build on previous work with the LIGO collaboration to consider the role of DMP planning within these big science scenarios, and discuss how to apply current best practice. We discuss the results of the MaRDI-Gross project (Managing Research Data Infrastructures - Big Science), which has been developing a toolkit to provide guidelines on the application of best practice in DMP planning within big science projects. This is targeted primarily at projects' engineering managers, but is also intended to help funders collaborate on DMP plans which satisfy the requirements imposed on them.

  15. Big Data Technologies: New Opportunities for Diabetes Management.

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-04-24

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient's care processes and of single patient's behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. © 2015 Diabetes Technology Society.

  16. Big Data and Data Science in Critical Care.

    Science.gov (United States)

    Sanchez-Pinto, L Nelson; Luo, Yuan; Churpek, Matthew M

    2018-05-09

    The digitalization of the healthcare system has resulted in a deluge of clinical Big Data and has prompted the rapid growth of data science in medicine. Data science, which is the field of study dedicated to the principled extraction of knowledge from complex data, is particularly relevant in the critical care setting. The availability of large amounts of data in the intensive care unit, the need for better evidence-based care, and the complexity of critical illness make the use of data science techniques and data-driven research particularly appealing to intensivists. Despite the increasing number of studies and publications in the field, so far there have been few examples of data science projects that have resulted in successful implementations of data-driven systems in the intensive care unit. However, given the expected growth in the field, intensivists should be familiar with the opportunities and challenges of Big Data and data science. In this paper, we review the definitions, types of algorithms, applications, challenges, and future of Big Data and data science in critical care. Copyright © 2018. Published by Elsevier Inc.

  17. Big data based fraud risk management at Alibaba

    Directory of Open Access Journals (Sweden)

    Jidong Chen

    2015-12-01

    Full Text Available With the development of the mobile internet and finance, fraud risk comes in all shapes and sizes. This paper introduces fraud risk management at Alibaba in the era of big data. Alibaba has built a fraud risk monitoring and management system based on real-time big data processing and intelligent risk models. It captures fraud signals directly from huge amounts of user behavior and network data, analyzes them in real time using machine learning, and accurately predicts bad users and transactions. To extend the fraud risk prevention ability to external customers, Alibaba also built a big data based fraud prevention product called AntBuckler. AntBuckler aims to identify and prevent all flavors of malicious behaviors with flexibility and intelligence for online merchants and banks. By combining large amounts of data from Alibaba and its customers, AntBuckler uses the RAIN score engine to quantify risk levels of users or transactions for fraud prevention. It also has a user-friendly visualization UI with risk scores, top reasons and fraud connections.

  18. Understanding Big Data for Industrial Innovation and Design: The Missing Information Systems Perspective

    Directory of Open Access Journals (Sweden)

    Miguel Baptista Nunes

    2017-12-01

    Full Text Available This paper identifies a need to complement the current rich technical and mathematical research agenda on big data with a more information systems and information science strand, which focuses on the business value of big data. An agenda of research for information systems would explore motives for using big data in real organizational contexts, and consider proposed benefits, such as increased effectiveness and efficiency, production of high-quality products/services, creation of added business value, and stimulation of innovation and design. Impacts of such research on the academic community, the industrial and business world, and policy-makers are discussed.

  19. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data adds value to both the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  20. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  1. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  2. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
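
    The sketch below is not the IEJoin algorithm itself; it only illustrates, on a toy inequality join, why sorted arrays shrink the search space compared with a naive nested loop. The join condition r < s and the sample lists are assumptions.

```python
from bisect import bisect_right

# Toy illustration of why sorting helps inequality joins (this is *not* the IEJoin
# algorithm from the dissertation, only a sketch of the underlying idea).
# Join condition assumed for the example: r < s.

def naive_ineq_join(R, S):
    """O(|R|*|S|) nested-loop inequality join."""
    return [(r, s) for r in R for s in S if r < s]

def sorted_ineq_join_count(R, S):
    """Count matching pairs in O((|R|+|S|) log |S|) by sorting S once:
    every element of S strictly greater than r joins with r."""
    S_sorted = sorted(S)
    return sum(len(S_sorted) - bisect_right(S_sorted, r) for r in R)

R = [3, 8, 1]
S = [2, 5, 9]
print(len(naive_ineq_join(R, S)))    # 6 pairs, found by enumerating everything
print(sorted_ineq_join_count(R, S))  # 6, counted without enumerating all pairs
```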

  3. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  4. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  5. Collaborative Approaches Needed to Close the Big Data Skills Gap

    Directory of Open Access Journals (Sweden)

    Steven Miller

    2014-04-01

    Full Text Available The big data and analytics talent discussion has largely focused on a single role – the data scientist. However, the need is much broader than data scientists. Data has become a strategic business asset. Every professional occupation must adapt to this new mindset. Universities in partnership with industry must move quickly to ensure that the graduates they produce have the required skills for the age of big data. Existing curricula should be reviewed and adapted to ensure relevance. New curricula and degree programs are needed to meet the needs of industry.

  6. Big-deep-smart data in imaging for guiding materials design

    Science.gov (United States)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  7. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are more and more executed electronically, consumers produce more and more data themselves - e.g. in social networks - and digitalization keeps increasing. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  8. Global monthly water scarcity: blue water footprints versus blue water availability.

    Science.gov (United States)

    Hoekstra, Arjen Y; Mekonnen, Mesfin M; Chapagain, Ashok K; Mathews, Ruth E; Richter, Brian D

    2012-01-01

    Freshwater scarcity is a growing concern, placing considerable importance on the accuracy of indicators used to characterize and map water scarcity worldwide. We improve upon past efforts by using estimates of blue water footprints (consumptive use of ground- and surface water flows) rather than water withdrawals, accounting for the flows needed to sustain critical ecological functions and by considering monthly rather than annual values. We analyzed 405 river basins for the period 1996-2005. In 201 basins with 2.67 billion inhabitants there was severe water scarcity during at least one month of the year. The ecological and economic consequences of increasing degrees of water scarcity--as evidenced by the Rio Grande (Rio Bravo), Indus, and Murray-Darling River Basins--can include complete desiccation during dry seasons, decimation of aquatic biodiversity, and substantial economic disruption.
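
    A minimal sketch of the monthly scarcity indicator described above (illustrative, not the authors' model): scarcity is the ratio of the blue water footprint to blue water availability, with availability approximated as runoff minus an environmental flow requirement. The 80% environmental flow share, the 2.0 severity threshold and the monthly numbers are assumptions.

```python
# Illustrative monthly blue water scarcity calculation (not the authors' model).
# Availability is approximated as natural runoff minus an environmental flow
# requirement, here assumed to be 80% of runoff as a placeholder.

ENV_FLOW_FRACTION = 0.8  # assumed share of runoff reserved for ecosystems

def blue_water_scarcity(footprint_m3, runoff_m3, env_flow_fraction=ENV_FLOW_FRACTION):
    """Ratio of blue water footprint to blue water availability for one month."""
    availability = runoff_m3 * (1.0 - env_flow_fraction)
    return footprint_m3 / availability if availability > 0 else float("inf")

# Toy example: monthly footprints and runoff (million m3) for one basin.
footprint = [40, 35, 30, 25, 20, 15, 10, 12, 18, 25, 32, 38]
runoff    = [90, 95, 110, 130, 150, 160, 170, 150, 120, 100, 90, 85]

scarcity = [blue_water_scarcity(f, r) for f, r in zip(footprint, runoff)]
severe_months = sum(s > 2.0 for s in scarcity)  # "severe" threshold is assumed
print([round(s, 2) for s in scarcity], severe_months)
```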

  9. MACHINE LEARNING TECHNIQUES USED IN BIG DATA

    Directory of Open Access Journals (Sweden)

    STEFANIA LOREDANA NITA

    2016-07-01

    Full Text Available The classical tools used in data analysis are not enough to benefit from all the advantages of big data. The amount of information is too large for a complete investigation, and possible connections and relations between data could be missed, because it is difficult or even impossible to verify all assumptions over the information. Machine learning is a great solution for finding concealed correlations or relationships between data, because it runs at machine scale and works very well with large data sets. The more data we have, the more useful the machine learning algorithm is, because it "learns" from the existing data and applies the discovered rules to new entries. In this paper, we present some machine learning algorithms and techniques used in big data.
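
    As a minimal illustration of the "learn from existing data, apply the rules to new entries" idea above (a generic sketch, not tied to the paper), the snippet trains a classifier on historical records and applies it to unseen ones; scikit-learn and the synthetic dataset are assumptions.

```python
# A generic "learn from existing data, apply the learned rules to new entries" sketch.
# Assumes scikit-learn is available; the synthetic dataset is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pretend these are historical records with a known label.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_old, y_old)      # "learn" the rules from existing data
pred = model.predict(X_new)  # apply them to new entries
print(f"accuracy on unseen entries: {accuracy_score(y_new, pred):.3f}")
```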

  10. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  11. A High-Order CFS Algorithm for Clustering Big Data

    Directory of Open Access Journals (Sweden)

    Fanyu Bu

    2016-01-01

    Full Text Available With the development of the Internet of Everything, such as the Internet of Things, the Internet of People, and the Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective for clustering the heterogeneous data that is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify our proposed algorithm on different datasets, by comparison with two other clustering schemes, HOPCM and CFS. Results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.
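
    The sketch below assumes that CFS here refers to clustering by fast search and find of density peaks; it shows a plain Euclidean version of that idea, not the tensor-distance, high-order variant proposed in the paper, and the toy data and cut-off distance are invented.

```python
import numpy as np

# Sketch of density-peak clustering, which the CFS in the abstract is assumed to refer
# to; this is a plain Euclidean version, not the paper's tensor-distance variant.

def density_peaks(X, dc=1.0, n_clusters=2):
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (dist < dc).sum(axis=1) - 1       # local density (cut-off kernel)
    delta = np.empty(n)                     # distance to the nearest denser point
    nearest_denser = np.full(n, -1)
    order = np.argsort(-rho)                # points sorted by decreasing density
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = dist[i].max()        # densest point: use the largest distance
        else:
            denser = order[:rank]
            j = denser[np.argmin(dist[i, denser])]
            delta[i], nearest_denser[i] = dist[i, j], j
    # Cluster centers: points with both high density and high delta.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                         # assign by following denser neighbors
        if labels[i] == -1:
            labels[i] = labels[nearest_denser[i]]
    return labels

# Two well-separated toy blobs, purely for illustration.
X = np.vstack([np.random.default_rng(0).normal(0, 0.3, (20, 2)),
               np.random.default_rng(1).normal(3, 0.3, (20, 2))])
print(density_peaks(X, dc=0.5, n_clusters=2))
```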

  12. BIG DATA IN SUPPLY CHAIN MANAGEMENT: AN EXPLORATORY STUDY

    Directory of Open Access Journals (Sweden)

    Gheorghe MILITARU

    2015-12-01

    Full Text Available The objective of this paper is to set a framework for examining the conditions under which big data can create long-term profitability through developing dynamic operations and digital supply networks in the supply chain. We investigate the extent to which big data analytics has the power to change the competitive landscape of industries and could offer operational, strategic and competitive advantages. This paper is based upon a qualitative study of the convergence of predictive analytics and big data in the field of supply chain management. Our findings indicate a need for manufacturers to introduce analytics tools, real-time data, and more flexible production techniques to improve their productivity in line with the new business model. By gathering and analysing vast volumes of data, analytics tools help companies allocate resources and capital spending more effectively, based on risk assessment. Finally, implications and directions for future research are discussed.

  13. Identify too big to fail banks and capital insurance: An equilibrium approach

    Directory of Open Access Journals (Sweden)

    Katerina Ivanov

    2017-09-01

    Full Text Available The objective of this paper is to develop a rational expectation equilibrium model of capital insurance to identify too big to fail banks. The main results of this model include: (1) too big to fail banks can be identified explicitly by a systemic risk measure, the loss beta, computed for all banks in the entire financial sector; (2) the too big to fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; (4) the implicit guarantee subsidy can be estimated endogenously; and lastly, (5) the capital insurance proposal can be used to resolve the moral hazard issue. We implement this model and document that the too big to fail issue has been considerably reduced in the pro-crisis period. As a result, the capital insurance proposal could be a useful macro-regulation innovation policy tool.
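
    The sketch below illustrates one plausible reading of a loss beta, defined here by analogy with a market beta as the covariance of a bank's simulated loss with the sector-wide loss divided by the variance of the sector-wide loss. This definition, the simulated losses and the flagging threshold are assumptions, not the paper's equilibrium model.

```python
import numpy as np

# Hypothetical "loss beta" screen for too-big-to-fail candidates. The definition used
# here (cov of a bank's loss with the sector-wide loss over the variance of the
# sector-wide loss) is an assumption by analogy with a market beta, not the paper's model.

rng = np.random.default_rng(7)
n_scenarios, n_banks = 10_000, 4
common = rng.normal(0, 1, n_scenarios)          # shared macro loss factor
exposure = np.array([3.0, 1.0, 0.6, 0.4])       # assumed sensitivity to the factor
idio = rng.normal(0, 1, (n_scenarios, n_banks))
losses = common[:, None] * exposure + idio      # simulated per-bank losses

sector_loss = losses.sum(axis=1)
loss_beta = np.array([np.cov(losses[:, i], sector_loss)[0, 1]
                      / np.var(sector_loss, ddof=1)
                      for i in range(n_banks)])

threshold = 2.0 / n_banks                       # assumed cut-off: twice the equal share
too_big_to_fail = np.where(loss_beta > threshold)[0]
print(np.round(loss_beta, 3), too_big_to_fail)
```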

  14. Study on LBS for Characterization and Analysis of Big Data Benchmarks

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Chandio

    2014-10-01

    Full Text Available In the past few years, most organizations have been gradually diverting their applications and services to the Cloud. This is because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users on the Internet anywhere in the world. The rapid growth of urbanization in developed and developing countries has led to a new emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in the concept of Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics. Their data are widely available, including GPS traces of vehicles. However, these applications are data-processing and storage hungry, because their data grow in large volumes, from a few dozen TB (terabytes) to thousands of PB (petabytes), i.e. Big Data. To advance the development and assessment of applications such as LBS (Location Based Services), a Big Data benchmark is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, this paper also describes the current status of Big Data benchmarks and our future directions.

  15. Availability of drinking water in US public school cafeterias.

    Science.gov (United States)

    Hood, Nancy E; Turner, Lindsey; Colabianchi, Natalie; Chaloupka, Frank J; Johnston, Lloyd D

    2014-09-01

    This study examined the availability of free drinking water during lunchtime in US public schools, as required by federal legislation beginning in the 2011-2012 school year. Data were collected by mail-back surveys in nationally representative samples of US public elementary, middle, and high schools from 2009-2010 to 2011-2012. Overall, 86.4%, 87.4%, and 89.4% of students attended elementary, middle, and high schools, respectively, that met the drinking water requirement. Most students attended schools with existing cafeteria drinking fountains and about one fourth attended schools with water dispensers. In middle and high schools, respondents were asked to indicate whether drinking fountains were clean, and whether they were aware of any water-quality problems at the school. The vast majority of middle and high school students (92.6% and 90.4%, respectively) attended schools where the respondent perceived drinking fountains to be clean or very clean. Approximately one in four middle and high school students attended a school where the survey respondent indicated that there were water-quality issues affecting drinking fountains. Although most schools have implemented the requirement to provide free drinking water at lunchtime, additional work is needed to promote implementation at all schools. School nutrition staff at the district and school levels can play an important role in ensuring that schools implement the drinking water requirement, as well as promote education and behavior-change strategies to increase student consumption of water at school. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  16. Dealing with variability in water availability: the case of the Verde Grande River basin, Brazil

    Directory of Open Access Journals (Sweden)

    B. Collischonn

    2014-09-01

    Full Text Available This paper presents a water resources management strategy developed by the Brazilian National Water Agency (ANA) to cope with the conflicts between water users in the Verde Grande River basin, located at the southern border of the Brazilian semi-arid region. The basin is dominated by water-demanding fruit irrigation agriculture, which has grown significantly and without adequate water use control over the last 30 years. The current water demand for irrigation exceeds water availability (understood as the 95% percentile of the flow duration curve) in a ratio of three to one, meaning that downstream water users experience more frequent water shortages than upstream ones. The management strategy implemented in 2008 has the objective of equalizing risk for all water users and consists of a set of rules designed to restrict water withdrawals according to the current river water level (indicative of water availability) and water demand. Under that rule, larger farmers have proportionally larger reductions in water use, preserving small subsistence irrigators. Moreover, dry season streamflow is forecasted at strategic points by the end of every rainy season, providing an evaluation of shortage risk. Thus, water users are informed about the forecasts and corresponding restrictions well in advance, allowing for anticipated planning of irrigated areas and practices. In order to enforce the restriction rules, water meters were installed for all larger water users and inefficient farmers were obligated to improve their irrigation systems' performance. Finally, increases in irrigated area are only allowed in the case of annual crops and during months of higher water availability (November to June). The strategy differs from conventional approaches based only on water use priority and has been successful in dealing with the natural variability of water availability, allowing more water to be used in wet years and managing risk in an isonomic manner during dry years.
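
    A minimal sketch of the kind of restriction rule described above, illustrative only: withdrawals are scaled back when demand exceeds the currently available flow, small subsistence users are spared, and larger users absorb proportionally larger cuts. The thresholds and percentages are invented, not the ANA's actual rules.

```python
# Illustrative restriction rule in the spirit of the strategy described above.
# All thresholds and percentages are invented; they are not the ANA's actual rules.

SMALL_USER_LIMIT = 5.0  # L/s below which a withdrawal is treated as subsistence use

def restricted_allocation(requested_ls, river_flow_ls, availability_ls):
    """Scale back each requested withdrawal (L/s) when total demand exceeds the
    flow currently available, sparing small users and cutting large users more."""
    shortage = max(0.0, sum(requested_ls) - min(river_flow_ls, availability_ls))
    if shortage == 0.0:
        return list(requested_ls)
    # Only withdrawals above the subsistence limit share the shortage,
    # in proportion to how far they exceed that limit.
    excess = [max(0.0, q - SMALL_USER_LIMIT) for q in requested_ls]
    total_excess = sum(excess)
    cut_fraction = min(1.0, shortage / total_excess) if total_excess else 0.0
    return [q - e * cut_fraction for q, e in zip(requested_ls, excess)]

# Toy example: three farmers (L/s) during a dry month.
print(restricted_allocation([4.0, 20.0, 60.0], river_flow_ls=70.0, availability_ls=65.0))
```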

  17. Neuroblastoma, a Paradigm for Big Data Science in Pediatric Oncology

    Directory of Open Access Journals (Sweden)

    Brittany M. Salazar

    2016-12-01

    Full Text Available Pediatric cancers rarely exhibit recurrent mutational events when compared to most adult cancers. This poses a challenge in understanding how cancers initiate, progress, and metastasize in early childhood. Also, due to limited detected driver mutations, it is difficult to benchmark key genes for drug development. In this review, we use neuroblastoma, a pediatric solid tumor of neural crest origin, as a paradigm for exploring “big data” applications in pediatric oncology. Computational strategies derived from big data science–network- and machine learning-based modeling and drug repositioning—hold the promise of shedding new light on the molecular mechanisms driving neuroblastoma pathogenesis and identifying potential therapeutics to combat this devastating disease. These strategies integrate robust data input, from genomic and transcriptomic studies, clinical data, and in vivo and in vitro experimental models specific to neuroblastoma and other types of cancers that closely mimic its biological characteristics. We discuss contexts in which “big data” and computational approaches, especially network-based modeling, may advance neuroblastoma research, describe currently available data and resources, and propose future models of strategic data collection and analyses for neuroblastoma and other related diseases.

  18. Burnable absorber-integrated Guide Thimble (BigT) - 1. Design concepts and neutronic characterization on the fuel assembly benchmarks

    International Nuclear Information System (INIS)

    Yahya, Mohd-Syukri; Yu, Hwanyeal; Kim, Yonghee

    2016-01-01

    This paper presents the conceptual designs of a new burnable absorber (BA) for the pressurized water reactor (PWR), which is named 'Burnable absorber-integrated Guide Thimble' (BigT). The BigT integrates BA materials into standard guide thimble in a PWR fuel assembly. Neutronic sensitivities and practical design considerations of the BigT concept are points of highlight in the first half of the paper. Specifically, the BigT concepts are characterized in view of its BA material and spatial self-shielding variations. In addition, the BigT replaceability requirement, bottom-end design specifications and thermal-hydraulic considerations are also deliberated. Meanwhile, much of the second half of the paper is devoted to demonstrate practical viability of the BigT absorbers via comparative evaluations against the conventional BA technologies in representative 17x17 and 16x16 fuel assembly lattices. For the 17x17 lattice evaluations, all three BigT variants are benchmarked against Westinghouse's existing BA technologies, while in the 16x16 assembly analyses, the BigT designs are compared against traditional integral gadolinia-urania rod design. All analyses clearly show that the BigT absorbers perform as well as the commercial BA technologies in terms of reactivity and power peaking management. In addition, it has been shown that sufficiently high control rod worth can be obtained with the BigT absorbers in place. All neutronic simulations were completed using the Monte Carlo Serpent code with ENDF/B-VII.0 library. (author)

  19. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  20. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  1. Effects of soil water and nitrogen availability on photosynthesis and water use efficiency of Robinia pseudoacacia seedlings.

    Science.gov (United States)

    Liu, Xiping; Fan, Yangyang; Long, Junxia; Wei, Ruifeng; Kjelgren, Roger; Gong, Chunmei; Zhao, Jun

    2013-03-01

    The efficient use of water and nitrogen (N) to promote growth and increase yield of fruit trees and crops is well studied. However, little is known about their effects on woody plants growing in arid and semiarid areas with limited water and N availability. To examine the effects of water and N supply on early growth and water use efficiency (WUE) of trees on dry soils, one-year-old seedlings of Robinia pseudoacacia were exposed to three soil water contents (non-limiting, medium drought, and severe drought) as well as to low and high N levels, for four months. Photosynthetic parameters, leaf instantaneous WUE (WUEi) and whole tree WUE (WUEb) were determined. Results showed that, independent of N levels, increasing soil water content enhanced the tree transpiration rate (Tr), stomatal conductance (Gs), intercellular CO2 concentration (Ci), maximum net assimilation rate (Amax), apparent quantum yield (AQY), the range of photosynthetically active radiation (PAR) due to both reduced light compensation point and enhanced light saturation point, and dark respiration rate (Rd), resulting in a higher net photosynthetic rate (Pn) and a significantly increased whole tree biomass. Consequently, WUEi and WUEb were reduced at low N, whereas WUEi was enhanced at high N levels. Irrespective of soil water availability, N supply enhanced Pn in association with an increase of Gs and Ci and a decrease of the stomatal limitation value (Ls), while Tr remained unchanged. Biomass and WUEi increased under non-limiting water conditions and medium drought, as well as WUEb under all water conditions; but under severe drought, WUEi and biomass were not affected by N application. In conclusion, increasing soil water availability improves photosynthetic capacity and biomass accumulation under low and high N levels, but its effects on WUE vary with soil N levels. N supply increased Pn and WUE, but under severe drought, N supply did not enhance WUEi and biomass.
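
    For readers unfamiliar with the two efficiency measures, the snippet below spells out the standard definitions assumed here: leaf instantaneous WUE as net photosynthesis over transpiration, and whole-tree WUE as biomass gained per unit of water used. These are generic textbook definitions, not details taken from this particular study.

```python
# Standard definitions assumed here (not details from the study itself):
# leaf instantaneous WUE = net photosynthetic rate / transpiration rate,
# whole-tree WUE         = biomass produced / cumulative water transpired.

def wue_instantaneous(pn_umol_m2_s: float, tr_mmol_m2_s: float) -> float:
    """Leaf-level WUEi in umol CO2 per mmol H2O."""
    return pn_umol_m2_s / tr_mmol_m2_s

def wue_biomass(biomass_g: float, water_used_kg: float) -> float:
    """Whole-tree WUEb in g dry mass per kg water transpired."""
    return biomass_g / water_used_kg

# Example with made-up values.
print(wue_instantaneous(12.0, 4.0))  # 3.0 umol CO2 / mmol H2O
print(wue_biomass(85.0, 30.0))       # ~2.8 g / kg
```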

  2. Assessment of Suitable Areas for Home Gardens for Irrigation Potential, Water Availability, and Water-Lifting Technologies

    Directory of Open Access Journals (Sweden)

    Tewodros Assefa

    2018-04-01

    Full Text Available The study was conducted in the Lake Tana Basin of Ethiopia to assess potentially irrigable areas for home gardens, water availability, and the feasibility of water-lifting technologies. A GIS-based Multi-Criteria Evaluation (MCE) technique was applied to assess the potential of surface and groundwater sources for irrigation. The factors affecting irrigation practice were identified and the feasibility of water-lifting technologies was evaluated. A pairwise comparison method and expert opinion were used to assign weights to each factor. The results showed that about 345,000 ha and 135,000 ha of land were found suitable for irrigation from surface and groundwater sources, respectively. The rivers could serve about 1–1.2% of the irrigable land during the dry season without water storage structures, whereas groundwater could serve about 2.2–2.4% of the irrigable land, both using conventional irrigation techniques. If the seven major dams within the basin were considered, surface water potential would increase and satisfy about 21% of the irrigable land. If rainwater harvesting techniques were used, about 76% of the basin would be suitable for irrigation. The potential of surface water and groundwater was evaluated with respect to the water requirements of the dominant crops in the region. The rope pump and the deep-well piston hand pump were found to be the most (26%) and the least (9%) applicable low-cost water-lifting technologies in the basin, respectively.
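
    The weight-assignment step mentioned above can be illustrated with a small pairwise-comparison (AHP-style) calculation; the factor names and judgment values below are invented, and the normalized-column-average approximation is used instead of the exact principal eigenvector.

```python
import numpy as np

# Illustrative AHP-style pairwise weighting (factors and judgments are invented;
# the column-normalization average is a common approximation of the principal eigenvector).

factors = ["slope", "soil", "land cover", "distance to water"]
# pairwise[i, j] = how much more important factor i is than factor j (Saaty 1-9 scale).
pairwise = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

col_normalized = pairwise / pairwise.sum(axis=0)  # each column sums to 1
weights = col_normalized.mean(axis=1)             # average across columns

for name, w in zip(factors, weights):
    print(f"{name:18s} {w:.3f}")
print("sum of weights:", round(weights.sum(), 3))  # 1.0
```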

  3. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data, in the form of clinical EMRs and other novel data sources, can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a workshop on GME outcomes and metrics held at the National Academy of Medicine in October 2017, a key theme emerged: big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  4. Global monthly water scarcity: Blue water footprints versus blue water availability

    NARCIS (Netherlands)

    Hoekstra, Arjen Ysbert; Mekonnen, Mesfin; Chapagain, Ashok; Mathews, R.E.; Richter, B.D.

    2012-01-01

    Freshwater scarcity is a growing concern, placing considerable importance on the accuracy of indicators used to characterize and map water scarcity worldwide. We improve upon past efforts by using estimates of blue water footprints (consumptive use of ground- and surface water flows) rather than

  5. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  6. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create value for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations among engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industry and academia, and discusses challenges and potential solutions. PMID:26504265

  7. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  8. Big Data technology in traffic: A case study of automatic counters

    Directory of Open Access Journals (Sweden)

    Janković Slađana R.

    2016-01-01

    Full Text Available Modern information and communication technologies together with intelligent devices provide a continuous inflow of large amounts of data that are used by traffic and transport systems. Collecting traffic data is not a challenge nowadays, but issues remain in storing and processing the increasing amounts of data. In this paper we investigate the possibilities of using Big Data technology to store and process data in the transport domain. The term Big Data refers to information resources whose volume, velocity and variety are far beyond the capabilities of commonly used software for storing, processing and managing data. In our case study, the Apache™ Hadoop® Big Data platform was used to process data collected from 10 automatic traffic counters set up in Novi Sad and its surroundings. Indicators of traffic load calculated on the Big Data platform were presented in tables and graphs in Microsoft Office Excel. The visualization and geolocation of the obtained indicators were performed using Microsoft Business Intelligence (BI) tools such as Excel Power View and Excel Power Map. This case study shows that Big Data technologies combined with BI tools can provide reliable support for monitoring traffic management systems.
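
    The record describes aggregating automatic-counter records on Apache Hadoop to obtain traffic-load indicators. A minimal Hadoop Streaming sketch in Python is shown below: a mapper and a reducer that sum vehicle counts per counter and hour. The input schema (CSV lines with counter id, timestamp, and count) is an assumption, not the study's actual format.

```python
#!/usr/bin/env python3
# mapper.py -- assumed input lines: "counter_id,timestamp,vehicle_count"
import sys

for line in sys.stdin:
    fields = line.strip().split(",")
    if len(fields) != 3:
        continue  # skip malformed records
    counter_id, timestamp, count = fields
    hour = timestamp[:13]  # e.g. "2016-05-14T08" (assumed ISO timestamps)
    print(f"{counter_id}_{hour}\t{count}")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums vehicle counts per (counter, hour) key; input arrives sorted by key
import sys

current_key, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{total}")
        current_key, total = key, 0
    total += int(value)
if current_key is not None:
    print(f"{current_key}\t{total}")
```

    Such scripts would typically be submitted with the Hadoop Streaming jar shipped with the cluster; paths and options vary by installation.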

  9. Agrupamentos epistemológicos de artigos publicados sobre big data analytics

    Directory of Open Access Journals (Sweden)

    Patricia Kuzmenko FURLAN

    Full Text Available Abstract: The big data era is already a reality for companies and individuals, and the academic literature on the topic has grown rapidly in recent years. This article aims to identify the main niches and strands of publication on big data analytics. The methodological choice was a bibliometric search of the ISI Web of Science database, using that term to focus on big data management practices. Five distinct groups could be identified among the articles found: the evolution of big data; management, business and strategy; human behavior and sociocultural aspects; data mining and knowledge generation; and the Internet of Things. The study concludes that the topic is emerging and not yet consolidated, with great variation in the terms employed, which affects bibliographic searches. As a complementary result, the main keywords used in publications on big data analytics were identified, which supports the bibliographic searches of future studies.

  10. Adoption of geodemographic and ethno-cultural taxonomies for analysing Big Data

    Directory of Open Access Journals (Sweden)

    Richard James Webber

    2015-05-01

    Full Text Available This paper is intended to contribute to the discussion of the differential level of adoption of Big Data among research communities. Recognising the impracticality of conducting an audit across all forms and uses of Big Data, we have restricted our enquiry to one very specific form of Big Data, namely general purpose taxonomies, of which Mosaic, Acorn and Origins are examples, that rely on data from a variety of Big Data feeds. The intention of these taxonomies is to enable the records of consumers and citizens held on Big Data datasets to be coded according to type of residential neighbourhood or ethno-cultural heritage without any use of questionnaires. Based on our respective experience in the academic social sciences, in government and in the design and marketing of these taxonomies, we identify the features of these classifications which appear to render them attractive or problematic to different categories of potential user or researcher depending on how the relationship is conceived. We conclude by identifying seven classifications of user or potential user who, on account of their background, current position and future career expectations, tend to respond in different ways to the opportunity to adopt these generic systems as aids for understanding social processes.

  11. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  12. Artificial Intelligence in Public Health Prevention of Legionelosis in Drinking Water Systems

    Directory of Open Access Journals (Sweden)

    Peter Sinčak

    2014-08-01

    Full Text Available Good-quality water supplies and safe sanitation in urban areas are a big challenge for governments throughout the world. Providing adequate water quality is a basic requirement for our lives. Colony-forming units of the bacterium Legionella pneumophila in potable water represent a major problem that cannot be overlooked for health protection reasons. We analysed several methods to program a virtual hot water tank with AI (artificial intelligence) tools, including neuro-fuzzy systems, as a precaution against legionellosis. The main goal of this paper is to present research that simulates the temperature profile in the water tank. This research presents a tool for a water management system to simulate conditions that can prevent legionellosis outbreaks in a water system. The challenge is to create a virtual water tank simulator, including the water environment, that can reproduce situations common in building water distribution systems. The key feature of the presented system is its adaptation to any hot water tank. While respecting the basic parameters of hot water, a water supplier and building maintainer are required to ensure the predefined quality and water temperature at each sampling site and avoid the growth of Legionella. The presented system is one small contribution toward overcoming situations in which legionellosis could find good conditions to spread and jeopardize human lives.
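
    The published system uses neuro-fuzzy models, which are not reproduced here. To illustrate the kind of simulation involved, the sketch below integrates a simple energy balance for a hot water tank and counts the time spent in the approximate Legionella growth range of 20–45 °C; every parameter is an illustrative assumption.

```python
# Minimal energy-balance sketch of a hot-water tank (not the authors' neuro-fuzzy model).
# Legionella grows roughly between 20 and 45 C, so the check below flags time spent in that band.
def simulate_tank(t0_c=60.0, ambient_c=20.0, heater_kw=2.0, heater_on_below_c=55.0,
                  volume_l=200.0, loss_coeff_w_per_k=5.0, hours=24, dt_s=60):
    c_p = 4186.0               # J/(kg*K), specific heat of water
    mass = volume_l            # roughly 1 kg per litre
    temp, risky_minutes = t0_c, 0.0
    for _ in range(int(hours * 3600 / dt_s)):
        heating_w = heater_kw * 1000.0 if temp < heater_on_below_c else 0.0
        losses_w = loss_coeff_w_per_k * (temp - ambient_c)
        temp += (heating_w - losses_w) * dt_s / (mass * c_p)   # forward-Euler step
        if 20.0 <= temp <= 45.0:
            risky_minutes += dt_s / 60
    return temp, risky_minutes

final_temp, risky = simulate_tank()
print(f"final temperature {final_temp:.1f} C, minutes in Legionella growth range: {risky:.0f}")
```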

  13. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that the single singular events predicted by the corresponding quintessential (phantom) models appear twice, in a manner that can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and the future may be connected in a non-paradoxical manner in the universes described by the new symmetric solutions.

  14. Stover removal effects on seasonal soil water availability under full and deficit irrigation

    Science.gov (United States)

    Removing corn (Zea mays L.) stover for livestock feed or bioenergy feedstock may impact water availability in the soil profile to support crop growth. The role of stover in affecting soil profile water availability will depend on annual rainfall inputs as well as irrigation level. To assess how res...

  15. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  16. The Need for a Definition of Big Data for Nursing Science: A Case Study of Disaster Preparedness

    Directory of Open Access Journals (Sweden)

    Ho Ting Wong

    2016-10-01

    Full Text Available The rapid development of technology has made enormous volumes of data available and accessible anytime and anywhere around the world. Data scientists call this change a data era and have introduced the term “Big Data”, which has drawn the attention of nursing scholars. Nevertheless, the concept of Big Data is quite fuzzy and there is no agreement on its definition among researchers of different disciplines. Without a clear consensus on this issue, nursing scholars who are relatively new to the concept may consider Big Data to be merely a dataset of a bigger size. Having a suitable definition for nurse researchers in their context of research and practice is essential for the advancement of nursing research. In view of the need for a better understanding of what Big Data is, the aim of this paper is to explore and discuss the concept. Furthermore, an example of a Big Data research study on disaster nursing preparedness involving six million patient records is used for discussion. The example demonstrates that a Big Data analysis can be conducted from many more perspectives than traditional sampling allows, and is superior to traditional sampling. Experience gained from using Big Data in this study will shed light on future opportunities for conducting evidence-based nursing research to achieve competence in disaster nursing.

  17. Changing water availability during the African maize-growing season, 1979–2010

    International Nuclear Information System (INIS)

    Estes, Lyndon D; Chaney, Nathaniel W; Herrera-Estrada, Julio; Sheffield, Justin; Caylor, Kelly K; Wood, Eric F

    2014-01-01

    Understanding how global change is impacting African agriculture requires a full physical accounting of water supply and demand, but accurate, gridded data on key drivers (e.g., humidity) are generally unavailable. We used a new bias-corrected meteorological dataset to analyze changes in precipitation (supply), potential evapotranspiration (Ep, demand), and water availability (expressed as the ratio P/Ep) in 20 countries (focusing on their maize-growing regions and seasons) between 1979 and 2010, and the factors driving changes in Ep. Maize-growing areas in Southern Africa, particularly South Africa, benefitted from increased water availability due in large part to demand declines driven primarily by declining net radiation, increasing vapor pressure, and falling temperatures (with no effect from changing windspeed), with smaller increases in supply. Sahelian-zone countries in West Africa, as well as Ethiopia in East Africa, had strong increases in availability driven primarily by rainfall rebounding from the long-term Sahelian droughts, with little change or small reductions in demand. However, intra-seasonal supply variability generally increased in West and East Africa. Across all three regions, declining net radiation exerted downward pressure on demand, generally overriding the upward pressure caused by increasing temperatures, whose regional effects were largest in East Africa. A small number of countries, mostly in or near East Africa (Tanzania and Malawi), experienced declines in water availability primarily due to decreased rainfall, exacerbated by increasing demand. Much of the reduced water availability in East Africa occurred during the more sensitive middle part of the maize-growing season, suggesting negative consequences for maize production. (paper)
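
    The availability index used in this record is the simple ratio of precipitation to potential evapotranspiration. A minimal sketch of computing seasonal P/Ep and its linear trend is shown below; the numbers are invented and do not come from the study's dataset.

```python
import numpy as np

# Invented seasonal totals for a maize-growing region, 1979-1984 (mm per season).
years = np.arange(1979, 1985)
precip = np.array([410.0, 380.0, 455.0, 430.0, 470.0, 500.0])   # P, supply
pet    = np.array([620.0, 640.0, 600.0, 610.0, 590.0, 580.0])   # Ep, demand

availability = precip / pet                     # P/Ep, dimensionless
slope, intercept = np.polyfit(years, availability, 1)
print("P/Ep by year:", availability.round(2))
print(f"linear trend: {slope:+.4f} per year")   # positive slope = increasing availability
```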

  18. Big Data and central banks

    Directory of Open Access Journals (Sweden)

    David Bholat

    2015-04-01

    Full Text Available This commentary recaps a Centre for Central Banking Studies event held at the Bank of England on 2–3 July 2014. The article covers three main points. First, it situates the Centre for Central Banking Studies event within the context of the Bank’s Strategic Plan and initiatives. Second, it summarises and reflects on major themes from the event. Third, the article links central banks’ emerging interest in Big Data approaches with their broader uptake by other economic agents.

  19. The impact of big data and business analytics on supply chain management

    Directory of Open Access Journals (Sweden)

    Hans W. Ittmann

    2015-05-01

    Objective: This article endeavours to highlight the evolving nature of the supply chain management (SCM) environment, to identify how two major trends (‘big data’ and analytics) will impact SCM in future, to show the benefits that can be derived if these trends are embraced, and to make recommendations to supply chain managers. Method: The importance of extracting value from the huge amounts of data available in the SCM area is stated. ‘Big data’ and analytics are defined and their impact on various SCM applications clearly illustrated. Results: It is shown, through examples, how the SCM area can be affected by these new trends and developments. In these examples ‘big data’ analytics have already been embraced, used and implemented successfully. Big data is a reality, and using analytics to extract value from the data has the potential to make a huge impact. Conclusion: It is strongly recommended that supply chain managers take note of these two trends, since better use of ‘big data’ analytics can ensure that they keep abreast of developments and changes which can assist in enhancing business competitiveness.

  20. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  1. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    Science.gov (United States)

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
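
    Since the review frames diagnosis as a classification task, a minimal supervised-classification sketch may help fix ideas. It uses synthetic feature vectors standing in for features extracted from nanotech-based measurements and a generic scikit-learn classifier; it is not the authors' pipeline.

```python
# Minimal supervised-classification sketch (synthetic data, not the authors' pipeline).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic "measurement" vectors standing in for features from nanotech-based sensing.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```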

  2. Big Data and Dementia: Charting the Route Ahead for Research, Ethics, and Policy

    Directory of Open Access Journals (Sweden)

    Marcello Ienca

    2018-02-01

    Full Text Available Emerging trends in pervasive computing and medical informatics are creating the possibility for large-scale collection, sharing, aggregation and analysis of unprecedented volumes of data, a phenomenon commonly known as big data. In this contribution, we review the existing scientific literature on big data approaches to dementia, as well as commercially available mobile-based applications in this domain. Our analysis suggests that big data approaches to dementia research and care hold promise for improving current preventive and predictive models, casting light on the etiology of the disease, enabling earlier diagnosis, optimizing resource allocation, and delivering more tailored treatments to patients with specific disease trajectories. This promising outlook, however, has not yet materialized, and it raises a number of technical, scientific, ethical, and regulatory challenges. This paper provides an assessment of these challenges and charts the route ahead for research, ethics, and policy.

  3. 75 FR 71431 - Clean Water Act Section 303(d): Availability of List Decisions Correction

    Science.gov (United States)

    2010-11-23

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9230-1] Clean Water Act Section 303(d): Availability of List... Availability. SUMMARY: This action corrects a Federal Register notice that published on November 9, 2010 at 75 FR 68783 announcing the availability of EPA decisions identifying water quality limited segments and...

  4. CSIR Technologies and Interventions to maximise the availability of water for Scenarios of Industrial Growth

    CSIR Research Space (South Africa)

    Harrison, Pienaar

    2017-10-01

    Full Text Available [Presentation extract; only fragments survive, including citations (WEF, 2017; UN Water Report, 2016; McKinsey Global Institute) and labels from a water-energy nexus diagram covering power sources, steam-turbine carriers, hydraulic fracturing, biofuels, desalination, waste water and raw water treatment, distribution, and abstraction.] At the same time, climate change is likely to result in reduced surface water availability, shifts in the seasonality of rainfall and runoff, and growing water use demands...

  5. Identifying The Purchasing Power Parity of Indonesia Rupiah (IDR based on BIG MAC Index

    Directory of Open Access Journals (Sweden)

    Tongam Sihol Nababan

    2016-12-01

    Full Text Available The aim of this study is to identify: (1) the profile of the exchange rate and purchasing power parity of the IDR against the US dollar based on the Big Mac Index, compared with the exchange rates of other countries, and (2) the position of the Big Mac Affordability of Indonesia compared to other ASEAN countries. The results show that, based on the Big Mac Index over the period April 1998 to January 2015, the IDR exchange rate tends to be undervalued against the US dollar. The currency tends to be undervalued because non-tradable components have not been included in the Big Mac Index. The Big Mac Affordability index indicates that there is a great disparity of income between Singapore and five other ASEAN countries. The purchasing power of the real income of the people in Singapore is nearly five times the real income of the people in Indonesia.

  6. The Coupling of Ecosystem Productivity and Water Availability in Dryland Regions

    Science.gov (United States)

    Scott, R. L.; Biederman, J. A.; Barron-Gafford, G.

    2014-12-01

    Land cover and climatic change will alter biosphere-atmosphere exchanges of water vapor and carbon dioxide depending, in part, on feedbacks between biotic activity and water availability. Eddy covariance observations allow us to estimate ecosystem-scale productivity and respiration, and these datasets are now becoming sufficiently mature to advance understanding of these ecohydrological interactions. Here we use a network of sites in semiarid western North America representing gradients of water availability and functional plant type. We examine how precipitation (P) controls evapotranspiration (ET), net ecosystem production (NEP), and its component fluxes of ecosystem respiration (Reco) and gross ecosystem production (GEP). Despite the high variability in seasonal and annual precipitation timing and amounts that we expect to influence ecosystem function, we find persistent overall relationships between P or ET and the fluxes of NEP, Reco and GEP across the network, indicating a commonality and resilience in ecosystem soil and plant response to water availability. But we also observe several important site differences such as prior seasonal legacy effects on subsequent fluxes which vary depending on dominant plant functional type. For example, multiyear droughts, episodic cool-season droughts, and hard winter freezes seem to affect the herbaceous species differently than the woody ones. Nevertheless, the overall, strong coupling between hydrologic and ecologic processes at these sites bolsters our ability to predict the response of dryland ecosystems to future precipitation change.

  7. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only led to challenging technical questions; it is also accompanied by all kinds of new ethical and moral issues. To handle big data responsibly, these issues must also be thought through, because poor use of data can have adverse consequences for

  8. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for predicting disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented upon. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact on clinical practices and the doctor-patient relationship of the influx of Big Data and computerized medicine; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  9. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  10. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  11. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  12. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  13. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and attempts to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper also makes an original contribution by proposing terminology for ‘big data’ in Tamil. The paper further suggests a few areas to explore using big data in Tamil along the lines of the Tamil Nadu Government ‘vision 2023’. Whilst big data has something to offer everyone, it ...

  14. Optimizing Regional Food and Energy Production under Limited Water Availability through Integrated Modeling

    Directory of Open Access Journals (Sweden)

    Junlian Gao

    2018-05-01

    Full Text Available Across the world, human activity is approaching planetary boundaries. In northwest China, in particular, the coal industry and agriculture are competing for key limited inputs of land and water. In this situation, the traditional approach of planning the development of each sector independently fails to deliver sustainable solutions, as solutions made in sectoral ‘silos’ are often suboptimal for the entire economy. We propose a spatially detailed cost-minimizing model for coal and agricultural production in a region under constraints on land and water availability. We apply the model to a case study of Shanxi province, China. We show how such an integrated optimization, which takes maximum advantage of the spatial heterogeneity in resource abundance, could help resolve the conflicts around the water–food–energy (WFE) nexus and assist in its management. We quantify the production-possibility frontiers under different water-availability scenarios and demonstrate that in water-scarce regions like Shanxi, production capacity and the corresponding production solutions are highly sensitive to water constraints. The shadow prices estimated in the model could be the basis for intelligent differentiated water pricing, not only to enable water-resource transfers between agriculture and the coal industry, and across regions, but also to achieve cost-effective WFE management.
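
    The structure of such a model can be illustrated with a toy linear program: two outputs (coal and grain) produced at two sites with different costs and water intensities, demands to meet, and a water cap at each site. All coefficients below are invented; the duals of the water constraints play the role of the shadow prices discussed in the abstract.

```python
# Toy cost-minimizing production model under water limits (all coefficients invented).
import numpy as np
from scipy.optimize import linprog

# Decision variables: [coal_A, coal_B, grain_A, grain_B] produced at two sites A and B.
cost = np.array([10.0, 12.0, 4.0, 3.0])          # cost per unit of output

A_ub = np.array([
    [-1, -1,  0,  0],    # coal_A + coal_B >= 50   (coal demand, written as <= form)
    [ 0,  0, -1, -1],    # grain_A + grain_B >= 80 (grain demand)
    [ 3,  0,  2,  0],    # water used at site A <= 200
    [ 0,  4,  0,  1],    # water used at site B <= 60
])
b_ub = np.array([-50.0, -80.0, 200.0, 60.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("allocation:", res.x.round(2), "total cost:", round(res.fun, 2))
# Dual values (shadow prices) of the constraints; the water duals are the analogue of the
# differentiated water prices mentioned in the abstract (exposed by the HiGHS solvers in SciPy >= 1.7).
print("constraint duals:", res.ineqlin.marginals.round(3))
```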

  15. Advances in Risk Analysis with Big Data.

    Science.gov (United States)

    Choi, Tsan-Ming; Lambert, James H

    2017-08-01

    With cloud computing, Internet-of-things, wireless sensors, social media, fast storage and retrieval, etc., organizations and enterprises have access to unprecedented amounts and varieties of data. Current risk analysis methodology and applications are experiencing related advances and breakthroughs. For example, highway operations data are readily available, and making use of them reduces risks of traffic crashes and travel delays. Massive data of financial and enterprise systems support decision making under risk by individuals, industries, regulators, etc. In this introductory article, we first discuss the meaning of big data for risk analysis. We then examine recent advances in risk analysis with big data in several topic areas. For each area, we identify and introduce the relevant articles that are featured in the special issue. We conclude with a discussion on future research opportunities. © 2017 Society for Risk Analysis.

  16. Sustainable Investment in a Supply Chain in the Big Data Era: An Information Updating Approach

    Directory of Open Access Journals (Sweden)

    Yanping Cheng

    2018-02-01

    Full Text Available We are now living in the big data era, in which firms can improve their decision making by adopting big data technology to utilize mass information. To explore the effects of big data technology, we build an analytical model to study sustainable investment in a supply chain, consisting of one manufacturer and one retailer, using a Bayesian information-updating approach. We derive the optimal sustainable investment level for the manufacturer and the optimal order quantity for the retailer. Comparing the results with and without the big data technology, we find that whether the manufacturer should make more sustainable investment when the retailer adopts the big data technology depends on the service level at the retailer side. Interestingly, it is not always optimal for the retailer to adopt the big data technology. We identify the conditions under which the manufacturer and retailer are better off with the big data technology. In addition, we investigate the impact of the number of observations of market information and find that the optimal decisions and profits increase with the number of observations if and only if the service level is low.
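
    The modelling ingredients named in the abstract - Bayesian information updating of a demand forecast followed by an order-quantity decision - can be sketched with a normal-normal conjugate update feeding a newsvendor rule. The sketch below is far simpler than the paper's model, and every number in it is invented.

```python
# Minimal sketch of Bayesian demand updating feeding a newsvendor order decision.
# Normal prior on mean demand, known demand variance; all parameters are invented.
from statistics import NormalDist

prior_mean, prior_var = 1000.0, 200.0**2      # prior belief about mean demand
obs_var = 150.0**2                            # known variance of each demand signal
signals = [1120.0, 1080.0, 1150.0]            # "big data" observations of market demand

n, xbar = len(signals), sum(signals) / len(signals)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)              # conjugate normal-normal update
post_mean = post_var * (prior_mean / prior_var + n * xbar / obs_var)

# Newsvendor order quantity at critical ratio (p - c) / p, using the predictive distribution.
price, cost = 10.0, 6.0
critical_ratio = (price - cost) / price
pred_std = (post_var + obs_var) ** 0.5        # predictive std of actual demand
order_qty = post_mean + NormalDist().inv_cdf(critical_ratio) * pred_std
print(f"posterior mean demand {post_mean:.0f}, order quantity {order_qty:.0f}")
```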

  17. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future inquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors, and an attempt is made to tackle the problem that participation in a public inquiry comes far too late to be objective. The author feels that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries, but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  18. Understanding the allure of big infrastructure: Jakarta’s Great Garuda Sea Wall Project

    Directory of Open Access Journals (Sweden)

    Emma Colven

    2017-06-01

    Full Text Available In response to severe flooding in Jakarta, a consortium of Dutch firms in collaboration with the Indonesian government has designed the 'Great Garuda Sea Wall' project. The master plan proposes to construct a sea wall to enclose Jakarta Bay. A new waterfront city will be built on over 1000 hectares (ha) of reclaimed land in the shape of the Garuda, Indonesia’s national symbol. By redeveloping North Jakarta, the project promises to realise the world-class city aspirations of Indonesia’s political elites. Heavily reliant on hydrological engineering, hard infrastructure and private capital, the project has been presented by proponents as the optimum way to protect the city from flooding. The project retains its allure among political elites despite not directly addressing land subsidence, understood to be a primary cause of flooding. I demonstrate how this project is driven by a techno-political network that brings together political and economic interests, world-class city discourses, engineering expertise, colonial histories, and postcolonial relations between Jakarta and the Netherlands. Due in part to this network, big infrastructure has long constituted the preferred state response to flooding in Jakarta. I thus make a case for provincialising narratives that claim we are witnessing a return to big infrastructure in water management.

  19. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over Big data with R packages. It would be helpful if readers have basic knowledge of R.

  20. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon and many biomedical disciplines have got their own tribune on the topic. Perspectives and debates are flourishing while a consensual definition for big data is lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning techniques, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields such as artificial intelligence are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Toxic metals' concentration in water of Kriveljska Reka and its tributaries and influence of water there

    International Nuclear Information System (INIS)

    Lukic, D.; Zlatkovic, S.; Vuckovic, M.; Jovanovic, R.

    2002-01-01

    The Kriveljska Reka flows near Bor, a large mining basin in eastern Serbia. The river is formed by two smaller rivers, the Cerova Reka and the Valja Mare, and flows past the village of Veliki Krivelj, the site of one of the most important open-pit mines in the Bor area. The Kriveljska Reka therefore receives waste waters from several sections of the Bor Mining Basin situated on its banks. We present the concentrations of seven toxic metals, pH values, and chemical oxygen demand measured at eight points along the Kriveljska Reka, together with the influence of waste waters on the quality of the river's water. Based on our results, we conclude that waste waters from the Bor Mining Basin contaminate the Kriveljska Reka, ultimately leaving it a dead river. (author)

  2. Reinventing Radiology: Big Data and the Future of Medical Imaging.

    Science.gov (United States)

    Morris, Michael A; Saboury, Babak; Burkett, Brian; Gao, Jackson; Siegel, Eliot L

    2018-01-01

    Today, data surrounding most of our lives are collected and stored. Data scientists are beginning to explore applications that could harness this information and make sense of it. In this review, the topic of Big Data is explored, and applications in modern health care are considered. Big Data is a concept that has evolved from the modern trend of "scientism." One of the primary goals of data scientists is to develop ways to discover new knowledge from the vast quantities of increasingly available information. Current and future opportunities and challenges with respect to radiology are provided with emphasis on cardiothoracic imaging.

  3. BLAM (Benthic Light Availability Model): A Proposed Model of Hydrogeomorphic Controls on Light in Rivers

    Science.gov (United States)

    Julian, J. P.; Doyle, M. W.; Stanley, E. H.

    2006-12-01

    Light is vital to the dynamics of aquatic ecosystems. It drives photosynthesis and photochemical reactions, affects thermal structure, and influences the behavior of aquatic biota. Despite the fundamental role of light in riverine ecosystems, light studies in rivers have been mostly neglected because i) boundary conditions (e.g., banks, riparian vegetation) make ambient light measurements difficult, and ii) the optical water quality of rivers is highly variable and difficult to characterize. We propose a benthic light availability model (BLAM) that predicts the percentage of incoming photosynthetically active radiation (PAR) available at the river bed. BLAM was developed by quantifying light attenuation by the five hydrogeomorphic controls that dictate riverine light availability: topography, riparian vegetation, channel geometry, optical water quality, and water depth. BLAM was calibrated using hydrogeomorphic data and light measurements from two rivers: Deep River - a 5th-order, turbid river in central North Carolina, and Big Spring Creek - a 2nd-order, optically clear stream in central Wisconsin. We used a series of four PAR sensors to measure i) above-canopy PAR, ii) PAR above the water surface, iii) PAR below the water surface, and iv) PAR on the stream bed. These measurements were used to develop empirical light attenuation coefficients, which were then used in combination with optical water quality measurements, shading analyses, channel surveys, and flow records to quantify the spatial and temporal variability in riverine light availability. Finally, we apply BLAM to the Baraboo River - a 6th-order, 120-mile, unimpounded river in central Wisconsin - in order to characterize light availability along the river continuum (from headwaters to mouth).
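
    The attenuation chain that the four-sensor setup measures can be written as a product of three factors: the fraction of PAR passing the topographic and riparian shading, the fraction transmitted through the water surface, and Beer-Lambert decay through the water column. A minimal sketch with invented coefficients follows; it is an illustration of that chain, not the calibrated BLAM itself.

```python
import math

# Minimal sketch of the attenuation chain described for BLAM; all coefficients are invented.
def benthic_par_fraction(shade_fraction: float, surface_loss_fraction: float,
                         kd_per_m: float, depth_m: float) -> float:
    """Fraction of above-canopy PAR reaching the stream bed.

    shade_fraction        -- PAR removed by topography + riparian canopy (sensor 1 -> 2)
    surface_loss_fraction -- PAR lost to reflection at the water surface  (sensor 2 -> 3)
    kd_per_m              -- diffuse attenuation coefficient of the water column
    depth_m               -- water depth; Beer-Lambert decay governs sensor 3 -> 4
    """
    return (1.0 - shade_fraction) * (1.0 - surface_loss_fraction) * math.exp(-kd_per_m * depth_m)

# Turbid, shaded reach vs. clear, open reach (illustrative numbers only).
print(f"{benthic_par_fraction(0.60, 0.10, 2.5, 0.8):.3f}")   # ~0.05 of incoming PAR
print(f"{benthic_par_fraction(0.20, 0.10, 0.4, 0.5):.3f}")   # ~0.59 of incoming PAR
```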

  4. Water availability and vulnerability of 225 large cities in the United States

    Science.gov (United States)

    Padowski, Julie C.; Jawitz, James W.

    2012-12-01

    This study presents a quantitative national assessment of urban water availability and vulnerability for 225 U.S. cities with population greater than 100,000. Here, the urban assessments account for not only renewable water flows, but also the extracted, imported, and stored water that urban systems access through constructed infrastructure. These sources represent important hydraulic components of the urban water supply, yet are typically excluded from water scarcity assessments. Results from this hydraulic-based assessment were compared to those obtained using a more conventional method that estimates scarcity solely based on local renewable flows. The inclusion of hydraulic components increased the mean availability to cities, leading to a significantly lower portion of the total U.S. population considered "at risk" for water scarcity (17%) than that obtained from the runoff method (47%). Water vulnerability was determined based on low-flow conditions, and smaller differences were found for this metric between at-risk populations using the runoff (66%) and hydraulic-based (54%) methods. The large increase in the susceptible population between the scarcity measures evaluated using the hydraulic method may better reconcile the seeming contradiction in the United States between perceptions of natural water abundance and widespread water scarcity. Additionally, urban vulnerability measures developed here were validated using a media text analysis. Vulnerability assessments that included hydraulic components were found to correlate with the frequency of urban water scarcity reports in the popular press while runoff-based measures showed no significant correlation, suggesting that hydraulic-based assessments provide better context for understanding the nature and severity of urban water scarcity issues.
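
    The contrast drawn in this record is between scarcity judged on local renewable runoff alone and scarcity judged on everything a city's infrastructure can deliver. A minimal sketch of the two indicators for an invented city:

```python
# Minimal sketch of the two availability measures contrasted in the study (numbers invented).
def scarcity_ratio(demand, supplies):
    """Demand divided by total available supply; values near or above 1 indicate scarcity."""
    return demand / sum(supplies)

city_demand = 450.0            # million m3/yr, invented
local_runoff = 300.0           # renewable flow generated locally
groundwater_extraction = 120.0
imports_via_aqueduct = 180.0
reservoir_storage_yield = 60.0

runoff_only = scarcity_ratio(city_demand, [local_runoff])
hydraulic_based = scarcity_ratio(city_demand, [local_runoff, groundwater_extraction,
                                               imports_via_aqueduct, reservoir_storage_yield])
print(f"runoff-only ratio: {runoff_only:.2f}  (looks scarce)")
print(f"hydraulic-based ratio: {hydraulic_based:.2f}  (infrastructure relieves the apparent scarcity)")
```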

  5. Big Geo Data Management: AN Exploration with Social Media and Telecommunications Open Data

    Science.gov (United States)

    Arias Munoz, C.; Brovelli, M. A.; Corti, S.; Zamboni, G.

    2016-06-01

    The term Big Data has recently been used to describe big, highly varied, complex data sets, which are created and updated at high speed and require faster processing, namely a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed) made public by governments, agencies, private enterprises and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets: firstly, the gathering and geoprocessing of these datasets are very computationally intensive, so it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals; secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN could offer different functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
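
    Of the technologies named, MongoDB's geospatial support is the simplest to illustrate. The sketch below stores GeoJSON point features and runs a proximity query with pymongo; the connection string, database, collection, and documents are assumptions for illustration only.

```python
# Minimal pymongo sketch of geospatial storage and querying (names and data are illustrative).
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
coll = client["geo_demo"]["tweets"]
coll.create_index([("location", GEOSPHERE)])        # 2dsphere index on GeoJSON points

coll.insert_many([
    {"text": "traffico in Duomo", "location": {"type": "Point", "coordinates": [9.1916, 45.4642]}},
    {"text": "pioggia a Lambrate", "location": {"type": "Point", "coordinates": [9.2428, 45.4847]}},
])

# Points within 2 km of Milan's city centre (longitude, latitude order, as GeoJSON requires).
nearby = coll.find({"location": {"$near": {
    "$geometry": {"type": "Point", "coordinates": [9.1900, 45.4640]},
    "$maxDistance": 2000}}})
for doc in nearby:
    print(doc["text"])
```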

  6. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  7. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    Directory of Open Access Journals (Sweden)

    Dhiraj Murthy

    2014-11-01

    Full Text Available Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, access to the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay-as-you-store/process services like Amazon Web Services have opened up possibilities for smaller-scale Big Data projects. There is high demand for this type of research in the digital humanities and digital sociology, for example. However, scholars increasingly find themselves at a disadvantage as available data sets of interest continue to grow in size and complexity. Without a large amount of funding or the ability to form interdisciplinary partnerships, only a select few find themselves in the position to successfully engage Big Data. This article identifies several notable and popular Big Data technologies typically implemented using large and extremely powerful cloud-based systems, and investigates the feasibility and utility of developing Big Data analytics systems implemented on low-cost commodity hardware in basic and easily maintainable configurations for use within academic social research. Through our investigation and an experimental case study (in the growing field of social Twitter analytics), we found that not only are solutions like Cloudera’s Hadoop feasible, but that they can also enable robust, deep, and fruitful research outcomes in a variety of use-case scenarios across the disciplines.

  8. Big Data Research in Italy: A Perspective

    Directory of Open Access Journals (Sweden)

    Sonia Bergamaschi

    2016-06-01

    Full Text Available The aim of this article is to synthetically describe the research projects that a selection of Italian universities is undertaking in the context of big data. Far from being exhaustive, this article has the objective of offering a sample of distinct applications that address the issue of managing huge amounts of data in Italy, collected in relation to diverse domains.

  9. THE 2H(alpha, gamma)6Li REACTION AT LUNA AND BIG BANG NUCLEOSYNTHESIS

    Directory of Open Access Journals (Sweden)

    Carlo Gustavino

    2013-12-01

    Full Text Available The 2H(α, γ)6Li reaction is the leading process for the production of 6Li in standard Big Bang Nucleosynthesis. Recent observations of lithium abundance in metal-poor halo stars suggest that there might be a 6Li plateau, similar to the well-known Spite plateau of 7Li. This calls for a re-investigation of the standard production channel for 6Li. As the 2H(α, γ)6Li cross section drops steeply at low energy, it has never before been studied directly at Big Bang energies. For the first time, the reaction has been studied directly at Big Bang energies at the LUNA accelerator. The preliminary data and their implications for Big Bang nucleosynthesis and the purported 6Li problem will be shown.

  10. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV-based Berlin Inventory of Gambling Behavior - Screening (BIG-S) in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated a concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
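
    The reported sensitivity and specificity are simple functions of the screening tool's confusion matrix against the clinical diagnosis. The sketch below shows the computation; the cell counts are invented, chosen only so that the output is close to the figures reported in the abstract.

```python
# Minimal sketch of how screening sensitivity and specificity are computed;
# the confusion-matrix counts below are invented, not the study's data.
def sensitivity(tp, fn):
    return tp / (tp + fn)          # true positives among all clinically diagnosed cases

def specificity(tn, fp):
    return tn / (tn + fp)          # true negatives among all non-cases

tp, fn = 299, 1                    # screen-positive / screen-negative among 300 diagnosed patients
tn, fp = 127, 5                    # screen-negative / screen-positive among 132 comparison subjects
print(f"sensitivity {sensitivity(tp, fn):.1%}, specificity {specificity(tn, fp):.1%}")
```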

  11. Long term growth responses of loblolly pine to optimal nutrient and water resource availability

    Science.gov (United States)

    Timothy J. Albaugh; H. Lee Allen; Phillip M. Dougherty; Kurt H. Johnsen

    2004-01-01

    A factorial combination of four treatments (control (CW), optimal growing season water availability (IW), optimum nutrient availability (FW), and combined optimum water and nutrient availability (FIW)) in four replications were initiated in an 8-year- old Pinus taeda stand growing on a droughty, nutrient-poor, sandy site in Scotland County, NC and...

  12. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao (E-mail: ztduan@chd.edu.cn); Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun [Chang'an University School of Information Engineering, Xi'an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi'an (China)]

    2014-10-06

    The big data environment creates the data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and allows more intelligent and personalized traffic information services to be offered to traffic information users.

  13. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    Big data environments create the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.

  14. Big Data for Infectious Disease Surveillance and Modeling.

    Science.gov (United States)

    Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile

    2016-12-01

    We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypotheses-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious diseases models and forecasts. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  15. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

    Full Text Available Modern-day advances are increasingly digitizing our lives, which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing large datasets, there are a number of other computing infrastructures available for use in various application domains. The primary focus of this study is how to classify the major big data resource management systems in the context of the cloud computing environment. We identify some key features that characterize big data frameworks, as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study produced some interesting findings that are in contradiction with the literature available on the Internet.

  16. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    The external conditions and competitive terms of agriculture are changing, and this will necessitate a development in the direction of "big business", in which farms become even larger, more industrialized and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  17. Physiological responses of Theobroma cacao L. to water soil available in nursery stage

    Directory of Open Access Journals (Sweden)

    Jairo Garcia Lozano

    2016-01-01

    Full Text Available In El Espinal, Tolima, the effect of water stress on leaf water potential and gas exchange was evaluated in plants of three cacao clones (Theobroma cacao L.). The experiment was established in a split-plot design in a randomized block arrangement: the main plots were four levels of available soil water, and the subplots were three-month-old grafted seedlings of the three clones, with five replicates. The results showed highly significant differences (P < 0.01) in soil water content, but no differences between the materials evaluated. The loss of water from the soil decreases leaf water potential (Ψf) and causes stomatal closure, altering gas exchange, and the vapor pressure deficit (DPV) accentuates this mainly at noon as evapotranspiration increases. The magnitude of the impact of the water deficit depends on climatic variations throughout the day. The climatic variables that most affect plant development are temperature and relative humidity, expressed as DPV. Net photosynthesis and growth of cacao seedlings are physiological variables that are very sensitive to excess water and especially to water deficit.
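
    Since the record leans on the vapor pressure deficit (DPV/VPD) as the key climatic driver, a minimal Python sketch of how VPD can be computed from temperature and relative humidity, assuming the common Tetens approximation for saturation vapor pressure (not necessarily the formulation used in the study):

      import math

      def vpd_kpa(temp_c, rh_percent):
          """Vapor pressure deficit (kPa) from air temperature (deg C) and relative humidity (%)."""
          e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # Tetens saturation vapor pressure, kPa
          return e_sat * (1.0 - rh_percent / 100.0)

      print(round(vpd_kpa(24.0, 85.0), 2))  # cool, humid morning: small deficit (~0.45 kPa)
      print(round(vpd_kpa(33.0, 45.0), 2))  # hot, dry midday: large deficit (~2.8 kPa)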

  18. Big rock point restoration project BWR major component removal, packaging and shipping - planning and experience

    International Nuclear Information System (INIS)

    Milner, T.; Dam, S.; Papp, M.; Slade, J.; Slimp, B.; Nurden, P.

    2001-01-01

    The Big Rock Point boiling water reactor (BWR) at Charlevoix, MI was permanently shut down on August 29th 1997. In 1999 BNFL Inc.'s Reactor Decommissioning Group (RDG) was awarded a contract by Consumers Energy (CECo) for the Big Rock Point (BRP) Major Component Removal (MCR) project. BNFL Inc. RDG has teamed with MOTA, Sargent and Lundy and MDM Services to plan and execute MCR in support of the facility restoration project. The facility restoration project will be completed by 2005. Key to the success of the project has been the integration of best available demonstrated technology into a robust and responsive project management approach, which places emphasis on safety and quality assurance in achieving project milestones linked to time and cost. To support decommissioning of the BRP MCR activities, a reactor vessel (RV) shipping container is required. Discussed in this paper is the design and fabrication of a 10 CFR Part 71 Type B container necessary to ship the BRP RV. The container to be used for transportation of the RV to the burial site was designed as an Exclusive Use Type B package for shipment and burial at the Barnwell, South Carolina (SC) disposal facility. (author)

  19. Metabolomic response of Calotropis procera growing in the desert to changes in water availability.

    Science.gov (United States)

    Ramadan, Ahmed; Sabir, Jamal S M; Alakilli, Saleha Y M; Shokry, Ahmed M; Gadalla, Nour O; Edris, Sherif; Al-Kordy, Magdy A; Al-Zahrani, Hassan S; El-Domyati, Fotouh M; Bahieldin, Ahmed; Baker, Neil R; Willmitzer, Lothar; Irgang, Susann

    2014-01-01

    Water availability is a major limitation for agricultural productivity. Plants growing in severe arid climates such as deserts provide tools for studying plant growth and performance under extreme drought conditions. The perennial species Calotropis procera used in this study is a shrub growing in many arid areas which has an exceptional ability to adapt and be productive in severely arid conditions. We describe the results of studying the metabolomic response of wild C. procera plants growing in the desert to a one-time water supply. Leaves of C. procera plants were taken before watering and at 1 hour, 6 hours and 12 hours after watering, and subjected to metabolomics and lipidomics analysis. Analysis of the data reveals that within one hour after watering C. procera has already responded at the metabolic level to the sudden water availability, as evidenced by major changes such as increased levels of most amino acids, a decrease in sucrose, raffinose and maltitol, a decrease in storage lipids (triacylglycerols), and an increase in membrane lipids including those of photosynthetic membranes. These changes still prevail at the 6-hour time point after watering; however, 12 hours after watering the metabolomics data are essentially indistinguishable from the pre-watering state, demonstrating not only a rapid response to water availability but also a rapid response to the loss of water. Taken together, these data suggest that the ability of C. procera to survive under the very harsh drought conditions prevailing in the desert might be associated with its rapid adjustments to water availability and water loss.

  20. Sustainability of small reservoirs and large scale water availability under current conditions and climate change

    OpenAIRE

    Krol, Martinus S.; de Vries, Marjella J.; van Oel, P.R.; Carlos de Araújo, José

    2011-01-01

    Semi-arid river basins often rely on reservoirs for water supply. Small reservoirs may impact on large-scale water availability both by enhancing availability in a distributed sense and by subtracting water for large downstream user communities, e.g. served by large reservoirs. Both of these impacts of small reservoirs are subject to climate change. Using a case-study on North-East Brazil, this paper shows that climate change impacts on water availability may be severe, and impacts on distrib...

  1. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  2. Future Water Availability from Hindukush-Karakoram-Himalaya upper Indus Basin under Conflicting Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Shabeh ul Hasson

    2016-08-01

    Full Text Available The future of the crucial Himalayan water supplies has generally been assessed under anthropogenic warming that is typically consistent between observations and climate model projections. However, conflicting mid-to-late melt-season cooling within the upper Indus basin (UIB) suggests that the future of its melt-dominated hydrological regime, and of the subsequent water availability under a changing climate, is still only indistinctly understood. Here, the future water availability from the UIB is presented under both observed and projected (likely but contrasting) climate change scenarios. Continuation of the prevailing climatic changes suggests decreased and delayed glacier melt but increased and earlier snowmelt, leading to a reduction in overall water availability and profound changes in the seasonality of the hydrological regime. Hence, the previously reported pattern of an initial increase in water availability due to enhanced glacier melt under typically projected warmer climates, followed by an abrupt decrease once the glaciers vanish, holds only if the UIB starts to follow the global warming signal uniformly. Such discordant findings on future water availability caution the impact assessment communities to consider the relevance of likely (near-future) climate change scenarios, consistent with the prevalent patterns of climatic change, in order to adequately support water resource planning in Pakistan.

  3. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  4. Water availability and genetic effects on wood properties of loblolly pine (Pinus taeda)

    Science.gov (United States)

    C. A. Gonzalez-Benecke; T. A. Martin; Alexander Clark; G. F. Peter

    2010-01-01

    We studied the effect of water availability on basal area growth and wood properties of 11-year-old loblolly pine (Pinus taeda L.) trees from contrasting Florida (FL) (a mix of half-sib families) and South Carolina coastal plain (SC) (a single, half-sib family) genetic material. Increasing soil water availability via irrigation increased average wholecore specific...

  5. Earth Science Data Analysis in the Era of Big Data

    Science.gov (United States)

    Kuo, K.-S.; Clune, T. L.; Ramachandran, R.

    2014-01-01

    Anyone with even a cursory interest in information technology cannot help but recognize that "Big Data" is one of the most fashionable catchphrases of late. From accurate voice and facial recognition, language translation, and airfare prediction and comparison, to monitoring the real-time spread of flu, Big Data techniques have been applied to many seemingly intractable problems with spectacular successes. They appear to be a rewarding way to approach many currently unsolved problems. Few fields of research can claim a longer history with problems involving voluminous data than Earth science. The problems we are facing today with our Earth's future are more complex and carry potentially graver consequences than the examples given above. How has our climate changed? Beside natural variations, what is causing these changes? What are the processes involved and through what mechanisms are these connected? How will they impact life as we know it? In attempts to answer these questions, we have resorted to observations and numerical simulations with ever-finer resolutions, which continue to feed the "data deluge." Plausibly, many Earth scientists are wondering: How will Big Data technologies benefit Earth science research? As an example from the global water cycle, one subdomain among many in Earth science, how would these technologies accelerate the analysis of decades of global precipitation to ascertain the changes in its characteristics, to validate these changes in predictive climate models, and to infer the implications of these changes to ecosystems, economies, and public health? Earth science researchers need a viable way to harness the power of Big Data technologies to analyze large volumes and varieties of data with velocity and veracity. Beyond providing speedy data analysis capabilities, Big Data technologies can also play a crucial, albeit indirect, role in boosting scientific productivity by facilitating effective collaboration within an analysis environment

  6. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and the most difficult part to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
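
    The Lambda architecture discussed above answers queries by merging a periodically recomputed batch view with an incrementally updated speed-layer view. A minimal Python sketch of that query-time merge, with hypothetical keys and counts:

      from collections import Counter

      batch_view = Counter({"sensor_a": 10000, "sensor_b": 7500})  # recomputed over the full dataset
      speed_view = Counter({"sensor_a": 42, "sensor_c": 7})        # events arrived since the last batch run

      def query(key):
          """Combine the batch and speed layers to answer a count query."""
          return batch_view.get(key, 0) + speed_view.get(key, 0)

      print(query("sensor_a"))  # 10042: batch result plus the real-time increment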

  7. Localization of Ca2+ -activated big-conductance K+ channels in rabbit distal colon

    DEFF Research Database (Denmark)

    Hay-Schmidt, Anders; Grunnet, Morten; Abrahamse, Salomon L

    2003-01-01

    Big-conductance Ca(2+)-activated K(+) channels (BK channels) may play an important role in the regulation of epithelial salt and water transport, but little is known about the expression level and the precise localization of BK channels in epithelia. The aim of the present study was to quantify a...

  8. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  9. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  10. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to better understand the phenomenon, starting from a clearly stated definition. This book aims to be a starting read for executives who want (and need) to keep pace with the technological breakthroughs introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general (such as ethics, privacy, and ownership) while others concern more specific business situations (e.g., initial public offering, growth st...

  11. Polder Effects on Sediment-to-Soil Conversion: Water Table, Residual Available Water Capacity, and Salt Stress Interdependence

    Directory of Open Access Journals (Sweden)

    Raymond Tojo Radimy

    2013-01-01

    Full Text Available The French Atlantic marshlands, reclaimed since the Middle Ages, have been used successively for extensive grazing and, more recently (from 1970), for cereal cultivation. The soils have acquired specific properties (structure, moisture and salinity profiles) induced by the successive reclamation and drainage works and by the response of the clay-dominated primary sediments. Based on a survey of the whole Marais Poitevin and Marais de Rochefort, and in order to explain the mechanisms of marsh soil behavior, the work focuses on two typical sites: a grassland left undrained since at least 1964 and a drained, cereal-cultivated field. The relationships between structure and hydromechanical profiles were established using the clay matrix shrinkage curve. They are compared with the hydraulic functioning, including fresh-to-salt water transfers, and with recorded tensiometer profiles. The CE1/5 profiles supply the geochemical and geophysical water data with better accuracy. Combined with the calculation of the available water capacity, they allow the parallel evolution of the residual available water capacity profiles and the salinity profiles to be represented according to plant growth and rooting, from the mesophile systems of the grassland to the hygrophile systems of the drained fields.
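
    The available water capacity calculation referred to above is, in its simplest textbook form, the difference between the volumetric water contents at field capacity and at wilting point integrated over the rooting depth. A minimal Python sketch with hypothetical values for a clayey marsh soil (not values from the study):

      def available_water_mm(theta_fc, theta_wp, root_depth_m):
          """Plant-available water (mm) over the rooting depth, from volumetric
          water contents (m3/m3) at field capacity and at wilting point."""
          return (theta_fc - theta_wp) * root_depth_m * 1000.0  # 1 m of water depth = 1000 mm

      print(available_water_mm(theta_fc=0.42, theta_wp=0.26, root_depth_m=0.6))  # 96 mm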

  12. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  13. Water Availability and Use Pilot-A multiscale assessment in the U.S. Great Lakes Basin

    Science.gov (United States)

    Reeves, Howard W.

    2011-01-01

    Beginning in 2005, water availability and use were assessed for the U.S. part of the Great Lakes Basin through the Great Lakes Basin Pilot of a U.S. Geological Survey (USGS) national assessment of water availability and use. The goals of a national assessment of water availability and use are to clarify our understanding of water-availability status and trends and improve our ability to forecast the balance between water supply and demand for future economic and environmental uses. This report outlines possible approaches for full-scale implementation of such an assessment. As such, the focus of this study was on collecting, compiling, and analyzing a wide variety of data to define the storage and dynamics of water resources and quantify the human demands on water in the Great Lakes region. The study focused on multiple spatial and temporal scales to highlight not only the abundant regional availability of water but also the potential for local shortages or conflicts over water. Regional studies provided a framework for understanding water resources in the basin. Subregional studies directed attention to varied aspects of the water-resources system that would have been difficult to assess for the whole region because of either data limitations or time limitations for the project. The study of local issues and concerns was motivated by regional discussions that led to recent legislative action between the Great Lakes States and regional cooperation with the Canadian Great Lakes Provinces. The multiscale nature of the study findings challenges water-resource managers and the public to think about regional water resources in an integrated way and to understand how future changes to the system-driven by human uses, climate variability, or land-use change-may be accommodated by informed water-resources management.

  14. Research on taxi software policy based on big data

    Directory of Open Access Journals (Sweden)

    Feng Daoming

    2017-01-01

    Full Text Available Through big data analysis, a large number of influencing factors were analyzed statistically to establish a taxi-hailing index set. A mathematical model was built to analyze the degree to which taxi supply matches demand across different times and locations and, combined with intelligent dispatching, to address the hot social issue of how difficult it is to hail a taxi. This article takes Shanghai as an example, with the Central Park, Lu Xun Park and Century Park areas as the objects of study. From the big data of the "sky drops fast travel" intelligent travel platform, passenger demand data and counts of vacant (empty-running) taxis were extracted. A supply-and-demand indicator matrix was then established to obtain the degree to which supply matches demand in each region. The relevant policies of each taxi company were then examined through big data: cluster analysis was used to identify the three decisive groups of factors, and principal component analysis was used to compare the advantages and disadvantages of the existing companies' schemes. Finally, based on the above research, reasonable policies for taxi software are developed.
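
    The supply-demand matching degree described above can be illustrated as a simple ratio of vacant taxis to passenger requests per space-time cell. A minimal Python sketch with hypothetical counts (the record does not publish its matrix):

      import numpy as np

      # Rows are districts, columns are hours of the day; all counts are hypothetical
      requests = np.array([[120, 80, 300],
                           [ 60, 40,  90]], dtype=float)   # passenger demand
      vacant   = np.array([[ 90, 85, 150],
                           [ 70, 45,  60]], dtype=float)   # vacant (empty-running) taxis

      match_degree = vacant / requests  # values well below 1 flag cells where hailing a taxi is hard
      print(np.round(match_degree, 2))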

  15. Influences of climate change on water resources availability in Jinjiang Basin, China.

    Science.gov (United States)

    Sun, Wenchao; Wang, Jie; Li, Zhanjie; Yao, Xiaolei; Yu, Jingshan

    2014-01-01

    The influences of climate change on water resources availability in the Jinjiang Basin, China, were assessed using the Block-wise use of the TOPMODEL with the Muskingum-Cunge routing method (BTOPMC) distributed hydrological model. The ensemble average of downscaled output from sixteen GCMs (General Circulation Models) for the A1B emission scenario (medium CO2 emissions) in the 2050s was adopted to build the regional climate change scenario. The projected precipitation and temperature data were used to drive BTOPMC to predict hydrological changes in the 2050s. Results show that evapotranspiration will increase during most of the year. Runoff from summer to early autumn exhibits an increasing trend, while during the rest of the year it shows a decreasing trend, especially in spring. From the viewpoint of water resource availability, water resources may not be sufficient to fulfill irrigation demand in spring, and one possible solution is to store more water in the reservoir during the preceding summer.
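
    The regional scenario above is built from a multi-model ensemble average. A minimal Python sketch of averaging downscaled projections across sixteen GCMs, with synthetic numbers standing in for the real downscaled fields:

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic monthly precipitation-change projections (%), shape (16 GCMs, 12 months)
      projections = rng.normal(loc=5.0, scale=10.0, size=(16, 12))

      ensemble_mean = projections.mean(axis=0)   # the kind of averaged scenario fed to the hydrological model
      ensemble_spread = projections.std(axis=0)  # inter-model spread as a rough uncertainty measure
      print(np.round(ensemble_mean, 1))
      print(np.round(ensemble_spread, 1))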

  16. Big Data Analytics for Smart Manufacturing: Case Studies in Semiconductor Manufacturing

    Directory of Open Access Journals (Sweden)

    James Moyne

    2017-07-01

    Full Text Available Smart manufacturing (SM) is a term generally applied to the improvement of manufacturing operations through the integration of systems, the linking of physical and cyber capabilities, and taking advantage of information, including leveraging the big data evolution. SM adoption has been occurring unevenly across industries; thus, there is an opportunity to look to other industries to determine solution and roadmap paths for industries such as biochemistry or biology. The big data evolution affords an opportunity for managing significantly larger amounts of information and acting on it with analytics for improved diagnostics and prognostics. The analytics approaches can be defined in terms of dimensions to understand their requirements and capabilities, and to determine technology gaps. The semiconductor manufacturing industry has been taking advantage of the big data and analytics evolution by improving existing capabilities such as fault detection, and supporting new capabilities such as predictive maintenance. For most of these capabilities: (1) data quality is the most important big data factor in delivering high-quality solutions; and (2) incorporating subject matter expertise in analytics is often required for realizing effective on-line manufacturing solutions. In the future, an improved big data environment incorporating smart manufacturing concepts such as the digital twin will further enable analytics; however, it is anticipated that the need for incorporating subject matter expertise in solution design will remain.
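
    Fault detection of the kind mentioned above is, in its simplest univariate form, a control-limit check against a healthy baseline. A minimal Python sketch of that idea; real fault-detection systems in semiconductor manufacturing are multivariate and far richer:

      import numpy as np

      def fault_flags(trace, baseline, k=3.0):
          """Flag samples that fall outside the baseline mean +/- k standard deviations."""
          mu, sigma = baseline.mean(), baseline.std()
          return np.abs(trace - mu) > k * sigma

      rng = np.random.default_rng(1)
      baseline = rng.normal(50.0, 2.0, size=500)                               # healthy-tool sensor readings
      trace = np.concatenate([rng.normal(50.0, 2.0, size=20), [61.0, 62.5]])   # two drifted samples at the end
      print(np.where(fault_flags(trace, baseline))[0])                         # indices of flagged samples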

  17. Shifting species interactions in terrestrial dryland ecosystems under altered water availability and climate change

    Science.gov (United States)

    McCluney, Kevin E.; Belnap, Jayne; Collins, Scott L.; González, Angélica L.; Hagen, Elizabeth M.; Holland, J. Nathaniel; Kotler, Burt P.; Maestre, Fernando T.; Smith, Stanley D.; Wolf, Blair O.

    2012-01-01

    Species interactions play key roles in linking the responses of populations, communities, and ecosystems to environmental change. For instance, species interactions are an important determinant of the complexity of changes in trophic biomass with variation in resources. Water resources are a major driver of terrestrial ecology and climate change is expected to greatly alter the distribution of this critical resource. While previous studies have documented strong effects of global environmental change on species interactions in general, responses can vary from region to region. Dryland ecosystems occupy more than one-third of the Earth's land mass, are greatly affected by changes in water availability, and are predicted to be hotspots of climate change. Thus, it is imperative to understand the effects of environmental change on these globally significant ecosystems. Here, we review studies of the responses of population-level plant-plant, plant-herbivore, and predator-prey interactions to changes in water availability in dryland environments in order to develop new hypotheses and predictions to guide future research. To help explain patterns of interaction outcomes, we developed a conceptual model that views interaction outcomes as shifting between (1) competition and facilitation (plant-plant), (2) herbivory, neutralism, or mutualism (plant-herbivore), or (3) neutralism and predation (predator-prey), as water availability crosses physiological, behavioural, or population-density thresholds. We link our conceptual model to hypothetical scenarios of current and future water availability to make testable predictions about the influence of changes in water availability on species interactions. We also examine potential implications of our conceptual model for the relative importance of top-down effects and the linearity of patterns of change in trophic biomass with changes in water availability. Finally, we highlight key research needs and some possible broader impacts
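
    The conceptual model above treats interaction outcomes as switching when water availability crosses physiological, behavioural, or population-density thresholds. A toy Python sketch of that idea for the plant-plant case, with purely hypothetical threshold values:

      def plant_plant_outcome(water_mm, stress_threshold=150.0, competition_threshold=400.0):
          """Toy mapping from annual water availability (mm) to an expected
          plant-plant interaction outcome; thresholds are illustrative only."""
          if water_mm < stress_threshold:
              return "facilitation"   # neighbors buffer severe water stress
          if water_mm > competition_threshold:
              return "competition"    # water plentiful enough for neighbors to compete
          return "mixed / context-dependent"

      for mm in (100, 250, 500):
          print(mm, "->", plant_plant_outcome(mm))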

  18. Shifting species interactions in terrestrial dryland ecosystems under altered water availability and climate change.

    Science.gov (United States)

    McCluney, Kevin E; Belnap, Jayne; Collins, Scott L; González, Angélica L; Hagen, Elizabeth M; Nathaniel Holland, J; Kotler, Burt P; Maestre, Fernando T; Smith, Stanley D; Wolf, Blair O

    2012-08-01

    Species interactions play key roles in linking the responses of populations, communities, and ecosystems to environmental change. For instance, species interactions are an important determinant of the complexity of changes in trophic biomass with variation in resources. Water resources are a major driver of terrestrial ecology and climate change is expected to greatly alter the distribution of this critical resource. While previous studies have documented strong effects of global environmental change on species interactions in general, responses can vary from region to region. Dryland ecosystems occupy more than one-third of the Earth's land mass, are greatly affected by changes in water availability, and are predicted to be hotspots of climate change. Thus, it is imperative to understand the effects of environmental change on these globally significant ecosystems. Here, we review studies of the responses of population-level plant-plant, plant-herbivore, and predator-prey interactions to changes in water availability in dryland environments in order to develop new hypotheses and predictions to guide future research. To help explain patterns of interaction outcomes, we developed a conceptual model that views interaction outcomes as shifting between (1) competition and facilitation (plant-plant), (2) herbivory, neutralism, or mutualism (plant-herbivore), or (3) neutralism and predation (predator-prey), as water availability crosses physiological, behavioural, or population-density thresholds. We link our conceptual model to hypothetical scenarios of current and future water availability to make testable predictions about the influence of changes in water availability on species interactions. We also examine potential implications of our conceptual model for the relative importance of top-down effects and the linearity of patterns of change in trophic biomass with changes in water availability. Finally, we highlight key research needs and some possible broader impacts

  19. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to it using the Apache Hadoop technology. The major part of the thesis is devoted to a description of this technology. The last chapter deals with the way the aggregation operations are applied and with the issues involved in implementing them. An overall evaluation of the work and the possibilities for future use of the resulting system follow.

  20. The big data potential of epidemiological studies for criminology and forensics.

    Science.gov (United States)

    DeLisi, Matt

    2018-07-01

    Big data, the analysis of original datasets with large samples ranging from ∼30,000 to one million participants to mine unexplored data, has been under-utilized in criminology. However, there have been recent calls for greater synthesis between epidemiology and criminology, and a small number of scholars have utilized epidemiological studies that were designed to measure alcohol and substance use to harvest behavioral and psychiatric measures that relate to the study of crime. These studies have been helpful in producing knowledge about the most serious, violent, and chronic offenders, but applications to more pathological forensic populations are lagging. Unfortunately, big data relating to crime and justice are restricted and limited to criminal justice purposes and not easily available to the research community. Thus, the study of criminal and forensic populations is limited in terms of data volume, velocity, and variety. Additional forays into epidemiology, increased use of available online judicial and correctional data, and unknown new frontiers are needed to bring criminology up to speed in the big data arena. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.