WorldWideScience

Sample records for lakes allequash big

  1. Ecological Health and Water Quality Assessments in Big Creek Lake, AL

    Science.gov (United States)

    Childs, L. M.; Frey, J. W.; Jones, J. B.; Maki, A. E.; Brozen, M. W.; Malik, S.; Allain, M.; Mitchell, B.; Batina, M.; Brooks, A. O.

    2008-12-01

    Big Creek Lake (also known as J.B. Converse Reservoir) serves as the water supply for the majority of residents in Mobile County, Alabama. The area surrounding the reservoir serves as a gopher tortoise mitigation bank and is protected from further development; however, previous disasters and construction have greatly impacted the Big Creek Lake area. The Escatawpa Watershed drains into the lake, and of the seven drainage streams, three have received a 303(d) (impaired water bodies) designation in the past. In the adjacent ecosystem, the forest is experiencing major stress from drought and pine bark beetle infestations. Various agencies are using control methods such as pesticide treatment to eradicate the beetles, and there are many concerns about these control methods and their run-off into the ecosystem. In addition, the Highway 98 construction projects cross the north area of the lake, and the community has expressed concern about both direct and indirect impacts of these projects on the lake. This project addresses concerns about water quality, increasing drought in the Southeastern U.S., forest health as it relates to vegetation stress, and state and federal needs for improved assessment methods, supported by remotely sensed data, to determine coastal forest susceptibility to pine bark beetles. Landsat TM, ASTER, MODIS, and EO-1/ALI imagery was employed to compute the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Moisture Index (NDMI), as well as to detect concentrations of suspended solids, chlorophyll, and water turbidity. This study utilizes NASA Earth observation systems to determine how environmental conditions and human activity relate to pine tree stress and the onset of pine beetle invasion, to relate current water quality data to community concerns, and to gain a better understanding of human impacts upon water resources.
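    The two indices named in this abstract are standard normalized band ratios. A minimal sketch (the reflectance values are illustrative, not taken from the study's imagery, and the band-to-sensor mapping is left out):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

# Healthy vegetation reflects strongly in the near-infrared, so NDVI is high;
# drought or beetle stress lowers NIR reflectance and pulls NDVI down.
healthy = ndvi(nir=0.50, red=0.08)
stressed = ndvi(nir=0.30, red=0.20)
print(round(healthy, 3), round(stressed, 3))
```

    Per-pixel maps of these indices over time are what let a study like this flag canopy stress before beetle damage is visible on the ground.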

  2. Comparative growth models of big-scale sand smelt (Atherina boyeri Risso, 1810) sampled from Hirfanlı Dam Lake, Kırşehir, Ankara, Turkey

    Directory of Open Access Journals (Sweden)

    S. Benzer

    2017-06-01

    In this publication, the growth characteristics of big-scale sand smelt were compared for population dynamics using artificial neural network and length-weight relationship models. This study aims to identify the optimal growth model of big-scale sand smelt with artificial neural networks and length-weight relationship models at Hirfanlı Dam Lake, Kırşehir, Turkey. A total of 1449 samples were collected from Hirfanlı Dam Lake between May 2015 and May 2016. The two models were compared with each other, and the results were evaluated using MAPE (mean absolute percentage error), MSE (mean squared error), and r² (correlation coefficient) as performance criteria. The results of the current study show that artificial neural networks are a superior estimation tool compared to length-weight relationship models for big-scale sand smelt in Hirfanlı Dam Lake.
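    The length-weight relationship referred to here is conventionally W = a·L^b, fitted by least squares on log-transformed data, with MAPE and MSE computed from the back-transformed predictions. A sketch with invented length-weight pairs (not the study's 1449 fish):

```python
import math

# Hypothetical lengths (cm) and weights (g); real data came from 1449 fish.
lengths = [6.0, 7.5, 8.0, 9.2, 10.1, 11.3]
weights = [1.6, 3.2, 3.9, 6.1, 8.2, 11.6]

# Fit W = a * L^b via ordinary least squares on log(W) = log(a) + b*log(L).
xs = [math.log(l) for l in lengths]
ys = [math.log(w) for w in weights]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

pred = [a * l ** b for l in lengths]
mape = 100 * sum(abs(w - p) / w for w, p in zip(weights, pred)) / n
mse = sum((w - p) ** 2 for w, p in zip(weights, pred)) / n
print(f"a={a:.5f}, b={b:.2f}, MAPE={mape:.1f}%, MSE={mse:.4f}")
```

    An exponent b near 3 indicates roughly isometric growth; the study's comparison pits this parametric curve against a trained neural network on the same criteria.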

  3. Scalable Architecture for Personalized Healthcare Service Recommendation using Big Data Lake

    OpenAIRE

    Rangarajan, Sarathkumar; Liu, Huai; Wang, Hua; Wang, Chuan-Long

    2018-01-01

    Personalized health care services utilize relational patient data and big data analytics to tailor medication recommendations. However, most health care data are unstructured, and it takes considerable time and effort to pull them into relational form. This study proposes a novel data lake architecture to reduce data ingestion time and improve the precision of healthcare analytics. It also removes data silos and enhances the analytics by allowing the connectiv...

  4. Water-quality effects on phytoplankton species and density and trophic state indices at Big Base and Little Base Lakes, Little Rock Air Force Base, Arkansas, June through August, 2015

    Science.gov (United States)

    Driver, Lucas; Justus, Billy

    2016-01-01

    Big Base and Little Base Lakes are located on Little Rock Air Force Base, Arkansas, and their close proximity to a dense residential population and an active military/aircraft installation makes the lakes vulnerable to water-quality degradation. The U.S. Geological Survey (USGS) conducted a study from June through August 2015 to investigate the effects of water quality on phytoplankton species and density and trophic state in Big Base and Little Base Lakes, with particular regard to nutrient concentrations. Nutrient concentrations, trophic-state indices, and the large proportion of phytoplankton biovolume composed of cyanobacteria indicate that eutrophic conditions were prevalent in Big Base and Little Base Lakes, particularly in August 2015. Cyanobacteria densities and biovolumes measured in this study likely pose a low to moderate risk of adverse algal toxicity, and the high proportion of filamentous cyanobacteria in the lakes, relative to other algal groups, is important from a fisheries standpoint because these algae are a poor food source for many aquatic taxa. In both lakes, total nitrogen to total phosphorus (N:P) ratios declined over the sampling period as total phosphorus concentrations increased relative to nitrogen concentrations. The N:P ratios in the August samples (20:1 and 15:1 in Big Base and Little Base Lakes, respectively) and other indications of eutrophic conditions are of concern and suggest that exposure of the two lakes to additional nutrients could cause unfavorable dissolved-oxygen conditions and increase the risk of cyanobacteria blooms and associated cyanotoxin issues.
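    Trophic-state indices of the kind cited in this abstract are commonly computed with Carlson's (1977) formulas. A sketch using Carlson's standard coefficients; the sample concentrations below are invented, not the report's measurements:

```python
import math

def tsi_total_phosphorus(tp_ug_per_l):
    """Carlson's trophic state index from total phosphorus (ug/L)."""
    return 14.42 * math.log(tp_ug_per_l) + 4.15

def tsi_chlorophyll(chl_ug_per_l):
    """Carlson's trophic state index from chlorophyll-a (ug/L)."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def classify(tsi):
    # Conventional breakpoints: <40 oligotrophic, 40-50 mesotrophic, >50 eutrophic.
    if tsi < 40:
        return "oligotrophic"
    return "mesotrophic" if tsi <= 50 else "eutrophic"

# A hypothetical late-summer sample: TP = 40 ug/L, chlorophyll-a = 20 ug/L.
for tsi in (tsi_total_phosphorus(40.0), tsi_chlorophyll(20.0)):
    print(round(tsi, 1), classify(tsi))
```

    When the phosphorus- and chlorophyll-based indices agree above the eutrophic threshold, as in this toy sample, the classification is considered robust.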

  5. Spatial distribution of radionuclides in Lake Michigan biota near the Big Rock Point Nuclear Plant

    International Nuclear Information System (INIS)

    Wahlgren, M.A.; Yaguchi, E.M.; Nelson, D.M.; Marshall, J.S.

    1974-01-01

    A survey was made of four groups of biota in the vicinity of the Big Rock Point Nuclear Plant near Charlevoix, Michigan, to determine their usefulness in locating possible sources of plutonium and other radionuclides to Lake Michigan. This 70 MW boiling-water reactor, located on the Lake Michigan shoreline, was chosen because its fuel contains recycled plutonium and because it routinely discharges very low-level radioactive wastes into the lake. Samples of crayfish (Orconectes sp.), green algae (Chara sp. and Cladophora sp.), and an aquatic macrophyte (Potamogeton sp.) were collected in August 1973 at varying distances from the discharge and analyzed for 239,240Pu, 90Sr, and five gamma-emitting radionuclides. Comparison samples of reactor waste solution have also been analyzed for these radionuclides. Comparison of the spatial distributions of the extremely low radionuclide concentrations in biota clearly indicated that 137Cs, 134Cs, 65Zn, and 60Co were released from the reactor; their concentrations decreased exponentially with increasing distance from the discharge. Conversely, concentrations of 239,240Pu, 95Zr, and 90Sr showed no correlation with distance, suggesting that any input from Big Rock was insignificant relative to the atmospheric origin of these isotopes. The significance of these results is discussed, particularly with respect to the current public debate over the possibility of local environmental hazards associated with the use of plutonium as a nuclear fuel. (U.S.)

  6. Isotopic Survey of Lake Davis and the Local Groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Ridley, M N; Moran, J E; Singleton, M J

    2007-08-21

    In September 2007, California Fish and Game (CAFG) plans to eradicate the northern pike from Lake Davis. As a result of the eradication treatment, local residents have concerns that the treatment might impact the local groundwater quality. To address the concerns of the residents, Lawrence Livermore National Laboratory (LLNL) recommended measuring the naturally occurring stable oxygen isotopes in local groundwater wells, Lake Davis, and the Lake Davis tributaries. The purpose of these measurements is to determine if the source of the local groundwater is either rain/snowmelt, Lake Davis/Big Grizzly Creek water or a mixture of Lake Davis/Big Grizzly Creek and rain/snowmelt. As a result of natural evaporation, Lake Davis and the water flowing into Big Grizzly Creek are naturally enriched in {sup 18}oxygen ({sup 18}O), and if a source of a well's water is Lake Davis or Big Grizzly Creek, the well water will contain a much higher concentration of {sup 18}O. This survey will allow for the identification of groundwater wells whose water source is Lake Davis or Big Grizzly Creek. The results of this survey will be useful in the development of a water-quality monitoring program for the upcoming Lake Davis treatment. LLNL analyzed 167 groundwater wells (Table 1), 12 monthly samples from Lake Davis (Table 2), 3 samples from Lake Davis tributaries (Table 2), and 8 Big Grizzly Creek samples (Table 2). Of the 167 groundwater wells sampled and analyzed, only 2 wells contained a significant component of evaporated water, with an isotope composition similar to Lake Davis water. The other 163 groundwater wells have isotope compositions which indicate that their water source is rain/snowmelt.
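    The well-identification logic in this survey is a two-endmember mixing calculation: given the δ18O of meteoric water (rain/snowmelt) and of evaporatively enriched lake water, a well sample's δ18O yields the lake-derived fraction. A sketch with invented endmember values (not LLNL's measured compositions):

```python
def lake_fraction(delta_sample, delta_meteoric, delta_lake):
    """Fraction of well water from the evaporated (lake) endmember, using a
    linear two-endmember mixing model on delta-18O (per mil)."""
    return (delta_sample - delta_meteoric) / (delta_lake - delta_meteoric)

# Hypothetical endmembers: local snowmelt at -14 per mil, evaporated lake
# water at -6 per mil (evaporation enriches the heavy isotope 18O).
meteoric, lake = -14.0, -6.0
for well in (-13.8, -10.0):
    f = lake_fraction(well, meteoric, lake)
    print(f"well at {well} per mil -> {100 * f:.0f}% lake-derived")
```

    A well plotting near the meteoric endmember is flagged as rain/snowmelt-fed; only wells with a substantial evaporated component (2 of 167 in this survey) warrant monitoring during the lake treatment.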

  7. ARSENIC REMOVAL FROM DRINKING WATER BY IRON REMOVAL USEPA DEMONSTRATION PROJECT AT BIG SAUK LAKE MOBILE HOME PARK IN SAUK CENTRE, MN. SIX MONTH EVALUATION REPORT

    Science.gov (United States)

    This report documents the activities performed and the results obtained from the first six months of the arsenic removal treatment technology demonstration project at the Big Sauk Lake Mobile Home Park (BSLMHP) in Sauk Centre, MN. The objectives of the project are to evaluate the...

  8. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data also imply specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  9. Arsenic Removal from Drinking Water by Iron Removal - U.S. EPA Demonstration Project at Big Sauk Lake Mobile Home Park in Sauk Centre, MN Final Performance Evaluation Report

    Science.gov (United States)

    This report documents the activities performed and the results obtained from the one-year arsenic removal treatment technology demonstration project at the Big Sauk Lake Mobile Home Park (BSLMHP) in Sauk Centre, MN. The objectives of the project are to evaluate (1) the effective...

  10. External Nutrient Inputs into Lake Kivu: Rivers and Atmospheric ...

    African Journals Online (AJOL)

    Quantifying external nutrient inputs is a key factor in understanding the formation of methane in Lake Kivu. This tectonic lake, located between Rwanda and the DRC, contains a large quantity of dissolved gases, dominated by carbon dioxide, methane and sulphide. The CH4 is most probably produced in the lake, mainly in ...

  11. Geohydrology of Big Bear Valley, California: phase 1--geologic framework, recharge, and preliminary assessment of the source and age of groundwater

    Science.gov (United States)

    Flint, Lorraine E.; Brandt, Justin; Christensen, Allen H.; Flint, Alan L.; Hevesi, Joseph A.; Jachens, Robert; Kulongoski, Justin T.; Martin, Peter; Sneed, Michelle

    2012-01-01

    The Big Bear Valley, located in the San Bernardino Mountains of southern California, has increased in population in recent years. Most of the water supply for the area is pumped from the alluvial deposits that form the Big Bear Valley groundwater basin. This study was conducted to better understand the thickness and structure of the groundwater basin in order to estimate the quantity and distribution of natural recharge to Big Bear Valley. A gravity survey was used to estimate the thickness of the alluvial deposits that form the Big Bear Valley groundwater basin. This determined that the alluvial deposits reach a maximum thickness of 1,500 to 2,000 feet beneath the center of Big Bear Lake and the area between Big Bear and Baldwin Lakes, and decrease to less than 500 feet thick beneath the eastern end of Big Bear Lake. Interferometric Synthetic Aperture Radar (InSAR) was used to measure pumping-induced land subsidence and to locate structures, such as faults, that could affect groundwater movement. The measurements indicated small amounts of land deformation (uplift and subsidence) in the area between Big Bear Lake and Baldwin Lake, the area near the city of Big Bear Lake, and the area near Sugarloaf, California. Both the gravity and InSAR measurements indicated the possible presence of subsurface faults in subbasins between Big Bear and Baldwin Lakes, but additional data are required for confirmation. The distribution and quantity of groundwater recharge in the area were evaluated by using a regional water-balance model (Basin Characterization Model, or BCM) and a daily rainfall-runoff model (INFILv3). The BCM calculated spatially distributed potential recharge in the study area of approximately 12,700 acre-feet per year (acre-ft/yr) of potential in-place recharge and 30,800 acre-ft/yr of potential runoff. 
Using the assumption that only 10 percent of the runoff becomes recharge, this approach indicated there is approximately 15,800 acre-ft/yr of total recharge in
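    The total-recharge figure follows directly from the stated assumption; a quick arithmetic check:

```python
in_place_recharge = 12_700   # acre-ft/yr of potential in-place recharge (BCM)
potential_runoff = 30_800    # acre-ft/yr of potential runoff (BCM)
runoff_to_recharge = 0.10    # stated assumption: 10% of runoff becomes recharge

total = in_place_recharge + runoff_to_recharge * potential_runoff
print(f"{total:,.0f} acre-ft/yr")  # 15,780, reported rounded to ~15,800
```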

  12. Archeological Investigations at Big Hill Lake, Southeastern Kansas, 1980.

    Science.gov (United States)

    1982-09-01

    settled primarily along the Neosho river and Labette, Big Hill, and Pumpkin creeks. One of the first settlers in Osage township, in which Big Hill...slabs is not known at present. About 10 years later, in 1876, materials were reportedly collected from an aboriginal site along Pumpkin creek...and lengthening its lifetime of use. As would therefore be expected, cracks are present between each of the paired holes on both of the two restored

  13. Kokanee Stocking and Monitoring, Flathead Lake, 1993-1994 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Deleray, Mark (Montana Department of Fish, Wildlife and Parks, Kalispell, MT); Fredenberg, Wade (US Fish and Wildlife Service, Bozeman, MT); Hansen, Barry (Confederated Salish and Kootenai Tribes of the Flathead Nation, Pablo, MT)

    1995-07-01

    One mitigation goal of the Hungry Horse Dam fisheries mitigation program, funded by the Bonneville Power Administration, is to replace lost production of 100,000 adult kokanee in Flathead Lake. The mitigation program calls for a five-year test to determine if kokanee can be reestablished in Flathead Lake. The test consists of annual stocking of one million hatchery-raised yearling kokanee. There are three benchmarks for judging the success of the kokanee reintroduction effort: (1) post-stocking survival of 30 percent of planted kokanee one year after stocking; (2) yearling-to-adult survival of 10 percent (100,000 adult salmon); (3) annual kokanee harvest of 50,000 or more fish per year by 1998, with an average length of 11 inches or longer for harvested fish, and fishing pressure of 100,000 angler hours or more. Kokanee were the primary sport fish species in the Flathead Lake fishery from the early 1900s up until the late 1980s, when the population rapidly declined in numbers and then disappeared. Factors identified as influencing the decline of kokanee are the introduction of opossum shrimp (Mysis relicta), hydroelectric operations, overharvest through angling, and competition and/or predation by lake trout (Salvelinus namaycush) and lake whitefish (Coregonus clupeaformis). The purpose of this report is to summarize the stocking program and present monitoring results from the 1993 and 1994 field seasons. In June 1993, roughly 210,000 yearling kokanee were stocked into two bays on the east shore of Flathead Lake. Following stocking, we observed a high incidence of stocked kokanee in stomach samples from lake trout captured in areas adjacent to the stocking sites and a high percentage of captured lake trout containing kokanee. Subsequent monitoring concluded that excessive lake trout predation precluded significant survival of kokanee stocked in 1993. In June 1994, over 802,000 kokanee were stocked into Big Arm Bay. The combination of near optimum water
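    The three benchmarks reduce to simple head counts against the one-million-fish annual plant; a quick check of the stated targets:

```python
stocked = 1_000_000          # yearling kokanee planted per year
one_year_survival = 0.30     # benchmark 1: 30% alive one year after stocking
yearling_to_adult = 0.10     # benchmark 2: 10% reach adulthood

survivors_year1 = stocked * one_year_survival
adults = stocked * yearling_to_adult
print(f"{survivors_year1:,.0f} fish after one year, {adults:,.0f} adults")
```

    The 10 percent yearling-to-adult target is exactly the 100,000-adult replacement goal named at the top of the abstract.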

  14. Big Bear Exploration Ltd. 1998 annual report

    International Nuclear Information System (INIS)

    1999-01-01

    During the first quarter of 1998, Big Bear completed a purchase of additional light-oil assets in the Rainbow Lake area of Alberta, financed with new equity and bank debt. The business plan was to immediately exploit these light oil assets, the result of which would be increased reserves, production and cash flow. Although drilling results in the first quarter on the Rainbow Lake properties were mixed, oil prices started to free fall and drilling costs were much higher than expected. As a result, the company completed a reduced program, which resulted in less incremental loss and cash flow than budgeted. On April 29, 1998, Big Bear entered into an agreement with Belco Oil and Gas Corp. and Moan Investments Ltd. for the issuance of convertible preferred shares at a gross value of $15,750,000, which shares were eventually converted at 70 cents per share to common equity. As a result of the continued plunge in oil prices, the lending value of the company's assets continued to fall, requiring it to take action in order to meet its financial commitments. Late in the third quarter Big Bear issued equity for proceeds of $11,032,000, which further reduced the company's debt. Although the company was extremely active in identifying and pursuing acquisition opportunities, it became evident that Belco Oil and Gas Corp. and Big Bear did not share common criteria for acquisitions, which resulted in the restructuring of their relationship in the fourth quarter. With the future of oil prices in question, Big Bear decided to change its focus to natural gas and refocus its efforts on acquiring natural gas assets to fuel its growth. The purchase of Blue Range put Big Bear in a difficult position in terms of the latter's growth. In summary, what started as a difficult year ended in disappointment.

  15. LAKE BAIKAL: Underwater neutrino detector

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    A new underwater detector soon to be deployed in Lake Baikal in Siberia, the world's deepest lake with depths down to 1.7 kilometres, could help probe the deepest mysteries of physics. One of the big unsolved problems of astrophysics is the origin of very energetic cosmic rays. However there are many ideas on how particles could be accelerated by exotic concentrations of matter and provide the majority of the Galaxy's high energy particles. Clarification would come from new detectors picking up the energetic photons and neutrinos from these sources

  16. 78 FR 33433 - Bear Lake National Wildlife Refuge, Bear Lake County, ID, and Oxford Slough Waterfowl Production...

    Science.gov (United States)

    2013-06-04

    ... Lake NWR, with enhancements to improve access. Hunting of waterfowl, small game, upland game birds, big..., including opportunities for hunting, fishing, wildlife observation and photography, and environmental... feasibility of, and make recommendations on, techniques to exclude carp and non-native game fish within the...

  17. Creating value in health care through big data: opportunities and policy implications.

    Science.gov (United States)

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, make big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current policies to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised. Project HOPE—The People-to-People Health Foundation, Inc.

  18. Forest blowdown and lake acidification

    International Nuclear Information System (INIS)

    Dobson, J.E.; Rush, R.M.; Peplies, R.W.

    1990-01-01

    The authors examine the role of forest blowdown in lake acidification. The approach combines geographic information systems (GIS) and digital remote sensing with traditional field methods. The methods of analysis consist of direct observation, interpretation of satellite imagery and aerial photographs, and statistical comparison of two geographical distributions: one representing forest blowdown and the other representing lake chemistry. Spatial and temporal associations between surface-water pH and landscape disturbance are strong and consistent in the Adirondack Mountains of New York. In 43 Adirondack Mountain watersheds, lake pH is associated with the percentage of the watershed area blown down and with hydrogen ion deposition (Spearman rank correlation coefficients of -0.67 and -0.73, respectively). Evidence of a temporal association is found at Big Moose Lake and Jerseyfield Lake in New York and the Lygners Vider Plateau of Sweden. They conclude that forest blowdown facilitates the acidification of some lakes by altering hydrologic pathways so that waters (previously acidified by acid deposition and/or other sources) do not experience the neutralization normally available through contact with subsurface soils and bedrock. Increased pipeflow is suggested as a mechanism that may link the biogeochemical impacts of forest blowdown to lake chemistry.
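    The Spearman rank correlation reported here is simply Pearson's correlation applied to ranks, which makes it robust to nonlinear but monotone relationships. A self-contained sketch with invented watershed data (not the study's 43 Adirondack watersheds):

```python
def ranks(values):
    """Rank values starting at 1, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical: lake pH falls monotonically as blowdown percentage rises.
blowdown_pct = [2, 5, 10, 20, 35, 50]
lake_ph = [6.8, 6.5, 6.1, 5.6, 5.2, 4.9]
print(round(spearman(blowdown_pct, lake_ph), 2))
```

    A coefficient near -1, as in this toy example, is stronger than the study's reported -0.67, but the sign and interpretation (more blowdown, lower pH) are the same.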

  19. Mixed stock analysis of Lake Michigan's Lake Whitefish Coregonus clupeaformis commercial fishery

    Science.gov (United States)

    Andvik, Ryan; Sloss, Brian L.; VanDeHey, Justin A.; Claramunt, Randall M.; Hansen, Scott P.; Isermann, Daniel A.

    2016-01-01

    Lake whitefish (Coregonus clupeaformis) support the primary commercial fishery in Lake Michigan. Discrete genetic stocks of lake whitefish have been identified, and tagging data suggest stocks are mixed throughout much of the year. Our objectives were to determine whether (1) differential stock harvest occurs in the commercial catch, (2) spatial differences in genetic composition of harvested fish were present, and (3) seasonal differences were present in the harvest by commercial fisheries that operate in management zones WI-2 and WFM-01 (Green Bay, Lake Michigan). Mixed stock analysis was conducted on 17 commercial harvest samples (n = 78–145/sample) collected from various ports lake-wide during 2009–2010. Results showed significant mixing, with variability in stock composition across most samples. Samples consisted of two to four genetic stocks, each accounting for ≥ 10% of the catch. In 10 of 17 samples, differences existed in the proportional stock contribution at a single capture location. Samples from Wisconsin's primary commercial fishing management zone (WI-2) were composed predominately of fish from the Big Bay de Noc (Michigan) stock as opposed to the geographically proximate North–Moonlight Bay (Wisconsin) stock. These findings have implications for management and allocation of fish to various quotas. Specifically, geographic location of harvest, the current means of allocating harvest quotas, is not the best predictor of genetic stock harvest.
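    Mixed stock analysis estimates what fraction of a harvest sample came from each baseline stock. Given per-fish genotype likelihoods under each stock's allele frequencies, the mixture proportions can be estimated by a simple EM iteration; the likelihood values below are fabricated for two stocks, purely to show the mechanics:

```python
def em_mixture(likelihoods, n_iter=200):
    """Estimate stock proportions via expectation-maximization.
    likelihoods[i][s] = P(genotype of fish i | stock s), assumed known."""
    n_stocks = len(likelihoods[0])
    p = [1.0 / n_stocks] * n_stocks              # start from equal proportions
    for _ in range(n_iter):
        totals = [0.0] * n_stocks
        for lik in likelihoods:
            denom = sum(p[s] * lik[s] for s in range(n_stocks))
            for s in range(n_stocks):
                totals[s] += p[s] * lik[s] / denom   # E-step: posterior membership
        p = [t / len(likelihoods) for t in totals]   # M-step: updated proportions
    return p

# Fabricated sample: 7 fish fit stock 0 far better, 3 fit stock 1 better.
liks = [[0.9, 0.1]] * 7 + [[0.2, 0.8]] * 3
props = em_mixture(liks)
print([round(x, 2) for x in props])
```

    Note that the maximum-likelihood proportions need not equal the raw best-fit counts (7:3 here), because each fish contributes fractionally according to how decisive its genotype is.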

  20. Hellsgate Big Game Winter Range Wildlife Mitigation Project : Annual Report 2008.

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Richard P.; Berger, Matthew T.; Rushing, Samuel; Peone, Cory

    2009-01-01

    The Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) was proposed by the Confederated Tribes of the Colville Reservation (CTCR) as partial mitigation for hydropower's share of the wildlife losses resulting from Chief Joseph and Grand Coulee Dams. At present, the Hellsgate Project protects and manages 57,418 acres (approximately 90 square miles) for the biological requirements of managed wildlife species; most are located on or near the Columbia River (Lake Rufus Woods and Lake Roosevelt) and surrounded by Tribal land. To date we have acquired about 34,597 habitat units (HUs) toward a total of 35,819 HUs lost to the original inundation from hydropower development. In addition to the remaining 1,237 HUs left unmitigated, 600 HUs from the Washington Department of Fish and Wildlife that were traded to the Colville Tribes and 10 secure nesting islands are also yet to be mitigated. This annual report describes the management activities of the Hellsgate Project during 2008.

  1. Fishing Business Arrangements and Sustainability in Lake Victoria ...

    African Journals Online (AJOL)

    In this article an attempt is made to analyse the existing production relations between the owners of the vessels and the crewmembers and the concern for sustainability. Our results found that the existing sharing system in Lake Victoria poses a big challenge in as far as sustainability is concerned. Some of the system such ...

  2. Simulation and assessment of groundwater flow and groundwater and surface-water exchanges in lakes of the northeast Twin Cities Metropolitan Area, Minnesota, 2003 through 2013: Chapter B of Water levels and groundwater and surface-water exchanges in lakes of the northeast Twin Cities Metropolitan Area, Minnesota, 2002 through 2015

    Science.gov (United States)

    Jones, Perry M.; Roth, Jason L.; Trost, Jared J.; Christenson, Catherine A.; Diekoff, Aliesha L.; Erickson, Melinda L.

    2017-09-05

    Water levels during 2003 through 2013 were less than mean water levels for the period 1925–2013 for several lakes in the northeast Twin Cities Metropolitan Area in Minnesota. Previous periods of low lake-water levels generally were correlated with periods with less than mean precipitation. Increases in groundwater withdrawals and land-use changes have brought into question whether or not recent (2003–13) lake-water-level declines are solely caused by decreases in precipitation. A thorough understanding of groundwater and surface-water exchanges was needed to assess the effect of water-management decisions on lake-water levels. To address this need, the U.S. Geological Survey, in cooperation with the Metropolitan Council and the Minnesota Department of Health, developed and calibrated a three-dimensional, steady-state groundwater-flow model representing 2003–13 mean hydrologic conditions to assess groundwater and lake-water exchanges, and the effects of groundwater withdrawals and precipitation on water levels of 96 lakes in the northeast Twin Cities Metropolitan Area. Lake-water budgets for the calibrated groundwater-flow model indicated that groundwater is flowing into lakes in the northeast Twin Cities Metropolitan Area and lakes are providing water to underlying aquifers. Lake-water outflow to the simulated groundwater system was a major outflow component for Big Marine Lake, Lake Elmo, Snail Lake, and White Bear Lake, accounting for 45 to 64 percent of the total outflows from the lakes. Evaporation and transpiration from the lake surface ranged from 19 to 52 percent of the total outflow from the four lakes. Groundwater withdrawals and precipitation were varied from the 2003‒13 mean values used in the calibrated model (30-percent changes in groundwater withdrawals and 5-percent changes in precipitation) for hypothetical scenarios to assess the effects of groundwater withdrawals and precipitation on water budgets and levels in Big Marine Lake, Snail Lake

  3. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  4. What caused the decline of China's largest freshwater lake? Attribution analysis on Poyang Lake water level variations in recent years

    Science.gov (United States)

    Ye, Xuchun; Xu, Chong-Yu; Zhang, Qi

    2017-04-01

    In recent years, the dramatic decline in the water level of Poyang Lake, China's largest freshwater lake, has raised wide concern about water security and the wetland ecosystem. This remarkable hydrological change coincided with several factors: the initial operation of the Three Gorges Dam (TGD) in 2003 and the big change in lake bottom topography due to extensive sand mining in the lake since 2000; climate change and other human activities in the Yangtze River basin may add to this complexity. Questions have been raised as to what extent the lake's hydrological changes are caused by climate change and/or human activities. In this study, a quantitative assessment was conducted to clarify the magnitude and mechanism of specific influencing factors on the recent lake decline (2003-2014), with reference to the period 1980-1999. This was achieved through the reconstruction of lake water level scenarios in a neural network framework. The major result indicates that the change in lake bottom topography due to sand mining activities has become the dominant factor in the recent lake decline, especially in the winter low-water season. The effect of TGD regulation, however, shows strong seasonal features; it accounts for 33%-42% of the average water level decline across the lake during the impoundment period of September-October. In addition, the effect of climate change and other human activities over the Yangtze River basin needs to be highly addressed; it is particularly prominent in reducing lake water level during the summer flood season and autumn recession period. The results also revealed that, due to different mechanisms, the responses of the lake water level to the three influencing factors are not consistent and show great spatial and temporal differences.
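    The attribution framework here reconstructs what the lake level would have been without a given factor and differences that against observation. The study trained neural networks on pre-2000 relations; as a hedged stand-in, a one-variable linear fit on a reference period illustrates the counterfactual logic with invented numbers:

```python
# Reference-period (pre-change) driver and response, both invented.
ref_inflow = [100.0, 120.0, 90.0, 110.0, 130.0, 95.0]   # inflow index
ref_level = [12.0, 13.0, 11.5, 12.5, 13.5, 11.8]        # lake level (m)

# Fit level = intercept + slope * inflow on the reference period.
n = len(ref_inflow)
mx = sum(ref_inflow) / n
my = sum(ref_level) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ref_inflow, ref_level))
         / sum((x - mx) ** 2 for x in ref_inflow))
intercept = my - slope * mx

# Post-change observation: same inflow, but the level is lower than the
# reference relation predicts; the gap is attributed to the new factors
# (e.g. altered bottom topography, dam regulation), not to the driver.
inflow_obs, level_obs = 110.0, 11.4
expected = intercept + slope * inflow_obs     # counterfactual level
attributed = level_obs - expected             # negative = unexplained decline
print(f"expected {expected:.2f} m, observed {level_obs:.2f} m, "
      f"attributed change {attributed:.2f} m")
```

    The study applies the same differencing idea factor by factor, which is how it isolates, for example, the 33%-42% share of the September-October decline assigned to TGD impoundment.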

  5. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  6. 78 FR 20544 - Proposed Establishment of the Big Valley District-Lake County and Kelsey Bench-Lake County...

    Science.gov (United States)

    2013-04-05

    ... Lake warms more slowly than the adjacent land during the day and also holds its heat longer at night... formations are comprised of chert, greywacke, shale, metasedimentary rocks, and metavolcanic rocks thrown... included information on the wind, growing degree days, frost-free days, and precipitation within the...

  7. Technologies for lake restoration

    Directory of Open Access Journals (Sweden)

    Helmut KLAPPER

    2003-09-01

Lakes suffer from different stress factors and need to be restored using different approaches. Eutrophication remains the main water-quality management problem for inland waters, both lakes and reservoirs. The way to curb the degradation is to stop the nutrient sources and to accelerate restoration with the help of in-lake technologies. Lakes with a long retention time in particular need (eco-)technological help to decrease the nutrient content in the free water. Microbial and other organic matter from sewage and other autochthonous biomass causes oxygen depletion, which has many adverse effects. In less developed countries, big reservoirs function as sewage treatment plants. Natural aeration solves problems only partly, and many pollutants tend to accumulate in the sediments. Acidification by acid rain and by pyrite oxidation has to be controlled by acid-neutralizing technologies. Addition of alkaline chemicals is useful only for soft waters, and technologies for (microbial) alkalinization of very acidic hard-water mining lakes are in development. The corrective measures differ from those in use for eutrophication control. Salinization and water shortage mostly occur when more water is used than is available. Lake Aral, Lake Chad, the Dead Sea, and Lake Nasser are among the waters with the most severe environmental problems on a global scale. Their hydrologic regime needs to be evaluated. The inflow of salt water at the bottom of some mining lakes adds to the stability of stratification, and thus to the accumulation of hydrogen sulphide in the monimolimnion of meromictic lakes. Destratification, the most widely used technology, is only applicable to a limited extent because of the dangerous concentrations of the byproducts of biological degradation. The contamination of lakes with hazardous substances from industry and agriculture requires different restoration technologies, including subhydric isolation and storage, addition of nutrients for better self

  8. Diet Overlap and Predation between Smallmouth Bass and Walleye in a North Temperate Lake

    Science.gov (United States)

    Aaron P. Frey; Michael A. Bozek; Clayton J. Edwards; Steve P. Newman

    2003-01-01

Walleye (Stizostedion vitreum vitreum) and smallmouth bass (Micropterus dolomieu) diets from Big Crooked Lake, Wisconsin, were examined to assess the degree of diet overlap and predation occurring between these species, in an attempt to determine whether walleye influence smallmouth bass recruitment, which is consistently low...

  9. 78 FR 60686 - Establishment of the Big Valley District-Lake County and Kelsey Bench-Lake County Viticultural...

    Science.gov (United States)

    2013-10-02

    ... viticultural areas. Definition Section 4.25(e)(1)(i) of the TTB regulations (27 CFR 4.25(e)(1)(i)) defines a... to the road's intersection with Manning Creek, northern boundary of section 6, T13N/R9W; then (23) Proceed northwesterly (downstream) along Manning Creek to the shore of Clear Lake, section 30, T14N/R9W...

  10. Water quality and trend analysis of Colorado--Big Thompson system reservoirs and related conveyances, 1969 through 2000

    Science.gov (United States)

    Stevens, Michael R.

    2003-01-01

The U.S. Geological Survey, in an ongoing cooperative monitoring program with the Northern Colorado Water Conservancy District, Bureau of Reclamation, and City of Fort Collins, has collected water-quality data in north-central Colorado since 1969 in reservoirs and conveyances, such as canals and tunnels, related to the Colorado-Big Thompson Project, a water-storage, collection, and distribution system. Ongoing changes in water use among agricultural and municipal users on the eastern slope of the Rocky Mountains in Colorado, changing land use in reservoir watersheds, and other water-quality issues among Northern Colorado Water Conservancy District customers necessitated a reexamination of water-quality trends in the Colorado-Big Thompson system reservoirs and related conveyances. The sampling sites are on reservoirs, canals, and tunnels in the headwaters of the Colorado River (on the western side of the transcontinental diversion operations) and the headwaters of the Big Thompson River (on the eastern side of the transcontinental diversion operations). Carter Lake Reservoir and Horsetooth Reservoir are off-channel water-storage facilities, located in the foothills of the northern Colorado Front Range, for water supplied from the Colorado-Big Thompson Project. The length of water-quality record ranges from approximately 3 to 30 years depending on the site and the type of measurement or constituent. Changes in sampling frequency, analytical methods, and minimum reporting limits have occurred repeatedly over the period of record. The objective of this report was to complete a retrospective water-quality and trend analysis of reservoir profiles, nutrients, major ions, selected trace elements, chlorophyll-a, and hypolimnetic oxygen data from 1969 through 2000 in Lake Granby, Shadow Mountain Lake, and the Granby Pump Canal in Grand County, Colorado, and Horsetooth Reservoir, Carter Lake, Lake Estes, Alva B. Adams Tunnel, and Olympus Tunnel in Larimer County, Colorado.

  11. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  12. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system and improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general, and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than it solves; in short, when it comes to the use of data in healthcare, "size isn't everything."

  13. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys, using a Schmidt telescope with an objective prism, produced a list of about 3000 UV-excess Markarian galaxies, and these objects have stimulated an enormous amount of further study, appearing in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys have been mentioned in over 1100 publications since 2002. Both ground- and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage, and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management, and big data places additional challenging requirements on it. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science: the current model of data centres is obsolete. In the era of big data, the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation, our successes and failures, and how we are planning in the next decade to create a workable and adaptable solution to support big data science.

  14. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's, so often mentioned in relation to big data, stand for? By way of introduction to

  15. Vertical distributions of PAHs in the sediments of four lakes in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Fukushima, Takehiko; Watanabe, Seigo; Kamiya, Koichi [Tsukuba Univ., Ibaraki (Japan). Graduate School of Life and Environment; Ozaki, Noriatsu [Hiroshima Univ., Higashihiroshima (Japan). Graduate School of Engineering

    2012-12-15

Purpose: The purpose of this study was to elucidate historical trends, spatial variations, and the sources of polycyclic aromatic hydrocarbon (PAH) pollution in several Japanese lakes. Materials and methods: The vertical distributions of PAHs in core samples of sediments taken at several points in lakes Kasumigaura, Suwa, Kizaki, and Shinji were determined using a gas chromatograph equipped with a mass selective detector, combined with chronological information and the physical/elemental properties of the sediment. Results and discussion: Seventeen related compounds (congeners) typically had concentration peaks at sediment depths corresponding to the 1960s to 1970s. In Lake Shinji and one bay of Lake Kasumigaura, there was a tendency for PAH concentrations to increase downstream; in contrast, another bay of Lake Kasumigaura showed the reverse trend. During big flood events, the fluxes of PAHs increased due to large inputs of particulate matter, although PAH concentrations were reduced. For the four study lakes and other similar lakes, the PAH concentrations of surface sediments were approximately proportional to population densities in the respective watersheds, while the total input of PAHs to the lakes was correlated with their population and watershed area. Source apportionment analysis using isomer ratios for the congener profiles indicated that the principal sources of the PAHs in the lake sediments were gasoline and/or diesel engine exhausts and biomass burning. Conclusions: The observed concentration peaks indicate a deterioration of atmospheric chemical quality around 1960-1970 and a recent tendency toward amelioration. Between-lake differences suggest that human activity in the watersheds influences sediment PAH concentrations. The PAH sources were identified to be of pyrogenic origin. (orig.)

  16. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open-source products have been released for big data storage and processing. While big data users face the choice of which system best suits their needs, big data system developers face the question of how to evaluate their systems with regard to general big data processing needs. System b...

  17. Small values in big data: The continuing need for appropriate metadata

    Science.gov (United States)

    Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung

    2018-01-01

    Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.
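The biases this record describes can be illustrated with a small simulation: lognormal "concentrations" censored at a detection limit, with the mean of the log-values estimated by deletion, by DL/2 substitution, and by a censored maximum-likelihood fit. This is a hedged sketch under invented parameters, not the paper's dataset or models:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)

# True concentrations are lognormal: log-values ~ Normal(mu=1, sigma=1).
mu, sigma, dl = 1.0, 1.0, 3.0          # dl: analytical detection limit
x = rng.lognormal(mu, sigma, 2000)
observed = np.log(x[x >= dl])           # quantified records (log scale)
n_cens = int(np.sum(x < dl))            # records reported only as "<DL"
c = np.log(dl)

# Two common shortcuts the abstract warns about:
mean_del = observed.mean()                                   # drop "<DL" rows
mean_sub = np.append(observed, np.full(n_cens, np.log(dl / 2))).mean()

# Censored maximum likelihood: quantified values contribute the normal
# density; censored ones contribute the probability mass below log(DL).
def nll(params):
    m, s = params
    if s <= 0:
        return np.inf
    return -(stats.norm.logpdf(observed, m, s).sum()
             + n_cens * stats.norm.logcdf(c, m, s))

m_hat, s_hat = optimize.minimize(nll, x0=[mean_del, observed.std()],
                                 method="Nelder-Mead").x
print(f"deletion={mean_del:.2f}  substitution={mean_sub:.2f}  mle={m_hat:.2f}")
```

Deletion inflates the estimate badly, substitution shifts it by an amount that depends on the (often undocumented) detection limit, and the likelihood-based fit stays close to the true mu of 1.0 — which is why the metadata describing how each source treated censored values matters so much when sources are pooled.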

  18. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  19. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large-data-set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  20. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  1. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  2. American Fisheries Society 136th Annual Meeting Lake Placid, NY 10-14 September, 2006

    Science.gov (United States)

    Einhouse, D.; Walsh, M.G.; Keeler, S.; Long, J.M.

    2005-01-01

The New York Chapter of the American Fisheries Society and the New York State Department of Environmental Conservation invite you to experience the beauty of New York's famous Adirondack Park as the American Fisheries Society (AFS) convenes its 136th Annual Meeting in the legendary Olympic Village of Lake Placid, NY, 10-14 September 2006. Our meeting theme, "Fish in the Balance," will explore the interrelation between fish, aquatic habitats, and man, highlighting the challenges facing aquatic resource professionals and the methods that have been employed to resolve conflicts between those that use or have an interest in our aquatic resources. As fragile as it is beautiful, the Adirondack Region is the perfect location to explore this theme. Bordered by Mirror Lake and its namesake, Lake Placid, the Village of Lake Placid has small-town charm but all of the conveniences that a big city would provide. Whether it's reliving the magic of the 1980 hockey team's "Miracle on Ice" at the Lake Placid Olympic Center, getting a panoramic view of the Adirondack high peaks from the top of the 90-meter ski jumps, fishing or kayaking in adjacent Mirror Lake, hiking a mountain trail, or enjoying a quiet dinner or shopping excursion in the various shops and restaurants that line Main Street, Lake Placid has something for everyone.

  3. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  4. Big Impacts and Transient Oceans on Titan

    Science.gov (United States)

    Zahnle, K. J.; Korycansky, D. G.; Nixon, C. A.

    2014-01-01

We have studied the thermal consequences of very big impacts on Titan [1]. Titan's thick atmosphere and volatile-rich surface cause it to respond to big impacts in a somewhat Earth-like manner. Here we construct a simple globally-averaged model that tracks the flow of energy through the environment in the weeks, years, and millennia after a big comet strikes Titan. The model Titan is endowed with 1.4 bars of N2 and 0.07 bars of CH4, methane lakes, a water ice crust, and enough methane underground to saturate the regolith to the surface. We assume that half of the impact energy is immediately available to the atmosphere and surface, while the other half is buried at the site of the crater and is unavailable on time scales of interest. The atmosphere and surface are treated as isothermal. We make the simplifying assumptions that the crust is everywhere as methane-saturated as it was at the Huygens landing site, that the concentration of methane in the regolith is the same as it is at the surface, and that the crust is made of water ice. Heat flow into and out of the crust is approximated by step functions. If the impact is great enough, ice melts. The meltwater oceans cool to the atmosphere conductively through an ice lid while melting their way into the interior at the base, driven down in part through Rayleigh-Taylor instabilities between the dense water and the warm ice. Topography, CO2, and hydrocarbons other than methane are ignored. Methane and ethane clathrate hydrates are discussed quantitatively but not fully incorporated into the model.

  5. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formulation before Big Data, it was most recently referred to as the "information explosion." In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  6. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  7. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  8. LIMNOLOGY, LAKE BASINS, LAKE WATERS

    Directory of Open Access Journals (Sweden)

    Petre GÂŞTESCU

    2009-06-01

Limnology is a border discipline between geography, hydrology, and biology, and is also closely connected with other sciences, from which it borrows research methods. Physical limnology (the geography of lakes) studies lake biotopes, and biological limnology (the biology of lakes) studies lake biocoenoses. The father of limnology is the Swiss scientist F.A. Forel, the author of the three-volume monograph Le Leman: monographie limnologique (1892-1904), which focuses on the geology, physics, chemistry, and biology of lakes. He was also the author of the first textbook of limnology, Handbuch der Seenkunde: allgemeine Limnologie (1901). Since both the lake biotope and its biohydrocoenosis make up a single whole, the lake and lakes, respectively, represent the most typical systems in nature. They could be called limnosystems (lacustrine ecosystems), each a microcosm in itself, as the American biologist St.A. Forbes put it (1887).

  9. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  10. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  11. 76 FR 78812 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-12-20

    ... amendments may have been issued previously by the FAA in a Flight Data Center (FDC) Notice to Airmen (NOTAM... close and immediate relationship between these SIAPs, Takeoff Minimums and ODPs, and safety in air..., Amdt 13A, CANCELLED Big Lake, AK, Big Lake, RNAV (GPS) RWY 7, Amdt 1 Big Lake, AK, Big Lake, RNAV (GPS...

  12. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

"Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  13. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  14. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  15. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and exhaustively analyze the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally, it sought to identify the most relevant characteristics of Big Data management, so as to cover everything concerning the central topic of the research. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; surveying Big Data technologies; presenting some of the NoSQL databases, which are the ones that make it possible to process data in unstructured formats; and showing the data models and the technologies for analyzing that data, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research is a first step into the field of Big Data.

  16. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment, since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system that tackles efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, without needing to be aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
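    The costly pairwise computations mentioned above can be illustrated with a toy functional-dependency check. This is only a sketch of the kind of rule such systems distribute, not BigDansing's actual API; the rule (zipcode determines city) and the data are hypothetical:

```python
from itertools import combinations

def fd_violations(rows, lhs, rhs):
    """Return tuple pairs that agree on `lhs` but disagree on `rhs`,
    i.e. violations of the functional dependency lhs -> rhs."""
    return [
        (a, b)
        for a, b in combinations(rows, 2)
        if a[lhs] == b[lhs] and a[rhs] != b[rhs]
    ]

rows = [
    {"zipcode": "10001", "city": "New York"},
    {"zipcode": "10001", "city": "NYC"},      # violates zipcode -> city
    {"zipcode": "60601", "city": "Chicago"},
]

print(len(fd_violations(rows, "zipcode", "city")))  # 1 violating pair
```

    A naive implementation like this is quadratic in the number of tuples, which is exactly why distributing and optimizing such rule evaluations matters at scale.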

  17. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, which is related to storage, analytics and visualization of big data; the human aspects of big data; and the process management dimension, which addresses the aspects of big data management from both a technological and a business perspective.

  18. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend? (2 pages).

  19. Changes in concentrations of nickel and copper in the surface layers of sediments of Lake Imandra over the last half century

    Directory of Open Access Journals (Sweden)

    Dauvalter V.A.

    2015-06-01

    Full Text Available The content of Ni and Cu, the priority heavy-metal pollutants for the region, has been analyzed in the surface layers of sediments of Lake Imandra, the largest reservoir of the Murmansk region. A steady increase in Ni and Cu content across the whole water area of the lake over the study period has been established, owing both to the direct intake of sewage from the enterprises of the mining and metallurgical complex (Bolshaya and Yokostrovskaya Imandra) and to aerial pollution of the lake watershed and wind-induced currents (Babinskaya Imandra). The largest increase in content is noted in the Monche Bay of Big Imandra, up to 3 and 0.6 % for Ni and Cu respectively in recent years.

  20. Microplastic pollution in lakes and lake shoreline sediments - A case study on Lake Bolsena and Lake Chiusi (central Italy).

    Science.gov (United States)

    Fischer, Elke Kerstin; Paglialonga, Lisa; Czech, Elisa; Tamminga, Matthias

    2016-06-01

    Rivers and effluents have been identified as major pathways for microplastics of terrestrial sources. Moreover, lakes of different dimensions, even in remote locations, contain microplastics in striking abundances. This study investigates concentrations of microplastic particles at two lakes in central Italy (Lake Bolsena, Lake Chiusi). A total of six Manta trawls were carried out, two of them one day after heavy winds occurred on Lake Bolsena, showing effects on the particle distribution of fragments and fibers of varying size categories. Additionally, 36 sediment samples from lakeshores were analyzed for microplastic content. In the surface waters, 2.68 to 3.36 particles/m(3) (Lake Chiusi) and 0.82 to 4.42 particles/m(3) (Lake Bolsena) were detected, respectively. The main differences between the lakes are attributed to lake characteristics such as surface and catchment area, depth, and the presence of local wind patterns and tide range at Lake Bolsena. An event of heavy winds and moderate rainfall prior to one sampling led to an increase of concentrations at Lake Bolsena, which is most probably related to lateral land-based and sewage effluent inputs. The abundances of microplastic particles in sediments vary from mean values of 112 (Lake Bolsena) to 234 particles/kg dry weight (Lake Chiusi). The Lake Chiusi results reveal elevated fiber concentrations compared to those of Lake Bolsena, which might be a result of higher organic content and a shift in grain size distribution towards the silt and clay fraction at the shallow and highly eutrophic Lake Chiusi. The distribution of particles along different beach levels revealed no significant differences. Copyright © 2016 Elsevier Ltd. All rights reserved.
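    As background for the units reported above, a surface-water concentration from a trawl is simply the particle count divided by the volume of water filtered. A minimal sketch with made-up numbers (not the study's data):

```python
def particles_per_m3(count, net_width_m, net_depth_m, tow_length_m):
    """Convert a raw trawl particle count to particles per cubic meter.

    The filtered volume is approximated as the submerged net opening
    (width x depth) times the distance towed.
    """
    filtered_volume_m3 = net_width_m * net_depth_m * tow_length_m
    return count / filtered_volume_m3

# Hypothetical tow: 0.6 m x 0.2 m opening towed for 1 km, 400 particles counted.
print(round(particles_per_m3(400, 0.6, 0.2, 1000), 2))  # 3.33
```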

  1. [Composition and Environmental Effects of LFOM and HFOM in "Incense-Ash" Sediments of West Lake, Hangzhou, China].

    Science.gov (United States)

    Li, Jing; Zhu, Guang-wei; Zhu, Meng-yuan; Gong, Zhi-jun; Xu, Hai; Yang, Gui-jun

    2015-06-01

    To understand the organic matter pollution characteristics and their relationship with nitrogen, phosphorus and other nutrients in the sediments of high-organic-matter urban shallow lakes, the organic matter content, light fraction organic matter (LFOM), heavy fraction organic matter (HFOM), and nitrogen and phosphorus contents were investigated in eight different regions of West Lake, Hangzhou. The results showed that the organic matter content of West Lake sediment was 28-251 g x kg(-1), typical of high-organic-matter sediment. The organic matter content differed greatly among lake regions: sediments located at the input site of the water diversion engineering had significantly lower organic content than the other regions. The LFOM content of West Lake sediment ranged from 0.57 to 9.17 g x kg(-1), on average 2.83% of the total organic matter, and the HFOM content ranged from 5.35 to 347.41 g x kg(-1), more than 90% of the total organic matter. Compared to other shallow lakes in China, sediments of West Lake had a significantly higher HFOM/LFOM ratio, and the HFOM content was markedly elevated, reflecting West Lake's long history as an urban lake with a high organic matter pollution load and a high degree of sediment humification. Both the content and the LFOM/HFOM ratio in sediment were related to the nitrogen and phosphorus contents in sediment. This suggests that the composition of organic matter in West Lake sediments potentially controls the internal loading of N and P in the lake.

  2. Thinking like a duck: fall lake use and movement patterns of juvenile ring-necked ducks before migration.

    Science.gov (United States)

    Roy, Charlotte L; Fieberg, John; Scharenbroich, Christopher; Herwig, Christine M

    2014-01-01

    The post-fledging period is one of the least studied portions of the annual cycle in waterfowl. Yet, recruitment into the breeding population requires that young birds have sufficient resources to survive this period. We used radio-telemetry and generalized estimating equations to examine support for four hypotheses regarding the drivers of landscape scale habitat use and movements made by juvenile ring-necked ducks between the pre-fledging period and departure for migration. Our response variables included the probability of movement, distances moved, and use of different lake types: brood-rearing lakes, staging lakes, and lakes with low potential for disturbance. Birds increased their use of staging areas and lakes with low potential for disturbance (i.e., without houses or boat accesses, >100 m from roads, or big lakes with areas where birds could sit undisturbed) throughout the fall, but these changes began before the start of the hunting season and their trajectory was not changed by the onset of hunting. Males and females moved similar distances and had similar probabilities of movements each week. However, females were more likely than males to use brood-rearing lakes later in the fall. Our findings suggest juvenile ring-necked ducks require different lake types throughout the fall, and managing solely for breeding habitat will be insufficient for meeting needs during the post-fledging period. Maintaining areas with low potential for disturbance and areas suitable for staging will ensure that ring-necked ducks have access to habitat throughout the fall.

  3. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  4. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a registration culture, and IT-competent employees and customers that make a leading position possible, but only if companies get ready for the next big data wave.

  5. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an ever-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
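    The bias pitfall described above can be illustrated with a small simulation: a large but selectively recorded dataset yields a precise yet systematically wrong estimate, while a much smaller random sample does not. The population and the selection cutoff here are made up for illustration:

```python
import random
import statistics

random.seed(0)

# Synthetic population with true mean 0.
population = [random.gauss(0, 1) for _ in range(100_000)]

# A small but representative (random) sample: unbiased estimate.
small = random.sample(population, 100)

# A "big" but biased dataset: only values above a cutoff were recorded,
# mimicking data that over-covers part of the population.
big = [x for x in population if x > -0.5]

print(statistics.mean(small))  # near the true mean of 0
print(statistics.mean(big))    # around +0.5: precise but systematically off
```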

  6. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply did not exist. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  7. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  8. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  9. High-coercivity minerals from North African Humid Period soil material deposited in Lake Yoa (Chad)

    Science.gov (United States)

    Just, J.; Kroepelin, S.; Wennrich, V.; Viehberg, F. A.; Wagner, B.; Rethemeyer, J.; Karls, J.; Melles, M.

    2015-12-01

    The Holocene is a period of fundamental climatic change in North Africa. Humid conditions during the so-called African Humid Period (AHP) favored the formation of big lake systems. Only very few of these lakes persist to the present day. One of them is Lake Yoa (19°03'N/20°31'E) in the Ounianga Basin, Chad, which maintains its water level by groundwater inflow. Here we present the magnetic characteristics, together with proxies for lacustrine productivity and biota, of a sediment core (Co1240) from Lake Yoa, retrieved in 2010 within the framework of the Collaborative Research Centre 806 - Our Way to Europe (Deutsche Forschungsgemeinschaft). Magnetic properties of AHP sediments show strong indications of reductive diagenesis. A lake level up to ~ 80 m higher than today is documented by lacustrine deposits in the Ounianga Basin dating to the early phase of the AHP. The higher lake level and weaker seasonality restricted deep mixing of the lake. The development of anoxic conditions consequently led to the dissolution of iron oxides. An exception is an interval with a high concentration of high-coercivity magnetic minerals, deposited between 7800 and 8120 cal yr BP. This interval post-dates the 8.2 ka event, which was dry in Northern Africa and probably caused a reduced vegetation cover. We propose that the latter resulted in the destabilization of soils around Lake Yoa. After the re-establishment of humid conditions, these soil materials were eroded and deposited in the lake. Magnetic minerals appear well preserved in the varved Late Holocene sequence, indicating (sub-)oxic conditions in the lake. This is surprising, because the occurrence of varves is often interpreted as an indicator of anoxic conditions in the lake water. However, the salinity of the lake water rose strongly after the AHP. We therefore hypothesize that the preservation of varves and the absence of benthic organisms relate to the high salinity rather than to anoxic conditions.

  10. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  11. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schonberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions, based on identifying patterns in the data rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  12. Terrestrial CDOM in Lakes of Yamal Peninsula: Connection to Lake and Lake Catchment Properties

    Directory of Open Access Journals (Sweden)

    Yury Dvornikov

    2018-01-01

    Full Text Available In this study, we analyze interactions in lake and lake catchment systems of a continuous permafrost area. We assessed colored dissolved organic matter (CDOM) absorption at 440 nm (aCDOM(440)) and absorption slope (S300–500) in lakes using field sampling and optical remote sensing data for an area of 350 km2 in Central Yamal, Siberia. Applying a CDOM algorithm (ratio of green and red band reflectance) to two high spatial resolution multispectral GeoEye-1 and Worldview-2 satellite images, we were able to extrapolate the aCDOM(λ) data from 18 lakes sampled in the field to 356 lakes in the study area (model R2 = 0.79). Values of aCDOM(440) in the 356 lakes varied from 0.48 to 8.35 m−1 with a median of 1.43 m−1. This aCDOM(λ) dataset was used to relate lake CDOM to 17 lake and lake catchment parameters derived from optical and radar remote sensing data and from digital elevation model analysis, in order to establish the parameters controlling CDOM in lakes on the Yamal Peninsula. Regression tree and boosted regression tree analyses showed that the activity of cryogenic processes (thermocirques) on the lake shores and the lake water level were the two most important controls, explaining 48.4% and 28.4% of lake CDOM, respectively (R2 = 0.61). Activation of thermocirques led to a large input of terrestrial organic matter and sediments from catchments and thawed permafrost to lakes (n = 15, mean aCDOM(440) = 5.3 m−1). Large lakes on the floodplain with a connection to the Mordy-Yakha River received more CDOM (n = 7, mean aCDOM(440) = 3.8 m−1) compared to lakes located on higher terraces.
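    The band-ratio retrieval described above amounts to a simple calibration between field-measured aCDOM(440) and the green/red reflectance ratio, then applied scene-wide. The numbers below are illustrative placeholders, not the study's calibration data:

```python
# Hypothetical calibration data: green/red band ratios from imagery and
# field-measured aCDOM(440) (m^-1) for lakes sampled in situ. Higher CDOM
# absorbs more green light, so the ratio is assumed to fall as CDOM rises.
ratio_sampled = [0.8, 1.1, 1.5, 2.0, 2.6]
acdom_sampled = [6.1, 4.0, 2.5, 1.4, 0.7]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = linear_fit(ratio_sampled, acdom_sampled)

def predict_acdom(green, red):
    """Predict aCDOM(440) for a lake from its green/red reflectance ratio."""
    return slope * (green / red) + intercept
```

    Fitted on the sampled lakes, `predict_acdom` can then be evaluated on the band ratio of every other lake pixel in the scene, which is the extrapolation step the abstract describes.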

  13. Progress in study of Prespa Lake using nuclear and related techniques (IAEA Regional Project RER/8/008)

    International Nuclear Information System (INIS)

    Anovski, Todor

    2001-09-01

    One of the main objectives of the IAEA Regional Project RER/8/008, entitled Study of Prespa Lake Using Nuclear and Related Techniques, was to provide a scientific basis for sustainable environmental management of Lake Prespa (three lakes - Ohrid, Big Prespa and Small Prespa - lie on the borders between Albania, the Republic of Macedonia and Greece, separated by the Mali i Thate and Galichica mountains, which are mostly karstified), see Fig. 1. In this sense, investigations concerning the hydrogeology, water quality (physico-chemical, biological and radiological characteristics) and water balance determination by application of environmental isotope (i.e. H, D, T, O-18, etc.) distributions, artificial water tracers and other relevant analytical techniques, such as AAS, HPLC, total α- and β-activity, α- and γ-spectrometry, as well as ultrasonic measurements (defining the lake-bottom profile), were initiated through regional cooperation (scientists from Albania, Greece and the Republic of Macedonia participated in the implementation of the project) during one hydrological year, and valuable results were obtained, part of which are presented in this report. This cooperation was the only way to provide the data necessary for a better understanding, among other things, of the water quality of Lake Prespa and its hydrological relationship to Lake Ohrid, which together represent a unique regional hydro-system in the world. (Author)

  14. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go deeper into the theme, the definitions, and the various questions related to big data. In this first part, I will try to set out the essentials of Big Data theory and

  15. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  16. Sanctuaries for lake trout in the Great Lakes

    Science.gov (United States)

    Stanley, Jon G.; Eshenroder, Randy L.; Hartman, Wilbur L.

    1987-01-01

    Populations of lake trout, severely depleted in Lake Superior and virtually extirpated from the other Great Lakes because of sea lamprey predation and intense fishing, are now maintained by annual plantings of hatchery-reared fish in Lakes Michigan, Huron, and Ontario and parts of Lake Superior. The extensive coastal areas of the Great Lakes and proximity to large populations resulted in fishing pressure on planted lake trout heavy enough to push annual mortality associated with sport and commercial fisheries well above the critical level needed to reestablish self-sustaining stocks. The interagency, international program for rehabilitating lake trout includes controlling sea lamprey abundance, stocking hatchery-reared lake trout, managing the catch, and establishing sanctuaries where harvest is prohibited. Three lake trout sanctuaries have been established in Lake Michigan: the Fox Island Sanctuary of 121, 500 ha, in the Chippewa-Ottawa Treaty fishing zone in the northern region of the lake; the Milwaukee Reef Sanctuary of 160, 000 ha in midlake, in boundary waters of Michigan and Wisconsin; and Julian's Reef Sanctuary of 6, 500 ha, in Illinois waters. In northern Lake Huron, Drummond Island Sanctuary of 55, 000 ha is two thirds in Indian treaty-ceded waters in Michigan and one third in Ontario waters of Canada. A second sanctuary, Six Fathom Bank-Yankee Reef Sanctuary, in central Lake Huron contains 168, 000 ha. Sanctuary status for the Canadian areas remains to be approved by the Provincial government. In Lake Superior, sanctuaries protect the spawning grounds of Gull Island Shoal (70, 000 ha) and Devils Island Shoal (44, 000 ha) in Wisconsin's Apostle Island area. These seven sanctuaries, established by the several States and agreed upon by the States, Indian tribes, the U.S. Department of the Interior, and the Province of Ontario, contribute toward solving an interjurisdictional fishery problem.

  17. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  18. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  19. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things, and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  20. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second later), and the fate of the Universe are all discussed. (U.K.)

  1. Evolution of alkaline lakes - Lake Van case study

    Science.gov (United States)

    Tillman Meyer, Felix; Viehberg, Finn; Bahroun, Sonya; Wolf, Annabel; Immenhauser, Adrian; Kwiecien, Ola

    2017-04-01

    Lake Van in Eastern Anatolia (Turkey) is the largest terminal soda lake on Earth. The lake's sedimentary profile covers ca. 600 ka (Stockhecke et al. 2014). Based on lithological changes, the presence of freshwater microfossils and close-to-freshwater pH values in the pore water, members of ICDP PALEOVAN concluded that Lake Van might have started as an open lake. Here we show paleontological and geochemical evidence in favour of this idea and constrain the time when Lake Van likely transformed into a closed lake. Additionally, we provide the first conceptual model of how this closure may have happened. Our archives of choice are inorganic and biogenic carbonates, separated by wet sieving. We identified microfossil assemblages (fraction > 125 µm) and performed high-resolution oxygen isotope (delta18O) and elemental (Mg/Ca, Sr/Ca) analyses of the fraction plants growing in the photic zone as food supply. These two aspects point to an increasing salinity in a shallowing lake. The delta18O values of inorganic carbonates are relatively low during the initial phase of Lake Van and increase abruptly (ca. 7‰) after 530 ka BP. At approximately the same time, the combination of Sr/Ca and Mg/Ca data suggests the first occurrence of aragonite. Again, these findings suggest geochemical changes of the lake water concurrent with the transition documented by microfossils. Comparison between Lake Van and Lake Ohrid (Lacey et al. 2016) delta18O data precludes regional climate change (e.g. increased evaporation) as the main driver of the observed changes. With no evidence for increased volcanic or tectonic activity (e.g. tephra layers, deformation structures, slumping) in the Lake Van sedimentary profile around 530 ka, it seems unlikely that a pyroclastic flow blocked the outflow of the lake. Alternatively, a portion of the inflow may have been diverted, which might have caused a change in the hydrological balance and a lake level falling below its outlet. However, as no geomorphological data confirming this

  2. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  3. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  4. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  5. Bathymetry of Lake Erie and Lake Saint Clair

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Erie and Lake Saint Clair has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and...

  6. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unattainable. Big data is generally characterized by several factors, chiefly volume, velocity, and variety. These three factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  7. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  8. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Cryptography for Big Data Security: book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... (contact: arkady@ll.mit.edu).

  9. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  10. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  11. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. The growth of data creates a situation in which the classic systems for the collection, storage, processing, and visualization of data are losing the battle with the volume, speed, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. Our challenge is to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence, as well as intelligent agents.

  12. Lake whitefish diet, condition, and energy density in Lake Champlain and the lower four Great Lakes following dreissenid invasions

    Science.gov (United States)

    Herbst, Seth J.; Marsden, J. Ellen; Lantry, Brian F.

    2013-01-01

    Lake Whitefish Coregonus clupeaformis support some of the most valuable commercial freshwater fisheries in North America. Recent growth and condition decreases in Lake Whitefish populations in the Great Lakes have been attributed to the invasion of the dreissenid mussels, zebra mussels Dreissena polymorpha and quagga mussels D. bugensis, and the subsequent collapse of the amphipod, Diporeia, a once-abundant high energy prey source. Since 1993, Lake Champlain has also experienced the invasion and proliferation of zebra mussels, but in contrast to the Great Lakes, Diporeia were not historically abundant. We compared the diet, condition, and energy density of Lake Whitefish from Lake Champlain after the dreissenid mussel invasion to values for those of Lake Whitefish from Lakes Michigan, Huron, Erie, and Ontario. Lake Whitefish were collected using gill nets and bottom trawls, and their diets were quantified seasonally. Condition was estimated using Fulton's condition factor (K) and by determining energy density. In contrast to Lake Whitefish from some of the Great Lakes, those from Lake Champlain did not show a dietary shift towards dreissenid mussels, but instead fed primarily on fish eggs in spring, Mysis diluviana in summer, and gastropods and sphaeriids in fall and winter. Along with these dietary differences, the condition and energy density of Lake Whitefish from Lake Champlain were high compared with those of Lake Whitefish from Lakes Michigan, Huron, and Ontario after the dreissenid invasion, and were similar to Lake Whitefish from Lake Erie; fish from Lakes Michigan, Huron, and Ontario consumed dreissenids, whereas fish from Lake Erie did not. Our comparisons of Lake Whitefish populations in Lake Champlain to those in the Great Lakes indicate that diet and condition of Lake Champlain Lake Whitefish were not negatively affected by the dreissenid mussel invasion.

  13. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Big data refers to data that exceeds the storage capacity and processing power of conventional systems. The term is used for data sets so large or complex that traditional tools cannot handle them. Big data size is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; for example, the amount of data produced by people on social networking sites grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques, and frameworks. It encompasses the rapid growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, instead requiring massively parallel software running on tens, hundreds, or even thousands of servers. A big data environment is used to capture, organize, and process these various types of data. In this paper we describe applications, problems, and tools of big data and give an overview of big data.

  14. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  15. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  16. Glacial lake inventory and lake outburst potential in Uzbekistan.

    Science.gov (United States)

    Petrov, Maxim A; Sabitov, Timur Y; Tomashevskaya, Irina G; Glazirin, Gleb E; Chernomorets, Sergey S; Savernyuk, Elena A; Tutubalina, Olga V; Petrakov, Dmitriy A; Sokolov, Leonid S; Dokukin, Mikhail D; Mountrakis, Giorgos; Ruiz-Villanueva, Virginia; Stoffel, Markus

    2017-08-15

    Climate change has been shown to increase the number of mountain lakes across various mountain ranges in the world. In Central Asia, and in particular on the territory of Uzbekistan, a detailed assessment of glacier lakes and their evolution over time is, however, lacking. For this reason we created the first detailed inventory of mountain lakes of Uzbekistan based on recent (2002-2014) satellite observations using WorldView-2, SPOT5, and IKONOS imagery with a spatial resolution from 2 to 10 m. This record was complemented with data from field studies of the last 50 years. The previous data were mostly in the form of inventories of lakes, available in Soviet archives, and primarily included localized in-situ data. The inventory of mountain lakes presented here, by contrast, includes an overview of all lakes of the territory of Uzbekistan. Lakes were considered if they were located at altitudes above 1500 m and if they had an area exceeding 100 m². As in other mountain regions of the world, the ongoing increase of air temperatures has led to an increase in lake number and area. Moreover, the frequency and overall number of lake outburst events have been on the rise as well. Therefore, we also present the first outburst assessment with an updated version of well-known approaches considering local climate features and event histories. As a result, out of the 242 lakes identified on the territory of Uzbekistan, 15% are considered prone to outburst, 10% of these lakes have been assigned low outburst potential and the remainder of the lakes have an average level of outburst potential. We conclude that the distribution of lakes by elevation shows a significant influence on lake area and hazard potential. No significant differences, by contrast, exist between the distribution of lake area, outburst potential, and lake location with respect to glaciers by regions. Copyright © 2017 Elsevier B.V. All rights reserved.
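The selection criteria described in this record (altitude above 1500 m and surface area over 100 m²) amount to a simple filter over lake records. A minimal sketch in Python, with invented field names and data, not the authors' actual inventory format:

```python
# Hypothetical sketch of the inventory's selection criteria: keep lakes
# above 1500 m elevation whose surface area exceeds 100 m^2. The field
# names and example records below are invented for illustration.

def filter_inventory(lakes, min_elevation_m=1500, min_area_m2=100):
    """Return only lakes meeting the elevation and area thresholds."""
    return [
        lake for lake in lakes
        if lake["elevation_m"] > min_elevation_m and lake["area_m2"] > min_area_m2
    ]

lakes = [
    {"name": "A", "elevation_m": 2100, "area_m2": 5400, "outburst": "prone"},
    {"name": "B", "elevation_m": 1200, "area_m2": 9000, "outburst": "low"},
    {"name": "C", "elevation_m": 3000, "area_m2": 80,   "outburst": "average"},
    {"name": "D", "elevation_m": 2500, "area_m2": 300,  "outburst": "average"},
]

selected = filter_inventory(lakes)
print([lake["name"] for lake in selected])  # ['A', 'D']: B and C fail a threshold
```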

  17. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  18. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedback of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  19. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...

  20. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  1. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  2. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  3. Lake-level frequency analysis for Devils Lake, North Dakota

    Science.gov (United States)

    Wiche, Gregg J.; Vecchia, Aldo V.

    1996-01-01

    Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence, so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions, and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level. Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model
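This record describes fitting autoregressive models to annual lake volumes and propagating them into lake-level exceedance probabilities. As a rough illustration of that general idea (not the study's fitted model: the AR(1) form, the phi and sigma parameters, and the threshold are all invented), a Monte Carlo sketch in Python:

```python
import numpy as np

# Illustrative sketch: simulate annual lake-volume anomalies with a
# simple AR(1) process and estimate the probability that the peak
# anomaly exceeds a threshold within a horizon, mirroring how an
# autoregressive model can feed a lake-level frequency analysis.
# phi, sigma, and the threshold are invented, not the report's values.

rng = np.random.default_rng(42)
phi, sigma = 0.9, 1.0          # autoregression coefficient, noise std
n_years, n_sims = 20, 5000     # horizon and number of Monte Carlo traces

def simulate_peaks(v0=0.0):
    """Simulate n_sims AR(1) paths and track each path's running maximum."""
    v = np.full(n_sims, v0)
    peaks = v.copy()
    for _ in range(n_years):
        v = phi * v + rng.normal(0.0, sigma, size=n_sims)
        peaks = np.maximum(peaks, v)
    return peaks

peaks = simulate_peaks()
threshold = 3.0
prob = np.mean(peaks > threshold)
print(f"P(peak anomaly > {threshold} within {n_years} yr) ~ {prob:.2f}")
```

The same structure extends to the seasonal, multivariate case by replacing the scalar AR(1) step with a vector autoregression over seasonal precipitation, evaporation, and inflow.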

  4. Lake trout in northern Lake Huron spawn on submerged drumlins

    Science.gov (United States)

    Riley, Stephen C.; Binder, Thomas; Wattrus, Nigel J.; Faust, Matthew D.; Janssen, John; Menzies, John; Marsden, J. Ellen; Ebener, Mark P.; Bronte, Charles R.; He, Ji X.; Tucker, Taaja R.; Hansen, Michael J.; Thompson, Henry T.; Muir, Andrew M.; Krueger, Charles C.

    2014-01-01

    Recent observations of spawning lake trout Salvelinus namaycush near Drummond Island in northern Lake Huron indicate that lake trout use drumlins, landforms created in subglacial environments by the action of ice sheets, as a primary spawning habitat. From these observations, we generated a hypothesis that may in part explain locations chosen by lake trout for spawning. Most salmonines spawn in streams where they rely on streamflows to sort and clean sediments to create good spawning habitat. Flows sufficient to sort larger sediment sizes are generally lacking in lakes, but some glacial bedforms contain large pockets of sorted sediments that can provide the interstitial spaces necessary for lake trout egg incubation, particularly if these bedforms are situated such that lake currents can penetrate these sediments. We hypothesize that sediment inclusions from glacial scavenging and sediment sorting that occurred during the creation of bedforms such as drumlins, end moraines, and eskers create suitable conditions for lake trout egg incubation, particularly where these bedforms interact with lake currents to remove fine sediments. Further, these bedforms may provide high-quality lake trout spawning habitat at many locations in the Great Lakes and may be especially important along the southern edge of the range of the species. A better understanding of the role of glacially-derived bedforms in the creation of lake trout spawning habitat may help develop powerful predictors of lake trout spawning locations, provide insight into the evolution of unique spawning behaviors by lake trout, and aid in lake trout restoration in the Great Lakes.

  5. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  6. Lake sturgeon population characteristics in Rainy Lake, Minnesota and Ontario

    Science.gov (United States)

    Adams, W.E.; Kallemeyn, L.W.; Willis, D.W.

    2006-01-01

    Rainy Lake contains a native population of lake sturgeon Acipenser fulvescens that has been largely unstudied. The aims of this study were to document the population characteristics of lake sturgeon in Rainy Lake and to relate environmental factors to year-class strength for this population. Gill-netting efforts throughout the study resulted in the capture of 322 lake sturgeon, including 50 recaptures. Lake sturgeon in Rainy Lake were relatively plump and fast-growing compared with a 32-population summary. Population samples were dominated by lake sturgeon between 110 and 150 cm total length. Age-structure analysis of the samples indicated few younger (<10 years) lake sturgeon, but the smallest gill net mesh size used for sampling was 102 mm (bar measure) and would not retain small sturgeon. Few lake sturgeon older than age 50 years were captured, and the maximum age of sampled fish was 59 years. Few correlations existed between lake sturgeon year-class indices and both annual and monthly climate variables, except that mean June air temperature was positively correlated with year-class strength. Analysis of Rainy Lake water elevation and resulting lake sturgeon year-class strength indices across years yielded consistent but weak negative correlations between late April and early June, when spawning of lake sturgeon occurs. The baseline data collected in this study should allow Rainy Lake biologists to establish more specific research questions in the future.

  7. Zooplankton communities in a large prealpine lake, Lake Constance: comparison between the Upper and the Lower Lake

    Directory of Open Access Journals (Sweden)

    Gerhard MAIER

    2005-08-01

    Full Text Available The zooplankton communities of two basins of a large lake, Lake Constance, were compared during the years 2002 and 2003. The two basins differ in morphology and in physical and chemical conditions. The Upper Lake basin has a surface area of 470 km², a mean depth of 100 m and a maximum depth of 250 m; the Lower Lake basin has a surface area of 62 km², a mean depth of only 13 m and a maximum depth of 40 m. Nutrient and chlorophyll-a concentrations and mean temperatures are somewhat higher in the Lower than in the Upper Lake. Total abundance of rotifers (number per m² of lake surface) was higher and rotifer development started earlier in the year in the Lower than in the Upper Lake. Total abundance of crustaceans was higher in the Upper Lake in the year 2002; in the year 2003 no difference in abundance could be detected between the lake basins, although in summer crustacean abundance was higher in the Lower than in the Upper Lake. Crustacean communities differed significantly between lake basins while there was no apparent difference in rotifer communities. In the Lower Lake small crustaceans, like Bosmina spp., Ceriodaphnia pulchella and Thermocyclops oithonoides, prevailed. Abundance (number per m² of lake surface) of predatory cladocerans, large daphnids and large copepods was much lower in the Lower than in the Upper Lake, in particular during the summer months. Ordination with nonmetric multidimensional scaling (NMS) separated communities of both lakes along gradients that correlated with temperature and chlorophyll-a concentration. Clutches of copepods were larger in the Lower than in the Upper Lake. No difference could be detected in clutch size of large daphnids between lake basins. Our results show that zooplankton communities in different basins of Lake Constance can be very different. They further suggest that the lack of large crustaceans, in particular the lack of large predatory cladocerans, in the Lower Lake can have negative effects on growth and
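The NMS ordination mentioned in this record can be sketched with nonmetric multidimensional scaling applied to a community dissimilarity matrix. The sketch below uses invented zooplankton counts, a Bray-Curtis dissimilarity, and scikit-learn's MDS with metric=False; this is one possible implementation, not the authors' software:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical zooplankton counts: rows = samples, columns = taxa.
counts = np.array([
    [120,  30,  5,  0],
    [100,  40, 10,  2],
    [ 10, 200, 80, 60],
    [  5, 180, 90, 70],
])

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two count vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

# Pairwise dissimilarity matrix between all samples.
n = counts.shape[0]
diss = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
                 for i in range(n)])

# Nonmetric MDS (NMS) ordination into two dimensions; samples with
# similar communities land close together in the ordination space.
nms = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = nms.fit_transform(diss)
print(coords.shape)  # (4, 2)
```

Gradients such as temperature or chlorophyll-a concentration would then be correlated with the ordination axes after fitting.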

  8. Lake trout rehabilitation in Lake Erie: a case history

    Science.gov (United States)

    Cornelius, Floyd C.; Muth, Kenneth M.; Kenyon, Roger

    1995-01-01

    Native lake trout (Salvelinus namaycush) once thrived in the deep waters of eastern Lake Erie. The impact of nearly 70 years of unregulated exploitation and over 100 years of progressively severe cultural eutrophication resulted in the elimination of lake trout stocks by 1950. Early attempts to restore lake trout by stocking were unsuccessful in establishing a self-sustaining population. In the early 1980s, New York's Department of Environmental Conservation, Pennsylvania's Fish and Boat Commission, and the U.S. Fish and Wildlife Service entered into a cooperative program to rehabilitate lake trout in the eastern basin of Lake Erie. After 11 years of stocking selected strains of lake trout in U.S. waters, followed by effective sea lamprey control, lake trout appear to be successfully recolonizing their native habitat. Adult stocks have built up significantly and are expanding their range in the lake. Preliminary investigations suggest that lake trout reproductive habitat is still adequate for natural reproduction, but natural recruitment has not been documented. Future assessments will be directed toward evaluation of spawning success and tracking age-class cohorts as they move through the fishery.

  9. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  10. Microbiology of Lonar Lake and other soda lakes

    Science.gov (United States)

    Paul Antony, Chakkiath; Kumaresan, Deepak; Hunger, Sindy; Drake, Harold L; Murrell, J Colin; Shouche, Yogesh S

    2013-01-01

    Soda lakes are saline and alkaline ecosystems that are believed to have existed throughout the geological record of Earth. They are widely distributed across the globe, but are highly abundant in terrestrial biomes such as deserts and steppes and in geologically interesting regions such as the East African Rift valley. The unusual geochemistry of these lakes supports the growth of an impressive array of microorganisms that are of ecological and economic importance. Haloalkaliphilic Bacteria and Archaea belonging to all major trophic groups have been described from many soda lakes, including lakes with exceptionally high levels of heavy metals. Lonar Lake is a soda lake that is centered at an unusual meteorite impact structure in the Deccan basalts in India and its key physicochemical and microbiological characteristics are highlighted in this article. The occurrence of diverse functional groups of microbes, such as methanogens, methanotrophs, phototrophs, denitrifiers, sulfur oxidizers, sulfate reducers and syntrophs in soda lakes, suggests that these habitats harbor complex microbial food webs that (a) interconnect various biological cycles via redox coupling and (b) impact on the production and consumption of greenhouse gases. Soda lake microorganisms harbor several biotechnologically relevant enzymes and biomolecules (for example, cellulases, amylases, ectoine) and there is the need to augment bioprospecting efforts in soda lake environments with new integrated approaches. Importantly, some saline and alkaline lake ecosystems around the world need to be protected from anthropogenic pressures that threaten their long-term existence. PMID:23178675

  11. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  12. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data, and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  13. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  14. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  15. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  16. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  17. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  18. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  19. Holocene Lake-Level Fluctuations of Lake Aricota, Southern Peru

    Science.gov (United States)

    Placzek, Christa; Quade, Jay; Betancourt, Julio L.

    2001-09-01

    Lacustrine deposits exposed around Lake Aricota, Peru (17°22′S), a 7.5-km2 lake dammed by debris flows, provide a middle to late Holocene record of lake-level fluctuations. Chronological context for shoreline deposits was obtained from radiocarbon dating of vascular plant remains and other datable material with minimal 14C reservoir effects (<350 yr). Diatomites associated with highstands several meters above the modern lake level indicate wet episodes. Maximum Holocene lake level was attained before 6100 14C yr B.P. and ended ∼2700 14C yr B.P. Moderately high lake levels occurred at 1700 and 1300 14C yr B.P. The highstand at Lake Aricota during the middle Holocene is coeval with a major lowstand at Lake Titicaca (16°S), which is only 130 km to the northeast and shares a similar climatology. Comparisons with other marine and terrestrial records highlight emerging contradictions over the nature of mid-Holocene climate in the central Andes.

  20. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  1. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. Existing definitions of the term "big data" are analyzed. The article proposes and describes the elements of a generalized formal model of big data, and analyzes the peculiarities of applying the proposed model's components. The fundamental differences between Big Data technology and business analytics are described. Big Data is supported by the distributed file system Google File System ...

  2. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
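    The key idea above (index the file so a range query touches only the data it needs, rather than scanning the whole file) can be illustrated with a toy interval index. This is a hedged sketch in pure Python, not the actual BigWig format, which uses R trees and compression; the bin width and the sample intervals are made up for illustration.

```python
# Toy sketch of binned interval indexing (NOT the real BigWig on-disk
# format): intervals are grouped into fixed-width bins so a range query
# visits only the bins it overlaps instead of the whole dataset.

BIN_SIZE = 1000  # hypothetical bin width in base pairs

def build_index(intervals):
    """Map each (start, end, value) interval to every bin it overlaps."""
    index = {}
    for start, end, value in intervals:
        for b in range(start // BIN_SIZE, (end - 1) // BIN_SIZE + 1):
            index.setdefault(b, []).append((start, end, value))
    return index

def query(index, qstart, qend):
    """Return intervals overlapping [qstart, qend), visiting only candidate bins."""
    hits = set()
    for b in range(qstart // BIN_SIZE, (qend - 1) // BIN_SIZE + 1):
        for start, end, value in index.get(b, []):
            if start < qend and end > qstart:  # half-open overlap test
                hits.add((start, end, value))
    return sorted(hits)

if __name__ == "__main__":
    data = [(0, 500, 1.0), (900, 1200, 2.0), (5000, 5100, 3.0)]
    idx = build_index(data)
    print(query(idx, 950, 1050))  # -> [(900, 1200, 2.0)]
```

    In the real format the bins live on disk behind an R-tree index, so a remote browser fetches only the blocks a view overlaps, which is what makes the remote access described above fast.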

  3. The diversity of benthic mollusks of Lake Victoria and Lake Burigi ...

    African Journals Online (AJOL)

    Molluscan diversity, abundance and distribution in sediments of Lake Victoria and its satellite lake, Lake Burigi, were investigated. The survey was carried out in January and February 2002 for Lake Victoria and in March and April 2002 for Lake Burigi. Ten genera were recorded from four zones of Lake Victoria while only ...

  4. Lake Morphometry for NHD Lakes in Great Lakes Region 4 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  5. Late Quaternary palaeoenvironmental reconstruction from Lakes Ohrid and Prespa (Macedonia/Albania border) using stable isotopes

    Directory of Open Access Journals (Sweden)

    M. J. Leng

    2010-10-01

    … agreement with many other records in the Mediterranean, although the uppermost sediments in one core record low δ18Ocalcite, which we interpret as a result of human activity. Overall, the isotope data presented here confirm that these two big lakes have captured the large-scale, low-frequency palaeoclimate variation that is seen in Mediterranean lakes, although in detail there is much palaeoclimate information still to be gained, especially on small-scale, high-frequency differences between this region and the Mediterranean.

  6. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  7. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  8. Spatial and temporal genetic diversity of lake whitefish (Coregonus clupeaformis (Mitchill)) from Lake Huron and Lake Erie

    Science.gov (United States)

    Stott, Wendylee; Ebener, Mark P.; Mohr, Lloyd; Hartman, Travis; Johnson, Jim; Roseman, Edward F.

    2013-01-01

    Lake whitefish (Coregonus clupeaformis (Mitchill)) are important commercially, culturally, and ecologically in the Laurentian Great Lakes. Stocks of lake whitefish in the Great Lakes have recovered from low levels of abundance in the 1960s. Reductions in abundance, loss of habitat and environmental degradation can be accompanied by losses of genetic diversity and overall fitness that may persist even as populations recover demographically. Therefore, it is important to be able to identify stocks that have reduced levels of genetic diversity. In this study, we investigated patterns of genetic diversity at microsatellite DNA loci in lake whitefish collected between 1927 and 1929 (historical period) and between 1997 and 2005 (contemporary period) from Lake Huron and Lake Erie. Genetic analysis of lake whitefish from Lakes Huron and Erie shows that the amount of population structuring varies from lake to lake. Greater genetic divergences among collections from Lake Huron may be the result of sampling scale, migration patterns and demographic processes. Fluctuations in abundance of lake whitefish populations may have resulted in periods of increased genetic drift that have resulted in changes in allele frequencies over time, but periodic genetic drift was not severe enough to result in a significant loss of genetic diversity. Migration among stocks may have decreased levels of genetic differentiation while not completely obscuring stock boundaries. Recent changes in spatial boundaries to stocks, the number of stocks and life history characteristics of stocks further demonstrate the potential of coregonids for a swift and varied response to environmental change and emphasise the importance of incorporating both spatial and temporal considerations into management plans to ensure that diversity is preserved.

  9. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Hauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet Program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  10. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  11. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only … framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies.

  12. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    Executing Big Data workloads on High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to scheduling both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  13. Big Ship Data: Using vessel measurements to improve estimates of temperature and wind speed on the Great Lakes

    Science.gov (United States)

    Fries, Kevin; Kerkez, Branko

    2017-05-01

    The sheer size of many water systems challenges the ability of in situ sensor networks to resolve spatiotemporal variability of hydrologic processes. New sources of vastly distributed and mobile measurements are, however, emerging to potentially fill these observational gaps. This paper poses the question: How can nontraditional measurements, such as those made by volunteer ship captains, be used to improve hydrometeorological estimates across large surface water systems? We answer this question through the analysis of one of the largest such data sets: an unprecedented collection of one million unique measurements made by ships on the North American Great Lakes from 2006 to 2014. We introduce a flexible probabilistic framework, which can be used to integrate ship measurements, or other sets of irregular point measurements, into contiguous data sets. The performance of this framework is validated through the development of a new ship-based spatial data product of water temperature, air temperature, and wind speed across the Great Lakes. An analysis of the final data product suggests that the availability of measurements across the Great Lakes will continue to play a large role in the confidence with which these large surface water systems can be studied and modeled. We discuss how this general and flexible approach can be applied to similar data sets, and how it will be of use to those seeking to merge large collections of measurements with other sources of data, such as physical models or remotely sensed products.
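    As a rough illustration of the general problem the abstract describes (combining irregular point measurements into an estimate at an unobserved location), a distance-weighted average can be sketched in a few lines. This is a hedged stand-in, not the authors' probabilistic framework; the coordinates, bandwidth and temperatures below are hypothetical.

```python
import math

# Hedged illustration, NOT the paper's method: a Gaussian-kernel weighted
# mean of scattered ship reports, evaluated at an unobserved point.

def kernel_estimate(obs, x, y, bandwidth=50.0):
    """Weighted mean of (xi, yi, value) observations; weights decay
    smoothly with distance from the target point (x, y)."""
    wsum = 0.0
    vsum = 0.0
    for xi, yi, v in obs:
        d2 = (xi - x) ** 2 + (yi - y) ** 2
        w = math.exp(-d2 / (2.0 * bandwidth ** 2))
        wsum += w
        vsum += w * v
    return vsum / wsum

if __name__ == "__main__":
    # hypothetical water temperatures (deg C) reported by three ships,
    # coordinates in km; the far-away ship barely influences the estimate
    ships = [(0.0, 0.0, 10.0), (10.0, 0.0, 12.0), (200.0, 0.0, 20.0)]
    print(round(kernel_estimate(ships, 5.0, 0.0), 2))  # close to 11.0
```

    A framework like the one in the paper would additionally quantify uncertainty and validate against in situ stations, which a plain weighted mean does not do.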

  14. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects … shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact.

  15. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Today big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by extracting value from huge data sets as quickly and optimally as possible, but also by extracting value from uncertain and inaccurate data, in an innovative manner, using big data analytics. At this point, the main challenge for businesses that use big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the big data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  16. Energy density of lake whitefish Coregonus clupeaformis in Lakes Huron and Michigan

    Science.gov (United States)

    Pothoven, S.A.; Nalepa, T.F.; Madenjian, C.P.; Rediske, R.R.; Schneeberger, P.J.; He, J.X.

    2006-01-01

    We collected lake whitefish Coregonus clupeaformis off Alpena and Tawas City, Michigan, USA in Lake Huron and off Muskegon, Michigan USA in Lake Michigan during 2002–2004. We determined energy density and percent dry weight for lake whitefish from both lakes and lipid content for Lake Michigan fish. Energy density increased with increasing fish weight up to 800 g, and then remained relatively constant with further increases in fish weight. Energy density, adjusted for weight, was lower in Lake Huron than in Lake Michigan for both small (≤800 g) and large fish (>800 g). Energy density did not differ seasonally for small or large lake whitefish or between adult male and female fish. Energy density was strongly correlated with percent dry weight and percent lipid content. Based on data from commercially caught lake whitefish, body condition was lower in Lake Huron than Lake Michigan during 1981–2003, indicating that the dissimilarity in body condition between the lakes could be long standing. Energy density and lipid content in 2002–2004 in Lake Michigan were lower than data for comparable sized fish collected in 1969–1971. Differences in energy density between lakes were attributed to variation in diet and prey energy content as well as factors that affect feeding rates such as lake whitefish density and prey abundance.

  17. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet it has been touted to be. Here our main concern is the overall impact of big data; its current manifestation is constructing a Maginot Line in 21st-century science. Big data is no longer "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking to problem defining to address science challenges.

  18. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone "stars", or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  19. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  20. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
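    The spurious-correlation point can be made concrete with a small simulation (our own illustration, not taken from the article): among many candidate variables that are truly independent of a target, the best in-sample correlation grows with the number of candidates, even though every true correlation is zero.

```python
import random

# Illustration of spurious correlation under high dimensionality:
# the maximum sample correlation between a target and p independent
# noise predictors grows with p, despite zero true correlation.

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
n = 50                                   # sample size
target = [random.gauss(0, 1) for _ in range(n)]

for p in (10, 1000):                     # number of independent noise predictors
    best = max(abs(corr(target, [random.gauss(0, 1) for _ in range(n)]))
               for _ in range(p))
    print(p, round(best, 2))             # best in-sample |correlation| grows with p
```

    The best candidate among 1000 noise variables typically looks "significantly" correlated in-sample, which is exactly why naive variable screening on massive datasets produces false discoveries.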

  1. Effects of lake trout refuges on lake whitefish and cisco in the Apostle Islands Region of Lake Superior

    Science.gov (United States)

    Zuccarino-Crowe, Chiara M.; Taylor, William W.; Hansen, Michael J.; Seider, Michael J.; Krueger, Charles C.

    2016-01-01

    Lake trout refuges in the Apostle Islands region of Lake Superior are analogous to the concept of marine protected areas. These refuges, established specifically for lake trout (Salvelinus namaycush) and closed to most forms of recreational and commercial fishing, were implicated as one of several management actions leading to successful rehabilitation of Lake Superior lake trout. To investigate the potential significance of Gull Island Shoal and Devils Island Shoal refuges for populations of not only lake trout but also other fish species, relative abundances of lake trout, lake whitefish (Coregonus clupeaformis), and cisco (Coregonus artedi) were compared between areas sampled inside versus outside of refuge boundaries. During 1982–2010, lake trout relative abundance was higher and increased faster inside the refuges, where lake trout fishing was prohibited, than outside the refuges. Over the same period, lake whitefish relative abundance increased faster inside than outside the refuges. Both evaluations provided clear evidence that refuges protected these species. In contrast, trends in relative abundance of cisco, a prey item of lake trout, did not differ significantly between areas inside and outside the refuges. This result did not suggest indirect or cascading refuge effects due to changes in predator levels. Overall, this study highlights the potential of species-specific refuges to benefit other fish species beyond those that were the refuges' original target. Improved understanding of refuge effects on multiple species of Great Lakes fishes can be valuable for developing rationales for refuge establishment and predicting associated fish community-level effects.

  2. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  3. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    … modern astronomy requires big data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing with …, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications.

  4. Research objectives to support the South Florida Ecosystem Restoration initiative-Water Conservation Areas, Lake Okeechobee, and the East/West waterways

    OpenAIRE

    Kitchens, Wiley M.

    1994-01-01

    The South Florida Ecosystem encompasses an area of approximately 28,000 km2 comprising at least 11 major physiographic provinces, including the Kissimmee River Valley, Lake Okeechobee, the Immokalee Rise, the Big Cypress, the Everglades, Florida Bay, the Atlantic Coastal Ridge, Biscayne Bay, the Florida Keys, the Florida Reef Tract, and nearshore coastal waters. South Florida is a heterogeneous system of wetlands, uplands, coastal areas, and marine areas, dominated by the watershe...

  5. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  6. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  7. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  8. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  9. Hazards of volcanic lakes: analysis of Lakes Quilotoa and Cuicocha, Ecuador

    Directory of Open Access Journals (Sweden)

    G. Gunkel

    2008-01-01

    Volcanic lakes within calderas should be viewed as high-risk systems, and intensive lake monitoring must be carried out to evaluate the hazard of potential limnic or phreatic-magmatic eruptions. In Ecuador, two caldera lakes – Lakes Quilotoa and Cuicocha, located in the high Andean region >3000 m a.s.l. – have been the focus of these investigations. Both volcanoes are geologically young or historically active, and have formed large and deep calderas with lakes of 2 to 3 km in diameter, and 248 and 148 m in depth, respectively. In both lakes, visible gas emissions of CO2 occur, and an accumulation of CO2 in the deep water body must be taken into account.

    Investigations were carried out to evaluate the hazards of these volcanic lakes, and in Lake Cuicocha intensive monitoring was carried out for the evaluation of possible renewed volcanic activities. At Lake Quilotoa, a limnic eruption and diffuse CO2 degassing at the lake surface are to be expected, while at Lake Cuicocha, an increased risk of a phreatic-magmatic eruption exists.

  10. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us into the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
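    The supervised/unsupervised distinction this review draws can be illustrated with a minimal, dependency-free sketch (the toy data and function names below are invented for illustration; the review itself discusses R packages and webservers, not this code):

```python
# Minimal sketch of the "supervised" vs. "unsupervised" distinction described
# in the review. Toy data and function names are hypothetical, not from the paper.

def kmeans_1d(xs, iters=10):
    """Unsupervised: find two cluster centers without using any labels."""
    centers = [min(xs), max(xs)]  # simple initialisation for k = 2
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            i = min((0, 1), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def nearest_mean_classifier(xs, ys):
    """Supervised: learn per-class means from labelled training examples."""
    means = {c: sum(x for x, y in zip(xs, ys) if y == c)
                / sum(1 for y in ys if y == c)
             for c in set(ys)}
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

profiles = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]  # toy "expression profiles"
labels = [0, 0, 0, 1, 1, 1]                 # known only to the classifier

centers = kmeans_1d(profiles)               # structure found without labels
predict = nearest_mean_classifier(profiles, labels)
print(sorted(round(c, 2) for c in centers))  # → [0.15, 5.03]
print(predict(4.8))                          # → 1
```

    Both approaches see the same measurements; only the classifier is given the known labels, which is the core of the distinction the review develops.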

  11. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? What are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support in the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  13. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  14. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  15. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  16. Principles of lake sedimentology

    International Nuclear Information System (INIS)

    Håkanson, L.; Jansson, M.

    1983-01-01

    This book presents a comprehensive outline of the basic sedimentological principles for lakes, and focuses on environmental aspects and matters related to lake management and control – on lake ecology rather than lake geology. It is a guide for those who plan, perform and evaluate lake sedimentological investigations. Contents (abridged): Lake types and sediment types. Sedimentation in lakes and water dynamics. Lake bottom dynamics. Sediment dynamics and sediment age. Sediments in aquatic pollution control programmes. Subject index.

  17. Comparison of total mercury and methylmercury cycling at five sites using the small watershed approach

    Science.gov (United States)

    Shanley, J.B.; Mast, M. Alisa; Campbell, D.H.; Aiken, G.R.; Krabbenhoft, D.P.; Hunt, R.J.; Walker, J.F.; Schuster, P.F.; Chalmers, A.; Aulenbach, Brent T.; Peters, N.E.; Marvin-DiPasquale, M.; Clow, D.W.; Shafer, M.M.

    2008-01-01

    The small watershed approach is well-suited but underutilized in mercury research. We applied the small watershed approach to investigate total mercury (THg) and methylmercury (MeHg) dynamics in streamwater at the five diverse forested headwater catchments of the US Geological Survey Water, Energy, and Biogeochemical Budgets (WEBB) program. At all sites, baseflow THg was generally less than 1 ng L-1 and MeHg was less than 0.2 ng L-1. THg and MeHg concentrations increased with streamflow, so export was primarily episodic. At three sites, THg and MeHg concentration and export were dominated by the particulate fraction in association with POC at high flows, with maximum THg (MeHg) concentrations of 94 (2.56) ng L-1 at Sleepers River, Vermont; 112 (0.75) ng L-1 at Rio Icacos, Puerto Rico; and 55 (0.80) ng L-1 at Panola Mt., Georgia. Filtered (Colorado, THg export was also episodic but was dominated by filtered THg, as POC concentrations were low. MeHg typically tracked THg so that each site had a fairly constant MeHg/THg ratio, which ranged from near zero at Andrews to 15% at the low-relief, groundwater-dominated Allequash Creek, Wisconsin. Allequash was the only site with filtered MeHg consistently above detection, and the filtered fraction dominated both THg and MeHg. Relative to inputs in wet deposition, watershed retention of THg (minus any subsequent volatilization) was 96.6% at Allequash, 60% at Sleepers, and 83% at Andrews. Icacos had a net export of THg, possibly due to historic gold mining or frequent disturbance from landslides. Quantification and interpretation of Hg dynamics was facilitated by the small watershed approach with emphasis on event sampling. © 2008 Elsevier Ltd. All rights reserved.

  18. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general theory of relativity aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  19. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  20. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. Big data characteristics – value, volume, velocity, variety, veracity and variability – are described. Big data analytics in medicine and healthcare covers integration and analysis of large amounts of complex heterogeneous data such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.

  1. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  2. Some climatological factors of pine in the lake toba catchment area

    Science.gov (United States)

    Nasution, Z.

    2018-02-01

    The article deals with climatological factors affecting Pine in the Lake Toba Catchment Area (also called a drained basin); Pinus merkusii is a plant endemic to Sumatra. A central population of Pine in North Sumatra is located in the Tapanuli region, to the south of Lake Toba. Junghuhn discovered the species in the mountain range of Sipirok and provisionally named it Pinus sumatrana. The article presents a detailed analysis of approaches to climate factors, considering rainfall, air temperature, humidity, stemflow, throughfall and interception, followed by regression calculations to determine the relationship of precipitation with stemflow and interception. Stemflow is highly significant, with the significance of the difference between correlation coefficients tested against the z normal distribution. Temperature and relative humidity are important components of the climate; they influence the evaporation process and rainfall in the catchment. Pinus merkusii has a big crown interception, and stemflow and interception have an opposite relation: increasing interception capacity will decrease stemflow. This type of Pine also has rough bark with pronounced channels, so that it conducts water even during the wet season, which makes stemflow in Pinus merkusii relatively bigger.
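    The regression step the abstract describes (relating precipitation to stemflow) can be sketched with ordinary least squares. The rainfall and stemflow numbers below are hypothetical, invented for illustration; they are not the study's measurements:

```python
# Sketch of the regression analysis the abstract describes: fitting stemflow
# as a linear function of rainfall. All numbers are hypothetical.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

rainfall = [10.0, 25.0, 40.0, 60.0, 80.0]  # mm per event (hypothetical)
stemflow = [0.2, 0.9, 1.6, 2.6, 3.6]       # mm per event (hypothetical)

slope, intercept = linear_fit(rainfall, stemflow)
predicted = slope * 50.0 + intercept       # stemflow expected at 50 mm of rain
print(round(slope, 3))  # → 0.049
```

    The same fit applied to rainfall vs. interception would, per the abstract, yield a slope of opposite tendency, since increasing interception capacity decreases stemflow.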

  3. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.

  4. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data......’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD......) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  5. Lake Afdera: a threatened saline lake in Ethiopia | Getahun | SINET ...

    African Journals Online (AJOL)

    Lake Afdera is a saline lake located in the Afar region, Northern Ethiopia. Because of its inaccessibility it is one of the least studied lakes of the country. It supports life including three species of fish of which two are endemic. Recently, reports are coming out that this lake is used for salt extraction. This paper gives some ...

  6. Water quality of Lake Austin and Town Lake, Austin, Texas

    Science.gov (United States)

    Andrews, Freeman L.; Wells, Frank C.; Shelby, Wanda J.; McPherson, Emma

    1988-01-01

    Lake Austin and Town Lake are located on the Colorado River in Travis County, central Texas, and serve as a source of water for municipal and industrial water supplies, electrical-power generation, and recreation for more than 500,000 people in the Austin metropolitan area. Lake Austin, located immediately downstream of Lake Travis, extends for more than 20 miles into the western edge of the city of Austin. Town Lake extends through the downtown area of the city of Austin for nearly 6 miles where the Colorado River is impounded by Longhorn Dam.

  7. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  8. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  9. Variety, State and Origin of Drained Thaw Lake Basins in West-Siberian North

    Science.gov (United States)

    Kirpotin, S.; Polishchuk, Y.; Bryksina, N.; Sugaipova, A.; Pokrovsky, O.; Shirokova, L.; Kouraev, A.; Zakharova, E.; Kolmakova, M.; Dupre, B.

    2009-04-01

    Drained thaw lake basins in Western Siberia have a local name, "khasyreis" [1]. Khasyreis, as well as lakes, ponds and frozen mounds, are an invariable element of sub-arctic frozen peat bogs (palsas) and tundra landscapes. In some areas of the West-Siberian sub-arctic, khasyreis occupy up to 40-50% of the total lake area. Sometimes their concentration is so high that we call such places "khasyrei fields". Khasyreis are part of the natural cycle of palsa complex development [1], but their origin is not continuous and uniform in time and, in our opinion, there were periods of more intensive lake drainage and, accordingly, khasyrei development. These periods corresponded with epochs of climatic warming, and today we are facing one of them. In recent years this process has been substantially activated in the southern part of the West-Siberian sub-arctic [2]. It was discovered that in the zone of continuous permafrost thermokarst lakes have expanded their areas by about 10-12%, but in the zone of discontinuous permafrost the process of drainage prevails. These features are connected with the thickness of the peat layers, which gradually decreases to the north and thus reduces the opportunity for lake drainage in northern areas. The most typical mode of khasyrei origin is drainage into bigger lakes, which are always situated at lower levels and work as collecting funnels providing drainage of smaller lakes. The lower level of a big lake appears when the lake accumulates a critical mass of water, enough for subsidence of the lake bottom due to the melting of the underlying rocks [2]. Another way of lake drainage is the interception of a lake by a river. Lake drainage to the subsurface (underlying rocks), as some authors think [3, 4], is not possible in Western Siberia, because the permafrost here is at least 500 m thick and forms a safe confining bed. We mark out several stages of khasyrei development: freshly drained, young, mature and old. This sequence reflects stages of

  10. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  11. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    The term Big Data has been coined to refer to the extensive mass of data that cannot be managed by traditional data-handling methods or techniques. The field of Big Data plays an indispensable role in various areas, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the method of examining big data to reveal hidden patterns, unseen relationships and other important information that can be utilized to arrive at enhanced decisions. There has been a perpetually expanding interest in big data because of its fast growth and since it covers different areas of application. The Apache Hadoop open-source technology, created in Java and running on the Linux operating system, was used. The primary contribution of this research is to present an effective and free solution for big data applications in a distributed environment, with its advantages, and to indicate its ease of use. Later on, there emerges a need for an analytical review of new developments in big data technology. Healthcare is one of the greatest concerns of the world. Big data in healthcare refers to electronic health data sets that are related to patient healthcare and well-being. Data in the healthcare area is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.

  12. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  13. Refuge Lake Reclassification in 620 Minnesota Cisco Lakes under Future Climate Scenarios

    Directory of Open Access Journals (Sweden)

    Liping Jiang

    2017-09-01

    Cisco (Coregonus artedi) is the most common coldwater stenothermal fish in Minnesota lakes. Water temperature (T) and dissolved oxygen (DO) in lakes are important controls of fish growth and reproduction and are likely to change with future climate warming. Built upon a previous study, this study uses a modified method to identify which of 620 cisco lakes in Minnesota can still support cisco populations under future climate and therefore be classified as cisco refuge lakes. The previous study used the oxythermal stress parameter TDO3, the temperature at DO of 3 mg/L, simulated only from deep virtual lakes to classify the 620 cisco lakes. Using four categories of virtual but representative cisco lakes in the modified method, a one-dimensional water quality model, MINLAKE2012, was used to simulate daily T and DO profiles in 82 virtual lakes under the past (1961–2008) and two future climate scenarios. A multiyear average of the 31-day largest TDO3 over variable benchmark (VB) periods, AvgATDO3VB, was calculated from simulated T and DO profiles using FishHabitat2013. Contour plots of AvgATDO3VB for the four categories of virtual lakes were then developed to reclassify the 620 cisco lakes into Tier 1 (AvgATDO3VB < 11 °C) or Tier 2 refuge lakes, and Tier 3 non-refuge lakes (AvgATDO3VB > 17 °C). About 20% of the 620 cisco lakes are projected to be refuge lakes under future climate scenarios, which is a more accurate projection (improving the prediction accuracy by ~6.5%) than the previous study, since AvgATDO3VB was found to vary by lake category.
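    The tier thresholds quoted in the abstract (Tier 1 below 11 °C, Tier 3 above 17 °C, Tier 2 between) amount to a simple classification rule; a sketch follows, with the function name and example temperatures invented for illustration:

```python
# Sketch of the refuge-lake tier rule described in the abstract:
# Tier 1 if AvgATDO3VB < 11 °C, Tier 3 if > 17 °C, Tier 2 otherwise.
# Function name and example values are illustrative, not from the paper.

def cisco_tier(avg_atdo3_vb_celsius):
    if avg_atdo3_vb_celsius < 11.0:
        return 1   # refuge lake
    if avg_atdo3_vb_celsius > 17.0:
        return 3   # non-refuge lake
    return 2       # intermediate refuge lake

print([cisco_tier(t) for t in (9.5, 14.0, 18.2)])  # → [1, 2, 3]
```

    In the study itself the AvgATDO3VB input comes from MINLAKE2012/FishHabitat2013 simulations, not from direct measurement.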

  14. Watershed vs. within-lake drivers of nitrogen: phosphorus dynamics in shallow lakes.

    Science.gov (United States)

    Ginger, Luke J; Zimmer, Kyle D; Herwig, Brian R; Hanson, Mark A; Hobbs, William O; Small, Gaston E; Cotner, James B

    2017-10-01

    Research on lake eutrophication often identifies variables affecting amounts of phosphorus (P) and nitrogen (N) in lakes, but understanding factors influencing N:P ratios is important given its influence on species composition and toxin production by cyanobacteria. We sampled 80 shallow lakes in Minnesota (USA) for three years to assess effects of watershed size, proportion of watershed as both row crop and natural area, fish biomass, and lake alternative state (turbid vs. clear) on total N : total P (TN:TP), ammonium, total dissolved phosphorus (TDP), and seston stoichiometry. We also examined N:P stoichiometry in 20 additional lakes that shifted states during the study. Last, we assessed the importance of denitrification by measuring denitrification rates in sediment cores from a subset of 34 lakes, and by measuring seston δ15N in four additional experimental lakes before and after they were experimentally manipulated from turbid to clear states. Results showed alternative state had the largest influence on overall N:P stoichiometry in these systems, as it had the strongest relationship with TN:TP, seston C:N:P, ammonium, and TDP. Turbid lakes had higher N at given levels of P than clear lakes, with TN and ammonium 2-fold and 1.4-fold higher in turbid lakes, respectively. In lakes that shifted states, TN was 3-fold higher in turbid lakes, while TP was only 2-fold higher, supporting the notion N is more responsive to state shifts than is P. Seston δ15N increased after lakes shifted to clear states, suggesting higher denitrification rates may be important for reducing N levels in clear states, and potential denitrification rates in sediment cores were among the highest recorded in the literature. Overall, our results indicate lake state was a primary driver of N:P dynamics in shallow lakes, and lakes in clear states had much lower N at a given level of P relative to turbid lakes, likely due to higher denitrification rates. Shallow lakes are often

  15. Changes in Rongbuk lake and Imja lake in the Everest region of Himalaya

    Science.gov (United States)

    Chen, W.; Doko, T.; Liu, C.; Ichinose, T.; Fukui, H.; Feng, Q.; Gou, P.

    2014-12-01

    The Himalaya spans a greater range of elevations than any other mountain system and is one of the most extensively glacierized regions in the world outside the polar regions. It is sensitive to climate change, and changes in the glacial regime are indicators of global climate change. Since the second half of the last century, most Himalayan glaciers have retreated due to climate change, and this glacier retreat has directly driven changes in the glacial lakes of the Himalayan region. New glacial lakes have formed, and a number of them have expanded in the Everest region of the Himalaya. This paper focuses on two glacial lakes, Imja Lake on the southern slope and Rongbuk Lake on the northern slope of the Mt. Everest region, Himalaya, and presents their spatio-temporal changes from 1976 to 2008. Topographical conditions of the two lakes differed (Kruskal-Wallis test, p < 0.05): Rongbuk Lake is located 623 m higher than Imja Lake, and radiation at Rongbuk Lake was higher than at Imja Lake. Although Imja Lake was larger than Rongbuk Lake in 2008, the growth of Rongbuk Lake accelerated after 2000 and exceeded that of Imja Lake in 2000-2008. This expansion of Rongbuk Lake is anticipated to continue in the 21st century, and Rongbuk Lake may become the biggest potential source of glacial lake outburst flood (GLOF) risk in the Everest region of the Himalaya in the future.

  16. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  17. Lake-wide distribution of Dreissena in Lake Michigan, 1999

    Science.gov (United States)

    Fleischer, Guy W.; DeSorcie, Timothy J.; Holuszko, Jeffrey D.

    2001-01-01

    The Great Lakes Science Center has conducted lake-wide bottom trawl surveys of the fish community in Lake Michigan each fall since 1973. These systematic surveys are performed at depths of 9 to 110 m at each of seven index sites around Lake Michigan. Zebra mussel (Dreissena polymorpha) populations have expanded to all survey locations and at levels sufficient to contribute to the bottom trawl catches. The quagga mussel (Dreissena bugensis), recently reported in Lake Michigan, was likely present in the catches though not recognized. Dreissena spp. biomass ranged from about 0.6 to 15 kg/ha at the various sites in 1999. Dreissenid mussels were found at depths of 9 to 82 m, with their peak biomass at 27 to 46 m. The colonization by these exotic mussels has ecological implications as well as potential ramifications for the ability to sample fish consistently and effectively with bottom trawls in Lake Michigan.

  18. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.
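    The quoted resolution fixes the width of a resolution element at each wavelength via Δλ = λ/R. A quick check at the two ends of the bandpass (pairing the lower R with the blue end is an assumption for illustration):

```python
# Width of a spectral resolution element from R = lambda / delta_lambda.
def delta_lambda_nm(wavelength_nm: float, resolution: float) -> float:
    """Resolution element width in nm at the given wavelength."""
    return wavelength_nm / resolution

blue_end = delta_lambda_nm(340.0, 3000.0)   # ~0.11 nm per element
red_end = delta_lambda_nm(1060.0, 4800.0)   # ~0.22 nm per element
print(blue_end, red_end)
```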

  19. Aphanius arakensis, a new species of tooth-carp (Actinopterygii, Cyprinodontidae) from the endorheic Namak Lake basin in Iran

    OpenAIRE

    Teimori,Azad; Esmaeili,Hamid; Gholami,Zeinab; Zarei,Neda; Reichenbacher,Bettina

    2012-01-01

    A new species of tooth-carp, Aphanius arakensis sp. n., is described from the Namak Lake basin in Iran. The new species is distinguished from its congeners distributed in Iran by the following combination of characters: 10–12 anal fin rays, 28–32 lateral line scales, 10–13 caudal peduncle scales, 8–10 gill rakers, 12–19 (commonly 15–16) clearly defined flank bars in males, and a more prominent pigmentation along the flank added by relatively big blotches in the m...

  20. Large Lakes Dominate CO2 Evasion From Lakes in an Arctic Catchment

    Science.gov (United States)

    Rocher-Ros, Gerard; Giesler, Reiner; Lundin, Erik; Salimi, Shokoufeh; Jonsson, Anders; Karlsson, Jan

    2017-12-01

    CO2 evasion from freshwater lakes is an important component of the carbon cycle. However, the relative contribution from different lake sizes may vary, since several parameters underlying CO2 flux are size dependent. Here we estimated the annual lake CO2 evasion from a catchment in northern Sweden encompassing about 30,000 differently sized lakes. We show that areal CO2 fluxes decreased rapidly with lake size, but this was counteracted by the greater overall coverage of larger lakes. As a result, total efflux increased with lake size and the single largest lake in the catchment dominated the CO2 evasion (53% of all CO2 evaded). By contrast, the contribution from the smallest ponds (about 27,000) was minor (evasion at the landscape scale.
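    The size-scaling argument can be sketched numerically: if areal flux declines as a power law of lake area while total efflux is areal flux times area, one sufficiently large lake can dominate the catchment total. Every number below (the lake-size distribution and the flux law) is hypothetical, not the study's data:

```python
# Hypothetical illustration: areal CO2 flux decreases with lake area
# (power-law decline assumed here), but total efflux (areal flux x area)
# can still be dominated by the single largest lake when its area rivals
# the combined area of all smaller lakes.
lake_areas_km2 = [0.001] * 27000 + [0.01] * 2700 + [1.0] * 99 + [330.0]

def areal_flux(area_km2: float) -> float:
    """Assumed areal flux (g C m^-2 yr^-1), declining with lake size."""
    return 50.0 * area_km2 ** -0.1

# Total efflux per lake in g C/yr (area converted from km^2 to m^2).
total_per_lake = [areal_flux(a) * a * 1e6 for a in lake_areas_km2]
largest_share = max(total_per_lake) / sum(total_per_lake)
print(f"largest lake share: {largest_share:.0%}")  # roughly half
```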

  1. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and progress education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, and critically explores the perils of applying a data-driven approach to education. Despite the claimed value of the...

  2. Lake Michigan lake trout PCB model forecast post audit

    Science.gov (United States)

    Scenario forecasts for total PCBs in Lake Michigan (LM) lake trout were conducted using the linked LM2-Toxics and LM Food Chain models, supported by a suite of additional LM models. Efforts were conducted under the Lake Michigan Mass Balance Study and the post audit represents th...

  3. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurements data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and are as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  4. Using Satellite Imagery to Monitor the Major Lakes; Case Study Lake Hamun

    Science.gov (United States)

    Norouzi, H.; Islam, R.; Bah, A.; AghaKouchak, A.

    2015-12-01

    Proper lake function can ease the impact of floods and drought, especially in arid and semi-arid regions. Lakes are environmentally important and can directly affect human lives. Better understanding of the effects of climate change and human-driven changes on lakes would provide invaluable information for policy-makers and local people. As part of a comprehensive study, we aim to monitor land-cover/land-use changes in the world's major lakes using satellite observations. As a case study, we investigate Hamun Lake, a pluvial (shallow) lake located in the south-east of Iran adjacent to the Afghanistan and Pakistan borders. The Lake is the main source of resources (agriculture, fishing and hunting) for the people around it and is politically important in the region since it is shared among three different countries. The purpose of the research is to determine the Lake's area from 1972 to 2015 and to see whether drought or water-resources management has affected the lake. Analysis of Landsat satellite imagery shows that the area of the Lake changes seasonally and intra-annually. Significant seasonal effects are found in 1975, 1977, 1987, 1993, 1996, 1998, 2000, 2009 and 2011, and a substantial amount of shallow water is found throughout the years. The precipitation records as well as historical drought records are studied for the lake's basin. Meteorological studies suggest that drought, the decrease of rainfall in the province and improper management of the Lake have caused environmental, economic and geographical consequences. The results reveal that the lake has experienced at least two prolonged dryings since 1972, for which drought cannot solely be blamed as the main forcing factor.
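    The kind of lake-area time series described above is typically built by classifying water pixels in each scene and converting the pixel count to an area. A hedged sketch using a simple NDWI threshold (the band values and threshold are illustrative, not the study's actual processing chain):

```python
import numpy as np

PIXEL_AREA_M2 = 30.0 * 30.0  # Landsat pixel footprint

def lake_area_km2(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> float:
    """Area of pixels whose NDWI = (green - nir) / (green + nir) exceeds threshold."""
    ndwi = (green - nir) / (green + nir + 1e-12)  # small epsilon avoids 0/0
    return float((ndwi > threshold).sum()) * PIXEL_AREA_M2 / 1e6

# Toy 2x2 scene: the top row is water (green reflectance > NIR).
green = np.array([[0.3, 0.3], [0.05, 0.05]])
nir = np.array([[0.1, 0.1], [0.4, 0.4]])
print(lake_area_km2(green, nir))  # two 900 m^2 pixels -> 0.0018 km^2
```

Repeating this per scene across the archive yields the seasonal and intra-annual area series the abstract describes.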

  5. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  6. A new 10,000 year pollen record from Lake Kinneret (Israel) - first results

    Science.gov (United States)

    Schiebel, V.; Litt, T.; Nowaczyk, N.; Stein, M.; Wennrich, V.

    2012-04-01

    Lake Kinneret, part of the Jordan Rift Valley in Israel, is situated in the southern Levant, which is affected by Eastern Mediterranean climate. The present lake level is around 212 m below msl. Lake Kinneret has a surface of ca. 165 km2 and its watershed comprises the Galilee, the Golan Heights, the Hermon Range and the Anti-Lebanon Mountains. Its most important tributary is the Jordan River. The geography of the Lake Kinneret region is characterised by big differences in altitude. Steep slopes rise up to 560 m above the lake level in the west, north, and east. Mount Hermon (2814 m above mean sea level, amsl), the highest summit of the Anti-Lebanon Range, and Mount Meron (1208 m amsl), located in the Upper Galilee, encircle Lake Kinneret within a 100-km range to the northwest. Due to the pattern of average precipitation, distinct plant-geographical territories converge in the region: the Mediterranean and the Irano-Turanian biomes (after Zohary). Varying ratios of characteristic pollen taxa representing certain plant associations serve as proxy data for the reconstruction of paleovegetation, paleoenvironment, and paleoclimate. We present a pollen record based on analyses of sediment cores obtained during a drilling campaign on Lake Kinneret in March 2010. A composite profile of 17.8 m length was established by correlating two parallel cores using magnetic susceptibility data. Our record encompasses the past ca. 10,000 years of a region which has been discussed as a migration corridor of humans to Europe and, as part of the Fertile Crescent, as the cradle of agriculture in West Asia. Conclusions concerning human impact on vegetation and therefore population density can be drawn by analysing changes in the ratios of certain plant taxa such as Olea europaea, cultivated in this region since the Chalcolithic Period (6,500 BP). 
In addition, stable isotope data were produced from discrete bulk samples, and the elemental composition of the sediments was determined by

  7. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used more in the future as a tool for everything that happens both online and offline. Of course, online is its natural habitat: Big Data is found in this medium, offering many advantages and serving as a real help for all consumers. In this paper we discuss Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit discussed in this paper is presented in the cloud section.

  8. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  9. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  10. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  11. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us about its consequences. This special

  12. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policy-makers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the ‘Big Data

  13. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced in big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  14. Spatial distribution of seepage at a flow-through lake: Lake Hampen, Western Denmark

    DEFF Research Database (Denmark)

    Kidmose, Jacob Baarstrøm; Engesgaard, Peter Knudegaard; Nilsson, Bertel

    2011-01-01

    recharge pattern of the lake and relating these to the geologic history of the lake. Recharge of the surrounding aquifer by lake water occurs off shore in a narrow zone, as measured from lake–groundwater gradients. A 33-m-deep d18O profile at the recharge side shows a lake d18O plume at depths...... that corroborates the interpretation of lake water recharging off shore and moving down gradient. Inclusion of lake bed heterogeneity in the model improved the comparison of simulated and observed discharge to the lake. The apparent age of the discharging groundwater to the lake was determined by CFCs, resulting......

  15. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  16. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the `horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  17. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  18. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  19. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher scores on empathy and on the Big Five factors, with the exception of the Neuroticism factor. They found associations with empathy in the domains of Openness, Agreeableness, Conscientiousness, and Extraversion. In our data, women score significantly higher on both empathy and the Big Five...

  20. Aquatic macrophyte richness in Danish lakes in relation to alkalinity, transparency, and lake area

    DEFF Research Database (Denmark)

    Vestergaard, Ole Skafte; Sand-Jensen, Kaj

    2000-01-01

    We examined the relationship between environmental factors and the richness of submerged macrophytes species in 73 Danish lakes, which are mainly small, shallow, and have mesotrophic to hypertrophic conditions. We found that mean species richness per lake was only 4.5 in acid lakes of low...... alkalinity but 12.3 in lakes of high alkalinity due to a greater occurrence of the species-rich group of elodeids. Mean species richness per lake also increased significantly with increasing Secchi depth. No significant relationship between species richness and lake surface area was observed among the entire...... group of lakes or a subset of eutrophic lakes, as the growth of submerged macrophytes in large lakes may be restricted by wave action in shallow water and light restriction in deep water. In contrast, macrophyte species richness increased with lake surface area in transparent lakes, presumably due...

  1. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th repeats (Lig A10); and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
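    A dissociation constant of 2-4 µM implies substantial site occupancy at micromolar free Ca²+. A minimal sketch of the standard one-site binding isotherm (the free Ca²+ concentrations below are illustrative, not measurements from the study):

```python
# One-site binding isotherm: theta = [Ca] / (Kd + [Ca]).
def fraction_bound(ca_um: float, kd_um: float) -> float:
    """Fractional occupancy of a single Ca2+ site at free [Ca2+] = ca_um."""
    return ca_um / (kd_um + ca_um)

# With Kd = 3 uM (midpoint of the reported 2-4 uM range):
for ca in (1.0, 10.0, 100.0):
    print(ca, round(fraction_bound(ca, kd_um=3.0), 2))
```

At the Kd itself the site is half-occupied, so micromolar Ca²+ levels are enough to load a 2-4 µM site appreciably.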

  2. Calcite growth-rate inhibition by fulvic acids isolated from Big Soda Lake, Nevada, USA, The Suwannee River, Georgia, USA and by polycarboxylic acids

    Science.gov (United States)

    Reddy, Michael M.; Leenheer, Jerry

    2011-01-01

    Calcite crystallization rates are characterized using a constant solution composition at 25°C, pH=8.5, and calcite supersaturation (Ω) of 4.5 in the absence and presence of fulvic acids isolated from Big Soda Lake, Nevada (BSLFA), and a fulvic acid from the Suwannee River, Georgia (SRFA). Rates are also measured in the presence and absence of low-molar-mass, aliphatic-alicyclic polycarboxylic acids (PCA). BSLFA inhibits calcite crystal-growth rates with increasing BSLFA concentration, suggesting that BSLFA adsorbs at growth sites on the calcite crystal surface. Calcite growth morphology in the presence of BSLFA differed from growth in its absence, supporting an adsorption mechanism of calcite-growth inhibition by BSLFA. Calcite growth-rate inhibition by BSLFA is consistent with a model indicating that polycarboxylic acid molecules present in BSLFA adsorb at growth sites on the calcite crystal surface. In contrast to published results for an unfractionated SRFA, there is dramatic calcite growth inhibition (at a concentration of 1 mg/L) by a SRFA fraction eluted by pH 5 solution from XAD-8 resin, indicating that calcite growth-rate inhibition is related to specific SRFA component fractions. A cyclic PCA, 1,2,3,4,5,6-cyclohexanehexacarboxylic acid (CHXHCA), is a strong calcite growth-rate inhibitor at concentrations less than 0.1 mg/L. Two other cyclic PCAs, 1,1-cyclopentanedicarboxylic acid (CPDCA) and 1,1-cyclobutanedicarboxylic acid (CBDCA), with the carboxylic acid groups attached to the same ring carbon atom, have no effect on calcite growth rates up to concentrations of 10 mg/L. Organic matter adsorbed from the air onto the seed crystals has no effect on the measured calcite crystal-growth rates.
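    The supersaturation Ω in such constant-composition experiments is the ion activity product over the calcite solubility product, Ω = a(Ca²⁺)·a(CO₃²⁻)/Ksp. A sketch under assumed ion activities chosen to land near the study's Ω of 4.5 (the Ksp value is the commonly cited 25°C calcite constant; the activities are illustrative, not the study's solution chemistry):

```python
# Supersaturation ratio with respect to calcite at 25 C.
KSP_CALCITE_25C = 10 ** -8.48  # commonly cited calcite Ksp at 25 C

def omega(a_ca: float, a_co3: float) -> float:
    """Omega = ion activity product / solubility product."""
    return a_ca * a_co3 / KSP_CALCITE_25C

# Illustrative activities giving Omega close to the experimental 4.5:
print(round(omega(a_ca=2.5e-4, a_co3=6.0e-5), 1))
```

Omega > 1 means the solution is supersaturated and seed crystals can grow; the constant-composition method holds this value fixed while growth rates are measured.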

  3. Deliberations on Microbial Life in the Subglacial Lake Vostok, East Antarctica

    Science.gov (United States)

    Bulat, S.; Alekhina, I.; Lipenkov, V.; Lukin, V.; Marie, D.; Petit, J.

    2004-12-01

    evidence for a reasonable source of microbial contribution given the highly oxygenated lake water environment. Microscopy and flow cytometry trials on strictly decontaminated ice samples gave supporting results. While microscopy failed to reveal cells because the local concentrations were below the detection limit, flow cytometry succeeded in a preliminary estimate of 9 and 24 cells/ml for accretion 1 (3561 m) and control glacial (2054 m) ice samples, respectively. However, given that the ratio of contaminant to indigenous cells is about 10:1 (from PCR results), the genuine microbial content for both accretion and glacial ice samples is expected to be as low as 1 cell/ml, which practically means "sterile" conditions. Thus, the accretion ice from Lake Vostok contains a very low, unevenly distributed biomass, indicating that the water body (at least its upper layer) should also host highly sparse life, if any. Lake Vostok could thus, for the first time, present a big natural "sterile" water body on Earth, providing a unique test area for searching for life on icy moons and planets. The search for life in Lake Vostok is constrained by a high chance of forward-contamination, which can be minimized by use of stringent decontamination procedures and comprehensive biological controls.
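    The contamination correction implied above is simple arithmetic: with contaminant and indigenous cells at roughly 10:1, only 1/(10+1) of a measured count is genuine. A hedged sketch of that scaling applied to the reported counts:

```python
# Scale a measured cell count by the assumed contaminant:indigenous
# ratio (~10:1 from the PCR results) to estimate the genuine content.
def genuine_cells(measured_per_ml: float, contam_ratio: float = 10.0) -> float:
    """Genuine (indigenous) cells/ml given contaminants outnumber them contam_ratio:1."""
    return measured_per_ml / (contam_ratio + 1.0)

# Reported preliminary counts: 9 (accretion) and 24 (glacial) cells/ml.
print(round(genuine_cells(9), 1), round(genuine_cells(24), 1))  # ~0.8 and ~2.2
```

Both estimates fall near or below 1 cell/ml, consistent with the abstract's "practically sterile" characterization.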

  4. LakeMIP Kivu: evaluating the representation of a large, deep tropical lake by a set of one-dimensional lake models

    Directory of Open Access Journals (Sweden)

    WIM Thiery

    2014-02-01

    Full Text Available The African great lakes are of utmost importance for the local economy (fishing), as well as being essential to the survival of the local people. During the past decades, these lakes experienced fast changes in ecosystem structure and functioning, and their future evolution is a major concern. In this study, for the first time a set of one-dimensional lake models is evaluated for Lake Kivu (2.28°S, 28.98°E), East Africa. The unique limnology of this meromictic lake, with the importance of salinity and subsurface springs in a tropical high-altitude climate, presents a worthy challenge to the seven models involved in the Lake Model Intercomparison Project (LakeMIP). Meteorological observations from two automatic weather stations are used to drive the models, whereas a unique dataset, containing over 150 temperature profiles recorded since 2002, is used to assess the models' performance. Simulations are performed over the freshwater layer only (60 m) and over the average lake depth (240 m), since salinity increases with depth below 60 m in Lake Kivu and some lake models do not account for the influence of salinity upon lake stratification. All models are able to reproduce the mixing seasonality in Lake Kivu, as well as the magnitude and seasonal cycle of the lake enthalpy change. Differences between the models can be ascribed to variations in the treatment of the radiative forcing and the computation of the turbulent heat fluxes. Fluctuations in wind velocity and solar radiation explain inter-annual variability of observed water column temperatures. The good agreement between the deep simulations and the observed meromictic stratification also shows that a subset of models is able to account for the salinity- and geothermal-induced effects upon deep-water stratification. Finally, based on the strengths and weaknesses discerned in this study, an informed choice of a one-dimensional lake model for a given research purpose becomes possible.
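
    The lake-enthalpy-change diagnostic used to compare the models can be illustrated with a toy per-area heat-content calculation. The layer thicknesses and temperatures below are invented for illustration, and salinity effects on density and heat capacity are ignored:

```python
RHO_W = 1000.0  # kg m^-3, freshwater density (salinity ignored)
CP_W = 4186.0   # J kg^-1 K^-1, specific heat of liquid water

def enthalpy_change(profile_t0, profile_t1, dz):
    """Per-area heat-content change (J m^-2) of a water column between two
    temperature profiles (°C) sampled on equal layers of thickness dz (m)."""
    return sum(RHO_W * CP_W * (t1 - t0) * dz
               for t0, t1 in zip(profile_t0, profile_t1))

# Hypothetical 3-layer column of 20 m slabs warming by 0.5, 0.2 and 0.0 K:
dH = enthalpy_change([24.0, 23.0, 22.5], [24.5, 23.2, 22.5], dz=20.0)
print(f"{dH / 1e6:.1f} MJ/m^2")  # → 58.6 MJ/m^2
```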

  5. Behavior of plutonium and other long-lived radionuclides in Lake Michigan. I. Biological transport, seasonal cycling, and residence times in the water column

    International Nuclear Information System (INIS)

    Wahlgren, M.A.; Marshall, J.S.

    1975-01-01

    Eight operating nuclear reactors are situated on the shores of Lake Michigan, but their releases of radioactivity have been much less than that entering the lake from stratospheric fallout. Measurements of 239,240Pu, 241Am, and 137Cs from the latter source have been made in order to study biological transport, seasonal cycling, and residence times of long-lived radionuclides in the lake. The apparent turnover times for the residual fallout 239,240Pu and 137Cs, which are present as nonfilterable, ionic forms, are about 3 to 4 y. Resuspension may be occurring at a low rate, probably through the feeding activities of benthic organisms. Transport by settling of phytodetritus and zooplankton fecal pellets is postulated to be the cause of the rapid decline of the concentration of 239,240Pu in surface waters observed during summer thermal stratification of the lake, while the concentration of 137Cs remained almost constant. Concentration factors for fallout 239,240Pu, 137Cs, and 90Sr at various trophic levels in the food chain in Lake Michigan have been measured. Analyses of biological samples taken at various distances from the Big Rock Point Nuclear Power Plant and of plant waste discharge show that any plutonium possibly released from the recycle plutonium test fuel is too low to be detectable in the presence of fallout plutonium. Measurements of 239,240Pu, 137Cs, and 90Sr on a comparison set of surface water and net plankton samples from all five Great Lakes indicate generally consistent behavior patterns in these lakes. (U.S.)

  6. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data adds value to both the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  7. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  8. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  9. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
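
    The inequality-join problem that IEJoin accelerates can be stated compactly. The sketch below is only the naive O(n·m) reference definition on invented transaction data; IEJoin itself replaces this pairwise scan with sorted arrays, a permutation array, and a bit-array, which is not reproduced here:

```python
from itertools import product

def inequality_join(left, right, cond):
    """Naive reference inequality join: every (l, r) pair satisfying cond."""
    return [(l, r) for l, r in product(left, right) if cond(l, r)]

# Hypothetical workload: pair transactions where the left one has a shorter
# duration but a higher revenue than the right one.
rows = [
    {"id": "t1", "dur": 100, "rev": 9},
    {"id": "t2", "dur": 140, "rev": 12},
    {"id": "t3", "dur": 80,  "rev": 5},
    {"id": "t4", "dur": 90,  "rev": 10},
]
pairs = inequality_join(rows, rows,
                        lambda l, r: l["dur"] < r["dur"] and l["rev"] > r["rev"])
print([(l["id"], r["id"]) for l, r in pairs])  # → [('t4', 't1')]
```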

  10. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  11. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  12. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  13. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, big-gear abrasion and breakage often take place, affecting normal production and causing unnecessary economic loss. A kind of intelligent test method for worn big gears was put forward, mainly aimed at the restrictions of high production cost, long production cycle and high-intensity manual repair-welding work. The measurement equations of the involute spur gear were transformed from their original polar-coordinate form into rectangular-coordinate form. The measuring principle for big-gear abrasion is introduced, a detection principle diagram is given, and the realization of the detection route is described. An OADM12 laser sensor was selected, and detection of the big-gear abrasion area was realized by the detection mechanism. Measured data from unworn and worn gears were fed into a calculation program written in the Visual Basic language, from which the big-gear abrasion quantity can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
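
    The polar-to-rectangular transformation mentioned above arrives at the standard parametric equations of a circle involute, x = r_b(cos t + t sin t), y = r_b(sin t − t cos t), where t is the roll angle and r_b the base-circle radius. A short sketch (the base radius and roll-angle range are hypothetical values, not taken from the paper):

```python
import math

def involute_points(r_base, t_max, n=50):
    """Sample the circle involute in rectangular coordinates:
       x = r_base * (cos t + t * sin t)
       y = r_base * (sin t - t * cos t)
    for roll angle t in [0, t_max]."""
    pts = []
    for i in range(n + 1):
        t = t_max * i / n
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

profile = involute_points(r_base=50.0, t_max=0.6)
print(profile[0])  # the curve starts on the base circle: (50.0, 0.0)
```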

  14. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are more and more executed electronically, consumers produce more and more data themselves - e.g. in social networks - and digitalization is ever increasing. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  15. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  16. Methane emissions from permafrost thaw lakes limited by lake drainage.

    NARCIS (Netherlands)

    van Huissteden, J.; Berrittella, C.; Parmentier, F.J.W.; Mi, Y.; Maximov, T.C.; Dolman, A.J.

    2011-01-01

    Thaw lakes in permafrost areas are sources of the strong greenhouse gas methane. They develop mostly in sedimentary lowlands with permafrost and a high excess ground-ice volume, resulting in large areas covered with lakes and drained thaw-lake basins (DTLBs). Their expansion is enhanced by

  17. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.
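
    The read-across idea described above — predicting a property of a data-poor substance from its most similar data-rich neighbours — can be sketched in a few lines. This is only a toy illustration, not the REACH-across tool; the fragment sets and LD50 values below are invented:

```python
def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two structural-fragment sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def read_across(query_frags, knowns, k=2):
    """Toy read-across: predict a property as the similarity-weighted mean
    over the k most structurally similar data-rich substances."""
    ranked = sorted(knowns, key=lambda s: tanimoto(query_frags, s["frags"]),
                    reverse=True)
    top = ranked[:k]
    w = [tanimoto(query_frags, s["frags"]) for s in top]
    return sum(wi * s["ld50"] for wi, s in zip(w, top)) / sum(w)

# Hypothetical knowledge base of substances with measured LD50 values:
knowns = [
    {"frags": {"phenol", "chloro"}, "ld50": 300.0},
    {"frags": {"phenol", "nitro"},  "ld50": 250.0},
    {"frags": {"alkane"},           "ld50": 900.0},
]
print(read_across({"phenol", "chloro", "nitro"}, knowns))  # → 275.0
```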

  18. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  19. Great Lakes Bathymetry

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lakes Michigan, Erie, Saint Clair, Ontario and Huron has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and...

  20. Lake and lake-related drainage area parameters for site investigation program

    Energy Technology Data Exchange (ETDEWEB)

    Blomqvist, P.; Brunberg, A.K. [Uppsala Univ. (Sweden). Dept. of Limnology; Brydsten, L [Umeaa Univ. (Sweden). Dept. of Ecology and Environmental Science

    2000-09-01

    In this paper, a number of parameters of importance to a preliminary determination of the ecological function of lakes are presented. The choice of parameters has been made with respect to a model for the determination of the nature conservation values of lakes which is currently being developed by the authors of this report, but is also well suited for a general description of the lake type and the functioning of the inherent ecosystem. The parameters have been divided into five groups: (1) The location of the object relative to important gradients in the surrounding nature; (2) The lake catchment area and its major constituents; (3) The lake morphometry; (4) The lake ecosystem; (5) Human-induced damage to the lake ecosystem. The first two groups, principally based on the climate, hydrology, geology and vegetation of the catchment area, represent parameters that can be used to establish the rarity and representativity of the lake, and will in the context of a site investigation program be used as a basis for generalisation of the results. The third group, the lake morphometry parameters, are standard parameters for the outline of sampling programmes and for calculations of the physical extension of different key habitats in the system. The fourth group, the ecosystem of the lake, includes physical, chemical and biological parameters required for determination of the stratification pattern, light climate, influence from the terrestrial ecosystem of the catchment area, trophic status, distribution of key habitats, and presence of fish and rare fauna and flora in the lake. In the context of a site investigation program, the parameters in these two groups will be used for budget calculations of the flow of energy and material in the system. The fifth group, finally, describes the degree of anthropogenic influence on the ecosystem and will in the context of site investigation programmes be used to judge possible malfunctioning within the entire, or parts of, the lake ecosystem

  1. Lake and lake-related drainage area parameters for site investigation program

    International Nuclear Information System (INIS)

    Blomqvist, P.; Brunberg, A.K.; Brydsten, L

    2000-09-01

    In this paper, a number of parameters of importance to a preliminary determination of the ecological function of lakes are presented. The choice of parameters has been made with respect to a model for the determination of the nature conservation values of lakes which is currently being developed by the authors of this report, but is also well suited for a general description of the lake type and the functioning of the inherent ecosystem. The parameters have been divided into five groups: 1) The location of the object relative to important gradients in the surrounding nature; 2) The lake catchment area and its major constituents; 3) The lake morphometry; 4) The lake ecosystem; 5) Human-induced damage to the lake ecosystem. The first two groups, principally based on the climate, hydrology, geology and vegetation of the catchment area, represent parameters that can be used to establish the rarity and representativity of the lake, and will in the context of a site investigation program be used as a basis for generalisation of the results. The third group, the lake morphometry parameters, are standard parameters for the outline of sampling programmes and for calculations of the physical extension of different key habitats in the system. The fourth group, the ecosystem of the lake, includes physical, chemical and biological parameters required for determination of the stratification pattern, light climate, influence from the terrestrial ecosystem of the catchment area, trophic status, distribution of key habitats, and presence of fish and rare fauna and flora in the lake. In the context of a site investigation program, the parameters in these two groups will be used for budget calculations of the flow of energy and material in the system. The fifth group, finally, describes the degree of anthropogenic influence on the ecosystem and will in the context of site investigation programmes be used to judge possible malfunctioning within the entire, or parts of, the lake ecosystem

  2. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  3. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article covers an analysis of ‘Big Data’, which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the ‘Big Data’ issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis classifies the “Big Data” issue as a form of information barrier. Framed this way, the issue may be addressed correctly, and it encourages the development of scientific and computational methods.

  4. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  5. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Summary Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  6. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  7. Rubidium-strontium ages from the Oxford Lake-Knee Lake greenstone belt, northern Manitoba

    International Nuclear Information System (INIS)

    Clark, G.S.; Cheung, S.-P.

    1980-01-01

    Rb-Sr whole-rock ages have been determined for rocks from the Oxford Lake-Knee Lake-Gods Lake greenstone belt in the Superior Province of northeastern Manitoba. The age of the Magill Lake Pluton is 2455 +- 35 Ma (λ87Rb = 1.42 x 10^-11 yr^-1), with an initial 87Sr/86Sr ratio of 0.7078 +- 0.0043. This granite stock intrudes the Oxford Lake Group, so it is post-tectonic and probably related to the second, weaker stage of metamorphism. The age of the Bayly Lake Pluton is 2424 +- 74 Ma, with an initial 87Sr/86Sr ratio of 0.7029 +- 0.0001. This granodioritic batholith complex does not intrude the Oxford Lake Group. It is syn-tectonic and metamorphosed. The age of volcanic rocks of the Hayes River Group, from Goose Lake (30 km south of Gods Lake Narrows), is 2680 +- 125 Ma, with an initial 87Sr/86Sr ratio of 0.7014 +- 0.0009. The ages for the Magill Lake and Bayly Lake Plutons can be interpreted as the minimum ages of granite intrusion in the area. The age for the Hayes River Group volcanic rocks is consistent with Rb-Sr ages of volcanic rocks from other Archean greenstone belts within the northwestern Superior Province. (auth)
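
    Whole-rock ages like those quoted here follow from the standard Rb-Sr isochron relation: the slope of the 87Sr/86Sr vs. 87Rb/86Sr isochron equals e^(λt) − 1, so t = ln(1 + slope)/λ. A minimal sketch using the decay constant given in the abstract; the slope value is back-computed for illustration, not measured data:

```python
import math

LAMBDA_RB87 = 1.42e-11  # 87Rb decay constant from the abstract, per year

def isochron_age(slope, lam=LAMBDA_RB87):
    """Rb-Sr isochron age in years: slope = exp(lam*t) - 1  =>
    t = ln(1 + slope) / lam."""
    return math.log1p(slope) / lam

# Hypothetical slope chosen so the round trip reproduces the Magill Lake
# Pluton age of ~2455 Ma reported above.
slope = math.expm1(LAMBDA_RB87 * 2.455e9)
print(f"{isochron_age(slope) / 1e6:.0f} Ma")  # → 2455 Ma
```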

  8. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  9. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  10. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  11. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  12. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  13. Geneva Festival, 2004: Opened with the Big Bang, closed with Creation

    CERN Multimedia

    2004-01-01

    In its 50th Anniversary year, CERN had the honour of opening and closing this year's Geneva Festival. The Geneva Festival traditionally opens with a bang, but this year's was the biggest yet. On 30 July, on a warm summer's evening by Lake Geneva, several tons of fireworks replayed the early history of the Universe. Starting with the Big Bang, the display had acts representing inflation, the breaking of symmetries, the clash of antimatter and matter, hadrons and nucleosynthesis, the first atoms and the Universe becoming transparent, and the formation of stars and planets. It was a challenge to translate these very abstract ideas into more than a thousand kilograms of TNT of different colour. But, set to the music of The Matrix, Alan Parsons, and Jurassic Park, one of the most spectacular physics presentations ever staged dazzled the audience of two hundred thousand spectators. CERN physicist Rolf Landua, who scripted the narrative and worked with the pyrotechnicians on the realization, said: "From the many e...

  14. A Dynamical Downscaling study over the Great Lakes Region Using WRF-Lake: Historical Simulation

    Science.gov (United States)

    Xiao, C.; Lofgren, B. M.

    2014-12-01

    As the largest group of freshwater bodies on Earth, the Laurentian Great Lakes have a significant influence on local and regional weather and climate through their unique physical features compared with the surrounding land. Because of the limited spatial resolution and computational efficiency of general circulation models (GCMs), the Great Lakes are geometrically ignored or idealized into a few grid cells in GCMs. Thus, the nested regional climate modeling (RCM) technique, known as dynamical downscaling, serves as a feasible solution to fill the gap. The latest Weather Research and Forecasting model (WRF) is employed to dynamically downscale the historical simulation produced by the Geophysical Fluid Dynamics Laboratory Coupled Model (GFDL-CM3) for 1970-2005. An updated lake scheme originating from the Community Land Model is implemented in the latest WRF version 3.6. It is a one-dimensional mass and energy balance scheme with 20-25 model layers, including up to 5 snow layers on the lake ice, 10 water layers, and 10 soil layers on the lake bottom. The lake scheme is used with actual lake points and lake depths. The preliminary results show that the WRF-Lake model, with its fine horizontal resolution and realistic lake representation, provides significantly improved hydroclimates in terms of lake surface temperature, the annual cycle of precipitation, ice content, and lake-effect snowfall. Those improvements suggest that better resolution of the lakes and of the mesoscale processes of lake-atmosphere interaction is crucial to understanding climate and climate change in the Great Lakes region.
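The layered, one-dimensional energy balance described above can be illustrated with a toy vertical-diffusion step for a lake column. This is a minimal sketch only: the layer count, diffusivity, and initial profile below are illustrative assumptions, not values from the actual WRF-Lake scheme.

```python
import numpy as np

# Toy 1-D lake column: explicit vertical heat diffusion between water layers,
# loosely in the spirit of one-dimensional lake schemes. All numbers assumed.
n_layers, dz, dt = 10, 1.0, 3600.0        # 10 water layers, 1 m thick, 1 h step
kappa = 1.5e-6                            # m^2/s, assumed diffusivity
T = np.linspace(20.0, 8.0, n_layers)      # warm surface, cold bottom (deg C)

def step(T):
    flux = -kappa * np.diff(T) / dz       # Fourier's law between adjacent layers
    dT = np.zeros_like(T)
    dT[:-1] -= flux * dt / dz             # layer above loses what it exports
    dT[1:] += flux * dt / dz              # layer below gains it
    return T + dT

T1 = step(T)
# the scheme is conservative: the column-mean temperature is unchanged
assert abs(T1.mean() - T.mean()) < 1e-9
```

Because each interfacial flux is subtracted from one layer and added to its neighbor, column energy is conserved by construction, which is the basic property a mass-and-energy-balance lake scheme must satisfy.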

  15. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    Full Text Available BACKGROUND: Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation, and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting a similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.
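The reported dissociation constants (2-4 µM) can be put in perspective with the standard one-site binding isotherm, θ = [Ca²+] / (Kd + [Ca²+]). This is a generic textbook relation, not a calculation from the paper itself; the concentrations below are assumed for illustration.

```python
# Fractional occupancy of a single Ca2+ site from a dissociation constant,
# using the standard one-site binding isotherm (all values in uM, assumed).

def occupancy(ca_um, kd_um):
    """theta = [Ca] / (Kd + [Ca]) for a single binding site."""
    return ca_um / (kd_um + ca_um)

# At the reported Kd range of 2-4 uM, a free Ca2+ level of 10 uM
# already puts the site mostly in the bound state:
for kd in (2.0, 4.0):
    print(f"Kd = {kd} uM: occupancy at 10 uM Ca2+ = {occupancy(10.0, kd):.2f}")
```

By definition, occupancy is exactly 0.5 when the free Ca²+ concentration equals Kd, which is why micromolar Kd values are physiologically plausible for extracellular Ca²+ sensing.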

  16. [Characterizing chromophoric dissolved organic matter (CDOM) in Lake Honghu, Lake Donghu and Lake Liangzihu using excitation-emission matrices (EEMs) fluorescence and parallel factor analysis (PARAFAC)].

    Science.gov (United States)

    Zhou, Yong-Qiang; Zhang, Yun-Lin; Niu, Cheng; Wang, Ming-Zhu

    2013-12-01

    Little is known about DOM characteristics in medium to large lakes in the middle and lower reaches of the Yangtze River, such as Lake Honghu, Lake Donghu and Lake Liangzihu. Absorption, fluorescence and composition characteristics of chromophoric dissolved organic matter (CDOM) are presented using absorption spectroscopy, excitation-emission matrices (EEMs) fluorescence and a parallel factor analysis (PARAFAC) model, based on data collected in Sep.-Oct. 2007 comprising 15, 9 and 10 samples from Lake Honghu, Lake Donghu and Lake Liangzihu, respectively. The CDOM absorption coefficient at 350 nm, a(350), in Lake Honghu was significantly higher than in Lake Donghu and Lake Liangzihu (t-test, p<0.001). A strong relationship was found between the CDOM spectral slope in the wavelength range 280-500 nm (S280-500) and a(350) (R²=0.781, p<0.001). The mean S280-500 in Lake Honghu was significantly lower than in Lake Donghu and Lake Liangzihu (t-test, p<0.001). The mean spectral slope ratio SR in Lake Honghu was also significantly lower than in Lake Donghu and Lake Liangzihu (t-test, p<0.05). Two humic-like (C1, C2) and two protein-like (C3, C4) fluorescent components were identified by the PARAFAC model, among which significant positive correlations were found between C1 and C2 (R²=0.884, p<0.001) and between C3 and C4 (R²=0.677, p<0.001), suggesting that the sources of the two humic-like components, as well as of the two protein-like components, were similar. However, no significant correlation was found between these four fluorescent components and DOC concentration. The fluorescence indices FI255 (HIX), FI265, FI310 (BIX) and FI370 in Lake Donghu were all significantly higher than those in Lake Liangzihu (t-test, p<0.05) and Lake Honghu (t-test, p<0.01), indicating that the eutrophication status of Lake Donghu was higher than that of Lake Honghu and Lake Liangzihu.
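The spectral slope S280-500 used above is conventionally obtained by fitting the exponential model a(λ) = a(λ₀)·exp(−S·(λ − λ₀)) to the measured absorption spectrum. As a sketch of that standard fit (the synthetic spectrum and parameter values below are assumed, not data from this study), a log-linear least-squares fit recovers both S and a(350):

```python
import numpy as np

# Synthetic CDOM absorption spectrum following the standard exponential model
# a(lambda) = a(350) * exp(-S * (lambda - 350)); values are assumed for illustration.
wavelengths = np.arange(280, 501, 5)          # nm, the 280-500 nm fitting window
a350_true, S_true = 3.2, 0.016                # assumed a(350) in m^-1 and slope in nm^-1
a = a350_true * np.exp(-S_true * (wavelengths - 350))

# Log-linearize: ln a(lambda) = ln a(350) - S * (lambda - 350),
# then fit a straight line by least squares.
slope, intercept = np.polyfit(wavelengths - 350, np.log(a), 1)
S_fit = -slope
a350_fit = np.exp(intercept)

print(f"S280-500 = {S_fit:.4f} nm^-1, a(350) = {a350_fit:.2f} m^-1")
```

In practice a nonlinear fit to the raw spectrum is often preferred because log-transforming weights the noisy long-wavelength tail heavily, but the log-linear form shows the relationship between S and a(350) directly.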

  17. Isotope method to study the replenishment the lakes and downstream groundwater in Badain Jaran desert

    International Nuclear Information System (INIS)

    Chen Jiansheng; Fan Zhechao; Gu Weizu; Zhao Xia; Wang Jiyang

    2003-01-01

    In this paper, the sources of spring water and well water on the northern side of Qilian Mountain and in Longshou Mountain, the Badain Jaran Desert, Gurinai, Guaizi Lake, and the Ejina Basin are studied using environmental isotope and water chemistry methods. The groundwater of the downstream areas (such as the Badain Jaran Desert) is found to be recharged by precipitation on Qilian Mountain, with an average recharge elevation of 3300 m. Extensive exposed limestone layers exist at the top of Qilian Mountain. Snowmelt from Qilian Mountain infiltrates directly into deep layers through karst strata or the Big Fault in Front of the Mountain, and directly recharges the Badain Jaran Desert and its downstream areas, passing through Longshou Mountain. The calcareous cementation and travertine found in the lakes of the desert confirm that the groundwater passed through the limestone layer. Confined water recharges the shallow aquifer by means of leakage. The groundwater recharge volume, calculated from the evaporation amount, is six hundred million cubic meters per year, and the age of the confined groundwater is 20-30 years. (authors)

  18. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe that is filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that they cause the single singular events predicted by the corresponding quintessential (phantom) models to appear twice, in a manner that can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and again on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes, leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future may be connected in a non-paradoxical manner in the universes described by means of the new symmetric solutions.

  19. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  20. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  1. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of 'Big Data', the collection, storage, transmission and analysis of big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data are modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, the aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs are efficiently aggregated to reduce network resource consumption, while sensor data privacy is effectively protected to meet ever-growing application requirements.
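The four steps above (perturb at the node, aggregate within and between clusters, recover at the sink) can be sketched with a simple mask-and-recover scheme. This is an illustrative stand-in, not the actual Sca-PBDA protocol: the seeded offsets, cluster assignment, and value ranges are all hypothetical, and the real method's gradient topology and configuration messages are omitted.

```python
import random

# Each node masks its reading with a pseudo-random offset derived from a seed
# shared only with the sink, so intermediate aggregators never see raw values.
def perturb(readings, seeds):
    return [r + random.Random(s).uniform(-10, 10) for r, s in zip(readings, seeds)]

# Cluster heads sum the masked values of their members (intra-cluster step).
def cluster_aggregate(masked, cluster_of):
    sums = {}
    for value, cluster in zip(masked, cluster_of):
        sums[cluster] = sums.get(cluster, 0.0) + value
    return sums

# The sink combines cluster sums and strips the known total offset.
def sink_recover(cluster_sums, seeds):
    total_masked = sum(cluster_sums.values())
    total_offset = sum(random.Random(s).uniform(-10, 10) for s in seeds)
    return total_masked - total_offset

readings = [21.5, 22.0, 20.8, 23.1]     # hypothetical sensor values
seeds = [101, 102, 103, 104]            # per-node seeds known to the sink
cluster_of = [0, 0, 1, 1]               # two clusters of two nodes each
masked = perturb(readings, seeds)
recovered = sink_recover(cluster_aggregate(masked, cluster_of), seeds)
assert abs(recovered - sum(readings)) < 1e-9
```

The design point this illustrates: because each mask is deterministic given its seed, the sink can remove the aggregate perturbation exactly, while any relay sees only masked values and cluster sums.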

  2. Relationship between natural radioactivity and rock type in the Van lake basin - Turkey

    International Nuclear Information System (INIS)

    Tolluoglu, A. U.; Eral, M.; Aytas, S.

    2004-01-01

    The Van Lake basin is located in the eastern part of Turkey and essentially comprises two provinces, Van and Bitlis. Previous geochemical research indicated that the uranium concentrations of Van Lake water and deep sediments are 78-116 ppb and 0.1-0.5 ppm, respectively. Uranium was transported to Van Lake by rivers and streams flowing through outcrops of the Paleozoic Bitlis Massive and young Pleistocene alkaline/calc-alkaline volcanic rocks. This study focused on revealing the natural radioactivity, and its secondary dispersion related to rock type in surface environments, in the Van Lake basin. The basin can be subdivided into three parts: the eastern part is characterized by Mesozoic basic and ultrabasic rocks, the southern part is dominated by metamorphic rocks of the Bitlis Massive, and the western and northwestern parts are covered by Pleistocene volcanic rocks. The volcanic rocks can be subdivided into two types. The first is mafic rocks, mainly composed of basalts. The second is felsic rocks, represented by rhyolites, dacites and pumice tuff. Surface gamma measurements (cps) and dose rate measurements (μR/h) show different values according to rock type. Surface gamma measurements and surface dose rates in the basaltic rocks are slightly higher than the average values (130 cps, 11 μR/h). In the felsic volcanic rocks, such as rhyolites and dacites, surface gamma measurements and surface dose rates occasionally exceed the background. The highest values were obtained in the pumice tuffs. Rhyolitic eruptions related to Quaternary volcanic activity formed thick pumice (a natural glassy froth related to felsic volcanic rocks, exhibiting a spongy texture) sequences in the northern and western parts of the Van Lake basin. The mean dose rate of the pumice was measured as 15 μR/h, and the highest surface gamma measurement was recorded as 200 cps. The pumice has a very large water capacity, due to its porous texture

  3. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only led to challenging technical questions; it is also accompanied by a range of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for

  4. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for predicting disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relationship; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  5. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  6. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of Big Data's most promising usage domains are connected with distinct player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the major business ecosystem player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and by the players' responsive strategies. System dynamics was used to visualize the relationships in the provided model.

  7. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data (population and environment), discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high-resolution images, and textual corpora.

  8. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  9. Deglaciation, lake levels, and meltwater discharge in the Lake Michigan basin

    Science.gov (United States)

    Colman, Steven M.; Clark, J.A.; Clayton, L.; Hansel, A.K.; Larsen, C.E.

    1994-01-01

    The deglacial history of the Lake Michigan basin, including the discharge and routing of meltwater, is complex because of the interaction among (1) glacial retreats and re-advances in the basin, (2) the timing of occupation and the isostatic adjustment of lake outlets, and (3) the depositional and erosional processes that left evidence of past lake levels. In the southern part of the basin, a restricted area little affected by differential isostasy, new studies of onshore and offshore areas allow refinement of a lake-level history that has evolved over 100 years. Important new data include the recognition of two periods of influx of meltwater from Lake Agassiz into the basin and details of the highstands gleaned from sedimentological evidence. Major disagreements persist concerning the exact timing and lake-level changes associated with the Algonquin phase, approximately 11,000 BP. A wide variety of independent data suggests that the Lake Michigan Lobe was thin, unstable, and subject to rapid advances and retreats. Consequently, lake-level changes were commonly abrupt, and stable shorelines were short-lived. The long-held beliefs that the southern part of the basin was stable and separated from deformed northern areas by a hinge-line discontinuity are becoming difficult to maintain. Numerical modeling of the ice-earth system and empirical modeling of shoreline deformation are both consistent with observed shoreline tilting in the north and with the amount and pattern of modern deformation shown by lake-level gauges. New studies of subaerial lacustrine features suggest the presence of deformed shorelines higher than those originally ascribed to the supposedly horizontal Glenwood level. Finally, the Lake Michigan region as a whole appears to behave in a manner similar to other areas, both local (other Great Lakes) and regional (U.S. east coast), that have experienced major isostatic changes. Detailed sedimentological and dating studies of field sites and additional

  10. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  11. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  12. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and has tried to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper has also made an original contribution by proposing 'big data' terminology in Tamil. The paper further suggests a few areas to explore using big data in Tamil on the lines of the Tamil Nadu Government 'vision 2023'. Whilst big data has something to offer everyone, it ...

  13. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  15. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry', from the Council for Science and Society and the Outer Circle Policy Unit, is considered, with special reference to any future inquiry that may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors, and an attempt is made to tackle the problem that participation in a public inquiry comes far too late to be objective. The author feels that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries, but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  16. Great Lakes

    Science.gov (United States)

    Edsall, Thomas A.; Mac, Michael J.; Opler, Paul A.; Puckett Haecker, Catherine E.; Doran, Peter D.

    1998-01-01

    The Great Lakes region, as defined here, includes the Great Lakes and their drainage basins in Minnesota, Wisconsin, Illinois, Indiana, Ohio, Pennsylvania, and New York. The region also includes the portions of Minnesota, Wisconsin, and the 21 northernmost counties of Illinois that lie in the Mississippi River drainage basin, outside the floodplain of the river. The region spans about 9º of latitude and 20º of longitude and lies roughly halfway between the equator and the North Pole in a lowland corridor that extends from the Gulf of Mexico to the Arctic Ocean.The Great Lakes are the most prominent natural feature of the region (Fig. 1). They have a combined surface area of about 245,000 square kilometers and are among the largest, deepest lakes in the world. They are the largest single aggregation of fresh water on the planet (excluding the polar ice caps) and are the only glacial feature on Earth visible from the surface of the moon (The Nature Conservancy 1994a).The Great Lakes moderate the region’s climate, which presently ranges from subarctic in the north to humid continental warm in the south (Fig. 2), reflecting the movement of major weather masses from the north and south (U.S. Department of the Interior 1970; Eichenlaub 1979). The lakes act as heat sinks in summer and heat sources in winter and are major reservoirs that help humidify much of the region. They also create local precipitation belts in areas where air masses are pushed across the lakes by prevailing winds, pick up moisture from the lake surface, and then drop that moisture over land on the other side of the lake. The mean annual frost-free period—a general measure of the growing-season length for plants and some cold-blooded animals—varies from 60 days at higher elevations in the north to 160 days in lakeshore areas in the south. The climate influences the general distribution of wild plants and animals in the region and also influences the activities and distribution of the human

  17. Lakes, Lagerstaetten, and Evolution

    Science.gov (United States)

    Kordesch, E. G.; Park, L. E.

    2001-12-01

    The diversity of terrestrial systems is estimated to be greater than that of the marine realm. However, no hard data yet exist to substantiate this claim. Ancient lacustrine deposits may preserve an exceptionally diverse fossil fauna and aid in determining continental faunal diversities. Fossils preserved in lake deposits, especially those with exceptional preservation (i.e., Konservat Lagerstaetten), may represent a dependable means of determining species diversity changes in the terrestrial environment because of their faunal completeness. Important Konservat Lagerstaetten, such as the Green River Formation (US) and Messel (Germany), both Eocene in age, are found in lake sediments and show a remarkable faunal diversity for both vertebrates and invertebrates. To date, information from nearly 25 lake lagerstaetten, derived from different types of lake basins from the Carboniferous to the Miocene, has been collected and described. Carboniferous sites derive from the cyclothems of the Midcontinent of the US, while many Cenozoic sites have been described from North and South America as well as Europe and Australia. Asian sites contain fossils from the Mesozoic and Cenozoic. With these data, insight into the evolutionary processes associated with lake systems can be examined. Do lakes act as unique evolutionary crucibles in contrast to marine systems? The speciation of cichlid fishes in present-day African lakes appears to be very high and is attributed to the diversity of environments found in large rift lakes. Is this true of all ancient lakes or just large rift lakes? The longevity of a lake system may be an important factor in allowing speciation and evolutionary processes to occur; marine systems are limited only by the existence of suitable environments, as controlled by tectonics and sea-level changes on the order of tens of millions of years. Rift lakes are normally the longest lived, persisting for millions of years. Perhaps there are only certain types of lakes in which speciation of

  18. Suspended-sediment budget, flow distribution, and lake circulation for the Fox Chain of Lakes in Lake and McHenry Counties, Illinois, 1997-99

    Science.gov (United States)

    Schrader, David L.; Holmes, Robert R.

    2000-01-01

    The Fox Chain of Lakes is a glacial lake system in McHenry and Lake Counties in northern Illinois and southern Wisconsin. Sedimentation and nutrient overloading have occurred in the lake system since the first dam was built (1907) in McHenry to raise water levels in the lake system. Using data collected from December 1, 1997, to June 1, 1999, suspended-sediment budgets were constructed for the most upstream lake in the system, Grass Lake, and for the lakes downstream from Grass Lake. A total of 64,900 tons of suspended sediment entered Grass Lake during the study, whereas a total of 70,600 tons of suspended sediment exited the lake, indicating a net scour of 5,700 tons of sediment. A total of 44,100 tons of suspended sediment was measured exiting the Fox Chain of Lakes at Johnsburg, whereas 85,600 tons entered the system downstream from Grass Lake. These suspended-sediment loads indicate a net deposition of 41,500 tons downstream from Grass Lake, which represents a trapping efficiency of 48.5 percent. A large amount of recreational boating takes place on the Fox Chain of Lakes during summer months, and suspended-sediment load was observed to rise from 110 tons per day to 339 tons per day during the 1999 Memorial Day weekend (May 26-31, 1999). Presumably, this rise was the result of the boating traffic because no other hydrologic event is known to have occurred that might have caused the rise. This study covers a relatively short period and may not represent the long-term processes of the Fox Chain of Lakes system, although the sediment transport was probably higher than an average year. The bed sediments found on the bottom of the lakes are composed of mainly fine particles in the silt-clay range. The Grass Lake sediments were characterized as black peat with an organic content of between 9 and 18 percent, and the median particle size ranged from 0.000811 to 0.0013976 inches. Other bed material samples were collected at streamflow-gaging stations on the
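
    The budget arithmetic in this record reduces to simple flux differencing. A minimal sketch, using the load totals quoted above (the function and variable names are illustrative, not the authors'):

```python
# Suspended-sediment budget for the Fox Chain of Lakes study (values in
# tons, from the 1997-99 totals quoted in the abstract).
# A negative net indicates scour; a positive net indicates deposition.

def net_deposition(inflow_tons, outflow_tons):
    """Sediment retained in a reach: inflow minus outflow."""
    return inflow_tons - outflow_tons

# Grass Lake: more sediment exited than entered -> net scour of 5,700 tons.
grass_net = net_deposition(inflow_tons=64_900, outflow_tons=70_600)

# Reach downstream from Grass Lake: 85,600 tons in, 44,100 tons out at
# Johnsburg -> net deposition of 41,500 tons.
downstream_net = net_deposition(inflow_tons=85_600, outflow_tons=44_100)

# Trapping efficiency: fraction of the incoming load retained (~48.5 %).
trap_eff = downstream_net / 85_600 * 100
```

    The same differencing applies to any reach-scale load budget once paired inflow and outflow loads are available.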

  19. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. It is also aimed at those who know Hadoop and want to build intelligent applications over big data with R packages. Basic knowledge of R is helpful.

  20. 33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...

  1. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have acquired their own tribune on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. Most of these techniques are far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary, evidence-based approach in forensic science and medicine. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. First evidence of successful natural reproduction by planted lake trout in Lake Huron

    Science.gov (United States)

    Nester, Robert T.; Poe, Thomas P.

    1984-01-01

    Twenty-two lake trout (Salvelinus namaycush) swim-up fry, 24-27 mm long, were captured with emergent fry traps and a tow net in northwestern Lake Huron on a small nearshore reef off Alpena, Michigan, between May 10 and June 1, 1982. These catches represent the first evidence of successful production of swim-up fry by planted, hatchery-reared lake trout in Lake Huron since the lake trout rehabilitation program began in 1973.

  3. Estimation of lake water - groundwater interactions in meromictic mining lakes by modelling isotope signatures of lake water.

    Science.gov (United States)

    Seebach, Anne; Dietz, Severine; Lessmann, Dieter; Knoeller, Kay

    2008-03-01

    A method is presented to assess lake water-groundwater interactions by modelling isotope signatures of lake water using meteorological parameters and field data. The modelling of δ18O and δD variations offers information about the groundwater influx into a meromictic Lusatian mining lake. To this end, a water balance model is combined with an isotope water balance model to estimate analogies between simulated and measured isotope signatures within the lake water body. The model is operated with different evaporation rates to predict δ18O and δD values in a lake that is controlled only by weather conditions, with neither groundwater inflow nor outflow. Comparisons between modelled and measured isotope values show whether or not the lake is fed by groundwater. Furthermore, our investigations show that an adaptation of the Craig and Gordon model [H. Craig, L.I. Gordon. Deuterium and oxygen-18 variations in the ocean and the marine atmosphere. In Stable Isotopes in Oceanographic Studies and Paleotemperature, Spoleto, E. Tongiorgi (Ed.), pp. 9-130, Consiglio Nazionale delle Ricerche, Laboratorio di Geologia Nucleare, Pisa (1965).] to the specific conditions of temperate regions seems necessary.
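
    The coupled water/isotope balance idea can be sketched for a completely mixed lake. This toy is not the study's model: the fluxes, the δ values, and the fixed evaporate composition (which a real application would take from a Craig-Gordon-type model) are all invented for illustration:

```python
# Toy isotope mass balance for a completely mixed lake, one time step.
# All delta values are in per mil; all fluxes are in m3 per step.
# Every number below is an illustrative assumption, not study data.

def step_delta(volume, delta_lake, inflows, evaporation, delta_evap):
    """Update lake volume and its isotope signature over one step.

    inflows: list of (flux, delta) pairs (precipitation, runoff,
    groundwater). Evaporation removes water with composition delta_evap;
    in a real model delta_evap comes from a Craig-Gordon-type scheme.
    """
    q_in = sum(q for q, _ in inflows)
    # Isotope mass: current stock + inflow mass - evaporated mass.
    mass = (volume * delta_lake
            + sum(q * d for q, d in inflows)
            - evaporation * delta_evap)
    volume_new = volume + q_in - evaporation
    return volume_new, mass / volume_new

# One step: inflow lighter than the lake, evaporate lighter still,
# so the lake water becomes isotopically enriched (delta rises).
v, d = step_delta(volume=1.0e6, delta_lake=-8.0,
                  inflows=[(4.0e4, -10.0)],
                  evaporation=5.0e4, delta_evap=-20.0)
```

    Comparing such a "weather-only" trajectory against measured δ values is the gist of the detection method: a systematic offset points to unaccounted groundwater exchange.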

  4. Areal distribution and concentration of contaminants of concern in surficial streambed and lakebed sediments, Lake St. Clair and tributaries, Michigan, 1990-2003

    Science.gov (United States)

    Rachol, Cynthia M.; Button, Daniel T.

    2006-01-01

    As part of the Lake St. Clair Regional Monitoring Project, the U.S. Geological Survey evaluated data collected from surficial streambed and lakebed sediments in the Lake Erie-Lake St. Clair drainages. This study incorporates data collected from 1990 through 2003 and focuses primarily on the U.S. part of the Lake St. Clair Basin, including Lake St. Clair, the St. Clair River, and tributaries to Lake St. Clair. Comparable data from the Canadian part of the study area are included where available. The data are compiled into 4 chemical classes and consist of 21 compounds. The data are compared to effects-based sediment-quality guidelines, where the Threshold Effect Level and Lowest Effect Level represent concentrations below which adverse effects on biota are not expected, and the Probable Effect Level and Severe Effect Level represent concentrations above which adverse effects on biota are expected to be frequent. Maps in the report show the spatial distribution of the sampling locations and illustrate the concentrations relative to the selected sediment-quality guidelines. These maps indicate that sediment samples from certain areas routinely had contaminant concentrations greater than the Threshold Effect Level or Lowest Effect Level. These locations are the upper reach of the St. Clair River, the main stem and mouth of the Clinton River, Big Beaver Creek, Red Run, and Paint Creek. Maps also indicate areas that routinely contained sediment contaminant concentrations greater than the Probable Effect Level or Severe Effect Level. These locations include the upper reach of the St. Clair River, the main stem and mouth of the Clinton River, Red Run, within direct tributaries along Lake St. Clair and in marinas within the lake, and within the Clinton River headwaters in Oakland County. Although most samples collected within Lake St. Clair were from sites adjacent to the mouths of its tributaries, samples analyzed for trace-element concentrations

  5. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA big data programs, projects, missions, and activities. The Task Force focused on topics such as the existing and planned evolution of NASA's science data cyber-infrastructure, which supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry, and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  6. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patients' care processes and of individual patients' behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the gathered data and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  7. Lake Sturgeon, Acipenser fulvescens, movements in Rainy Lake, Minnesota and Ontario

    Science.gov (United States)

    Adams, W.E.; Kallemeyn, L.W.; Willis, D.W.

    2006-01-01

    Rainy Lake, Minnesota-Ontario, contains a native population of Lake Sturgeon (Acipenser fulvescens) that has gone largely unstudied. The objective of this descriptive study was to summarize generalized Lake Sturgeon movement patterns through the use of biotelemetry. Telemetry data reinforced the high utilization of the Squirrel Falls geographic location by Lake Sturgeon, with 37% of the re-locations occurring in that area. Other spring aggregations occurred in areas associated with Kettle Falls, the Pipestone River, and the Rat River, which could indicate spawning activity. Movement of Lake Sturgeon between the Seine River and the South Arm of Rainy Lake indicates the likelihood of one integrated population on the east end of the South Arm. The lack of re-locations in the Seine River during the months of September and October may have been due to Lake Sturgeon moving into deeper water areas of the Seine River and out of the range of radio telemetry gear or simply moving back into the South Arm. Due to the movements between Minnesota and Ontario, coordination of management efforts among provincial, state, and federal agencies will be important.

  8. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV-based Berlin Inventory of Gambling Behavior Screening tool (BIG-S) in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
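
    The reported rates follow directly from a 2x2 confusion matrix. A minimal sketch; the cell counts below are hypothetical values chosen to reproduce the reported percentages, not the study's published table:

```python
# Screening-test accuracy from a 2x2 confusion matrix.
# tp/fn: diagnosed patients screened positive/negative;
# tn/fp: comparison group screened negative/positive.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported 99.7% / 96.2%
# (the abstract reports only the rates, not the cell counts).
sensitivity, specificity = sens_spec(tp=299, fn=1, tn=127, fp=5)
```

    With n=300 cases and n=132 controls, 299/300 and 127/132 round to the reported 99.7% and 96.2%.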

  9. Determining lake surface water temperatures worldwide using a tuned one-dimensional lake model (FLake, v1)

    Science.gov (United States)

    Layden, Aisling; MacCallum, Stuart N.; Merchant, Christopher J.

    2016-06-01

    A tuning method for FLake, a one-dimensional (1-D) freshwater lake model, is applied for the individual tuning of 244 globally distributed large lakes using observed lake surface water temperatures (LSWTs) derived from along-track scanning radiometers (ATSRs). The model, tuned using only three lake properties (lake depth, snow and ice albedo, and light extinction coefficient), substantially reduces the mean differences between modelled and observed LSWTs across various features of the annual cycle, including for saline and high-altitude lakes. Lakes whose lake-mean LSWT persists below 1 °C for part of the annual cycle are considered to be seasonally ice-covered. For trial seasonally ice-covered lakes (21 lakes), the daily mean and standard deviation (2σ) of absolute differences between the modelled and observed LSWTs are reduced from 3.07 °C ± 2.25 °C to 0.84 °C ± 0.51 °C by tuning the model. For all other trial lakes (14 non-ice-covered lakes), the improvement is from 3.55 °C ± 3.20 °C to 0.96 °C ± 0.63 °C. The post-tuning results for the 35 trial lakes (21 seasonally ice-covered lakes and 14 non-ice-covered lakes) are highly representative of the post-tuning results for all 244 lakes. For the 21 seasonally ice-covered lakes, the modelled response of the summer LSWTs to changes in snow and ice albedo is found to be statistically related to lake depth and latitude, which together explain 0.50 (R2adj, p = 0.001) of the inter-lake variance in summer LSWTs; lake depth alone explains 0.35 (p = 0.003). Lake characteristic information (snow and ice albedo and light extinction coefficient) is not available for many lakes, and the approach taken to tune the model bypasses the need to acquire detailed lake characteristic values. Furthermore, the tuned values of lake depth, snow and ice albedo, and light extinction coefficient for the 244 lakes provide some guidance on improving FLake LSWT modelling.
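
    The tuning loop itself (adjust a few physical parameters until the modelled LSWT best matches observations) can be sketched with a stand-in model. The sinusoidal "model", its depth dependence, and the synthetic observations below are placeholders invented for the example; they are not FLake:

```python
import math

# Stand-in annual LSWT model: greater effective depth damps and delays
# the seasonal cycle. This toy replaces FLake only to show the tuning
# loop; its functional form and numbers are invented.
def toy_lswt(day, depth):
    amp = 10.0 / (1.0 + 0.05 * depth)   # deeper lake -> smaller amplitude
    lag = 2.0 * depth                   # deeper lake -> later peak
    return 12.0 + amp * math.sin(2 * math.pi * (day - 120 - lag) / 365)

days = range(0, 365, 10)
obs = [toy_lswt(d, depth=15.0) for d in days]   # synthetic "observations"

# Cost function: RMSE between modelled and "observed" LSWT.
def rmse(depth):
    sq = sum((toy_lswt(d, depth) - o) ** 2 for d, o in zip(days, obs))
    return math.sqrt(sq / len(obs))

# Tune the single parameter by grid search; the true value is recovered.
best_depth = min((rmse(z), z) for z in [5, 10, 15, 20, 40])[1]
```

    The study tunes three parameters per lake against ATSR-derived LSWTs; the principle is the same with a larger search space and a real lake model.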

  10. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn; Liu, Yan, E-mail: ztduan@chd.edu.cn; Dai, Jiting, E-mail: ztduan@chd.edu.cn; Kang, Jun, E-mail: ztduan@chd.edu.cn [Chang'an University School of Information Engineering, Xi'an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi'an (China)]

    2014-10-06

    The big data environment creates the data conditions needed to improve the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and enables more intelligent and personalized traffic information services for traffic information users.

  11. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates the data conditions needed to improve the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and enables more intelligent and personalized traffic information services for traffic information users.

  12. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    Agriculture's external conditions and competitive environment are changing, and this will necessitate a development toward "big business", in which farms become even larger, more industrialized, and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  13. Geochemical monitoring of volcanic lakes. A generalized box model for active crater lakes

    Directory of Open Access Journals (Sweden)

    Franco Tassi

    2011-06-01

    In the past, variations in the chemical contents (SO42−, Cl−, cations) of crater lake water have not systematically demonstrated any relationship with eruptive activity. Intensive parameters (i.e., concentrations, temperature, pH, salinity) should be converted into extensive parameters (i.e., fluxes, changes with time of mass and solutes), taking into account all the internal and external chemical-physical factors that affect the crater lake system. This study presents a generalized box model approach that can be useful for the geochemical monitoring of active crater lakes, as highly dynamic natural systems. The mass budget of a lake is based on observations of physical variations over a certain period of time: lake volume (level, surface area), lake water temperature, meteorological precipitation, air humidity, wind velocity, input of spring water, and overflow of the lake. This first approach leads to quantification of the input and output fluxes that contribute to the actual crater lake volume. Estimating the input flux of the "volcanic" fluid (Qf, in kg/s), an unmeasurable subsurface parameter, and tracing its variations with time is the major focus during crater lake monitoring. By expanding the mass budget into an isotope and chemical budget of the lake, the box model helps to qualitatively characterize the fluids involved. The calculated Cl− content and δD ratio of the rising "volcanic" fluid define its origin. With reference to continuous monitoring of crater lakes, the present study provides tips that allow better calculation of Qf in the future. At present, this study offers the most comprehensive and up-to-date literature review on active crater lakes.
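
    The box-model bookkeeping, estimating the unmeasured volcanic input as the residual of the measurable budget terms, can be sketched as follows. The flux values are illustrative numbers, not measurements from any particular lake:

```python
# Residual estimate of the "volcanic" fluid input Qf (kg/s) from a
# crater lake mass budget. All terms are in kg/s; the example values
# are illustrative only.

def volcanic_input(dM_dt, precip, spring_in, evap, seepage_out, overflow):
    """Qf closes the mass budget:

    dM/dt = Qf + precip + spring_in - evap - seepage_out - overflow

    so Qf is the residual of the measurable terms.
    """
    return dM_dt - precip - spring_in + evap + seepage_out + overflow

# A slowly filling lake (dM/dt > 0) despite strong evaporative loss
# implies a substantial subsurface input.
q_f = volcanic_input(dM_dt=5.0, precip=20.0, spring_in=10.0,
                     evap=30.0, seepage_out=8.0, overflow=2.0)
```

    Tracking Qf through time, rather than any single intensive parameter, is the monitoring quantity the box model is built around.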

  14. Water-quality and lake-stage data for Wisconsin lakes, water year 2014

    Science.gov (United States)

    Manteufel, S. Bridgett; Robertson, Dale M.

    2017-05-25

    Introduction: The U.S. Geological Survey (USGS), in cooperation with local and other agencies, collects data at selected lakes throughout Wisconsin. These data, accumulated over many years, provide a database for developing an improved understanding of the water quality of lakes. To make these data available to interested parties outside the USGS, the data are published annually in this report series. The locations of water-quality and lake-stage stations in Wisconsin for water year 2014 are shown in figure 1. A water year is the 12-month period from October 1 through September 30. It is designated by the calendar year in which it ends. Thus, the period October 1, 2013, through September 30, 2014, is called “water year 2014.” The purpose of this report is to provide information about the chemical and physical characteristics of Wisconsin lakes. Data that have been collected at specific lakes, and information to aid in the interpretation of those data, are included in this report. Data collected include measurements of in-lake water quality and lake stage. Time series of Secchi depths, surface total phosphorus, and chlorophyll a concentrations collected during nonfrozen periods are included for many lakes. Graphs of vertical profiles of temperature, dissolved oxygen, pH, and specific conductance are included for sites where these parameters were measured. Descriptive information for each lake includes the location of the lake, area of the lake’s watershed, period for which data are available, revisions to previously published records, and pertinent remarks. Additional data, such as streamflow and water quality in tributary and outlet streams of some of the lakes, are published online at http://nwis.waterdata.usgs.gov/wi/nwis. Water-resources data, including stage and discharge data at most streamflow-gaging stations, are available online. The Wisconsin Water Science Center’s home page is at https://www.usgs.gov/centers/wisconsin-water-science-center. Information

  15. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background-independent methods, and unlike in other approaches, the quantum evolution is deterministic across the deep Planck regime.

  16. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  17. Lake Michigan lake trout PCB model forecast post audit (oral presentation)

    Science.gov (United States)

    Scenario forecasts for total PCBs in Lake Michigan (LM) lake trout were conducted using the linked LM2-Toxics and LM Food Chain models, supported by a suite of additional LM models. Efforts were conducted under the Lake Michigan Mass Balance Study and the post audit represents an...

  18. Simulating Lake-Groundwater Interactions During Decadal Climate Cycles: Accounting For Variable Lake Area In The Watershed

    Science.gov (United States)

    Virdi, M. L.; Lee, T. M.

    2009-12-01

    The volume and extent of a lake within the topo-bathymetry of a watershed can change substantially during wetter and drier climate cycles, altering the interaction of the lake with the groundwater flow system. Lake Starr and other seepage lakes in the permeable sandhills of central Florida are vulnerable to climate changes as they rely exclusively on rainfall and groundwater for inflows in a setting where annual rainfall and recharge vary widely. The groundwater inflow typically arrives from a small catchment area bordering the lake. The sinkhole origin of these lakes combined with groundwater pumping from underlying aquifers further complicate groundwater interactions. Understanding the lake-groundwater interactions and their effects on lake stage over multi-decadal climate cycles is needed to manage groundwater pumping and public expectation about future lake levels. The interdependence between climate, recharge, changing lake area and the groundwater catchment pose unique challenges to simulating lake-groundwater interactions. During the 10-year study period, Lake Starr stage fluctuated more than 13 feet and the lake surface area receded and expanded from 96 acres to 148 acres over drier and wetter years that included hurricanes, two El Nino events and a La Nina event. The recently developed Unsaturated Zone Flow (UZF1) and Lake (LAK7) packages for MODFLOW-2005 were used to simulate the changing lake sizes and the extent of the groundwater catchment contributing flow to the lake. The lake area was discretized to occupy the largest surface area at the highest observed stage and then allowed to change size. Lake cells convert to land cells and receive infiltration as receding lake area exposes the underlying unsaturated zone to rainfall and recharge. The unique model conceptualization also made it possible to capture the dynamic size of the groundwater catchment contributing to lake inflows, as the surface area and volume of the lake changed during the study
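
    The coupling described here, lake area shrinking and growing with stage, can be caricatured with a simple stage-update loop. The bowl geometry, flux values, and time step below are invented for illustration; this is a conceptual sketch, not the MODFLOW-2005 LAK7/UZF1 simulation used in the study:

```python
# Toy stage model for a seepage lake whose surface area depends on stage.
# Bowl-shaped bathymetry: area grows linearly with stage. All numbers
# are invented; stage in m, areas in m2, fluxes in m/step or m3/step.

def area(stage_m):
    """Stage-dependent lake surface area (linear 'bowl')."""
    return 4.0e5 + 8.0e4 * stage_m

def step(stage, rain, evap, gw_net, dt=1.0):
    """Advance stage one step: net volume change / current area.

    rain and evap act over the current (stage-dependent) area;
    gw_net is the net groundwater exchange in m3 per step.
    """
    a = area(stage)
    dv = (rain - evap) * a + gw_net
    return stage + dt * dv / a

stage = 2.0
for _ in range(12):                      # wet year: rain exceeds evaporation
    stage = step(stage, rain=0.10, evap=0.07, gw_net=5.0e3)
wet_stage = stage

for _ in range(12):                      # dry year: evaporation dominates
    stage = step(stage, rain=0.05, evap=0.09, gw_net=-2.0e3)
```

    Because the area feeds back on the stage update, identical climate forcing moves a full lake less than a shrunken one, which is one reason variable lake geometry matters when simulating multi-decadal climate cycles.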

  19. Lake Granbury and Lake Whitney Assessment Initiative Final Scientific/Technical Report Summary

    Energy Technology Data Exchange (ETDEWEB)

    Harris, B. L. [Texas AgriLife Research, College Station, TX (United States); Roelke, Daniel [Texas AgriLife Research, College Station, TX (United States); Brooks, Bryan [Texas AgriLife Research, College Station, TX (United States); Grover, James [Texas AgriLife Research, College Station, TX (United States)

    2010-10-11

    A team of Texas AgriLife Research, Baylor University and University of Texas at Arlington researchers studied the biology and ecology of Prymnesium parvum (golden algae) in Texas lakes using a three-fold approach that involved system-wide monitoring, experimentation at the microcosm and mesocosm scales, and mathematical modeling. The following are conclusions, to date, regarding this organism's ecology and potential strategies for mitigation of blooms by this organism. In-lake monitoring revealed that golden algae are present throughout the year, even in lakes where blooms do not occur. Compilation of our field monitoring data with data collected by Texas Parks and Wildlife and Brazos River Authority (a period spanning a decade) revealed that inflow and salinity variables affect bloom formations. Thresholds for algae populations vary per lake, likely due to adaptations to local conditions, and also to variations in lake-basin morphometry, especially the presence of coves that may serve as hydraulic storage zones for P. parvum populations. More specifically, our in-lake monitoring showed that the highly toxic bloom that occurred in Lake Granbury in the winter of 2006/2007 was eliminated by increased river inflow events. The bloom was flushed from the system. The lower salinities that resulted contributed to golden algae not blooming in the following years. However, flushing is not an absolute requirement for bloom termination. Laboratory experiments have shown that growth of golden algae can occur at salinities ~1-2 psu but only when temperatures are also low. This helps to explain why blooms are possible during winter months in Texas lakes. Our in-lake experiments in Lake Whitney and Lake Waco, as well as our laboratory experiments, revealed that cyanobacteria, or some other bacteria capable of producing algicides, were able to prevent golden algae from blooming. Identification of this organism is a high priority as it may be a key to managing golden algae

  20. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task that requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed: the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
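    The Lambda architecture discussed in this abstract pairs a batch layer, which periodically recomputes views over the full dataset, with a speed layer that keeps incremental real-time views; queries merge both. A minimal, illustrative Python sketch of that split (all class and method names here are hypothetical, not taken from any specific framework):

    ```python
    # Minimal sketch of the Lambda architecture: a batch layer recomputes
    # views over the full master dataset, a speed layer holds incremental
    # real-time views, and the serving layer merges both at query time.
    from collections import Counter

    class LambdaArchitecture:
        def __init__(self):
            self.master_dataset = []     # immutable, append-only raw events
            self.batch_view = Counter()  # recomputed from scratch by the batch layer
            self.speed_view = Counter()  # incremental updates since the last batch run

        def ingest(self, event):
            # New events go to both the master dataset and the speed layer.
            self.master_dataset.append(event)
            self.speed_view[event] += 1

        def run_batch(self):
            # Batch layer: recompute the view over the entire master dataset,
            # then discard the now-redundant real-time view.
            self.batch_view = Counter(self.master_dataset)
            self.speed_view.clear()

        def query(self, key):
            # Serving layer: merge batch and real-time views at query time.
            return self.batch_view[key] + self.speed_view[key]

    la = LambdaArchitecture()
    for e in ["click", "view", "click"]:
        la.ingest(e)
    la.run_batch()          # batch view now covers all three events
    la.ingest("click")      # arrives after the batch run, lands in the speed view
    print(la.query("click"))  # merged count: 3
    ```

    The Kappa architecture, by contrast, drops the batch layer entirely and treats all data as a single replayable stream.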

  1. Geophysical investigation of sentinel lakes in Lake, Seminole, Orange, and Volusia Counties, Florida

    Science.gov (United States)

    Reich, Christopher; Flocks, James; Davis, Jeffrey

    2012-01-01

    This study was initiated in cooperation with the St. Johns River Water Management District (SJRWMD) to investigate groundwater and surface-water interaction in designated sentinel lakes in central Florida. Sentinel lakes are a SJRWMD established set of priority water bodies (lakes) for which minimum flows and levels (MFLs) are determined. Understanding both the structure and lithology beneath these lakes can ultimately lead to a better understanding of the MFLs and why water levels fluctuate in certain lakes more so than in other lakes. These sentinel lakes have become important water bodies to use as water-fluctuation indicators in the SJRWMD Minimum Flows and Levels program and will be used to define long-term hydrologic and ecologic performance measures. Geologic control on lake hydrology remains poorly understood in this study area. Therefore, the U.S. Geological Survey investigated 16 of the 21 water bodies on the SJRWMD priority list. Geologic information was obtained by the tandem use of high-resolution seismic profiling (HRSP) and direct-current (DC) resistivity profiling to isolate both the geologic framework (structure) and composition (lithology). Previous HRSP surveys from various lakes in the study area have been successful in identifying karst features, such as subsidence sinkholes. However, by using this method only, it is difficult to image highly irregular or chaotic surfaces, such as collapse sinkholes. Resistivity profiling was used to complement HRSP by detecting porosity change within fractured or collapsed structures and increase the ability to fully characterize the subsurface. Lake Saunders (Lake County) is an example of a lake composed of a series of north-south-trending sinkholes that have joined to form one lake body. HRSP shows surface depressions and deformation in the substrate. Resistivity data likewise show areas in the southern part of the lake where resistivity shifts abruptly from approximately 400 ohm meters (ohm-m) along the

  2. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  3. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  4. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

    Full Text Available The evolution of information systems and the growth in the use of the Internet and social networks have caused an explosion in the amount of available data relevant to the activities of companies. The treatment of these data is therefore vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored across nine sectors of activity: financial, retail, healthcare, transport, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that, despite the potential for big data use to grow in the previously identified areas, there are still challenges that need to be considered and mitigated, namely the privacy of information, the existence of qualified human resources to work with Big Data and the promotion of a data-driven organizational culture.

  5. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to understand the phenomenon better, starting from a clearly stated definition. This book aims to be a starting point for executives who want (and need) to keep pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data are explained, and a series of different strategic approaches is provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data are discussed, some of them general - such as ethics, privacy, and ownership – while others concern more specific business situations (e.g., initial public offering, growth st...

  6. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  7. Great Lakes Literacy Principles

    Science.gov (United States)

    Fortner, Rosanne W.; Manzo, Lyndsey

    2011-03-01

    Lakes Superior, Huron, Michigan, Ontario, and Erie together form North America's Great Lakes, a region that contains 20% of the world's fresh surface water and is home to roughly one quarter of the U.S. population (Figure 1). Supporting a $4 billion sport fishing industry, plus $16 billion annually in boating, 1.5 million U.S. jobs, and $62 billion in annual wages directly, the Great Lakes form the backbone of a regional economy that is vital to the United States as a whole (see http://www.miseagrant.umich.edu/downloads/economy/11-708-Great-Lakes-Jobs.pdf). Yet the grandeur and importance of this freshwater resource are little understood, not only by people in the rest of the country but also by many in the region itself. To help address this lack of knowledge, the Centers for Ocean Sciences Education Excellence (COSEE) Great Lakes, supported by the U.S. National Science Foundation and the National Oceanic and Atmospheric Administration, developed literacy principles for the Great Lakes to serve as a guide for education of students and the public. These “Great Lakes Literacy Principles” represent an understanding of the Great Lakes' influences on society and society's influences on the Great Lakes.

  8. Human impact on lake ecosystems: the case of Lake Naivasha, Kenya

    African Journals Online (AJOL)

    Lake Naivasha is a wetland of national and international importance. However, it is under constant anthropogenic pressures, which include the quest for socioeconomic development within the lake ecosystem itself as well as other activities within the catchment. The lake is an important source of fresh water in an otherwise ...

  9. Lake whitefish and Diporeia spp. in the Great lakes: an overview

    Science.gov (United States)

    Nalepa, Thomas F.; Mohr, Lloyd C.; Henderson, Bryan A.; Madenjian, Charles P.; Schneeberger, Philip J.

    2005-01-01

    Because of growing concern in the Great Lakes over declines in abundance and growth of lake whitefish (Coregonus clupeaformis) and declines in abundance of the benthic amphipod Diporeia spp., a workshop was held to examine past and current trends, to explore trophic links, and to discuss the latest research results and needs. The workshop was divided into sessions on the status of populations in each of the lakes, bioenergetics and trophic dynamics, and exploitation and management. Abundance, growth, and condition of whitefish populations in Lakes Superior and Erie are stable and within the range of historical means, but these variables are declining in Lakes Michigan and Ontario and parts of Lake Huron. The loss of Diporeia spp., a major food item of whitefish, has been a factor in observed declines, particularly in Lake Ontario, but density-dependent factors also likely played a role in Lakes Michigan and Huron. The loss of Diporeia spp. is temporally linked to the introduction and proliferation of dreissenid mussels, but a direct cause for the negative response of Diporeia spp. has not been established. Given changes in whitefish populations, age-structured models need to be re-evaluated. Other whitefish research needs to include a better understanding of what environmental conditions lead to strong year-classes, improved aging techniques, and better information on individual population (stock) structure. Further collaborations between assessment biologists and researchers studying the lower food web would enhance an understanding of links between trophic levels.

  10. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available

    In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  11. Will Organization Design Be Affected By Big Data?

    Directory of Open Access Journals (Sweden)

    Giles Slinger

    2014-12-01

    Full Text Available Computing power and analytical methods allow us to create, collate, and analyze more data than ever before. When datasets are unusually large in volume, velocity, and variety, they are referred to as “big data.” Some observers have suggested that in order to cope with big data, (a) organizational structures will need to change and (b) the processes used to design organizations will be different. In this article, we differentiate big data from relatively slow-moving, linked people data. We argue that big data will change organizational structures as organizations pursue the opportunities presented by big data. The processes by which organizations are designed, however, will be relatively unaffected by big data. Instead, organization design processes will be more affected by the complex links found in people data.

  12. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  13. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to such data using the Apache Hadoop technology. The major part of the thesis is devoted to describing this technology. The last chapter deals with the way aggregation operations are applied and with the issues involved in implementing them. An overall evaluation of the work follows, together with possibilities for future use of the resulting system.

  14. Sources and distribution of microplastics in China's largest inland lake - Qinghai Lake.

    Science.gov (United States)

    Xiong, Xiong; Zhang, Kai; Chen, Xianchuan; Shi, Huahong; Luo, Ze; Wu, Chenxi

    2018-04-01

    Microplastic pollution was studied in China's largest inland lake - Qinghai Lake in this work. Microplastics were detected with abundances varying from 0.05 × 10⁵ to 7.58 × 10⁵ items km⁻² in the lake surface water, 0.03 × 10⁵ to 0.31 × 10⁵ items km⁻² in the inflowing rivers, 50 to 1292 items m⁻² in the lakeshore sediment, and 2 to 15 items per individual in the fish samples, respectively. Small microplastics (0.1-0.5 mm) dominated in the lake surface water, while large microplastics (1-5 mm) were more abundant in the river samples. Microplastics were predominantly in sheet and fiber shapes in the lake and river water samples but were more diverse in the lakeshore sediment samples. Polymer types of microplastics were mainly polyethylene (PE) and polypropylene (PP), as identified using Raman spectroscopy. Spatially, microplastic abundance was highest in the central part of the lake, likely due to transport by the lake current. Based on the higher abundance of microplastics near the tourist access points, plastic waste from tourism is considered an important source of microplastics in Qinghai Lake. As an important area for wildlife conservation, better waste management practices should be implemented, and waste disposal and recycling infrastructure should be improved for the protection of Qinghai Lake. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

    of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic

  16. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  17. Outflows of groundwater in lakes: case study of Lake Raduńske Górne

    Directory of Open Access Journals (Sweden)

    Cieśliński Roman

    2014-12-01

    Full Text Available The aim of the study was to locate and describe groundwater outflows in a selected lake basin. The study hypothesis was based on the observation, reported in the specialist literature, that groundwater outflows are one of the forms of lake water supply; it was also assumed that the lakes of the Kashubian Lake District are characterised by this form of supply. The time scope of the work covered the period from January 2011 to September 2012. The spatial scope of the work covered the area of Lake Raduńskie Górne, located in the Kashubian Lake District in northern Poland. The research plot was in the north-eastern part of the lake. Office work was aimed at gathering and studying source materials and maps; cartographic materials were analysed using MapInfo Professional 9.5. The purpose of the field work was to find the groundwater outflows in the basin of Lake Raduńskie Górne, and to this end diving was carried out in the lake. During the dives, audiovisual documentation was produced using a Nikon D90 camera with an Ikelite underwater housing for the Nikon D90 and an Ikelite DS 161 movie substrobe, as well as a GoPro HD HERO 2 Outdoor camera. During the project, four groundwater outflows were found, and audiovisual and photographic documentation of these springs was made. To systematise the typology of the discovered springs, new nomenclature was suggested, namely under-lake springs, with the subtypes under-lake slope spring and under-lake offshore spring.

  18. Changing values of lake ecosystem services as a result of bacteriological contamination on Lake Trzesiecko and Lake Wielimie, Poland

    Directory of Open Access Journals (Sweden)

    Cichoń Małgorzata

    2017-12-01

    Full Text Available Lake ecosystems are affected by tourism on the one hand and by development for tourism on the other. Lake ecosystem services include: water with its self-cleaning processes, air with climate-control processes, as well as flora and fauna. Utilisation of these services leads to interventions in the structure of ecosystems and their processes; only to a certain extent, specific to each type of environmental interference, does this remain within the limits of ecosystem resilience without leading to degradation. One of the threats is bacteriological contamination, for which the most reliable sanitation indicator is Escherichia coli. In lake water quality studies it is assumed that the lakeshore cannot be a source of bacteria. It has been hypothesised that the problem of bacterial contamination can be serious for places that do not have any infrastructure, especially sanitation. Consequently, the purpose of the study was to determine the extent to which lakeshore sanitation, in particular the level of bacteriological contamination, affects the value of services provided by the selected lake ecosystems (Lake Trzesiecko and Lake Wielimie – Szczecinek Lake District). Five selected services related to lake ecosystems are: water, control over the spread of contagious diseases, aesthetic values, tourism and recreation, as well as the hydrological cycle with its self-cleaning function. The services, as well as the criteria adopted for evaluation, allow us to conclude that the services provided by the lake ecosystems are suitable for fulfilling a recreational function. However, the inclusion of quality criteria for sanitary status has shown that the value of ecosystem services has dropped by as much as 50%. Value changes are visible primarily for water and aesthetic qualities. Such a significant decrease in the value of services clearly indicates the importance of the sanitary condition of lakes and of appropriate infrastructure. In view of the

  19. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  20. Resilience and Restoration of Lakes

    Directory of Open Access Journals (Sweden)

    Stephen R. Carpenter

    1997-06-01

    Full Text Available Lake water quality and ecosystem services are normally maintained by several feedbacks. Among these are nutrient retention and humic production by wetlands, nutrient retention and woody habitat production by riparian forests, food web structures that channel phosphorus to consumers rather than phytoplankton, and biogeochemical mechanisms that inhibit phosphorus recycling from sediments. In degraded lakes, these resilience mechanisms are replaced by new ones that connect lakes to larger, regional economic and social systems. New controls that maintain degraded lakes include runoff from agricultural and urban areas, absence of wetlands and riparian forests, and changes in lake food webs and biogeochemistry that channel phosphorus to blooms of nuisance algae. Economic analyses show that degraded lakes are significantly less valuable than normal lakes. Because of this difference in value, the economic benefits of restoring lakes could be used to create incentives for lake restoration.

  1. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler for carrying out unprecedented research studies and for implementing new models of healthcare delivery. It is therefore first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access, and proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.
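    The map-reduce model named among these tools splits a computation into a map phase that emits key-value pairs and a reduce phase that aggregates them per key after a shuffle. A minimal, illustrative sketch in plain Python (no Hadoop; the record contents are invented for the example):

    ```python
    # Map-reduce pattern in miniature: map emits (key, 1) pairs, the shuffle
    # groups pairs by key, and reduce sums the values per key.
    from itertools import groupby
    from operator import itemgetter

    def map_phase(records):
        # Map: emit one (word, 1) pair per word in each record.
        for record in records:
            for word in record.split():
                yield (word.lower(), 1)

    def reduce_phase(pairs):
        # Shuffle: sort and group pairs by key. Reduce: sum values per key.
        for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
            yield (key, sum(v for _, v in group))

    records = ["fever cough", "cough fatigue", "fever fever"]
    counts = dict(reduce_phase(map_phase(records)))
    print(counts["fever"])  # 3
    ```

    In a real cluster the map and reduce functions keep this shape, while the framework distributes records across nodes and performs the shuffle over the network.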

  2. Was the big bang hot

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and a search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  3. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013 CERN inaugurated the Passport to the Big Bang at a major public event: a scientific tourist trail through the Pays de Gex and the Canton of Geneva. Poster and programme.

  4. Yellowstone Lake Nanoarchaeota

    Directory of Open Access Journals (Sweden)

    Scott eClingenpeel

    2013-09-01

    Full Text Available Considerable Nanoarchaeota novelty and diversity were encountered in Yellowstone Lake, Yellowstone National Park, where sampling targeted lake floor hydrothermal vent fluids, streamers and sediments associated with these vents, and planktonic photic zones in three different regions of the lake. Significant homonucleotide repeats (HR) were observed in pyrosequence reads and in near full-length Sanger sequences, averaging 112 HR per 1,349 bp clone, and could confound diversity estimates derived from pyrosequencing by producing false nucleotide insertions or deletions (indels). However, Sanger sequencing of two different sets of PCR clones (110 bp, 1,349 bp) demonstrated that at least some of these indels are real. The majority of the Nanoarchaeota PCR amplicons were vent associated; however, curiously, one relatively small Nanoarchaeota OTU (70 pyrosequencing reads) was found only in photic zone water samples obtained from the region of the lake furthest removed from the hydrothermal regions. Extensive pyrosequencing failed to demonstrate the presence of an Ignicoccus lineage in this lake, suggesting that the Nanoarchaeota in this environment are associated with novel archaeal hosts. Defined phylogroups based on near full-length PCR clones document the significant Nanoarchaeota 16S rRNA gene diversity in this lake and firmly establish a terrestrial clade distinct from the marine Nanoarchaeota as well as from those in other geographical locations.

  5. Contaminant trends in lake trout and walleye from the Laurentian Great Lakes

    Science.gov (United States)

    DeVault, David S.; Hesselberg, Robert J.; Rodgers, Paul W.; Feist, Timothy J.

    1996-01-01

    Trends in PCBs, DDT, and other contaminants have been monitored in Great Lakes lake trout and walleye since the 1970s using composite samples of whole fish. Dramatic declines have been observed in concentrations of PCB, ΣDDT, dieldrin, and oxychlordane, with declines initially following first order loss kinetics. Mean PCB concentrations in Lake Michigan lake trout increased from 13 μg/g in 1972 to 23 μg/g in 1974, then declined to 2.6 μg/g by 1986. Between 1986 and 1992 there was little change in concentration, with 3.5 μg/g observed in 1992. ΣDDT in Lake Michigan trout followed a similar trend, decreasing from 19.2 μg/g in 1970 to 1.1 μg/g in 1986, and 1.2 μg/g in 1992. Similar trends were observed for PCBs and ΣDDT in lake trout from Lakes Superior, Huron and Ontario. Concentrations of both PCB and ΣDDT in Lake Erie walleye declined between 1977 and 1982, after which concentrations were relatively constant through 1990. When originally implemented it was assumed that trends in the mean contaminant concentrations in open-lake fish would serve as cost effective surrogates to trends in the water column. While water column data are still extremely limited it appears that for PCBs in lakes Michigan and Superior, trends in lake trout do reasonably mimic those in the water column over the long term. Hypotheses to explain the trends in contaminant concentrations are briefly reviewed. The original first order loss kinetics used to describe the initial decline do not explain the more recent leveling off of contaminant concentrations. Recent theories have examined the possibilities of multiple contaminant pools. We suggest another hypothesis, that changes in the food web may have resulted in increased bioaccumulation. However, a preliminary exploration of this hypothesis using a change point analysis was inconclusive.
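    The first-order loss kinetics that described the initial declines correspond to simple exponential decay, C(t) = C0·exp(-k·t). A small illustrative Python sketch (the rate constant and time horizon below are invented for the example, not fitted to the study's data):

    ```python
    # First-order loss kinetics: C(t) = C0 * exp(-k * t).
    # All numbers below are illustrative, not the study's fitted values.
    import math

    def first_order_decay(c0, k, t):
        # Concentration remaining after t years under first-order loss rate k.
        return c0 * math.exp(-k * t)

    # Example: starting from 23 ug/g (the 1974 Lake Michigan PCB peak reported
    # above) with a hypothetical half-life of 3.5 years:
    k = math.log(2) / 3.5               # rate constant for a 3.5-year half-life
    print(round(first_order_decay(23.0, k, 7.0), 2))  # two half-lives -> 5.75
    ```

    The leveling-off observed after 1986 is exactly what such a single-pool first-order model cannot reproduce, which is what motivates the multiple-pool and food-web hypotheses discussed in the abstract.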

  6. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  7. Integrating R and Hadoop for Big Data Analysis

    OpenAIRE

    Bogdan Oancea; Raluca Mariana Dragoescu

    2014-01-01

    Analyzing and working with big data could be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools ...

  8. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  9. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics, with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics, promising new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges that we have only just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. Our goal was to provide a comprehensive view of the state of 'Big Data' today, explore its strengths, weaknesses, and risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  10. Pollution at Lake Mariut

    International Nuclear Information System (INIS)

    Nour ElDin, H.; Halim, S. N.; Shalby, E.

    2004-01-01

    Lake Mariut, south of Alexandria, Egypt, has suffered in recent decades from intensive pollution as a result of the continuous discharge of huge amounts of agricultural wastewater, containing high concentrations of washed-off pesticides and fertilizers, in addition to untreated domestic and industrial wastewater. The overflow from the lake is discharged directly to the sea through the El-Max pumping station via the El-Umum drain. Lake Mariut is surrounded by a large number of different industrial activities, and the desert road cuts across the lake, which means that a wide range of pollutants cycle through the air and settle in the lake. Over time and across seasons, these pollutants, after accumulation and various chemical interactions, are released again from the lake, affecting the surrounding zone.

  11. Long-Term Variability of Satellite Lake Surface Water Temperatures in the Great Lakes

    Science.gov (United States)

    Gierach, M. M.; Matsumoto, K.; Holt, B.; McKinney, P. J.; Tokos, K.

    2014-12-01

    The Great Lakes are the largest group of freshwater lakes on Earth that approximately 37 million people depend upon for fresh drinking water, food, flood and drought mitigation, and natural resources that support industry, jobs, shipping and tourism. Recent reports have stated (e.g., the National Climate Assessment) that climate change can impact and exacerbate a range of risks to the Great Lakes, including changes in the range and distribution of certain fish species, increased invasive species and harmful algal blooms, declining beach health, and lengthened commercial navigation season. In this study, we will examine the impact of climate change on the Laurentian Great Lakes through investigation of long-term lake surface water temperatures (LSWT). We will use the ATSR Reprocessing for Climate: Lake Surface Water Temperature & Ice Cover (ARC-Lake) product over the period 1995-2012 to investigate individual and interlake variability. Specifically, we will quantify the seasonal amplitude of LSWTs, the first and last appearances of the 4°C isotherm (i.e., an important identifier of the seasonal evolution of the lakes denoting winter and summer stratification), and interpret these quantities in the context of global interannual climate variability such as ENSO.
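    The first and last appearances of the 4°C isotherm described above can be extracted from a daily LSWT series with a simple scan. The sketch below is an illustrative toy in plain Python, not the ARC-Lake processing chain; the data structure (a sorted list of day-of-year, temperature pairs) is an assumption for the example.

```python
def isotherm_appearances(series, threshold=4.0):
    """Return (first, last) days of year on which lake surface water
    temperature is at or above the threshold (default 4 deg C).

    series: list of (day_of_year, temperature_C) tuples, assumed sorted
    by day. Returns (None, None) if the threshold is never reached.
    """
    days = [day for day, temp in series if temp >= threshold]
    if not days:
        return (None, None)
    return (days[0], days[-1])

# Toy annual cycle: cold winter, warming spring/summer, cool autumn.
lswt = (
    [(d, -2.0) for d in range(1, 100)]
    + [(d, 4.0 + 0.1 * (d - 100)) for d in range(100, 250)]
    + [(d, 3.0) for d in range(250, 366)]
)
first, last = isotherm_appearances(lswt)  # here: days 100 and 249
```

In a real analysis the same scan would run per lake and per year, and the interval between `first` and `last` would bound the stratified season.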

  12. Lake Cadagno

    DEFF Research Database (Denmark)

    Tonolla, Mauro; Storelli, Nicola; Danza, Francesco

    2017-01-01

    Lake Cadagno (26 ha) is a crenogenic meromictic lake located in the Swiss Alps at 1921 m asl with a maximum depth of 21 m. The presence of crystalline rocks and a dolomite vein rich in gypsum in the catchment area makes the lake a typical “sulphuretum” dominated by coupled carbon and sulphur...... cycles. The chemocline lies at about 12 m depth, stabilized by density differences of salt-rich water supplied by sub-aquatic springs to the monimolimnion and of electrolyte-poor surface water feeding the mixolimnion. Steep sulphide and light gradients in the chemocline support the growth of a large...... in the chemocline. Small-celled PSB together with the sulfate-reducing bacterium Desulfocapsa thiozymogenes sp. form stable aggregates in the lake, which represent small microenvironments with an internal sulphur cycle. Eukaryotic primary producers in the anoxic zones are dominated by Cryptomonas phaseolus...

  13. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Caees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  14. Physics with Big Karl Brainstorming. Abstracts

    International Nuclear Information System (INIS)

    Machner, H.; Lieb, J.

    2000-08-01

    Before summarizing details of the meeting, a short description of the spectrometer facility Big Karl is given. The facility is essentially a new instrument using refurbished dipole magnets from its predecessor. The large-acceptance quadrupole magnets and the beam optics are new. Big Karl has a design very similar to the focusing spectrometers at MAMI (Mainz), AGOR (Groningen) and the high resolution spectrometer (HRS) in Hall A at Jefferson Laboratory, with ΔE/E = 10^-4 but at a somewhat lower maximum momentum. The focal plane detectors, consisting of multiwire drift chambers and scintillating hodoscopes, are similar. Unlike HRS, Big Karl still needs Cerenkov counters and polarimeters in its focal plane; these detectors are necessary to perform some of the experiments proposed during the brainstorming. In addition, Big Karl allows emission-angle reconstruction via track measurements in its focal plane with high resolution. In the following, the physics highlights and the proposed and potential experiments are summarized. During the meeting it became obvious that the physics to be explored at Big Karl can be grouped into five distinct categories, and this summary is organized accordingly. (orig.)

  15. Recent lake ice-out phenology within and among lake districts of Alaska, U.S.A.

    Science.gov (United States)

    Arp, Christopher D.; Jones, Benjamin M.; Grosse, Guido

    2013-01-01

    The timing of ice-out in high latitudes is a fundamental threshold for lake ecosystems and an indicator of climate change. In lake-rich regions, the loss of ice cover also plays a key role in landscape and climatic processes. Thus, there is a need to understand lake ice phenology at multiple scales. In this study, we observed ice-out timing on 55 large lakes in 11 lake districts across Alaska from 2007 to 2012 using satellite imagery. Sensor networks in two lake districts validated satellite observations and provided comparison with smaller lakes. Over this 6 yr period, the mean lake ice-out for all lakes was 27 May and ranged from 07 May in Kenai to 06 July in Arctic Coastal Plain lake districts with relatively low inter-annual variability. Approximately 80% of the variation in ice-out timing was explained by the date of 0°C air temperature isotherm and lake area. Shoreline irregularity, watershed area, and river connectivity explained additional variation in some districts. Coherence in ice-out timing within the lakes of each district was consistently strong over this 6 yr period, ranging from r-values of 0.5 to 0.9. Inter-district analysis of coherence also showed synchronous ice-out patterns with the exception of the two arctic coastal districts where ice-out occurs later (June–July) and climatology is sea-ice influenced. These patterns of lake ice phenology provide a spatially extensive baseline describing short-term temporal variability, which will help decipher longer term trends in ice phenology and aid in representing the role of lake ice in land and climate models in northern landscapes.
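    The reported relationship, in which ice-out timing is largely explained by the date of the 0°C air temperature isotherm plus lake area, amounts to a two-predictor linear model. The sketch below fits such a model by least squares on synthetic data; the coefficients and values are invented for illustration and are not taken from the study.

```python
import numpy as np

# Synthetic data for 55 lakes: ice-out day modeled as a linear function
# of the 0 deg C isotherm day and log lake area (coefficients invented).
rng = np.random.default_rng(0)
isotherm_day = rng.uniform(90, 150, size=55)   # day of year air temp crosses 0 C
log_area = rng.uniform(0, 4, size=55)          # log10 lake area (km^2)
ice_out = 10.0 + 0.9 * isotherm_day - 2.0 * log_area  # noiseless toy model

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(isotherm_day), isotherm_day, log_area])
coef, *_ = np.linalg.lstsq(X, ice_out, rcond=None)
# coef recovers the intercept and the two slopes of the toy model.
```

With real observations the residual variance, rather than being zero as in this noiseless toy, would correspond to the ~20% of variation the study attributes to shoreline irregularity, watershed area, and river connectivity.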

  16. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    Science.gov (United States)

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  17. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of advances in information technology and affordable data storage, we have entered an era of data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic studies have only recently paid attention to Big Data applications in water resources; as a result, water-resources Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension, and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data, and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, is proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data-Driven Decision) will be utilized more in water resources management in the future.
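    The MapReduce pattern mentioned in the abstract can be sketched in a few lines. This toy example uses invented station records and plain Python rather than Hadoop: it maps hydrology readings to key-value pairs, groups them by station (the shuffle stage), and reduces each group to a mean.

```python
from collections import defaultdict

# Toy hydrology records: (station_id, discharge in m^3/s) pairs.
records = [("S1", 10.0), ("S2", 5.0), ("S1", 14.0), ("S2", 7.0), ("S1", 12.0)]

# Map + shuffle: group every reading under its station key.
grouped = defaultdict(list)
for station, discharge in records:
    grouped[station].append(discharge)

# Reduce: collapse each group to a mean discharge per station.
mean_discharge = {station: sum(vals) / len(vals)
                  for station, vals in grouped.items()}
```

In a real Hadoop deployment the map and reduce stages would run in parallel across cluster nodes, with the framework handling the shuffle; the logic per key is the same.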

  18. 33 CFR 162.132 - Connecting waters from Lake Huron to Lake Erie; communications rules.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; communications rules. 162.132 Section 162.132 Navigation and Navigable Waters COAST... NAVIGATION REGULATIONS § 162.132 Connecting waters from Lake Huron to Lake Erie; communications rules. (a...

  19. 33 CFR 162.140 - Connecting waters from Lake Huron to Lake Erie; miscellaneous rules.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; miscellaneous rules. 162.140 Section 162.140 Navigation and Navigable Waters COAST... NAVIGATION REGULATIONS § 162.140 Connecting waters from Lake Huron to Lake Erie; miscellaneous rules. (a...

  20. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Full Text Available Farming is undergoing a digital revolution. Our existing review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. “app”? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and allows for a focus on the material consequences of big data in society.

  1. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data, by introducing novel optimization algorithms and codes capable of working in the big data setting along with applications in big data optimization, for both interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyze large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  2. Una aproximación a Big Data = An approach to Big Data

    OpenAIRE

    Puyol Moreno, Javier

    2014-01-01

    Big Data can be considered a trend in the advance of technology that has opened the door to a new approach to understanding and decision-making, used to describe the enormous quantities of data (structured, unstructured, and semi-structured) that would be too time-consuming and costly to load into a relational database for analysis. Thus, the concept of Big Data applies to all the information that cannot be processed or analyzed using ...

  3. Toward a Literature-Driven Definition of Big Data in Healthcare.

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
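    The proposed criterion, datasets with Log(n∗p) ≥ 7, can be applied mechanically once n and p are known. A minimal sketch, assuming the logarithm is base 10 as implied by the paper's volume-based definition:

```python
import math

def is_big_data(n, p):
    """Apply the proposed criterion: a dataset with n statistical
    individuals and p variables is 'big' when log10(n * p) >= 7."""
    return math.log10(n * p) >= 7

# A 10,000-individual dataset with 1,000 variables meets the threshold
# (log10(10^7) = 7), while a 200-subject, 50-variable study does not.
```

The hypothetical EMR-sized and small-study inputs are chosen only to bracket the threshold; the criterion says nothing about variety or velocity, which the paper treats as separate properties.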

  4. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health-care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice as an important component of a learning health-care system.

  5. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  6. Real-estate lakes

    Science.gov (United States)

    Rickert, David A.; Spieker, Andrew Maute

    1971-01-01

    Since the dawn of civilization waterfront land has been an irresistible attraction to man. Throughout history he has sought out locations fronting on oceans, rivers, and lakes. Originally sought for proximity to water supply and transportation, such locations are now sought more for their esthetic qualities and for recreation. Usable natural waterfront property is limited, however, and the more desirable sites in many of our urban areas have already been taken. The lack of available waterfront sites has led to the creation of many artificial bodies of water. The rapid suburbanization that has characterized urban growth in America since the end of World War II, together with increasing affluence and leisure time, has created a ready market for waterfront property. Accordingly, lake-centered subdivisions and developments dot the suburban landscape in many of our major urban areas. Literally thousands of lakes surrounded by homes have materialized during this period of rapid growth. Recently, several "new town" communities have been planned around this lake-centered concept. A lake can be either an asset or a liability to a community. A clean, clear, attractively landscaped lake is a definite asset, whereas a weed-choked, foul-smelling mudhole is a distinct liability. The urban environment poses both problems and imaginative opportunities in the development of lakes. Creation of a lake causes changes in all aspects of the environment. Hydrologic systems and ecological patterns are usually most severely altered. The developer should be aware of the potential changes; it is not sufficient merely to build a dam across a stream or to dig a hole in the ground. Development of a successful lake requires careful planning for site selection and design, followed by thorough and continual management. The purpose of this report is to describe the characteristics of real-estate lakes, to pinpoint potential problems, and to suggest possible planning and management guidelines.

  7. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  8. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  9. Lake Morphometry for NHD Lakes in California Region 18 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  10. Lake Morphometry for NHD Lakes in Tennessee Region 6 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  11. Lake Morphometry for NHD Lakes in Ohio Region 5 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  12. Bathymetry of Lake Ontario

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Ontario has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and make it more...

  13. Bathymetry of Lake Michigan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Michigan has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and make it more...

  14. Bathymetry of Lake Superior

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Superior has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and make it more...

  15. Bathymetry of Lake Huron

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Huron has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and make it more...

  16. Circulation and sedimentation in a tidal-influenced fjord lake: Lake McKerrow, New Zealand

    Science.gov (United States)

    Pickrill, R. A.; Irwin, J.; Shakespeare, B. S.

    1981-01-01

    Lake McKerrow is a tide-influenced fjord lake, separated from the open sea by a Holocene barrier spit. Fresh, oxygenated waters of the epilimnion overlie saline, deoxygenated waters of the hypolimnion. During winter, water from the Upper Hollyford River interflows along the pycnocline, depositing coarse silt on the steep delta and transporting finer sediment down-lake. An extensive sub-lacustrine channel system on the foreset delta slope is possibly maintained by turbidity currents. Saline waters of the hypolimnion are periodically replenished. During high tides and low lake levels saline water flows into the lake and downslope into the lake basin as a density current in a well defined channel.

  17. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  18. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

    Full Text Available This paper's objective is to assess, in light of the main works of Minsky, his view and analysis of what he called "Big Government": that huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and of regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  19. 33 CFR 162.130 - Connecting waters from Lake Huron to Lake Erie; general rules.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; general rules. 162.130 Section 162.130 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.130 Connecting waters from Lake Huron to Lake Erie; general rules. (a) Purpose. The...

  20. Classical propagation of strings across a big crunch/big bang singularity

    International Nuclear Information System (INIS)

    Niz, Gustavo; Turok, Neil

    2007-01-01

    One of the simplest time-dependent solutions of M theory consists of nine-dimensional Euclidean space times 1+1-dimensional compactified Milne space-time. With a further modding out by Z_2, the space-time represents two orbifold planes which collide and re-emerge, a process proposed as an explanation of the hot big bang [J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, Phys. Rev. D 64, 123522 (2001); P. J. Steinhardt and N. Turok, Science 296, 1436 (2002); N. Turok, M. Perry, and P. J. Steinhardt, Phys. Rev. D 70, 106004 (2004)]. When the two planes are near, the light states of the theory consist of winding M2-branes, describing fundamental strings in a particular ten-dimensional background. They suffer no blue-shift as the M theory dimension collapses, and their equations of motion are regular across the transition from big crunch to big bang. In this paper, we study the classical evolution of fundamental strings across the singularity in some detail. We also develop a simple semiclassical approximation to the quantum evolution which allows one to compute the quantum production of excitations on the string, and implement it in a simplified example.

  1. Great Lakes Restoration Initiative Great Lakes Mussel Watch(2009-2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Following the inception of the Great Lakes Restoration Initiative (GLRI) to address the significant environmental issues plaguing the Great Lakes region, the...

  2. 33 CFR 162.134 - Connecting waters from Lake Huron to Lake Erie; traffic rules.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; traffic rules. 162.134 Section 162.134 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.134 Connecting waters from Lake Huron to Lake Erie; traffic rules. (a) Detroit River. The...

  3. 33 CFR 162.138 - Connecting waters from Lake Huron to Lake Erie; speed rules.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; speed rules. 162.138 Section 162.138 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.138 Connecting waters from Lake Huron to Lake Erie; speed rules. (a) Maximum speed limit for...

  4. 33 CFR 162.136 - Connecting waters from Lake Huron to Lake Erie; anchorage grounds.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Connecting waters from Lake Huron to Lake Erie; anchorage grounds. 162.136 Section 162.136 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.136 Connecting waters from Lake Huron to Lake Erie; anchorage grounds. (a) In the Detroit...

  5. Limnology of Botos Lake, a tropical crater lake in Costa Rica.

    Science.gov (United States)

    Umaña, G

    2001-12-01

    Botos Lake, located in the Poas Volcano complex (Costa Rica), was sampled eight times from 1994 to 1996 for physicochemical conditions of the water column and phytoplankton community composition. Depth was measured at fixed intervals along several transects across the lake to determine its main morphometric characteristics. The lake has an outlet to the north. It lies 2580 m above sea level and is shallow, with a mean depth of 1.8 m and a relative depth of 2.42 (surface area 10.33 ha, estimated volume 47.3 hm3). The lake showed an isothermal water column on all occasions, but it heats and cools as a whole according to weather fluctuations. Water transparency reached the bottom on most occasions (> 9 m). The results support the idea that the lake is polymictic and oligotrophic. The lake has at least 23 species of planktonic algae, but it was always dominated by dinoflagellates, especially Peridinium inconspicuum. The shoreline supports sparse stands of Isoetes sp. and Eleocharis sp., mainly along the northern shore, where the bottom has a gentle slope and the forest does not reach the shore.
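    The relative depth quoted above is a standard morphometric index. Assuming Hutchinson's usual definition (the abstract does not state the formula), it is z_r(%) = 50 · z_max · √π / √A, with area A in m²; the maximum depth used below is not reported in the abstract but is inferred from the quoted relative depth, consistent with transparency reaching the bottom at > 9 m:

```python
import math

def relative_depth(z_max_m: float, area_m2: float) -> float:
    """Relative depth (%) = 50 * z_max * sqrt(pi) / sqrt(area)."""
    return 50.0 * z_max_m * math.sqrt(math.pi) / math.sqrt(area_m2)

# With the reported surface area of 10.33 ha (103,300 m^2), a maximum
# depth near 9 m reproduces the reported relative depth of ~2.4%.
print(round(relative_depth(8.8, 103_300), 2))  # → 2.43
```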

  6. The Information Panopticon in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Martin Berner

    2014-04-01

    Taking advantage of big data opportunities is challenging for traditional organizations. In this article, we take a panoptic view of big data – obtaining information from more sources and making it visible to all organizational levels. We suggest that big data requires the transformation from command and control hierarchies to post-bureaucratic organizational structures wherein employees at all levels can be empowered while simultaneously being controlled. We derive propositions that show how to best exploit big data technologies in organizations.

  7. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop; (2) Where do we stand in the applications of big data in radiation oncology?; and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio of panel presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  8. WE-H-BRB-00: Big Data in Radiation Oncology

    International Nuclear Information System (INIS)

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop; (2) Where do we stand in the applications of big data in radiation oncology?; and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio of panel presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) To discuss current and future sources of big data for use in radiation oncology research; (2) To optimize our current data collection by adopting new strategies from outside radiation oncology; (3) To determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  9. De impact van Big Data op Internationale Betrekkingen

    NARCIS (Netherlands)

    Zwitter, Andrej

    Big Data changes our daily lives, but does it also change international politics? In this contribution, Andrej Zwitter (NGIZ chair at Groningen University) argues that Big Data impacts international relations in ways that we are only now starting to understand. To comprehend how Big Data influences

  10. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  11. Big data and analytics strategic and organizational impacts

    CERN Document Server

    Morabito, Vincenzo

    2015-01-01

    This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners...

  12. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience – although I knew it was big, I hadn't realised how big.

  13. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.

  14. Toward a Literature-Driven Definition of Big Data in Healthcare

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log⁡(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488
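    The proposed criterion is easy to operationalize. A minimal sketch, assuming base-10 logarithms (the paper writes only "Log") and a hypothetical helper name:

```python
import math

def is_big_data(n: int, p: int) -> bool:
    """Criterion from the paper: a dataset is 'big' when log10(n * p) >= 7,
    i.e. when individuals x variables reaches ten million cells."""
    return math.log10(n * p) >= 7

# A cohort of 100,000 patients with 200 variables: 2e7 cells -> big data.
print(is_big_data(100_000, 200))  # True
# A classic epidemiologic study of 5,000 subjects, 50 variables -> not big.
print(is_big_data(5_000, 50))     # False
```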

  15. 75 FR 34934 - Safety Zone; Fireworks for the Virginia Lake Festival, Buggs Island Lake, Clarksville, VA

    Science.gov (United States)

    2010-06-21

    ...-AA00 Safety Zone; Fireworks for the Virginia Lake Festival, Buggs Island Lake, Clarksville, VA AGENCY... Fireworks for the Virginia Lake Festival event. This action is intended to restrict vessel traffic movement... Virginia Lake Festival, Buggs Island Lake, Clarksville, VA (a) Regulated Area. The following area is a...

  16. Holocene lake-level fluctuations of Lake Aricota, Southern Peru

    Science.gov (United States)

    Placzek, C.; Quade, Jay; Betancourt, J.L.

    2001-01-01

    Lacustrine deposits exposed around Lake Aricota, Peru (17°22′S), a 7.5-km² lake dammed by debris flows, provide a middle to late Holocene record of lake-level fluctuations. Chronological context for shoreline deposits was obtained from radiocarbon dating of vascular plant remains and other datable material with minimal ¹⁴C reservoir effects. Lake Titicaca (16°S), which is only 130 km to the northeast, shares a similar climatology. Comparisons with other marine and terrestrial records highlight emerging contradictions over the nature of mid-Holocene climate in the central Andes. © 2001 University of Washington.

  17. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  18. Big Data - What is it and why it matters.

    Science.gov (United States)

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  19. Research on information security in big data era

    Science.gov (United States)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the era of big data. This paper analyzes the challenges and causes of data security problems brought by big data, discusses the development trend of network attacks against the background of big data, and puts forward opinions on the development of security defense in technology, strategy and products.

  20. Stable isotope and hydrogeochemical studies of Beaver Lake and Radok Lake, MacRobertson Land, East Antarctica

    International Nuclear Information System (INIS)

    Wand, U.; Hermichen, W.D.

    1988-01-01

    Beaver Lake and Radok Lake, the largest known epishelf lake and the deepest freshwater lake on the Antarctic continent, respectively, were studied isotopically (δ²H, δ¹⁸O) and hydrogeochemically. Radok Lake is an isothermal and nonstratified, i.e. homogeneous, water body, while Beaver Lake is stratified with respect to temperature, salinity and isotopic composition. The results for the latter attest to freshwater (derived from snow and glacier melt) overlying seawater. (author)

  1. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    In recent years, the large volumes of data originating from social media sites and mobile communications, alongside data from business environments and institutions, have led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby generate market advantages. Thus, companies that turn to Big Data have a competitive advantage over other firms. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects of the Big Data concept and the principles for building, organizing and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also covers graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  2. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed, called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, Genetic Algorithm (GA)-based, Biogeography-Based Optimization (BBO)-based and recursive approaches, are also implemented. The experimental results show that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
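    As a rough illustration of the underlying idea (not the paper's implementation: the membership function is simplified to a linear Z-function, and an exhaustive grid search stands in for the BBBCO optimizer):

```python
import numpy as np

def fuzzy_2partition_entropy(hist, a, c):
    """Entropy of a fuzzy 2-partition of a gray-level histogram.
    The 'dark' membership is a linear Z-function with parameters a < c,
    a simplification of the parameterized memberships in the paper."""
    g = np.arange(len(hist))
    mu_dark = np.clip((c - g) / float(c - a), 0.0, 1.0)
    p = hist / hist.sum()
    p_dark = float((mu_dark * p).sum())
    p_bright = 1.0 - p_dark
    eps = 1e-12  # guard against log(0)
    return -(p_dark * np.log(p_dark + eps) + p_bright * np.log(p_bright + eps))

# Brute-force grid search over (a, c); the selected threshold is the
# crossover point (a + c) / 2, where the two memberships are equal.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
hist = np.histogram(pixels, bins=256, range=(0, 256))[0]
a_best, c_best = max(
    ((a, c) for a in range(0, 248, 8) for c in range(a + 8, 256, 8)),
    key=lambda ac: fuzzy_2partition_entropy(hist, *ac))
threshold = (a_best + c_best) / 2
print("selected threshold:", threshold)
```

    On this synthetic bimodal histogram the maximum-entropy partition places the threshold between the two gray-level modes, which is the behavior the entropy criterion is designed to produce.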

  3. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  4. Addressing big data issues in Scientific Data Infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Membrey, P.; Grosso, P.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2013-01-01

    Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses a nature and definition of Big Data that include such features as Volume,

  5. Lake Generated Microseisms at Yellowstone Lake as a Record of Ice Phenology

    Science.gov (United States)

    Mohd Mokhdhari, A. A.; Koper, K. D.; Burlacu, R.

    2017-12-01

    It has recently been shown that wave action in lakes produces microseisms, which generate noise peaks in the period range of 0.8-1.2 s as recorded by nearby seismic stations. Such noise peaks have been observed at seven seismic stations (H17A, LKWY, B208, B944, YTP, YLA, and YLT) located within 2 km of the Yellowstone Lake shoreline. Initial work using 2016 data shows that the variations in the microseism signals at Yellowstone Lake correspond with the freezing and thawing of lake ice: the seismic noise occurs more frequently in the spring, summer, and fall, and less commonly in the winter. If this can be confirmed, then lake-generated microseisms could provide a consistent measure of the freezing and melting dates of high-latitude lakes in remote areas. The seismic data would then be useful in assessing the effects of climate change on the ice phenology of those lakes. In this work, we analyze continuous seismic data recorded by the seven seismic stations around Yellowstone Lake for the years of 1995 to 2016. We generate probability distribution functions of power spectral density for each station to observe the broad elevation of energy near a period of 1 s. The time dependence of this 1-s seismic noise energy is analyzed by extracting the power spectral density at 1 s from every processed hour. The seismic observations are compared to direct measurements of the dates of ice-out and freeze-up as reported by rangers at Yellowstone National Park. We examine how accurate the seismic data are in recording the freezing and melting of Yellowstone Lake, and how the accuracy changes as a function of the number of stations used. We also examine how sensitive the results are to the particular range of periods that are analyzed.
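    The band-power extraction described above can be sketched on synthetic data; Welch's method and the exact band limits below are illustrative assumptions, not details taken from the abstract:

```python
import numpy as np
from scipy.signal import welch

fs = 40.0  # sample rate in Hz, typical of broadband seismic channels
t = np.arange(0, 3600, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic hour-long trace: white background noise plus a narrow
# lake-microseism peak at a period of 1 s (1 Hz).
trace = rng.normal(0.0, 1.0, t.size) + 5.0 * np.sin(2 * np.pi * 1.0 * t)

# Welch power spectral density estimate of the trace.
f, psd = welch(trace, fs=fs, nperseg=4096)

# Mean power in the 0.8-1.2 s period band versus a quiet reference band.
band = (f >= 1 / 1.2) & (f <= 1 / 0.8)
ref = (f >= 3.0) & (f <= 5.0)
ratio = psd[band].mean() / psd[ref].mean()
print("1-s band elevated by a factor of about", round(ratio))
```

    Tracking this band-to-reference ratio hour by hour is one simple way to turn the 1-s noise peak into a time series whose seasonal on/off pattern reflects lake freeze-up and ice-out.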

  6. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    The current era of information technology can be called the era of Big Data. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us explore and reinvent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of healthcare using big data and analytics. The emphasis is not only on the data that is being stored all the time, which helps in looking back at history, but on analysis to improve medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader extent using Hadoop, which happens to be just the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to conventional techniques.

  7. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  8. About Big Data and its Challenges and Benefits in Manufacturing

    OpenAIRE

    Bogdan NEDELCU

    2013-01-01

    The aim of this article is to show the importance of Big Data and its growing influence on companies. It also shows what kinds of big data are currently generated and how much big data is estimated to be generated. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. Also shown are some major influences that big data has on one major segment of industry (manufacturing) and the challenges that appear.

  9. Big Data Management in US Hospitals: Benefits and Barriers.

    Science.gov (United States)

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that hospitals expect a number of benefits from using big data analytics, including cost savings and business intelligence. In using big data, many hospitals have recognized challenges, including lack of experience and the cost of developing the analytics. Many hospitals will need to invest in acquiring adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  10. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. Analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and ...

  11. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping, where large amounts of information are collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints on the use of big data include the cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  12. Monitoring climate signal transfer into the varved lake sediments of Lake Czechowskie, Poland

    Science.gov (United States)

    Groß-Schmölders, Miriam; Ott, Florian; Brykała, Dariusz; Gierszewski, Piotr; Kaszubski, Michał; Kienel, Ulrike; Brauer, Achim

    2015-04-01

    In 2012 we started a monitoring program at Lake Czechowskie, Poland, because the lake preserves a long Holocene time series of calcite varves reaching up to recent times. The aim of the program is to understand how environmental and climatic conditions influence the hydrological conditions and, ultimately, the sediment deposition processes of the lake. Lake Czechowskie is located in northern Poland in the Pomeranian Lake District and is part of the Tuchola Forest national park. The landscape and the lake were formed by glacier retreat after the last (Weichselian) glaciation. Lake Czechowskie is a typical hardwater lake with a length of 1.4 km, an average width of 600 m and a lake surface area of ca. 4 km². The maximum depth of 32 m is reached in a rather small hollow in the eastern part of the lake. Two different types of sediment traps provide sediment samples at monthly resolution from different water depths (12 m, 26 m). In addition, hydrological data are measured, including water temperature at different depths, water inflow, throughflow and outflow, and the depth of visibility. These data allow us to describe the strength and duration of lake mixing in spring and autumn and its influence on sedimentation. The sediment samples were analyzed with respect to their dry weight (used to calculate the mean daily sediment flux), their inorganic and organic carbon contents, the stable C and O isotopes of organic matter and calcite, as well as the N isotopes of organic matter. For selected samples, dominant diatom taxa were determined. Our first results demonstrate the strong influence on sedimentation of the long winter of 2013, with ice cover until April. A rapid warming from -0.3 °C to 15.2 °C in only 9 days, starting on April 9th, resulted in fast ice break-up and a short but intensive lake mixing. As a consequence of this short mixing period, a strong algal bloom, especially of Fragilaria and Chrysophyceae, commenced in April and had its maximum in May. This bloom further induced biogenic
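    The mean daily sediment flux mentioned above is presumably derived from the trap dry weight in the usual way; the trap dimensions and masses in this sketch are illustrative, not values from the study:

```python
def sediment_flux(dry_mass_g: float, trap_area_m2: float, days: float) -> float:
    """Mean daily sediment flux in g m^-2 day^-1: dry mass collected
    divided by trap opening area and deployment time."""
    return dry_mass_g / (trap_area_m2 * days)

# Hypothetical example: 2.4 g of dry sediment collected over a 30-day
# deployment in a cylindrical trap with an opening area of 0.008 m^2.
print(round(sediment_flux(2.4, 0.008, 30), 2))  # → 10.0
```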

  13. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable, vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, also in individual patients, generates personal big data and requires new strategies for management in order to develop data-driven, individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  14. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  16. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, meaning the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data work ranges from data mining and data analysis to decision making, drawing statistical rules and mathematical patterns through systematic or automated reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. Readers will discover how to capture and analyze data, and will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems manage data monitoring, data processing and decision making in a realistic and effective way. Considering the big size of data, variety of data and frequent chan...

  17. Is Lake Chabot Eutrophic?

    Science.gov (United States)

    Pellegrini, K.; Logan, J.; Esterlis, P.; Lew, A.; Nguyen, M.

    2013-12-01

    Introduction/Abstract: Lake Chabot is an integral part of the East Bay watershed that provides habitats for animals and recreation for humans year-round. Lake Chabot has been in danger of eutrophication due to excessive dumping of phosphorus and nitrogen into the water from the fertilizers of nearby golf courses and neighboring houses. If the lake turned out to be eutrophic, it could seriously impact what is currently the standby emergency water supply for many Castro Valley residents. Eutrophication is the excessive richness of nutrients such as nitrogen and phosphorus in a lake, usually as a result of runoff. This buildup of nutrients causes algal blooms. When the algae die and decompose, they use up most of the oxygen in the water, causing the lake to become hypoxic. The fish in the lake can't breathe, and consequently suffocate. Other oxygen-dependent aquatic creatures die off as well. Needless to say, the eutrophication of a lake is bad news for the wildlife that lives in or around it. The level of eutrophication in our area in Northern California tends to increase during the late spring/early summer months, so our crew went out and took samples of Lake Chabot on June 2. We focused on the area of the lake where the water enters, known on the map as Honker Bay. We also took readings a ways down in deeper water for comparison's sake. Visually, the lake looked in bad shape. The water was a murky green that glimmered with particulate matter that swirled around the boat as we went by. In the Honker Bay region where we focused our testing, there were reeds bathed in algae that coated the surface of the lake in thick, swirling patterns. Surprisingly enough, however, our test results didn't reveal any extreme levels of phosphorus or nitrogen. They were slightly higher than usual, but not by any significant amount. The levels we found were high enough to stimulate plant and algae growth and promote eutrophication, but not enough to do any severe damage.
After a briefing with a
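    The abstract above stops short of classifying the lake. A standard way to turn such nutrient and clarity measurements into a trophic classification is Carlson's trophic state index (TSI). The sketch below applies its published formulas; the input values are chosen purely for illustration, since the study's actual readings are not reported in the abstract.

```python
import math

def tsi_secchi(sd_m):
    """Carlson trophic state index from Secchi depth (meters)."""
    return 60.0 - 14.41 * math.log(sd_m)

def tsi_chlorophyll(chl_ug_l):
    """TSI from chlorophyll-a concentration (micrograms per liter)."""
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_total_p(tp_ug_l):
    """TSI from total phosphorus (micrograms per liter)."""
    return 14.42 * math.log(tp_ug_l) + 4.15

def trophic_class(tsi):
    """Map a TSI value onto Carlson's conventional trophic categories."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

# Hypothetical reading, not a value from the study:
print(trophic_class(tsi_total_p(30.0)))  # total phosphorus of 30 ug/L -> eutrophic
```

On Carlson's scale, a TSI below 40 indicates oligotrophy and values from 50 to 70 indicate eutrophy, which is the distinction the study set out to test.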

  18. Limnology of Eifel maar lakes

    National Research Council Canada - National Science Library

    Scharf, Burkhard W; Björk, Sven

    1992-01-01

    ... & morphometry - Physical & chemical characteristics - Calcite precipitation & solution in Lake Laacher See - Investigations using sediment traps in Lake Gemundener Maar - Phytoplankton of Lake Weinfelder Maar...

  19. Investigation of Residence and Travel Times in a Large Floodplain Lake with Complex Lake-River Interactions: Poyang Lake (China)

    Directory of Open Access Journals (Sweden)

    Yunliang Li

    2015-04-01

    Full Text Available Most biochemical processes and associated water quality in lakes depend on their flushing abilities. The main objective of this study was to investigate the transport time scale in a large floodplain lake, Poyang Lake (China). A 2D hydrodynamic model (MIKE 21) was combined with dye tracer simulations to determine residence and travel times of the lake for various water level variation periods. The results indicate that Poyang Lake exhibits strong but spatially heterogeneous residence times that vary with its highly seasonal water level dynamics. Generally, the average residence times are less than 10 days along the lake’s main flow channels due to the prevailing northward flow pattern, whereas approximately 30 days were estimated during high water level conditions in the summer. Local topographically controlled flow patterns substantially increase the residence time in some bays, with values of six months to one year during all water level variation periods. Depending on changes in the water level regime, the travel times from the pollution sources to the lake outlet during the high and falling water level periods (up to 32 days) are four times greater than those under the rising and low water level periods (approximately seven days).
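    The residence times reported above were obtained by tracking dye-tracer concentrations in a hydrodynamic model. A common operational definition, sketched here on a synthetic tracer series rather than the study's model output, is the e-folding time: the time at which mean tracer concentration first falls to 1/e of its initial value.

```python
import math

def efolding_residence_time(times_days, concentrations):
    """Return the first time at which tracer concentration drops to 1/e
    of its initial value, interpolating linearly between samples."""
    threshold = concentrations[0] / math.e
    samples = list(zip(times_days, concentrations))
    for (t0, c_prev), (t1, c_next) in zip(samples, samples[1:]):
        if c_prev >= threshold >= c_next:
            frac = (c_prev - threshold) / (c_prev - c_next)
            return t0 + frac * (t1 - t0)
    return None  # tracer never flushed below 1/e within the record

# Synthetic tracer series decaying with a 10-day e-folding time:
times = list(range(0, 41, 2))
conc = [math.exp(-t / 10.0) for t in times]
print(round(efolding_residence_time(times, conc), 1))  # -> 10.0
```

In practice the same bookkeeping is applied per model grid cell, which is what produces the spatial maps of residence time described in the abstract.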

  20. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c) 2 . At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios

  1. Sexual difference in PCB concentrations of lake trout (Salvelinus namaycush) from Lake Ontario

    Science.gov (United States)

    Madenjian, Charles P.; Keir, Michael J.; Whittle, D. Michael; Noguchi, George E.

    2010-01-01

    We determined polychlorinated biphenyl (PCB) concentrations in 61 female lake trout (Salvelinus namaycush) and 71 male lake trout from Lake Ontario (Ontario, Canada and New York, United States). To estimate the expected change in PCB concentration due to spawning, PCB concentrations in gonads and in somatic tissue of lake trout were also determined. In addition, bioenergetics modeling was applied to investigate whether gross growth efficiency (GGE) differed between the sexes. Results showed that, on average, males were 22% higher in PCB concentration than females in Lake Ontario. Results from the PCB determinations of the gonads and somatic tissues revealed that shedding of the gametes led to 3% and 14% increases in PCB concentration for males and females, respectively. Therefore, shedding of the gametes could not explain the higher PCB concentration in male lake trout. According to the bioenergetics modeling results, GGE of males was about 2% higher than adult female GGE, on average. Thus, bioenergetics modeling could not explain the higher PCB concentrations exhibited by the males. Nevertheless, a sexual difference in GGE remained a plausible explanation for the sexual difference in PCB concentrations of the lake trout.
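    The seemingly counterintuitive result that shedding gametes increases whole-body PCB concentration follows from a simple mass balance: gonads carry a lower PCB concentration than somatic tissue, so removing them raises the concentration of the tissue that remains. The sketch below illustrates the calculation with hypothetical weights and concentrations, not the study's measurements.

```python
def post_spawning_concentration(c_whole, w_whole, c_gonad, w_gonad):
    """Whole-body PCB concentration after gametes are shed, from a
    mass balance on total PCB burden and body weight."""
    burden_after = c_whole * w_whole - c_gonad * w_gonad
    weight_after = w_whole - w_gonad
    return burden_after / weight_after

# Hypothetical female: a 2000 g fish at 1.0 ug/g PCB sheds 300 g of
# gonads whose PCB concentration (0.2 ug/g) is well below somatic levels.
c_after = post_spawning_concentration(c_whole=1.0, w_whole=2000.0,
                                      c_gonad=0.2, w_gonad=300.0)
pct_increase = 100.0 * (c_after / 1.0 - 1.0)
print(round(pct_increase, 1))  # -> 14.1 (% increase after spawning)
```

Because eggs are proportionally heavier and leaner than milt, the same arithmetic yields a larger post-spawning increase for females than for males, consistent with the 14% versus 3% figures above.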

  2. L-Lake macroinvertebrate community

    International Nuclear Information System (INIS)

    Specht, W.L.

    1996-06-01

    To characterize the present benthic macroinvertebrate community of L-Lake, Regions 5 and 7 of the reservoir were sampled in September 1995 at the same locations sampled in 1988 and 1989 during the L-Lake monitoring program. The macroinvertebrate community of 1995 is compared to that of 1988 and 1989. The species composition of L-Lake's macroinvertebrate community has changed considerably since 1988-1989, due primarily to maturation of the reservoir ecosystem. L-Lake contains a reasonably diverse macroinvertebrate community that is capable of supporting higher trophic levels, including a diverse assemblage of fish species. The L-Lake macroinvertebrate community is similar to those of many other southeastern reservoirs, and there is no indication that the macroinvertebrate community is perturbed by chemical or physical stressors

  3. L-Lake macroinvertebrate community

    Energy Technology Data Exchange (ETDEWEB)

    Specht, W.L.

    1996-06-01

    To characterize the present benthic macroinvertebrate community of L-Lake, Regions 5 and 7 of the reservoir were sampled in September 1995 at the same locations sampled in 1988 and 1989 during the L-Lake monitoring program. The macroinvertebrate community of 1995 is compared to that of 1988 and 1989. The species composition of L-Lake's macroinvertebrate community has changed considerably since 1988-1989, due primarily to maturation of the reservoir ecosystem. L-Lake contains a reasonably diverse macroinvertebrate community that is capable of supporting higher trophic levels, including a diverse assemblage of fish species. The L-Lake macroinvertebrate community is similar to those of many other southeastern reservoirs, and there is no indication that the macroinvertebrate community is perturbed by chemical or physical stressors.

  4. [Big data and their perspectives in radiation therapy].

    Science.gov (United States)

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  5. Current applications of big data in obstetric anesthesiology.

    Science.gov (United States)

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  6. 76 FR 25232 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-05-04

    ... a Flight Data Center (FDC) Notice to Airmen (NOTAM) as an emergency action of immediate flight... existing or anticipated at the affected airports. Because of the close and immediate relationship between... Anchorage, AK, Merrill Field, Takeoff Minimums and Obstacle DP, Amdt 1 Big Lake, AK, Big Lake, RNAV (GPS) RWY...

  7. Water-quality and lake-stage data for Wisconsin lakes, water years 2012–2013

    Science.gov (United States)

    Manteufel, S. Bridgett; Robertson, Dale M.

    2017-05-25

    IntroductionThe U.S. Geological Survey (USGS), in cooperation with local and other agencies, collects data at selected lakes throughout Wisconsin. These data, accumulated over many years, provide a data base for developing an improved understanding of the water quality of lakes. To make these data available to interested parties outside the USGS, the data are published annually in this report series. The locations of water-quality and lake-stage stations in Wisconsin for water year 2012 are shown in figure 1. A water year is the 12-month period from October 1 through September 30. It is designated by the calendar year in which it ends. Thus, the period October 1, 2011 through September 30, 2012, is called “water year 2012.”The purpose of this report is to provide information about the chemical and physical characteristics of Wisconsin lakes. Data that have been collected at specific lakes, and information to aid in the interpretation of those data, are included in this report. Data collected include measurements of in-lake water quality and lake stage. Time series of Secchi depths, surface total phosphorus and chlorophyll a concentrations collected during non-frozen periods are included for all lakes. Graphs of vertical profiles of temperature, dissolved oxygen, pH, and specific conductance are included for sites where these parameters were measured. Descriptive information for each lake includes: location of the lake, area of the lake’s watershed, period for which data are available, revisions to previously published records, and pertinent remarks. Additional data, such as streamflow and water quality in tributary and outlet streams of some of the lakes, are published online at http://nwis.waterdata.usgs.gov/wi/nwis.Water-resources data, including stage and discharge data at most streamflow-gaging stations, are available online. The Wisconsin Water Science Center’s home page is at https://www.usgs.gov/centers/wisconsin-water-science-center. Information on

  8. Hydrochemical determination of source water contributions to Lake Lungo and Lake Ripasottile (central Italy)

    Directory of Open Access Journals (Sweden)

    Claire Archer

    2016-12-01

    Full Text Available Lake Lungo and Lake Ripasottile are two shallow (4-5 m) lakes located in the Rieti Basin, central Italy, that have been described previously as surface outcroppings of the groundwater table. In this work, the two lakes as well as the springs and rivers that represent their potential source waters are characterized physicochemically and isotopically, using a combination of environmental tracers. Temperature and pH were measured, and water samples were analyzed for alkalinity, major ion concentrations, and stable isotope composition (δ2H and δ18O of water, δ13C of dissolved inorganic carbon, and δ34S and δ18O of sulfate). Chemical data were also investigated in terms of local meteorological data (air temperature, precipitation) to determine the sensitivity of lake parameters to changes in the surrounding environment. Groundwater, represented by samples taken from Santa Susanna Spring, was shown to be distinct, with SO42- and Mg2+ contents of 270 and 29 mg/L, respectively, and a heavy sulfate isotopic composition (δ34S=15.2 ‰ and δ18O=10 ‰). Outflow from the Santa Susanna Spring enters Lake Ripasottile via a canal, and both spring and lake water exhibit the same chemical distinctions and comparatively low seasonal variability. Major ion concentrations in Lake Lungo are similar to those of the Vicenna Riara Spring and are interpreted to represent the groundwater locally recharged within the plain. The δ13CDIC values exhibit the same groupings as the other chemical parameters, providing supporting evidence of the source relationships. Lake Lungo exhibited exceptional ranges of δ13CDIC (±5 ‰) and δ2H, δ18O (±5 ‰ and ±7 ‰, respectively), attributed to sensitivity to seasonal changes. The hydrochemistry results, particularly the major ion data, highlight how the two lakes, though geographically and morphologically similar, represent distinct hydrochemical facies. These data also show a different response in each lake to temperature and precipitation patterns in the basin that

  9. Lake trout (Salvelinus namaycush) suppression for bull trout (Salvelinus confluentus) recovery in Flathead Lake, Montana, North America

    Science.gov (United States)

    Hansen, Michael J.; Hansen, Barry S; Beauchamp, David A.

    2016-01-01

    Non-native lake trout Salvelinus namaycush displaced native bull trout Salvelinus confluentus in Flathead Lake, Montana, USA, after 1984, when Mysis diluviana became abundant following its introduction in upstream lakes in 1968–1976. We developed a simulation model to determine the fishing mortality rate on lake trout that would enable bull trout recovery. Model simulations indicated that suppression of adult lake trout by 75% from current abundance would reduce predation on bull trout by 90%. Current removals of lake trout through incentivized fishing contests have not been sufficient to suppress lake trout abundance as estimated by mark-recapture or indexed by stratified-random gill netting. In contrast, size structure, body condition, mortality, and maturity are changing in a manner consistent with a density-dependent reduction in lake trout abundance. Population modeling indicated that total fishing effort would need to increase 3-fold to reduce adult lake trout population density by 75%. We conclude that increased fishing effort would suppress lake trout population density and predation on juvenile bull trout, and thereby enable higher abundance of adult bull trout in Flathead Lake and its tributaries.

  10. Volume and Value of Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase in computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions such as: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  11. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study was to determine students' interest in learning using big book media. A big book is an enlarged version of an ordinary book; it contains simple words and images that match the content of the sentences and their spelling. From this, the researchers can gauge students' interest and the development of their knowledge. The work also trains the researchers to remain creative in developing learning media for students.

  12. Big Data Analytics Methodology in the Financial Industry

    Science.gov (United States)

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  13. Lake Morphometry for NHD Lakes in Upper Colorado Region 14 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  14. Lake Morphometry for NHD Lakes in North East Region 1 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  15. Lake Morphometry for NHD Lakes in Lower Colorado Region 15 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  16. Lake Morphometry for NHD Lakes in Upper Mississippi Region 7 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  17. Lake Morphometry for NHD Lakes in Rio Grande Region 13 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  18. Lake Morphometry for NHD Lakes in Pacific Northwest Region 17 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  19. Lake Morphometry for NHD Lakes in Lower Mississippi Region 8 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  20. Lake Morphometry for NHD Lakes in Texas-Gulf Region 12 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  1. 75 FR 22620 - Upper Klamath, Lower Klamath, Tule Lake, Bear Valley, and Clear Lake National Wildlife Refuges...

    Science.gov (United States)

    2010-04-29

    ...] Upper Klamath, Lower Klamath, Tule Lake, Bear Valley, and Clear Lake National Wildlife Refuges, Klamath..., Bear Valley, and Clear Lake National Wildlife Refuges (Refuges) located in Klamath County, Oregon, and..., Tule Lake, Bear Valley, and Clear Lake Refuges located in Klamath County, Oregon, and Siskiyou and...

  2. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  3. Big Data: Survey, Technologies, Opportunities, and Challenges

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  4. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    Science.gov (United States)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a big term nowadays. As demands grow and data-generation capabilities become more scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward "big data-as-a-service" is now talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve the security and other real-time problems of migrating big data to cloud-based platforms. This article focuses on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of performing big data analytics on a cloud platform, is in demand for a new era of growth. This article also surveys the available technologies and techniques for migrating big data to the cloud.

  5. The Key Lake project

    International Nuclear Information System (INIS)

    1991-01-01

    Key Lake is located in the Athabasca sandstone basin, 640 kilometers north of Saskatoon, Saskatchewan, Canada. The three sources of ore at Key Lake contain 70,100 tonnes of uranium. Features of the Key Lake Project were described under the key headings: work force, mining, mill process, tailings storage, permanent camp, environmental features, worker health and safety, and economic benefits. Appendices covering the historical background, construction projects, comparisons of western world mines, mining statistics, the Northern Saskatchewan surface lease, and Key Lake development and regulatory agencies were included.

  6. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple "crossover model" without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  7. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to hold the seeds of new, valuable operational insights for private companies and public organizations. While optimistic pronouncements are plentiful, research on Big Data in the public sector remains limited. This article examines how the public healthcare sector can reuse and exploit an ever-growing volume of data while taking public values into account. The article builds on a case study of the use of large volumes of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)use of data in new contexts is a multifaceted trade-off involving not only economic rationales and quality considerations, but also control over sensitive personal data and the ethical implications for citizens. In the DAMD case, data are on the one hand used "in the service of a good cause" to...

  8. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    Science.gov (United States)

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  9. Big data analytics in healthcare: promise and potential.

    Science.gov (United States)

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

  10. Data warehousing in the age of big data

    CERN Document Server

    Krishnan, Krish

    2013-01-01

    Data Warehousing in the Age of Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Expert author Krish Krishnan helps you make sense of how Big Data fits into the world of data warehousing in clear and concise detail. The book is presented in three distinct parts. Part 1 discusses Big Data, its technologies and use cases from early adopters. Part 2 addresses data warehousing, its shortcomings, and new architecture

  11. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

    Recently, Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  12. Big data and software defined networks

    CERN Document Server

    Taheri, Javid

    2018-01-01

    Big Data Analytics and Software Defined Networking (SDN) are helping to drive the management of the extraordinary growth in data and in the computer processing power provided by Cloud Data Centres (CDCs). This new book investigates areas where Big Data and SDN can help each other in delivering more efficient services.

  13. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that it becomes difficult to process using traditional data-processing applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. Larger data sets, rather than smaller ones, are required to spot business trends, anticipate diseases, conflicts, etc. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. This paper presents an overview of the Hadoop architecture, the different tools used for big data, and its security issues.

  14. Environmental effects of the Big Rapids dam remnant removal, Big Rapids, Michigan, 2000-02

    Science.gov (United States)

    Healy, Denis F.; Rheaume, Stephen J.; Simpson, J. Alan

    2003-01-01

    The U.S. Geological Survey (USGS), in cooperation with the city of Big Rapids, investigated the environmental effects of removal of a dam-foundation remnant and downstream cofferdam from the Muskegon River in Big Rapids, Mich. The USGS applied a multidiscipline approach, which determined the water quality, sediment character, and stream habitat before and after dam removal. Continuous water-quality data and discrete water-quality samples were collected, the movement of suspended and bed sediment were measured, changes in stream habitat were assessed, and streambed elevations were surveyed. Analyses of water upstream and downstream from the dam showed that the dam-foundation remnant did not affect water quality. Dissolved-oxygen concentrations downstream from the dam remnant were depressed for a short period (days) during the beginning of the dam removal, in part because of that removal effort. Sediment transport from July 2000 through March 2002 was 13,800 cubic yards more at the downstream site than the upstream site. This increase in sediment represents the remobilized sediment upstream from the dam, bank erosion when the impoundment was lowered, and contributions from small tributaries between the sites. Five habitat reaches were monitored before and after dam-remnant removal. The reaches consisted of a reference reach (A), upstream from the effects of the impoundment; the impoundment (B); and three sites below the impoundment where habitat changes were expected (C, D, and E, in downstream order). Stream-habitat assessment reaches varied in their responses to the dam-remnant removal. Reference reach A was not affected. In impoundment reach B, Great Lakes and Environmental Assessment Section (GLEAS) Procedure 51 ratings went from fair to excellent. For the three downstream reaches, reach C underwent slight habitat degradation, but ratings remained good; reach D underwent slight habitat degradation with ratings changing from excellent to good; and, in an area

  15. Palaeolimnological evidence of vulnerability of Lake Neusiedl (Austria) toward climate related changes since the last "vanished-lake" stage.

    Science.gov (United States)

    Tolotti, Monica; Milan, Manuela; Boscaini, Adriano; Soja, Gerhard; Herzig, Alois

    2013-04-01

    The palaeolimnological reconstruction of the secular evolution of European lakes of key socio-economic relevance, with respect to both large-scale (climate change) and local-scale (land use, tourism) environmental changes, represents one of the objectives of the project EuLakes (European Lakes Under Environmental Stressors, Supporting lake governance to mitigate the impact of climate change, Reg. N. 2CE243P3), launched in 2010 within the Central European Initiative. The project consortium comprises lakes of different morphology and prevalent human uses, including the meso-eutrophic Lake Neusiedl, the largest Austrian lake (total area 315 km2) and the westernmost shallow (mean depth 1.2 m) steppe lake of the Euro-Asiatic continent. The volume of Lake Neusiedl can change appreciably over the years with the changing balance between atmospheric precipitation and lake water evapotranspiration. The changing water budget, together with high lake salinity and turbidity, has important implications for the lake ecosystem. This contribution illustrates the results of a multi-proxy palaeolimnological reconstruction of the ecological changes that occurred in Lake Neusiedl during the last ca. 140 years, i.e. since the end of the last "vanished-lake" stage (1865-1871). Geochemical and biological proxies place the increase in lake productivity ca. 10 years earlier (1950s) than reported in the literature. Diatom species composition indicates a biological recovery of the lake in the late 1980s and suggests a second increase in lake productivity since the late 1990s, possibly related to the progressive increase in nitrogen input from agriculture. The abundance of diatoms typical of brackish waters indicates no significant long-term change in lake salinity, while variations in desiccation-tolerant species confirm the vulnerability of Lake Neusiedl to climate-driven changes in the lake water balance. This fragility is aggravated by the semi-arid climate conditions of the catchment.

  16. Big Data Analytics, Infectious Diseases and Associated Ethical Impacts

    OpenAIRE

    Garattini, C.; Raffle, J.; Aisyah, D. N.; Sartain, F.; Kozlakidis, Z.

    2017-01-01

    The exponential accumulation, processing and accrual of big data in healthcare are only possible through an equally rapidly evolving field of big data analytics. The latter offers the capacity to rationalize, understand and use big data to serve many different purposes, from improved services modelling to prediction of treatment outcomes, to greater patient and disease stratification. In the area of infectious diseases, the application of big data analytics has introduced a number of changes ...

  17. Evaluation of Data Management Systems for Geospatial Big Data

    OpenAIRE

    Amirian, Pouria; Basiri, Anahid; Winstanley, Adam C.

    2014-01-01

    Big Data encompasses the collection, management, processing and analysis of huge amounts of data that vary in type and change with high frequency. Often the data component of Big Data has a positional component as an important part of it, in various forms such as postal address, Internet Protocol (IP) address and geographical location. If the positional components in Big Data are extensively used in storage, retrieval, analysis, processing, visualization and knowledge discovery (geospatial Big Dat...

  18. Hydrological and solute budgets of Lake Qinghai, the largest lake on the Tibetan Plateau

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zhangdong [Chinese Academy of Sciences (CAS), Beijing (China); National Cheng Kung Univ., Tainan City (Taiwan); You, Chen-Feng [National Cheng Kung Univ., Tainan City (Taiwan); Wang, Yi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Shi, Yuewei [Bureau of Hydrology and Water Resources of Qinghai Province, Xining (China)

    2009-12-04

    Water level and chemistry of Lake Qinghai are sensitive to climate changes and are important for paleoclimatic implications. An accurate understanding of the hydrological and chemical budgets is crucial for quantifying geochemical proxies and the carbon cycle. Published water-budget results are first reviewed in this paper. The chemical budget and residence times of the major dissolved constituents in the lake are then estimated using a reliable water budget and newly obtained data on seasonal water chemistry. The results indicate that carbonate weathering is the most important riverine process, resulting in the dominance of Ca²⁺ and DIC in river waters and groundwater. The groundwater contribution to major dissolved constituents is relatively small (4.2 ± 0.5%). Wet atmospheric deposition contributes 7.4-44.0% of the annual soluble flux to the lake, resulting from eolian dust throughout the seasons. Estimates of the chemical budget further suggest that (1) Buha-type water dominates the chemical components of the lake water; (2) Na⁺, Cl⁻, Mg²⁺, and K⁺ in the lake water are enriched owing to their conservative behavior; and (3) precipitation of authigenic carbonates (low-Mg calcite, aragonite, and dolomite) quickly transfers dissolved Ca²⁺ into the bottom sediments of the lake, resulting in very low Ca²⁺ in the lake water. Therefore, authigenic carbonates in the sediments hold potential information on the relative contributions of different solute inputs to the lake and on past lake chemistry.
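The residence-time estimates mentioned above follow from a standing-stock-over-flux calculation. A minimal sketch of that arithmetic, using entirely hypothetical numbers (none are from the Lake Qinghai study):

```python
# Hypothetical illustration of a solute residence-time calculation of the
# kind used in lake chemical budgets. All numbers are invented, chosen only
# to show the arithmetic, not taken from the Lake Qinghai study.

lake_volume_km3 = 80.0            # assumed lake volume
concentration_mg_per_l = 500.0    # assumed mean Na+ concentration
annual_input_tonnes = 2.0e5       # assumed total annual Na+ input flux

# Standing stock of the solute in the lake (1 km^3 = 1e12 L; 1 tonne = 1e9 mg)
mass_in_lake_tonnes = lake_volume_km3 * 1e12 * concentration_mg_per_l / 1e9

# Residence time = standing stock / input flux, assuming steady state
residence_time_years = mass_in_lake_tonnes / annual_input_tonnes

print(f"standing stock: {mass_in_lake_tonnes:.3g} t")
print(f"residence time: {residence_time_years:.3g} years")
```

A long residence time relative to the mixing time is what makes an ion behave "conservatively" in the sense used in the abstract: it accumulates rather than being removed by precipitation or sedimentation.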

  19. Effects of acidity on primary productivity in lakes: phytoplankton. [Lakes Panther, Sagamore, and Woods]

    Energy Technology Data Exchange (ETDEWEB)

    Hendrey, G R

    1979-01-01

    Relationships between phytoplankton communities and lake acidity are being studied at Woods Lake (pH ca. 4.9), Sagamore Lake (pH ca. 5.5), and Panther Lake (pH ca. 7.0). The numbers of phytoplankton species observed as of July 31, 1979 are Woods 27, Sagamore 38, and Panther 64, conforming to other observations that species numbers decrease with increasing acidity. The patterns of increasing biomass and productivity found in Woods Lake may be atypical of similar oligotrophic lakes in that they develop rather slowly instead of occurring very close to ice-out. The contributions of netplankton (net > 48 µm), nannoplankton (48 > nanno > 20 µm) and ultraplankton (20 > ultra > 0.45 µm) to productivity per m² show that the smaller plankton are relatively more important in the more acid lakes. This pattern could be determined by nutrient availability (lake acidification leading to decreased availability of phosphorus). The amount of ¹⁴C-labelled dissolved photosynthate (¹⁴C-DOM), as a percent of total productivity, is ordered Woods > Sagamore > Panther. This is consistent with a hypothesis that microbial heterotrophic activity is reduced with increasing acidity, but the smaller phytoplankton may be more leaky at low pH. (ERB)

  20. Ecology of playa lakes

    Science.gov (United States)

    Haukos, David A.; Smith, Loren M.

    1992-01-01

    Between 25,000 and 30,000 playa lakes are in the playa lakes region of the southern high plains (Fig. 1). Most playas are in west Texas (about 20,000), and fewer, in New Mexico, Oklahoma, Kansas, and Colorado. The playa lakes region is one of the most intensively cultivated areas of North America. Dominant crops range from cotton in southern areas to cereal grains in the north. Therefore, most of the native short-grass prairie is gone, replaced by crops and, recently, grasses of the Conservation Reserve Program. Playas are the predominant wetlands and major wildlife habitat of the region.More than 115 bird species, including 20 species of waterfowl, and 10 mammal species have been documented in playas. Waterfowl nest in the area, producing up to 250,000 ducklings in wetter years. Dominant breeding and nesting species are mallards and blue-winged teals. During the very protracted breeding season, birds hatch from April through August. Several million shorebirds and waterfowl migrate through the area each spring and fall. More than 400,000 sandhill cranes migrate through and winter in the region, concentrating primarily on the larger saline lakes in the southern portion of the playa lakes region.The primary importance of the playa lakes region to waterfowl is as a wintering area. Wintering waterfowl populations in the playa lakes region range from 1 to 3 million birds, depending on fall precipitation patterns that determine the number of flooded playas. The most common wintering ducks are mallards, northern pintails, green-winged teals, and American wigeons. About 500,000 Canada geese and 100,000 lesser snow geese winter in the playa lakes region, and numbers of geese have increased annually since the early 1980’s. This chapter describes the physiography and ecology of playa lakes and their attributes that benefit waterfowl.

  1. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  2. West Virginia's big trees: setting the record straight

    Science.gov (United States)

    Melissa Thomas-Van Gundy; Robert. Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor, historian Rob Whetsell, approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  3. Sosiaalinen asiakassuhdejohtaminen ja big data

    OpenAIRE

    Toivonen, Topi-Antti

    2015-01-01

    This thesis examines social customer relationship management and the benefits that big data can bring to it. Social customer relationship management is a new term, unfamiliar to many. The study is motivated by the scarcity of research on the topic, the complete absence of Finnish-language research, and the potentially essential future role of social customer relationship management in companies' operations. Studies of big data often concentrate on its technical side, rather than on its applicat...

  4. Excess unsupported sup(210)Pb in lake sediment from Rocky Mountain lakes

    International Nuclear Information System (INIS)

    Norton, S.A.; Hess, C.T.; Blake, G.M.; Morrison, M.L.; Baron, J.

    1985-01-01

    Sediment cores from four high-altitude (approximately 3200 m) lakes in Rocky Mountain National Park, Colorado, were dated by ²¹⁰Pb chronology. Background (supported) ²¹⁰Pb activities for the four cores range from 0.26 to 0.93 Bq/g dry weight, high for typical oligotrophic lakes. Integrated unsupported ²¹⁰Pb ranges from 0.81 (a typical value for most lakes) to 11.0 Bq/cm². The ²¹⁰Pb activity in the surface sediments ranges from 1.48 to 22.2 Bq/g dry weight. Sediment from Lake Louise, the most unusual of the four, has 22.2 Bq/g dry weight at the sediment surface, an integrated unsupported ²¹⁰Pb of 11.0 Bq/cm², and supported ²¹⁰Pb of 0.74 Bq/g dry weight. The ²²⁶Ra content of the sediment is insufficient to explain either the high unsupported ²¹⁰Pb or the ²²²Rn content of the water column of Lake Louise, which averaged 96.2 Bq/L. We concluded that ²²²Rn-rich groundwater entering the lake is the source of the high ²²²Rn in the water column. This, in turn, is capable of supporting the unusually high ²¹⁰Pb flux to the sediment surface. Groundwater with high ²²²Rn may control the ²¹⁰Pb budget of lakes where sediment cores have integrated unsupported ²¹⁰Pb greater than 2 Bq/cm²
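Lead-210 chronologies of this kind rest on the radioactive decay of the unsupported fraction (half-life ≈ 22.3 years). A minimal sketch of the constant-initial-concentration (CIC) age calculation, using invented activities rather than values from these cores:

```python
import math

# 210Pb decay constant from its ~22.3-year half-life
HALF_LIFE_YEARS = 22.3
LAMBDA = math.log(2) / HALF_LIFE_YEARS

def cic_age(surface_unsupported, depth_unsupported):
    """Age of a sediment layer under the constant-initial-concentration
    (CIC) model: t = (1/lambda) * ln(A0 / A), where A0 is the unsupported
    activity at the surface and A the activity at the layer of interest."""
    return math.log(surface_unsupported / depth_unsupported) / LAMBDA

# Hypothetical unsupported activities (Bq/g dry weight), not from the study:
a_surface = 20.0  # unsupported 210Pb at the sediment surface
a_layer = 5.0     # unsupported 210Pb at some depth

print(f"layer age: {cic_age(a_surface, a_layer):.1f} years")
```

With the activity down to a quarter of its surface value, the layer is two half-lives old (about 44.6 years); deeper layers with less remaining unsupported ²¹⁰Pb date correspondingly older.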

  5. D-branes in a big bang/big crunch universe: Misner space

    International Nuclear Information System (INIS)

    Hikida, Yasuaki; Nayak, Rashmi R.; Panigrahi, Kamal L.

    2005-01-01

    We study D-branes in a two-dimensional Lorentzian orbifold R^{1,1}/Γ with a discrete boost Γ. This space is known as Misner or Milne space, and includes a big crunch/big bang singularity. In this space, there are D0-branes in spiral orbits and D1-branes with or without flux on them. In particular, we observe imaginary parts of partition functions, and interpret them as the rates of open string pair creation for D0-branes and emission of winding closed strings for D1-branes. These phenomena occur due to the time-dependence of the background. The open string 2→2 scattering amplitude on a D1-brane is also computed and found to be less singular than in the closed string case

  6. D-branes in a big bang/big crunch universe: Misner space

    Energy Technology Data Exchange (ETDEWEB)

    Hikida, Yasuaki [Theory Group, High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801 (Japan); Nayak, Rashmi R. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy); Panigrahi, Kamal L. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy)

    2005-09-01

    We study D-branes in a two-dimensional Lorentzian orbifold R^{1,1}/Γ with a discrete boost Γ. This space is known as Misner or Milne space, and includes a big crunch/big bang singularity. In this space, there are D0-branes in spiral orbits and D1-branes with or without flux on them. In particular, we observe imaginary parts of partition functions, and interpret them as the rates of open string pair creation for D0-branes and emission of winding closed strings for D1-branes. These phenomena occur due to the time-dependence of the background. The open string 2→2 scattering amplitude on a D1-brane is also computed and found to be less singular than in the closed string case.

  7. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

    In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects. I also give an overview of ho...

  8. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

    The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. Two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 × 10¹²/cc. Several arc lines which connect both bright regions of different magnetic polarity were seen in the H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed to a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed to a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  9. Decline of the world's saline lakes

    Science.gov (United States)

    Wurtsbaugh, Wayne A.; Miller, Craig; Null, Sarah E.; Derose, R. Justin; Wilcock, Peter; Hahnenberger, Maura; Howe, Frank; Moore, Johnnie

    2017-11-01

    Many of the world's saline lakes are shrinking at alarming rates, reducing waterbird habitat and economic benefits while threatening human health. Saline lakes are long-term basin-wide integrators of climatic conditions that shrink and grow with natural climatic variation. In contrast, water withdrawals for human use exert a sustained reduction in lake inflows and levels. Quantifying the relative contributions of natural variability and human impacts to lake inflows is needed to preserve these lakes. With a credible water balance, causes of lake decline from water diversions or climate variability can be identified and the inflow needed to maintain lake health can be defined. Without a water balance, natural variability can be an excuse for inaction. Here we describe the decline of several of the world's large saline lakes and use a water balance for Great Salt Lake (USA) to demonstrate that consumptive water use rather than long-term climate change has greatly reduced its size. The inflow needed to maintain bird habitat, support lake-related industries and prevent dust storms that threaten human health and agriculture can be identified and provides the information to evaluate the difficult tradeoffs between direct benefits of consumptive water use and ecosystem services provided by saline lakes.
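The water-balance reasoning in this abstract can be made concrete with a toy annual budget for a terminal (closed-basin) lake, where the only natural output is evaporation. All fluxes below are hypothetical, not Great Salt Lake data:

```python
# Toy annual water balance for a terminal lake, illustrating how consumptive
# withdrawals appear as a sustained storage deficit even when the natural
# budget is balanced. All fluxes are invented, in km^3 per year.

def storage_change(precip, river_inflow, groundwater, evaporation, withdrawals):
    """dV/dt = inputs - outputs. For a closed-basin lake the outputs are
    evaporation from the lake surface plus upstream consumptive use."""
    return (precip + river_inflow + groundwater) - (evaporation + withdrawals)

# Natural budget balanced by construction in this toy case:
natural = storage_change(1.2, 3.0, 0.3, 4.5, 0.0)

# Same climate, but 1 km^3/yr diverted for consumptive use:
with_use = storage_change(1.2, 3.0, 0.3, 4.5, 1.0)

print(f"no withdrawals:   {natural:+.2f} km^3/yr")
print(f"with withdrawals: {with_use:+.2f} km^3/yr")
```

The point the abstract makes is exactly this separation: climate variability moves the input terms up and down around zero, while withdrawals impose a persistent negative term that accumulates as lake shrinkage year after year.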

  10. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  11. Inflated granularity: Spatial “Big Data” and geodemographics

    Directory of Open Access Journals (Sweden)

    Craig M Dalton

    2015-08-01

    Full Text Available Data analytics, particularly the current rhetoric around “Big Data”, tend to be presented as new and innovative, emerging ahistorically to revolutionize modern life. In this article, we situate one branch of Big Data analytics, spatial Big Data, through a historical predecessor, geodemographic analysis, to help develop a critical approach to current data analytics. Spatial Big Data promises an epistemic break in marketing, a leap from targeting geodemographic areas to targeting individuals. Yet it inherits characteristics and problems from geodemographics, including a justification through the market, and a process of commodification through the black-boxing of technology. As researchers develop sustained critiques of data analytics and its effects on everyday life, we must do so with a grounding in the cultural and historical contexts from which data technologies emerged. This article and others (Barnes and Wilson, 2014) develop a historically situated, critical approach to spatial Big Data. This history illustrates connections to the critical issues of surveillance, redlining, and the production of consumer subjects and geographies. The shared histories and structural logics of spatial Big Data and geodemographics create the space for a continued critique of data analyses’ role in society.

  12. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to ‘create’ an integration platform for big data analysis for smart farming and to develop a show case. This includes both

  13. Conifer density within lake catchments predicts fish mercury concentrations in remote subalpine lakes

    Science.gov (United States)

    Eagles-Smith, Collin A.; Herring, Garth; Johnson, Branden L.; Graw, Rick

    2016-01-01

    Remote high-elevation lakes represent unique environments for evaluating the bioaccumulation of atmospherically deposited mercury through freshwater food webs, as well as for evaluating the relative importance of mercury loading versus landscape influences on mercury bioaccumulation. The increase in mercury deposition to these systems over the past century, coupled with their limited exposure to direct anthropogenic disturbance, makes them useful indicators for estimating how changes in mercury emissions may propagate to changes in Hg bioaccumulation and ecological risk. We evaluated mercury concentrations in resident fish from 28 high-elevation, sub-alpine lakes in the Pacific Northwest region of the United States. Fish total mercury (THg) concentrations ranged from 4 to 438 ng/g wet weight, with a geometric mean concentration (±standard error) of 43 ± 2 ng/g ww. Fish THg concentrations were negatively correlated with relative condition factor, indicating that faster growing fish that are in better condition have lower THg concentrations. Across the 28 study lakes, mean THg concentrations of resident salmonid fishes varied as much as 18-fold among lakes. We used a hierarchical statistical approach to evaluate the relative importance of physiological, limnological, and catchment drivers of fish Hg concentrations. Our top statistical model explained 87% of the variability in fish THg concentrations among lakes with four key landscape and limnological variables: catchment conifer density (basal area of conifers within a lake's catchment), lake surface area, aqueous dissolved sulfate, and dissolved organic carbon. Conifer density within a lake's catchment was the most important variable explaining fish THg concentrations across lakes, with THg concentrations differing by more than 400 percent across the forest density spectrum. These results illustrate the importance of landscape characteristics in controlling mercury bioaccumulation in fish.

  14. Stable isotope and hydrogeochemical studies of Beaver Lake and Lake Radok, MacRobertson Land, East Antarctica

    International Nuclear Information System (INIS)

    Wand, U.; Hermichen, W.D.; Hoefling, R.; Muehle, K.

    1987-01-01

    Beaver Lake and Lake Radok, respectively the largest known epishelf lake and the deepest freshwater lake on the Antarctic continent, were isotopically (δ²H, δ¹⁸O) and hydrogeochemically studied. Lake Radok is an isothermal and non-stratified, i.e. homogeneous, water body, while Beaver Lake is stratified with respect to temperature, salinity and isotopic composition. The results for the latter attest to freshwater (derived from snow and glacier melt) overlying seawater. (author)

  15. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Big Data can be static on one machine or distributed ... decision making, and process automation. Big data .... Concept Drifting: concept drifting mean the classifier .... transactions generated by a prefix tree structure. EstDec ...

  16. Emerging technology and architecture for big-data analytics

    CERN Document Server

    Chang, Chip; Yu, Hao

    2017-01-01

    This book describes the current state of the art in big-data analytics, from a technology and hardware architecture perspective. The presentation is designed to be accessible to a broad audience, with general knowledge of hardware design and some interest in big-data analytics. Coverage includes emerging technology and devices for data-analytics, circuit design for data-analytics, and architecture and algorithms to support data-analytics. Readers will benefit from the realistic context used by the authors, which demonstrates what works, what doesn’t work, and what are the fundamental problems, solutions, upcoming challenges and opportunities. Provides a single-source reference to hardware architectures for big-data analytics; Covers various levels of big-data analytics hardware design abstraction and flow, from device, to circuits and systems; Demonstrates how non-volatile memory (NVM) based hardware platforms can be a viable solution to existing challenges in hardware architecture for big-data analytics.

  17. Environmental status of the Lake Michigan region. Volume 3. Chemistry of Lake Michigan

    Energy Technology Data Exchange (ETDEWEB)

    Torrey, M S

    1976-05-01

    The report is a synoptic review of data collected over the past twenty years on the chemistry of Lake Michigan. Changes in water quality and sediment chemistry, attributable to cultural and natural influences, are considered in relation to interacting processes and factors controlling the distribution and concentration of chemical substances within the Lake. Temperature, light, and mixing processes are among the important natural influences that affect nutrient cycling, dispersal of pollutants, and fate of materials entering the Lake. Characterization of inshore-offshore and longitudinal differences in chemical concentrations and sediment chemistry for the main body of the Lake is supplemented by discussion of specific areas such as Green Bay and Grand Traverse Bay. Residues, specific conductance, dissolved oxygen, major and trace nutrients, and contaminants are described in the following context: biological essentiality and/or toxicity, sources to the Lake, concentrations in the water column and sediments, chemical forms, seasonal variations and variation with depth. A summary of existing water quality standards, statutes, and criteria applicable to Lake Michigan is appended.

  18. Toward a manifesto for the 'public understanding of big data'.

    Science.gov (United States)

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  19. What do Big Data do in Global Governance?

    DEFF Research Database (Denmark)

    Krause Hansen, Hans; Porter, Tony

    2017-01-01

    Two paradoxes associated with big data are relevant to global governance. First, while promising to increase the capacities of humans in governance, big data also involve an increasingly independent role for algorithms, technical artifacts, the Internet of things, and other objects, which can reduce the control of human actors. Second, big data involve new boundary transgressions as data are brought together from multiple sources, while also creating new boundary conflicts as powerful actors seek to gain advantage by controlling big data and excluding competitors. These changes are not just about new data sources for global decision-makers, but instead signal more profound changes in the character of global governance.

  20. Robinson Rancheria Strategic Energy Plan; Middletown Rancheria Strategic Energy Plan, Scotts Valley Rancheria Strategic Energy Plan, Elem Indian Colony Strategic Energy Plan, Upperlake Rancheria Strategic Energy Plan, Big Valley Rancheria Strategic Energy Plan

    Energy Technology Data Exchange (ETDEWEB)

    McGinnis and Associates LLC

    2008-08-01

    The Scotts Valley Band of Pomo Indians is located in Lake County in Northern California. Similar to the other five federally recognized Indian Tribes in Lake County participating in this project, Scotts Valley Band of Pomo Indians members are challenged by generally increasing energy costs and undeveloped local energy resources. Currently, Tribal decision makers lack sufficient information to make informed decisions about potential renewable energy resources. To meet this challenge efficiently, the Tribes have committed to the Lake County Tribal Energy Program, a multi-Tribal program to be based at the Robinson Rancheria and including the Elem Indian Colony, Big Valley Rancheria, Middletown Rancheria, Habematolel Pomo of Upper Lake and the Scotts Valley Pomo Tribe. The mission of this program is to promote Tribal energy efficiency and create employment opportunities and economic opportunities on Tribal Lands through energy resource and energy efficiency development. This program will establish a comprehensive energy strategic plan for the Tribes based on Tribal-specific plans that capture economic and environmental benefits while continuing to respect Tribal cultural practices and traditions. The goal is to understand current and future energy consumption and develop both regional and Tribe-specific strategic energy plans, including action plans, to clearly identify the energy options for each Tribe.

  1. Assessment of the Great Lakes Marine Renewable Energy Resources: Characterizing Lake Erie Surge, Seiche and Waves

    Science.gov (United States)

    Farhadzadeh, A.; Hashemi, M. R.

    2016-02-01

    Lake Erie, the fourth largest in surface area, smallest in volume and shallowest among the Great Lakes, is approximately 400 km long and 90 km wide. Short-term lake level variations are due to storm surge generated by high winds and moving pressure systems over the lake, mainly in the southwest-northeast direction along the lake's longitudinal axis. The historical wave data from three active offshore buoys show that significant wave height can exceed 5 m in the eastern and central basins. The long-term lake level data show that storm surge can reach up to 3 m in eastern Lake Erie. Owing to its shallow depth, Lake Erie frequently experiences seiching motions, the low-frequency oscillations that are initiated by storm surge. The seiches, whose first mode of oscillation has a period of nearly 14.2 hours, can last from several hours to days. In this study, the potential of Lake Erie for power generation, primarily using storm surge and seiche but also waves, is assessed. Given the cyclic lake level variations due to storm-induced seiching, a concept similar to that of tidal range development is utilized to assess the potential of storm surge and seiche energy harvesting mechanisms for power generation. In addition, the wave energy resources of the Lake are characterized. To achieve these objectives, the following steps are taken: (1) frequency of occurrence for extreme storm surge and wave events is determined by applying extreme value analysis, such as the Peak-Over-Threshold method, to the long-term water level and wave data; (2) spatial and temporal variations of wave height, storm surge and seiche are characterized, using the wave and storm surge outputs from numerical simulation of a number of historical extreme events with the coupled ADCIRC and SWAN model; (3) an assessment of the potential for marine renewable power generation in Lake Erie is made.
The approach can be extended to the other lakes in the Great Lakes region.
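
The Peak-Over-Threshold step in (1) can be sketched as below; the hourly water levels, the 1.5 m threshold and the declustering window are invented for illustration, and a full analysis would go on to fit a generalized Pareto distribution to the retained exceedances.

```python
# Illustrative sketch (not from the paper): Peak-Over-Threshold (POT)
# extraction of extreme storm-surge events from a water-level series.
# Threshold and declustering window are assumed values.

def peaks_over_threshold(levels, threshold, min_separation=3):
    """Return (index, value) pairs of cluster maxima exceeding threshold.

    Exceedances closer than `min_separation` samples are treated as one
    event (simple runs declustering), keeping only the cluster peak.
    """
    events = []
    cluster = []  # indices of the current exceedance cluster
    for i, x in enumerate(levels):
        if x > threshold:
            if cluster and i - cluster[-1] > min_separation:
                peak = max(cluster, key=lambda j: levels[j])
                events.append((peak, levels[peak]))
                cluster = []
            cluster.append(i)
    if cluster:
        peak = max(cluster, key=lambda j: levels[j])
        events.append((peak, levels[peak]))
    return events

# Hourly water levels (m) with two storm events above a 1.5 m threshold
series = [0.2, 0.4, 1.6, 2.1, 1.8, 0.9, 0.3, 0.2, 1.7, 2.6, 2.2, 0.5]
print(peaks_over_threshold(series, threshold=1.5))  # [(3, 2.1), (9, 2.6)]
```

The two cluster peaks, not every hourly exceedance, would then enter the frequency analysis.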

  2. Combining lake and watershed characteristics with Landsat TM data for remote estimation of regional lake clarity

    Science.gov (United States)

    McCullough, Ian M.; Loftin, Cyndy; Sader, Steven A.

    2012-01-01

    Water clarity is a reliable indicator of lake productivity and an ideal metric of regional water quality. Clarity is an indicator of other water quality variables including chlorophyll-a, total phosphorus and trophic status; however, unlike these metrics, clarity can be accurately and efficiently estimated remotely on a regional scale. Remote sensing is useful in regions containing a large number of lakes that are cost prohibitive to monitor regularly using traditional field methods. Field-assessed lakes generally are easily accessible and may represent a spatially irregular, non-random sample of a region. We developed a remote monitoring program for Maine lakes >8 ha (1511 lakes) to supplement existing field monitoring programs. We combined Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) brightness values for TM bands 1 (blue) and 3 (red) to estimate water clarity (secchi disk depth) during 1990–2010. Although similar procedures have been applied to Minnesota and Wisconsin lakes, neither state incorporates physical lake variables or watershed characteristics that potentially affect clarity into their models. Average lake depth consistently improved model fitness, and the proportion of wetland area in lake watersheds also explained variability in clarity in some cases. Nine regression models predicted water clarity (R2 = 0.69–0.90) during 1990–2010, with separate models for eastern Maine (TM path 11; four models) and western Maine (TM path 12; five models) that captured differences in topography and landscape disturbance. The average absolute difference between model-estimated and observed secchi depth ranged from 0.65 to 1.03 m. Eutrophic and mesotrophic lakes consistently were estimated more accurately than oligotrophic lakes. Our results show that TM bands 1 and 3 can be used to estimate regional lake water clarity outside the Great Lakes Region and that the accuracy of estimates is improved with additional model variables that reflect lake and watershed characteristics.
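
The band-based estimation described above can be illustrated with a minimal regression sketch; the TM1/TM3 ratios and secchi values below are invented, and the published models use additional terms (e.g. TM1 brightness and average lake depth).

```python
# Illustrative sketch only: regressing ln(secchi depth) on a Landsat TM
# band combination (here the TM1/TM3 ratio). The data are made up and
# lie on an exact line so the fit is easy to check.
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

ratios = [1.0, 2.0, 3.0]      # TM1/TM3 brightness ratios (assumed)
ln_secchi = [0.5, 1.0, 1.5]   # ln of field-measured secchi depth (m)
a, b = fit_linear(ratios, ln_secchi)

# back-transform a prediction for a new ratio to metres
print(round(math.exp(a + b * 2.5), 2))
```

The exponential back-transform is why accuracy is usually reported as an average absolute difference in metres rather than in the log domain.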

  3. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  4. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2011-02-11

    ..., Wyoming 82801. Comments may also be sent via e-mail to [email protected] , with the words Big... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  5. Hot big bang or slow freeze?

    Energy Technology Data Exchange (ETDEWEB)

    Wetterich, C.

    2014-09-07

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  6. Hot big bang or slow freeze?

    International Nuclear Information System (INIS)

    Wetterich, C.

    2014-01-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  7. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    Full Text Available We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  8. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  9. Pre-big bang cosmology and quantum fluctuations

    International Nuclear Information System (INIS)

    Ghosh, A.; Pollifrone, G.; Veneziano, G.

    2000-01-01

    The quantum fluctuations of a homogeneous, isotropic, open pre-big bang model are discussed. By solving exactly the equations for tensor and scalar perturbations we find that particle production is negligible during the perturbative Pre-Big Bang phase.

  10. Treating floodplain lakes of large rivers as study units for variables that vary within lakes; an evaluation using chlorophyll a and inorganic suspended solids data from floodplain lakes of the Upper Mississippi River

    Science.gov (United States)

    Gray, B.R.; Rogala, J.R.; Houser, J.N.

    2013-01-01

    Contiguous floodplain lakes ('lakes') have historically been used as study units for comparative studies of limnological variables that vary within lakes. The hierarchical nature of these studies implies that study variables may be correlated within lakes and that covariate associations may differ not only among lakes but also by spatial scale. We evaluated the utility of treating lakes as study units for limnological variables that vary within lakes based on the criteria of important levels of among-lake variation in study variables and the observation of covariate associations that vary among lakes. These concerns were selected, respectively, to ensure that lake signatures were distinguishable from within-lake variation and that lake-scale effects on covariate associations might provide inferences not available by ignoring those effects. Study data represented chlorophyll a (CHL) and inorganic suspended solids (ISS) data from lakes within three reaches of the Upper Mississippi River. Sampling occurred in summer from 1993 through 2005 (except 2003); numbers of lakes per reach varied from 7 to 19, and median lake area varied from 53 to 101 ha. CHL and ISS levels were modelled linearly, with lake, year and lake x year effects treated as random. For all reaches, the proportions of variation in CHL and ISS attributable to differences among lakes (including lake and lake x year effects) were substantial (range: 18%-73%). Finally, among-lake variation in CHL and ISS was strongly associated with covariates and covariate effects that varied by lakes or lake-years (including with vegetation levels and, for CHL, log(ISS)). These findings demonstrate the utility of treating floodplain lakes as study units for the study of limnological variables and the importance of addressing hierarchy within study designs when making inferences from data collected within floodplain lakes.
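
The paper's first criterion, that a substantial share of variation be attributable to differences among lakes, can be illustrated with a one-way random-effects variance split; the chlorophyll values below are invented, and the authors fit richer mixed models with lake, year and lake x year effects.

```python
# Illustrative sketch (assumed data): partitioning variation in a
# limnological variable into among-lake and within-lake components via
# one-way random-effects ANOVA (balanced design, method-of-moments).

def among_lake_variance_share(groups):
    """groups: list of per-lake measurement lists, equal sizes."""
    k = len(groups)            # number of lakes
    n = len(groups[0])         # samples per lake
    grand = sum(sum(g) for g in groups) / (k * n)
    ss_between = n * sum((sum(g) / n - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / n) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between / (var_between + ms_within)

# three hypothetical lakes, three chlorophyll-a samples each (ug/L)
chl = [[5.0, 6.0, 5.5], [9.0, 10.0, 9.5], [2.0, 3.0, 2.5]]
share = among_lake_variance_share(chl)
print(round(share, 2))
```

A share well above zero means lake "signatures" stand out from within-lake noise, the study's precondition for treating lakes as study units.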

  11. Global Lakes Sentinel Services: Evaluation of Chl-a Trends in Deep Clear Lakes

    Science.gov (United States)

    Cazzaniga, Ilaria; Giardino, Claudia; Bresciani, Mariano; Poser, Kathrin; Peters, Steef; Hommersom, Annelies; Schenk, Karin; Heege, Thomas; Philipson, Petra; Ruescas, Ana; Bottcher, Martin; Stelzer, Kerstin

    2016-08-01

    The aim of this study is the analysis of trends in trophic level in clear, deep lakes which, being characterised by a good quality state, are important socio-economic resources for their regions. The selected lakes are situated in Europe (Garda, Maggiore, Constance and Vättern), North America (Michigan) and Africa (Malawi and Tanganyika) and cover a range of eco-regions (continental, perialpine, boreal, rift valley) distributed globally. To evaluate trophic level tendency we focused mainly on chlorophyll-a concentration (chl-a), which is a direct proxy of trophic status. The chl-a concentrations were obtained from 5216 cloud-free MERIS images from 2002 to 2012. The 'GLaSS RoIStats tool', available within the GLaSS project, was used to extract chl-a in a number of regions of interest (ROIs) located in pelagic waters, as well as a few other stations depending on lake morphology. To produce the time-series trend, these extracted data were analysed with the Seasonal Kendall test. The results overall show almost stable conditions, with a slight increase in concentration for lakes Maggiore and Constance and the Green Bay of Lake Michigan; a slight decrease for lakes Garda and Tanganyika; and stable conditions for lakes Vättern and Malawi. The results presented in this work show the great capability of MERIS to support trend analysis of trophic status with a focus on chl-a concentration. Since chl-a is also a key parameter in water quality monitoring plans, this study also supports the management practices implemented worldwide for using the water of these lakes.
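
A minimal sketch of the Seasonal Kendall procedure named above, assuming no tied values and using invented two-season chl-a series; production analyses also correct the variance for ties and serial dependence.

```python
# Seasonal Kendall sketch: a Mann-Kendall S statistic is computed per
# season, then the statistics and their variances are summed and a
# continuity-corrected z-score is formed. Data below are invented.
import math

def mann_kendall_s(x):
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

def seasonal_kendall(series_by_season):
    s_total = sum(mann_kendall_s(x) for x in series_by_season)
    # variance of S for n untied observations: n(n-1)(2n+5)/18
    var_total = sum(n * (n - 1) * (2 * n + 5) / 18
                    for n in map(len, series_by_season))
    if s_total > 0:
        z = (s_total - 1) / math.sqrt(var_total)
    elif s_total < 0:
        z = (s_total + 1) / math.sqrt(var_total)
    else:
        z = 0.0
    return s_total, z

# yearly chl-a values, one list per season (e.g. spring, summer)
spring = [2.1, 2.3, 2.2, 2.6, 2.8]
summer = [3.0, 3.2, 3.1, 3.5, 3.6]
s, z = seasonal_kendall([spring, summer])
print(s, round(z, 2))  # positive S indicates an increasing trend
```

Aggregating across seasons is what lets the test detect a monotonic trend despite the strong seasonal cycle in chl-a.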

  12. Benthic-planktonic coupling, regime shifts, and whole-lake primary production in shallow lakes.

    Science.gov (United States)

    Genkai-Kato, Motomi; Vadeboncoeur, Yvonne; Liboriussen, Lone; Jeppesen, Erik

    2012-03-01

    Alternative stable states in shallow lakes are typically characterized by submerged macrophyte (clear-water state) or phytoplankton (turbid state) dominance. However, a clear-water state may occur in eutrophic lakes even when macrophytes are absent. To test whether sediment algae could cause a regime shift in the absence of macrophytes, we developed a model of benthic (periphyton) and planktonic (phytoplankton) primary production using parameters derived from a shallow macrophyte-free lake that shifted from a turbid to a clear-water state following fish removal (biomanipulation). The model includes a negative feedback effect of periphyton on phosphorus (P) release from sediments. This in turn induces a positive feedback between phytoplankton production and P release. Scenarios incorporating a gradient of external P loading rates revealed that (1) periphyton and phytoplankton both contributed substantially to whole-lake production over a broad range of external P loading in a clear-water state; (2) during the clear-water state, the loss of benthic production was gradually replaced by phytoplankton production, leaving whole-lake production largely unchanged; (3) the responses of lakes to biomanipulation and increased external P loading were both dependent on lake morphometry; and (4) the capacity of periphyton to buffer the effects of increased external P loading and maintain a clear-water state was highly sensitive to relationships between light availability at the sediment surface and the rate of P release. Our model suggests a mechanism for the persistence of alternative states in shallow macrophyte-free lakes and demonstrates that regime shifts may trigger profound changes in ecosystem structure and function.
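
As a toy illustration of how a positive feedback between phytoplankton production and sediment P release can yield two stable states, consider the piecewise model below; it is not the authors' production model, and the threshold, loading and loss parameters are arbitrary.

```python
# Toy bistability sketch (not the paper's model): internal P loading
# switches on once the water column is turbid enough to shade out the
# periphyton cap on sediment P release. Two different starting points
# then settle on two different stable states.

def simulate(p0, steps=2000, dt=0.01, external_load=0.5):
    p = p0
    for _ in range(steps):
        # positive feedback: high P -> turbid water -> no periphyton cap
        internal_load = 2.0 if p > 1.0 else 0.0
        p += dt * (external_load + internal_load - p)  # loss rate = 1
    return p

low = simulate(0.2)    # starts clear, stays on the low-P branch
high = simulate(1.5)   # starts turbid, stays on the high-P branch
print(round(low, 3), round(high, 3))
```

The same external load supports both equilibria, which is the essence of a regime shift: the state the lake occupies depends on its history, not just its current loading.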

  13. Changes in depth occupied by Great Lakes lake whitefish populations and the influence of survey design

    Science.gov (United States)

    Rennie, Michael D.; Weidel, Brian C.; Claramunt, Randall M.; Dunlob, Erin S.

    2015-01-01

    Understanding fish habitat use is important in determining conditions that ultimately affect fish energetics, growth and reproduction. Great Lakes lake whitefish (Coregonus clupeaformis) have demonstrated dramatic changes in growth and life history traits since the appearance of dreissenid mussels in the Great Lakes, but the role of habitat occupancy in driving these changes is poorly understood. To better understand temporal changes in lake whitefish depth of capture (Dw), we compiled a database of fishery-independent surveys representing multiple populations across all five Laurentian Great Lakes. By demonstrating the importance of survey design in estimating Dw, we describe a novel method for detecting survey-based bias in Dw and removing potentially biased data. Using unbiased Dw estimates, we show clear differences in the pattern and timing of changes in lake whitefish Dw between our reference sites (Lake Superior) and those that have experienced significant benthic food web changes (lakes Michigan, Huron, Erie and Ontario). Lake whitefish Dw in Lake Superior tended to gradually shift to shallower waters, but changed rapidly in other locations coincident with dreissenid establishment and declines in Diporeia densities. Almost all lake whitefish populations that were exposed to dreissenids demonstrated deeper Dw following benthic food web change, though a subset of these populations subsequently shifted to more shallow depths. In some cases in lakes Huron and Ontario, shifts towards more shallow Dw are occurring well after documented Diporeia collapse, suggesting the role of other drivers such as habitat availability or reliance on alternative prey sources.
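
The depth-of-capture metric (Dw) discussed above is not defined in this record; one common formulation is a catch-weighted mean of net-set depths, sketched here with invented survey data.

```python
# Hypothetical sketch: depth of capture (Dw) as a catch-weighted mean
# of survey net depths. The paper's exact estimator may differ; the
# depths and catch counts below are invented.

def depth_of_capture(depths_m, catches):
    total = sum(catches)
    return sum(d * c for d, c in zip(depths_m, catches)) / total

# gill-net sets at three depths with lake whitefish catch counts
before = depth_of_capture([10, 30, 50], [5, 3, 2])  # pre-dreissenid
after = depth_of_capture([10, 30, 50], [1, 3, 6])   # post-dreissenid
print(before, after)  # a deeper Dw after benthic food web change
```

Because Dw depends on which depths a survey actually fished, comparing surveys with different depth coverage is exactly the bias the authors screen for.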

  14. Analysis of Big Data Maturity Stage in Hospitality Industry

    OpenAIRE

    Shabani, Neda; Munir, Arslan; Bose, Avishek

    2017-01-01

    Big data analytics has an extremely significant impact on many areas in all businesses and industries, including hospitality. This study aims to guide information technology (IT) professionals in hospitality on their big data expedition. In particular, the purpose of this study is to identify the maturity stage of big data in the hospitality industry in an objective way, so that hotels are able to understand their progress and realize what it will take to get to the next stage of big data maturity.

  15. A Multidisciplinary Perspective of Big Data in Management Research

    OpenAIRE

    Sheng, Jie; Amankwah-Amoah, J.; Wang, X.

    2017-01-01

    In recent years, big data has emerged as one of the prominent buzzwords in business and management. In spite of the mounting body of research on big data across the social science disciplines, scholars have offered little synthesis on the current state of knowledge. To take stock of academic research that contributes to the big data revolution, this paper tracks scholarly work's perspectives on big data in the management domain over the past decade. We identify key themes emerging in management...

  16. An embedding for the big bang

    Science.gov (United States)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  17. Historical changes to Lake Washington and route of the Lake Washington Ship Canal, King County, Washington

    Science.gov (United States)

    Chrzastowski, Michael J.

    1983-01-01

    Lake Washington, in the midst of the greater Seattle metropolitan area of the Puget Sound region (fig. 1), is an exceptional commercial, recreational, and esthetic resource for the region. In the past 130 years, Lake Washington has been changed from a "wild" lake in a wilderness setting to a regulated lake surrounded by a growing metropolis--a transformation that provides an unusual opportunity to study changes to a lake's shoreline and hydrologic characteristics resulting from urbanization.

  18. Big Data as Governmentality in International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    2017-01-01

    Statistics have long shaped the field of visibility for the governance of development projects. The introduction of big data has altered the field of visibility. Employing Dean's “analytics of government” framework, we analyze two cases—malaria tracking in Kenya and monitoring of food prices in Indonesia. Our analysis shows that big data introduces a bias toward particular types of visualizations. What problems are being made visible through big data depends to some degree on how the underlying data is visualized and who is captured in the visualizations. It is also influenced by technical factors.

  19. A Brief Review on Leading Big Data Models

    Directory of Open Access Journals (Sweden)

    Sugam Sharma

    2014-11-01

    Full Text Available Today, science is passing through an era of transformation, where the inundation of data, dubbed the data deluge, is influencing the decision-making process. Science is driven by data and is being termed data science. In this internet age, the volume of data has grown to petabytes, and this large, complex, structured or unstructured, and heterogeneous data in the form of “Big Data” has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new-age data is processed. Confidence in the data engineering of existing data processing systems is gradually fading, whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. We also take the potential of Big Data into consideration and try to reshape the original operation-oriented definition of “Big Science” (Furner, 2003) into a new data-driven definition, rephrasing it as “The science that deals with Big Data is Big Science.”

  20. Climatic changes inferred from analyses of lake-sediment cores, Walker Lake, Nevada

    International Nuclear Information System (INIS)

    Yang, In Che.

    1989-01-01

    Organic and inorganic fractions of sediment collected from the bottom of Walker Lake, Nevada, have been dated by carbon-14 techniques. Sedimentation rates and the organic-carbon content of the sediment were correlated with climatic change. The cold climate between 25,000 and 21,000 years ago caused little runoff, snow accumulation on the mountains, and rapid substantial glacial advances; this period of cold climate resulted in a slow sedimentation rate (0.20 millimeter per year) and in a small organic-carbon content in the sediment. Also, organic-carbon accumulation rates in the lake during this period were slow. The most recent period of slow sedimentation rate and small organic-carbon content occurred between 10,000 and 5500 years ago, indicative of low lake stage and dry climatic conditions. This period of dry climate also was evidenced by dry conditions for Lake Lahontan in Nevada and Searles Lake in California, as cited in the literature. Walker Lake filled rapidly with water between 5500 and 4500 years ago. The data published in this report was not produced under an approved Site Investigation Plan (SIP) or Study Plan (SP) and will not be used in the licensing process. 10 refs., 3 figs., 2 tabs
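
The sedimentation-rate figure quoted above follows from simple depth/age arithmetic; the 800 mm section thickness below is an assumed value chosen to reproduce the reported 0.20 mm per year over the 25,000-21,000 yr BP interval.

```python
# Worked example of the rate arithmetic behind the abstract: a
# sedimentation rate is a depth interval divided by the 14C age span
# bracketing it. The layer thickness here is assumed for illustration.

def sedimentation_rate_mm_per_yr(depth_top_mm, depth_bottom_mm,
                                 age_top_yr, age_bottom_yr):
    return (depth_bottom_mm - depth_top_mm) / (age_bottom_yr - age_top_yr)

# e.g. an 800 mm section deposited between 25,000 and 21,000 yr BP
rate = sedimentation_rate_mm_per_yr(0, 800, 21_000, 25_000)
print(rate)  # 0.2 mm/yr, the slow rate cited for the cold interval
```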

  1. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2010-11-22

    ....us , with the words Big Horn County RAC in the subject line. Facsimilies may be sent to 307-674-2668... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  2. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimilies may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  3. Survey and assessment of post volcanic activities of a young caldera lake, Lake Cuicocha, Ecuador

    Directory of Open Access Journals (Sweden)

    G. Gunkel

    2009-05-01

Full Text Available Cuicocha is a young volcano adjacent to the inactive Pleistocene Cotacachi volcano complex, located in the western cordilleras of the Ecuadorian Andes. A series of eruptions with intensive ash emission and collapse of the caldera occurred around 4500–3000 y BP. A crater 3.2 km in diameter with a maximum depth of 450 m was formed. Further eruptions of the volcano occurred 1300 y BP and formed four smaller domes within the caldera. Over the last few hundred years, a caldera lake has developed, with a maximum depth of 148 m. The lake water is characterized by sodium carbonate with elevated concentrations of manganese, calcium and chloride. Nowadays, an emission of gases, mainly CO2, and an input of warm spring water occur in Lake Cuicocha. The zone of high activity is in the western basin of the lake at a depth of 78 m, and continuous gas emissions with sediment resuspension were observed using sonar. In the hypolimnion of the lake, CO2 accumulation occurs up to 0.2% saturation, but the risk of a limnic eruption can be excluded at present. The lake possesses monomictic stratification behaviour, and during overturn an intensive gas exchange with the atmosphere occurs. Investigations concerning the sedimentation processes of the lake suggest only a thin sediment layer of up to 10–20 cm in the deeper lake basin; in the western bay, in the area of gas emissions, the lake bottom is partly depleted of sediment in the form of holes, and no lake colmation exists. Decreases in the lake water level of about 30 cm y⁻¹ indicate a percolation of water into fractures and fissures of the volcano, triggered by a nearby earthquake in 1987.

  4. A Synoptic Climatology of Heavy Rain Events in the Lake Eyre and Lake Frome Catchments

    Directory of Open Access Journals (Sweden)

    Michael John Pook

    2014-11-01

    Full Text Available The rare occasions when Lake Eyre in central, southern Australia fills with water excite great interest and produce major ecological responses. The filling of other smaller lakes such as Lake Frome, have less impact but can contribute important information about the current and past climates of these arid regions. Here, the dominant synoptic systems responsible for heavy rainfall over the catchments of Lake Eyre and Lake Frome since 1950 are identified and compared. Heavy rain events are defined as those where the mean catchment rainfall for 24 hours reaches a prescribed threshold. There were 25 such daily events at Lake Eyre and 28 in the Lake Frome catchment. The combination of a monsoon trough at mean sea level and a geopotential trough in the mid-troposphere was found to be the synoptic system responsible for the majority of the heavy rain events affecting Lake Eyre and one in five of the events at Lake Frome. Complex fronts where subtropical interactions occurred with Southern Ocean fronts also contributed over 20% of the heavy rainfall events in the Frome catchment. Surface troughs without upper air support were found to be associated with 10% or fewer of events in each catchment, indicating that mean sea level pressure analyses alone do not adequately capture the complexity of the heavy rainfall events. At least 80% of the heavy rain events across both catchments occurred when the Southern Oscillation Index (SOI was in its positive phase, and for Lake Frome, the SOI exceeded +10 on 60% of occasions, suggesting that the background atmospheric state in the Pacific Ocean was tilted towards La Niña. Hydrological modeling of the catchments suggests that the 12-month running mean of the soil moisture in a sub-surface layer provides a low frequency filter of the precipitation and matches measured lake levels relatively well.

  5. Big Science

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    Astronomy, like particle physics, has become Big Science where the demands of front line research can outstrip the science budgets of whole nations. Thus came into being the European Southern Observatory (ESO), founded in 1962 to provide European scientists with a major modern observatory to study the southern sky under optimal conditions.

  6. Expanding models of lake trophic state to predict cyanobacteria in lakes

    Science.gov (United States)

    Background/Question/Methods: Cyanobacteria are a primary taxonomic group associated with harmful algal blooms in lakes. Understanding the drivers of cyanobacteria presence has important implications for lake management and for the protection of human and ecosystem health. Chlor...

7. The relationship between Tamarix spp. growth and lake level change in Bosten Lake, northwest China

    Science.gov (United States)

    Ye, Mao; Hou, JiaWen

    2015-04-01

Dendrochronology methods are used to analyze the characteristics of Tamarix spp. growth at Bosten Lake. Based on long-term annual and monthly lake level data, this paper models the relationship between the ring width of Tamarix spp. and lake level change. A sensitivity index is applied to determine the rational range of lake level change for protecting Tamarix spp. growth. The results show that: (1) the annual change of lake level in Bosten Lake has three evident stages from 1955 to 2012; the monthly change of lake level has two peak values, and the seasonal change is not significant; (2) the average radial width of Tamarix spp. is 3.39 mm; with the increment of Tamarix spp. annual growth, the average radial width has a decreasing trend, which is similar to the annual change trend of lake level in the same years; (3) the response of the radial width of Tamarix spp. to the annual change of lake level is significantly sensitive. When the lake level is 1045.66 m, the Sk value of the radial width of Tamarix spp. reaches its minimum; when the lake level is up to 1046.27 m, the Sk value is at its maximum. Thus the sensitive range of lake level for the radial width of Tamarix spp. is 1045.66-1046.27 m, which could be regarded as the rational lake level change range for protecting Tamarix spp. growth.
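The abstract applies a sensitivity index (Sk) to ring-width series but does not define it. As a hedged illustration only, the sketch below computes Schulman's mean sensitivity, a standard dendrochronology measure of relative year-to-year change in ring width; the ring-width series and their grouping by lake level are hypothetical, not the Bosten Lake data:

```python
def mean_sensitivity(series):
    """Schulman's mean sensitivity: average relative year-to-year change,
    2|x[t+1] - x[t]| / (x[t+1] + x[t]), averaged over consecutive years."""
    pairs = list(zip(series, series[1:]))
    return sum(2 * abs(b - a) / (b + a) for a, b in pairs) / len(pairs)

# Hypothetical ring-width series (mm), grouped by the lake level (m) of the same years
ring_widths_by_level = {
    1045.66: [3.1, 3.0, 3.2, 3.1],   # near-uniform growth: low sensitivity
    1046.27: [2.0, 4.5, 1.8, 4.9],   # strongly varying growth: high sensitivity
}

for level, widths in sorted(ring_widths_by_level.items()):
    print(f"lake level {level} m: sensitivity = {mean_sensitivity(widths):.3f}")
```

Under this reading, the "rational" lake-level range would bracket the levels at which the sensitivity of growth to level change is lowest and highest, as the abstract describes.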

  8. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  9. Lake Chad, Chad, Africa

    Science.gov (United States)

    1992-01-01

Hydrologic and ecologic changes in the Lake Chad Basin are shown in this Oct 1992 photograph. In space photo documentation, Lake Chad was at its greatest area extent (25,000 sq. km.) during Gemini 9 in June 1966 (see S66-38444). Its reduction during the severe droughts from 1968 to 1974 was first noted during Skylab (1973-1974). After the drought began again in 1982, the lake reached its minimum extent (1,450 sq. km.) in Space Shuttle photographs taken in 1984 and 1985. In this STS-52 photograph, Lake Chad has begun to recover. The area of the open water and interdunal impoundments in the southern basin (the Chari River Basin) is estimated to be 1,900 to 2,100 sq. km. Note the green vegetation in the valley of the K'Yobe flow has wetted the northern lake basin for the first time in several years. There is evidence of biomass burning south of the K'Yobe Delta and in the vegetated interdunal areas near the dike in the center of the lake. Also note the dark 'Green Line' of the Sahel (the g

  10. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  11. Digital humanitarians how big data is changing the face of humanitarian response

    CERN Document Server

    Meier, Patrick

    2015-01-01

The Rise of Digital Humanitarians; Mapping Haiti Live; Supporting Search And Rescue Efforts; Preparing For The Long Haul; Launching An SMS Life Line; Sending In The Choppers; Openstreetmap To The Rescue; Post-Disaster Phase; The Human Story; Doing Battle With Big Data; Rise Of Digital Humanitarians; This Book And You; The Rise of Big (Crisis) Data; Big (Size) Data; Finding Needles In Big (Size) Data; Policy, Not Simply Technology; Big (False) Data; Unpacking Big (False) Data; Calling 991 And 999; Big (

  12. Big Data Provenance: Challenges, State of the Art and Opportunities.

    Science.gov (United States)

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
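The recording stage of the provenance lifecycle discussed above can be illustrated with a minimal sketch. The decorator, in-memory store, and toy workflow steps below are hypothetical illustrations, not the authors' workflow system or its API:

```python
import hashlib
import time

PROVENANCE = []  # stand-in for a queryable provenance store

def record_provenance(step):
    """Wrap a workflow step so every invocation logs what went in and what
    came out -- the 'recording' stage of the provenance lifecycle."""
    def wrapper(*args):
        out = step(*args)
        PROVENANCE.append({
            "step": step.__name__,
            "inputs": [repr(a) for a in args],
            "output_hash": hashlib.sha256(repr(out).encode()).hexdigest()[:12],
            "time": time.time(),
        })
        return out
    return wrapper

@record_provenance
def clean(values):
    # drop missing readings
    return [v for v in values if v is not None]

@record_provenance
def mean(values):
    return sum(values) / len(values)

result = mean(clean([1.0, None, 2.0, 3.0]))
print(result)                                   # 2.0
print([p["step"] for p in PROVENANCE])          # lineage: clean, then mean
```

Hashing outputs rather than storing them whole is one common way such records stay small enough to query at Big Data volumes, which is part of the lifecycle challenge the paper describes.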

  13. [Embracing medical innovation in the era of big data].

    Science.gov (United States)

    You, Suning

    2015-01-01

Along with the advent of the big data era worldwide, the medical field inevitably has to situate itself within it. This article introduces the basic knowledge of big data and points out that its advantages and disadvantages coexist. Although innovation in the medical field is a struggle, the current pattern of medicine will be changed fundamentally by big data. The article also shows how quickly the relevant analytics are changing in the big data era, sketches the promise of digital medicine, and offers some advice to surgeons.

  14. Big Data and Health Economics: Opportunities, Challenges and Risks

    Directory of Open Access Journals (Sweden)

    Diego Bodas-Sagi

    2018-03-01

Full Text Available Big Data offers opportunities in many fields. Healthcare is not an exception. In this paper we summarize the possibilities of Big Data and Big Data technologies for offering useful information to policy makers. In a world with tight public budgets and ageing populations, we feel it necessary to save costs in any production process. The use of outcomes from Big Data could in the future be a way to improve decisions at a lower cost than today. In addition to listing the advantages of properly using data and technologies from Big Data, we also show some challenges and risks that analysts could face. We also present a hypothetical example of the use of administrative records with health information, both for diagnoses and patients.

  15. Speaking sociologically with big data: symphonic social science and the future for big data research

    OpenAIRE

    Halford, Susan; Savage, Mike

    2017-01-01

    Recent years have seen persistent tension between proponents of big data analytics, using new forms of digital data to make computational and statistical claims about ‘the social’, and many sociologists sceptical about the value of big data, its associated methods and claims to knowledge. We seek to move beyond this, taking inspiration from a mode of argumentation pursued by Putnam (2000), Wilkinson and Pickett (2009) and Piketty (2014) that we label ‘symphonic social science’. This bears bot...

  16. Application and Exploration of Big Data Mining in Clinical Medicine.

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

To review the theories and technologies of big data mining and their application in clinical medicine. Literature published in English or Chinese regarding the theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine was obtained from PubMed and the Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural networks, genetic algorithms, inductive learning theory, Bayesian networks, decision trees, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance on the rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.

  17. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    Science.gov (United States)

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  18. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete excavations of major sites throughout the region. Although there are considerable advantages to such large-scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large-scale excavations in the Southeast in light of traditional views within the discipline that the region has contributed little to the 'big questions' of American archaeology. Recently published analyses of decades-old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre-existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  19. Lake Morphometry for NHD Lakes in Souris Red Rainy Region 9 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  20. Lake Morphometry for NHD Lakes in Arkansas White Red Region 11 HUC

    Data.gov (United States)

    U.S. Environmental Protection Agency — Lake morphometry metrics are known to influence productivity in lakes and are important for building various types of ecological and environmental models of lentic...

  1. A proposed framework of big data readiness in public sectors

    Science.gov (United States)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery in the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of the main items on the national agenda. Regardless of the government's commitment to promote big data amongst government agencies, the degree of readiness of the government agencies as well as their employees is crucial in ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate the perceived readiness for big data potentials amongst Malaysian government agencies. The perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness among public agencies regarding big data potentials and the expected outcome from greater/lower change readiness among the public sectors.

  2. Big data analytics to improve cardiovascular care: promise and challenges.

    Science.gov (United States)

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  3. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  4. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  5. Influence of permafrost on lake terraces of Lake Heihai (NE Tibetan Plateau)

    Science.gov (United States)

    Lockot, Gregori; Hartmann, Kai; Wünnemann, Bernd

    2013-04-01

The Tibetan Plateau (TP) is one of the key regions for global climatic change. Besides the poles, the TP is the third-largest store of frozen water, held in glaciers. Here, warming is three times faster than in the rest of the world. Additionally, the TP provides water for billions of people and influences the moisture availability from the Indian and East Asian monsoon systems. During the Holocene, the extent and intensity of the monsoonal systems changed. Hence, in the last decades, much work has been done to reconstruct the timing and frequency of monsoonal moisture, to understand the past and give a better forecast for the future. Comparative studies often show very heterogeneous patterns in the timing and frequency of the Holocene precipitation and temperature maxima, emphasizing the local importance of catchment dynamics. In this study we present first results from Lake Heihai (36°N, 93°15'E, 4500 m a.s.l.), situated at the north-eastern border of the TP. The lake is surrounded by a broad band of near-shore lake sediments, attesting to a larger lake extent in the past. These sediments were uplifted by permafrost and nowadays reach heights of ca. 8 meters above the present lake level. Due to the uplift, one of the main inflows was blocked and the whole hydrology of the catchment changed. To quantify the permafrost uplift, Hot Spot Analyses were performed on a DEM of the near-shore area. As a result, regions of high permafrost uplift and those that mirror the original height of the lake ground were revealed. The most obvious uplift took place in the northern and western parts of the lake, where the four uplift centers are located. In contrast, the southern and eastern areas show a rather degraded pattern (probably from fluvial erosion, thermokarst, etc.). The ancient lake bottom, without permafrost uplift, is estimated to lie 4-6 meters above the modern lake level. For a better understanding of permafrost interaction inside the terrace bodies, a 5 m sediment profile was sampled and
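Hot spot analysis of a raster like a DEM is commonly implemented with the Getis-Ord Gi* statistic, which z-scores each cell's local neighbourhood sum against the global mean. A minimal sketch, assuming binary square-window weights and a small hypothetical elevation grid (not the Lake Heihai data):

```python
import math

def gi_star(grid, i, j, radius=1):
    """Getis-Ord Gi* for cell (i, j): z-score of the local window sum
    relative to the global mean, with binary weights (1 inside the window)."""
    flat = [v for row in grid for v in row]
    n = len(flat)
    mean = sum(flat) / n
    s = math.sqrt(sum(v * v for v in flat) / n - mean ** 2)  # global std dev
    neigh = [grid[a][b]
             for a in range(max(0, i - radius), min(len(grid), i + radius + 1))
             for b in range(max(0, j - radius), min(len(grid[0]), j + radius + 1))]
    w = len(neigh)  # sum of binary weights
    denom = s * math.sqrt((n * w - w ** 2) / (n - 1))
    return (sum(neigh) - mean * w) / denom

# Hypothetical heights (m above modern lake level); the raised patch stands
# in for a permafrost uplift centre
dem = [[0, 0, 0, 0, 0],
       [0, 6, 7, 6, 0],
       [0, 7, 8, 7, 0],
       [0, 6, 7, 6, 0],
       [0, 0, 0, 0, 0]]
print(gi_star(dem, 2, 2))   # strongly positive: uplift hot spot
print(gi_star(dem, 0, 0))   # negative: not a hot spot
```

Cells whose Gi* exceeds a significance threshold (e.g. |z| > 1.96) would mark the uplift centres versus the unmodified lake floor.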

  6. BIG's framing of BIG

    DEFF Research Database (Denmark)

    Brodersen, Anne Mygind; Sørensen, Britta Vilhelmine; Seiding, Mette

    2008-01-01

Since Bjarke Ingels established the BIG (Bjarke Ingels Group) architectural firm in 2006, the company has succeeded in making itself heard and in attracting the attention of politicians and the media. BIG did so first and foremost by means of an overall approach to urban development that is both… close to the political powers that be, and gain their support, but also to attract attention in the public debate. We present the issues this way: How does BIG speak out for itself? How can we explain the way the company makes itself heard, based on an analysis of the big.dk web site, the Clover Block… by sidestepping the usual democratic process required for local plans. Politicians declared a positive interest in both the building project and a rapid decision process. However, local interest groups felt they were excluded from any influence regarding the proposal and launched a massive resistance campaign...

  7. Lake sediments as natural seismographs: Earthquake-related deformations (seismites) in central Canadian lakes

    Science.gov (United States)

    Doughty, M.; Eyles, N.; Eyles, C. H.; Wallace, K.; Boyce, J. I.

    2014-11-01

Central Canada experiences numerous intraplate earthquakes, but their recurrence and source areas remain obscure due to the shortness of the instrumental and historical records. Unconsolidated fine-grained sediments in lake basins are 'natural seismographs' with the potential to record ancient earthquakes during the last 10,000 years since the retreat of the Laurentide Ice Sheet. Many lake basins are cut into bedrock and are structurally controlled by the same Precambrian basement structures (shear zones, terrane boundaries and other lineaments) implicated as the source of ongoing mid-plate earthquake activity. Regional seismic sub-bottom profiling of lakes Gull, Muskoka, Joseph, Rousseau, Ontario, Wanapitei, Fairbanks, Vermilion, Nipissing, Georgian Bay, Mazinaw, Simcoe, Timiskaming, Kipawa, Parry Sound and Lake of Bays, encompassing a total of more than 2000 kilometres of high-resolution track-line data supplemented by multibeam and sidescan sonar survey records, shows a consistent sub-bottom stratigraphy of relatively thick lowermost lateglacial facies composed of interbedded semi-transparent mass flow facies (debrites, slumps) and rhythmically laminated silty clays. Mass flows, together with cratered ('kettled') lake floors and associated deformations, reflect a dynamic ice-contact glaciolacustrine environment. Exceptionally thick mass flow successions in Lake Timiskaming along the floor of the Timiskaming Graben, within the seismically active Western Quebec Seismic Zone (WQSZ), point to a higher frequency of earthquakes and slope failure during deglaciation and rapid glacio-isostatic rebound, though faulting continues into the postglacial. Lateglacial faulting, diapiric deformation and slumping of coeval lateglacial sediments are observed in Parry Sound, Lake Muskoka and Lake Joseph, which are all located above prominent Precambrian terrane boundaries. Lateglacial sediments are sharply overlain by relatively thin, rhythmically laminated and often semi

  8. Probing the pre-big bang universe

    International Nuclear Information System (INIS)

    Veneziano, G.

    2000-01-01

Superstring theory suggests a new cosmology whereby a long inflationary phase preceded a non-singular big bang-like event. After discussing how pre-big bang inflation naturally arises from an almost trivial initial state of the Universe, I will describe how present or near-future experiments can provide sensitive probes of how the Universe behaved in the pre-bang era.

  9. CERN: A big year for LEP

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

In April, this year's data-taking period for CERN's big LEP electron-positron collider got underway, and is scheduled to continue until November. The immediate objective of the four big experiments - Aleph, Delphi, L3 and Opal - will be to increase considerably their stock of carefully recorded Z decays, currently totalling about three-quarters of a million.

  10. Whiting in Lake Michigan

    Science.gov (United States)

    2002-01-01

Satellites provide a view from space of changes on the Earth's surface. This series of images from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) aboard the Orbview-2 satellite shows the dramatic change in the color of Lake Michigan during the summer. The bright color that appears in late summer is probably caused by calcium carbonate (chalk) in the water. Lake Michigan always has a lot of calcium carbonate in it because the floor of the lake is limestone. During most of the year the calcium carbonate remains dissolved in the cold water, but at the end of summer the lake warms up, lowering the solubility of calcium carbonate. As a result, the calcium carbonate precipitates out of the water, forming clouds of very small solid particles that appear as bright swirls from above. The phenomenon is appropriately called a whiting event. A similar event occurred in 1999, but appears to have started later and subsided earlier. It is also possible that a bloom of the algae Microcystis is responsible for the color change, but unlikely because of Lake Michigan's depth and size. Microcystis blooms have occurred in other lakes in the region, however. On the shore of the lake it is possible to see the cities of Chicago, Illinois, and Milwaukee, Wisconsin. Both appear as clusters of gray-brown pixels. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE

  11. Lake-0: A model for the simulation of nuclides transfer in lake scenarios

    International Nuclear Information System (INIS)

    Garcia-Olivares, A.; Aguero, A.; Pinedo, P.

    1994-01-01

This report presents documentation and a user's manual for the program LAKE-0, a mathematical model of nuclides transfer in lake scenarios. The mathematical equations and physical principles used to develop the code are presented in section 2. The use of the program is presented in section 3, including input data sets and output data. Section 4 presents two example problems and some results. The complete program listing, including comments, is presented in Appendix A. Nuclides are assumed to enter the lake via atmospheric deposition and to be carried by water runoff and by sediments dragged from the adjacent catchment. The dynamics of the nuclides inside the lake are based on the model proposed by Codell (11) as modified in (5). The removal of concentration from the lake water is due to outflow from the lake and to the transfer of activity to the bottom sediments. The model has been applied to Esthwaite Water (54°21'N, 03°00'W, at 65 m a.s.l.) in the frame of the VAMP Aquatic Working Group (8) and to Devoke Water (54°21.5'N, 03°18'W, at 230 m a.s.l.)
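The single-box structure described here (deposition input, removal by outflow and by transfer to bottom sediments) can be sketched as a simple mass-balance ODE. This is a hedged illustration of the general model class, not LAKE-0's actual code, and the parameter values are illustrative rather than the Esthwaite Water calibration:

```python
def lake_box_model(deposition, volume, outflow, k_sed, years, dt=0.1):
    """Single-box lake model: concentration C (Bq/m^3) driven by a constant
    atmospheric deposition flux, removed by outflow and sediment transfer:
        dC/dt = deposition/volume - (outflow/volume + k_sed) * C
    Integrated with forward Euler steps."""
    c, t, series = 0.0, 0.0, []
    while t < years:
        c += dt * (deposition / volume - (outflow / volume + k_sed) * c)
        t += dt
        series.append(c)
    return series

# Hypothetical parameters (illustrative only)
series = lake_box_model(deposition=1e6,   # Bq/yr deposited on the lake surface
                        volume=6.4e6,     # lake volume, m^3
                        outflow=1.3e7,    # m^3/yr leaving the lake
                        k_sed=0.5,        # 1/yr transfer rate to bottom sediments
                        years=10)
print(f"concentration after 10 yr ~ {series[-1]:.4f} Bq/m^3")
```

The concentration rises toward the steady state (deposition/volume) / (outflow/volume + k_sed), mirroring the balance between input and the two removal pathways described in the abstract.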

  12. LAKE-0: a model for the simulation of nuclides transfer in lake scenarios

    International Nuclear Information System (INIS)

    Garcia-Olivares, A.; Aguero, A.; Pinedo, P.

    1994-01-01

    This report presents documentation and a user's manual for the program LAKE-0, a mathematical model of nuclides transfer in lake scenarios. The mathematical equations and physical principles used to develop the code are presented in section 2. The use of the program is presented in section 3, including input data sets and output data. Section 4 presents two example problems and some results. The complete program listing, including comments, is presented in Appendix A. Nuclides are assumed to enter the lake via atmospheric deposition and to be carried in by water runoff and sediment transport from the adjacent catchment. The dynamics of the nuclides inside the lake are based on the model proposed by Codell (11) as modified in (5). Removal of activity from the lake water is due to outflow from the lake and to the transfer of activity to the bottom sediments. The model has been applied to Esthwaite Water (54°21'N, 03°00'W, at 65 m a.s.l.) within the framework of the VAMP Aquatic Working Group (8) and to Devoke Water (54°21.5'N, 03°18'W, at 230 m a.s.l.). (Author). 13 refs

  13. Development and evaluation of the Lake Multi-biotic Integrity Index for Dongting Lake, China

    Directory of Open Access Journals (Sweden)

    Xing Wang

    2015-06-01

    Full Text Available A Lake Multi-biotic Integrity Index (LMII for China’s second largest interior lake, Dongting Lake, was developed to assess water quality status using algal and macroinvertebrate metrics. Algae and benthic macroinvertebrate assemblages were sampled at 10 sections across 3 subregions of Dongting Lake. We used a stepwise process to evaluate properties of candidate metrics and selected ten for the LMII: Pampean diatom index, diatom quotient, trophic diatom index, relative abundance of diatoms, Margalef index of algae, percent sensitive diatoms, % facultative individuals, % Chironomidae individuals, % predator individuals, and total number of macroinvertebrate taxa. We then tested the accuracy and feasibility of the LMII by examining its correlation with physical-chemical parameters. Evaluation of the LMII showed that it discriminated well between reference and impaired sections and was strongly related to the major chemical and physical stressors (r = 0.766, P<0.001. The re-scored results from the 10 sections showed that the water quality of western Dongting Lake was good, that of southern Dongting Lake was relatively good, whereas that of eastern Dongting Lake was poor. The discriminatory biocriteria of the LMII are suitable for the assessment of the water quality of Dongting Lake. Additionally, more metrics relating to habitat, hydrology, physics and chemistry should be incorporated into the LMII, so as to establish a comprehensive assessment system that can reflect the community structure of aquatic organisms, the physical and chemical characteristics of the water environment, human activities, and so on.
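The multimetric-index approach behind the LMII can be sketched as: score each raw metric against reference-site expectations, then sum the scores into one site index. The metric names, reference ranges, and 0-10 scoring scheme below are hypothetical illustrations, not the published LMII values.

```python
def score_metric(value, ref_low, ref_high, decreases_with_impact=True):
    """Linearly rescale a raw metric to a 0-10 score relative to the
    range observed at reference (least-disturbed) sites."""
    frac = (value - ref_low) / (ref_high - ref_low)
    frac = min(max(frac, 0.0), 1.0)          # clamp to the reference range
    if not decreases_with_impact:            # metric rises as impairment rises
        frac = 1.0 - frac
    return 10.0 * frac

# Hypothetical raw metrics for one sampling section.
site = {"taxa_richness": 18, "pct_sensitive_diatoms": 35.0,
        "pct_chironomidae": 60.0}

# Higher index = closer to reference condition.
index = (score_metric(site["taxa_richness"], 5, 30)
         + score_metric(site["pct_sensitive_diatoms"], 10, 50)
         + score_metric(site["pct_chironomidae"], 20, 80,
                        decreases_with_impact=False))
```

The summed index can then be banded into quality classes (good / relatively good / poor) against discriminatory biocriteria, as the study does.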

  14. Research on the Impact of Big Data on Logistics

    Directory of Open Access Journals (Sweden)

    Wang Yaxing

    2017-01-01

    Full Text Available In the context of big data development, logistics enterprises generate large amounts of data, especially in logistics activities such as transportation, warehousing, and distribution. Based on an analysis of the characteristics of big data, this paper studies the impact of big data on logistics and its mechanism of action, and gives reasonable suggestions. By building a logistics data center using big data technology, hidden value information behind the data can be mined, from which logistics enterprises can benefit.

  15. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome the challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare.
This review study unveils that there is a paucity of information on evidence of real-world use of

  16. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration; Klimentov, Alexei; Korchuganova, Tatiana

    2017-01-01

    BigPanDA monitoring is a web-based application that provides various forms of processing and representation of the states of Production and Distributed Analysis (PanDA) system objects. By analyzing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a concrete event failure, or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this wor...
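The roll-up-then-drill-down pattern the abstract describes can be sketched with a toy aggregation. The record layout, site names, and error strings below are hypothetical illustrations, not the PanDA schema.

```python
from collections import Counter

# Hypothetical raw job records of the kind a monitor ingests.
jobs = [
    {"site": "CERN-P1", "state": "finished"},
    {"site": "CERN-P1", "state": "failed", "error": "lost heartbeat"},
    {"site": "BNL", "state": "failed", "error": "stage-out"},
    {"site": "BNL", "state": "running"},
]

# Top-level report: job-state counts per computing site.
summary = Counter((j["site"], j["state"]) for j in jobs)

# Drill-down: reasons for the failures at a single site.
bnl_errors = Counter(j["error"] for j in jobs
                     if j["site"] == "BNL" and j["state"] == "failed")
```

A real monitor does the same aggregation server-side over hundreds of millions of records and exposes each level as a linked web view.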

  17. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has recently been applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space are still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image-, signal-, and genomics-based analytics. Recent research that targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  18. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration

    2017-01-01

    BigPanDA monitoring is a web-based application that provides various forms of processing and representation of the states of Production and Distributed Analysis (PanDA) system objects. By analysing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a concrete event failure, or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and satellites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations and the source of decision-support information for shifters, operators and managers. In this work...

  19. Solution structure of leptospiral LigA4 Big domain

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Song; Zhang, Jiahai [Hefei National Laboratory for Physical Sciences at Microscale, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Xuecheng [School of Life Sciences, Anhui University, Hefei, Anhui 230039 (China); Tu, Xiaoming, E-mail: xmtu@ustc.edu.cn [Hefei National Laboratory for Physical Sciences at Microscale, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2015-11-13

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to that of other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca{sup 2+} binding property of the LigA4 Big domain. - Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca{sup 2+}-binding site was identified by Stains-all assay and NMR chemical shift perturbation.
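Chemical shift perturbation analyses like the one described here commonly combine the ¹H and ¹⁵N shift changes per residue into a single value. The sketch below uses the widely used weighted Euclidean formula with a 0.14 scaling factor for ¹⁵N; the peak values are hypothetical, and the actual weighting used in the study is not stated in the abstract.

```python
import math

def csp(d_h: float, d_n: float, n_weight: float = 0.14) -> float:
    """Combined 1H/15N chemical shift change (ppm) for one residue:
    sqrt(d_H^2 + (w * d_N)^2), with w scaling the wider 15N range."""
    return math.sqrt(d_h ** 2 + (n_weight * d_n) ** 2)

# A residue shifting 0.05 ppm in 1H and 0.4 ppm in 15N on Ca2+
# titration; residues with CSP above e.g. mean + 1 SD across the
# protein are flagged as candidate binding-site residues.
print(round(csp(0.05, 0.4), 3))  # → 0.075
```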

  20. Solution structure of leptospiral LigA4 Big domain

    International Nuclear Information System (INIS)

    Mei, Song; Zhang, Jiahai; Zhang, Xuecheng; Tu, Xiaoming

    2015-01-01

    Pathogenic Leptospira species express immunoglobulin-like proteins which serve as adhesins to bind to the extracellular matrices of host cells. Leptospiral immunoglobulin-like protein A (LigA), a surface-exposed protein containing tandem repeats of bacterial immunoglobulin-like (Big) domains, has been shown to be involved in the interaction of pathogenic Leptospira with the mammalian host. In this study, the solution structure of the fourth Big domain of LigA (LigA4 Big domain) from Leptospira interrogans was solved by nuclear magnetic resonance (NMR). The structure of the LigA4 Big domain displays a bacterial immunoglobulin-like fold similar to that of other Big domains, implying some common structural aspects of the Big domain family. On the other hand, it displays some structural characteristics significantly different from the classic Ig-like domain. Furthermore, a Stains-all assay and NMR chemical shift perturbation revealed the Ca²⁺ binding property of the LigA4 Big domain. - Highlights: • Determining the solution structure of a bacterial immunoglobulin-like domain from a surface protein of Leptospira. • The solution structure shows some structural characteristics significantly different from the classic Ig-like domains. • A potential Ca²⁺-binding site was identified by Stains-all assay and NMR chemical shift perturbation.