WorldWideScience

Sample records for program big spring

  1. Recharge Area, Base-Flow and Quick-Flow Discharge Rates and Ages, and General Water Quality of Big Spring in Carter County, Missouri, 2000-04

    Science.gov (United States)

    Imes, Jeffrey L.; Plummer, Niel; Kleeschulte, Michael J.; Schumacher, John G.

    2007-01-01

during the period of record (water years 1922 through 2004) was 1,170 cubic feet per second on December 7, 1982. The daily mean water temperature of Big Spring was monitored during water years 2001 through 2004 and showed little variability, ranging from 13 to 15 °C (degrees Celsius). Water temperatures generally vary less than 1 °C throughout the year. The warmest temperatures occur during October and November and decrease until April, indicating that Big Spring water temperature does show a slight seasonal variation. The use of the traditional hydrograph separation program HYSEP to determine the base-flow and quick-flow (runoff) components at Big Spring failed to yield base-flow and quick-flow discharge curves that matched observations of spring characteristics. Big Spring discharge data were used in combination with specific conductance data to develop an improved hydrograph separation method for the spring. The estimated annual mean quick flow ranged from 15 to 48 cubic feet per second for the HYSEP analysis and from 26 to 154 cubic feet per second for the discharge and specific conductance method for water years 2001 to 2004. Using the discharge and specific conductance method, the estimated base-flow component rises abruptly as the spring hydrograph rises, attains a peak value on the same day as the discharge peak, and then declines abruptly from its peak value. Several days later, base flow begins to increase again along an approximately linear trend, coinciding with the time at which the percentage of quick flow has reached a maximum after each recharge-induced discharge peak. The interval between the discharge peak and the peak in percentage quick flow ranges from 8 to 11 days for seven hydrograph peaks, consistent with quick-flow traveltime estimates from dye-trace tests in the mature karst Hurricane Creek Basin in the central part of the recharge area. Concentrations of environmental tracers chlorofluorocarbons (CFCs: CFC-11, CFC-12, CFC-113)
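The discharge-and-specific-conductance method described above is, at heart, a two-component mixing model. The abstract does not give the authors' exact formulation, so the sketch below shows only the standard mass-balance form such separations build on; the end-member conductance values are invented for illustration, not calibrated Big Spring values.

```python
# Two-component hydrograph separation using specific conductance (SC).
# End-member SC values below are hypothetical illustrations.
SC_BASE = 340.0   # uS/cm, assumed SC of pure base flow
SC_QUICK = 120.0  # uS/cm, assumed SC of pure quick flow (storm runoff)

def separate(q_total, sc_total):
    """Split total discharge into (base_flow, quick_flow) by mass balance."""
    frac_quick = (SC_BASE - sc_total) / (SC_BASE - SC_QUICK)
    frac_quick = min(max(frac_quick, 0.0), 1.0)  # clamp to physical range
    quick = q_total * frac_quick
    return q_total - quick, quick

# One daily value: 470 ft3/s total discharge at a measured SC of 290 uS/cm.
base, quick = separate(470.0, 290.0)
```

Applied day by day over a record, this is what lets the estimated base-flow curve track the discharge peak as the abstract describes, rather than following HYSEP's purely graphical separation.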

  2. Distribution and movement of Big Spring spinedace (Lepidomeda mollispinis pratensis) in Condor Canyon, Meadow Valley Wash, Nevada

    Science.gov (United States)

    Jezorek, Ian G.; Connolly, Patrick J.

    2013-01-01

Big Spring spinedace (Lepidomeda mollispinis pratensis) is a cyprinid whose entire population occurs within a section of Meadow Valley Wash, Nevada. Other spinedace species have suffered population and range declines (one species is extinct). Managers, concerned about the vulnerability of Big Spring spinedace, have considered habitat restoration actions or translocation, but they have lacked data on distribution and habitat use. Our study occurred in an 8.2-km section of Meadow Valley Wash, including about 7.2 km in Condor Canyon and 0.8 km upstream of the canyon. Big Spring spinedace were present upstream of the currently listed critical habitat, including in the tributary Kill Wash. We found no Big Spring spinedace in the lower 3.3 km of Condor Canyon. We tagged Big Spring spinedace ≥70 mm fork length (range 70–103 mm) with passive integrated transponder tags during October 2008 (n = 100) and March 2009 (n = 103) to document movement. At least 47 of these individuals moved from their release location (up to 2 km). Thirty-nine individuals moved to Kill Wash or its confluence area with Meadow Valley Wash. Ninety-three percent of movement occurred in spring 2009. Fish moved both upstream and downstream. We found no movement downstream over a small waterfall at river km 7.9 and recorded only one fish that moved downstream over Delmue Falls (a 12-m drop) at river km 6.1. We found no significant difference in fork length or condition at the time of tagging between Big Spring spinedace that were later detected moving and those not detected moving. Kill Wash and its confluence area appeared important to Big Spring spinedace; connectivity with these areas may be key to species persistence. These areas may provide a habitat template for restoration or translocation. The lower 3.3 km of

  3. Results from the Big Spring basin water quality monitoring and demonstration projects, Iowa, USA

    Science.gov (United States)

    Rowden, R.D.; Liu, H.; Libra, R.D.

    2001-01-01

Agricultural practices, hydrology, and water quality of the 267-km² Big Spring groundwater drainage basin in Clayton County, Iowa, have been monitored since 1981. Land use is agricultural; nitrate-nitrogen (nitrate-N) and herbicides are the resulting contaminants in groundwater and surface water. Ordovician Galena Group carbonate rocks comprise the main aquifer in the basin. Recharge to this karstic aquifer is by infiltration, augmented by sinkhole-captured runoff. Groundwater is discharged at Big Spring, where quantity and quality of the discharge are monitored. Monitoring has shown a threefold increase in groundwater nitrate-N concentrations from the 1960s to the early 1980s. The nitrate-N discharged from the basin typically is equivalent to over one-third of the nitrogen fertilizer applied, with larger losses during wetter years. Atrazine is present in groundwater all year; however, contaminant concentrations in the groundwater respond directly to recharge events, and unique chemical signatures of infiltration versus runoff recharge are detectable in the discharge from Big Spring. Education and demonstration efforts have reduced nitrogen fertilizer application rates by one-third since 1981. Relating declines in nitrate and pesticide concentrations to inputs of nitrogen fertilizer and pesticides at Big Spring is problematic. Annual recharge has varied five-fold during monitoring, overshadowing any water-quality improvements resulting from incrementally decreased inputs. © Springer-Verlag 2001.
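The comparison of nitrate-N discharged at the spring with nitrogen fertilizer applied rests on a simple load calculation: mean discharge times mean concentration, integrated over a year. A minimal sketch, with all numbers invented rather than taken from the Big Spring record:

```python
# Annual nitrate-N load from mean discharge and mean concentration.
# All values are hypothetical illustrations of the accounting, not
# measured Big Spring basin data.
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_load_kg(q_m3s, conc_mg_per_l):
    """Mass flux in kg/yr from discharge (m3/s) and concentration (mg/L)."""
    # m3/s * s/yr = m3/yr; 1 m3 = 1000 L; mg -> kg is a factor of 1e-6
    return q_m3s * SECONDS_PER_YEAR * conc_mg_per_l * 1000.0 * 1e-6

load_kg = annual_load_kg(q_m3s=2.5, conc_mg_per_l=10.0)
fertilizer_n_kg = 2_400_000.0        # hypothetical N applied in the basin
fraction_exported = load_kg / fertilizer_n_kg
```

Because the load scales directly with discharge, a five-fold swing in annual recharge can mask an incremental reduction in fertilizer inputs, which is the difficulty the abstract notes.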

  4. Spatial-Temporal Analysis on Spring Festival Travel Rush in China Based on Multisource Big Data

    Directory of Open Access Journals (Sweden)

    Jiwei Li

    2016-11-01

Spring Festival travel rush is a phenomenon in China in which population travel surges intensively over a short period around the Chinese Spring Festival. This phenomenon, a distinctive feature of China's urbanization process, imposes a large traffic burden and causes various social problems, thereby attracting widespread public concern. This study investigates the spatial-temporal characteristics of the 2015 Spring Festival travel rush through time series analysis and complex network analysis, based on multisource big travel data derived from Baidu, Tencent, and Qihoo. The main results are as follows: First, the Baidu and Tencent travel data, obtained from location-based services, may be more accurate and scientifically sound than the Qihoo data. Second, two travel peaks appeared five days before and six days after the Spring Festival, respectively, and the travel valley fell on the festival day itself. The Spring Festival travel network at the provincial scale did not have small-world or scale-free characteristics. Instead, the travel network showed a multicenter structure and significant geographic clustering, and some travel path chains played a leading role in the network. Third, economic and social factors had more influence on the travel network than geographical location. The problem of Spring Festival travel rush will not be effectively alleviated in the short term because of unbalanced urban-rural development and unbalanced regional development. However, the development of the modern high-speed transport system and modern information and communication technology can ease the problems brought by Spring Festival travel rush. We suggest that a unified real-time traffic platform for Spring Festival travel rush be established through the government's integration of mobile big data with the official authority data of the transportation department.
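The provincial travel network described above can be pictured as a directed, weighted graph of origin-destination flows. The sketch below uses invented trip counts as placeholders for the Baidu/Tencent/Qihoo migration data, and computes only simple in/out strengths rather than the paper's full complex-network analysis:

```python
# Toy province-to-province travel-flow network with invented counts.
from collections import defaultdict

flows = {  # (origin, destination): trips before the festival
    ("Guangdong", "Hunan"): 950,
    ("Guangdong", "Guangxi"): 720,
    ("Beijing", "Hebei"): 610,
    ("Shanghai", "Anhui"): 480,
    ("Hunan", "Guangdong"): 130,
}

out_strength = defaultdict(int)  # total trips leaving each province
in_strength = defaultdict(int)   # total trips entering each province
for (src, dst), trips in flows.items():
    out_strength[src] += trips
    in_strength[dst] += trips

# Net outflow identifies the dominant origins of the pre-holiday rush.
provinces = set(out_strength) | set(in_strength)
net_out = {p: out_strength[p] - in_strength[p] for p in provinces}
top_origin = max(net_out, key=net_out.get)
```

In the real data the direction of the dominant flows reverses after the festival, which is what produces the twin peaks the study reports.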

  5. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

Hauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  6. Technology Evaluation for the Big Spring Water Treatment System at the Y-12 National Security Complex, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Bechtel Jacobs Company LLC

    2002-01-01

The Y-12 National Security Complex (Y-12 Complex) is an active manufacturing and developmental engineering facility that is located on the U.S. Department of Energy (DOE) Oak Ridge Reservation. Building 9201-2 was one of the first process buildings constructed at the Y-12 Complex. Construction involved relocating and straightening of the Upper East Fork Poplar Creek (UEFPC) channel, adding large quantities of fill material to level areas along the creek, and pumping of concrete into sinkholes and solution cavities present within the limestone bedrock. Flow from a large natural spring designated as "Big Spring" on the original 1943 Stone and Webster Building 9201-2 Field Sketch FS6003 was captured and directed to UEFPC through a drainpipe designated Outfall 51. The building was used from 1953 to 1955 for pilot plant operations for an industrial process that involved the use of large quantities of elemental mercury. Past operations at the Y-12 Complex led to the release of mercury to the environment. Significant environmental media at the site were contaminated by accidental releases of mercury from the building process facilities piping and sumps associated with Y-12 Complex mercury handling facilities. Releases to the soil surrounding the buildings have resulted in significant levels of mercury in these areas of contamination, which is ultimately transported to UEFPC, its streambed, and off-site. Bechtel Jacobs Company LLC (BJC) is the DOE-Oak Ridge Operations prime contractor responsible for conducting environmental restoration activities at the Y-12 Complex. In order to mitigate the mercury being released to UEFPC, the Big Spring Water Treatment System will be designed and constructed as a Comprehensive Environmental Response, Compensation, and Liability Act action. This facility will treat the combined flow from Big Spring feeding Outfall 51 and the inflow now being processed at the East End Mercury Treatment System (EEMTS). Both discharge to UEFPC adjacent to

  7. A Big Data Analytics Methodology Program in the Health Sector

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Howell-Barber, H.

    2016-01-01

    The benefits of Big Data Analytics are cited frequently in the literature. However, the difficulties of implementing Big Data Analytics can limit the number of organizational projects. In this study, the authors evaluate business, procedural and technical factors in the implementation of Big Data Analytics, applying a methodology program. Focusing…

  8. Spring 5 & reactive streams

    CERN Multimedia

    CERN. Geneva; Clozel, Brian

    2017-01-01

    Spring is a framework widely used by the world-wide Java community, and it is also extensively used at CERN. The accelerator control system is constituted of 10 million lines of Java code, spread across more than 1000 projects (jars) developed by 160 software engineers. Around half of this (all server-side Java code) is based on the Spring framework. Warning: the speakers will assume that people attending the seminar are familiar with Java and Spring’s basic concepts. Spring 5.0 and Spring Boot 2.0 updates (45 min) This talk will cover the big ticket items in the 5.0 release of Spring (including Kotlin support, @Nullable and JDK9) and provide an update on Spring Boot 2.0, which is scheduled for the end of the year. Reactive Spring (1h) Spring Framework 5.0 has been released - and it now supports reactive applications in the Spring ecosystem. During this presentation, we'll talk about the reactive foundations of Spring Framework with the Reactor project and the reactive streams specification. We'll al...

  9. Evolution of the Air Toxics under the Big Sky Program

    Science.gov (United States)

    Marra, Nancy; Vanek, Diana; Hester, Carolyn; Holian, Andrij; Ward, Tony; Adams, Earle; Knuth, Randy

    2011-01-01

    As a yearlong exploration of air quality and its relation to respiratory health, the "Air Toxics Under the Big Sky" program offers opportunities for students to learn and apply science process skills through self-designed inquiry-based research projects conducted within their communities. The program follows a systematic scope and sequence…

  10. Big Bayou Creek and Little Bayou Creek Watershed Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Kszos, L.A.; Peterson, M.J.; Ryon; Smith, J.G.

    1999-03-01

Biological monitoring of Little Bayou and Big Bayou creeks, which border the Paducah Site, has been conducted since 1987. Biological monitoring was conducted by the University of Kentucky from 1987 to 1991 and by staff of the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) from 1991 through March 1999. In March 1998, renewed Kentucky Pollutant Discharge Elimination System (KPDES) permits were issued to the US Department of Energy (DOE) and US Enrichment Corporation. The renewed DOE permit requires that a watershed monitoring program be developed for the Paducah Site within 90 days of the effective date of the renewed permit. This plan outlines the sampling and analysis that will be conducted for the watershed monitoring program. The objectives of the watershed monitoring are to (1) determine whether discharges from the Paducah Site and the Solid Waste Management Units (SWMUs) associated with the Paducah Site are adversely affecting instream fauna, (2) assess the ecological health of Little Bayou and Big Bayou creeks, (3) assess the degree to which abatement actions ecologically benefit Big Bayou Creek and Little Bayou Creek, (4) provide guidance for remediation, (5) provide an evaluation of changes in potential human health concerns, and (6) provide data which could be used to assess the impact of inadvertent spills or fish kills. According to the renewed permit, cleanup will result in these watersheds [Big Bayou and Little Bayou creeks] achieving compliance with the applicable water quality criteria.

  11. Measuring the efficacy of a wildfire education program in Colorado Springs.

    Science.gov (United States)

    G.H. Donovan; P.A. Champ; D.T. Butry

    2007-01-01

    We examine an innovative wildfire risk education program in Colorado Springs, which rated the wildfire risk of 35,000 homes in the city's wildland urban interface. Evidence from home sales before and after the program's implementation suggests that the program was successful at changing homebuyers' attitudes toward wildfire risk, particularly preferences...

  12. Hydrogeology, geochemistry, and quality of water of The Basin and Oak Spring areas of the Chisos Mountains, Big Bend National Park, Texas

    Science.gov (United States)

    Baker, E.T.; Buszka, P.M.

    1993-01-01

    Test drilling near two sewage lagoons in The Basin area of the Chisos Mountains, Big Bend National Park, Texas, has shown that the alluvium and colluvium on which the lagoons are located is not saturated in the immediate vicinity of the lagoons. A shallow aquifer, therefore, does not exist in this critical area at and near the lagoons. Should seepage outflow from the lagoons occur, the effluent from the lagoons might eventually be incorporated into shallow ground water moving westward in the direction of Oak Spring. Under these conditions such water could reach the spring. Test borings that bottomed in bedrock below the alluvial and colluvial fill material are dry, indicating that no substantial leakage from the lagoons was detected. Therefore, no contaminant plume was identified. Fill material in The Basin does not contain water everywhere in its extensive outcropping area and supplies only a small quantity of ground water to Window Pouroff, which is the only natural surface outlet of The Basin.

  13. Tucannon River Spring Chinook Salmon Captive Brood Program, FY 2000 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bumgarner, Joseph D.; Gallinat, Michael P.

    2001-06-01

This report summarizes the objectives, tasks, and accomplishments of the Tucannon River spring chinook captive brood program from program inception (1997) through April 2001. The WDFW initiated a captive broodstock program in 1997. The overall goal of the Tucannon River captive broodstock program is the short-term, and eventually long-term, rebuilding of the Tucannon River spring chinook salmon run, with the hope that natural production will eventually sustain itself. The project goal is to rear captive salmon to adults, spawn them, rear their progeny, and release approximately 150,000 smolts annually into the Tucannon River between 2003-2007. These smolt releases, in combination with the current hatchery supplementation program (132,000 smolts) and wild production, are expected to produce 600-700 returning adult spring chinook to the Tucannon River each year from 2005-2010. The Master Plan, Environmental Assessment, and most facility modifications at LFH were completed for the Tucannon River spring chinook captive broodstock program during FY2000 and FY2001. DNA samples collected since 1997 have been sent to the WDFW genetics lab in Olympia for baseline DNA analysis. Results from the genetic analysis are not available at this time. The captive broodstock program is planned to collect fish from five (1997-2001) brood years (BY). The captive broodstock program was initiated with 1997 BY juveniles, and the 2000 BY fish have been selected. As of April 30, 2001, WDFW has 172 BY 1997, 262 BY 1998, 407 BY 1999, and approximately 1,190 BY 2000 fish on hand at LFH. Twelve of 13 mature 97 BY females were spawned in 2000. Total eggtake was 14,813. Mean fecundity was 1,298 eggs/female based on 11 fully spawned females. Egg survival to eye-up was 47.3%. This low survival was expected for three-year-old captive broodstock females. As of April 30, 2001, WDFW has 4,211 captive broodstock progeny on hand. These fish will be tagged with blank wire tag without fin clips and

  14. Big George to Carter Mountain 115-kV transmission line project, Park and Hot Springs Counties, Wyoming. Environmental Assessment

    Energy Technology Data Exchange (ETDEWEB)

    1994-02-01

    The Western Area Power Administration (Western) is proposing to rebuild, operate, and maintain a 115-kilovolt (kV) transmission line between the Big George and Carter Mountain Substations in northwest Wyoming (Park and Hot Springs Counties). This environmental assessment (EA) was prepared in compliance with the National Environmental Policy Act (NEPA) and the regulations of the Council on Environmental Quality (CEQ) and the Department of Energy (DOE). The existing Big George to Carter Mountain 69-kV transmission line was constructed in 1941 by the US Department of Interior, Bureau of Reclamation, with 1/0 copper conductor on wood-pole H-frame structures without an overhead ground wire. The line should be replaced because of the deteriorated condition of the wood-pole H-frame structures. Because the line lacks an overhead ground wire, it is subject to numerous outages caused by lightning. The line will be 54 years old in 1995, which is the target date for line replacement. The normal service life of a wood-pole line is 45 years. Under the No Action Alternative, no new transmission lines would be built in the project area. The existing 69-kV transmission line would continue to operate with routine maintenance, with no provisions made for replacement.

  15. Supporting Imagers' VOICE: A National Training Program in Comparative Effectiveness Research and Big Data Analytics.

    Science.gov (United States)

    Kang, Stella K; Rawson, James V; Recht, Michael P

    2017-12-05

Provided with methodologic training, more imagers can contribute to the evidence base on improved health outcomes and value in diagnostic imaging. The Value of Imaging Through Comparative Effectiveness Research Program was developed to provide hands-on, practical training in five core areas for comparative effectiveness and big biomedical data research: decision analysis, cost-effectiveness analysis, evidence synthesis, big data principles, and applications of big data analytics. The program's mixed format consists of web-based modules for asynchronous learning as well as in-person sessions for practical skills and group discussion. Seven diagnostic radiology subspecialties and cardiology are represented in the first group of program participants, showing the collective potential for greater depth of comparative effectiveness research in the imaging community. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
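Of the five core areas listed, cost-effectiveness analysis has a particularly compact worked form: the incremental cost-effectiveness ratio (ICER). A toy sketch follows, with all costs, quality-adjusted life years (QALYs), and the willingness-to-pay threshold invented for illustration:

```python
# Incremental cost-effectiveness ratio (ICER) for two imaging strategies.
# Every number here is hypothetical.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=12_000.0, qaly_new=8.1,
             cost_old=9_000.0, qaly_old=8.0)
WTP_THRESHOLD = 50_000.0  # $ per QALY, a commonly cited benchmark
cost_effective = ratio <= WTP_THRESHOLD
```

In a full decision analysis the QALY and cost inputs would themselves come from a decision tree or Markov model, which is the modeling skill such programs teach.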

  16. The plumbing system of the Pagosa thermal Springs, Colorado: Application of geologically constrained geophysical inversion and data fusion

    Science.gov (United States)

    Revil, A.; Cuttler, S.; Karaoulis, M.; Zhou, J.; Raynolds, B.; Batzle, M.

    2015-06-01

Fault and fracture networks usually provide the plumbing for the movement of hydrothermal fluids in geothermal fields. The Big Springs of Pagosa Springs in Colorado is known as the deepest geothermal hot spring in the world. However, little is known about the plumbing system of this hot spring, especially regarding the position of the reservoir (if any) or the position of the major tectonic faults controlling the flow of thermal water in this area. The Mancos shale, a Cretaceous shale, dominates many of the surface expressions around the springs and impedes easy recognition of the fault network. We use three geophysical methods (DC resistivity, self-potential, and seismic) to image the faults in this area, most of which are not recognized in the geologic fault map of the region. Results from these surveys indicate that the hot springs (the Big Spring and a warm spring located 1.8 km farther south) are located at the intersection of the Victoire Fault, a major normal crustal fault, and two north-northeast-trending faults (Faults A and B). Self-potential and DC resistivity tomographies can be combined, and a set of joint attributes defined, to determine the localization of the flow of hot water associated with the Eight Miles Mesa Fault, a second major tectonic feature responsible for the occurrence of warm springs farther west and south of the Big Springs of Pagosa Springs.

  17. Deterministic Line-Shape Programming of Silicon Nanowires for Extremely Stretchable Springs and Electronics.

    Science.gov (United States)

    Xue, Zhaoguo; Sun, Mei; Dong, Taige; Tang, Zhiqiang; Zhao, Yaolong; Wang, Junzhuan; Wei, Xianlong; Yu, Linwei; Chen, Qing; Xu, Jun; Shi, Yi; Chen, Kunji; Roca I Cabarrocas, Pere

    2017-12-13

    Line-shape engineering is a key strategy to endow extra stretchability to 1D silicon nanowires (SiNWs) grown with self-assembly processes. We here demonstrate a deterministic line-shape programming of in-plane SiNWs into extremely stretchable springs or arbitrary 2D patterns with the aid of indium droplets that absorb amorphous Si precursor thin film to produce ultralong c-Si NWs along programmed step edges. A reliable and faithful single run growth of c-SiNWs over turning tracks with different local curvatures has been established, while high resolution transmission electron microscopy analysis reveals a high quality monolike crystallinity in the line-shaped engineered SiNW springs. Excitingly, in situ scanning electron microscopy stretching and current-voltage characterizations also demonstrate a superelastic and robust electric transport carried by the SiNW springs even under large stretching of more than 200%. We suggest that this highly reliable line-shape programming approach holds a strong promise to extend the mature c-Si technology into the development of a new generation of high performance biofriendly and stretchable electronics.

  18. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    Science.gov (United States)

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework, and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g., grid computing and the graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing, by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by
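The Map and Reduce phases named in the abstract can be sketched in a few lines of pure Python using the canonical word-count example; the input records below are invented text fragments. A real Hadoop job runs the same two phases distributed across cluster nodes, with HDFS providing the fault-tolerant storage.

```python
# Word count expressed as Map and Reduce phases in pure Python.
from itertools import groupby
from operator import itemgetter

records = ["lab result normal", "lab result high", "lab value high"]

def map_phase(record):
    for word in record.split():
        yield (word, 1)  # emit intermediate (key, value) pairs

# Shuffle/sort: group all intermediate pairs by key, as Hadoop does
# between the Map and Reduce phases.
pairs = sorted((kv for r in records for kv in map_phase(r)),
               key=itemgetter(0))

def reduce_phase(key, values):
    return key, sum(values)  # aggregate all values for one key

counts = dict(reduce_phase(k, (v for _, v in grp))
              for k, grp in groupby(pairs, key=itemgetter(0)))
```

Because each mapper sees only its own records and each reducer only its own key group, the two phases parallelize independently, which is the source of MapReduce's scalability.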

  19. Tucannon River spring chinook salmon captive brood program, FY 2000 annual report; ANNUAL

    International Nuclear Information System (INIS)

    Bumgarner, Joseph D.; Gallinat, Michael P.

    2001-01-01

This report summarizes the objectives, tasks, and accomplishments of the Tucannon River spring chinook captive brood program from program inception (1997) through April 2001. The WDFW initiated a captive broodstock program in 1997. The overall goal of the Tucannon River captive broodstock program is the short-term, and eventually long-term, rebuilding of the Tucannon River spring chinook salmon run, with the hope that natural production will eventually sustain itself. The project goal is to rear captive salmon to adults, spawn them, rear their progeny, and release approximately 150,000 smolts annually into the Tucannon River between 2003-2007. These smolt releases, in combination with the current hatchery supplementation program (132,000 smolts) and wild production, are expected to produce 600-700 returning adult spring chinook to the Tucannon River each year from 2005-2010. The Master Plan, Environmental Assessment, and most facility modifications at LFH were completed for the Tucannon River spring chinook captive broodstock program during FY2000 and FY2001. DNA samples collected since 1997 have been sent to the WDFW genetics lab in Olympia for baseline DNA analysis. Results from the genetic analysis are not available at this time. The captive broodstock program is planned to collect fish from five (1997-2001) brood years (BY). The captive broodstock program was initiated with 1997 BY juveniles, and the 2000 BY fish have been selected. As of April 30, 2001, WDFW has 172 BY 1997, 262 BY 1998, 407 BY 1999, and approximately 1,190 BY 2000 fish on hand at LFH. Twelve of 13 mature 97 BY females were spawned in 2000. Total eggtake was 14,813. Mean fecundity was 1,298 eggs/female based on 11 fully spawned females. Egg survival to eye-up was 47.3%. This low survival was expected for three-year-old captive broodstock females. As of April 30, 2001, WDFW has 4,211 captive broodstock progeny on hand. These fish will be tagged with blank wire tag without fin clips and

  20. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

Recent technological advances allow for high-throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us into the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
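The review's distinction between "unsupervised" and "supervised" machine learning can be shown on toy one-dimensional expression values. This pure-Python sketch (invented data; a stand-in for the R packages the review surveys) contrasts clustering, which ignores labels, with classification, which uses them:

```python
# Toy 1-D "expression values": three low and three high samples.
samples = [0.9, 1.1, 1.0, 5.0, 5.2, 4.8]
labels = ["low", "low", "low", "high", "high", "high"]

# Unsupervised: assign each sample to the nearer of two cluster seeds
# (a single assignment pass of 2-means; no labels are consulted).
c0, c1 = samples[0], samples[3]
cluster = [0 if abs(x - c0) < abs(x - c1) else 1 for x in samples]

# Supervised: nearest-class-mean classification using the known labels.
def class_mean(name):
    vals = [x for x, lab in zip(samples, labels) if lab == name]
    return sum(vals) / len(vals)

means = {name: class_mean(name) for name in ("low", "high")}

def predict(x):
    return min(means, key=lambda name: abs(x - means[name]))
```

The unsupervised pass discovers the two groups from the data alone; the supervised model can then assign a new sample to a named class.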

  1. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  2. Integrated plant-safety assessment, Systematic Evaluation Program: Big Rock Point Plant (Docket No. 50-155)

    International Nuclear Information System (INIS)

    1983-09-01

    The Systematic Evaluation Program was initiated in February 1977 by the US Nuclear Regulatory Commission to review the designs of older operating nuclear reactor plants to reconfirm and document their safety. This report documents the review of the Big Rock Point Plant, which is one of ten plants reviewed under Phase II of this program. This report indicates how 137 topics selected for review under Phase I of the program were addressed. It also addresses a majority of the pending licensing actions for Big Rock Point, which include TMI Action Plan requirements and implementation criteria for resolved generic issues. Equipment and procedural changes have been identified as a result of the review

  3. Executive summary: Weldon Spring Site Environmental Report for calendar year 1992. Weldon Spring Site Remedial Action Project, Weldon Spring, Missouri

    Energy Technology Data Exchange (ETDEWEB)

    1993-06-01

    This report has been prepared to provide information about the public safety and environmental protection programs conducted by the Weldon Spring Site Remedial Action Project. The Weldon Spring site is located in southern St. Charles County, Missouri, approximately 48 km (30 mi) west of St. Louis. The site consists of two main areas, the Weldon Spring Chemical Plant and raffinate pits and the Weldon Spring Quarry. The objectives of the Site Environmental Report are to present a summary of data from the environmental monitoring program, to characterize trends and environmental conditions at the site, and to confirm compliance with environmental and health protection standards and requirements. The report also presents the status of remedial activities and the results of monitoring these activities to assess their impacts on the public and environment. The scope of the environmental monitoring program at the Weldon Spring site has changed since it was initiated. Previously, the program focused on investigations of the extent and level of contaminants in the groundwater, surface waters, buildings, and air at the site. In 1992, the level of remedial activities required monitoring for potential impacts of those activities, particularly on surface water runoff and airborne effluents. This report includes monitoring data from routine radiological and nonradiological sampling activities. These data include estimates of dose to the public from the Weldon Spring site; estimates of effluent releases; and trends in groundwater contaminant levels. Also, applicable compliance requirements, quality assurance programs, and special studies conducted in 1992 to support environmental protection programs are reviewed.

  4. Pro Spring Batch

    CERN Document Server

    Minella, Michael T

    2011-01-01

    Since its release, Spring Framework has transformed virtually every aspect of Java development including web applications, security, aspect-oriented programming, persistence, and messaging. Spring Batch, one of its newer additions, now brings the same familiar Spring idioms to batch processing. Spring Batch addresses the needs of any batch process, from the complex calculations performed in the biggest financial institutions to simple data migrations that occur with many software development projects. Pro Spring Batch is intended to answer three questions: What? What is batch processing? What

  5. Tucannon River Spring Chinook Salmon Captive Broodstock Program, Annual Report 2001.

    Energy Technology Data Exchange (ETDEWEB)

    Gallinat, Michael P.; Bumgarner, Joseph D.

    2002-05-01

    This report summarizes the objectives, tasks, and accomplishments of the Tucannon River spring chinook captive broodstock program during 2001. The WDFW initiated a captive broodstock program in 1997. The overall goal of the Tucannon River captive broodstock program is the short-term, and eventually long-term, rebuilding of the Tucannon River spring chinook salmon run, with the hope that natural production will sustain itself. The project goal is to rear captive salmon selected from the supplementation program to adults, spawn them, rear their progeny, and release approximately 150,000 smolts annually into the Tucannon River between 2003-2007. These smolt releases, in combination with the current hatchery supplementation program (132,000 smolts) and wild production, are expected to produce 600-700 returning adult spring chinook to the Tucannon River each year from 2005-2010. The captive broodstock program will collect fish from five (1997-2001) brood years (BY). The captive broodstock program was initiated with 1997 BY juveniles, and the 2001 BY fish have been selected. As of January 1, 2002, WDFW has 17 BY 1997, 159 BY 1998, 316 BY 1999, 448 BY 2000, and approximately 1,200 BY 2001 fish on hand at LFH. The 2001 eggtake from the 1997 brood year (Age 4) was 233,894 eggs from 125 ripe females. Egg survival was 69%. Mean fecundity based on the 105 fully spawned females was 1,990 eggs/female. The 2001 eggtake from the 1998 brood year (Age 3) was 47,409 eggs from 41 ripe females. Egg survival was 81%. Mean fecundity based on the 39 fully spawned females was 1,160 eggs/female. The total 2001 eggtake from the captive brood program was 281,303 eggs. As of May 1, 2002, we have 171,495 BY 2001 captive brood progeny on hand. A total of 20,592 excess fish were marked as parr (AD/CWT) and will be released during early May 2002 into the Tucannon River (rkm 40-45). This will allow us to stay within our maximum allowed number (150,000) of smolts released. During April 2002, WDFW volitionally

  6. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? What are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support in the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  7. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
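To make the abstract's notion of a declarative data-quality rule concrete, here is a small sketch of one such rule, a functional dependency checked by enumerating tuple pairs. This is not BigDansing's actual API (the system is JVM-based and distributed); the function name, rule, and sample rows are all illustrative.

```python
# Hypothetical sketch of a data-quality rule of the kind the abstract describes:
# a functional dependency zip -> city, violated by tuple pairs that agree on zip
# but differ on city. Grouping by zip first mimics the "shared scan" idea of
# touching each tuple once before pairwise comparison.
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return pairs of row indices violating the functional dependency lhs -> rhs."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[row[lhs]].append(i)          # one scan to bucket rows by lhs value
    violations = []
    for idxs in groups.values():
        for a in idxs:                      # pairwise check only within a bucket
            for b in idxs:
                if a < b and rows[a][rhs] != rows[b][rhs]:
                    violations.append((a, b))
    return violations

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},        # violates zip -> city
    {"zip": "60601", "city": "Chicago"},
]
print(fd_violations(rows, "zip", "city"))   # -> [(0, 1)]
```

The bucket-then-compare structure is what makes pair enumeration tractable at scale: comparisons happen only inside groups sharing a left-hand-side value, not across the full cross product.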

  8. Big Java late objects

    CERN Document Server

    Horstmann, Cay S

    2012-01-01

    Big Java: Late Objects is a comprehensive introduction to Java and computer programming, which focuses on the principles of programming, software engineering, and effective learning. It is designed for a two-semester first course in programming for computer science students.

  9. Report on the FY 1998 survey for preservation of Jozankei Hot Spring. Hot spring variation survey; 1998 nendo Jozankei onsen hozen chosa. Onsen hendo chosa hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-09-01

    As part of the FY 1998 survey for preservation of Jozankei Hot Spring, a study was conducted to grasp the state of variation in hot spring constituents in the area and to elucidate the causes of the variation. During the period from October 27, 1998 to August 28, 1999, the following were carried out: sampling of spring water at 6 spring sources, river water at 2 points, and precipitation at 2 points; measurement of temperature, spring temperature, pH, electric conductivity, etc.; and analyses of Na, Ca, Cl, HCO{sub 3}, SiO{sub 2}, etc. The results of the analysis are as follows. For spring sources A-2, A-7, and B-1, precipitation or river flow rate appears to strongly affect the variation in measured hot spring values. For spring sources A-6 and B-4, the relation to precipitation or river flow rate is not clear, but a large change is recognized in the snow-melting season. The difference between the two variation patterns appears to arise from whether a spring is connected to the river water through a crack system reaching the river, or is closed off at the ground surface. The temperature variation of the springs was considered to be affected by river water flowing into them. (NEDO)
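Major-ion analyses like those listed above (Na, Ca, Cl, HCO3) are routinely quality-checked with a charge-balance calculation. The sketch below shows that standard check; the equivalent weights are textbook values, and the sample concentrations are illustrative, not measurements from this survey.

```python
# Charge-balance check commonly applied to major-ion water analyses.
# Equivalent weight = molar mass / charge (mg per milliequivalent).
EQ_WEIGHT = {"Na": 22.99, "Ca": 40.08 / 2, "Cl": 35.45, "HCO3": 61.02}

def charge_balance_error(cations_mgl, anions_mgl):
    """Percent charge-balance error from concentrations in mg/L."""
    cat = sum(c / EQ_WEIGHT[ion] for ion, c in cations_mgl.items())  # meq/L
    an = sum(c / EQ_WEIGHT[ion] for ion, c in anions_mgl.items())    # meq/L
    return 100.0 * (cat - an) / (cat + an)

# Illustrative, exactly balanced sample: 1 meq/L of each ion.
cbe = charge_balance_error({"Na": 22.99, "Ca": 20.04},
                           {"Cl": 35.45, "HCO3": 61.02})
print(round(cbe, 1))  # -> 0.0
```

An absolute error within a few percent is the usual acceptance criterion for an analysis; larger errors suggest a missing ion or an analytical problem.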

  10. Grande Ronde Endemic Spring Chinook Salmon Supplementation Program: Facility Operation and Maintenance and Monitoring and Evaluation, 2000 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Boe, Stephen J.; Lofy, Peter T. (Confederated Tribes of the Umatilla Indian Reservation, Pendleton, OR)

    2003-03-01

    This is the third annual report of a multi-year project to operate adult collection and juvenile acclimation facilities on Catherine Creek and the upper Grande Ronde River for Snake River spring chinook salmon. These two streams have historically supported populations that provided significant tribal and non-tribal fisheries. Supplementation using conventional and captive broodstock techniques is being used to restore fisheries in these streams. Statement of Work Objectives for 2000: (1) Participate in implementation of the comprehensive multiyear operations plan for the Grande Ronde Endemic Spring Chinook Supplementation Program (GRESCP). (2) Plan for recovery of endemic summer steelhead populations in Catherine Creek and the upper Grande Ronde River. (3) Ensure proper construction and trial operation of semi-permanent adult and juvenile facilities for use in 2000. (4) Collect summer steelhead. (5) Collect adult endemic spring chinook salmon broodstock. (6) Acclimate juvenile spring chinook salmon prior to release into the upper Grande Ronde River and Catherine Creek. (7) Document accomplishments and needs to permitters, comanagers, and funding agency. (8) Communicate project results to the scientific community. (9) Plan detailed GRESCP Monitoring and Evaluation for future years. (10) Monitor adult population abundance and characteristics of Grande Ronde River spring chinook salmon populations and incidentally-caught summer steelhead and bull trout. (11) Monitor condition, movement, and mortality of spring chinook salmon acclimated at remote facilities. (12) Monitor water quality at facilities. (13) Participate in Monitoring & Evaluation of the captive brood component of the Program to document contribution to the Program.

  11. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  12. Evaluating connection of aquifers to springs and streams, Great Basin National Park and vicinity, Nevada

    Science.gov (United States)

    Prudic, David E.; Sweetkind, Donald S.; Jackson, Tracie R.; Dotson, K. Elaine; Plume, Russell W.; Hatch, Christine E.; Halford, Keith J.

    2015-12-22

    Federal agencies that oversee land management for much of the Snake Range in eastern Nevada, including the management of Great Basin National Park by the National Park Service, need to understand the potential extent of adverse effects to federally managed lands from nearby groundwater development. As a result, this study was developed (1) to attain a better understanding of aquifers controlling groundwater flow on the eastern side of the southern part of the Snake Range and their connection with aquifers in the valleys, (2) to evaluate the relation between surface water and groundwater along the piedmont slopes, (3) to evaluate sources for Big Springs and Rowland Spring, and (4) to assess groundwater flow from southern Spring Valley into northern Hamlin Valley. The study focused on two areas: the first, a northern area along the east side of Great Basin National Park that included Baker, Lehman, and Snake Creeks, and the second, a southern area that is the potential source area for Big Springs. Data collected specifically for this study included the following: (1) geologic field mapping; (2) drilling, testing, and water quality sampling from 7 test wells; (3) measuring discharge and water chemistry of selected creeks and springs; (4) measuring streambed hydraulic gradients and seepage rates from 18 shallow piezometers installed into the creeks; and (5) monitoring stream temperature along selected reaches to identify places of groundwater inflow.

  13. Tucannon River Spring Chinook Salmon Captive Broodstock Program, Annual Report 2002.

    Energy Technology Data Exchange (ETDEWEB)

    Gallinat, Michael; Varney, Michelle

    2003-05-01

    This report summarizes the objectives, tasks, and accomplishments of the Tucannon River Spring Chinook Captive Broodstock Program during 2002. The WDFW initiated a captive broodstock program in 1997. The overall goal of the Tucannon River captive broodstock program is the short-term, and eventually long-term, rebuilding of the Tucannon River spring chinook salmon run, with the hope that natural production will sustain itself. The project goal is to rear captive salmon selected from the supplementation program to adults, spawn them, rear their progeny, and release approximately 150,000 smolts annually into the Tucannon River between 2003-2007. These smolt releases, in combination with the current hatchery supplementation program (132,000 smolts) and wild production, are expected to produce 600-700 returning adult spring chinook to the Tucannon River each year from 2005-2010. The captive broodstock program collected fish from five (1997-2001) brood years (BY). As of January 1, 2003, WDFW has approximately 11 BY 1998, 194 BY 1999, 314 BY 2000, 447 BY 2001, and 300 BY 2002 (for extra males) fish on hand at LFH. The 2002 eggtake from the 1997 brood year (Age 5) was 13,176 eggs from 10 ripe females. Egg survival was 22%. Mean fecundity based on the 5 fully spawned females was 1,803 eggs/female. The 2002 eggtake from the 1998 brood year (Age 4) was 143,709 eggs from 93 ripe females. Egg survival was 29%. Mean fecundity based on the 81 fully spawned females was 1,650 eggs/female. The 2002 eggtake from the 1999 brood year (Age 3) was 19,659 eggs from 18 ripe females. Egg survival was 55%. Mean fecundity based on the 18 fully spawned fish was 1,092 eggs/female. The total 2002 eggtake from the captive brood program was 176,544 eggs. A total of 120,833 dead eggs (68%) were removed, with 55,711 live eggs remaining for the program. As of May 1, 2003, we had 46,417 BY 2002 captive brood progeny on hand. A total of 20,592 excess BY 01 fish were marked as parr (AD/CWT) and
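The egg-accounting figures reported above are internally consistent, which a few lines of arithmetic confirm. The numbers below are taken directly from the abstract; the check itself is ours.

```python
# Verify the 2002 egg accounting reported in the abstract: the three brood-year
# eggtakes should sum to the program total, and the removed dead eggs should
# match the reported 68% loss and the remaining live-egg count.
eggtakes = {"BY1997": 13_176, "BY1998": 143_709, "BY1999": 19_659}
total = sum(eggtakes.values())
dead, live = 120_833, 55_711

print(total)                      # -> 176544, the reported total eggtake
print(round(100 * dead / total))  # -> 68, the reported dead-egg percentage
print(dead + live == total)       # -> True, dead + live accounts for every egg
```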

  14. Evaluation of NASA SPoRT's Pseudo-Geostationary Lightning Mapper Products in the 2011 Spring Program

    Science.gov (United States)

    Stano, Geoffrey T.; Carcione, Brian; Siewert, Christopher; Kuhlman, Kristin M.

    2012-01-01

    NASA's Short-term Prediction Research and Transition (SPoRT) program is a contributing partner with the GOES-R Proving Ground (PG) preparing forecasters to understand and utilize the unique products that will be available in the GOES-R era. This presentation emphasizes SPoRT's actions to prepare the end user community for the Geostationary Lightning Mapper (GLM). This preparation is a collaborative effort with SPoRT's National Weather Service partners, the National Severe Storms Laboratory (NSSL), and the Hazardous Weather Testbed's Spring Program. SPoRT continues to use its effective paradigm of matching capabilities to forecast problems through collaborations with our end users and working with the developers at NSSL to create effective evaluations and visualizations. Furthermore, SPoRT continues to develop software plug-ins so that these products will be available to forecasters in their own decision support system, AWIPS and eventually AWIPS II. In 2009, the SPoRT program developed the original pseudo-geostationary lightning mapper (PGLM) flash extent product to demonstrate what forecasters may see with GLM. The PGLM replaced the previous GLM product and serves as a stepping-stone until the AWG's official GLM proxy is ready. The PGLM algorithm is simple and can be applied to any ground-based total lightning network. For 2011, the PGLM used observations from four ground-based networks (North Alabama, Kennedy Space Center, Oklahoma, and Washington D.C.). While the PGLM is not a true proxy product, it is intended as a tool to train forecasters about total lightning as well as foster discussions on product visualizations and incorporating GLM-resolution data into forecast operations. The PGLM has been used in 2010 and 2011 and is likely to remain the primary lightning training tool for the GOES-R program for the near future. This presentation will emphasize the feedback received during the 2011 Spring Program. This will discuss several topics. Based on feedback

  15. Geohydrologic Investigations and Landscape Characteristics of Areas Contributing Water to Springs, the Current River, and Jacks Fork, Ozark National Scenic Riverways, Missouri

    Science.gov (United States)

    Mugel, Douglas N.; Richards, Joseph M.; Schumacher, John G.

    2009-01-01

    The Ozark National Scenic Riverways (ONSR) is a narrow corridor that stretches for approximately 134 miles along the Current River and Jacks Fork in southern Missouri. Most of the water flowing in the Current River and Jacks Fork is discharged to the rivers from springs within the ONSR, and most of the recharge area of these springs is outside the ONSR. This report describes geohydrologic investigations and landscape characteristics of areas contributing water to springs and the Current River and Jacks Fork in the ONSR. The potentiometric-surface map of the study area for 2000-07 shows that the groundwater divide extends beyond the surface-water divide in some places, notably along Logan Creek and the northeastern part of the study area, indicating interbasin transfer of groundwater between surface-water basins. A low hydraulic gradient occurs in much of the upland area west of the Current River associated with areas of high sinkhole density, which indicates the presence of a network of subsurface karst conduits. The results of a low base-flow seepage run indicate that most of the discharge in the Current River and Jacks Fork was from identified springs, and a smaller amount was from tributaries whose discharge probably originated as spring discharge, or from springs or diffuse groundwater discharge in the streambed. Results of a temperature profile conducted on an 85-mile reach of the Current River indicate that the lowest average temperatures were within or downstream from inflows of springs. A mass-balance on heat calculation of the discharge of Bass Rock Spring, a previously undescribed spring, resulted in an estimated discharge of 34.1 cubic feet per second (ft3/s), making it the sixth largest spring in the Current River Basin. The 13 springs in the study area for which recharge areas have been estimated accounted for 82 percent (867 ft3/s of 1,060 ft3/s) of the discharge of the Current River at Big Spring during the 2006 seepage run. 
Including discharge from
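The mass-balance-on-heat method mentioned above for estimating the discharge of Bass Rock Spring can be sketched as a simple mixing calculation: heat carried by the river upstream plus heat added by the spring must equal heat carried downstream. The function and the example temperatures and discharge below are illustrative only, not the study's measured values.

```python
# Minimal sketch of a mass-balance-on-heat discharge estimate for a spring.
# Mixing balance: q_up*t_up + q_s*t_spring = (q_up + q_s)*t_down, solved for q_s.
def spring_discharge(q_up, t_up, t_spring, t_down):
    """Spring inflow (same units as q_up) implied by the downstream temperature."""
    return q_up * (t_up - t_down) / (t_down - t_spring)

# Illustrative example: 100 ft3/s of river water at 20 C is cooled to 18 C
# by inflow of 14 C spring water.
print(spring_discharge(100.0, 20.0, 14.0, 18.0))  # -> 50.0 ft3/s
```

The method is most sensitive when the spring and river temperatures differ strongly, which is why the temperature-profile survey described above could flag spring inflows as local temperature minima.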

  16. Integrated plant safety assessment. Systematic evaluation program, Big Rock Point Plant (Docket No. 50-155). Final report

    International Nuclear Information System (INIS)

    1984-05-01

    The Systematic Evaluation Program was initiated in February 1977 by the U.S. Nuclear Regulatory Commission to review the designs of older operating nuclear reactor plants to reconfirm and document their safety. The review provides (1) an assessment of how these plants compare with current licensing safety requirements relating to selected issues, (2) a basis for deciding how these differences should be resolved in an integrated plant review, and (3) a documented evaluation of plant safety when the supplement to the Final Integrated Plant Safety Assessment Report has been issued. This report documents the review of the Big Rock Point Plant, which is one of ten plants reviewed under Phase II of this program. This report indicates how 137 topics selected for review under Phase I of the program were addressed. It also addresses a majority of the pending licensing actions for Big Rock Point, which include TMI Action Plan requirements and implementation criteria for resolved generic issues. Equipment and procedural changes have been identified as a result of the review.

  17. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  18. LSVT LOUD and LSVT BIG: Behavioral Treatment Programs for Speech and Body Movement in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Cynthia Fox

    2012-01-01

    Recent advances in neuroscience have suggested that exercise-based behavioral treatments may improve function and possibly slow progression of motor symptoms in individuals with Parkinson disease (PD). The LSVT (Lee Silverman Voice Treatment) Programs for individuals with PD have been developed and researched over the past 20 years, beginning with a focus on the speech motor system (LSVT LOUD) and more recently extended to address limb motor systems (LSVT BIG). The unique aspects of the LSVT Programs include the combination of (a) an exclusive target on increasing amplitude (loudness in the speech motor system; bigger movements in the limb motor system), (b) a focus on sensory recalibration to help patients recognize that movements with increased amplitude are within normal limits, even if they feel “too loud” or “too big,” and (c) training self-cueing and attention to action to facilitate long-term maintenance of treatment outcomes. In addition, the intensive mode of delivery is consistent with principles that drive activity-dependent neuroplasticity and motor learning. The purpose of this paper is to provide an integrative discussion of the LSVT Programs, including the rationale for their fundamentals, a summary of efficacy data, and a discussion of limitations and future directions for research.

  19. The Big Sky inside

    Science.gov (United States)

    Adams, Earle; Ward, Tony J.; Vanek, Diana; Marra, Nancy; Hester, Carolyn; Knuth, Randy; Spangler, Todd; Jones, David; Henthorn, Melissa; Hammill, Brock; Smith, Paul; Salisbury, Rob; Reckin, Gene; Boulafentis, Johna

    2009-01-01

    The University of Montana (UM)-Missoula has implemented a problem-based program in which students perform scientific research focused on indoor air pollution. The Air Toxics Under the Big Sky program (Jones et al. 2007; Adams et al. 2008; Ward et al. 2008) provides a community-based framework for understanding the complex relationship between poor…

  20. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  1. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-01-01

    on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also

  3. Simulation of spring discharge from a limestone aquifer in Iowa, USA

    Science.gov (United States)

    Zhang, Y.-K.; Bai, E.-W.; Libra, R.; Rowden, R.; Liu, H.

    1996-01-01

    A lumped-parameter model and least-squares method were used to simulate temporal variations of discharge from Big Spring, Iowa, USA, from 1983 to 1994. The simulated discharge rates match the observed rates poorly when precipitation is taken as the sole input. The match improves significantly when the processes of evapotranspiration and infiltration are considered. The best results are obtained when snowmelt is also included in the model. Potential evapotranspiration was estimated with Thornthwaite's formula, infiltration was calculated through a water-balance approach, and snowmelt was generated by a degree-day model. The results show that groundwater in the limestone aquifer is mainly recharged by snowmelt in early spring and by infiltration from rainfall in late spring and early summer. Simulated discharge was visually calibrated against measured discharge; the similarity between the two supports the validity of this approach. The model can be used to study the effects of climate change on groundwater resources and their quality.
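The degree-day snowmelt component mentioned in the abstract has a standard, very compact form: daily melt is proportional to the amount by which air temperature exceeds a base temperature, limited by the snow that remains. The sketch below uses typical textbook parameter values, not the calibrated parameters of the Big Spring model.

```python
# Hedged sketch of a degree-day snowmelt model. ddf is the degree-day factor
# (mm of melt per degree-day above t_base); swe is the snow water equivalent
# available to melt, in mm. Parameter values are illustrative defaults.
def degree_day_melt(temps_c, ddf=3.0, t_base=0.0, swe=50.0):
    """Daily melt series (mm), each day limited by the remaining snowpack."""
    melt_series = []
    for t in temps_c:
        melt = min(max(t - t_base, 0.0) * ddf, swe)  # no melt below t_base
        swe -= melt                                  # deplete the snowpack
        melt_series.append(melt)
    return melt_series

# Four illustrative days: a cold day, then progressively warmer days.
print(degree_day_melt([-2.0, 1.0, 4.0, 10.0]))  # -> [0.0, 3.0, 12.0, 30.0]
```

In a lumped recharge model like the one described, this melt series would be added to infiltrating rainfall as input to the aquifer store.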

  4. Laurel Springs & DoDEA

    National Research Council Canada - National Science Library

    Jhung, Seung

    2000-01-01

    At the request of the client organization, Laurel Springs School, we developed an in-depth market analysis of comparable educational programs offered within the Department of Defense Education Activities (DoDEA...

  5. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, big gear abrasion and breakage often occur, affecting normal production and causing unnecessary economic loss. An intelligent test method for worn big gears was put forward, aimed mainly at the constraints of high production cost, long production cycle, and labor-intensive manual repair welding. Transformations of the measurement equations were made for the involute spur gear: the original polar coordinate equations were transformed into rectangular coordinate equations. The measurement principle for big gear abrasion is introduced, a detection principle diagram is given, and the method for realizing the detection route is described. An OADM12 laser sensor was selected, and detection of the big gear abrasion area was realized by the detection mechanism. Measured data from unworn and worn gears were loaded into a calculation program written in Visual Basic, from which the big gear abrasion quantity can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
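The polar-to-rectangular transformation the abstract refers to has a well-known closed form for an involute tooth profile: the curve unwound from a base circle of radius r_b can be written directly in x-y coordinates as a function of the roll angle. The sketch below shows that standard parametrization (in Python rather than the paper's Visual Basic); the radius and angle values are examples, not data from the paper.

```python
# Standard rectangular parametrization of an involute of a circle, the curve
# forming an involute gear tooth flank. r_b is the base-circle radius and
# theta the roll angle in radians. Values here are illustrative.
import math

def involute_xy(r_b, theta):
    """Point (x, y) on the involute at roll angle theta."""
    x = r_b * (math.cos(theta) + theta * math.sin(theta))
    y = r_b * (math.sin(theta) - theta * math.cos(theta))
    return x, y

x0, y0 = involute_xy(40.0, 0.0)  # the curve starts on the base circle
print(x0, y0)                    # -> 40.0 0.0
```

A useful property for abrasion measurement: the distance of an involute point from the gear center is r_b*sqrt(1 + theta^2), so a laser-measured radial profile of a worn flank can be compared point-by-point against this ideal curve.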

  6. Extended burnup demonstration: reactor fuel program. Pre-irradiation characterization and summary of pre-program poolside examinations. Big Rock Point extended burnup fuel

    International Nuclear Information System (INIS)

    Exarhos, C.A.; Van Swam, L.F.; Wahlquist, F.P.

    1981-12-01

    This report is a resource document characterizing the 64 fuel rods being irradiated at the Big Rock Point reactor as part of the Extended Burnup Demonstration being sponsored jointly by the US Department of Energy, Consumers Power Company, Exxon Nuclear Company, and General Public Utilities. The program entails extending the exposure of standard BWR fuel to a discharge average of 38,000 MWD/MTU to demonstrate the feasibility of operating fuel of standard design to levels significantly above current limits. The fabrication characteristics of the Big Rock Point EBD fuel are presented along with measurements of rod length, rod diameter, pellet stack height, and fuel rod withdrawal force taken at poolside at burnups up to 26,200 MWD/MTU. A review of the fuel examination data indicates no performance characteristics which might restrict the continued irradiation of the fuel.

  7. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  8. Learning Spring application development

    CERN Document Server

    Soni, Ravi Kant

    2015-01-01

    This book is intended for those who are interested in learning the core features of the Spring Framework. Prior knowledge of Java programming and web development concepts with basic XML knowledge is expected.

  9. Spring in the Arab Spring

    NARCIS (Netherlands)

    Borg, G.J.A.

    2011-01-01

    Column by Gert Borg | Spring in the Arab Spring, by Dr. Gert Borg, researcher in Islam and Arabic at Radboud University Nijmegen and former director of the Netherlands-Flemish Institute Cairo. Spring: If, in Google, you type "Arab Spring" and hit the button, you get more than

  10. Big Math for Little Kids

    Science.gov (United States)

    Greenes, Carole; Ginsburg, Herbert P.; Balfanz, Robert

    2004-01-01

    "Big Math for Little Kids," a comprehensive program for 4- and 5-year-olds, develops and expands on the mathematics that children know and are capable of doing. The program uses activities and stories to develop ideas about number, shape, pattern, logical reasoning, measurement, operations on numbers, and space. The activities introduce the…

  11. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group within the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the Task Force, including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  12. Survey of Cyber Crime in Big Data

    Science.gov (United States)

    Rajeswari, C.; Soni, Krishna; Tandon, Rajat

    2017-11-01

    Big data involves performing computation and database operations over very large amounts of data, often drawn automatically from the data owner's business. Because a key strategic promise of big data is access to information from numerous and varied domains, security and privacy will play an essential part in big data research and technology. The limits of standard IT security practices are well known: software deployment chains allow developers to embed malicious software in applications and operating systems, a real and growing threat that is difficult to counter and whose impact spreads faster than big data itself. One central question, therefore, is whether current security and privacy technology is sufficient to provide controlled assurance for very large numbers of direct accesses. For effective use of large-scale data, access to the data of one domain from another domain must be properly authorized. For many years, trustworthy system development has produced a rich set of proven security concepts for dealing with determined adversaries, but this work has largely been dismissed as "needless overkill" by vendors. In this survey, the essential discussions needed for big data to exploit this mature security and privacy technology are examined, and the remaining research challenges are explored.

  13. Stars Spring up Out of the Darkness

    Science.gov (United States)

    2006-01-01

    [figure removed for brevity, see original site] This artist's animation illustrates the universe's early years, from its explosive formation to its dark ages to its first stars and mini-galaxies. Scientists using NASA's Spitzer Space Telescope found patches of infrared light splattered across the sky that might be the collective glow of clumps of the universe's first objects. Astronomers do not know if these first objects were stars or 'quasars,' which are black holes voraciously consuming surrounding gas. The movie begins with a flash of color that represents the birth of the universe, an explosion called the Big Bang that occurred about 13.7 billion years ago. A period of darkness ensues, in which gas begins to clump together. The universe's first stars are then shown springing up out of the gas clumps, flooding the universe with light, an event that probably happened a few hundred million years after the Big Bang. Though these first stars formed out of gas alone, their deaths seeded the universe with the dusty heavy chemical elements that helped create future generations of stars. The first stars, called Population III stars (our star is a Population I star), were much bigger and brighter than any in our nearby universe, with masses about 1,000 times that of our Sun. They grouped together into mini-galaxies, which then merged to form galaxies like our own mature Milky Way galaxy. The first quasars, not shown here, ultimately became the centers of powerful galaxies that are more common in the distant universe.

  14. Tucannon River Spring Chinook Captive Broodstock Program Final Environmental Assessment and Finding of No Significant Impact

    Energy Technology Data Exchange (ETDEWEB)

    N/A

    2000-05-24

    Bonneville Power Administration (BPA) is proposing to fund the Tucannon River Spring Chinook Captive Broodstock Program, a small-scale production initiative designed to increase numbers of a weak but potentially recoverable population of spring chinook salmon in the Tucannon River in the State of Washington. BPA has prepared an Environmental Assessment (EA) (DOE/EA-1326) evaluating the proposed project. Based on the analysis in the EA, BPA has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an Environmental Impact Statement (EIS) is not required, and BPA is issuing this Finding of No Significant Impact (FONSI).

  15. Big Data: Are Biomedical and Health Informatics Training Programs Ready? Contribution of the IMIA Working Group for Health and Medical Informatics Education.

    Science.gov (United States)

    Otero, P; Hersh, W; Jai Ganesh, A U

    2014-08-15

    The growing volume and diversity of health and biomedical data indicate that the era of Big Data has arrived for healthcare. This has many implications for informatics, not only in terms of implementing and evaluating information systems, but also for the work and training of informatics researchers and professionals. This article addresses the question: What do biomedical and health informaticians working in analytics and Big Data need to know? We hypothesize a set of skills that we hope will be discussed among academic and other informaticians. The set of skills includes: Programming - especially with data-oriented tools, such as SQL and statistical programming languages; Statistics - working knowledge to apply tools and techniques; Domain knowledge - depending on one's area of work, bioscience or health care; and Communication - being able to understand needs of people and organizations, and articulate results back to them. Biomedical and health informatics educational programs must introduce concepts of analytics, Big Data, and the underlying skills to use and apply them into their curricula. The development of new coursework should focus on those who will become experts, with training aiming to provide skills in "deep analytical talent" as well as those who need knowledge to support such individuals.
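The pairing of SQL and statistical programming that the record lists as core skills can be shown in a minimal, hypothetical sketch (the table, column names, and values are invented for illustration): a relational query does the data-oriented filtering, and a statistics library summarizes the result.

```python
import sqlite3
import statistics

# Hypothetical clinical table, queried with SQL and summarized statistically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE labs (patient_id INTEGER, glucose REAL)")
conn.executemany(
    "INSERT INTO labs VALUES (?, ?)",
    [(1, 95.0), (2, 110.0), (3, 102.0), (4, 130.0), (5, 88.0)],
)

# SQL performs the data-oriented selection ...
rows = conn.execute("SELECT glucose FROM labs WHERE glucose >= 100").fetchall()
values = [g for (g,) in rows]

# ... and statistical tooling summarizes the selected records.
print(len(values))                       # number of elevated readings
print(round(statistics.mean(values), 1)) # their mean
```

The same division of labor scales up: the SQL side moves to a warehouse or Hadoop-style store, the statistics side to R or a Python scientific stack.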

  16. Framework for Springs Stewardship Program and proposed action development: Spring Mountains National Recreation Area, Humboldt-Toiyabe National Forest

    Science.gov (United States)

    Marc Coles-Ritchie; Stephen J. Solem; Abraham E. Springer; Burton Pendleton

    2014-01-01

    In the desert Southwest, springs are an important ecological feature and serve as a focal point for both biological and human interactions on the landscape. As a result, attention has been placed on the stewardship and protection of these important resources. Management has traditionally focused on the more accessible and heavily used eastern canyons within the Spring...

  17. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that it becomes hard to process using conventional data-handling applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate disease, predict conflict, and so on, we require larger data sets than the smaller ones traditionally used. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. In this paper we present an overview of the Hadoop architecture, the different tools used for big data, and its security issues.

  18. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  19. Spring performance tester for miniature extension springs

    Science.gov (United States)

    Salzbrenner, Bradley; Boyce, Brad

    2017-05-16

    A spring performance tester and a method of testing a spring are disclosed that have improved accuracy and precision over prior-art spring testers. The tester can perform both static and cyclic testing. It can provide validation for product acceptance as well as test for cyclic degradation of springs, such as changes in the spring rate and fatigue failure.
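The "spring rate" tracked in this record is the slope k in Hooke's law, F = k·x; a drift in that slope over many cycles indicates degradation. A minimal sketch (synthetic data, not the disclosed tester's method) estimates k from force-deflection pairs by least squares through the origin:

```python
# Estimate a spring rate k (Hooke's law F = k * x) from force-deflection
# measurements. Illustrative only; synthetic data, not the patented method.

def spring_rate(deflections, forces):
    """Least-squares slope through the origin: k = sum(x*F) / sum(x*x)."""
    sxx = sum(x * x for x in deflections)
    sxf = sum(x * f for x, f in zip(deflections, forces))
    return sxf / sxx

x = [0.5, 1.0, 1.5, 2.0]      # deflection, mm
f = [1.26, 2.49, 3.76, 5.01]  # measured force, N (true rate near 2.5 N/mm)

print(round(spring_rate(x, f), 2))  # ~2.5 N/mm
```

Cyclic degradation testing would repeat this fit after each block of load cycles and watch k (and the residuals) change.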

  20. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  1. Executive summary: Weldon Spring Site Environmental Report for calendar year 1992

    International Nuclear Information System (INIS)

    1993-06-01

    This report has been prepared to provide information about the public safety and environmental protection programs conducted by the Weldon Spring Site Remedial Action Project. The Weldon Spring site is located in southern St. Charles County, Missouri, approximately 48 km (30 mi) west of St. Louis. The site consists of two main areas, the Weldon Spring Chemical Plant and raffinate pits and the Weldon Spring Quarry. The objectives of the Site Environmental Report are to present a summary of data from the environmental monitoring program, to characterize trends and environmental conditions at the site, and to confirm compliance with environmental and health protection standards and requirements. The report also presents the status of remedial activities and the results of monitoring these activities to assess their impacts on the public and environment. The scope of the environmental monitoring program at the Weldon Spring site has changed since it was initiated. Previously, the program focused on investigations of the extent and level of contaminants in the groundwater, surface waters, buildings, and air at the site. In 1992, the level of remedial activities required monitoring for potential impacts of those activities, particularly on surface water runoff and airborne effluents. This report includes monitoring data from routine radiological and nonradiological sampling activities. These data include estimates of dose to the public from the Weldon Spring site; estimates of effluent releases; and trends in groundwater contaminant levels. Also, applicable compliance requirements, quality assurance programs, and special studies conducted in 1992 to support environmental protection programs are reviewed

  2. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Full Text Available Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists – and probably the most crucial one – is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.

  3. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    Science.gov (United States)

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
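The split/analyze/meta-analyze approach described in this record (which the paper demonstrates in R) can be sketched in Python under simple assumptions: split the data into chunks, estimate a mean and its sampling variance in each chunk, then pool the chunk estimates with inverse-variance weights, as a fixed-effect meta-analysis would.

```python
import random

def split_analyze_meta(data, n_chunks):
    """Split data, analyze each chunk, meta-analyze the chunk estimates.

    Illustrative fixed-effect-style pooling of per-chunk means; the paper's
    actual R procedures are richer (psychometrics, multivariate models).
    """
    size = len(data) // n_chunks
    estimates = []
    for i in range(n_chunks):
        chunk = data[i * size:(i + 1) * size]
        m = sum(chunk) / len(chunk)
        # Sampling variance of the chunk mean: s^2 / n.
        var = sum((v - m) ** 2 for v in chunk) / (len(chunk) - 1) / len(chunk)
        estimates.append((m, var))
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    return pooled

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]
print(round(split_analyze_meta(data, n_chunks=10), 1))  # close to the true mean of 10
```

The appeal for psychologists is that each chunk analysis is an ordinary small-sample analysis, so familiar tools apply; only the final pooling step is new.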

  4. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  5. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  6. Hood River Production Program Monitoring and Evaluation (M&E) - Confederated Tribes of Warm Springs : Annual Report For Fiscal Year, October 2007 – September 2008.

    Energy Technology Data Exchange (ETDEWEB)

    Gerstenberger, Ryan [Confederated Tribes of Warm Springs Reservation

    2009-07-27

    This progress report describes work performed by the Confederated Tribes of Warm Springs (CTWSRO) portion of the Hood River Production Program Monitoring and Evaluation Project (HRPP) during the 2008 fiscal year. A total of 64,736 hatchery winter steelhead, 12,108 hatchery summer steelhead, and 68,426 hatchery spring Chinook salmon smolts were acclimated and released in the Hood River basin during the spring. The HRPP exceeded its goal of releasing 50,000 winter steelhead but fell short of the release goals of 30,000 summer steelhead and 75,000 spring Chinook in 2008. Passive Integrated Transponder (PIT) tags were implanted in 6,652 hatchery winter steelhead and 1,196 hatchery summer steelhead to compare migratory attributes and survival rates of hatchery fish released into the Hood River. Water temperatures were recorded at six locations within the Hood River subbasin to monitor for compliance with Oregon Department of Environmental Quality water quality standards. A preseason spring Chinook salmon adult run forecast was generated, which predicted a return abundant enough to meet the escapement goal and brood stock needs; as a result, the tribal and sport fisheries were opened. A tribal creel survey was conducted from May 22 to July 18, during which an estimated 172 spring Chinook were harvested. One hundred sixteen spring Chinook salmon redds were observed and 72 carcasses were inspected on 19.4 miles of spawning grounds throughout the Hood River basin during 2008. Annual salvage operations were completed in two irrigation canals, resulting in the liberation of 1,641 fish back to the Hood River.

  7. Big Data Provenance: Challenges, State of the Art and Opportunities.

    Science.gov (United States)

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
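The idea of using workflows as a programming model for provenance, as this record describes, can be sketched as a wrapper that logs each step's name, inputs, and output so a result can be traced back through its lineage (a toy illustration; systems like the authors' workflow tools capture far richer provenance).

```python
# Toy provenance-aware workflow: every executed step appends a lineage
# record (step name, inputs, output) supporting traceability and rerun.

provenance = []

def step(name, func, *inputs):
    """Run one workflow step and record its provenance."""
    output = func(*inputs)
    provenance.append({"step": name, "inputs": inputs, "output": output})
    return output

raw = step("ingest", lambda: [3, 1, 4, 1, 5])
clean = step("dedupe", lambda xs: sorted(set(xs)), raw)
total = step("sum", sum, clean)

print(total)                              # 1 + 3 + 4 + 5 = 13
print([r["step"] for r in provenance])    # ['ingest', 'dedupe', 'sum']
```

Replaying the recorded steps in order reproduces the result, which is the reproducibility property the record emphasizes.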

  8. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
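The MapReduce parallel processing schema mentioned above can be illustrated without Hadoop itself: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Below is the canonical word-count example as a single-process Python sketch (real frameworks distribute each phase across machines):

```python
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs, as a MapReduce mapper would."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group values by key, as the framework's shuffle step would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum each key's values, as a MapReduce reducer would."""
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data big compute", "big data tools"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"], counts["tools"])  # 3 2 1
```

Spark keeps the same map/shuffle/reduce structure but holds intermediate results in memory, which is the main source of the speedups the paper discusses.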

  9. Environmental compliance assessment findings for Weldon Spring Site Remedial Action Program

    International Nuclear Information System (INIS)

    Sigmon, C.F.; Levine, M.B.

    1990-01-01

    This report presents the results of an environmental assessment conducted at the Weldon Spring Site Remedial Action Project (WSSRAP) in St. Charles County, Missouri, in accordance with the Formerly Utilized Sites Remedial Action Program (FUSRAP) Environmental Compliance Assessment Checklists. The purpose of this assessment was to evaluate the compliance of the site with applicable federal and Missouri environmental regulations. Assessment activities included the following: review of site records, reports, and files; inspection of the WSSRAP storage building, other selected buildings, and the adjacent grounds; and interviews with project personnel. The assessment was conducted on August 28-30, 1989, and covered five management areas as set forth in the Checklists: Hazardous Waste Management; Polychlorinated Biphenyls (PCBs) Management; Air Emissions; Wastewater Discharges; and Petroleum Management. No samples were collected. 1 ref., 2 figs., 1 tab.

  10. An overview of big data and data science education at South African universities

    Directory of Open Access Journals (Sweden)

    Eduan Kotzé

    2016-02-01

    Full Text Available Man and machine are generating data electronically at an astronomical speed and in such a way that society is experiencing cognitive challenges to analyse this data meaningfully. Big data firms, such as Google and Facebook, identified this problem several years ago and are continuously developing new technologies or improving existing technologies in order to facilitate the cognitive analysis process of these large data sets. The purpose of this article is to contribute to our theoretical understanding of the role that big data might play in creating new training opportunities for South African universities. The article investigates emerging literature on the characteristics and main components of big data, together with the Hadoop application stack as an example of big data technology. Due to the rapid development of big data technology, a paradigm shift of human resources is required to analyse these data sets; therefore, this study examines the state of big data teaching at South African universities. This article also provides an overview of possible big data sources for South African universities, as well as relevant big data skills that data scientists need. The study also investigates existing academic programs in South Africa, where the focus is on teaching advanced database systems. The study found that big data and data science topics are introduced to students on a postgraduate level, but that the scope is very limited. This article contributes by proposing important theoretical topics that could be introduced as part of the existing academic programs. More research is required, however, to expand these programs in order to meet the growing demand for data scientists with big data skills.

  11. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than it solves; in short, when it comes to the use of data in healthcare, "size isn't everything."

  12. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = {lambda}/{Delta}{lambda} = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k{sub max} = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k{sub max} = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  13. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements for data management. If the term "big data" is defined as data collections that are too large to move then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  14. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It has moved past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are so often mentioned in relation to big data entail? As an introduction to

  15. Instant Spring security starter

    CERN Document Server

    Jagielski, Piotr

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow format following the Starter guide approach. This book is for people who have not used Spring Security before and want to learn how to use it effectively in a short amount of time. It is assumed that readers know both Java and the HTTP protocol at the level of basic web programming. The reader should also be familiar with Inversion of Control/Dependency Injection, preferably with the Spring framework itself.

  16. Big Data Science Cafés: High School Students Experiencing Real Research with Scientists

    Science.gov (United States)

    Walker, C. E.; Pompea, S. M.

    2017-12-01

    The Education and Public Outreach group at the National Optical Astronomy Observatory has designed an outside-of-school education program to excite the interest of talented youth in future projects like the Large Synoptic Survey Telescope (LSST) and the NOAO (archival) Data Lab - their data approaches and key science projects. Originally funded by the LSST Corporation, the program cultivates talented youth to enter STEM disciplines and serves as a model to disseminate to the 40+ institutions involved in LSST. One Saturday a month during the academic year, high school students have the opportunity to interact with expert astronomers who work with large astronomical data sets in their scientific work. Students learn about killer asteroids, the birth and death of stars, colliding galaxies, the structure of the universe, gravitational waves, dark energy, dark matter, and more. The format for the Saturday science cafés has been a short presentation, discussion (plus food), a computer lab activity, and more discussion. The cafés last about 2.5 hours and have been planned by a group of interested local high school students, an undergraduate student coordinator, the presenting astronomers, the program director, and an evaluator. High school youth leaders help ensure an enjoyable and successful program for fellow students: they help their fellow students with the activities and help evaluate how well each science café went. Their remarks shape the next science café and improve the program. The experience offers youth leaders ownership of the program, opportunities to take on responsibilities and learn leadership and communication skills, and a way to foster their continued interest in STEM. The prototype Big Data Science Academy was implemented successfully in spring 2017 and engaged almost 40 teens from greater Tucson in the fundamentals of astronomy concepts and research. As with any first implementation there were bumps. However, staff, scientists, and student leaders all

  17. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  18. Peak discharge, flood frequency, and peak stage of floods on Big Cottonwood Creek at U.S. Highway 50 near Coaldale, Colorado, and Fountain Creek below U.S. Highway 24 in Colorado Springs, Colorado, 2016

    Science.gov (United States)

    Kohn, Michael S.; Stevens, Michael R.; Mommandi, Amanullah; Khan, Aziz R.

    2017-12-14

The U.S. Geological Survey (USGS), in cooperation with the Colorado Department of Transportation, determined the peak discharge, annual exceedance probability (flood frequency), and peak stage of two floods that took place on Big Cottonwood Creek at U.S. Highway 50 near Coaldale, Colorado (hereafter referred to as “Big Cottonwood Creek site”), on August 23, 2016, and on Fountain Creek below U.S. Highway 24 in Colorado Springs, Colorado (hereafter referred to as “Fountain Creek site”), on August 29, 2016. A one-dimensional hydraulic model was used to estimate the peak discharge. To define the flood frequency of each flood, peak-streamflow regional-regression equations or statistical analyses of USGS streamgage records were used to estimate annual exceedance probability of the peak discharge. A survey of the high-water mark profile was used to determine the peak stage, and the limitations and accuracy of each component also are presented in this report. Collection and computation of flood data, such as peak discharge, annual exceedance probability, and peak stage at structures critical to Colorado’s infrastructure are an important addition to the flood data collected annually by the USGS. The peak discharge of the August 23, 2016, flood at the Big Cottonwood Creek site was 917 cubic feet per second (ft3/s) with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The peak discharge of the August 29, 2016, flood at the Fountain Creek site was 5,970 ft3/s with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The August 23, 2016, flood at the Big Cottonwood Creek site had an annual exceedance probability of less than 0.01 (return period greater than the 100-year flood) and had an annual exceedance probability of greater than 0.005 (return period less than the 200-year flood). The August 23, 2016, flood event was caused by a precipitation event having an annual exceedance probability of 1.0 (return
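The annual exceedance probabilities quoted in the abstract map to return periods by a simple reciprocal relationship; a quick check (illustrative Python, not from the report) makes the "100-year" and "200-year" bounds explicit:

```python
def return_period(aep):
    # Return period in years is the reciprocal of the
    # annual exceedance probability (AEP).
    return 1.0 / aep

print(return_period(0.01))   # the 100-year flood
print(return_period(0.005))  # the 200-year flood
```

So an AEP between 0.005 and 0.01 corresponds to a flood between the 100-year and 200-year events, as stated for the Big Cottonwood Creek site.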

  19. The Last Big Bang

    Energy Technology Data Exchange (ETDEWEB)

    McGuire, Austin D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meade, Roger Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-13

    As one of the very few people in the world to give the “go/no go” decision to detonate a nuclear device, Austin “Mac” McGuire holds a very special place in the history of both the Los Alamos National Laboratory and the world. As Commander of Joint Task Force Unit 8.1.1, on Christmas Island in the spring and summer of 1962, Mac directed the Los Alamos data collection efforts for twelve of the last atmospheric nuclear detonations conducted by the United States. Since data collection was at the heart of nuclear weapon testing, it fell to Mac to make the ultimate decision to detonate each test device. He calls his experience THE LAST BIG BANG, since these tests, part of Operation Dominic, were characterized by the dramatic displays of the heat, light, and sounds unique to atmospheric nuclear detonations – never, perhaps, to be witnessed again.

  20. A Proposed Concentration Curriculum Design for Big Data Analytics for Information Systems Students

    Science.gov (United States)

    Molluzzo, John C.; Lawler, James P.

    2015-01-01

    Big Data is becoming a critical component of the Information Systems curriculum. Educators are enhancing gradually the concentration curriculum for Big Data in schools of computer science and information systems. This paper proposes a creative curriculum design for Big Data Analytics for a program at a major metropolitan university. The design…

  1. Challenges and potential solutions for big data implementations in developing countries.

    Science.gov (United States)

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  2. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

Full Text Available These days, more and more people talk about Big Data, Hadoop, noSQL and so on, but very few technical people have the necessary expertise and knowledge to work with those concepts and technologies. The present issue explains one of the concepts that stands behind two of those keywords: the MapReduce concept. The MapReduce model is the one that makes Big Data and Hadoop so powerful, fast, and diverse for business process optimization. MapReduce is a programming model with an implementation built to process and generate large data sets. In addition, the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications are presented. The concepts and technologies behind big data allow organizations to reach a variety of objectives. Like other new information technologies, the most important objective of big data technology is to bring dramatic cost reduction.
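The MapReduce model summarized in this abstract can be sketched in a minimal single-process form (illustrative Python word count, not from the paper; Hadoop distributes the same map, shuffle/sort, and reduce steps across a cluster):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Map: emit (word, 1) for every word, as a word-count mapper would.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort then reduce: group pairs by key and sum the values.
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (key, sum(v for _, v in group))

lines = ["big data and hadoop", "big data analytics"]
counts = dict(reduce_phase(map_phase(lines)))
print(counts["big"])  # 2
```

The split into a stateless map step and a per-key reduce step is what lets the framework parallelize both phases independently.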

  3. Weldon Spring Site Environmental Report for calendar year 1994

    International Nuclear Information System (INIS)

    1995-05-01

    This report for Calendar Year 1994 has been prepared to provide information about the public safety and environmental protection programs conducted by the Weldon Spring Site Remedial Action Project (WSSRAP). The Weldon Spring site is located in southern St. Charles County, Missouri, approximately 48 km (30 mi) west of St. Louis. The site consists of two main areas, the Weldon Spring Chemical Plant and raffinate pits and the Weldon Spring Quarry. The chemical plant, raffinate pits, and quarry are located on Missouri State Route 94, southwest of US Route 40/61. The objectives of the Site Environmental Report are to present a summary of data from the environmental monitoring program, to characterize trends and environmental conditions at the site, and to confirm compliance with environmental and health protection standards and requirements. The report also presents the status of remedial activities and the results of monitoring these activities to assess their impacts on the public and environment. This report includes monitoring data from routine radiological and nonradiological sampling activities. These data include estimates of dose to the public from the Weldon Spring site, estimates of effluent releases, and trends in groundwater contaminant levels. Additionally, applicable compliance requirements, quality assurance programs, and special studies conducted in 1994 to support environmental protection programs are discussed. Dose estimates presented in this report are based on hypothetical exposure scenarios of public use of areas near the site. In addition, release estimates have been calculated on the basis of 1994 National Pollutant Discharge Elimination System (NPDES) and air monitoring data. Effluent discharges from the site under routine NPDES and National Emission Standards for Hazardous Air Pollutants (NESHAPS) monitoring were below permitted levels

  4. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated, especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user’s code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem’s search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
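The sorted-array-plus-bit-array idea behind IEJoin can be illustrated with a simplified self-join sketch (illustrative only, not the dissertation's optimized implementation; distinct attribute values are assumed): counting pairs that satisfy two inequality predicates without a nested loop.

```python
def count_inequality_pairs(data):
    # Count pairs (a, b) with a.x < b.x and a.y > b.y using the core
    # IEJoin idea: sort once per attribute, then replace the nested-loop
    # join with a single scan over a bit array. Distinct values assumed.
    n = len(data)
    by_x = sorted(range(n), key=lambda i: data[i][0])
    rank_x = {idx: pos for pos, idx in enumerate(by_x)}
    bits = [0] * n
    count = 0
    # Visit tuples in ascending y; everything already visited has smaller y.
    for idx in sorted(range(n), key=lambda i: data[i][1]):
        pos = rank_x[idx]
        # Set bits to the right of pos mark visited tuples with larger x,
        # i.e. valid join partners 'b' for the current tuple 'a'.
        count += sum(bits[pos + 1:])
        bits[pos] = 1
    return count

data = [(1, 5), (2, 3), (4, 4), (3, 1)]  # toy (x, y) tuples
print(count_inequality_pairs(data))  # 4
```

The two sorts bound the work at O(n log n) plus the bit-array scan, versus O(n²) comparisons for the naïve join.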

  5. Steering with big words: articulating ideographs in research programs

    NARCIS (Netherlands)

    Bos, Colette; Walhout, Bart; Walhout, Bart; Peine, Alexander; van Lente, Harro

    2014-01-01

    Nowadays, science should address societal challenges, such as ‘sustainability’, or ‘responsible research and innovation’. This emerging form of steering toward broad and generic goals involves the use of ‘big words’: encompassing concepts that are uncontested themselves, but that allow for multiple

  6. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users face the choice of which system best suits their needs, big data system developers face the question of how to evaluate their systems with regard to general big data processing needs. System b...

  7. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  8. Integrating R and Hadoop for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-06-01

Full Text Available Analyzing and working with big data could be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools successfully and widely used for storage and processing of big data sets on clusters of commodity hardware is Hadoop. The Hadoop framework contains libraries, a distributed file system (HDFS), a resource-management platform, and an implementation of the MapReduce programming model for large-scale data processing. In this paper we investigate the possibilities of integrating Hadoop with R, which is a popular software package used for statistical computing and data visualization. We present three ways of integrating them: R with Streaming, Rhipe, and RHadoop, and we emphasize the advantages and disadvantages of each solution.
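Hadoop Streaming, the first integration route the paper mentions, works with any executable that reads records on stdin and writes tab-separated key/value lines on stdout; an R script follows the same protocol. The protocol is sketched here in Python for brevity (illustrative; the invocation in the comment is an assumption, not a tested command line):

```python
import sys
from itertools import groupby

def mapper(lines):
    # Streaming mapper: emit one "key<TAB>value" line per record.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Hadoop sorts mapper output by key before the reduce phase, so
    # records for the same key arrive contiguously and can be grouped.
    parsed = (line.split("\t") for line in lines)
    for key, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{key}\t{sum(int(v) for _, v in group)}"

if __name__ == "__main__" and len(sys.argv) > 1:
    # Hypothetical invocation:
    #   hadoop jar hadoop-streaming.jar -mapper "mr.py map" \
    #       -reducer "mr.py reduce" -input ... -output ...
    stage = mapper if sys.argv[1] == "map" else reducer
    for out in stage(line.rstrip("\n") for line in sys.stdin):
        print(out)
```

Because the contract is just stdin/stdout text, swapping this script for an Rscript mapper or reducer changes nothing on the Hadoop side.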

  9. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which are often inefficient and consume high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
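A genetic algorithm over job orderings, of the general kind the abstract describes, can be sketched compactly (a toy model with assumed parameters, not the authors' scheduler: jobs are greedily assigned in chromosome order to the least-loaded of two machines, and the GA minimizes the makespan):

```python
import random

def makespan(order, durations, machines=2):
    # Greedy list scheduling: each job in the given order goes to the
    # currently least-loaded machine; the makespan is the largest load.
    loads = [0] * machines
    for j in order:
        loads[loads.index(min(loads))] += durations[j]
    return max(loads)

def ga_schedule(durations, pop_size=30, generations=60, seed=0):
    # Toy genetic algorithm over job permutations: truncation selection,
    # one-point order crossover, and swap mutation.
    rng = random.Random(seed)
    n = len(durations)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, durations))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            head = p1[: rng.randrange(1, n)]
            child = head + [j for j in p2 if j not in head]
            if rng.random() < 0.2:
                a, b = rng.randrange(n), rng.randrange(n)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda ind: makespan(ind, durations))
    return best, makespan(best, durations)

durations = [7, 3, 5, 2, 8, 4, 6, 1]  # hypothetical job run times
order, span = ga_schedule(durations)
```

The fitness function here stands in for the paper's estimation module, which predicts cluster performance for a candidate job ordering.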

  10. Getting started with Greenplum for big data analytics

    CERN Document Server

    Gollapudi, Sunila

    2013-01-01

Standard tutorial-based approach. "Getting Started with Greenplum for Big Data Analytics" is great for data scientists and data analysts with a basic knowledge of Data Warehousing and Business Intelligence platforms who are new to Big Data and who are looking to get a good grounding in how to use the Greenplum Platform. It's assumed that you will have some experience with database design and programming as well as be familiar with analytics tools like R and Weka.

  11. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  12. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  13. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  14. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  15. Engineering in-plane silicon nanowire springs for highly stretchable electronics

    Science.gov (United States)

    Xue, Zhaoguo; Dong, Taige; Zhu, Zhimin; Zhao, Yaolong; Sun, Ying; Yu, Linwei

    2018-01-01

    Crystalline silicon (c-Si) is unambiguously the most important semiconductor that underpins the development of modern microelectronics and optoelectronics, though the rigid and brittle nature of bulk c-Si makes it difficult to implement directly for stretchable applications. Fortunately, the one-dimensional (1D) geometry, or the line-shape, of Si nanowire (SiNW) can be engineered into elastic springs, which indicates an exciting opportunity to fabricate highly stretchable 1D c-Si channels. The implementation of such line-shape-engineering strategy demands both a tiny diameter of the SiNWs, in order to accommodate the strains under large stretching, and a precise growth location, orientation and path control to facilitate device integration. In this review, we will first introduce the recent progresses of an in-plane self-assembly growth of SiNW springs, via a new in-plane solid-liquid-solid (IPSLS) mechanism, where mono-like but elastic SiNW springs are produced by surface-running metal droplets that absorb amorphous Si thin film as precursor. Then, the critical growth control and engineering parameters, the mechanical properties of the SiNW springs and the prospects of developing c-Si based stretchable electronics, will be addressed. This efficient line-shape-engineering strategy of SiNW springs, accomplished via a low temperature batch-manufacturing, holds a strong promise to extend the legend of modern Si technology into the emerging stretchable electronic applications, where the high carrier mobility, excellent stability and established doping and passivation controls of c-Si can be well inherited. Project supported by the National Basic Research 973 Program (No. 2014CB921101), the National Natural Science Foundation of China (No. 61674075), the National Key Research and Development Program of China (No. 2017YFA0205003), the Jiangsu Excellent Young Scholar Program (No. BK20160020), the Scientific and Technological Support Program in Jiangsu Province (No. BE

  16. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  17. Sampling and analysis of 100 Area springs

    International Nuclear Information System (INIS)

    1992-02-01

This report is submitted in fulfillment of Hanford Federal Facility Agreement and Consent Order Milestone M-30-01, submit a report to EPA and Ecology evaluating the impact to the Columbia River from contaminated springs and seeps as described in the operable unit work plans listed in M-30-03. Springs, seeps, sediments, and the Columbia River were sampled for chemical and radiological analyses during the period September 16 through October 21, 1991. A total of 26 locations were sampled. Results of these analyses show that radiological and nonradiological contaminants continue to enter the Columbia River from the retired reactor areas of the 100 Area via the springs. The primary contaminants in the springs are strontium-90, tritium, and chromium. These contaminants were detected in concentrations above drinking water standards. Analyses of total organic carbon were run on all water samples collected; there is no conclusive evidence that organic constituents are entering the river through the springs. Total organic carbon analyses were generally higher for the surface water than for the springs. The results of this study will be used to develop a focused, yet flexible, long-term spring sampling program. Analysis of Columbia River water samples collected at the Hanford Townsite (i.e., downstream of the reactor areas) did not detect any Hanford-specific contaminants

  18. Linear magnetic spring and spring/motor combination

    Science.gov (United States)

    Patt, Paul J. (Inventor); Stolfi, Fred R. (Inventor)

    1991-01-01

    A magnetic spring, or a spring and motor combination, providing a linear spring force characteristic in each direction from a neutral position, in which the spring action may occur for any desired coordinate of a typical orthogonal coordinate system. A set of magnets are disposed, preferably symmetrically about a coordinate axis, poled orthogonally to the desired force direction. A second set of magnets, respectively poled opposite the first set, are arranged on the sprung article. The magnets of one of the sets are spaced a greater distance apart than those of the other, such that an end magnet from each set forms a pair having preferably planar faces parallel to the direction of spring force, the faces being offset so that in a neutral position the outer edge of the closer spaced magnet set is aligned with the inner edge of the greater spaced magnet set. For use as a motor, a coil can be arranged with conductors orthogonal to both the magnet pole directions and the direction of desired spring force, located across from the magnets of one set and fixed with respect to the magnets of the other set. In a cylindrical coordinate system having axial spring force, the magnets are radially poled and motor coils are concentric with the cylinder axis.

  19. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the formation before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  20. Long-Term Hydrologic Monitoring Program. Project Shoal site, Sand Springs Range, Churchill County, Nevada

    International Nuclear Information System (INIS)

    1984-05-01

    The Shoal site is located in Churchill County in the northern part of the Sand Springs Range, approximately 30 miles (48.3 kilometers) southeast of Fallon, Nevada. Project Shoal, with a yield of 12 kilotons, was detonated October 26, 1963. It was conducted as part of the Vela program to obtain event measurements relating to the detection of underground nuclear detonations. The purpose of the Long-Term Hydrologic Monitoring Program at the Shoal site is to obtain data that will assure public safety; inform the public, the news media, and the scientific community relative to radiological contamination; and to document compliance with federal, state, and local antipollution requirements. The Shoal site geographical setting, climate, geology, and hydrology are described. Site history, including Shoal event information and Shoal monitoring is described. The final radiological surveys following the Shoal site cleanup described in this report indicate that there are no radiation levels above natural background on or near the land surface and that no hazard exists or is likely to occur during public use of the surface of the Shoal site. The Long-Term Hydrologic Monitoring Program for the Shoal site is described. 17 references, 4 figures

  1. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    Directory of Open Access Journals (Sweden)

    Mohd Usama

    2017-11-01

Full Text Available At present, big data is very popular because it has proved very successful in many fields, such as social media and e-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte or larger-sized datasets having different structures with high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open source framework that is used to process large amounts of data in an inexpensive and efficient way, and job scheduling is a key factor for achieving high performance in big data processing. This paper gives an overview of big data and highlights the problems and challenges in big data. It then highlights the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and various parameters that affect the performance of job scheduling algorithms in big data, such as the Job Tracker, Task Tracker, Name Node, and Data Node. The primary purpose of this paper is to present a comparative study of job scheduling algorithms along with their experimental results in a Hadoop environment. In addition, this paper describes the advantages, disadvantages, features, and drawbacks of various Hadoop job schedulers such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study among these schedulers.
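The behavioral difference between the FIFO and Fair schedulers compared in the paper can be illustrated with a toy simulation (a deliberate simplification: one task slot, unit time steps, fair sharing modeled as round-robin; this is not how Hadoop implements either scheduler):

```python
def fifo(jobs):
    # FIFO: each job runs to completion in submission order.
    t, finish = 0, {}
    for name, length in jobs:
        t += length
        finish[name] = t
    return finish

def fair(jobs):
    # Fair (round-robin proxy): every waiting job gets one unit per
    # round, so short jobs are not stuck behind long ones.
    remaining = dict(jobs)
    t, finish = 0, {}
    while remaining:
        for name in list(remaining):
            t += 1
            remaining[name] -= 1
            if remaining[name] == 0:
                finish[name] = t
                del remaining[name]
    return finish

jobs = [("batch", 6), ("adhoc", 2)]  # hypothetical workload
print(fifo(jobs))  # the short ad hoc job finishes last under FIFO
print(fair(jobs))  # under fair sharing it finishes much earlier
```

Even in this crude model the trade-off the paper studies is visible: fair sharing cuts the short job's completion time in half while leaving the long job's unchanged.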

  2. Theoretical Predictions of Springing and Their Comparison with Full Scale Measurements

    DEFF Research Database (Denmark)

    Gu, X.; Storhaug, G.; Vidic-Perunovic, Jelena

    2003-01-01

    The present paper considers a large ocean going ship with significant springing responses, which have made a large contribution to the fatigue cracking for certain structural details. Four different theories for predicting ship responses and associated computer programs for predictions of springing...

  3. Urban Big Data and Sustainable Development Goals: Challenges and Opportunities

    Directory of Open Access Journals (Sweden)

    Ali Kharrazi

    2016-12-01

    Full Text Available Cities are perhaps one of the most challenging and yet enabling arenas for sustainable development goals. The Sustainable Development Goals (SDGs) emphasize the need to monitor each goal through objective targets and indicators based on common denominators in the ability of countries to collect and maintain relevant standardized data. While this approach is aimed at harmonizing the SDGs at the national level, it presents unique challenges and opportunities for the development of innovative urban-level metrics through big data innovations. In this article, we make the case for advancing more innovative targets and indicators relevant to the SDGs through the emergence of urban big data. We believe that urban policy-makers are faced with unique opportunities to develop, experiment, and advance big data practices relevant to sustainable development. This can be achieved by situating the application of big data innovations through developing mayoral institutions for the governance of urban big data, advancing the culture and common skill sets for applying urban big data, and investing in specialized research and education programs.

  4. Future Availability of Water Supply from Karstic Springs under Probable Climate Change. The case of Aravissos, Central Macedonia, Greece.

    Science.gov (United States)

    Vafeiadis, M.; Spachos, Th.; Zampetoglou, K.; Soupilas, Th.

    2012-04-01

    The test site of Aravissos is located 70 km to the west (W-NW) of Thessaloniki on the southern flanks of Mount Païko, in the northern part of Central Macedonia. The karstic Aravissos springs supply 40% of the total volume needed for the water supply of Thessaloniki, Greece. As the water is of excellent quality, it is fed directly into the distribution network without any prior treatment. The availability of this source is therefore of high importance for the sustainable water supply of this area, with almost 1,000,000 inhabitants. The water system of Aravissos is developed in a karstic limestone of Late Cretaceous age that covers almost the entire western part of the large anticline of Mount Païko. The climate in this area and in the consumption area, Thessaloniki, is a typical Mediterranean climate with mild, humid winters and hot, dry summers. The total annual number of rainy days is around 110. The production of the Aravissos springs depends mostly on the annual precipitation. As the feeding catchment and the karst aquifer are not well defined, a practical empirical balance model, containing only well-known relevant terms, is applied to simulate the operation of the springs under normal water extraction for water supply at present. The estimation of future weather conditions is based on GCM and RCM simulation data and the extension of trend lines of the actual data. The future availability of adequate water quantities from the springs is finally estimated from the balance model and the simulated future climatic data. This study has been realised within the project CC-WaterS, funded by the SEE program of the European Regional Development Fund (http://www.ccwaters.eu/).
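    The kind of lumped empirical balance model described above can be sketched in a few lines (a toy illustration, not the CC-WaterS model; the linear-reservoir coefficient and the recharge and extraction values are invented): storage is fed by recharge, depleted by extraction, and drained by a spring discharge proportional to storage.

```python
# Minimal lumped karst balance: S[t+1] = S[t] + recharge - extraction - discharge,
# with spring discharge modelled as a linear reservoir, Q = k * S.
# All numbers are illustrative, not calibrated to Aravissos.

def simulate(storage, recharge, extraction, k=0.1):
    """Step the reservoir once per period; return (discharges, final storage)."""
    discharges = []
    for r in recharge:
        q = k * storage                                # linear-reservoir outflow
        storage = max(storage + r - extraction - q, 0.0)
        discharges.append(q)
    return discharges, storage

# Wet winter months followed by a dry summer (hypothetical volume units):
recharge = [12, 10, 8, 4, 1, 0, 0, 0]
q, s = simulate(storage=100.0, recharge=recharge, extraction=3.0)
print([round(x, 1) for x in q])  # discharge declines as extraction and dry months deplete storage
```

    Even this toy version reproduces the qualitative behaviour the abstract relies on: spring yield tracks antecedent recharge, so drier simulated climates translate directly into lower future discharge.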

  5. Preliminary geothermal investigations at Manley Hot Springs, Alaska

    Energy Technology Data Exchange (ETDEWEB)

    East, J.

    1982-04-01

    Manley Hot Springs is one of several hot springs which form a belt extending from the Seward Peninsula to east-central Alaska. All of the hot springs are low-temperature, water-dominated geothermal systems, having formed as the result of circulation of meteoric water along deep-seated fractures near or within granitic intrusives. Shallow, thermally disturbed ground at Manley Hot Springs constitutes an area of 1.2 km by 0.6 km along the lower slopes of Bean Ridge on the north side of the Tanana Valley. This area includes 32 springs and seeps and one warm (29.1°C) well. The hottest springs range in temperature from 61°C to 47°C and are presently utilized for space heating and irrigation. This study was designed to characterize the geothermal system present at Manley Hot Springs and delineate likely sites for geothermal drilling. Several surveys were conducted over a grid system, including shallow ground temperature, helium soil gas, mercury soil, and resistivity surveys. In addition, a reconnaissance ground temperature survey and a water chemistry sampling program were undertaken. The preliminary results, including some preliminary water chemistry, show that shallow hydrothermal activity can be delineated by many of the surveys. Three localities are targeted as likely geothermal well sites, and a model is proposed for the geothermal system at Manley Hot Springs.

  6. Weldon Spring Site Remedial Action Project: Report from the DOE voluntary protection program onsite review, November 17--21, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-28

    This report summarizes the Department of Energy Voluntary Protection Program (DOE-VPP) Review Team's findings from the five-day onsite evaluation of the Weldon Spring Site Remedial Action Project (WSSRAP), conducted November 17-21, 1997. The site was evaluated against the program requirements contained in "US Department of Energy Voluntary Protection Program, Part 1: Program Elements" to determine its success in implementing the five tenets of DOE-VPP. DOE-VPP consists of three programs, with names and functions similar to those in OSHA's VPP. These programs are STAR, MERIT, and DEMONSTRATION. The STAR program is the core of DOE-VPP. The program is aimed at truly outstanding protectors of employee safety and health. The MERIT program is a steppingstone for contractors and subcontractors that have good safety and health programs but need time and DOE guidance to achieve STAR status. The DEMONSTRATION program is rarely used; it allows DOE to recognize achievements in unusual situations about which DOE needs to learn more before determining approval requirements for the STAR status.

  7. Weldon Spring Site Remedial Action Project: Report from the DOE voluntary protection program onsite review, November 17-21, 1997

    International Nuclear Information System (INIS)

    1998-01-01

    This report summarizes the Department of Energy Voluntary Protection Program (DOE-VPP) Review Team's findings from the five-day onsite evaluation of the Weldon Spring Site Remedial Action Project (WSSRAP), conducted November 17-21, 1997. The site was evaluated against the program requirements contained in "US Department of Energy Voluntary Protection Program, Part 1: Program Elements" to determine its success in implementing the five tenets of DOE-VPP. DOE-VPP consists of three programs, with names and functions similar to those in OSHA's VPP. These programs are STAR, MERIT, and DEMONSTRATION. The STAR program is the core of DOE-VPP. The program is aimed at truly outstanding protectors of employee safety and health. The MERIT program is a steppingstone for contractors and subcontractors that have good safety and health programs but need time and DOE guidance to achieve STAR status. The DEMONSTRATION program is rarely used; it allows DOE to recognize achievements in unusual situations about which DOE needs to learn more before determining approval requirements for the STAR status.

  8. A practical guide to big data research in psychology.

    Science.gov (United States)

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
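    To give a flavor of the classification step such tutorials cover, here is a deliberately tiny bag-of-words text classifier in pure Python (a nearest-centroid stand-in for the support vector machines used in the article's walkthroughs; the training snippets and class labels are invented):

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term counts for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(texts):
    """Mean term-count vector of a class."""
    total = Counter()
    for t in texts:
        total.update(vectorize(t))
    return Counter({w: c / len(texts) for w, c in total.items()})

# Invented training snippets for two classes:
train = {
    "sports": ["the team won the game", "a great goal in the match"],
    "tech": ["the new phone has a fast chip", "software update improves the app"],
}
centroids = {label: centroid(texts) for label, texts in train.items()}

def classify(text):
    """Assign the label whose class centroid is most similar to the document."""
    return max(centroids, key=lambda lab: cosine(vectorize(text), centroids[lab]))

print(classify("the match was a close game"))  # sports
```

    A real study would swap this toy for a trained SVM (or the LDA topic model mentioned above) over thousands of documents, but the pipeline shape, vectorize, fit, predict, is the same.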

  9. Seasonal shifts in the diet of the big brown bat (Eptesicus fuscus), Fort Collins, Colorado

    Science.gov (United States)

    Valdez, Ernest W.; O'Shea, Thomas J.

    2014-01-01

    Recent analyses suggest that the big brown bat (Eptesicus fuscus) may be less of a beetle specialist (Coleoptera) in the western United States than previously thought, and that its diet might also vary with temperature. We tested the hypothesis that big brown bats might opportunistically prey on moths by analyzing insect fragments in guano pellets from 30 individual bats (27 females and 3 males) captured while foraging in Fort Collins, Colorado, during May, late July–early August, and late September 2002. We found that bats sampled 17–20 May (n = 12 bats) had a high (81–83%) percentage of volume of lepidopterans in guano, with the remainder (17–19% volume) dipterans and no coleopterans. From 28 May–9 August (n = 17 bats) coleopterans dominated (74–98% volume). On 20 September (n = 1 bat) lepidopterans were 99% of volume in guano. Migratory miller moths (Euxoa auxiliaris) were unusually abundant in Fort Collins in spring and autumn of 2002 and are known agricultural pests as larvae (army cutworms), suggesting that seasonal dietary flexibility in big brown bats has economic benefits.

  10. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data…

  11. Forget the hype or reality. Big data presents new opportunities in Earth Science.

    Science.gov (United States)

    Lee, T. J.

    2015-12-01

    Earth science is arguably one of the most mature science disciplines, constantly acquiring, curating, and utilizing a large volume of data of diverse variety. We dealt with big data before there was big data. For example, while developing the EOS program in the 1980s, the EOS Data and Information System (EOSDIS) was developed to manage the vast amount of data acquired by the EOS fleet of satellites. EOSDIS has remained a shining example of modern science data systems over the past two decades. With the explosion of the internet, the usage of social media, and the proliferation of sensors everywhere, the big data era has brought new challenges. First, Google developed its search algorithm and a distributed data management system. The open source communities quickly followed up and developed the Hadoop file system to facilitate MapReduce workloads. The internet continues to generate tens of petabytes of data every day. There is a significant shortage of algorithms and knowledgeable manpower to mine the data. In response, the federal government developed big data programs that fund research and development projects and training programs to tackle these new challenges. Meanwhile, compared to the internet data explosion, the Earth science big data problem looks quite small. Nevertheless, the big data era presents an opportunity for Earth science to evolve. We have learned about MapReduce algorithms, in-memory data mining, machine learning, graph analysis, and semantic web technologies. How do we apply these new technologies to our discipline and bring the hype down to Earth? In this talk, I will discuss how we might apply some of these big data technologies to our discipline and solve many of our challenging problems. More importantly, I will propose a new Earth science data system architecture to enable new types of scientific inquiry.
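    The MapReduce pattern referred to above can be sketched in a few lines of pure Python (a single-process illustration of the programming model itself, not of Hadoop): a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group.

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs, as a word-count mapper would."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group values by key (the framework does this between map and reduce)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the grouped counts, as a reducer would."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big science", "earth science data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'science': 2, 'earth': 1}
```

    Because each mapper and reducer works independently on its shard or key group, the same three-stage shape scales from this toy to petabyte-sized satellite archives.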

  12. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  13. Groundwater flow cycling between a submarine spring and an inland fresh water spring.

    Science.gov (United States)

    Davis, J Hal; Verdi, Richard

    2014-01-01

    Spring Creek Springs and Wakulla Springs are large first magnitude springs that derive water from the Upper Floridan Aquifer. The submarine Spring Creek Springs are located in a marine estuary and Wakulla Springs are located 18 km inland. Wakulla Springs has had a consistent increase in flow from the 1930s to the present. This increase is probably due to the rising sea level, which puts additional pressure head on the submarine Spring Creek Springs, reducing its fresh water flow and increasing flows in Wakulla Springs. To improve understanding of the complex relations between these springs, flow and salinity data were collected from June 25, 2007 to June 30, 2010. The flow in Spring Creek Springs was most sensitive to rainfall and salt water intrusion, and the flow in Wakulla Springs was most sensitive to rainfall and the flow in Spring Creek Springs. Flows from the springs were found to be connected, and composed of three repeating phases in a karst spring flow cycle: Phase 1 occurred during low rainfall periods and was characterized by salt water backflow into the Spring Creek Springs caves. The higher density salt water blocked fresh water flow and resulted in a higher equivalent fresh water head in Spring Creek Springs than in Wakulla Springs. The blocked fresh water was diverted to Wakulla Springs, approximately doubling its flow. Phase 2 occurred when heavy rainfall resulted in temporarily high creek flows to nearby sinkholes that purged the salt water from the Spring Creek Springs caves. Phase 3 occurred after streams returned to base flow. The Spring Creek Springs caves retained a lower equivalent fresh water head than Wakulla Springs, causing them to flow large amounts of fresh water while Wakulla Springs flow was reduced by about half. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
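    The "equivalent fresh water head" comparisons in this abstract follow from a standard density correction (a generic hydrogeology relation, not a formula taken from the paper; the densities below are typical seawater and freshwater values): a column of denser salt water exerts the same pressure as a somewhat taller column of fresh water, which is why backed-up salt water in the Spring Creek Springs caves can block and divert fresh water flow.

```python
# Equivalent freshwater head of a saltwater column: pressure balance
# rho_s * g * h_s = rho_f * g * h_f  gives  h_f = (rho_s / rho_f) * h_s.

def equivalent_freshwater_head(h_salt, rho_salt=1025.0, rho_fresh=1000.0):
    """Height of a freshwater column exerting the same pressure as h_salt of salt water."""
    return (rho_salt / rho_fresh) * h_salt

# A hypothetical 20 m saltwater column backed up in a spring conduit:
print(round(equivalent_freshwater_head(20.0), 2))  # 20.5, i.e. 0.5 m of extra head
```

    That extra half metre of head over an equal freshwater column is small in absolute terms, but it is enough to reverse the gradient between two springs whose outlets differ little in elevation, producing the flow-switching phases described above.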

  14. Grande Ronde Endemic Spring Chinook Salmon Supplementation Program: Monitoring and Evaluation, 2002 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Boe, Stephen J.; Weldert, Rey F.; Crump, Carrie A. (Confederated Tribes of the Umatilla Indian Reservation, Department of Natural Resources, Pendleton, OR)

    2003-03-01

    This is the fifth annual report of a multi-year project to operate adult collection and juvenile acclimation facilities on Catherine Creek and the upper Grande Ronde River for Snake River spring chinook salmon. These two streams have historically supported populations that provided significant tribal and non-tribal fisheries. Conventional and captive broodstock supplementation techniques are being used to restore spring chinook salmon fisheries in these streams. Statement of Work Objectives for 2002: (1) Plan for, administer, coordinate, and assist co-managers in GRESCP M&E activities. (2) Evaluate performance of supplemented juvenile spring chinook salmon. (3) Evaluate life history differences between wild and hatchery-origin (F1) adult spring chinook salmon. (4) Describe life history characteristics and genetics of adult summer steelhead collected at weirs.

  15. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  16. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  17. Evaluation of the 1996 predictions of the run-timing of wild migrant spring/summer yearling chinook in the Snake River Basin using Program RealTime

    International Nuclear Information System (INIS)

    Townsend, R.L.; Yasuda, D.; Skalski, J.R.

    1997-03-01

    This report is a post-season analysis of the accuracy of the 1996 predictions from the program RealTime. Observed 1996 migration data collected at Lower Granite Dam were compared to the predictions made by RealTime for the spring outmigration of wild spring/summer chinook. Appendix A displays the graphical reports of the RealTime program that were interactively accessible via the World Wide Web during the 1996 migration season. Final reports are available at http://www.cqs.washington.edu/crisprt/. The CRISP model incorporated the predictions of run status to move the timing forecasts further down the Snake River to Little Goose, Lower Monumental, and McNary Dams. An analysis of the dams below Lower Granite Dam is available separately.

  18. Spring Tire

    Science.gov (United States)

    Asnani, Vivake M.; Benzing, Jim; Kish, Jim C.

    2011-01-01

    The spring tire is made from helical springs, requires no air or rubber, and consumes nearly zero energy. The tire design provides greater traction in sandy and/or rocky soil, can operate in microgravity and under harsh conditions (vastly varying temperatures), and is non-pneumatic. Like any tire, the spring tire is an approximately toroidal-shaped object intended to be mounted on a transportation wheel. Its basic function is also similar to that of a traditional tire, in that the spring tire contours to the surface on which it is driven to facilitate traction and to reduce the transmission of vibration to the vehicle. The essential difference between other tires and the spring tire is the use of helical springs to support and/or distribute load. They are coiled wires that deform elastically under load with little energy loss.

  19. Studying Springs in Series Using a Single Spring

    Science.gov (United States)

    Serna, Juan D.; Joshi, Amitabh

    2011-01-01

    Springs are used for a wide range of applications in physics and engineering. Possibly, one of their most common uses is to study the nature of restoring forces in oscillatory systems. While experiments that verify Hooke's law using springs are abundant in the physics literature, those that explore the combination of several springs together are…
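    The combination rule behind such experiments is that compliances add for springs in series (1/k_eff = 1/k1 + 1/k2) while stiffnesses add in parallel. A quick numeric check (the spring constants are invented):

```python
def series_k(*ks):
    """Effective stiffness of springs in series: compliances (1/k) add."""
    return 1.0 / sum(1.0 / k for k in ks)

def parallel_k(*ks):
    """Effective stiffness of springs in parallel: stiffnesses add."""
    return sum(ks)

# Two identical 100 N/m springs:
print(series_k(100.0, 100.0))    # 50.0  (softer in series)
print(parallel_k(100.0, 100.0))  # 200.0 (stiffer in parallel)
```

    The series result, half the stiffness of either spring, is exactly the effect the article's single-spring experiments set out to demonstrate: cutting a uniform spring in half doubles each half's spring constant.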

  20. A simulation methodology of spacer grid residual spring deflection for predictive and interpretative purposes

    International Nuclear Information System (INIS)

    Kim, K. T.; Kim, H. K.; Yoon, K. H.

    1994-01-01

    The in-reactor fuel rod support conditions against the fretting wear-induced damage can be evaluated by spacer grid residual spring deflection. In order to predict the spacer grid residual spring deflection as a function of burnup for various spring designs, a simulation methodology of spacer grid residual spring deflection has been developed and implemented in the GRIDFORCE program. The simulation methodology takes into account cladding creep rate, initial spring deflection, initial spring force, and spring force relaxation rate as the key parameters affecting the residual spring deflection. The simulation methodology developed in this study can be utilized as an effective tool in evaluating the capability of a newly designed spacer grid spring to prevent the fretting wear-induced damage
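    A toy version of such a simulation can convey the idea (this is not the GRIDFORCE model; the functional forms and every constant below are invented for illustration): the residual deflection starts at the as-built value, decays as the spring force relaxes under irradiation, and is further eroded as cladding creep-down opens the rod-to-spring gap, until the spring loses contact.

```python
import math

def residual_deflection(burnup, d0=0.5, relax_rate=0.02, creep_rate=0.004):
    """Illustrative residual spring deflection (mm) vs burnup (MWd/kgU):
    exponential spring-force relaxation plus a linear cladding creep-down term."""
    d = d0 * math.exp(-relax_rate * burnup) - creep_rate * burnup
    return max(d, 0.0)  # the spring lifts off the rod once deflection reaches zero

for bu in (0, 20, 40, 60):
    print(bu, round(residual_deflection(bu), 3))
```

    Running the loop shows deflection falling monotonically with burnup and reaching zero before 60 MWd/kgU under these made-up rates, which is the kind of design margin check the abstract says the real methodology supports.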

  1. Arab Spring Impact on Executive Education in Egypt

    Science.gov (United States)

    Wafa, Dina

    2015-01-01

    Purpose: The purpose of this paper is to study the impact of the Arab Spring on public administration programs in Egypt, with a special focus on executive education programs. Design/Methodology/Approach: The study draws on stakeholder analysis, and uses both primary and secondary data. Findings: The author describes the impact of the Arab Spring…

  2. Beginning Spring

    CERN Document Server

    Caliskan, Mert

    2015-01-01

    Get up to speed quickly with this comprehensive guide to Spring. Beginning Spring is the complete beginner's guide to Java's most popular framework. Written with an eye toward real-world enterprises, the book covers all aspects of application development within the Spring Framework. Extensive samples within each chapter allow developers to get up to speed quickly by providing concrete references for experimentation, building a skillset that drives successful application development by exploiting the full capabilities of Java's latest advances. Spring provides the exact toolset required to build an ent

  3. The Big Fish Down Under: Examining Moderators of the "Big-Fish-Little-Pond" Effect for Australia's High Achievers

    Science.gov (United States)

    Seaton, Marjorie; Marsh, Herbert W.; Yeung, Alexander Seeshing; Craven, Rhonda

    2011-01-01

    Big-fish-little-pond effect (BFLPE) research has demonstrated that academic self-concept is negatively affected by attending high-ability schools. This article examines data from large, representative samples of 15-year-olds from each Australian state, based on the three Program for International Student Assessment (PISA) databases that focus on…

  4. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TPC), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  5. A Study of the Application of Big Data in a Rural Comprehensive Information Service

    Directory of Open Access Journals (Sweden)

    Leifeng Guo

    2015-05-01

    Full Text Available Big data has attracted extensive interest due to its potential tremendous social and scientific value. Researchers are also trying to extract potential value from agriculture big data. This paper presents a study of information services based on big data from the perspective of a rural comprehensive information service. First, we introduce the background of the rural comprehensive information service, and then we present in detail the National Rural Comprehensive Information Service Platform (NRCISP), which is supported by the national science and technology support program. Next, we discuss big data in the NRCISP according to data characteristics, data sources, and data processing. Finally, we discuss a service model and services based on big data in the NRCISP.

  6. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.

  7. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  8. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and analyze the state of the art of Big Data exhaustively; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models, and standards related to Big Data; and finally it sought to identify the most relevant characteristics of Big Data management, so that everything concerning the central topic of the research could be understood. The methodology included reviewing the state of the art of Big Data and presenting its current situation; surveying Big Data technologies; presenting some of the NoSQL databases, which are the ones that allow processing of data in unstructured formats; and showing the data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research begins to explore the Big Data environment.

  9. Effect of section shape on frequencies of natural oscillations of tubular springs

    Science.gov (United States)

    Pirogov, S. P.; Chuba, A. Yu; Cherentsov, D. A.

    2018-05-01

    The necessity of determining the frequencies of natural oscillations of manometric tubular springs is substantiated. Based on the mathematical model and computer program, numerical experiments were performed that allowed us to reveal the effect of geometric parameters on the frequencies of free oscillations of manometric tubular springs.
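    For background, any such natural-frequency calculation rests on the basic oscillator relation f = (1/2π)√(k/m); the geometric parameters of the tube section enter through the effective stiffness k. A generic sketch (the stiffness and mass values are invented and unrelated to the paper's tube geometries):

```python
import math

def natural_frequency(k, m):
    """Fundamental frequency (Hz) of a mass-spring oscillator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# A stiffer cross-section (larger effective k) raises the frequency:
for k in (1000.0, 4000.0):  # N/m, illustrative
    print(round(natural_frequency(k, m=0.25), 2))  # frequency doubles when k quadruples
```

    Changing the tube's section shape shifts its effective stiffness, and hence its natural frequencies, which is exactly the dependence the paper's numerical experiments map out.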

  10. Big Bang Tumor Growth and Clonal Evolution.

    Science.gov (United States)

    Sun, Ruping; Hu, Zheng; Curtis, Christina

    2018-05-01

    The advent and application of next-generation sequencing (NGS) technologies to tumor genomes has reinvigorated efforts to understand clonal evolution. Although tumor progression has traditionally been viewed as a gradual stepwise process, recent studies suggest that evolutionary rates in tumors can be variable with periods of punctuated mutational bursts and relative stasis. For example, Big Bang dynamics have been reported, wherein after transformation, growth occurs in the absence of stringent selection, consistent with effectively neutral evolution. Although first noted in colorectal tumors, effective neutrality may be relatively common. Additionally, punctuated evolution resulting from mutational bursts and cataclysmic genomic alterations have been described. In this review, we contrast these findings with the conventional gradualist view of clonal evolution and describe potential clinical and therapeutic implications of different evolutionary modes and tempos. Copyright © 2018 Cold Spring Harbor Laboratory Press; all rights reserved.

  11. Just Spring

    CERN Document Server

    Konda, Madhusudhan

    2011-01-01

    Get a concise introduction to Spring, the increasingly popular open source framework for building lightweight enterprise applications on the Java platform. This example-driven book for Java developers delves into the framework's basic features, as well as advanced concepts such as containers. You'll learn how Spring makes Java Messaging Service easier to work with, and how its support for Hibernate helps you work with data persistence and retrieval. Throughout Just Spring, you'll get your hands deep into sample code, beginning with a problem that illustrates dependency injection, Spring's co
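The dependency-injection problem the book opens with is language-agnostic. A minimal constructor-injection sketch (in Python rather than the book's Java, with hypothetical class names) illustrates the core idea that Spring's container automates:

```python
class MessageService:
    """A collaborating service (hypothetical stand-in for a real dependency)."""
    def send(self, to: str, body: str) -> str:
        return f"to={to}: {body}"

class Notifier:
    # Constructor injection: the dependency is passed in rather than
    # constructed internally -- the wiring Spring's container performs for you.
    def __init__(self, service: MessageService) -> None:
        self.service = service

    def notify(self, user: str) -> str:
        return self.service.send(user, "hello")

notifier = Notifier(MessageService())
```

Because `Notifier` never instantiates its own dependency, a test can inject a stub service, which is the practical payoff of the pattern.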

  12. The Use of Expert Judgment in the Assessment of Demonstrated Learning in the Antioch College-Yellow Springs Adult Degree Completion Program. CAEL Institutional Report No. 1. Antioch College.

    Science.gov (United States)

    Lewis, Robert

    The implementation of the Adult Degree Completion Program (ADCP) at Antioch-Yellow Springs is described. The ADCP is a transfer program designed to enable adults who have never finished college to complete their undergraduate degree work, often without having to abandon their obligations to families or to professions. To enroll in the program,…

  13. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    OpenAIRE

    Mohd Usama; Mengchen Liu; Min Chen

    2017-01-01

    At present, big data is very popular, because it has proved to be very successful in many fields such as social media, E-commerce transactions, etc. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte or larger-sized datasets having different structures with high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open source framework that is used to process large amounts of data in an inexpensive and ...
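The behavioral differences such scheduler benchmarks expose can be sketched with a toy model (this is not Hadoop's actual scheduler code): a FIFO scheduler runs jobs to completion in arrival order, while a fair scheduler time-slices across jobs, letting small jobs finish early even behind a large one:

```python
from collections import deque

def fifo(jobs):
    """Run jobs to completion in arrival order; return {name: finish_time}."""
    t, finish = 0, {}
    for name, need in jobs:
        t += need
        finish[name] = t
    return finish

def fair_share(jobs, quantum=1):
    """Round-robin one time unit per job: a toy 'fair' scheduler."""
    t, finish = 0, {}
    queue = deque(jobs)
    while queue:
        name, need = queue.popleft()
        t += min(quantum, need)
        if need > quantum:
            queue.append((name, need - quantum))
        else:
            finish[name] = t
    return finish

jobs = [("big", 4), ("small", 1)]   # hypothetical workload: one long job, one short
```

Under FIFO the short job waits behind the long one; under fair sharing it completes after two time units, which is the latency trade-off real Hadoop schedulers tune.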

  14. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension that is related to storage, analytics and visualization of big data; the human aspects of big data; and, in addition, the process management dimension, which addresses the aspects of big data management from both a technological and a business perspective.

  15. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  16. The role of risk assessment in project planning at the Weldon Spring Quarry, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    Haroun, L.A.; Peterson, J.M.

    1989-01-01

    This paper presents the methodology used to prepare a baseline risk evaluation of the bulk wastes at the quarry. The DOE is proposing to remove these bulk wastes and transport them approximately 6.4 km (4 mi) to a temporary storage facility at the chemical plant area of the Weldon Spring site. The DOE has responsibility for cleanup activities at the Weldon Spring site under its Surplus Facilities Management Program (SFMP). A baseline risk evaluation is an evaluation of the potential impacts on human health and the environment that may result from exposure to releases of contaminants from a site in the absence of site remediation. This evaluation is a key component of the remedial investigation (RI) process, as identified in guidance from the US Environmental Protection Agency (EPA) that addresses sites subject to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended by the Superfund Amendments and Reauthorization Act of 1986. Response actions at the Weldon Spring quarry are subject to CERCLA requirements because the quarry is listed on the EPA's National Priorities List

  17. Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles

    OpenAIRE

    Abid, Haider J.; Chen, Jie; Nassar, Ameen A.

    2015-01-01

    This paper investigates the GENSIS air spring suspension system equivalence to a passive suspension system. The SIMULINK simulation together with the OptiY optimization is used to obtain the air spring suspension model equivalent to passive suspension system, where the car body response difference from both systems with the same road profile inputs is used as the objective function for optimization (OptiY program). The parameters of air spring system such as initial pressure, volume of bag, l...

  18. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  19. Spring Framework 5: Themes & Trends

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Spring Framework 5.0/5.1, scheduled for release in early/late 2017, focuses on several key themes: reactive web applications based on Reactive Streams, comprehensive support for JDK 9 and HTTP/2, as well as the latest API generations in the Enterprise Java ecosystem. This talk presents the overall story in the context of wider industry trends, highlighting Spring’s unique programming model strategy.

  20. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a registration culture, and IT-competent employees and customers, which make a leading position possible, but only if companies get ready for the next big data wave.

  1. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an ever-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
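The variance inflation caused by "weak but pervasive dependence" can be made concrete with the standard formula for the variance of the mean of n equicorrelated observations, Var(mean) = (sigma^2/n)(1 + (n-1)rho). This is an illustrative calculation with assumed values, not an analysis from the paper:

```python
def var_of_mean(sigma2: float, n: int, rho: float = 0.0) -> float:
    """Variance of the sample mean of n equicorrelated observations.

    With rho = 0 this reduces to the classical sigma^2 / n shrinkage;
    any positive rho leaves a floor that does not vanish as n grows.
    """
    return sigma2 / n * (1 + (n - 1) * rho)

iid = var_of_mean(1.0, 10_000)            # classical independent case
dep = var_of_mean(1.0, 10_000, rho=0.01)  # weak but pervasive dependence
```

With these assumed numbers the dependent case has roughly a hundred times the variance of the independent case: a huge dataset behaves like a much smaller effective sample.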

  2. Big data science: A literature review of nursing research exemplars.

    Science.gov (United States)

    Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W

    Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating we may find information, facts, social insights and benchmarks that were once virtually impossible to find or were simply nonexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and the society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  4. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  5. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  6. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  7. Application of a nonlinear spring element to analysis of circumferentially cracked pipe under dynamic loading

    International Nuclear Information System (INIS)

    Olson, R.; Scott, P.; Wilkowski, G.M.

    1992-01-01

    As part of the US NRC's Degraded Piping Program, the concept of using a nonlinear spring element to simulate the response of cracked pipe in dynamic finite element pipe evaluations was initially proposed. The nonlinear spring element is used to represent the moment versus rotation response of the cracked pipe section. The moment-rotation relationship for the crack size and material of interest is determined from either J-estimation scheme analyses or experimental data. In this paper, a number of possible approaches for modeling the nonlinear stiffness of the cracked pipe section are introduced. One approach, modeling the cracked section moment rotation response with a series of spring-slider elements, is discussed in detail. As part of this discussion, results from a series of finite element predictions using the spring-slider nonlinear spring element are compared with the results from a series of dynamic cracked pipe system experiments from the International Piping Integrity Research Group (IPIRG) program

  8. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    Science.gov (United States)

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  9. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary explains that big data is where we use huge quantities of data to make better predictions based on the fact that we identify patterns in the data, rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  10. Measurement of the neutron spectrum of the Big Ten critical assembly by lithium-6 spectrometry

    International Nuclear Information System (INIS)

    De Leeuw-Gierts, G.; De Leeuw, S.; Hansen, G.E.; Helmick, H.H.

    1979-01-01

    The central neutron-flux spectrum of the Los Alamos Scientific Laboratory's critical assembly, Big Ten, was measured with a ⁶Li spectrometer and techniques developed at the Centre d'Etude de L'Energie Nucleaire, Mol, as part of an experimental program to establish the characteristics of Big Ten

  11. Measurement of the neutron spectrum of the Big Ten critical assembly by lithium-6 spectrometry

    International Nuclear Information System (INIS)

    Leeuw-Gierts, G. de; Leeuw, S. de

    1980-01-01

    The central neutron-flux spectrum of the Los Alamos Scientific Laboratory's critical assembly, Big Ten, was measured with a ⁶Li spectrometer and techniques developed at the Centre d'Etude de l'Energie Nucleaire, Mol, as part of an experimental program to establish the characteristics of Big Ten

  12. Big Data and Intelligence: Applications, Human Capital, and Education

    Directory of Open Access Journals (Sweden)

    Michael Landon-Murray

    2016-06-01

    Full Text Available The potential for big data to contribute to the US intelligence mission goes beyond bulk collection, social media and counterterrorism. Applications will speak to a range of issues of major concern to intelligence agencies, from military operations to climate change to cyber security. There are challenges too: procurement lags, data stovepiping, separating signal from noise, sources and methods, a range of normative issues, and central to managing these challenges, human capital. These potential applications and challenges are discussed and a closer look at what data scientists do in the Intelligence Community (IC is offered. Effectively filling the ranks of the IC’s data science workforce will depend on the provision of well-trained data scientists from the higher education system. Program offerings at America’s top fifty universities will thus be surveyed (just a few years ago there were reportedly no degrees in data science. One Master’s program that has melded data science with intelligence is examined as well as a university big data research center focused on security and intelligence. This discussion goes a long way to clarify the prospective uses of data science in intelligence while probing perhaps the key challenge to optimal application of big data in the IC.

  13. Thermal springs of Wyoming

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, R.M.; Hinckley, B.S.

    1978-01-01

    This bulletin attempts, first, to provide a comprehensive inventory of the thermal springs of Wyoming; second, to explore the geologic and hydrologic factors producing these springs; and, third, to analyze the springs collectively as an indicator of the geothermal resources of the state. A general discussion of the state's geology and the mechanisms of thermal spring production, along with a brief comparison of Wyoming's springs with worldwide thermal features are included. A discussion of geothermal energy resources, a guide for visitors, and an analysis of the flora of Wyoming's springs follow the spring inventory. The listing and analysis of Wyoming's thermal springs are arranged alphabetically by county. Tabulated data are given on elevation, ownership, access, water temperature, and flow rate. Each spring system is described and its history, general characteristics and uses, geology, hydrology, and chemistry are discussed. (MHR)

  14. Strontium isotopic composition of hot spring and mineral spring waters, Japan

    International Nuclear Information System (INIS)

    Notsu, Kenji; Wakita, Hiroshi; Nakamura, Yuji

    1991-01-01

    In Japan, hot springs and mineral springs are distributed in Quaternary and Neogene volcanic regions as well as in granitic, sedimentary and metamorphic regions lacking in recent volcanic activity. The ⁸⁷Sr/⁸⁶Sr ratio was determined in hot spring and mineral spring waters obtained from 47 sites. The ratios of waters from Quaternary and Neogene volcanic regions were in the range 0.703-0.708, which is lower than that from granitic, sedimentary and metamorphic regions (0.706-0.712). The geographical distribution of the ratios coincides with the bedrock geology, and particularly the ratios of the waters in Quaternary volcanic regions correlate with those of surrounding volcanic rocks. These features suggest that subsurface materials control the ⁸⁷Sr/⁸⁶Sr ratios of soluble components in the hot spring and mineral spring waters. (author)

  15. A natural tracer investigation of the hydrological regime of Spring Creek Springs, the largest submarine spring system in Florida

    Science.gov (United States)

    Dimova, Natasha T.; Burnett, William C.; Speer, Kevin

    2011-04-01

    This work presents results from a nearly two-year monitoring of the hydrologic dynamics of the largest submarine spring system in Florida, Spring Creek Springs. During the summer of 2007 this spring system was observed to have significantly reduced flow due to persistent drought conditions. Our examination of the springs revealed that the salinity of the springs' waters had increased significantly, from 4 in 2004 to 33 in July 2007, with anomalously high radon (²²²Rn, t₁/₂ = 3.8 days) concentrations in surface water indicating substantial saltwater intrusion into the local aquifer. During our investigation from August 2007 to May 2009 we deployed on an almost monthly basis a continuous radon-in-water measurement system and monitored the salinity fluctuations in the discharge area. To evaluate the springs' freshwater flux we developed three different models: two of them are based on water velocity measurements and either salinity or ²²²Rn in the associated surface waters as groundwater tracers. The third approach used only salinity changes within the spring area. The three models showed good agreement and the results confirmed that the hydrologic regime of the system is strongly correlated to local precipitation and water table fluctuations, with higher discharges after major rain events and very low, even reverse flow during prolonged droughts. High flow spring conditions were observed twice during our study, in the early spring and mid-late summer of 2008. However, the freshwater spring flux during our observation period never reached the 4.9×10⁶ m³/day reported in the 1970s. The maximum spring flow was estimated at about 3.0×10⁶ m³/day after heavy precipitation in February-March 2008. As a result of this storm (total of 173 mm) the salinity in the spring area dropped from about 27 to 2 in only two days. The radon-in-water concentrations dramatically increased in parallel, from about 330 Bq/m³ to about 6600 Bq/m³. Such a rapid response suggests a direct
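The salinity-based approach described above amounts to two-endmember mixing. A minimal sketch (with assumed endmember salinities of 0 for groundwater and 35 for seawater, not the study's calibrated values):

```python
def freshwater_fraction(s_obs: float, s_fresh: float, s_marine: float) -> float:
    """Two-endmember salinity mixing: fraction of freshwater in an observed sample."""
    return (s_marine - s_obs) / (s_marine - s_fresh)

# Salinity in the spring area fell from ~27 to ~2 after the Feb-Mar 2008 storm;
# with the assumed endmembers, that implies a jump in freshwater fraction.
before = freshwater_fraction(27.0, 0.0, 35.0)
after = freshwater_fraction(2.0, 0.0, 35.0)
```

Multiplied by a measured total discharge, such a fraction yields a freshwater flux, which is the logic behind the salinity-based model variant.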

  16. Collaborative Approaches Needed to Close the Big Data Skills Gap

    Directory of Open Access Journals (Sweden)

    Steven Miller

    2014-04-01

    Full Text Available The big data and analytics talent discussion has largely focused on a single role – the data scientist. However, the need is much broader than data scientists. Data has become a strategic business asset. Every professional occupation must adapt to this new mindset. Universities in partnership with industry must move quickly to ensure that the graduates they produce have the required skills for the age of big data. Existing curricula should be reviewed and adapted to ensure relevance. New curricula and degree programs are needed to meet the needs of industry.

  17. Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles.

    Science.gov (United States)

    Abid, Haider J; Chen, Jie; Nassar, Ameen A

    2015-01-01

    This paper investigates the GENSIS air spring suspension system equivalence to a passive suspension system. The SIMULINK simulation together with the OptiY optimization is used to obtain the air spring suspension model equivalent to passive suspension system, where the car body response difference from both systems with the same road profile inputs is used as the objective function for optimization (OptiY program). The parameters of air spring system such as initial pressure, volume of bag, length of surge pipe, diameter of surge pipe, and volume of reservoir are obtained from optimization. The simulation results show that the air spring suspension equivalent system can produce responses very close to the passive suspension system.
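The equivalence search described above reduces to minimizing a response-difference objective over candidate parameters. A crude grid-search stand-in for the OptiY optimization, using hypothetical deflection data and a linear spring model rather than the paper's full air spring dynamics:

```python
def objective(k: float, deflections, target_forces) -> float:
    """Sum of squared differences between a linear spring model (F = k*x) and target data."""
    return sum((k * x - f) ** 2 for x, f in zip(deflections, target_forces))

def best_stiffness(deflections, target_forces, candidates):
    """Pick the candidate stiffness minimizing the response difference."""
    return min(candidates, key=lambda k: objective(k, deflections, target_forces))

xs = [0.01, 0.02, 0.03]        # hypothetical deflections (m)
fs = [200.0, 400.0, 600.0]     # hypothetical target "air spring" forces (N)
k_eq = best_stiffness(xs, fs, [k * 1000.0 for k in range(1, 51)])
```

Real parameter identification would optimize several coupled parameters (pressure, bag volume, surge pipe geometry) against a simulated time response, but the objective-minimization structure is the same.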

  18. The effect of support springs in ends welded gap hollow YT-joint

    Directory of Open Access Journals (Sweden)

    R. F. Vieira

    Full Text Available This paper presents an analysis of the effect of support springs on circular hollow sections welded at their ends into a YT joint. The overall behavior and failure of the joint were characterized under axial compression of the lap brace. Two joint failure modes were identified: chord wall plastification (Mode A) and cross-sectional chord buckling (Mode F) in the region below the lap brace. The system was modeled with and without support springs using the numerical finite element program Ansys. Model results were compared with experimental data in terms of principal stress at the joint intersection. The finite element model without support springs proved to be more accurate than that with support springs.

  19. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go deeper into the theme, the definitions, and the various questions related to big data. In this first part I will try to lay out matters concerning Big Data theory and

  20. Instant Spring Tool Suite

    CERN Document Server

    Chiang, Geoff

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. A tutorial guide that walks you through how to use the features of Spring Tool Suite using well defined sections for the different parts of Spring.Instant Spring Tool Suite is for novice to intermediate Java developers looking to get a head-start in enterprise application development using Spring Tool Suite and the Spring framework. If you are looking for a guide for effective application development using Spring Tool Suite, then this book is for you.

  1. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  2. Investigation of the mineral potential of the Clipper Gap, Lone Mountain-Weepah, and Pipe Spring plutons, Nevada

    International Nuclear Information System (INIS)

    Tingley, J.V.; Maldonado, F.

    1983-01-01

    The Clipper Gap pluton, composed mostly of quartz monzonite with minor granite, granodiorite, and crosscutting alaskite dikes, intrudes Paleozoic western facies strata. A narrow zone of contact metamorphism is present at the intrusive-sediment contact. No mineral production has been recorded from Clipper Gap, but quartz veins containing gold-silver-copper mineral occurrences have been prospected there from the late 1800's to the present. Areas of the Lone Mountain-Weepah plutons that were studied are located in Esmeralda County about 14 km west of Tonopah, Nevada. At Lone Mountain, a Cretaceous intrusive cuts folded Precambrian and Cambrian sediments. Lead-zinc ores have been mined from small replacement ore bodies in the Alpine district, west of Lone Mountain. Copper and molybdenum occurrences have been found along the east flank of Lone Mountain, and altered areas were noted in intrusive outcrops around the south end of Lone Mountain. Mineral occurrences are widespread and varied with mining activity dating back to the 1860's. The Pipe Spring pluton study area is flanked by two important mining districts, Manhattan to the north and Belmont to the northeast. Mining activity at Belmont dates from 1865. Activity at Manhattan was mainly between 1907 and 1947, but the district is active at the present time (1979). Four smaller mining areas, Monarch, Spanish Springs, Baxter Spring, and Willow Springs, are within the general boundary of the area. The Pipe Spring pluton study area contains numerous prospects along the northern contact zone of the pluton. Tungsten-bearing veins occur within the pluton near Spanish Springs, with potential for gold-tungsten placer in the Ralston Valley. Nickel and associated metals occur at Willow Spring and Monarch Ranch, where prospects may be associated with the margin of the Big Ten Peak Caldera

  3. Pro Spring Integration

    CERN Document Server

    Lui, M; Chan, Andy; Long, Josh

    2011-01-01

    Pro Spring Integration is an authoritative book from the experts that guides you through the vast world of enterprise application integration (EAI) and application of the Spring Integration framework towards solving integration problems. The book is:. * An introduction to the concepts of enterprise application integration * A reference on building event-driven applications using Spring Integration * A guide to solving common integration problems using Spring Integration What makes this book unique is its coverage of contemporary technologies and real-world information, with a focus on common p

  4. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  5. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move and how is it analyzed? All these questions will be answered during the talk.

  6. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second later), and the fate of the Universe are all discussed. (U.K.)

  7. Introducing Public Libraries to The Big Read: Final Report on the Audio Guide Distribution

    Science.gov (United States)

    Sloan, Kay; Randall, Michelle

    2009-01-01

    In July 2008, over 14,000 public libraries throughout the U.S. received, free of charge, a set of fourteen Audio Guides introducing them to The Big Read. Since 2007, when the National Endowment for the Arts and the Institute of Museum and Library Services, in partnership with Arts Midwest, debuted The Big Read, the program has awarded grants to…

  8. Big Bear Exploration Ltd. 1998 annual report

    International Nuclear Information System (INIS)

    1999-01-01

    During the first quarter of 1998 Big Bear completed a purchase of additional light oil assets in the Rainbow Lake area of Alberta, financed with new equity and bank debt. The business plan was to immediately exploit these light oil assets, the result of which would be increased reserves, production and cash flow. Although drilling results in the first quarter on the Rainbow Lake properties were mixed, oil prices started to free fall and drilling costs were much higher than expected. As a result, the company completed a reduced program, which resulted in less incremental production and cash flow than it had budgeted for. On April 29, 1998, Big Bear entered into an agreement with Belco Oil and Gas Corp. and Moan Investments Ltd. for the issuance of convertible preferred shares at a gross value of $15,750,000; these shares were eventually converted at 70 cents per share to common equity. As a result of the continued plunge in oil prices, the lending value of the company's assets continued to fall, requiring it to take action in order to meet its financial commitments. Late in the third quarter Big Bear issued equity for proceeds of $11,032,000, which further reduced the company's debt. Although the company was extremely active in identifying and pursuing acquisition opportunities, it became evident that Belco Oil and Gas Corp. and Big Bear did not share common criteria for acquisitions, which resulted in the restructuring of their relationship in the fourth quarter. With the future of oil prices in question, Big Bear decided that it would change its focus to natural gas and would refocus its efforts on acquiring natural gas assets to fuel its growth. The purchase of Blue Range put Big Bear in a difficult position in terms of the latter's growth. In summary, what started as a difficult year ended in disappointment.

  9. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  10. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  11. The study on stress-strain state of the spring at high temperature using ABAQUS

    Directory of Open Access Journals (Sweden)

    H Sun

    2014-01-01

    Full Text Available Cylindrical helical springs are widely used in the elements of thermal energy devices, where it is necessary to guarantee the stability of the spring's stress state at high temperature. The stress relaxation phenomenon is studied in this paper. Calculations are carried out in ABAQUS and verified against analytical calculations. The paper describes the distribution and character of stress contour lines on the cross section of the spring under instantaneous load, and explicates the relaxation law with time. The research object is a cylindrical helical spring working at high temperature; the purpose of the work is to obtain the stress relaxation law of the spring and to guarantee its long-term strength. The article presents the basic theory of the helical spring, establishes a mathematical model of spring creep under compression and torsion loads, and gives the stress formulas for each component in the cross section of the spring. The relaxation calculation process is analyzed in ABAQUS, and the analytical formulas for spring stress are compared with the simulation results. A finite element model for stress creep analysis in the cross section is created; the spring material is stainless steel 10X18N9T, and the springs are used at a temperature of 650 °C. At the beginning, the stress-strain state of the spring is elastic; the change of creep stress is then analyzed under the conditions of constant load and of fixed compression. Under fixed compression, the stresses decrease quickly in most of the cross section, and the point of minimum shear stress gradually moves toward the outer diameter; because of this, stresses in a small area near the center increase slowly at first and then decrease gradually with time. Under constant load, the stresses decrease quickly in the surrounding area and increase
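The elastic shear-stress formulas the abstract refers to can be illustrated with the classical torsion relation for a helical compression spring, tau_max = K_w * 8*F*D / (pi * d^3), where Wahl's factor K_w corrects for wire curvature and direct shear. The sketch below is illustrative only; the load and geometry values are hypothetical and not taken from the paper.

```python
import math

def wahl_factor(C: float) -> float:
    """Wahl correction factor for a spring of index C = D/d,
    accounting for wire curvature and direct shear."""
    return (4 * C - 1) / (4 * C - 4) + 0.615 / C

def max_shear_stress(F: float, D: float, d: float) -> float:
    """Maximum shear stress (Pa) in the wire cross section of a helical
    compression spring under axial load F (N), with mean coil diameter
    D (m) and wire diameter d (m)."""
    C = D / d
    return wahl_factor(C) * 8.0 * F * D / (math.pi * d ** 3)

# Hypothetical example: 100 N axial load, 40 mm mean coil diameter, 5 mm wire
tau = max_shear_stress(100.0, 0.040, 0.005)  # roughly 96 MPa
```

In the paper's setting this elastic stress is only the initial state; creep relaxation then redistributes it over time, which is what the ABAQUS analysis tracks.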

  12. Reactor dosimetry calibrations in the Big Ten critical assembly

    International Nuclear Information System (INIS)

    Barr, D.W.; Hansen, G.E.

    1977-01-01

    Eleven irradiations of foil packs located in the central region of Big Ten were made for the Interlaboratory Reaction Rate Program. Each irradiation was at a nominal 10¹⁵ fluence, and the principal fluence monitor was the National Bureau of Standards' double fission chamber containing ²³⁵U and ²³⁸U deposits, located at the center of Big Ten. Secondary monitors consisted of three external fission chambers and two internal foil sets containing Au, In, and Al. Activities of one set were counted at the LASL and the other at the Hanford Engineering Development Laboratory. The uncertainty in relative fluence for each irradiation was ±0.3%.

  13. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  14. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unreached. Big data is generally characterized by four factors; volume, velocity and variety are three factors that distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  15. Missouri Department of Natural Resources Hazardous Waste Program Weldon Spring site remedial action project. Status of project to date January 1996

    International Nuclear Information System (INIS)

    1998-01-01

    This document describes the progress made by the Missouri Department of Natural Resources during the third year (1995) of the Agreement in Support (AIS) in its oversight role at the Weldon Spring site. The accomplishments this year include participation in several workgroup meetings, oversight of the two operable units (Groundwater and Quarry Residuals), coordination between the US DOE and the various regulatory programs, and continued independent analysis of the treated water discharges.

  16. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  17. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... Chapter 1: Cryptography for Big Data Security. 1.1 Introduction. With the amount

  18. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  19. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data, the role it plays in the current data revolution, and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced, as distinct from the attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  20. Envisioning the future of 'big data' biomedicine.

    Science.gov (United States)

    Bui, Alex A T; Van Horn, John Darrell

    2017-05-01

    Through the increasing availability of more efficient data collection procedures, biomedical scientists are now confronting ever larger sets of data, often finding themselves struggling to process and interpret what they have gathered. This, while still more data continues to accumulate. This torrent of biomedical information necessitates creative thinking about how the data are being generated, how they might be best managed, analyzed, and eventually how they can be transformed into further scientific understanding for improving patient care. Recognizing this as a major challenge, the National Institutes of Health (NIH) has spearheaded the "Big Data to Knowledge" (BD2K) program - the agency's most ambitious biomedical informatics effort ever undertaken to date. In this commentary, we describe how the NIH has taken on "big data" science head-on, how a consortium of leading research centers are developing the means for handling large-scale data, and how such activities are being marshalled for the training of a new generation of biomedical data scientists. All in all, the NIH BD2K program seeks to position data science at the heart of 21st-century biomedical research. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. The Design of Intelligent Repair Welding Mechanism and Relative Control System of Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available Effective repair of worn big gears has a large influence on ensuring production safety and enhancing economic benefits. A kind of intelligent repair welding method was put forward, aimed mainly at the restriction conditions of big gears: high production cost, long production cycle and high-intensity manual repair welding work. A big gear repair welding mechanism was designed in this paper, and its working principle and part selection are introduced. The three-dimensional model of the mechanism was constructed with the Pro/E three-dimensional design software. Three-dimensional motions are realized by motors controlling ball screws. According to the involute gear feature, the complicated curved motion on the gear surface can be transformed into linear motion by orientation; in this way, repair welding of the worn gear area can be realized. In the design of the control system, Siemens S7-200 series hardware was chosen, with Siemens STEP7 programming software as the system design tool. The entire repair welding process was verified by experimental simulation. This provides a practical and feasible method for the intelligent repair welding of big worn gears.

  2. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon comes the need for new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. The growth of data creates a situation where the classic systems for the collection, storage, processing, and visualization of data are losing the battle with the large volume, speed, and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in recent years in IT circles; it is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics service-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  3. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Data beyond the storage capacity and processing power of traditional systems is called big data. The term is used for data sets so large or complex that traditional tools cannot handle them. Big data size is a constantly moving target, year by year ranging from a few dozen terabytes to many petabytes; on social networking sites, for example, the amount of data produced by people is growing rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It covers the possibility and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to acquire, organize and analyze the various types of data. In this paper we describe applications, problems and tools of big data and give an overview of big data.

  4. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  5. Clean Cities Now: Vol. 18, No. 1, Spring 2014 (Newsletter)

    Energy Technology Data Exchange (ETDEWEB)

    2014-04-01

    Spring 2014 edition of the biannual newsletter of the U.S. Department of Energy's Clean Cities program. Each issue contains program news, success stories, and information about tools and resources to assist in the deployment of alternative fuels, advanced vehicles, idle reduction, fuel efficiency improvements, and other measures to cut petroleum use in transportation.

  6. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  7. Big Data Meets Physics Education Research: From MOOCs to University-Led High School Programs

    Science.gov (United States)

    Seaton, Daniel

    2017-01-01

    The Massive Open Online Course (MOOC) movement has catalyzed discussions of digital learning on campuses around the world and highlighted the increasingly large, complex datasets related to learning. Physics Education Research can and should play a key role in measuring outcomes of this most recent wave of digital education. In this talk, I will discuss big data and learning analytics through multiple modes of teaching and learning enabled by the open-source edX platform: open-online, flipped, and blended. Open-Online learning will be described through analysis of MOOC offerings from Harvard and MIT, where 2.5 million unique users have led to 9 million enrollments across nearly 300 courses. Flipped instruction will be discussed through an Advanced Placement program at Davidson College that empowers high school teachers to use AP aligned, MOOC content directly in their classrooms with only their students. Analysis of this program will be highlighted, including results from a pilot study showing a positive correlation between content usage and externally validated AP exam scores. Lastly, blended learning will be discussed through specific residential use cases at Davidson College and MIT, highlighting unique course models that blend open-online and residential experiences. My hope for this talk is that listeners will better understand the current wave of digital education and the opportunities it provides for data-driven teaching and learning.

  8. Feasibility study for management of the bulk wastes at the Weldon Spring quarry, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    1990-02-01

    The US Department of Energy (DOE), under its Surplus Facilities Management Program, is responsible for conducting remedial actions at the Weldon Spring site in St. Charles County, Missouri. The Weldon Spring site, which is listed on the National Priorities List of the US Environmental Protection Agency (EPA), became contaminated as a result of processing and disposal activities that took place from the 1940s through the 1960s. The site consists of a quarry and a chemical plant area located about 6.4 km (4 mi) northeast of the quarry. The quarry is surrounded by the Weldon Spring Wildlife Area and is near a well field that constitutes a major source of potable water for St. Charles County; the nearest supply well is located about 0.8 km (0.5 mi) southeast of the quarry. From 1942 to 1969, the quarry was used for the disposal of various radioactively and chemically contaminated materials. Bulk wastes in the quarry consist of contaminated soils and sediments, rubble, metal debris, and equipment. As part of overall site remediation, DOE is proposing to conduct an interim remedial action at the quarry to manage the radioactively and chemically contaminated bulk waste contained therein. 105 refs., 33 figs., 42 tabs

  9. Simulated big sagebrush regeneration supports predicted changes at the trailing and leading edges of distribution shifts

    Science.gov (United States)

    Schlaepfer, Daniel R.; Taylor, Kyle A.; Pennington, Victoria E.; Nelson, Kellen N.; Martin, Trace E.; Rottler, Caitlin M.; Lauenroth, William K.; Bradford, John B.

    2015-01-01

    Many semi-arid plant communities in western North America are dominated by big sagebrush. These ecosystems are being reduced in extent and quality due to economic development, invasive species, and climate change. These pervasive modifications have generated concern about the long-term viability of sagebrush habitat and sagebrush-obligate wildlife species (notably greater sage-grouse), highlighting the need for better understanding of the future big sagebrush distribution, particularly at the species' range margins. These leading and trailing edges of potential climate-driven sagebrush distribution shifts are likely to be areas most sensitive to climate change. We used a process-based regeneration model for big sagebrush, which simulates potential germination and seedling survival in response to climatic and edaphic conditions and tested expectations about current and future regeneration responses at trailing and leading edges that were previously identified using traditional species distribution models. Our results confirmed expectations of increased probability of regeneration at the leading edge and decreased probability of regeneration at the trailing edge below current levels. Our simulations indicated that soil water dynamics at the leading edge became more similar to the typical seasonal ecohydrological conditions observed within the current range of big sagebrush ecosystems. At the trailing edge, an increased winter and spring dryness represented a departure from conditions typically supportive of big sagebrush. Our results highlighted that minimum and maximum daily temperatures as well as soil water recharge and summer dry periods are important constraints for big sagebrush regeneration. Overall, our results confirmed previous predictions, i.e., we see consistent changes in areas identified as trailing and leading edges; however, we also identified potential local refugia within the trailing edge, mostly at sites at higher elevation. Decreasing
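The kind of process-based regeneration logic the abstract describes, with germination and seedling survival gated by soil moisture and temperature, can be caricatured in a few lines. Everything below (thresholds, scaling, variable names, and input values) is a hypothetical toy for illustration, not the model of Schlaepfer et al.

```python
def regeneration_score(spring_soil_moisture: float,
                       min_temp_c: float,
                       max_temp_c: float) -> float:
    """Toy germination-success score in [0, 1]. All thresholds here are
    invented illustrations, not parameters of the actual model."""
    # Moisture suitability: scales up to a notional field-capacity fraction.
    moisture = min(spring_soil_moisture / 0.25, 1.0)
    # Frost penalty below -2 C and heat penalty above 30 C (both invented).
    frost = 0.0 if min_temp_c >= -2.0 else min(1.0, (-2.0 - min_temp_c) / 10.0)
    heat = 0.0 if max_temp_c <= 30.0 else min(1.0, (max_temp_c - 30.0) / 15.0)
    return max(0.0, moisture * (1.0 - frost) * (1.0 - heat))

# A drier, hotter "trailing edge" site vs. a cooler, wetter "leading edge" site
trailing = regeneration_score(0.10, 5.0, 38.0)
leading = regeneration_score(0.22, -1.0, 24.0)
```

Under these invented inputs the leading-edge site scores higher, mirroring the paper's qualitative finding of decreased regeneration probability at the trailing edge and increased probability at the leading edge.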

  10. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  11. Development of Competency-Based Articulated Automotive Program. Big Bend Community College and Area High Schools. Final Report.

    Science.gov (United States)

    Buche, Fred; Cox, Charles

    A competency-based automotive mechanics curriculum was developed at Big Bend Community College (Washington) in order to provide the basis for an advanced placement procedure for high school graduates and experienced adults through a competency assessment. In order to create the curriculum, Big Bend Community College automotive mechanics…

  12. Learning big data with Amazon Elastic MapReduce

    CERN Document Server

    Singh, Amarkant

    2014-01-01

    This book is aimed at developers and system administrators who want to learn about Big Data analysis using Amazon Elastic MapReduce. Basic Java programming knowledge is required. You should be comfortable with using command-line tools. Prior knowledge of AWS, API, and CLI tools is not assumed. Also, no exposure to Hadoop and MapReduce is expected.

  13. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant on global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedback of terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  14. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...

  15. Big Data Analysis for Personalized Health Activities: Machine Learning Processing for Automatic Keyword Extraction Approach

    Directory of Open Access Journals (Sweden)

    Jun-Ho Huh

    2018-04-01

    Full Text Available The obese population is increasing rapidly due to changes in lifestyle and diet habits. Obesity can cause various complications and is becoming a social disease. Nonetheless, many obese patients are unaware of the medical treatments that are right for them. Although a variety of online and offline obesity management services have been introduced, they are still not enough to attract the attention of users and are not much help in solving the problem. Obesity healthcare and personalized health activities are the important factors. Since obesity is related to lifestyle habits, eating habits, and interests, I concluded that big data analysis of these factors could identify the problem. Therefore, I collected big data by applying machine learning and crawling methods to the unstructured citizen health data in Korea and to the search data of Naver, a Korean portal company, and Google, for keyword analysis for personalized health activities, and visualized the big data using text mining and word clouds. This study collected and analyzed data concerning interests related to obesity, changes of interest in obesity, and treatment articles. The analysis showed a wide range of seasonal factors across spring, summer, fall, and winter, and visualized and completed the process of extracting keywords appropriate for the treatment of abdominal obesity and lower-body obesity. The keyword big data analysis technique for personalized health activities proposed in this paper is based on an individual's interests, level of interest, and body type, with a user interface (UI) that visualizes the big data, compatible with Android and Apple iOS. Users can see the data on the app screen; many graphs and pictures can be seen via the menu, and the significant data values are visualized through machine learning. Therefore, I expect that big data analysis using various keywords specific to a person will result in measures for personalized
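The keyword-extraction step the abstract describes, counting frequent terms in crawled text before feeding them to a word cloud, can be sketched with a plain frequency counter. The stopword list, sample posts, and helper name below are hypothetical, and a real pipeline (especially for Korean text) would need proper tokenization rather than a letters-only regex.

```python
import re
from collections import Counter

# Hypothetical minimal stopword list for illustration
STOPWORDS = {"the", "and", "for", "with", "are", "was"}

def extract_keywords(texts, top_n=5):
    """Return the top_n (word, count) pairs across the given texts,
    lowercased, letters only, skipping stopwords and very short tokens."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS and len(word) > 2:
                counts[word] += 1
    return counts.most_common(top_n)

# Hypothetical crawled snippets standing in for the search/health data
posts = [
    "Abdominal obesity treatment and diet advice for summer",
    "Lower body obesity: exercise and diet tips",
    "Seasonal diet trends: spring and summer obesity searches",
]
keywords = extract_keywords(posts)  # "obesity" and "diet" each appear 3 times
```

The resulting (word, count) pairs are exactly the kind of weighted terms a word-cloud renderer consumes.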

  16. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  17. Full base isolation for earthquake protection by helical springs and viscodampers

    International Nuclear Information System (INIS)

    Hueffmann, G.K.

    1985-01-01

    GERB, a company specializing in vibration isolation, has developed a new system for the three-dimensional earthquake protection of whole structures, based on helical springs with definite linear flexibility of similar order in all three dimensions, and velocity-proportional viscodampers that are also highly effective in all degrees of freedom. This system has already been used successfully for quite a long time for the installation of big diesel and turbo generators in seismic zones, where earthquake protection has been combined with conventional vibration control concepts. Tests on the shaking table of the Earthquake Research Institute at Skopje/Yugoslavia with a model of a 5-story steel-frame building, comparing a fixed-base and a spring-viscodamper supported installation, have shown high stress relief in the structure at limited amplitudes. This system not only gives more protection for buildings and the people inside; the extra cost equals the savings in the structure. Some unique advantages of this system are: no creep, deterioration or fatigue with time; easy inspection; simple replacement of elements if necessary and also simple modification of the system, for example in case of load changes; static uncoupling from the subfoundation (independence of settlements); and low influence of travelling wave effects. (orig.)
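The design idea behind such isolation, supporting the machine or structure softly enough that its natural frequency falls well below the dominant excitation, can be sketched with the textbook single-degree-of-freedom isolator formulas. The stiffness, mass, and damping numbers below are hypothetical illustrations, not GERB data.

```python
import math

def natural_frequency_hz(k: float, m: float) -> float:
    """Undamped natural frequency (Hz) of mass m (kg) on springs of
    combined stiffness k (N/m): f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def transmissibility(f_exc: float, f_nat: float, zeta: float) -> float:
    """Steady-state transmissibility of a linear spring-damper isolator
    at frequency ratio r = f_exc / f_nat with viscous damping ratio zeta."""
    r = f_exc / f_nat
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# Hypothetical 50 t machine on springs tuned to about 3 Hz
f_n = natural_frequency_hz(1.78e7, 5.0e4)
# A 25 Hz excitation is then transmitted at only a few percent of its amplitude
t = transmissibility(25.0, f_n, 0.15)
```

Below a frequency ratio of sqrt(2) an isolator amplifies rather than isolates, which is why the springs are deliberately tuned so soft relative to the disturbing frequencies.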

  18. 78 FR 37787 - Order Denying Export Privileges

    Science.gov (United States)

    2013-06-24

    ..., 1900 Simler Avenue, Big Spring, TX 79720. On January 13, 2012, in the U.S. District Court, Southern... Big Spring, Federal Corrections Institution, 1900 Simler Avenue, Big Spring, TX 79720, and when acting... person, firm, corporation, or business organization related to Pavon by affiliation, ownership, control...

  19. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  20. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data imposes novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  1. Review of selected 100-N waste sites related to N-Springs remediation projects

    International Nuclear Information System (INIS)

    DeFord, D.H.; Carpenter, R.W.

    1996-01-01

    This document has been prepared in support of the environmental restoration program at the US Department of Energy's Hanford Site near Richland, Washington, by the Bechtel Hanford, Inc. Facility and Waste Site Research Office. It provides historical information that documents and characterizes selected waste sites that are related to the N-Springs remediation projects. The N-Springs are a series of small, inconspicuous groundwater seepage springs located along the Columbia River shoreline near the 100-N Reactor. The spring site is hydrologically down-gradient from several 100-N Area liquid waste sites that are believed to have been the source(s) of the effluents being discharged by the springs. This report documents and characterizes these waste sites, including the 116-N-1 Crib and Trench, 116-N-3 Crib and Trench, unplanned releases, septic tanks, and a backwash pond

  2. Pengembangan Aplikasi Antarmuka Layanan Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Gede Karya

    2017-11-01

    Full Text Available In the 2016 Higher Competitive Grants research (Hibah Bersaing Dikti), we successfully developed models, infrastructure, and application modules for Hadoop-based big data analysis. We also developed a virtual private network (VPN) that allows the infrastructure to be integrated with and accessed from outside the FTIS Computer Laboratory. The infrastructure and analysis modules are now to be offered as services to small and medium enterprises (SMEs) in Indonesia. This research aims to develop a big data analysis service interface application integrated with the Hadoop cluster. The research began by finding appropriate methods and techniques for scheduling jobs, invoking ready-made Java Map-Reduce (MR) application modules, tunneling input/output, and constructing the meta-data of service requests (input) and service outputs. These methods and techniques were then developed into a web-based service application, as well as an executable module that runs in a Java- and J2EE-based programming environment and can access the Hadoop cluster in the FTIS Computer Lab. The resulting application can be accessed by the public through the site http://bigdata.unpar.ac.id. Based on the test results, the application functions well in accordance with the specifications and can be used to perform big data analysis. Keywords: web-based service, big data analysis, Hadoop, J2EE Abstrak (translated from Indonesian) In the 2016 Hibah Bersaing Dikti research, models, infrastructure, and application modules for Hadoop-based big data analysis were successfully developed. A virtual private network (VPN) was also developed that allows the infrastructure to be integrated with and accessed from outside the FTIS Computer Laboratory. The infrastructure and analysis modules are now to be offered as services to small and medium enterprises (SMEs) in Indonesia. 
    This research aims to develop
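    The service-side flow described above (queue a request, dispatch it to a ready-made analysis module, return output metadata) can be sketched as follows. Everything here is hypothetical: a real deployment would submit Hadoop Map-Reduce jobs to the cluster rather than call in-process Python functions.

```python
# Hypothetical sketch of a big data analysis service interface:
# clients submit jobs naming a registered analysis module; the service
# queues them, runs them, and returns output plus meta-data.
from collections import deque

# Stand-ins for the ready-made Map-Reduce modules mentioned above.
MODULES = {
    "wordcount": lambda text: {w: text.split().count(w)
                               for w in set(text.split())},
}

class AnalysisService:
    def __init__(self):
        self.queue = deque()
        self.next_id = 1

    def submit(self, module, payload):
        """Queue an analysis request and return its job id."""
        job_id = self.next_id
        self.next_id += 1
        self.queue.append((job_id, module, payload))
        return job_id

    def run_next(self):
        """Dispatch the oldest queued job and return output meta-data."""
        job_id, module, payload = self.queue.popleft()
        result = MODULES[module](payload)
        return {"job_id": job_id, "module": module, "output": result}

service = AnalysisService()
jid = service.submit("wordcount", "big data big analysis")
meta = service.run_next()
```

    In the actual system the dispatch step would hand the payload to a Hadoop job and the meta-data would describe HDFS output paths.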

  3. Mercury content in Hot Springs

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, R

    1974-01-01

    A method of determination of mercury in hot spring waters by flameless atomic absorption spectrophotometry is described. Further, the mercury content and the chemical behavior of the elementary mercury in hot springs are described. Sulfide and iodide ions interfered with the determination of mercury by the reduction-vapor phase technique. These interferences could, however, be minimized by the addition of potassium permanganate. Waters collected from 55 hot springs were found to contain up to 26.0 ppb mercury. High concentrations of mercury have been found in waters from Shimoburo Springs, Aomori (10.0 ppb), Osorezan Springs, Aomori (1.3-18.8 ppb), Gosyogake Springs, Akita (26.0 ppb), Manza Springs, Gunma (0.30-19.5 ppb) and Kusatu Springs, Gunma (1.70-4.50 ppb). These hot springs were acid waters containing a relatively high quantity of chloride or sulfate.

  4. How could discharge management affect Florida spring fish assemblage structure?

    Science.gov (United States)

    Work, Kirsten; Codner, Keneil; Gibbs, Melissa

    2017-08-01

    Freshwater bodies are increasingly affected by reductions in water quantity and quality and by invasions of exotic species. To protect water quantity and maintain the ecological integrity of many water bodies in central Florida, a program of adopting Minimum Flows and Levels (MFLs) has begun for both lentic and lotic waters. The purpose of this study was to determine whether there were relationships between discharge and stage, water quality, and biological parameters for Volusia Blue Spring, a first magnitude spring (discharge > 380,000 m³ day⁻¹, or 100 mgd) for which an MFL program was adopted in 2006. Over the course of fourteen years, we assessed fish density and diversity weekly, monthly, or seasonally with seine and snorkel counts. We evaluated annual changes in the assemblages for relationships with water quantity and quality. Low discharge and dissolved oxygen combined with high stage and conductivity produced a fish population with a lower density and diversity in 2014 than in previous years. Densities of fish taxonomic/functional groups also were low in 2014 and measures of water quantity were significant predictors of fish assemblage structure. As a result of the strong relationships between variation in discharge and an array of chemical and biological characteristics of the spring, we conclude that maintaining the historical discharge rate is important for preserving the ecological integrity of Volusia Blue Spring. Copyright © 2017 Elsevier Ltd. All rights reserved.
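    As a minimal sketch of how an MFL-style program might flag low-flow conditions, the snippet below checks a made-up daily discharge series against the first-magnitude cutoff quoted above; an actual MFL threshold would be set by the water management district, not this cutoff.

```python
# Flag days when spring discharge falls below a minimum-flow threshold.
# The threshold used here is the first-magnitude cutoff cited in the
# abstract (380,000 m3/day); the daily series is invented.
FIRST_MAGNITUDE_M3_DAY = 380_000

daily_discharge = [392_000, 401_000, 371_500, 388_200, 365_900]  # made up

low_flow_days = [i for i, q in enumerate(daily_discharge)
                 if q < FIRST_MAGNITUDE_M3_DAY]
fraction_low = len(low_flow_days) / len(daily_discharge)
```

    A monitoring program would compute such exceedance statistics over seasons or years and relate them to the biological indicators the study tracks.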

  5. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  6. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.
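    The article's central warning, that curve fitting can look accurate inside the training range yet fail badly outside it, can be demonstrated in a few lines: a straight line fit to samples of y = x² on [0, 1] predicts well there but is off by roughly 90 at x = 10.

```python
# Extrapolation failure of a purely data-driven fit: a linear model
# fit to y = x^2 on [0, 1] looks fine in-range, then fails outside it.
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [i / 10 for i in range(11)]   # training range: 0.0 .. 1.0
ys = [x * x for x in xs]           # the "data"
a, b = fit_line(xs, ys)

inside_err = max(abs((a + b * x) - x * x) for x in xs)  # small in-range
outside_err = abs((a + b * 10.0) - 100.0)               # huge at x = 10
```

    A model that encoded the structural knowledge y = x² would extrapolate exactly; the curve-fit cannot, which is the article's point about modelling the underlying system rather than the data alone.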

  7. Groundwater monitoring strategies at the Weldon Spring site, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    Meyer, K.A. Jr.

    1988-01-01

    This paper presents groundwater monitoring strategies at the Weldon Spring Site in east-central Missouri. The Weldon Spring Site is a former ordnance works and uranium processing facility. In 1987, elevated levels of inorganic anions and nitroaromatics were detected in groundwater beneath the site. Studies are currently underway to characterize the hydrogeologic regime and to define groundwater contamination. The complex hydrogeology at the Weldon Spring Site requires innovative monitoring strategies. Combinations of fracture and conduit flow exist in the limestone bedrock. Perched zones are also present near surface impoundments. Losing streams and springs surround the site. Confronting this complex combination of hydrogeologic conditions is especially challenging

  8. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools such as neural networks, rather than conventional statistical methods, resulting in systems that over time capture insights implicit in data but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments, such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to that of a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  9. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data, and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  10. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  11. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.
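    The claimed advantage of six-factor inventories can be illustrated with synthetic data (not the study's): if an outcome depends on both Conscientiousness and an Honesty factor, a composite that includes Honesty correlates more strongly with the outcome than Conscientiousness alone. All scores below are simulated.

```python
# Synthetic illustration: an outcome driven by both Conscientiousness
# and Honesty is predicted better by a composite including Honesty.
# Scores are simulated, not data from the study above.
import random
import statistics

random.seed(1)
n = 500
consc = [random.gauss(0, 1) for _ in range(n)]
honesty = [random.gauss(0, 1) for _ in range(n)]
outcome = [0.5 * c + 0.5 * h + random.gauss(0, 0.5)
           for c, h in zip(consc, honesty)]

def pearson(xs, ys):
    """Pearson correlation using population moments."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my)
               for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

r_five = pearson(consc, outcome)  # Conscientiousness-only proxy
r_six = pearson([(c + h) / 2 for c, h in zip(consc, honesty)], outcome)
```

    The study's actual comparison used full inventories and regression on real outcomes; this sketch only shows the mechanism by which an added valid factor raises predictive validity.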

  12. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  13. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  14. Visit to valuable water springs. 22. ; Kanazawa spring and springs at the mountain flank of Iwate volcano. Meisui wo tazunete. 22. ; Kanazawa shimizu to Iwate sanroku yusuigun

    Energy Technology Data Exchange (ETDEWEB)

    Itadera, K. (Kanagawa Hot Springs Research Institute, Kanagawa (Japan)); Shimano, Y. (Utsunomiya Bunsei Junior College, Tochigi (Japan))

    1993-06-30

    This paper describes the following matters on the springs at the mountain flank of Iwate volcano in Iwate Prefecture, with the Kanazawa spring as the main subject: The new and old Iwate volcanos have rock-bed flow deposits which resulted from mountain disintegration, distributed over their south, east and north flanks, and most of the spring water wells up in these areas; the south, east and north flanks have about 80 springs, about 30 springs, and about 10 springs, respectively; the number of springs and the water well-up scale show a trend of inverse proportion; the Kanazawa spring is a generic name for the several springs located on the north flank in the Kanazawa area; its main spring forms a spring pond with an area of about 100 m² with a spring water temperature of about 11.5 °C, electric conductivity of 200 µS/cm or higher, and a flow-out rate of 500 L/s or more; the Kanazawa spring is characterized by a total dissolved component amount as large as 170 mg/L or more and abundant SO₄²⁻ and Cl⁻; and the spring presents properties different from those in other springs. 10 refs., 5 figs., 1 tab.

  15. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  16. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  17. Spring integration essentials

    CERN Document Server

    Pandey, Chandan

    2015-01-01

    This book is intended for developers who are either already involved with enterprise integration or planning to venture into the domain. Basic knowledge of Java and Spring is expected. For newer users, this book can be used to understand an integration scenario, what the challenges are, and how Spring Integration can be used to solve it. Prior experience of Spring Integration is not expected as this book will walk you through all the code examples.

  18. Getting started with Spring Framework a hands-on guide to begin developing applications using Spring Framework

    CERN Document Server

    Sharma, J

    2016-01-01

    Getting started with Spring Framework is a hands-on guide to begin developing applications using Spring Framework. The examples (consisting of 74 sample projects) that accompany this book are based on Spring 4.3 and Java 8. You can download the examples described in this book from the following GitHub project: github.com/getting-started-with-spring/3rdEdition This book is meant for Java developers with little or no knowledge of Spring Framework. Getting started with Spring Framework, Third Edition has been updated to reflect changes in Spring 4.3 and also includes new chapters on Java-based configuration and Spring Data (covers Spring Data JPA and Spring Data MongoDB projects). The existing chapters have been revised to include information on Java-based configuration. The book also includes some new information on bean definition profiles, importing application context XML files, lazy autowiring, creating custom qualifier annotations, JSR 349 annotations, spring-messaging module, Java 8's Optional type, and s...

  19. Fish Passage Assessment: Big Canyon Creek Watershed, Technical Report 2004.

    Energy Technology Data Exchange (ETDEWEB)

    Christian, Richard

    2004-02-01

    This report presents the results of the fish passage assessment outlined in the CY2003 Statement of Work (SOW) for the Protect and Restore the Big Canyon Creek Watershed project. As part of the Northwest Power Planning Council's Columbia Basin Fish and Wildlife Program (FWP), this project is one of Bonneville Power Administration's (BPA) many efforts at off-site mitigation for damage to salmon and steelhead runs, their migration, and wildlife habitat caused by the construction and operation of federal hydroelectric dams on the Columbia River and its tributaries. The proposed restoration activities within the Big Canyon Creek watershed follow the watershed restoration approach mandated by the Fisheries and Watershed Program. The Nez Perce Tribal Fisheries/Watershed Program vision focuses on protecting, restoring, and enhancing watersheds and treaty resources within the ceded territory of the Nez Perce Tribe under the Treaty of 1855 with the United States Federal Government. The program uses a holistic approach, which encompasses entire watersheds, ridge top to ridge top, emphasizing all cultural aspects. We strive to maximize historical ecosystem health and productivity for the restoration of anadromous and resident fish populations. The Nez Perce Tribal Fisheries/Watershed Program (NPTFWP) sponsors the Protect and Restore the Big Canyon Creek Watershed project. The NPTFWP has the authority to allocate funds under the provisions set forth in their contract with BPA. In the state of Idaho vast numbers of relatively small obstructions, such as road culverts, block thousands of miles of habitat suitable for a variety of fish species. To date, most agencies and land managers have not had sufficient, quantifiable data to adequately address these barrier sites. The ultimate objective of this comprehensive inventory and assessment was to identify all barrier crossings within the watershed. The barriers were then prioritized according to the

  20. Next Generation Workload Management and Analysis System for Big Data

    Energy Technology Data Exchange (ETDEWEB)

    De, Kaushik [Univ. of Texas, Arlington, TX (United States)

    2017-04-24

    We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC), and the AMS experiment at the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team – for the sake of clarity the work of the next generation team will be referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and many other experiments and science projects worldwide.

  1. The source, discharge, and chemical characteristics of water from Agua Caliente Spring, Palm Springs, California

    Science.gov (United States)

    Contributors: Brandt, Justin; Catchings, Rufus D.; Christensen, Allen H.; Flint, Alan L.; Gandhok, Gini; Goldman, Mark R.; Halford, Keith J.; Langenheim, V.E.; Martin, Peter; Rymer, Michael J.; Schroeder, Roy A.; Smith, Gregory A.; Sneed, Michelle; Martin, Peter

    2011-01-01

    Agua Caliente Spring, in downtown Palm Springs, California, has been used for recreation and medicinal therapy for hundreds of years and currently (2008) is the source of hot water for the Spa Resort owned by the Agua Caliente Band of the Cahuilla Indians. The Agua Caliente Spring is located about 1,500 feet east of the eastern front of the San Jacinto Mountains on the southeast-sloping alluvial plain of the Coachella Valley. The objectives of this study were to (1) define the geologic structure associated with the Agua Caliente Spring; (2) define the source(s), and possibly the age(s), of water discharged by the spring; (3) ascertain the seasonal and longer-term variability of the natural discharge, water temperature, and chemical characteristics of the spring water; (4) evaluate whether water-level declines in the regional aquifer will influence the temperature of the spring discharge; and, (5) estimate the quantity of spring water that leaks out of the water-collector tank at the spring orifice.

  2. Spring plant phenology and false springs in the conterminous US during the 21st century

    Science.gov (United States)

    Allstadt, Andrew J.; Vavrus, Stephen J.; Heglund, Patricia J.; Pidgeon, Anna M.; Thogmartin, Wayne E.; Radeloff, Volker C.

    2015-01-01

    The onset of spring plant growth has shifted earlier in the year over the past several decades due to rising global temperatures. Earlier spring onset may cause phenological mismatches between the availability of plant resources and dependent animals, and potentially lead to more false springs, when subsequent freezing temperatures damage new plant growth. We used the extended spring indices to project changes in spring onset, defined by leaf out and by first bloom, and predicted false springs until 2100 in the conterminous United States (US) using statistically-downscaled climate projections from the Coupled Model Intercomparison Project 5 ensemble. Averaged over our study region, the median shift in spring onset was 23 days earlier in the Representative Concentration Pathway 8.5 scenario with particularly large shifts in the Western US and the Great Plains. Spatial variation in phenology was due to the influence of short-term temperature changes around the time of spring onset versus season long accumulation of warm temperatures. False spring risk increased in the Great Plains and portions of the Midwest, but remained constant or decreased elsewhere. We conclude that global climate change may have complex and spatially variable effects on spring onset and false springs, making local predictions of change difficult.
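    A heavily simplified version of the spring-index logic can be sketched as follows: accumulate warmth until an onset threshold is crossed, then flag a false spring if a damaging freeze follows onset. The thresholds and the toy temperature series are illustrative only, not the extended spring indices used in the study.

```python
# Simplified spring-onset and false-spring detector. The real extended
# spring indices use calibrated, multi-variable models; this sketch
# uses a bare degree-day accumulation with invented thresholds.
ONSET_GDD = 15.0   # accumulated degree-days (base 0 C) to declare onset
FREEZE_C = -2.0    # damaging-freeze threshold after onset

daily_mean_c = [1, 3, 4, 6, 5, 7, -4, 2, 5]  # made-up early-spring days

def spring_onset_and_false_spring(temps):
    """Return (onset day index, whether a false spring occurred)."""
    gdd, onset = 0.0, None
    for day, t in enumerate(temps):
        if onset is None:
            gdd += max(t, 0.0)          # accumulate warmth only
            if gdd >= ONSET_GDD:
                onset = day             # plants leaf out
        elif t <= FREEZE_C:
            return onset, True          # freeze after onset: false spring
    return onset, False

onset_day, false_spring = spring_onset_and_false_spring(daily_mean_c)
```

    Run over downscaled climate projections, this kind of logic yields maps of shifting onset dates and false-spring risk like those the study reports.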

  3. Research on taxi software policy based on big data

    Directory of Open Access Journals (Sweden)

    Feng Daoming

    2017-01-01

    Full Text Available Through big data analysis, a large number of factors affecting taxi availability are statistically analyzed to establish an index set for hailing a taxi. By building a mathematical model of how well taxi resources "match supply and demand" across different places and times, combined with intelligent dispatching, the socially contentious problem of "taxi-hailing difficulty" can be addressed. This article takes Shanghai as an example, with three areas as the objects of study: Central Park, Lu Xun Park, and Century Park. From the big data of the Didi-Kuaidi intelligent travel platform, passenger demand and the number of empty-cruising (kongshi) taxis were extracted. Demand and supply matrices were then established to obtain the degree of supply-demand matching in each region. Then, drawing on big data about the policies of each taxi company, cluster analysis was used to identify the three decisive groups of factors, and principal component analysis was used to compare the advantages and disadvantages of existing companies' programs. Finally, reasonable taxi-software policies are developed based on the above research.
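    The supply-demand matching matrix described above can be sketched with invented numbers: each cell holds the ratio of idle (empty-cruising) taxis to waiting passengers for one district and time slot, and cells below a cutoff mark candidates for dispatch incentives. The figures and the 0.7 cutoff are illustrative, not the paper's data.

```python
# Supply-demand matching matrix: rows are districts, columns are time
# slots, each cell = idle taxis / waiting passengers. All numbers and
# the 0.7 shortage cutoff are invented for illustration.
districts = ["Central Park", "Lu Xun Park", "Century Park"]
demand = [[120, 80], [60, 90], [200, 150]]   # passengers per slot
supply = [[100, 90], [70, 60], [120, 160]]   # idle taxis per slot

match = [[s / d for s, d in zip(srow, drow)]
         for srow, drow in zip(supply, demand)]

# District/slot pairs where supply covers under 70% of demand would be
# candidates for intelligent dispatch or incentive policies.
shortages = [(districts[i], j)
             for i, row in enumerate(match)
             for j, m in enumerate(row) if m < 0.7]
```

    The paper's model adds clustering over policy factors on top of such a matrix; this sketch covers only the matching-degree computation.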

  4. Weldon Spring Site environmental report for calendar year 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    This report describes the environmental monitoring programs at the Weldon Spring Site Remedial Action Project (WSSRAP). The objectives of these programs are to assess actual or potential exposure to contaminant effluents from the project area by providing public use scenarios and dose estimates, to demonstrate compliance with Federal and State permitted levels and regulations, and to summarize trends and/or changes in contaminant concentrations identified through environmental monitoring. Comprehensive monitoring indicated that emissions of radiological compounds in airborne and surface water discharges from the Weldon Spring site consisted primarily of Rn-220 gas, isotopes of thorium and radium, and natural uranium. Airborne Rn-220 emissions were estimated to be 42 Ci (1.6E12 Bq), while emissions from a combination of thorium, radium, and natural uranium isotopes to air and surface water were estimated to be 0.018 Ci (6.7E8 Bq), for a total of 25,000 g (25 kg). There was no measurable impact to any drinking water source.
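    The curie-to-becquerel conversions quoted above can be checked directly, using the exact definition 1 Ci = 3.7 × 10¹⁰ Bq:

```python
# Verify the unit conversions reported in the abstract.
CI_TO_BQ = 3.7e10  # exact: 1 curie = 3.7e10 becquerel

radon_bq = 42 * CI_TO_BQ     # Rn-220 emissions: ~1.6e12 Bq, as reported
other_bq = 0.018 * CI_TO_BQ  # Th/Ra/U emissions: ~6.7e8 Bq, as reported
```

    Both reported becquerel values round correctly from the curie figures (1.554e12 Bq ≈ 1.6e12 Bq; 6.66e8 Bq ≈ 6.7e8 Bq).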

  5. Weldon Spring Site environmental report for calendar year 1997

    International Nuclear Information System (INIS)

    1998-08-01

    This report describes the environmental monitoring programs at the Weldon Spring Site Remedial Action Project (WSSRAP). The objectives of these programs are to assess actual or potential exposure to contaminant effluents from the project area by providing public use scenarios and dose estimates, to demonstrate compliance with Federal and State permitted levels and regulations, and to summarize trends and/or changes in contaminant concentrations identified through environmental monitoring. Comprehensive monitoring indicated that emissions of radiological compounds in airborne and surface water discharges from the Weldon Spring site consisted primarily of Rn-220 gas, isotopes of thorium and radium, and natural uranium. Airborne Rn-220 emissions were estimated to be 42 Ci (1.6E12 Bq), while emissions from a combination of thorium, radium, and natural uranium isotopes to air and surface water were estimated to be 0.018 Ci (6.7E8 Bq), for a total of 25,000 g (25 kg). There was no measurable impact to any drinking water source

  6. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are inadequate. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  7. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. It analyzes existing definitions of the term “big data,” proposes and describes the elements of a generalized formal model of big data, and analyzes the peculiarities of applying the proposed model’s components. It also describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  8. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed, binary, indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols, Linux and UNIX operating system files, R-trees, and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
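The range-query idea behind these formats (returning only the intervals overlapping the current browser view via an index, rather than scanning the whole file) can be illustrated with a simplified in-memory sketch. Real BigWig/BigBed files use an on-disk R-tree index; this toy sorted-list version only approximates the principle.

```python
import bisect

# Simplified illustration of indexed range queries: a sorted list of
# (start, end, value) intervals lets us locate and return only the
# intervals overlapping a requested window instead of scanning
# everything. The intervals below are made up for illustration.

intervals = [(0, 100, 1.0), (100, 250, 2.5), (250, 400, 0.5), (400, 800, 3.0)]
starts = [iv[0] for iv in intervals]

def query(window_start, window_end):
    """Return intervals overlapping the half-open window [start, end)."""
    # Jump to the first interval whose start could overlap the window.
    i = max(bisect.bisect_right(starts, window_start) - 1, 0)
    hits = []
    while i < len(intervals) and intervals[i][0] < window_end:
        if intervals[i][1] > window_start:
            hits.append(intervals[i])
        i += 1
    return hits
```

A browser view at positions 120-300 would fetch only the two overlapping intervals, which is the behavior the abstract describes at file-transfer scale.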

  9. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  10. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  11. Big Data access and infrastructure for modern biology: case studies in data repository utility.

    Science.gov (United States)

    Boles, Nathan C; Stone, Tyler; Bergeron, Charles; Kiehl, Thomas R

    2017-01-01

    Big Data is no longer solely the purview of big organizations with big resources. Today's routine tools and experimental methods can generate large slices of data. For example, high-throughput sequencing can quickly interrogate biological systems for the expression levels of thousands of different RNAs, examine epigenetic marks throughout the genome, and detect differences in the genomes of individuals. Multichannel electrophysiology platforms produce gigabytes of data in just a few minutes of recording. Imaging systems generate videos capturing biological behaviors over the course of days. Thus, any researcher now has access to a veritable wealth of data. However, the ability of any given researcher to utilize that data is limited by her/his own resources and skills for downloading, storing, and analyzing the data. In this paper, we examine the necessary resources required to engage Big Data, survey the state of modern data analysis pipelines, present a few data repository case studies, and touch on current institutions and programs supporting the work that relies on Big Data. © 2016 New York Academy of Sciences.

  12. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  13. Ground water monitoring strategies at the Weldon Spring Site, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    Meyer, K.A. Jr.

    1988-01-01

    This paper presents ground water monitoring strategies at the Weldon Spring Site in east-central Missouri. The Weldon Spring Site is a former ordnance works and uranium processing facility. In 1987, elevated levels of inorganic anions and nitroaromatics were detected in ground water beneath the site. Studies are currently underway to characterize the hydrogeologic regime and to define ground water contamination. The complex hydrogeology at the Weldon Spring Site requires innovative monitoring strategies. Combinations of fracture and conduit flow exist in the limestone bedrock. Perched zones are also present near surface impoundments. Losing streams and springs surround the site. Solving this complex combination of hydrogeologic conditions is especially challenging

  14. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  15. Relationship Between Big Five Personality Traits, Emotional Intelligence and Self-esteem Among College Students

    OpenAIRE

    Fauzia Nazir, Anam Azam, Muhammad Rafiq, Sobia Nazir, Sophia Nazir, Shazia Tasleem

    2015-01-01

    The current research study examined the “Relationship between Big Five Personality Traits, Emotional Intelligence and Self-esteem among College Students.” The work is based on a cross-sectional survey research design. A convenience sample of 170 female students was used, studying in the 3rd and 4th year of the degree program at Government College Kotla Arab Ali Khan, Gujrat, Pakistan. The study variables were measured using the Big Five Inventory Scale by Goldberg (1993), the Emotional Intell...

  16. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to scheduling both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  17. Work plan for the remedial investigation/feasibility study-environmental impact statement for the Weldon Spring site, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    Peterson, J.M.; MacDonell, M.M.; Haroun, L.A.; Nowadly, F.K.; Knight, W.C.; Vajda, G.F.

    1988-08-01

    The Weldon Spring Site Remedial Action Project is being conducted as a Major System Acquisition under the Surplus Facilities Management Program (SFMP) of the US Department of Energy (DOE). The major goals of the SFMP are to eliminate potential hazards to the public and the environment that are associated with contamination at SFMP sites and to make surplus real property available for other uses to the extent possible. The Weldon Spring site is located near Weldon Spring, Missouri, about 48 km (30 mi) west of St. Louis. It is surrounded by large tracts of land owned by the federal government and the state of Missouri. The site consists of four raffinate pits, an inactive chemical plant, and a contaminated quarry. The raffinate pits and chemical plant are on adjoining land about 3.2 km (2 mi) southwest of the junction of Missouri (State) Route 94 and US Route 40/61, with access from Route 94. The quarry is located in a comparatively remote area about 6.4 km (4 mi) south-southwest of the raffinate pits and chemical plant area; the quarry can also be accessed from Route 94. These areas are fenced and closed to the public. From 1941 to 1944, the US Department of the Army operated the Weldon Spring Ordnance Works, constructed on the land that is now the Weldon Spring site, for production of trinitrotoluene (TNT) and dinitrotoluene (DNT). The Army used the quarry for disposal of rubble contaminated with TNT. In the mid 1950s, 83 ha (205 acres) of the ordnance works property was transferred to the US Atomic Energy Commission (AEC); this is now the raffinate pits and chemical plant area. An additional 6 ha (15 acres) was later transferred to the AEC for expansion of waste storage capacity. 23 refs., 37 figs., 21 tabs

  18. Optimum Design of a Coil Spring for Improving the Performance of a Spring-Operated Mechanism

    International Nuclear Information System (INIS)

    Lee, Dae Woo; Sohn, Jeong Hyun; Yoo, Wan Suk

    2016-01-01

    In this study, a release test bed is designed to evaluate the dynamic behaviors of a coil spring. From the release tests, the dynamic behaviors of a coil spring are analyzed. A lumped parameter spring model was established for numerical simulation of a spring. The design variables of a coil spring are optimized by using the design of experiments approach. Two-level factorial designs are used for the design optimization, and the primary effects of the design variables are analyzed. Based on the results of the interaction analysis and design sensitivity analysis, the level of the design variables is rearranged. Finally, the mixed-level factorial design is used for the optimum design process. According to the optimum design of the opening spring, the dynamic performance of the spring-operated mechanism increases by 2.90
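The two-level factorial design approach mentioned in the abstract can be sketched as follows. The factor names and the stand-in response function below are hypothetical and do not come from the paper; a real study would measure the mechanism instead.

```python
from itertools import product

# Hypothetical two-level (2^k) full factorial design: enumerate all
# low (-1) / high (+1) combinations of k factors, evaluate a response,
# and estimate each factor's main effect as
# mean(response at high) - mean(response at low).

factors = ["wire_diameter", "coil_diameter", "active_coils"]

def response(x):
    # Stand-in linear response with one interaction term (illustrative).
    d, D, n = x
    return 10 + 3 * d - 2 * D + 1 * n + 0.5 * d * D

design = list(product([-1, 1], repeat=len(factors)))  # 2^3 = 8 runs
results = [response(run) for run in design]

def main_effect(j):
    high = [y for run, y in zip(design, results) if run[j] == 1]
    low = [y for run, y in zip(design, results) if run[j] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {f: main_effect(j) for j, f in enumerate(factors)}
```

In this balanced design each main effect recovers twice the corresponding linear coefficient, while the interaction term averages out, which is why two-level factorials are a cheap way to screen for the dominant design variables.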

  19. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  20. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by extracting value from huge data sets as quickly and optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner using Big data analytics. At this point, the main challenge for businesses that use Big data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big data concept, its various classification criteria, and its architecture, as well as its impact on processes worldwide.

  1. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is no longer simply "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem defining to address science challenges.

  2. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone "stars", or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  3. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  4. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinguished and require new computational and statistical paradigm. This article gives overviews on the salient features of Big Data and how these features impact on paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on the Big Data analysis and computation. In particular, we emphasize on the viability of the sparsest solution in high-confidence set and point out that exogeneous assumptions in most statistical methods for Big Data can not be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
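The "spurious correlation" challenge described above can be demonstrated with a small, purely illustrative simulation: among many mutually independent random variables observed on only a few samples, some pair will appear strongly correlated by chance, and the effect worsens as dimensionality grows.

```python
import random
import statistics

# Illustrative simulation: independent Gaussian variables on few
# samples. The maximum pairwise |correlation| tends to grow with the
# number of variables even though no true relationship exists.

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def max_abs_correlation(n_vars, n_samples, seed=0):
    rng = random.Random(seed)
    data = [[rng.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_vars)]
    return max(
        abs(pearson(data[i], data[j]))
        for i in range(n_vars)
        for j in range(i + 1, n_vars)
    )

few = max_abs_correlation(n_vars=5, n_samples=20)
many = max_abs_correlation(n_vars=100, n_samples=20)
```

With the same seed, the 100-variable run includes the 5-variable run's pairs plus thousands more, so its maximum chance correlation can only be as large or larger, which is exactly the noise-accumulation hazard the abstract warns about.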

  5. Executive summary for the Weldon Spring Site Environmental Report for calendar year 1991

    International Nuclear Information System (INIS)

    1992-07-01

    This report is the sixth in a series of annual reports produced by the Weldon Spring Site Remedial Action Project (WSSRAP) since 1986. It reports the results of a comprehensive, year-round program to monitor the impact of the Weldon Spring site (WSS) on the surrounding region's groundwater and surface waters; air quality; vegetation and wildlife; and, through these multiple pathways, the potential for exposure to receptor human populations. Information is also presented on the environmental monitoring quality assurance program, waste management activities, audits and reviews, and special environmental studies. Data are included for both the Weldon Spring Chemical Plant and raffinate pits and the Weldon Spring Quarry. Based on the consistent exercise of quality assurance in both standard operating procedures and quality control sample collection, the WSSRAP asserts that the data presented in the WSS Environmental Report for Calendar Year 1991 accurately reflect the environmental conditions monitored at the WSS. This report presents narratives, summaries, and conclusions on environmental monitoring at the WSS and surrounding vicinity properties for the entire 1991 monitoring year. During 1991 the WSSRAP also published quarterly data reports, wherein all routine monitoring data were tabulated and presented quarterly to allow the public to review the data in a timely fashion prior to issuance of the annual report

  6. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  7. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    , modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing......, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  8. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  9. Measurements of 427 Double Stars With Speckle Interferometry: The Winter/Spring 2017 Observing Program at Brilliant Sky Observatory, Part 1

    Science.gov (United States)

    Harshaw, Richard

    2018-04-01

    In the winter and spring of 2017, an aggressive observing program of measuring close double stars with speckle interferometry and CCD imaging was undertaken at Brilliant Sky Observatory, my observing site in Cave Creek, Arizona. A total of 596 stars were observed, 8 of which were rejected for various reasons, leaving 588 pairs. Of these, 427 were observed and measured with speckle interferometry, while the remaining 161 were measured with a CCD. This paper reports the results of the observations of the 427 speckle cases. A separate paper in this issue will report the CCD measurements of the 161 other pairs.

  10. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  11. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  12. Thermography hogging the limelight at Big Sky

    Energy Technology Data Exchange (ETDEWEB)

    Plastow, C. [Fluke Electronics Canada, Mississauga, ON (Canada)

    2010-02-15

    The high levels of humidity and ammonia found at hog farms can lead to premature corrosion of electrical systems and create potential hazards, such as electrical fires. Big Sky Farms in Saskatchewan has performed on-site inspections at its 44 farms and 16 feed mills using handheld thermography technology from Fluke Electronics. Ti thermal imaging units save time and simplify inspections. The units can be used for everything from checking the bearings at the feed mills to electrical circuits and relays. The Ti25 is affordable and has the right features for a preventative maintenance program. Operators of Big Sky Farms use the Ti25 to inspect all circuit breakers of 600 volts or lower as well as transformers, where corrosion often causes connections to break off. The units are used to examine bearings and to do scanning and thermal imaging on motors. To date, the Ti25 has detected and highlighted 5 or 6 problems on transformers alone that could have been major issues. At one site, the Ti25 indicated that all 30 circuit breakers had loose connections and were overheating. Big Sky Farms fixed the problem right away before a disaster happened. In addition to reducing inspection times, the Ti25 can record all measurements and keep the readings for downloading. 2 figs.

  13. Hot springs in Hokuriku District

    Energy Technology Data Exchange (ETDEWEB)

    Sato, K. (Hot Springs Research Center, Japan)

    1971-01-01

    In the Hokuriku district, including Toyama, Ishikawa, and Fukui Prefectures, hot springs of more than 25°C were investigated. In Toyama Prefecture, there are 14 hot springs located in an area from the Kurobe River to the Tateyama volcano and in the mountainous area in the southwest. In Ishikawa Prefecture there are 16 hot springs scattered in Hakusan and its vicinity, the Kaga mountains, and the Noto peninsula. In northern Fukui Prefecture there are seven hot springs. The hot springs in Shirakawa in Gifu Prefecture are characterized as acid springs producing exhalations and H₂S; these are attributed to the Quaternary volcanoes. The hot springs of Wakura, Katayamazu, and Awara in Ishikawa Prefecture are characterized by a high Cl content, which is related to Tertiary andesite. The hot springs of Daishoji, Yamanaka, Yamashiro, Kuritsu, Tatsunokuchi, Yuwaku, and Yunotani are characterized by a low HCO₃ content. The Ca and SO₄ content decreases from east to west, and the Na and Cl content increases from west to east. These fluctuations are related to the Tertiary tuff and rhyolite. The hot springs of Kuronagi, Kinshu, and Babadani, located along the Kurobe River, are characterized by low levels of dissolved components and high CO₂ and HCO₃ content. These trends are related to late Paleozoic granite. Hot spring resources are considered to be connected to geothermal resources. Ten tables, graphs, and maps are provided.

  14. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  15. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data involves analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences, and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  16. Interventions for treating osteoarthritis of the big toe joint.

    Science.gov (United States)

    Zammit, Gerard V; Menz, Hylton B; Munteanu, Shannon E; Landorf, Karl B; Gilheany, Mark F

    2010-09-08

    Osteoarthritis affecting the big toe joint of the foot (hallux limitus or rigidus) is a common and painful condition. Although several treatments have been proposed, few have been adequately evaluated. To identify controlled trials evaluating interventions for osteoarthritis of the big toe joint and to determine the optimum intervention(s). Literature searches were conducted across the following electronic databases: CENTRAL; MEDLINE; EMBASE; CINAHL; and PEDro (to 14th January 2010). No language restrictions were applied. Randomised controlled trials, quasi-randomised trials, or controlled clinical trials that assessed treatment outcomes for osteoarthritis of the big toe joint. Participants of any age or gender with osteoarthritis of the big toe joint (defined either radiographically or clinically) were included. Two authors examined the list of titles and abstracts identified by the literature searches. One content area expert and one methodologist independently applied the pre-determined inclusion and exclusion criteria to the full text of identified trials. To minimise error and reduce potential bias, data were extracted independently by two content experts. Only one trial satisfactorily fulfilled the inclusion criteria and was included in this review. This trial evaluated the effectiveness of two physical therapy programs in 20 individuals with osteoarthritis of the big toe joint. Assessment outcomes included pain levels, big toe joint range of motion and plantar flexion strength of the hallux. Mean differences at four weeks' follow-up were 3.80 points (95% CI 2.74 to 4.86) for self-reported pain, 28.30 degrees (95% CI 21.37 to 35.23) for big toe joint range of motion, and 2.80 kg (95% CI 2.13 to 3.47) for muscle strength. Although differences in outcomes between treatment and control groups were reported, the risk of bias was high. The trial failed to employ appropriate randomisation or adequate allocation concealment, used a relatively small sample and

  17. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived; notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  19. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  20. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  1. Spheres of discharge of springs

    Science.gov (United States)

    Springer, Abraham E.; Stevens, Lawrence E.

    2009-02-01

    Although springs have been recognized as important, rare, and globally threatened ecosystems, there is as yet no consistent and comprehensive classification system or common lexicon for springs. In this paper, 12 spheres of discharge of springs are defined, sketched, displayed with photographs, and described relative to their hydrogeology of occurrence, and the microhabitats and ecosystems they support. A few of the spheres of discharge have been previously recognized and used by hydrogeologists for over 80 years, but others have only recently been defined geomorphologically. A comparison of these spheres of discharge to classification systems for wetlands, groundwater dependent ecosystems, karst hydrogeology, running waters, and other systems is provided. With a common lexicon for springs, hydrogeologists can provide more consistent guidance for springs ecosystem conservation, management, and restoration. As additional comprehensive inventories of the physical, biological, and cultural characteristics are conducted and analyzed, it will eventually be possible to associate spheres of discharge with discrete vegetation and aquatic invertebrate assemblages, and better understand the habitat requirements of rare or unique springs species. Given the elevated productivity and biodiversity of springs, and their highly threatened status, identification of geomorphic similarities among spring types is essential for conservation of these important ecosystems.

  2. Associations between empathy and big five personality traits among Chinese undergraduate medical students.

    Science.gov (United States)

    Song, Yang; Shi, Meng

    2017-01-01

    Empathy promotes positive physician-patient communication and is associated with improved patient satisfaction, treatment adherence and clinical outcomes. It has been suggested that personality traits should be taken into consideration in programs designed to enhance empathy in medical education due to the association found between personality and empathy among medical students. However, the associations between empathy and big five personality traits in medical education are still underrepresented in the existing literature and relevant studies have not been conducted among medical students in China, where tensions in the physician-patient relationship have been reported as outstanding problems in the context of China's current medical reform. Thus, the main objective of this study was to examine the associations between empathy and big five personality traits among Chinese medical students. A cross-sectional study was conducted in a medical university in Northeast China in June 2016. Self-reported questionnaires including the Interpersonal Reactivity Index (IRI) and Big Five Inventory (BFI) and demographic characteristics were distributed. A total of 530 clinical medical students became our final subjects. Hierarchical regression analysis was performed to explore the effects of big five personality traits on empathy. Results of this study showed that big five personality traits accounted for 19.4%, 18.1%, 30.2% of the variance in three dimensions of empathy, namely, perspective taking, empathic concern and personal distress, respectively. Specifically, agreeableness had a strong positive association with empathic concern (β = 0.477), while personal distress was associated positively with one trait (β = 0.526) and negatively with another (β = -0.160). Overall, big five personality traits were important predictors of self-reported measures of both cognitive and affective empathy among Chinese medical students. Therefore, individualized intervention strategies based on personality traits could be integrated into programs to
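The hierarchical regression strategy used in the study, entering the Big Five as a block and reading off the increment in explained variance, can be sketched on synthetic data. All variables and effect sizes below are invented for illustration; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 530  # matches the study's sample size; the data themselves are synthetic

demo = rng.normal(size=(n, 2))   # block 1: demographic covariates (hypothetical)
big5 = rng.normal(size=(n, 5))   # block 2: Big Five scores (hypothetical)
# Outcome: an empathy dimension partly driven by one trait and one covariate.
empathy = 0.5 * big5[:, 0] + 0.2 * demo[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R² of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(demo, empathy)                           # demographics only
r2_step2 = r_squared(np.column_stack([demo, big5]), empathy)  # + Big Five block
print(f"ΔR² attributable to the Big Five: {r2_step2 - r2_step1:.3f}")
```

The increment r2_step2 - r2_step1 is the quantity the abstract reports (19.4%, 18.1%, 30.2% for the three empathy dimensions).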

  3. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges, which ultimately seems to be of greater importance than the sheer data volume.

  4. Coil spring venting arrangement

    International Nuclear Information System (INIS)

    McCugh, R.M.

    1975-01-01

    A simple venting device for trapped gas pockets in hydraulic systems is inserted through a small access passage, operated remotely, and removed completely. The device comprises a small-diameter, closely wound coil spring which is pushed through a guide temporarily inserted in the access passage. The guide has a central passageway which directs the coil spring radially upward into the pocket, so that, with the guide properly positioned for depth and properly oriented, the coil spring can be pushed up into the top of the pocket to vent it. By positioning a seal around the free end of the guide, the spring and guide are removed and the passage is sealed

  5. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe relative evidence is examined including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general theory of relativity aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open and alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  6. Comparative spring mechanics in mantis shrimp.

    Science.gov (United States)

    Patek, S N; Rosario, M V; Taylor, J R A

    2013-04-01

    Elastic mechanisms are fundamental to fast and efficient movements. Mantis shrimp power their fast raptorial appendages using a conserved network of exoskeletal springs, linkages and latches. Their appendages are fantastically diverse, ranging from spears to hammers. We measured the spring mechanics of 12 mantis shrimp species from five different families exhibiting hammer-shaped, spear-shaped and undifferentiated appendages. Across species, spring force and work increase with size of the appendage and spring constant is not correlated with size. Species that hammer their prey exhibit significantly greater spring resilience compared with species that impale evasive prey ('spearers'); mixed statistical results show that species that hammer prey also produce greater work relative to size during spring loading compared with spearers. Disabling part of the spring mechanism, the 'saddle', significantly decreases spring force and work in three smasher species; cross-species analyses show a greater effect of cutting the saddle on the spring force and spring constant in species without hammers compared with species with hammers. Overall, the study shows a more potent spring mechanism in the faster and more powerful hammering species compared with spearing species while also highlighting the challenges of reconciling within-species and cross-species mechanical analyses when different processes may be acting at these two different levels of analysis. The observed mechanical variation in spring mechanics provides insights into the evolutionary history, morphological components and mechanical behavior, which were not discernible in prior single-species studies. The results also suggest that, even with a conserved spring mechanism, spring behavior, potency and component structures can be varied within a clade with implications for the behavioral functions of power-amplified devices.
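Under an idealized linear (Hookean) spring assumption, the quantities compared across species reduce to F = kx for spring force, W = ½kx² for stored work, and resilience as the fraction of loading work recovered on unloading. The exoskeletal springs in the paper need not be linear, and the numbers below are illustrative, not measurements:

```python
# Hookean sketch of the spring quantities compared across mantis shrimp species.
# All parameter values are invented for illustration.

def spring_force(k, x):
    """Force of a linear spring: F = k * x."""
    return k * x

def spring_work(k, x):
    """Elastic energy stored in a linear spring: W = 0.5 * k * x^2."""
    return 0.5 * k * x * x

def resilience(work_returned, work_loaded):
    """Fraction of the loading work recovered on unloading (0..1)."""
    return work_returned / work_loaded

k = 2.0e3   # spring constant, N/m (hypothetical)
x = 0.004   # compression, m (hypothetical)
print(spring_force(k, x))        # 8.0 N
print(spring_work(k, x))         # 0.016 J
print(resilience(0.014, 0.016))  # 0.875
```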

  7. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  8. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics (value, volume, velocity, variety, veracity and variability) are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Finally, with regard to these characteristics, some directions on suitable and promising open-source distributed data-processing software platforms are given.

  9. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  10. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetrical assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetrical assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds, therefore we compare and contrast the two geometries throughout.

  11. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research, let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  12. How to Generate Economic and Sustainability Reports from Big Data? Qualifications of Process Industry

    Directory of Open Access Journals (Sweden)

    Esa Hämäläinen

    2017-11-01

    Full Text Available Big Data may introduce new opportunities, and for this reason it has become a mantra among most industries. This paper focuses on examining how to develop cost and sustainable reporting by utilizing Big Data that covers economic values, production volumes, and emission information. We assume strongly that this use supports cleaner production, while at the same time offers more information for revenue and profitability development. We argue that Big Data brings company-wide business benefits if data queries and interfaces are built to be interactive, intuitive, and user-friendly. The amount of information related to operations, costs, emissions, and the supply chain would increase enormously if Big Data was used in various manufacturing industries. It is essential to expose the relevant correlations between different attributes and data fields. Proper algorithm design and programming are key to making the most of Big Data. This paper introduces ideas on how to refine raw data into valuable information, which can serve many types of end users, decision makers, and even external auditors. Concrete examples are given through an industrial paper mill case, which covers environmental aspects, cost-efficiency management, and process design.

  13. Grande Ronde Basin Spring Chinook Salmon Captive Broodstock Program, 1995-2002 Summary Report.

    Energy Technology Data Exchange (ETDEWEB)

    Hoffnagle, Timothy; Carmichael, Richard; Noll, William

    2003-12-01

    survey areas in 1995 from as high as 1,205 redds in the same area in 1969 (Table 1). All streams reached low points (0-6 redds in the index areas) in the 1990's, except those in which no redds were found for several years and surveys were discontinued, such as Spring, Sheep and Indian creeks which had a total of 109 redds in 1969. The Minam and Wenaha rivers are tributaries of the Grande Ronde River located primarily in wilderness areas. Chinook salmon numbers in these two streams (based on redd counts) also decreased dramatically beginning in the early 1970's (Table 1). Since then there have been a few years of increasing numbers of redds but counts have generally been 25-40% of the number seen in the 1960's. No hatchery fish have been released into either of these streams and we monitor them during spawning ground surveys for the presence of hatchery strays. These populations will be used as a type of control for evaluating our supplementation efforts in Catherine Creek, upper Grande Ronde River and Lostine River. In this way, we can attempt to filter out the effects of downstream variables, over which we have no control, when we interpret the results of the captive broodstock program as the F1 and F2 generations spawn and complete their life cycles in the wild. The Grande Ronde Basin Captive Broodstock Program was initiated because these chinook salmon populations had reached critical levels where dramatic and unprecedented efforts were needed to prevent extinction and preserve any future options for use of endemic fish for artificial propagation programs for recovery and mitigation. This program was designed to quickly increase numbers of returning adults, while maintaining the genetic integrity of each endemic population.

  14. Mockito for Spring

    CERN Document Server

    Acharya, Sujoy

    2015-01-01

    If you are an application developer with some experience in software testing and want to learn more about testing frameworks, then this technology and book is for you. Mockito for Spring will be perfect as your next step towards becoming a competent software tester with Spring and Mockito.

  15. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.
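As a sketch of the propensity-score idea mentioned above: the toy simulation below generates confounded observational data (sicker patients are treated more often and fare worse), then compares a naive group difference with an inverse-probability-weighted estimate. Using the true rather than an estimated propensity score, and the simple linear outcome model, are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Confounder x (e.g., disease severity); treatment is assigned more often when x is high.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-1.5 * x))        # true propensity score
treated = rng.random(n) < p_treat
# Outcome: the true treatment effect is +2.0, but high x worsens the outcome.
outcome = 2.0 * treated - 3.0 * x + rng.normal(size=n)

# Naive comparison is confounded: treated patients have higher x on average.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Inverse-probability weighting recovers the effect; in practice the
# propensity score would be estimated, e.g. by logistic regression.
w1 = treated / p_treat
w0 = (~treated) / (1 - p_treat)
ipw = (w1 * outcome).sum() / w1.sum() - (w0 * outcome).sum() / w0.sum()
print(f"naive: {naive:.2f}, IPW: {ipw:.2f}  (truth: 2.00)")
```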

  16. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  17. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.
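The minimax criterion described here turns on the multiple correlation between a candidate cluster and the Big Five factors. A sketch on synthetic scores (the cluster labels, coefficients, and sample size are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

big5 = rng.normal(size=(n, 5))   # synthetic Big Five factor scores
indep = rng.normal(size=n)       # a "Religiousness"-like, Big Five-independent cluster
dep = 0.6 * big5[:, 1] + 0.8 * rng.normal(size=n)  # a cluster partly captured by the Big Five

def multiple_correlation(X, y):
    """Correlation between y and its best linear prediction from X's columns."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.corrcoef(X1 @ beta, y)[0, 1]

r_indep = multiple_correlation(big5, indep)  # near 0: candidate "beyond" the Big Five
r_dep = multiple_correlation(big5, dep)      # clearly above 0: already captured
print(f"R(independent cluster) = {r_indep:.2f}, R(dependent cluster) = {r_dep:.2f}")
```

Clusters that keep this multiple correlation low while remaining reliable are the ones the authors flag as potentially beyond the Big Five.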

  18. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    Full Text Available The term Big Data refers to the extensive volume of data that cannot be managed by traditional data-handling methods or techniques. Big Data plays an indispensable role in many fields, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the process of examining big data to reveal hidden patterns, unseen relationships and other useful information that can be used to make better decisions. Interest in big data continues to expand because of its rapid growth and its wide range of applications. Apache Hadoop, an open-source technology written in Java that runs on Linux, was used. The primary contribution of this research is to present an effective and free solution for big data applications in a distributed environment, to describe its advantages, and to demonstrate its ease of use. There thus emerges a need for an analytical review of new developments in big data technology. Healthcare is one of the foremost concerns of the world. Big data in healthcare refers to electronic health data sets related to patient healthcare and well-being. Data in the healthcare sector is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.
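The map → shuffle → reduce pattern that Hadoop implements can be sketched in a single Python process; this shows only the control flow of the programming model, not the distributed technology itself:

```python
# Single-process sketch of the MapReduce pattern (word count). Real Hadoop
# runs each stage across many machines; here all three stages run locally.
from collections import defaultdict

def map_phase(lines):
    """Map: emit (key, value) pairs, here (word, 1)."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big analytics", "data mining"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'analytics': 1, 'mining': 1}
```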

  19. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  20. Physico-chemical characteristics of travertine springs and lakes, affecting the lives of lamellibranches (Ostracoda)

    International Nuclear Information System (INIS)

    Sykorova, M.

    2014-01-01

    Ostracods are very frequent fossils in travertine, but we know little about their biodiversity, spatial distribution and ecological preferences in extant travertine springs and lakes. To improve their application in Quaternary paleoecologic and paleoclimatic studies, we studied travertine springs and lakes of different physical characteristics (cold, up to 25 °C) and chemical composition (carbonate, sulfate, Fe) in Slovakia. Twenty-four ostracod species were observed in the travertine springs, lakes and their surroundings. Our findings provide interesting information on ostracod biodiversity in these environments. Connections between variables were evaluated statistically using principal component analysis (PCA). Travertine habitats exhibit high variability in environmental parameters. (author)
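The PCA step reported above can be sketched with a small synthetic site-by-variable table; the variables and values below are invented for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic table: 24 sampling sites × 4 environmental variables
# (imagine temperature, carbonate, sulfate, Fe); all values are invented.
X = rng.normal(size=(24, 4))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=24)  # two correlated variables

def pca(X, n_components=2):
    """PCA via SVD of the standardized data matrix."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # site coordinates
    explained = (s ** 2) / (s ** 2).sum()             # variance fractions
    return scores, Vt[:n_components], explained[:n_components]

scores, loadings, explained = pca(X)
print(f"variance explained by PC1, PC2: {explained[0]:.2f}, {explained[1]:.2f}")
```

The first component absorbs the correlated pair of variables, which is exactly the kind of connection between variables such an analysis exposes.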

  1. Big data technologies in e-learning

    Directory of Open Access Journals (Sweden)

    Gyulara A. Mamedova

    2017-01-01

    Full Text Available Recently, e-learning around the world has been developing rapidly, and the main problem is to provide the students with quality educational information on time. This task cannot be solved without analyzing the large flow of information entering the information environment of e-learning from participants in the educational process – students, lecturers, administration, etc. In this environment, there are a large number of different types of data, both structured and unstructured. Data processing is difficult to implement by traditional statistical methods. The aim of the study is to show that for the development and implementation of successful e-learning systems, it is necessary to use new technologies that would allow storing and processing large data streams. In order to store the big data, a large amount of disk space is required. It is shown that to solve this problem it is efficient to use clustered NAS (Network Area Storage) technology, which allows storing information of educational institutions on NAS servers and sharing them via the Internet. To process and personalize the Big Data in the environment of e-learning, it is proposed to use the technologies MapReduce, Hadoop, NoSQL and others. The article gives examples of the use of these technologies in the cloud environment. These technologies in e-learning allow achieving flexibility, scalability, availability, quality of service, security, confidentiality and ease of educational information use. Another important problem of e-learning is the identification of new, sometimes hidden, interconnections in Big Data, new knowledge (data mining), which can be used to improve the educational process and improve its management. To classify electronic educational resources, identify patterns of students with similar psychological, behavioral and intellectual characteristics, and develop individualized educational programs, it is proposed to use methods of analysis of Big Data. The article shows that at

  2. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  3. Characterization of the hydrogeology of the sacred Gihon Spring, Jerusalem: a deteriorating urban karst spring

    Science.gov (United States)

    Amiel, Ronit Benami; Grodek, Tamir; Frumkin, Amos

    2010-09-01

    The Gihon Spring, Jerusalem, is important for the major monotheistic religions. Its hydrogeology and hydrochemistry is studied here in order to understand urbanization effects on karst groundwater resources, and promote better water management. High-resolution monitoring of the spring discharge, temperature and electrical conductivity, was performed, together with chemical and bacterial analysis. All these demonstrate a rapid response of the spring to rainfall events and human impact. A complex karst system is inferred, including conduit flow, fissure flow and diffuse flow. Electrical conductivity, Na+ and K+ values (2.0 mS/cm, 130 and 50 mg/l respectively) are very high compared to other nearby springs located at the town margins (0.6 mS/cm, 15 and <1 mg/l respectively), indicating considerable urban pollution in the Gihon area. The previously cited pulsating nature of the spring was not detected during the present high-resolution monitoring. This phenomenon may have ceased due to additional water sources from urban leakage and irrigation feeding the spring. The urbanization of the recharge catchment thus affects the spring water dramatically, both chemically and hydrologically. Appropriate measures should therefore be undertaken to protect the Gihon Spring and other karst aquifers threatened by rapid urbanization.

  4. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research, and advance education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, and critically explores the perils of applying a data-driven approach to education. Despite the claimed value of the...

  5. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurement data – gathered by media industries for profit-making purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and are as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  6. OSCIL: one-dimensional spring-mass system simulator for seismic analysis of high temperature gas cooled reactor core

    International Nuclear Information System (INIS)

    Lasker, L.

    1976-01-01

    OSCIL is a program to predict the effects of seismic input on a HTGR core. The present model is a one-dimensional array of blocks with appropriate spring constants, inter-elemental and ground damping, and clearances. It can be used more generally for systems of moving masses separated by nonlinear springs and dampers

  7. OSCIL: one-dimensional spring-mass system simulator for seismic analysis of high temperature gas cooled reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Lasker, L. (ed.)

    1976-01-01

    OSCIL is a program to predict the effects of seismic input on a HTGR core. The present model is a one-dimensional array of blocks with appropriate spring constants, inter-elemental and ground damping, and clearances. It can be used more generally for systems of moving masses separated by nonlinear springs and dampers.
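    The record describes OSCIL only at the level of its physical model: a one-dimensional row of blocks coupled through clearances, spring constants, and inter-elemental and ground damping. A minimal sketch of such a clearance (piecewise-linear spring) chain under base excitation is given below; the integration scheme, parameter values, and function names are illustrative assumptions, not taken from OSCIL itself.

```python
import numpy as np

def simulate_chain(n=5, m=1.0, k=1.0e5, c=10.0, c_g=1.0, gap=1e-3,
                   dt=1e-4, t_end=1.0, amp=1.0, freq=2.0):
    """Row of n blocks with clearance `gap` between neighbours; the
    inter-elemental springs engage only once a clearance closes, which
    makes the system piecewise-linear (hence nonlinear overall)."""
    x = np.zeros(n)   # block displacements relative to the ground
    v = np.zeros(n)
    for i in range(int(t_end / dt)):
        t = i * dt
        # sinusoidal ground motion enters as an inertial load on every block
        f = -m * amp * np.sin(2.0 * np.pi * freq * t) * np.ones(n)
        f -= c_g * v                      # damping against the ground
        rel = x[1:] - x[:-1]              # relative displacement of neighbours
        relv = v[1:] - v[:-1]
        contact = rel < -gap              # clearance has closed
        fc = (k * (rel + gap) + c * relv) * contact   # negative in compression
        f[:-1] += fc                      # action on the left block ...
        f[1:] -= fc                       # ... reaction on the right block
        v += f / m * dt                   # semi-implicit Euler step
        x += v * dt
    return x

x = simulate_chain()   # final displacements of the 5 blocks
```

    A production seismic code would add impact behaviour and recorded ground-motion input, but the clearance logic above is the essential nonlinearity the abstract refers to.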

  8. A semi-analytical study on helical springs made of shape memory polymer

    International Nuclear Information System (INIS)

    Baghani, M; Naghdabadi, R; Arghavani, J

    2012-01-01

    In this paper, the responses of shape memory polymer (SMP) helical springs under axial force are studied both analytically and numerically. In the analytical solution, we first derive the response of a cylindrical tube under torsional loadings. This solution can be used for helical springs in which both the curvature and pitch effects are negligible. This is the case for helical springs with large ratios of the mean coil radius to the cross sectional radius (spring index) and also small pitch angles. Making use of this solution simplifies the analysis of the helical springs to that of the torsion of a straight bar with circular cross section. The 3D phenomenological constitutive model recently proposed for SMPs is also reduced to the 1D shear case. Thus, an analytical solution for the torsional response of SMP tubes in a full cycle of stress-free strain recovery is derived. In addition, the curvature effect is added to the formulation and the SMP helical spring is analyzed using the exact solution presented for torsion of curved SMP tubes. In this modified solution, the effect of the direct shear force is also considered. In the numerical analysis, the 3D constitutive equations are implemented in a finite element program and a full cycle of stress-free strain recovery of an SMP (extension or compression) helical spring is simulated. Analytical and numerical results are compared and it is shown that the analytical solution gives accurate stress distributions in the cross section of the helical SMP spring besides the global load–deflection response. Some case studies are presented to show the validity of the presented analytical method. (paper)
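    The reduction the abstract describes, analyzing a close-coiled helical spring as a straight bar in torsion, is the classical elastic result: the spring rate follows k = G d^4 / (8 D^3 n), and the Wahl factor reintroduces the curvature and direct-shear effects that the straight-bar idealization omits. The sketch below covers only this elastic baseline (the SMP constitutive model of the paper is not reproduced), and all numerical values are illustrative.

```python
import math

def spring_rate(G, d, D, n):
    """Stiffness of a close-coiled helical spring treated as a straight
    bar in torsion: k = G * d**4 / (8 * D**3 * n)."""
    return G * d**4 / (8.0 * D**3 * n)

def shear_stress(F, d, D, wahl=True):
    """Maximum shear stress under axial load F. The Wahl factor adds the
    curvature and direct-shear corrections discussed in the paper."""
    C = D / d                                      # spring index
    Kw = (4*C - 1) / (4*C - 4) + 0.615 / C if wahl else 1.0
    return Kw * 8.0 * F * D / (math.pi * d**3)

# Illustrative steel spring: G = 79 GPa, wire d = 5 mm, coil D = 40 mm, 10 coils
k = spring_rate(79e9, 5e-3, 40e-3, 10)    # ~9.6e3 N/m
tau = shear_stress(100.0, 5e-3, 40e-3)    # ~9.6e7 Pa at F = 100 N
```

    A large spring index C = D/d and a small pitch angle are exactly the conditions the abstract lists for this reduction to be accurate.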

  9. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  10. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used more in the future as a tool for everything that happens both online and offline. Of course, online is a real hobbit; Big Data is found in this medium, offering many advantages, being a real help for all consumers. In this paper we talked about Big Data as being a plus in developing new applications, by gathering useful information about the users and their behaviour. We've also presented the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit brought to this paper is presented in the cloud section.

  11. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  12. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  13. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  14. A semi-spring and semi-edge combined contact model in CDEM and its application to analysis of Jiweishan landslide

    Directory of Open Access Journals (Sweden)

    Chun Feng

    2014-02-01

    Full Text Available Continuum-based discrete element method (CDEM) is an explicit numerical method used for simulation of the progressive failure of geological bodies. To improve the efficiency of contact detection and simplify the calculation steps for contact forces, semi-springs and semi-edges are introduced in the calculation. A semi-spring is derived from a block vertex, formed by indenting the block vertex into each face (24 semi-springs for a hexahedral element). The formation process of a semi-edge is the same as that of a semi-spring (24 semi-edges for a hexahedral element). Based on the semi-springs and semi-edges, a new type of combined contact model is presented. According to this model, six contact types can be reduced to two: semi-spring/target-face contact and semi-edge/target-edge contact. With the combined model, the contact force can be calculated directly (information about the contact type is not necessary), and the failure judgment can be executed in a straightforward way (each semi-spring and semi-edge has its own characteristic area). The algorithm has been implemented in a C++ program. Some simple numerical cases are presented to show the validity and accuracy of this model. Finally, the failure mode, sliding distance, and critical friction angle of the Jiweishan landslide are studied with the combined model.
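    To illustrate the kind of direct force evaluation described above (a contact force computed without first classifying the contact type), the following sketch evaluates the normal force of one semi-spring against a target face. The linear force law, the names, and the parameters are illustrative assumptions, not the CDEM source code.

```python
import numpy as np

def semi_spring_force(tip, face_point, face_normal, kn, area):
    """Normal contact force of a single semi-spring on a target face.

    `tip` is the indented block vertex; its penetration is the signed
    distance below the face plane. The force is stiffness * characteristic
    area * depth, directed along the outward face normal.
    """
    n = np.asarray(face_normal, dtype=float)
    n = n / np.linalg.norm(n)
    depth = float(np.dot(np.asarray(face_point, dtype=float)
                         - np.asarray(tip, dtype=float), n))
    if depth <= 0.0:                 # no penetration -> no contact force
        return np.zeros(3)
    return kn * area * depth * n     # push the semi-spring back out

# tip 2 mm below a horizontal face; kn and area are arbitrary penalty values
f = semi_spring_force([0.0, 0.0, -0.002], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                      kn=1.0e9, area=1.0e-4)
```

    The characteristic area attached to each semi-spring is also what makes the failure judgment local: comparing the resulting stress against a strength criterion needs no information about which of the classical six contact types produced it.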

  15. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is subject of many publications. Some pay attention to the many possibilities of big data, others warn us for their consequences. This special

  16. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the ‘Big Data

  17. Recent trend of administration on hot springs

    Energy Technology Data Exchange (ETDEWEB)

    Okubo, Shigeru [Environment Agency, Tokyo (Japan)

    1989-01-01

    The Environment Agency exercises jurisdiction over the Hot Spring Act, and plans to protect hot spring sources and to utilize them appropriately. In terms of utilization, hot springs are widely used to remedy chronic diseases and serve as tourist attractions, besides being places for recuperation and repose. Statistics on Japanese hot springs show that the numbers of hot spring spots and utilized fountainheads increased in 1987 compared with 1986. Among utilized fountainheads, the number of naturally welling springs has been stable for 10 years, while pump-operated springs have increased. One reason is that demand for hot springs has grown as the number of users has increased; another is that pumping facilities have been installed to maintain the supply of hot water as the natural outflow has decreased. A major point of recent hot spring administration is the permitting of excavation and utilization of hot springs. Designation of National Hot Spring Health Resorts began in 1954 to ensure the effective and original use of hot springs and to promote their public use, with the aim of maintaining sound hot spring environments. By 1988, 76 places had been designated. 4 figs., 3 tabs.

  18. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  19. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  20. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: the big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in the presence of a standard inflation potential, initial conditions are naturally set for a long, slow-roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  1. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss in some detail the requisite CP violation, the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  2. arXiv AlterBBN v2: A public code for calculating Big-Bang nucleosynthesis constraints in alternative cosmologies

    CERN Document Server

    Arbey, A.; Hickerson, K.P.; Jenssen, E.S.

    We present the version 2 of AlterBBN, an open public code for the calculation of the abundance of the elements from Big-Bang nucleosynthesis. It does not rely on any closed external library or program, aims at being user-friendly and allowing easy modifications, and provides a fast and reliable calculation of the Big-Bang nucleosynthesis constraints in the standard and alternative cosmologies.

  3. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, has opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  4. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    Del Barrio et al. (2004) attempted more than 10 years ago to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher scores on empathy and on the Big Five factors, with the exception of the Neuroticism factor. They found associations with empathy in the domains of Openness, Agreeableness, Conscientiousness, and Extraversion. In our data, women score significantly higher on both empathy and the Big Five...

  5. Spring security 3.x cookbook

    CERN Document Server

    Mankale, Anjana

    2013-01-01

    This book follows a cookbook style exploring various security solutions provided by Spring Security for various vulnerabilities and threat scenarios that web applications may be exposed to at the authentication and session level layers.This book is for all Spring-based application developers as well as Java web developers who wish to implement robust security mechanisms into web application development using Spring Security.Readers are assumed to have a working knowledge of Java web application development, a basic understanding of the Spring framework, and some knowledge of the fundamentals o

  6. Weldon spring site environmental report for calendar year 1996. Revision 0

    International Nuclear Information System (INIS)

    1997-01-01

    This Site Environmental Report for Calendar Year 1996 describes the environmental monitoring programs at the Weldon Spring Site Remedial Action Project (WSSRAP). The objectives of these programs are to assess actual or potential exposure to contaminant effluents from the project area by providing public use scenarios and dose estimates, to demonstrate compliance with Federal and State permitted levels and regulations, and to summarize trends and/or changes in contaminant concentrations identified through environmental monitoring

  7. Weldon spring site environmental report for calendar year 1996. Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-23

    This Site Environmental Report for Calendar Year 1996 describes the environmental monitoring programs at the Weldon Spring Site Remedial Action Project (WSSRAP). The objectives of these programs are to assess actual or potential exposure to contaminant effluents from the project area by providing public use scenarios and dose estimates, to demonstrate compliance with Federal and State permitted levels and regulations, and to summarize trends and/or changes in contaminant concentrations identified through environmental monitoring.

  8. Geochemical and hydrologic data for wells and springs in thermal-spring areas of the Appalachians

    Energy Technology Data Exchange (ETDEWEB)

    Hobba, W.A. Jr.; Chemerys, J.C.; Fisher, D.W.; Pearson, F.J. Jr.

    1976-07-01

    Current interest in the geothermal potential of thermal-spring areas in the Appalachians makes all data on thermal springs and wells in these areas valuable. Presented here without interpretive comment are maps showing selected springs and wells, and tables of physical and chemical data pertaining to these wells and springs. The chemical tables show compositions of gases (oxygen, nitrogen, argon, methane, carbon dioxide, and helium), isotope contents (tritium, carbon-13, and oxygen-18), trace and minor element chemical data, and the usual complete chemical data.

  9. Child survival in big cities: the disadvantages of migrants.

    Science.gov (United States)

    Brockerhoff, M

    1995-05-01

    Data from 15 Demographic and Health Surveys are used to examine whether rural-urban migrants in developing countries experience higher child mortality after settling in towns and cities than do lifelong urban residents, and if so, what individual or household characteristics account for this. Findings indicate that children of female migrants from the countryside generally have much poorer survival chances than other urban children. This survival disadvantage is more pronounced in big cities than in smaller urban areas, among migrants who have lived in the city for many years than among recent migrants, and in urban Latin America than in urban North Africa and sub-Saharan Africa. Within big cities, higher child mortality among migrant women is clearly related to their concentration in low-quality housing, and in part to fertility patterns at early ages of children and to mother's educational attainment at later ages. Excess child mortality among urban migrants may also result from factors associated with the migration process that are outlined in this study but not included in the analysis. Evidence of moderately high levels of residential segregation of migrant women in big cities suggests that opportunities exist for urban health programs to direct interventions to this disadvantaged segment of city populations.

  10. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we explore here the important question of whether Ca²+ binds to Big domains, which would provide a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. The selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have a β-sheet conformation, and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to a Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that, despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.

  11. The effects of BIG-3 on osteoblast differentiation are not dependent upon endogenously produced BMPs

    International Nuclear Information System (INIS)

    Gori, Francesca; Demay, Marie B.

    2005-01-01

    BMPs play an important role in both intramembranous and endochondral ossification. BIG-3, BMP-2-induced gene 3 kb, encodes a WD-40 repeat protein that accelerates the program of osteoblastic differentiation in vitro. To examine the potential interactions between BIG-3 and the BMP-2 pathway during osteoblastic differentiation, MC3T3-E1 cells stably transfected with BIG-3 (MC3T3E1-BIG-3), or with the empty vector (MC3T3E1-EV), were treated with noggin. Noggin treatment of pooled MC3T3E1-EV clones inhibited the differentiation-dependent increase in AP activity observed in the untreated MC3T3E1-EV clones but did not affect the increase in AP activity in the MC3T3E1-BIG-3 clones. Noggin treatment decreased the expression of Runx2 and type I collagen mRNAs and impaired mineralized matrix formation in MC3T3E1-EV clones but not in MC3T3E1-BIG-3 clones. To determine whether the actions of BIG-3 on osteoblast differentiation converged upon the BMP pathway or involved an alternate signaling pathway, Smad1 phosphorylation was examined. Basal phosphorylation of Smad1 was not altered in the MC3T3E1-BIG-3 clones. However, these clones exhibited neither the noggin-dependent decrease in phosphoSmad1 observed in the MC3T3E1-EV clones nor a decrease in nuclear localization of phosphoSmad1. These observations suggest that BIG-3 accelerates osteoblast differentiation in MC3T3-E1 cells by inducing phosphorylation and nuclear translocation of Smad1 independently of endogenously produced BMPs

  12. Missouri Department of Natural Resources Hazardous Waste Program Weldon Spring site remedial action project. Status to date January 1998

    International Nuclear Information System (INIS)

    1998-01-01

    This document describes the progress made by the Missouri Department of Natural Resources (MDNR) during the fifth year (1997) of the Agreement in Support (AIS) in its oversight role at the Weldon Spring Site Remedial Action Project (WSSRAP). Staffing was a challenge this year, with the resignation of an Environmental Specialist (ES) in June 1997 and the death of Robert Stovall, an Environmental Engineer (EE) II, in August 1997. Progress made during this period includes securing a contract laboratory, participating in several workgroup meetings for activities at the site, overseeing the Feasibility Study/Proposed Plan (FS/PP), coordinating between the US Department of Energy and the various State regulatory programs, and interacting with the local public drinking water supply agency and health departments

  13. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large-scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data, add value both to the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  14. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  15. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  16. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position, with unique insights on a field-by-field basis, into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data and publicly funded data analytic tools that rival Climate Corp. in complexity and innovation for use in the public domain.

  17. Applying spatial analysis techniques to assess the suitability of multipurpose uses of spring water in the Jiaosi Hot Spring Region, Taiwan

    Science.gov (United States)

    Jang, Cheng-Shin

    2016-04-01

    The Jiaosi Hot Spring Region is located in northeastern Taiwan and is rich in geothermal springs. The geothermal development of the Jiaosi Hot Spring Region dates back to the 18th century, and currently the spring water is processed for various uses, including irrigation, aquaculture, swimming, bathing, foot spas, and recreational tourism. Because of the proximity of the Jiaosi Hot Spring Region to the metropolitan area of Taipei City, the hot spring resources in this region attract millions of tourists annually. Recently, the Taiwan government has paid more attention to surveying the spring water temperatures in the Jiaosi Hot Spring Region because severe overexploitation of the spring water has caused a significant decline in spring water temperatures. Furthermore, the temperature of spring water is a reliable indicator for exploring the occurrence and evolution of springs and strongly affects hydrochemical reactions, components, and magnitudes. The multipurpose uses of spring water can be dictated by the temperature of the water. Therefore, accurately estimating the temperature distribution of the spring water is critical in the Jiaosi Hot Spring Region to facilitate the sustainable development and management of the multipurpose uses of the hot spring resources. To evaluate the suitability of spring water for these various uses, this study spatially characterized the spring water temperatures of the Jiaosi Hot Spring Region by using ordinary kriging (OK), sequential Gaussian simulation (SGS), and a geographical information system (GIS). First, variogram analyses were used to determine the spatial variability of spring water temperatures. Next, OK and SGS were adopted to model the spatial distributions and uncertainty of the spring water temperatures. Finally, the land use (i.e., agriculture, dwelling, public land, and recreation) was determined and combined with the estimated distributions of the spring water temperatures using GIS.
A suitable development strategy
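
    The ordinary kriging step described in the abstract above can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's implementation: the exponential covariance model, its sill and range, and the sample temperatures below are invented placeholders.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=500.0):
    # Exponential covariance model C(h) = sill * exp(-3h / range).
    # Sill and range are hypothetical, not fitted to the Jiaosi data.
    return sill * np.exp(-3.0 * h / rng)

def ordinary_kriging(coords, values, target, cov=exp_cov):
    """Ordinary kriging estimate at `target` from samples (coords, values)."""
    n = len(values)
    # Pairwise distances between sample locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    # Kriging system: sample covariances bordered by the unbiasedness
    # constraint (weights must sum to 1, enforced via a Lagrange multiplier)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[:n, n] = A[n, :n] = 1.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)[:n]  # last solution entry is the multiplier
    return float(w @ values)

# Hypothetical spring-temperature samples (x, y in metres; temperature in deg C)
coords = np.array([[0.0, 0.0], [300.0, 100.0], [150.0, 400.0], [500.0, 250.0]])
temps = np.array([48.0, 52.0, 45.0, 50.0])
estimate = ordinary_kriging(coords, temps, np.array([200.0, 200.0]))
```

    A useful sanity check on any kriging code is exactness: estimating at a sample location must return that sample's value.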

  18. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  19. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  20. Spring Recipes A Problem-solution Approach

    CERN Document Server

    Long, Josh; Mak, Gary

    2010-01-01

    With over three million users/developers, the Spring Framework is the leading "out of the box" Java framework. Spring addresses and offers simple solutions for most aspects of your Java/Java EE application development, and guides you to use industry best practices to design and implement your applications. The release of Spring Framework 3 has ushered in many improvements and new features. Spring Recipes: A Problem-Solution Approach, Second Edition builds upon the bestselling success of the previous edition but focuses on the latest Spring 3 features for building enterprise Java applications.

  1. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce ever more data themselves, e.g. in social networks, and digitalization keeps advancing. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  2. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  3. Quarry geotechnical report for the Weldon Spring Site Remedial Action Project, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    1990-11-01

    This report has been prepared for the United States Department of Energy's (DOE) Weldon Spring Site Remedial Action Project (WSSRAP) by the Project Management Contractor (PMC), which is MK-Ferguson Company (MK-Ferguson) with Jacobs Engineering Group (JEG) as its designated subcontractor. The Weldon Spring site (WSS) comprises the Weldon Spring quarry area and the Weldon Spring chemical plant and raffinate pit areas. This report presents the results of geotechnical investigations conducted during 1989--1990 at the proposed Weldon Spring quarry staging and water treatment facilities in the quarry area. The facilities are intended for treatment of water removed from the quarry area. An access road and a decontamination pad will be necessary for handling and transportation of bulk waste. Results of previous geotechnical investigations performed by other geoscience and environmental engineering firms in the quarry area were reviewed, summarized and incorporated into this report. Well logging, stratigraphy data, piezometer data, elevations, and soil characteristics are also included.

  4. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data; the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, among other tools, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented.
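
    At its core, similarity-based read-across predicts a property of a data-poor substance from its most similar data-rich neighbour. A minimal sketch of that idea, assuming set-based structural fingerprints and Jaccard (Tanimoto) similarity; the fingerprints and property labels are invented for illustration and are unrelated to the REACH-across tool itself.

```python
def jaccard(a, b):
    # Tanimoto/Jaccard similarity between two feature sets
    return len(a & b) / len(a | b)

def read_across(query_fp, knowns):
    # knowns: list of (fingerprint, property) pairs for data-rich substances.
    # Predict the query's property from its most similar neighbour.
    fp, prop = max(knowns, key=lambda kv: jaccard(query_fp, kv[0]))
    return prop

# Invented fingerprints: sets of structural-feature indices
knowns = [({1, 2, 3, 4}, "irritant"), ({7, 8, 9}, "non-irritant")]
prediction = read_across({1, 2, 3}, knowns)
```

    Real read-across additionally weighs biological similarity and data quality; this sketch only captures the nearest-neighbour gap-filling step.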

  5. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  6. Design and Implementation of a Training Course on Big Data Use in Water Management

    Directory of Open Access Journals (Sweden)

    Petra Koudelova

    2017-09-01

    Big Data has great potential to be applied to research in the field of geosciences. Motivated by the opportunity provided by the Data Integration and Analysis System (DIAS) of Japan, we organized an intensive two-week course that aims to educate participants on Big Data and its exploitation to solve water management problems. When developing and implementing the Program, we identified two main challenges: (1) assuring that the training has a lasting effect and (2) developing an interdisciplinary curriculum suitable for participants of diverse professional backgrounds. To address these challenges, we introduced several distinctive features. The Program was based on experiential learning: the participants were required to solve real problems and worked in international and multidisciplinary teams. The lectures were strictly relevant to the case-study problems. Significant time was devoted to hands-on exercises, and participants received immediate feedback on individual assignments to ensure skills development. Our evaluation of the two editions of the Program, in 2015 and 2016, indicates significant positive outcomes. The successful completion of the individual assignments confirmed that the participants gained key skills related to the usage of DIAS and other tools. The final solutions to the case-study problems showed that the participants were able to integrate and apply the obtained knowledge, indicating that the Program’s format and curriculum were effective. We found that participants used DIAS in subsequent studies and work, thus suggesting that the Program had long-lasting effects. Our experience indicates that despite time constraints, short courses can effectively encourage researchers and practitioners to explore opportunities provided by Big Data.

  7. Evaluating Gridded Spring Indices Using the USA National Phenology Network's Observational Phenology Data

    Science.gov (United States)

    Crimmins, T. M.; Gerst, K.

    2017-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) produces and freely delivers daily and short-term forecast maps of spring onset dates at fine spatial scale for the conterminous United States and Alaska using the Spring Indices. These models, which represent the start of biological activity in the spring season, were developed using a long-term observational record of four species of lilacs and honeysuckles contributed by volunteer observers. Three of the four species continue to be tracked through the USA-NPN's phenology observation program, Nature's Notebook. The gridded Spring Index maps have utility for a wide range of natural resource planning and management applications, including scheduling invasive species and pest detection and control activities, anticipating allergy outbreaks, and planning agricultural harvest dates. However, to date, there has not been a comprehensive assessment of how accurately the gridded Spring Index maps reflect phenological activity in lilacs and honeysuckles or other species of plants. In this study, we used observational plant phenology data maintained by the USA-NPN to evaluate how well the gridded Spring Index maps match leaf and flowering onset dates in (a) the lilac and honeysuckle species used to construct the models and (b) several species of deciduous trees. The Spring Index performed well at predicting the timing of leaf-out and flowering in lilacs and honeysuckles. The average error between predicted and observed dates of onset ranged from 5.9 to 11.4 days. Flowering models performed slightly better than leaf-out models. The degree to which the Spring Indices predicted native deciduous tree leaf and flower phenology varied by year, species, and region. Generally, the models were better predictors of leaf and flowering onset dates in the Northeastern and Midwestern US. These results reveal when and where the Spring Indices are a meaningful proxy of phenological activity across the United States.
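
    The error statistic reported above is, in essence, a mean absolute difference in days between predicted and observed onset dates. A minimal sketch of that computation; the dates below are invented for illustration and are not USA-NPN observations.

```python
from datetime import date

def mean_abs_error_days(predicted, observed):
    # Mean absolute difference, in days, between paired onset dates
    assert len(predicted) == len(observed)
    diffs = [abs((p - o).days) for p, o in zip(predicted, observed)]
    return sum(diffs) / len(diffs)

# Invented example: gridded Spring Index predictions vs. volunteer observations
pred = [date(2017, 4, 10), date(2017, 4, 22), date(2017, 5, 1)]
obs = [date(2017, 4, 14), date(2017, 4, 20), date(2017, 5, 7)]
mae = mean_abs_error_days(pred, obs)  # (4 + 2 + 6) / 3 = 4.0 days
```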

  8. Spring/dimple instrument tube restraint

    International Nuclear Information System (INIS)

    DeMario, E.E.; Lawson, C.N.

    1993-01-01

    A nuclear fuel assembly for a pressurized water nuclear reactor has a spring and dimple structure formed in a non-radioactive insert tube placed in the top of a sensor-receiving instrumentation tube thimble disposed in the fuel assembly and attached at a top nozzle, a bottom nozzle, and intermediate grids. The instrumentation tube thimble is open at the top, where the sensor or its connection extends through the cooling water for coupling to a sensor signal processor. The spring and dimple insert tube is mounted within the instrumentation tube thimble and extends downwardly adjacent the top. The springs and dimples restrain the sensor and its connections against lateral displacement, which would cause impact with the instrumentation tube thimble due to the strong axial flow of cooling water. The instrumentation tube has a stainless steel outer sleeve and a zirconium alloy inner sleeve below the insert tube adjacent the top. The insert tube is made of relatively non-radioactivated Inconel alloy. The opposed springs and dimples are formed on diametrically opposite inner walls of the insert tube, the springs being formed as spaced axial cuts in the insert tube, with a web of the insert tube between the cuts bowed radially inwardly for forming the spring, and the dimples being formed as radially inward protrusions opposed to the springs. 7 figures

  9. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species ofWild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  10. Big-Time College Sports: The Seductions and Frustrations of NCAA's Division I.

    Science.gov (United States)

    Oberlander, Susan; Lederman, Douglas

    1988-01-01

    Administrators at Southeast Missouri State University may gamble on a controversial public relations strategy that would depend on a big-time sports program to increase enrollment. Utica College, however, will return to the NCAA's Division III after spiraling sports costs and inability to gain entrance to a suitable conference. (MLW)

  11. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    The article analyzes the notion of ‘Big Data’, which has been discussed over the last 10 years. The reasons and factors behind the issue are identified. It is shown that the factors creating the ‘Big Data’ issue have existed for quite a long time and have, from time to time, caused informational barriers. Such barriers were successfully overcome through science and technology. The analysis presented here treats the ‘Big Data’ issue as a form of information barrier. Framed this way, the issue can be addressed correctly, and it encourages the development of scientific and computational methods.

  12. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  13. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  14. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  15. Measurements of 161 Double Stars With a High-Speed CCD: The Winter/Spring 2017 Observing Program at Brilliant Sky Observatory, Part 2

    Science.gov (United States)

    Harshaw, Richard

    2018-04-01

    In the winter and spring of 2017, an aggressive observing program of measuring close double stars with speckle interferometry and CCD imaging was undertaken at Brilliant Sky Observatory, my observing site in Cave Creek, Arizona. A total of 596 stars were observed, 8 of which were rejected for various reasons, leaving 588 pairs. Of these, 427 were observed and measured with speckle interferometry, while the remaining 161 were measured with a CCD. This paper reports the results of the observations of the 161 CCD cases. A separate paper in this issue will report the speckle measurements of the 427 other pairs.

  16. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence is touted as a transformational tool to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  17. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  18. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  19. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  20. Baseline risk evaluation for exposure to bulk wastes at the Weldon Spring Quarry, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    Haroun, L.A.; Peterson, J.M.; MacDonell, M.M.; Hlohowskyj, I.

    1990-01-01

    The US Department of Energy (DOE), under its Surplus Facilities Management Program (SFMP), is responsible for cleanup activities at the Weldon Spring site, Weldon Spring, Missouri. The site consists of a chemical plant and raffinate pits area and a quarry. This baseline risk evaluation has been prepared to support a proposed response action for management of contaminated bulk wastes in the quarry. The quarry became chemically and radioactively contaminated as a result of various wastes that were disposed of there between 1942 and 1969. This risk evaluation assesses potential impacts on human health and the environment that may result from exposure to releases of contaminants from the quarry under current site conditions. Risk assessment is a key component of the remedial investigation/feasibility study (RI/FS) process, as identified in guidance from the US Environmental Protection Agency (EPA); this process addresses sites subject to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended by the Superfund Amendments and Reauthorization Act of 1986. Response actions at the Weldon Spring quarry are subject to CERCLA requirements because the quarry is listed on the EPA's National Priorities List. The DOE is also responsible for complying with the requirements of the National Environmental Policy Act (NEPA) of 1969, which requires federal agencies to consider the environmental consequences of a proposed action as part of the decision-making process for that action. Although this document has not been prepared to fulfill specific NEPA requirements, the analyses contained herein, along with the analyses provided in the remedial investigation, feasibility study, and other supporting documents, are intended to meet the environmental assessment requirements of NEPA.

  1. [History of hot spring bath treatment in China].

    Science.gov (United States)

    Hao, Wanpeng; Wang, Xiaojun; Xiang, Yinghong; Gu Li, A Man; Li, Ming; Zhang, Xin

    2011-07-01

    As early as the 7th century B.C. (Western Zhou Dynasty), there is a record that a 'spring which contains sulfur could treat disease' on the Wentang Stele written by WANG Bao. Wenquan Fu, written by ZHANG Heng in the Eastern Han Dynasty, also mentioned hot spring bath treatment. The distribution of hot springs in China was summarized by LI Daoyuan in the Northern Wei Dynasty in his Shuijingzhu, which recorded hot springs in 41 places and interpreted the definition of hot spring. Bencao Shiyi (by CHEN Cangqi, Tang Dynasty) discussed the formation of and indications for hot springs. HU Zai in the Song Dynasty pointed out distinguishing hot springs according to water quality in his book Yuyin Conghua. TANG Shenwei in the Song Dynasty noted in Jingshi Zhenglei Beiji Bencao that hot spring bath treatment should be combined with diet. Shiwu Bencao (Ming Dynasty) classified hot springs into sulfur springs, arsenicum springs, cinnabar springs, aluminite springs, etc. and pointed out their individual indications. Geologists did not start the work on distribution and water quality analysis of hot springs until the first half of the 20th century. There are 972 hot springs in Wenquan Jiyao (written by geologist ZHANG Hongzhao and published in 1956). In July 1982, the First National Geothermal Conference was held and it reported that there were more than 2600 hot springs in China. Since the second half of the 20th century, hot spring sanatoriums and rehabilitation centers have been established, which promoted the development of hot spring bath treatment.

  2. RETRAN operational transient analysis of the Big Rock Point plant boiling water reactor

    International Nuclear Information System (INIS)

    Sawtelle, G.R.; Atchison, J.D.; Farman, R.F.; VandeWalle, D.J.; Bazydlo, H.G.

    1983-01-01

    Energy Incorporated used the RETRAN computer code to model and calculate nine Consumers Power Company Big Rock Point Nuclear Power Plant transients. RETRAN, a best-estimate, one-dimensional, homogeneous-flow, thermal-equilibrium code, is applicable to FSAR Chapter 15 transients for Conditions I through IV. The BWR analyses were performed in accordance with USNRC Standard Review Plan criteria and in response to the USNRC Systematic Evaluation Program. The RETRAN Big Rock Point model was verified by comparison to plant startup test data. This paper discusses the unique modeling techniques used in RETRAN to model this steam-drum-type BWR. Transient analysis results are also presented.

  3. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  4. Big domains are novel Ca²+-binding modules: evidence from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    BACKGROUND: Many bacterial surface-exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface-exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to a Big domain, which would provide a novel functional role of the proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.
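
    For single-site dissociation constants in the range quoted above (2-4 µM), the fraction of domain bound at a given free Ca²+ concentration follows the standard binding isotherm, theta = [Ca²+] / (Kd + [Ca²+]). A minimal sketch with illustrative numbers:

```python
def fraction_bound(ca_uM, kd_uM):
    # Single-site binding isotherm: theta = [Ca2+] / (Kd + [Ca2+]),
    # both concentrations in the same units (here micromolar)
    return ca_uM / (kd_uM + ca_uM)

# At [Ca2+] equal to Kd, half of the sites are occupied
half = fraction_bound(2.0, 2.0)   # 0.5
# At ten times Kd, the domain is ~91% saturated
high = fraction_bound(20.0, 2.0)
```

    This is the textbook one-site model; cooperative or multi-site binding would require a Hill or sequential-binding treatment instead.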

  5. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe that is filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that they cause the single singular events predicted by the corresponding quintessential (phantom) models to appear twice, in a manner which can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future may be connected in a non-paradoxical manner in the universes described by the new symmetric solutions.

  6. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  7. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  8. Nuclear reactor spring strip grid spacer

    International Nuclear Information System (INIS)

    Patterson, J.F.; Flora, B.S.

    1978-01-01

    A bimetallic grid spacer is described comprising a grid structure of zircaloy formed by intersecting striplike members which define fuel element openings for receiving fuel elements and spring strips made of Inconel positioned within the grid structure for cooperating with the fuel elements to maintain them in their desired position. A plurality of these spring strips extend longitudinally between sides of the grid structure, being locked in position by the grid retaining strips. The fuel rods, which are disposed in the fuel openings formed in the grid structure, are positioned by means of the springs associated with the spring strips and a plurality of dimples which extend from the zircaloy grid structure into the openings. In one embodiment the strips are disposed in a plurality of arrays with those spring strip arrays situated in opposing diagonal quadrants of the grid structure extending in the same direction and adjacent spring strip arrays in each half of the spacer extending in relatively perpendicular directions. Other variations of the spring strip arrangements for a particular fuel design are disclosed herein

  9. Sustaining Employability: A Process for Introducing Cloud Computing, Big Data, Social Networks, Mobile Programming and Cybersecurity into Academic Curricula

    Directory of Open Access Journals (Sweden)

    Razvan Bologa

    2017-12-01

    Full Text Available This article describes a process for introducing modern technological subjects into the academic curricula of non-technical universities. The process described may contribute to social sustainability by enabling non-technical students' access to the field of the Internet of Things and the broader Industry 4.0. The process was defined and tested during a curricular reform project that took place in two large universities in Eastern Europe. In this article, the authors describe the results and impact, over multiple years, of a project financed by the European Union that aimed to introduce the following subjects into the academic curricula of business students: cloud computing, big data, mobile programming, and social networks and cybersecurity (CAMSS). The results are useful to those trying to implement similar curricular reforms, or to companies that need to manage talent pipelines.

  10. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of 'Big Data', the collection, storage, transmission and analysis of big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
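The four Sca-PBDA phases described above (clustering, sink-configured data modification, intra-/inter-cluster aggregation, recovery at the sink) can be sketched as follows. The additive-masking scheme, the function names, and the toy readings are illustrative assumptions, not the paper's actual protocol:

```python
# Hedged sketch of the Sca-PBDA phases from the abstract:
# (1) nodes are grouped into clusters, (2) each node masks its reading
# using a configuration issued by the sink, (3) masked readings are
# summed within and then across clusters, (4) the sink strips the
# masks to recover the true aggregate. All names are hypothetical.
import random

def make_masks(node_ids, seed=42):
    rng = random.Random(seed)           # configuration kept by the sink
    return {n: rng.randint(0, 1000) for n in node_ids}

def aggregate(clusters, readings, masks):
    # intra-cluster sums, then one inter-cluster sum toward the sink;
    # intermediate sums never expose an unmasked individual reading
    cluster_sums = [sum(readings[n] + masks[n] for n in c) for c in clusters]
    return sum(cluster_sums)

def sink_recover(masked_total, masks):
    return masked_total - sum(masks.values())

clusters = [[0, 1], [2, 3, 4]]
readings = {0: 5, 1: 7, 2: 3, 3: 9, 4: 1}
masks = make_masks(readings)
total = sink_recover(aggregate(clusters, readings, masks), masks)
print(total)  # 25, the true sum of all readings
```

The design point this illustrates is that aggregation and privacy can be composed: only the sink, which issued the masking configuration, can invert the final sum.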

  11. Deductive systems for BigData integration

    Directory of Open Access Journals (Sweden)

    Radu BUCEA-MANEA-TONIS

    2018-03-01

    Full Text Available Globalization is associated with an increased volume of data to be processed from e-commerce transactions. Specialists are looking at different solutions, such as Big Data, Hadoop and data warehouses, but it seems that the future lies in predicate logic implemented through deductive database technology. A shift has to be made from imperative languages to declarative languages for application development. Deductive databases are also very useful in student teaching programs. Thus, the article makes a consistent literature review of the field and shows practical examples of using predicate logic in deductive systems, in order to integrate different kinds of data types.
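The deductive-database idea the abstract advocates can be illustrated with a tiny Datalog-style evaluation: facts plus rules evaluated bottom-up to a fixpoint. The relation names and facts here are invented for the example; real deductive systems (e.g. Datalog engines) implement this far more efficiently:

```python
# Naive bottom-up evaluation of two Datalog-style rules:
#   ancestor(X,Y) :- parent(X,Y).
#   ancestor(X,Z) :- ancestor(X,Y), parent(Y,Z).
# Facts are (relation, arg1, arg2) tuples; iteration stops at a fixpoint.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive(facts):
    known = set(facts)
    while True:
        new = set()
        for rel, x, y in known:
            if rel == "parent":
                new.add(("ancestor", x, y))          # first rule
        for rel, x, y in known:
            if rel == "ancestor":
                for rel2, y2, z in known:
                    if rel2 == "parent" and y2 == y:
                        new.add(("ancestor", x, z))  # second rule
        if new <= known:          # fixpoint reached: nothing new derived
            return known
        known |= new

result = derive(facts)
print(("ancestor", "alice", "carol") in result)  # True
```

The declarative appeal is that integration logic lives in the rules, not in imperative traversal code.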

  12. Big data of tree species distributions

    DEFF Research Database (Denmark)

    Serra-Diaz, Josep M.; Enquist, Brian J.; Maitner, Brian

    2018-01-01

    Background: Trees play crucial roles in the biosphere and societies worldwide, with a total of 60,065 tree species currently identified. Increasingly, a large amount of data on tree species occurrences is being generated worldwide: from inventories to pressed plants. While many of these data are currently available in big databases, several challenges hamper their use, notably geolocation problems and taxonomic uncertainty. Further, we lack a complete picture of the data coverage and quality assessment for open/public databases of tree occurrences. Methods: We combined data from five major ... and data aggregation, especially from national forest inventory programs, to improve the current publicly available data.

  13. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only given rise to challenging technical questions; it also brings with it a range of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for

  14. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are, as a rule, required to make a variable or combination of variables suitable for predicting disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact on clinical practices and the doctor-patient relation of the influx of Big Data and computerized medicine; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.
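The key point in Part 1, that only high relative risks make a variable useful for individual prediction, can be made concrete with a toy calculation. The baseline risk and the relative-risk values below are illustrative assumptions, not figures from the article:

```python
# Toy illustration: absolute risk among carriers of a predictive marker
# is the baseline (non-carrier) risk multiplied by the relative risk.
# Baseline risk of 1% is an assumed example value.

def risk_among_carriers(baseline_risk: float, relative_risk: float) -> float:
    return baseline_risk * relative_risk

baseline = 0.01
for rr in (1.5, 2.0, 10.0, 50.0):
    print(f"RR={rr:>5}: carrier risk = {risk_among_carriers(baseline, rr):.0%}")
# An RR of 2 leaves carriers at only 2% absolute risk, nearly useless
# for individual prediction; an RR on the order of 50 raises it to 50%.
```

This is why modest relative risks, however real at the population level, rarely support a marketable predictive test.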

  15. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges for processing and analysis. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  16. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of Big Data's most promising usage domains are connected with distinguished player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the major types of business ecosystem players disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and by players' responsive strategies. System dynamics was used to visualize relationships in the provided model.

  17. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  18. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  19. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  20. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and has tried to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper has also made an original contribution by proposing 'big data' terminology in Tamil. The paper further suggests a few areas to explore using big data in Tamil on the lines of the Tamil Nadu Government 'Vision 2023'. Whilst big data has something to offer everyone, it ...

  1. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  2. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with special reference to any future inquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors, and an attempt is made to tackle the problem that participation in a public inquiry comes far too late to be objective. The author feels that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries, but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  3. [Size distributions of aerosol during the Spring Festival in Nanjing].

    Science.gov (United States)

    Wang, Hong-Lei; Zhu, Bin; Shen, Li-Juan; Liu, Xiao-Hui; Zhang, Ze-Feng; Yang, Yang

    2014-02-01

    In order to investigate the impact of firework burning on the size distribution of atmospheric aerosol during the Spring Festival in Nanjing, the number and mass concentrations of aerosol, as well as the mass concentrations of gaseous pollutants, were measured during January 19-31, 2012. The results indicated that the concentration of aerosol between 10-20 nm decreased, while aerosol concentrations in the ranges of 50-100 nm, 100-200 nm and 200-500 nm increased during the firework burning period compared with the non-burning period. However, there was no obvious variation for aerosol between 20-50 nm and 0.5-10 µm. The number concentration spectrum was bimodal during the non-burning period and unimodal during the burning period, with the peak value shifting toward the large-diameter section. The mass concentration presented a bimodal distribution, and the values of PM2.5/PM10 and PM1/PM10 increased by 10% during the burning period. The firework burning events had a big influence on the density of aerosol between 1.0-2.1 µm.
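The bimodal-versus-unimodal spectra described above are conventionally modeled as sums of lognormal modes, the standard representation for aerosol number size distributions. The mode parameters below are illustrative, not fitted to the Nanjing data:

```python
# A bimodal aerosol number size distribution as the sum of two
# lognormal modes (dN/dlogD). Mode totals, median diameters and
# geometric standard deviations are invented example values.
import math

def lognormal_mode(d_nm, n_total, d_median_nm, sigma_g):
    """dN/dlogD contribution of one lognormal mode at diameter d_nm."""
    return (n_total / (math.log10(sigma_g) * math.sqrt(2 * math.pi))
            * math.exp(-(math.log10(d_nm / d_median_nm) ** 2)
                       / (2 * math.log10(sigma_g) ** 2)))

def spectrum(d_nm):
    # nucleation mode near 30 nm plus accumulation mode near 150 nm
    return (lognormal_mode(d_nm, 8000, 30, 1.8)
            + lognormal_mode(d_nm, 5000, 150, 1.6))

# A dip between the two mode medians is the signature of bimodality.
dip = spectrum(30) > spectrum(80) < spectrum(150)
print(dip)  # True
```

During firework burning, the shift to a unimodal spectrum corresponds to one mode (at larger diameters) dominating this sum.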

  4. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop.This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over Big data with R packages. It would be helpful if readers have basic knowledge of R.

  5. 75 FR 39241 - Hooper Springs Project

    Science.gov (United States)

    2010-07-08

    ... DEPARTMENT OF ENERGY Bonneville Power Administration Hooper Springs Project AGENCY: Bonneville... (collectively referred to as the Hooper Springs Project). The new BPA substation would be called Hooper Springs... proposed project would address voltage stability and reliability concerns of two of BPA's full requirements...

  6. Adapting a rapid assessment protocol to environmentally assess palm swamp (Veredas) springs in the Cerrado biome, Brazil.

    Science.gov (United States)

    Guimarães, Ariane; de Lima Rodrigues, Aline Sueli; Malafaia, Guilherme

    2017-10-30

    The exploitation and degradation of natural environments exert intense pressure on important ecosystems worldwide. Thus, it is necessary to develop or adapt assessment methods to monitor environmental changes and to generate results that can be applied to environmental management programs. The Brazilian Veredas (palm swamp phytophysiognomies typical of the Cerrado biome) are threatened by several human activities; thus, the aim of the present study is to adapt a rapid assessment protocol (RAP) for application to Veredas springs, using the upper course of the Vai-e-Vem stream watershed (Ipameri County, Goiás State, Brazil). Several springs in the study site were visited and 11 of them were considered Veredas springs. After the RAP was adapted, the instrument was validated and used to environmentally assess the springs in order to demonstrate its applicability. The present study has provided an instrument that can be used to monitor Veredas springs.

  7. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have established their own forums on the topic. Perspectives and debates are flourishing, yet a consensus definition of big data is still lacking. The 3Vs paradigm, which stands for Volume, Variety and Velocity, is frequently evoked to define the principles of big data. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary, evidence-based approach in forensic science and medicine. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  8. Multimedia assessment of health risks for the Weldon Spring site remedial action project

    International Nuclear Information System (INIS)

    Haroun, L.A.; MacDonell, M.M.; Peterson, J.M.; Fingleton, D.J.

    1990-01-01

    The US Department of Energy (DOE), under its Surplus Facilities Management Program (SFMP), is responsible for cleanup activities at the Weldon Spring site, Weldon Spring, Missouri. The site consists of two noncontiguous areas: the chemical plant area, which includes four raffinate pits, and the quarry. The Weldon Spring site became radioactively and chemically contaminated as a result of processing and disposal activities that took place from the 1940s through the 1960s. The US Department of the Army used the Weldon Spring site to produce dinitrotoluene (DNT) and trinitrotoluene (TNT) explosives from 1941 to 1946. The US Atomic Energy Commission (AEC, predecessor of the DOE) used the site to process uranium and thorium ore concentrates from 1957 to 1966. The quarry was used by the Army and the AEC for waste disposal beginning in the early 1940s; it was last used for disposal in 1969. Wastes placed in the quarry include TNT and DNT residues and radioactively contaminated materials. A summary of disposal activities at the quarry is presented. As part of the environmental compliance process at the Weldon Spring site, a baseline risk evaluation (BRE) was prepared to assess the potential risks associated with contamination present at the quarry. 13 refs., 2 figs., 6 tabs

  9. Purely Functional Structured Programming

    OpenAIRE

    Obua, Steven

    2010-01-01

    The idea of functional programming has played a big role in shaping today's landscape of mainstream programming languages. Another concept that dominates the current programming style is Dijkstra's structured programming. Both concepts have been successfully married, for example in the programming language Scala. This paper proposes how the same can be achieved for structured programming and PURELY functional programming via the notion of LINEAR SCOPE. One advantage of this proposal is that m...

  10. Big Data over a 100 G network at Fermilab

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-01-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, as a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years as the data-taking rate of the major LHC experiments reached tens of petabytes per year. At Fermilab, this regularly resulted in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research and Development facility connected to the ESnet 100 G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R and D facility and the continuation of the evaluation program.

  11. Big Data Over a 100G Network at Fermilab

    Science.gov (United States)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-06-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, as a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years as the data-taking rate of the major LHC experiments reached tens of petabytes per year. At Fermilab, this regularly resulted in peaks of data movement on the wide area network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever-increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R&D facility and the continuation of the evaluation program.

  12. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patients' care processes and of individual patients' behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the gathered data and extracting new knowledge from them. This article reviews the main concepts and definitions related to big data, presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried out in the MOSAIC project, funded by the European Commission. PMID:25910540

  13. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool (BIG-S) in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnoses and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
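The reported sensitivity and specificity can be related back to a confusion matrix. The cell counts below are reconstructed assumptions (299 of 300 patients screened positive, 127 of 132 controls screened negative), chosen only because they reproduce figures close to the abstract's; the study's actual counts are not given here:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# The counts are illustrative reconstructions, not the study's data.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

tp, fn = 299, 1   # gambling-disorder patients flagged / missed by BIG-S
tn, fp = 127, 5   # comparison group correctly passed / wrongly flagged

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 99.7%
print(f"specificity = {specificity(tn, fp):.1%}")  # 96.2%
```

With group sizes of 300 and 132, percentages at this precision pin the counts down almost uniquely, which is what makes such a reconstruction plausible.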

  14. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn; Liu, Yan, E-mail: ztduan@chd.edu.cn; Dai, Jiting, E-mail: ztduan@chd.edu.cn; Kang, Jun, E-mail: ztduan@chd.edu.cn [Chang' an University School of Information Engineering, Xi' an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi' an (China)

    2014-10-06

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the nature and technical characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  15. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the nature and technical characteristics of big data and of traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  16. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    The agricultural sector's external conditions and competitive environment are changing, and this will necessitate a development toward "big business," in which farms become even larger, more industrialized, and more concentrated. Big business will become a dominant trend in Danish agriculture - but not the only one...

  17. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  18. Engineering evaluation/cost analysis for the proposed management of 15 nonprocess buildings (15 series) at the Weldon Spring Site Chemical Plant, Weldon Spring, Missouri

    International Nuclear Information System (INIS)

    MacDonell, M.M.; Peterson, J.M.

    1991-11-01

    The US Department of Energy, under its Surplus Facilities Management Program (SFMP), is responsible for cleanup activities at the Weldon Spring site, located near Weldon Spring, Missouri. The site consists of two noncontiguous areas: (1) a raffinate pits and chemical plant area and (2) a quarry. This engineering evaluation/cost analysis (EE/CA) report has been prepared to support a proposed removal action to manage 15 nonprocess buildings, identified as the 15 Series buildings, at the chemical plant on the Weldon Spring site. These buildings have been nonoperational for more than 20 years, and the deterioration that has occurred during this time has resulted in a potential threat to site workers, the general public, and the environment. The EE/CA documentation of this proposed action is consistent with guidance from the US Environmental Protection Agency (EPA) that addresses removal actions at sites subject to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended by the Superfund Amendments and Reauthorization Act of 1986. Actions at the Weldon Spring site are subject to CERCLA requirements because the site is on the EPA's National Priorities List. The objectives of this report are to (1) identify alternatives for management of the nonprocess buildings; (2) document the selection of response activities that will mitigate the potential threat to workers, the public, and the environment associated with these buildings; and (3) address environmental impact associated with the proposed action

  19. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  20. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task that requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data, the Lambda and Kappa architectures, are discussed. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and the most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
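    The Lambda architecture discussed above pairs a batch layer, which periodically recomputes views over the full dataset, with a speed layer that tracks events arriving since the last batch run; queries merge both views. A minimal in-memory sketch (all names and data are illustrative, not from the paper):

```python
from collections import Counter

# Toy sketch of the Lambda architecture: the batch layer recomputes a
# view from the immutable master dataset, the speed layer keeps
# incremental counts for newly arrived events, and the serving layer
# merges the two views at query time.

master_dataset = ["page_a", "page_b", "page_a"]  # immutable event log
batch_view = Counter(master_dataset)             # recomputed in bulk

speed_view = Counter()                           # real-time increments

def ingest(event):
    """Speed layer: update the real-time view for each new event."""
    speed_view[event] += 1

def query(key):
    """Serving layer: merge the batch and real-time views."""
    return batch_view[key] + speed_view[key]

ingest("page_a")
print(query("page_a"))  # 2 from the batch view + 1 from the speed layer -> 3
```

    The Kappa architecture, by contrast, drops the batch layer and recomputes views by replaying the same event log through the stream processor.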

  1. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  2. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  3. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

    Full Text Available The evolution of information systems and the growth in the use of the Internet and social networks have caused an explosion in the amount of available data relevant to the activities of companies. Therefore, the treatment of these available data is vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored considering nine sectors of activity: financial, retail, healthcare, transport, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that, despite the potential for big data to grow in the previously identified areas, there are still some challenges that need to be considered and mitigated, namely the privacy of information, the existence of qualified human resources to work with Big Data and the promotion of a data-driven organizational culture.

  4. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to understand the phenomenon better, starting from a clearly stated definition. This book aims to be a starting read for executives who want (and need) to keep pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general - such as ethics, privacy, and ownership – while others concern more specific business situations (e.g., initial public offering, growth st...

  5. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  6. Developing bulk exchange spring magnets

    Science.gov (United States)

    Mccall, Scott K.; Kuntz, Joshua D.

    2017-06-27

    A method of making a bulk exchange spring magnet by providing a magnetically soft material, providing a hard magnetic material, and producing a composite of said magnetically soft material and said hard magnetic material to make the bulk exchange spring magnet. The step of producing a composite of magnetically soft material and hard magnetic material is accomplished by electrophoretic deposition of the magnetically soft material and the hard magnetic material to make the bulk exchange spring magnet.

  7. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available

    In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  8. Will Organization Design Be Affected By Big Data?

    Directory of Open Access Journals (Sweden)

    Giles Slinger

    2014-12-01

    Full Text Available Computing power and analytical methods allow us to create, collate, and analyze more data than ever before. When datasets are unusually large in volume, velocity, and variety, they are referred to as “big data.” Some observers have suggested that in order to cope with big data, (a) organizational structures will need to change and (b) the processes used to design organizations will be different. In this article, we differentiate big data from relatively slow-moving, linked people data. We argue that big data will change organizational structures as organizations pursue the opportunities presented by big data. The processes by which organizations are designed, however, will be relatively unaffected by big data. Instead, organization design processes will be more affected by the complex links found in people data.

  9. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  10. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor thesis is to describe the Big Data problem domain and the OLAP aggregation operations for decision support that are applied to it using the Apache Hadoop technology. The major part of the thesis is devoted to a description of this technology. The last chapter deals with the way the aggregation operations are applied and with the issues involved in implementing them. The thesis closes with an overall evaluation of the work and the possibilities for future use of the resulting system.
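    The OLAP aggregations the thesis applies via Apache Hadoop follow the MapReduce model: map each record to key/value pairs, shuffle the pairs by key, then reduce each group. A toy in-memory sketch of that model (pure Python, not the Hadoop API; the records and keys are illustrative):

```python
from collections import defaultdict

# Toy illustration of the MapReduce model behind Hadoop-style
# aggregation: map -> shuffle (group by key) -> reduce.

records = [("2017", 10.0), ("2017", 5.0), ("2016", 7.5)]

def map_phase(record):
    """Emit (key, value) pairs; here the key is a year dimension."""
    year, amount = record
    yield (year, amount)

def shuffle(pairs):
    """Group all emitted values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """SUM aggregation per group, as in an OLAP roll-up."""
    return (key, sum(values))

pairs = [p for r in records for p in map_phase(r)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result)  # {'2017': 15.0, '2016': 7.5}
```

    Other OLAP aggregations (COUNT, AVG, MAX) differ only in the reduce function.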

  11. Vibro-spring particle size distribution analyser

    International Nuclear Information System (INIS)

    Patel, Ketan Shantilal

    2002-01-01

    This thesis describes the design and development of an automated pre-production particle size distribution analyser for particles in the 20 - 2000 μm size range. This work is a follow-up to the vibro-spring particle sizer reported by Shaeri. In its most basic form, the instrument comprises a horizontally held closed-coil helical spring that is partly filled with the test powder and sinusoidally vibrated in the transverse direction. Particle size distribution data are obtained by stretching the spring to known lengths and measuring the mass of the powder discharged from the spring's coils. The size of the particles, on the other hand, is determined from the spring 'intercoil' distance. The instrument developed by Shaeri had limited use due to its inability to measure sample mass directly. For the device reported here, modifications are made to the original configuration to establish a means of direct sample mass measurement. The feasibility of techniques for measuring the mass of powder retained within the spring is investigated in detail. Initially, the measurement of mass is executed in situ from the vibration characteristics, based on the spring's first-harmonic resonant frequency. This method is often erratic and unreliable due to the particle-particle and particle-spring-wall interactions and the bending of the spring. A much more successful alternative is found in a more complicated arrangement in which the spring forms part of a stiff cantilever system pivoted along its main axis. Here, the sample mass is determined in the 'static mode' by monitoring the cantilever beam's deflection following the termination of vibration. The system performance has been optimised by varying the mechanical design of the key components and the operating procedure, as well as by taking into account the effect of changes in the ambient temperature on the system's response. The thesis also describes the design and development of the ancillary mechanisms. These include the pneumatic
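    The in-situ mass measurement described above infers the sample mass from the spring's first-harmonic resonant frequency. For an idealized lumped spring-mass oscillator (a simplification; the stiffness and readings below are assumed values, not taken from the thesis, and the real device must also correct for the spring's own distributed mass), the relation can be inverted as follows:

```python
import math

# Idealized lumped spring-mass model: the first resonant frequency is
# f = (1 / (2*pi)) * sqrt(k / m), so a measured frequency can be
# inverted to estimate the vibrating mass: m = k / (2*pi*f)**2.

def resonant_frequency(k, m):
    """First resonant frequency [Hz] for stiffness k [N/m], mass m [kg]."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def mass_from_frequency(k, f):
    """Invert the relation to estimate the vibrating mass [kg]."""
    return k / (2.0 * math.pi * f) ** 2

k = 50.0                                 # spring stiffness, N/m (assumed)
f_loaded = resonant_frequency(k, 0.020)  # simulated reading with 20 g of powder
print(round(mass_from_frequency(k, f_loaded), 3))  # 0.02
```

    The abstract's point is that this inversion is fragile in practice, because particle interactions and spring bending shift the apparent frequency, which motivated the static cantilever-deflection method.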

  12. Stock repurchase and Arab Spring empirical evidence from the MENA region

    Directory of Open Access Journals (Sweden)

    Foued Hamouda

    2018-03-01

    Full Text Available This paper examines how repurchase programs are used in the MENA region in the context of the political instability associated with the Arab Spring. We extend the knowledge regarding the relationship between stock repurchases and firm performance. We find that repurchase programs are used differently across countries. In fact, repurchases are negatively related to prior stock price performance. However, the market reacts more favorably to repurchases made by low market capitalization firms and by firms with high book-to-market ratio.

  13. Hydrogeochemical response of groundwater springs during central Italy earthquakes (24 August 2016 and 26-30 October 2016)

    Science.gov (United States)

    Archer, Claire; Binda, Gilberto; Terrana, Silvia; Gambillara, Roberto; Michetti, Alessandro; Noble, Paula; Petitta, Marco; Rosen, Michael; Pozzi, Andrea; Bellezza, Paolo; Brunamonte, Fabio

    2017-04-01

    Co-seismic hydrological and chemical response at groundwater springs following strong earthquakes is a significant concern in the Apennines, a region in central Italy characterized by regional karstic groundwater systems interacting with active normal faults capable of producing Mw 6.5 to 7.0 seismic events. These aquifers also provide water supply to major metropolitan areas in the region. On August 24, 2016, a Mw 6.0 earthquake hit Central Italy in the area where Latium joins Umbria, Marche and Abruzzi; this was immediately followed one hour later by a Mw 5.4 shock. The epicenter of the event was located at the segment boundary between the Mt. Vettore and Mt. Laga faults. On October 26, 2016 and on October 30, 2016, three other big shocks (Mw 5.5, Mw 6.0 and Mw 6.5) again ruptured the Vettore Fault and its NW extension. Immediately after Aug. 24, we sampled springs discharging different aquifers in the Rieti area, including the Peschiera spring, which feeds the aqueduct of Rome. Thermal springs connected with deep groundwater flowpaths were also sampled. These springs, sampled previously in 2014 and 2015, provide some pre-earthquake data. Moreover, we sampled 4 springs along the Mt. Vettore fault system: 3 small springs at Forca di Presta, close to the trace of the earthquake surface ruptures, and two in Castel Sant'Angelo sul Nera. The latter are feeding the Nera aqueduct and the Nerea S.p.A. mineral water plant, which also kindly allowed us to collect bottled water samples from the pre-seismic period. The aim of this study is to evaluate the effects of the strong earthquake sequence on the hydrochemistry and flow paths of groundwater from different aquifer settings, based on analyses before and after the seismic events. The comparison between the responses of springs ca. 40 km from the epicenter (Rieti basin) and the springs located near the epicenter (Castelsantangelo sul Nera and Forca di Presta) is especially significant for understanding the resilience of groundwater

  14. Effect of Deoxidation Process on Inclusion and Fatigue Performance of Spring Steel for Automobile Suspension

    Science.gov (United States)

    Hu, Yang; Chen, Weiqing; Wan, Changjie; Wang, Fangjun; Han, Huaibin

    2018-04-01

    55SiCrA spring steel was smelted in a vacuum induction levitation furnace. The liquid steel was treated by Si deoxidation, by Al modification with Ca treatment, and by Al modification alone, yielding steel samples with deformable Al2O3-SiO2-CaO-MgO inclusions in close contact with the steel matrix, Al2O3-CaO-CaS-SiO2-MgO inclusions surrounded by small voids, or Al2O3(> 80 pct)-SiO2-CaO-MgO inclusions surrounded by big voids, respectively. The effect of the three types of inclusions on fatigue cracks in the steel was studied. The perpendicular and transverse fatigue cracks around the three types of inclusions leading to fracture were found to vary in behavior. Under an applied stress amplitude of 775 MPa, the fatigue lives of the three spring steels decreased from 4.0 × 10^7 to 3.8 × 10^7, and to 3.1 × 10^7 cycles. For an applied stress amplitude of 750 MPa, the fatigue lives of the three spring steels decreased from 5.2 × 10^7 to 4.1 × 10^7, and to 3.4 × 10^7 cycles. Based on the voids around inclusions, the equivalent size of the initial fatigue crack has been newly defined as √(area_inclusion/(1 − CC)), where CC is the contraction coefficient of the inclusion. A reliable forecast model of the critical size of an inclusion leading to fracture was established by incorporating the actual width b_inclusion or diameter d_inclusion of the internal inclusion; the model prediction was found to be in agreement with experimental results.
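    The redefined equivalent crack size can be computed directly from the quoted expression √(area_inclusion/(1 − CC)); a small sketch (units and sample values are illustrative, not taken from the paper):

```python
import math

# Equivalent initial fatigue crack size sqrt(area / (1 - CC)), where CC
# is the contraction coefficient accounting for the void around the
# inclusion. With CC = 0 (no void) this reduces to the classical
# sqrt(area) parameter; a larger void enlarges the effective crack.

def equivalent_crack_size(area_inclusion_um2, cc):
    """Equivalent initial crack size [um] from inclusion area [um^2]."""
    if not 0.0 <= cc < 1.0:
        raise ValueError("contraction coefficient must lie in [0, 1)")
    return math.sqrt(area_inclusion_um2 / (1.0 - cc))

print(round(equivalent_crack_size(100.0, 0.0), 2))   # 10.0
print(round(equivalent_crack_size(100.0, 0.19), 2))  # 11.11
```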

  15. Force delivery of Ni-Ti coil springs.

    Science.gov (United States)

    Manhartsberger, C; Seidenbusch, W

    1996-01-01

    Sentalloy springs (GAC, Central Islip, N.Y.) of the open and closed type were investigated with a specially designed device. The closed-coil springs were subjected to a tensile test and the open-coil springs to a compression test. After a first measurement, the springs were activated for a period of 4 weeks and then reinvestigated with the same procedure. It could be shown distinctly that, for the different coil springs, the force delivery stated by the producer could be achieved only within certain limits. To remain in the martensitic plateau, changed activation ranges (and, for the Sentalloy coil springs white and red of the open and closed type, also changed force deliveries) had to be taken into account. There was a distinct decrease in force delivery between the first and second measurements. After considering the loading curves of all the Sentalloy coil springs and choosing the right activation range for the desired force delivery, it was found that the coil springs deliver superior clinical behavior and open new treatment possibilities.

  16. Flow-induced vibration of helical coil compression springs

    International Nuclear Information System (INIS)

    Stokes, F.E.; King, R.A.

    1983-01-01

    Helical coil compression springs are used in some nuclear fuel assembly designs to maintain holddown and to accommodate thermal expansion. In the reactor environment, the springs are exposed to flowing water, elevated temperatures and pressures, and irradiation. Flow parallel to the longitudinal axis of the spring may excite the spring coils and cause vibration. The purpose of this investigation was to determine the flow-induced vibration (FIV) response characteristics of the helical coil compression springs. Experimental tests indicate that a helical coil spring responds like a single circular cylinder in cross-flow. Two FIV excitation mechanisms control spring vibration. Namely: 1) Turbulent Buffeting causes small amplitude vibration which increases as a function of velocity squared. 2) Vortex Shedding causes large amplitude vibration when the spring natural frequency and Strouhal frequency coincide. Several methods can be used to reduce or to prevent vortex shedding large amplitude vibrations. One method is compressing the spring to a coil pitch-to-diameter ratio of 2 thereby suppressing the vibration amplitude. Another involves modifying the spring geometry to alter its stiffness and frequency characteristics. These changes result in separation of the natural and Strouhal frequencies. With an understanding of how springs respond in the flowing water environment, the spring physical parameters can be designed to avoid large amplitude vibration. (orig.)
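    The vortex-shedding mechanism above can be checked with the standard relation for a circular cylinder in cross-flow, f_s = St·V/d (St ≈ 0.2), since the abstract reports that a helical coil spring responds like a single cylinder. A sketch comparing the shedding frequency with the spring natural frequency (all numeric values are illustrative, not from the paper):

```python
import math

# Screening check for vortex-shedding lock-in on a helical coil spring:
# compute the shedding frequency across the wire and flag coincidence
# with the spring natural frequency, which the abstract identifies as
# the condition for large-amplitude vibration.

def shedding_frequency(velocity_m_s, wire_diameter_m, strouhal=0.2):
    """Vortex-shedding frequency [Hz] for flow across the spring wire."""
    return strouhal * velocity_m_s / wire_diameter_m

def lock_in_risk(f_shed, f_natural, band=0.2):
    """Flag frequency coincidence within a fractional band of f_natural."""
    return abs(f_shed - f_natural) <= band * f_natural

f_s = shedding_frequency(velocity_m_s=2.0, wire_diameter_m=0.002)  # 200.0 Hz
print(f_s, lock_in_risk(f_s, f_natural=205.0))
```

    The mitigations in the abstract map onto this check: compressing the spring or altering its stiffness shifts f_natural away from f_s so the lock-in condition is not met.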

  17. Marble Canyon spring sampling investigation

    International Nuclear Information System (INIS)

    McCulley, B.

    1985-10-01

    The Mississippian Leadville Limestone is the most permeable formation in the lower hydrostratigraphic unit underlying the salt beds of the Paradox Formation in Gibson Dome, Paradox Basin, Utah, which is being considered as a potential nuclear waste repository site. The closest downgradient outcrop of the Mississippian limestone is along the Colorado River in Marble Canyon, Arizona. This report describes the sampling and interpretation of springs in that area to assess the relative contribution of Gibson Dome-type Leadville Limestone ground water to that spring discharge. The high-volume (hundreds of liters per second or thousands of gallons per minute) springs discharging from fault zones in Marble Canyon are mixtures of water recharged west of the Colorado River on the Kaibab Plateau and east of the river in the Kaiparowits basin. No component of Gibson Dome-type Leadville Limestone ground water is evident in major and trace element chemistry or isotopic composition of the Marble Canyon Springs. A low-volume (0.3 liters per second or 5 gallons per minute) spring with some chemical and isotopic characteristics of Gibson Dome-type Leadville Limestone water diluted by Kaiparowits basin-type water issues from a travertine mound in the Bright Angel Shale on the Little Colorado River. However, the stable isotopic composition and bromide levels of that spring discharge, in addition to probable ground-water flow paths, contradict the dilution hypothesis

  18. Open-Coil Retraction Spring

    Directory of Open Access Journals (Sweden)

    Pavankumar Janardan Vibhute

    2011-01-01

    Full Text Available Sliding mechanics has become a popular method for space closure with developments in the preadjusted edgewise appliance. Furthermore, various space-closing auxiliaries have been developed and evaluated extensively for their clinical efficiency. Their effectiveness is enhanced by an optimum force magnitude and a low load-deflection rate (LDR)/force decay. With the advent of NiTi springs in orthodontics, LDRs have been markedly reduced. To use NiTi, the clinician has to depend upon prefabricated closed-coil springs. The "Open Coil Retraction Spring" (OCRS) has been developed utilizing a NiTi open-coil spring for orthodontic space closure. This paper describes the fabrication and clinical application of the OCRS, which has a number of advantages. It sustains a low LDR with an optimum force magnitude. Its design is adjustable for the desired length and force level. It is fail-safe for both activation and deactivation (i.e., it cannot be overactivated, and the decompression limit of the open coil is also controlled by the operator, respectively). The possibility of offsetting the OCRS away from the mucosa helps to reduce its soft-tissue impingement.

  19. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

    of the underlying distributed platform. BigDansing transforms these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic

  20. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  1. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  2. Outer grid strap protruding spring repair apparatus

    International Nuclear Information System (INIS)

    Widener, W.H.

    1987-01-01

    This patent describes a nuclear fuel assembly grid spring repair apparatus for repairing a spring formed on an outer strap of a fuel assembly grid and having a portion protruding outwardly beyond the strap, the apparatus comprising: (a) a support frame defining an opening and having means defining a guide channel extending along the opening in a first direction; (b) means mounted on the frame and being adjustable for attaching the frame to the outer strap of the support grid so that the frame opening is aligned with the outwardly protruding spring on the outer strap; (c) an outer slide having a passageway defined therethrough and being mounted in the guide channel for reciprocable movement along the frame opening in the first direction for aligning the passageway with the outwardly protruding portion of the spring on the outer strap. The outer slide also has means defining a guide way extending along the passageway in a second direction generally orthogonal to the first direction; (d) a spring reset mechanism being operable for resetting the protruding spring to a nonprotruding position relative to the outer strap when the mechanism is aligned with the protruding portion of the spring; and (e) an inner slide supporting the spring reset mechanism and being mounted to the guide way for reciprocable movement along the passageway of the outer slide in the second direction for aligning the spring reset mechanism with the protruding portion of the spring on the outer strap

  3. Low-temperature geothermal water in Utah: A compilation of data for thermal wells and springs through 1993

    Energy Technology Data Exchange (ETDEWEB)

    Blackett, R.E.

    1994-07-01

    The Geothermal Division of DOE initiated the Low-Temperature Geothermal Resources and Technology Transfer Program, following a special appropriation by Congress in 1991, to encourage wider use of lower-temperature geothermal resources through direct-use, geothermal heat-pump, and binary-cycle power conversion technologies. The Oregon Institute of Technology (OIT), the University of Utah Research Institute (UURI), and the Idaho Water Resources Research Institute organized the federally funded program and enlisted the help of ten western states to carry out phase one. This first phase involves updating the inventory of thermal wells and springs with the help of the participating state agencies. The state resource teams inventory thermal wells and springs and compile relevant information on each source. OIT and UURI cooperatively administer the program: OIT provides overall contract management, while UURI provides technical direction to the state teams. Phase one of the program focuses on replacing part of GEOTHERM by building a new database of low- and moderate-temperature geothermal systems for use on personal computers. For Utah, this involved (1) identifying sources of geothermal data, (2) designing a database structure, (3) entering the new data, (4) checking for errors, inconsistencies, and duplicate records, (5) organizing the data into reporting formats, and (6) generating a map (1:750,000 scale) of Utah showing the locations and record identification numbers of thermal wells and springs.

  4. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of the reproducibility of research studies and the management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research but may also be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  5. Prediction of Spring Rate and Initial Failure Load due to Material Properties of Composite Leaf Spring

    International Nuclear Information System (INIS)

    Oh, Sung Ha; Choi, Bok Lok

    2014-01-01

    This paper presented analysis methods for adapting E-glass fiber/epoxy composite (GFRP) materials to an automotive leaf spring. It focused on the static behaviors of the leaf spring due to the material composition and its fiber orientation. The material properties of the GFRP composite were directly measured based on the ASTM standard test. A reverse implementation was performed to obtain the complete set of in-situ fiber and matrix properties from the ply test results. Next, the spring rates of the composite leaf spring were examined according to the variation of material parameters such as the fiber angles and resin contents of the composite material. Finally, progressive failure analysis was conducted to identify the initial failure load by means of an elastic stress analysis and specific damage criteria. As a result, it was found that damage first occurred along the edge of the leaf spring owing to the shear stresses

  6. Anticipated Changes in Conducting Scientific Data-Analysis Research in the Big-Data Era

    Science.gov (United States)

    Kuo, Kwo-Sen; Seablom, Michael; Clune, Thomas; Ramachandran, Rahul

    2014-05-01

    A Big-Data environment is one that is capable of orchestrating quick-turnaround analyses involving large volumes of data for numerous simultaneous users. Based on our experiences with a prototype Big-Data analysis environment, we anticipate some important changes in research behaviors and processes while conducting scientific data-analysis research in the near future as such Big-Data environments become the mainstream. The first anticipated change will be the reduced effort and difficulty in most parts of the data management process. A Big-Data analysis environment is likely to house most of the data required for a particular research discipline along with appropriate analysis capabilities. This will reduce the need for researchers to download local copies of data. In turn, this also reduces the need for compute and storage procurement by individual researchers or groups, as well as associated maintenance and management afterwards. It is almost certain that Big-Data environments will require a different "programming language" to fully exploit the latent potential. In addition, the process of extending the environment to provide new analysis capabilities will likely be more involved than, say, compiling a piece of new or revised code. We thus anticipate that researchers will require support from dedicated organizations associated with the environment that are composed of professional software engineers and data scientists. A major benefit will likely be that such extensions are of higher-quality and broader applicability than ad hoc changes by physical scientists. Another anticipated significant change is improved collaboration among the researchers using the same environment. Since the environment is homogeneous within itself, many barriers to collaboration are minimized or eliminated. For example, data and analysis algorithms can be seamlessly shared, reused and re-purposed. In conclusion, we will be able to achieve a new level of scientific productivity in the

  8. Grande Ronde Basin Spring Chinook Salmon Captive Broodstock Program, 2008 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Hoffnagle, Timothy L.; Hair, Donald; Gee, Sally

    2009-03-31

    The Grande Ronde Basin Spring Chinook Salmon Captive Broodstock Program is designed to rapidly increase numbers of Chinook salmon in stocks that are in imminent danger of extirpation in Catherine Creek (CC), Lostine River (LR) and the upper Grande Ronde River (GR). Natural parr are captured and reared to adulthood in captivity, spawned (within stocks), and their progeny reared to smoltification before being released into the natal stream of their parents. This program is co-managed by ODFW, the National Marine Fisheries Service, the Nez Perce Tribe and the Confederated Tribes of the Umatilla Indian Reservation. Presmolt rearing was initially conducted at Lookingglass Fish Hatchery (LFH), but parr collected in 2003 and later were reared at Wallowa Fish Hatchery (WFH). Post-smolt rearing is conducted at Bonneville Fish Hatchery (BOH - freshwater) and at Manchester Research Station (MRS - saltwater). The CC and LR programs are being terminated, as these populations have achieved the goal of a consistent return of 150 naturally spawning adults, so the 2005 brood year was the last brood year collected for these populations. The Grande Ronde River program continued with 300 fish collected each year. Currently, we are attempting to collect 150 natural parr and incorporate 150 parr collected as eggs from females with low ELISA levels from the upper Grande Ronde River Conventional Hatchery Program. This is part of a comparison of two methods of obtaining fish for a captive broodstock program: natural fish vs. those spawned in captivity. In August 2007, we collected 152 parr (BY 2006) from the upper Grande Ronde River and also have 155 Grande Ronde River parr (BY 2006) that were hatched from eyed eggs at LFH. During 2008, we were unable to collect natural parr from the upper Grande Ronde River. Therefore, we obtained 300 fish from low-ELISA females from the upper Grande Ronde River Conventional Program.
In October 2008 we obtained 170 eyed eggs from the upper Grande Ronde River Conventional

  9. Was the big bang hot

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  10. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013 CERN inaugurates the Passport to the Big Bang project at a major public event: a scientific tourist trail through the Pays de Gex and the Canton of Geneva. Poster and programme.

  11. Business Metrics for High-Performance Homes: A Colorado Springs Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Beach, R. [IBACOS, Inc., Pittsburgh, PA (United States); Jones, A. [IBACOS, Inc., Pittsburgh, PA (United States)

    2016-04-26

    The building industry needs to understand how energy ratings can impact homebuilders. Of interest is how energy efficiency may or may not have a positive impact on homebuilders’ business success. Focusing on Colorado Springs, Colorado, as a case study, the U.S. Department of Energy’s Building America research team IBACOS suggests a win–win between a builder’s investment in energy efficiency and that builder’s ability to sell homes. Although this research did not ultimately determine why a correlation may exist, a builder’s investment in voluntary energy-efficiency programs correlated with that builder’s ability to survive the Great Recession of 2007 to 2009. This report explores the relationship between energy-efficiency ratings and the market performance of several builders in Colorado Springs.

  12. A Control Approach for Performance of Big Data Systems

    OpenAIRE

    Berekmeri , Mihaly; Serrano , Damián; Bouchenak , Sara; Marchand , Nicolas; Robu , Bogdan

    2014-01-01

    We are at the dawn of a huge data explosion; companies therefore have fast-growing amounts of data to process. For this purpose Google developed MapReduce, a parallel programming paradigm which is slowly becoming the de facto tool for Big Data analytics. Although to some extent its use is already widespread in industry, ensuring performance constraints for such a complex system poses great challenges, and its management requires a high level of expertise. This paper...
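
The map/shuffle/reduce pattern named in this abstract can be illustrated with a minimal, purely illustrative Python sketch (the pattern itself, not Google's or Hadoop's actual implementation):

```python
from collections import defaultdict

def map_phase(doc):
    # Map: emit (key, value) pairs from each input record
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the grouped values for each key
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big analytics", "data to process"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
word_counts = reduce_phase(shuffle(pairs))
print(word_counts["big"], word_counts["data"])  # → 2 2
```

In a real MapReduce system the same three phases run distributed across a cluster, with the shuffle performed over the network between map and reduce workers.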

  13. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  14. Integrating R and Hadoop for Big Data Analysis

    OpenAIRE

    Bogdan Oancea; Raluca Mariana Dragoescu

    2014-01-01

    Analyzing and working with big data could be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools ...

  15. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics is a feeding frenzy that has been fueled by the production of many next-generation-sequencing-based data sets that are seeking to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, there are a number of substantial challenges that currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  16. Pro Spring security

    CERN Document Server

    Scarioni, Carlo

    2013-01-01

    Security is a key element in the development of any non-trivial application. The Spring Security Framework provides a comprehensive set of functionalities to implement industry-standard authentication and authorization mechanisms for Java applications. Pro Spring Security is a reference and advanced tutorial that does the following: guides you through the implementation of the security features for a Java web application by presenting consistent examples built from the ground up; demonstrates the different authentication and authorization methods to secure enterprise-level applications

  17. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction into the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model is provided in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges the fact that we have just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths and weaknesses as well as its risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open access online format, allowing a broader readership, especially in resource-poor countries. For the first time, thanks to the online format, the IMIA Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  18. 1988 Hanford riverbank springs characterization report

    International Nuclear Information System (INIS)

    Dirkes, R.L.

    1990-12-01

    This report presents the results of a special study undertaken to characterize the riverbank springs (i.e., ground-water seepage) entering the Columbia River along the Hanford Site. Radiological and nonradiological analyses were performed. River water samples were also analyzed from upstream and downstream of the Site as well as from the immediate vicinity of the springs. In addition, irrigation return water and spring water entering the river along the shoreline opposite Hanford were analyzed. Hanford-origin contaminants were detected in spring water entering the Columbia River along the Hanford Site. The type and concentrations of contaminants in the spring water were similar to those known to exist in the ground water near the river. The location and extent of the contaminated discharges compared favorably with recent ground-water reports and predictions. Spring discharge volumes remain very small relative to the flow of the Columbia. Downstream river sampling demonstrates the impact of ground-water discharges to be minimal, and negligible in most cases. Radionuclide concentrations were below US Department of Energy Derived Concentration Guides (DCGs), with the exception of 90Sr near the 100-N Area. Tritium, while below the DCG, was detected at concentrations above the US Environmental Protection Agency drinking water standards in several springs. All other radionuclide concentrations were below drinking water standards. Nonradiological contaminants were generally undetectable in the spring water. River water contaminant concentrations, outside of the immediate discharge zones, were below drinking water standards in all cases. 19 refs., 5 figs., 12 tabs

  19. A springs actuated finger exoskeleton: From mechanical design to spring variables evaluation.

    Science.gov (United States)

    Bortoletto, Roberto; Mello, Ashley N; Piovesan, Davide

    2017-07-01

    In the context of post-stroke patients suffering from hemiparesis of the hand, robot-aided neuro-motor rehabilitation allows for intensive rehabilitation treatments and quantitative evaluation of patients' progress. This work presents the design and evaluation of a spring-actuated finger exoskeleton. In particular, the spring variables and the interaction forces between the assembly and the hand were investigated in order to assess the effectiveness of the proposed exoskeleton.

  20. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Cees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  1. Portrait of a Geothermal Spring, Hunter's Hot Springs, Oregon.

    Science.gov (United States)

    Castenholz, Richard W

    2015-01-27

    Although alkaline Hunter's Hot Springs in southeastern Oregon has been studied extensively for over 40 years, most of these studies and the subsequent publications predate the advent of molecular methods. However, there are many field observations and laboratory experiments that reveal the major aspects of the phototrophic species composition within various physical and chemical gradients of these springs. Relatively constant temperature boundaries demarcate the upper limit of the unicellular cyanobacterium Synechococcus at 73-74 °C (the worldwide upper limit for photosynthesis) and the upper limit for Chloroflexus at 68-70 °C. The upper limit for the cover of the filamentous cyanobacterium Geitlerinema (Oscillatoria) is at 54-55 °C, and the in situ lower limit for all three of these phototrophs is at 47-48 °C, set by the upper temperature limit of the grazing ostracod Thermopsis. The in situ upper limit for the cyanobacteria Pleurocapsa and Calothrix, which are more grazer-resistant and grazer-dependent, is at ~47-48 °C. All of these demarcations are easily visible in the field. In addition, there is biosulfide production in some sections of the springs that has a large impact on the microbiology. Most of the temperature and chemical limits have been explained by field and laboratory experiments.

  2. Physics with Big Karl Brainstorming. Abstracts

    International Nuclear Information System (INIS)

    Machner, H.; Lieb, J.

    2000-08-01

    Before summarizing details of the meeting, a short description of the spectrometer facility Big Karl is given. The facility is essentially a new instrument using refurbished dipole magnets from its predecessor. The large-acceptance quadrupole magnets and the beam optics are new. Big Karl has a design very similar to the focusing spectrometers at MAMI (Mainz), AGOR (Groningen) and the high-resolution spectrometer (HRS) in Hall A at Jefferson Laboratory, with ΔE/E = 10^-4 but a somewhat lower maximum momentum. The focal plane detectors, consisting of multiwire drift chambers and scintillating hodoscopes, are similar. Unlike HRS, Big Karl still needs Cerenkov counters and polarimeters in its focal plane; detectors which are necessary to perform some of the experiments proposed during the brainstorming. In addition, Big Karl allows emission-angle reconstruction via track measurements in its focal plane with high resolution. In the following, the physics highlights and the proposed and potential experiments are summarized. During the meeting it became obvious that the physics to be explored at Big Karl can be grouped into five distinct categories, and this summary is organized accordingly. (orig.)

  3. Soft tissue modelling with conical springs.

    Science.gov (United States)

    Omar, Nadzeri; Zhong, Yongmin; Jazar, Reza N; Subic, Aleksandar; Smith, Julian; Shirinzadeh, Bijan

    2015-01-01

    This paper presents a new method for real-time modelling of soft tissue deformation. It improves the traditional mass-spring model with conical springs to deal with the nonlinear mechanical behaviours of soft tissues. A conical spring model is developed to predict soft tissue deformation with reference to deformation patterns. The model parameters are formulated according to tissue deformation patterns, and the nonlinear behaviours of soft tissues are modelled with the stiffness variation of the conical spring. Experimental results show that the proposed method can describe different tissue deformation patterns using one single equation and also exhibits the typical mechanical behaviours of soft tissues.
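
As a rough illustration of the stiffness-variation idea described in this abstract, the sketch below uses one hypothetical nonlinear spring law in which stiffness grows as deflection approaches the spring's solid length. The function and its parameters are illustrative assumptions; the paper's actual formulation is not given in the abstract:

```python
def conical_spring_force(x, k0, x_solid):
    """Restoring force of a notional conical spring (hypothetical model).

    Assumed law: F(x) = k0 * x / (1 - x / x_solid), so the effective
    stiffness rises as coils progressively bottom out toward the solid
    length x_solid. Valid for deflections 0 <= x < x_solid.
    """
    if not 0 <= x < x_solid:
        raise ValueError("deflection out of range")
    return k0 * x / (1.0 - x / x_solid)

# Stiffening behaviour: equal deflection steps need ever-larger force increments
f1, f2, f3 = (conical_spring_force(x, k0=100.0, x_solid=0.05)
              for x in (0.01, 0.02, 0.03))
assert f3 - f2 > f2 - f1  # nonlinear, progressively stiffening response
```

A linear mass-spring model would make the three force increments equal; the point of a conical-spring element is precisely that they are not.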

  4. Sediment Budgets and Sources Inform a Novel Valley Bottom Restoration Practice Impacted by Legacy Sediment: The Big Spring Run, PA, Restoration Experiment

    Science.gov (United States)

    Walter, R. C.; Merritts, D.; Rahnis, M. A.; Gellis, A.; Hartranft, J.; Mayer, P. M.; Langland, M.; Forshay, K.; Weitzman, J. N.; Schwarz, E.; Bai, Y.; Blair, A.; Carter, A.; Daniels, S. S.; Lewis, E.; Ohlson, E.; Peck, E. K.; Schulte, K.; Smith, D.; Stein, Z.; Verna, D.; Wilson, E.

    2017-12-01

    Big Spring Run (BSR), a small agricultural watershed in southeastern Pennsylvania, is located in the Piedmont Physiographic Province, which has the highest nutrient and sediment yields in the Chesapeake Bay watershed. To effectively reduce nutrient and sediment loading it is important to monitor the effect of management practices on pollutant reduction. Here we present results of an ongoing study, begun in 2008, to understand the impact of a new valley bottom restoration strategy for reducing surface water sediment and nutrient loads. We test the hypotheses that removing legacy sediments will reduce sediment and phosphorus loads, and that restoring eco-hydrological functions of a buried Holocene wetland (Walter & Merritts 2008) will improve surface and groundwater quality by creating accommodation space to trap sediment and process nutrients. Comparisons of pre- and post-restoration gage data show that restoration lowered the annual sediment load by at least 118 t yr-1, or >75%, from the 1000 m-long restoration reach, with the entire reduction accounted for by legacy sediment removal. Repeat RTK-GPS surveys of pre-restoration stream banks verified that >90 t yr-1 of suspended sediment was from bank erosion within the restoration reach. Mass balance calculations of 137Cs data indicate 85-100% of both the pre-restoration and post-restoration suspended sediment storm load was from stream bank sources. This is consistent with trace element data which show that 80-90 % of the pre-restoration outgoing suspended sediment load at BSR was from bank erosion. Meanwhile, an inventory of fallout 137Cs activity from two hill slope transects adjacent to BSR yields average modern upland erosion rates of 2.7 t ha-1 yr-1 and 5.1 t ha-1 yr-1, showing modest erosion on slopes and deposition at toe of slopes. We conclude that upland farm slopes contribute little soil to the suspended sediment supply within this study area, and removal of historic valley bottom sediment effectively

  5. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    Science.gov (United States)

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  6. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of the data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic studies have only recently paid attention to Big Data applications in water resources. As a result, water-resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data-Driven Decision) will be used more in water resources management in the future.

  7. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Farming is undergoing a digital revolution. Our review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. “app”? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies, and it allows for a focus on the material consequences of big data in society.

  8. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data, introducing novel optimization algorithms and codes capable of working in the big data setting as well as applications of big data optimization, for interested academics and practitioners and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyze large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  9. Una aproximación a Big Data = An approach to Big Data

    OpenAIRE

    Puyol Moreno, Javier

    2014-01-01

    Big Data can be considered a trend in the advance of technology that has opened the door to a new approach to understanding and decision making, used to describe the enormous quantities of data (structured, unstructured, and semi-structured) that would take too long and cost too much to load into a relational database for analysis. Thus, the concept of Big Data applies to all the information that cannot be processed or analyzed using conventional tools...

  10. Toward a Literature-Driven Definition of Big Data in Healthcare.

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
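The paper's proposed criterion is a one-line calculation, so it can be checked directly. A minimal sketch follows, assuming the paper's Log is base 10 (consistent with the magnitudes it discusses); the two example datasets are invented for illustration:

```python
import math

def is_big_data(n, p):
    """Apply the proposed criterion: log10(n * p) >= 7, where n is the
    number of statistical individuals and p the number of variables."""
    return math.log10(n * p) >= 7

# A 200-variable EMR extract over 100,000 patients: log10(2e7) ~ 7.3
print(is_big_data(100_000, 200))  # True
# A 50-variable cohort of 2,000 patients: log10(1e5) = 5
print(is_big_data(2_000, 50))     # False
```

Note how the criterion is symmetric in n and p: a few individuals with millions of variables (as in omics) qualify just as a huge cohort with few variables does.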

  11. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health-care system to improve quality of care, including predictive modeling for risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  12. Big-Fish-Little-Pond Effect: Generalizability and Moderation--Two Sides of the Same Coin

    Science.gov (United States)

    Seaton, Marjorie; Marsh, Herbert W.; Craven, Rhonda G.

    2010-01-01

    Research evidence for the big-fish-little-pond effect (BFLPE) has demonstrated that attending high-ability schools has a negative effect on academic self-concept. Utilizing multilevel modeling with the 2003 Program for International Student Assessment database, the present investigation evaluated the generalizability and robustness of the BFLPE…

  13. Peer mentoring programs benefits in terms of civic engagement and social capital

    OpenAIRE

    Šedinová, Petra

    2014-01-01

    The main goal of this diploma thesis is to explore the influence of peer mentoring programs, as a tool of community intervention for children and adolescents, from the point of view of civic engagement and social capital. The influence is assessed for the recipients of mentoring program care: children and adolescents exposed to risk factors or risk environments. The thesis is a secondary analysis of evaluation research on the mentoring program Big Brother Big Sisters - Pět P in Cze...

  14. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler of unprecedented research studies and of new models of healthcare delivery. It is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is necessary to understand where big data are present and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications for the reproducibility of research studies and the management of privacy and data access, and proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research but may also be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  15. Tillage methods and mulch on water saving and yield of spring maize in Chitwan

    Directory of Open Access Journals (Sweden)

    Ishwari Prasad Upadhyay

    2016-12-01

    Full Text Available Tillage methods and mulch influence the productivity and water requirement of spring maize; hence a field experiment was conducted at the National Maize Research Program, Rampur, in the spring seasons of 2011 and 2012 with the objective of evaluating different tillage methods, with and without mulch, for their effects on the water requirement and grain yield of spring maize. The experiment was laid out in a two-factor factorial randomized complete design with three replications. The treatments consisted of tillage method (permanent bed, zero tillage, and conventional tillage) and mulch (with and without). Irrigation timing was fixed at the knee-high, tasseling, and milking/dough stages. Data on number of plants, number of ears, thousand-grain weight, and grain yield were recorded and analyzed using GenStat. The two-year combined result showed that tillage method and mulch significantly influenced the grain yield and water requirement of spring maize. Grain yield was highest in permanent beds with mulch (4626 kg ha-1), followed by zero tillage with mulch (3838 kg ha-1), whereas the total water applied during the crop period was highest in conventional tillage without mulch, followed by conventional tillage with mulch. The permanent bed with mulch increased the yield and reduced the water requirement of spring maize in Chitwan.

  16. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  17. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  18. Getting Open Source Right for Big Data Analytics: Software Sharing, Governance, Collaboration and Most of All, Fun!

    Science.gov (United States)

    Mattmann, C. A.

    2013-12-01

    A wave of open source big data analytic infrastructure is currently shaping government, private sector, and academia. Projects are consuming, adapting, and contributing back to various ecosystems of software e.g., the Apache Hadoop project and its ecosystem of related efforts including Hive, HBase, Pig, Oozie, Ambari, Knox, Tez and Yarn, to name a few; the Berkeley AMPLab stack which includes Spark, Shark, Mesos, Tachyon, BlinkDB, MLBase, and other emerging efforts; MapR and its related stack of technologies, offerings from commercial companies building products around these tools e.g., Hortonworks Data Platform (HDP), Cloudera's CDH project, etc. Though the technologies all offer different capabilities including low latency support/in-memory, versus record oriented file I/O, high availability, support for the Map Reduce programming paradigm or other dataflow/workflow constructs, there is a common thread that binds these products - they are all released under an open source license e.g., Apache2, MIT, BSD, GPL/LGPL, etc.; all thrive in various ecosystems, such as Apache, or Berkeley AMPLab; all are developed collaboratively, and all technologies provide plug in architecture models and methodologies for allowing others to contribute, and participate via various community models. This talk will cover the open source aspects and governance aspects of the aforementioned Big Data ecosystems and point out the differences, subtleties, and implications of those differences. The discussion will be by example, using several national deployments and Big Data initiatives stemming from the Administration including DARPA's XDATA program; NASA's CMAC program; NSF's EarthCube and geosciences BigData projects. Lessons learned from these efforts in terms of the open source aspects of these technologies will help guide the AGU community in their use, deployment and understanding.

  19. Isolators Including Main Spring Linear Guide Systems

    Science.gov (United States)

    Goold, Ryan (Inventor); Buchele, Paul (Inventor); Hindle, Timothy (Inventor); Ruebsamen, Dale Thomas (Inventor)

    2017-01-01

    Embodiments of isolators, such as three parameter isolators, including a main spring linear guide system are provided. In one embodiment, the isolator includes first and second opposing end portions, a main spring mechanically coupled between the first and second end portions, and a linear guide system extending from the first end portion, across the main spring, and toward the second end portion. The linear guide system expands and contracts in conjunction with deflection of the main spring along the working axis, while restricting displacement and rotation of the main spring along first and second axes orthogonal to the working axis.

  20. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  1. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

    Full Text Available This paper's objective is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": the huge institution which, in parallel with the "Big Bank," was capable of ensuring stability in the capitalist system and regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  2. The Multisyllabic Word Dilemma: Helping Students Build Meaning, Spell, and Read "Big" Words.

    Science.gov (United States)

    Cunningham, Patricia M.

    1998-01-01

    Looks at what is known about multisyllabic words, which is a lot more than educators knew when the previous generation of multisyllabic word instruction was created. Reviews the few studies that have carried out instructional approaches to increase students' ability to decode big words. Outlines a program of instruction, based on what is currently…

  3. Classical propagation of strings across a big crunch/big bang singularity

    International Nuclear Information System (INIS)

    Niz, Gustavo; Turok, Neil

    2007-01-01

    One of the simplest time-dependent solutions of M theory consists of nine-dimensional Euclidean space times 1+1-dimensional compactified Milne space-time. With a further modding out by Z_2, the space-time represents two orbifold planes which collide and re-emerge, a process proposed as an explanation of the hot big bang [J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, Phys. Rev. D 64, 123522 (2001).][P. J. Steinhardt and N. Turok, Science 296, 1436 (2002).][N. Turok, M. Perry, and P. J. Steinhardt, Phys. Rev. D 70, 106004 (2004).]. When the two planes are near, the light states of the theory consist of winding M2-branes, describing fundamental strings in a particular ten-dimensional background. They suffer no blue-shift as the M theory dimension collapses, and their equations of motion are regular across the transition from big crunch to big bang. In this paper, we study the classical evolution of fundamental strings across the singularity in some detail. We also develop a simple semiclassical approximation to the quantum evolution which allows one to compute the quantum production of excitations on the string and implement it in a simplified example

  4. Weldon Spring Site environmental report for calendar year 1993

    International Nuclear Information System (INIS)

    1994-05-01

    This Site Environmental Report for Calendar Year 1993 describes the environmental monitoring programs at the Weldon Spring Site Remedial Action Project (WSSRAP). The objectives of these programs are to assess actual or potential exposure to contaminant effluents from the project area by providing public use scenarios and dose estimates, to demonstrate compliance with Federal and State permitted levels, and to summarize trends and/or changes in contaminant concentrations identified by the environmental monitoring program. In 1993, the maximum committed dose to a hypothetical individual at the chemical plant site perimeter was 0.03 mrem (0.0003 mSv). The maximum committed dose to a hypothetical individual at the boundary of the Weldon Spring Quarry was 1.9 mrem (0.019 mSv). These scenarios assume an individual walking along the perimeter of the site (once a day at the chemical plant/raffinate pits and twice a day at the quarry) 250 days per year. This hypothetical individual also consumes fish, sediment, and water from lakes and other bodies of water in the area. The collective dose, based on an affected population of 112,000, was 0.12 person-rem (0.0012 person-Sv). This calculation is based on recreational use of the August A. Busch Memorial Conservation Area and the Missouri Department of Conservation recreational trail (the Katy Trail) near the quarry. These estimates are below the U.S. Department of Energy requirement of 100 mrem (1 mSv) annual committed effective dose equivalent for all exposure pathways. Results from air monitoring for the National Emission Standards for Hazardous Air Pollutants (NESHAPs) program indicated that the estimated dose was 0.38 mrem, which is below the U.S. Environmental Protection Agency (EPA) standard of 10 mrem per year
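The collective-dose figure above implies a very small average individual dose, which is worth making explicit. A back-of-envelope check (the input values are quoted from the report; the arithmetic itself is only illustrative):

```python
# Figures quoted in the report: collective dose and affected population.
collective_dose_person_rem = 0.12
population = 112_000

# Implied average individual dose, converted to mrem (1 rem = 1000 mrem).
avg_dose_mrem = collective_dose_person_rem / population * 1000
print(f"{avg_dose_mrem:.4f} mrem per person")  # 0.0011 mrem per person
```

That average is roughly five orders of magnitude below the 100 mrem DOE annual limit cited above; the limit applies to the maximally exposed individual, not the average, but the comparison gives a sense of scale.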

  5. The Information Panopticon in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Martin Berner

    2014-04-01

    Full Text Available Taking advantage of big data opportunities is challenging for traditional organizations. In this article, we take a panoptic view of big data – obtaining information from more sources and making it visible to all organizational levels. We suggest that big data requires the transformation from command and control hierarchies to post-bureaucratic organizational structures wherein employees at all levels can be empowered while simultaneously being controlled. We derive propositions that show how to best exploit big data technologies in organizations.

  6. The NASA Beyond Einstein Program

    Science.gov (United States)

    White, Nicholas E.

    2006-01-01

    Einstein's legacy is incomplete: his theory of general relativity raises, but cannot answer, three profound questions: What powered the big bang? What happens to space, time, and matter at the edge of a black hole? And what is the mysterious dark energy pulling the Universe apart? The Beyond Einstein program within NASA's Office of Space Science aims to answer these questions, employing a series of missions linked by powerful new technologies and complementary approaches toward shared science goals. The Beyond Einstein program has three linked elements which advance science and technology toward two visions: to detect directly gravitational wave signals from the earliest possible moments of the Big Bang, and to image the event horizon of a black hole. The central element is a pair of Einstein Great Observatories, Constellation-X and LISA. Constellation-X is a powerful new X-ray observatory dedicated to X-ray spectroscopy. LISA is the first space-based gravitational wave detector. These powerful facilities will blaze new paths to the questions about black holes, the Big Bang, and dark energy. The second element is a series of competitively selected Einstein Probes, each focused on one of the science questions, including a mission dedicated to resolving the dark energy mystery. The third element is a program of technology development, theoretical studies, and education. The Beyond Einstein program is a new element in the proposed NASA budget for 2004. This talk will give an overview of the program and the missions contained within it.

  7. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research To optimize our current data collection by adopting new strategies from outside radiation oncology To determine what new knowledge big data can provide for clinical decision support for personalized medicine L. Xing, NIH/NCI Google Inc.

  8. WE-H-BRB-00: Big Data in Radiation Oncology

    International Nuclear Information System (INIS)

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research To optimize our current data collection by adopting new strategies from outside radiation oncology To determine what new knowledge big data can provide for clinical decision support for personalized medicine L. Xing, NIH/NCI Google Inc.

  9. Optimizing Hadoop Performance for Big Data Analytics in Smart Grid

    Directory of Open Access Journals (Sweden)

    Mukhtaj Khan

    2017-01-01

    Full Text Available The rapid deployment of Phasor Measurement Units (PMUs in power systems globally is leading to Big Data challenges. New high performance computing techniques are now required to process an ever increasing volume of data from PMUs. To that extent the Hadoop framework, an open source implementation of the MapReduce computing model, is gaining momentum for Big Data analytics in smart grid applications. However, Hadoop has over 190 configuration parameters, which can have a significant impact on the performance of the Hadoop framework. This paper presents an Enhanced Parallel Detrended Fluctuation Analysis (EPDFA algorithm for scalable analytics on massive volumes of PMU data. The novel EPDFA algorithm builds on an enhanced Hadoop platform whose configuration parameters are optimized by Gene Expression Programming. Experimental results show that the EPDFA is 29 times faster than the sequential DFA in processing PMU data and 1.87 times faster than a parallel DFA, which utilizes the default Hadoop configuration settings.
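The sequential detrended fluctuation analysis (DFA) that the EPDFA algorithm above parallelizes can be sketched with standard NumPy. This is a generic textbook DFA, not the authors' EPDFA code; the window sizes and the white-noise test series are illustrative:

```python
import numpy as np

def dfa_exponent(x, window_sizes):
    """Estimate the DFA scaling exponent alpha of a time series x.

    Classic DFA: integrate the mean-subtracted series into a profile,
    split the profile into non-overlapping windows, remove a linear
    trend in each window, and fit log F(n) vs log n.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())  # integrated profile
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segments = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        mse = []
        for seg in segments:
            # Least-squares linear detrend within the window.
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            mse.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(mse)))
    # Slope of the log-log fit is the scaling exponent alpha.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(8192)
alpha = dfa_exponent(white_noise, [16, 32, 64, 128, 256])
print(round(alpha, 2))  # close to 0.5 for uncorrelated noise
```

The outer loop over window sizes, and the inner loop over windows, are independent computations, which is exactly what makes DFA amenable to the MapReduce-style parallelization the paper describes.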

  10. De impact van Big Data op Internationale Betrekkingen

    NARCIS (Netherlands)

    Zwitter, Andrej

    Big Data changes our daily lives, but does it also change international politics? In this contribution, Andrej Zwitter (NGIZ chair at Groningen University) argues that Big Data impacts on international relations in ways that we only now start to understand. To comprehend how Big Data influences

  11. Developing the role of big data and analytics in health professional education.

    Science.gov (United States)

    Ellaway, Rachel H; Pusic, Martin V; Galbraith, Robert M; Cameron, Terri

    2014-03-01

    As we capture more and more data about learners, their learning, and the organization of their learning, our ability to identify emerging patterns and to extract meaning grows exponentially. The insights gained from the analyses of these large amounts of data are only helpful to the extent that they can be the basis for positive action such as knowledge discovery, improved capacity for prediction, and anomaly detection. Big Data involves the aggregation and melding of large and heterogeneous datasets, while education analytics involves looking for patterns in educational practice or performance in single or aggregate datasets. Although it seems likely that the use of education analytics and Big Data techniques will have a transformative impact on health professional education, there is much yet to be done before they can become part of mainstream health professional education practice. If health professional education is to be accountable for how its programs are run and developed, then health professional educators will need to be ready to deal with the complex and compelling dynamics of analytics and Big Data. This article provides an overview of these emerging techniques in the context of health professional education.

  12. Big data and analytics strategic and organizational impacts

    CERN Document Server

    Morabito, Vincenzo

    2015-01-01

    This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners...

  13. Masters of the springs

    DEFF Research Database (Denmark)

    Laursen, Steffen

    2010-01-01

    flanked by villages that relied on these water resources for agricultural production. The springs emerged in the zone separating the cemeteries from the settlements. The freshwater springs were actively incorporated into the religious landscape of the dead by consistently erecting mounds of a particular...... for water - a process which perhaps also is evidenced by temple constructions at Barbar, Umm al-Sujur and Abu Zaydan....

  14. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  15. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

    Full Text Available Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.

  16. Toward a Literature-Driven Definition of Big Data in Healthcare

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log⁡(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  17. Optical spring effect in nanoelectromechanical systems

    International Nuclear Information System (INIS)

    Tian, Feng; Zhou, Guangya; Du, Yu; Chau, Fook Siong; Deng, Jie

    2014-01-01

    In this Letter, we report a hybrid system consisting of nano-optical and nano-mechanical springs, in which the optical spring effect works to adjust the mechanical frequency of a nanoelectromechanical systems resonator. Nano-scale folded beams are fabricated as the mechanical springs, and double-coupled one-dimensional photonic crystal cavities are used to pump the "optical spring." The dynamic characteristics of this hybrid system are measured and analyzed at both low and high input optical powers. This study extends the physics of optomechanics to complex nano-opto-electro-mechanical systems (NOEMS) and could benefit future applications of NOEMS in chip-level communication and sensing.

  18. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  19. Big Data - What is it and why it matters.

    Science.gov (United States)

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time yet, despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets and the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  20. Research on information security in big data era

    Science.gov (United States)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the challenges and causes of the data security problems brought by big data, discusses the development trend of network attacks against the background of big data, and puts forward the authors' own opinions on the development of security defenses in technology, strategy and products.

  1. Hydrogeologic data for the Big River-Mishnock River stream-aquifer system, central Rhode Island

    Science.gov (United States)

    Craft, P.A.

    2001-01-01

    Hydrogeology, ground-water development alternatives, and water quality in the Big-Mishnock stream-aquifer system in central Rhode Island are being investigated as part of a long-term cooperative program between the Rhode Island Water Resources Board and the U.S. Geological Survey to evaluate the ground-water resources throughout Rhode Island. The study area includes the Big River drainage basin and that portion of the Mishnock River drainage basin upstream from the Mishnock River at State Route 3. This report presents geologic data and hydrologic and water-quality data for ground and surface water. Ground-water data were collected from July 1996 through September 1998 from a network of observation wells consisting of existing wells and wells installed for this study, which provided a broad distribution of data-collection sites throughout the study area. Streambed piezometers were used to obtain head differences between surface-water levels and ground-water levels to help evaluate stream-aquifer interactions throughout the study area. The types of data presented include monthly ground-water levels, average daily ground-water withdrawals, drawdown data from aquifer tests, and water-quality data. Historical water-level data from other wells within the study area also are presented in this report. Surface-water data were obtained from a network consisting of surface-water impoundments, such as ponds and reservoirs, existing and newly established partial-record stream-discharge sites, and synoptic surface-water-quality sites. Water levels were collected monthly from the surface-water impoundments. Stream-discharge measurements were made at partial-record sites to provide measurements of inflow, outflow, and internal flow throughout the study area. Specific conductance was measured monthly at partial-record sites during the study, and also during the fall and spring of 1997 and 1998 at 41 synoptic sites throughout the study area. General geologic data, such as

  2. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    Full Text Available In recent years, dealing with large amounts of data originating from social media sites and mobile communications, along with data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby generate market advantages. So, the companies that turn to Big Data have a competitive advantage over other firms. Looking from the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects regarding the Big Data concept, the principles to build, organize and analyse huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. Also, the article refers to the graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  3. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select a threshold value for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the Big Bang–Big Crunch Optimization (BBBCO technique is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search an optimal combination of parameters of the membership functions for maximizing the entropy of the fuzzy 2-partition. BBBCO is inspired by the theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, namely Genetic Algorithm (GA-based, Biogeography-based Optimization (BBO-based and recursive approaches, are also implemented. From experimental results, it is observed that the performance of the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
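
    The entropy objective described above can be sketched directly; the snippet below is a minimal illustration that substitutes an exhaustive grid search for the paper's BBBCO optimizer and assumes a simple linear-ramp membership function (both are simplifications for illustration, not the authors' exact formulation):

```python
import math

def dark_membership(g, a, c):
    # Linear ramp standing in for a parameterized membership function:
    # fully "dark" below a, fully "bright" above c (illustrative form).
    if g <= a:
        return 1.0
    if g >= c:
        return 0.0
    return (c - g) / (c - a)

def fuzzy_2partition_entropy(hist, a, c):
    # Entropy of the fuzzy 2-partition {dark, bright} induced by (a, c).
    total = sum(hist)
    p_dark = sum(h * dark_membership(g, a, c) for g, h in enumerate(hist)) / total
    p_bright = 1.0 - p_dark
    return -sum(p * math.log(p) for p in (p_dark, p_bright) if p > 0)

def best_threshold(hist, levels=256, step=8):
    # Exhaustive search over (a, c) stands in for Big Bang-Big Crunch optimization.
    best_h, best_t = -1.0, None
    for a in range(0, levels - step, step):
        for c in range(a + step, levels, step):
            h = fuzzy_2partition_entropy(hist, a, c)
            if h > best_h:
                best_h, best_t = h, (a + c) // 2  # crossover level (membership 0.5)
    return best_t

# Synthetic bimodal histogram: modes at gray levels 50 and 200
hist = [0] * 256
hist[50], hist[200] = 100, 100
print(best_threshold(hist))  # a level between the two modes
```

    Entropy is maximized when the fuzzy "dark" and "bright" probabilities are balanced, so the search settles on a crossover level between the two histogram modes.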

  4. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  5. Addressing big data issues in Scientific Data Infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Membrey, P.; Grosso, P.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2013-01-01

    Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses the nature and definition of Big Data that include such features as Volume,

  6. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In everyday terms we call the current era the Modern Era, which in the field of Information Technology can also be called the era of Big Data. Our daily lives in today's world are advancing rapidly, never quenching one's thirst for more. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of healthcare using big data and analytics. While big data is being stored all the time and makes it possible to look back at history, it is now time to emphasize analysis that improves medication and services. Although many big data implementations happen to be in-house developments, this proposed implementation aims at a broader extent using Hadoop, which happens to be just the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to the conventional techniques available.

  7. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  8. About Big Data and its Challenges and Benefits in Manufacturing

    OpenAIRE

    Bogdan NEDELCU

    2013-01-01

    The aim of this article is to show the importance of Big Data and its growing influence on companies. It also shows what kinds of big data are currently generated and how much big data is estimated to be generated. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. Also shown are some major influences that big data has on one major segment of industry (manufacturing) and the challenges that appear.

  9. Big Data Management in US Hospitals: Benefits and Barriers.

    Science.gov (United States)

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  10. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. Analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and ...

  11. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping where large amounts of information is collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints to the use of big data include cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  12. Fossilization Processes in Thermal Springs

    Science.gov (United States)

    Farmer, Jack D.; Cady, Sherry; Desmarais, David J.; Chang, Sherwood (Technical Monitor)

    1995-01-01

    To create a comparative framework for the study of ancient examples, we have been carrying out parallel studies of the microbial biosedimentology, taphonomy and geochemistry of modern and sub-Recent thermal spring deposits. One goal of the research is the development of integrated litho- and taphofacies models for siliceous and travertine sinters. Thermal springs are regarded as important environments for the origin and early evolution of life on Earth, and we seek to utilize information from the fossil record to reconstruct the evolution of high temperature ecosystems. Microbial contributions to the fabric of thermal spring sinters occur when population growth rates keep pace with, or exceed, rates of inorganic precipitation, allowing for the development of continuous biofilms or mats. In siliceous thermal springs, microorganisms are typically entombed while viable. Modes of preservation reflect the balance between rates of organic matter degradation, silica precipitation and secondary infilling. Subaerial sinters are initially quite porous and permeable, and at temperatures higher than about 20 °C, organic materials are usually degraded prior to secondary infilling of sinter frameworks. Thus, organically-preserved microfossils are rare and fossil information consists of characteristic biofabrics formed by the encrustation and underplating of microbial mat surfaces. This probably accounts for the typically low total organic carbon values observed in thermal spring deposits. In mid-temperature (approx. 35-59 °C) ponds and outflows, the surface morphology of tufted Phormidium mats is preserved through mat underplating by thin siliceous crusts. Microbial taxes lead to clumping of cells and/or preferred filament orientations that together define higher order composite fabrics in thermal spring stromatolites (e.g. network, coniform, and palisade). At lower temperatures (less than 35 °C), Calothrix mats cover shallow terracette pools forming flat carpets or pustular

  13. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable vast data sets. What are the algorithms behind the big data discussion? In principle, high throughput technologies in molecular research already introduced big data and the development and application of analysis tools into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing or development of new software tools. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, including in individual patients, is generating personal big data and requires new strategies for management in order to develop data-driven individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  14. European supply chain for valve springs

    Energy Technology Data Exchange (ETDEWEB)

    Barthold, G. [Scherdel GmbH, Marktredwitz (Germany); Thureborn, D.; Hallberg, M. [Haldex Garphyttan AB (Sweden); Janssen, P. [Mittal Steel Ruhrort GmbH / Mittal Steel Hochfeld GmbH (Germany)

    2005-07-01

    Forced by the Kobe earthquake in 1995 and the lack of valve spring steel on the world market due to damage to the Kobe steel plant, the development of a European supply chain was accelerated. By the end of 1994, a super-clean valve spring steel of reasonable quality from a European source was available. A strong relationship between the steel producer (Mittal), the wire manufacturer (Haldex Garphyttan) and the spring maker (Scherdel) was established. A working group of the three companies holds meetings on a regular basis to discuss quality and development issues. Over the last years the supply chain has achieved significant improvements in terms of cleanliness and decarburisation of the wire rod. The continuous common advancement of valve spring quality has enabled valve spring failures in the field to be reduced to < 0.1 ppm. The development and market launch of new grades has been prepared. (orig.)

  15. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  17. Reactive Programming in Java

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Reactive programming is gaining a lot of excitement. Many libraries, tools, and frameworks are beginning to make use of reactive libraries. Besides, applications dealing with big data or high-frequency data can benefit from this programming paradigm. Come to this presentation to learn about what reactive programming is, what kind of problems it solves, and how it solves them. We will take an example-oriented approach to learning the programming model and the abstraction.
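
    The reactive model the talk describes can be illustrated with a toy, language-neutral sketch (written here in Python rather than Java, with invented names; it mimics the push-based map/filter/subscribe pipeline of libraries such as RxJava without using any real library's API):

```python
# Toy push-based stream: invented names, not a real reactive library's API
class Observable:
    def __init__(self, source):
        self._source = source  # callable that pushes items into an 'emit' callback

    def subscribe(self, on_next):
        self._source(on_next)

    def map(self, fn):
        # Each operator wraps the upstream observable and transforms pushed items
        return Observable(lambda emit: self.subscribe(lambda x: emit(fn(x))))

    def filter(self, pred):
        return Observable(lambda emit: self.subscribe(
            lambda x: emit(x) if pred(x) else None))

def from_iterable(items):
    return Observable(lambda emit: [emit(x) for x in items])

results = []
(from_iterable(range(10))
    .filter(lambda x: x % 2 == 0)
    .map(lambda x: x * x)
    .subscribe(results.append))
print(results)  # [0, 4, 16, 36, 64]
```

    The key design point is inversion of control: the consumer declares a pipeline of transformations, and values are pushed through it as they arrive, which is what makes the style attractive for high-frequency data.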

  18. Characteristics Analysis and Testing of SMA Spring Actuator

    Directory of Open Access Journals (Sweden)

    Jianzuo Ma

    2013-01-01

    Full Text Available The biased two-way shape memory alloy (SMA) actuator, composed of an SMA spring and a steel bias spring, is analyzed. Based on the force equilibrium equation, the relationship between the load capacity of the SMA spring and its geometric parameters is established. To obtain the characteristics of the SMA spring actuator, the output force and output displacement of the SMA spring at different temperatures are analyzed with both the theoretical model and the experimental method. Based on the shape memory effect of SMA, the relationship of the actuator's output displacement with temperature, stress and strain, material parameters, and size parameters is established. The results indicate that the trend of the theoretical results is basically consistent with the experimental data, and that the output displacement of the SMA spring actuator increases with increasing temperature.
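
    The force-equilibrium idea summarized above can be sketched with a linearized two-spring model; every numeric value below (stiffnesses, transformation temperatures, offset) is an illustrative assumption, not data from the paper:

```python
def sma_stiffness(T, k_mart=2.0, k_aust=6.0, T_start=40.0, T_finish=70.0):
    """Linearized SMA spring stiffness (N/mm) between martensite and austenite.
    All values are illustrative assumptions, not measurements from the paper."""
    if T <= T_start:
        return k_mart
    if T >= T_finish:
        return k_aust
    return k_mart + (T - T_start) / (T_finish - T_start) * (k_aust - k_mart)

def actuator_displacement(T, delta=10.0, k_bias=3.0):
    """Force equilibrium of the SMA spring against the steel bias spring:
       k_sma(T) * (delta - x) = k_bias * x  =>  x = k_sma * delta / (k_sma + k_bias)."""
    k = sma_stiffness(T)
    return k * delta / (k + k_bias)

for T in (25, 55, 90):
    print(f"T = {T:3d} C -> x = {actuator_displacement(T):.2f} mm")
```

    As in the abstract, the output displacement increases with temperature, because the SMA spring stiffens on the martensite-to-austenite transformation and pushes the equilibrium point further against the bias spring.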

  19. Thermal algae in certain radioactive springs in Japan, (3)

    International Nuclear Information System (INIS)

    Mifune, Masaaki; Hirose, Hiroyuki.

    1982-01-01

    Shikano Hot Springs are located five km south of Hamamura Station on the Sanin Line in Tottori Prefecture. The water temperature and pH of the springs are 40.2-61.2 °C and 7.5-7.8, respectively. They belong to the simple thermals. Hamamura Hot Springs are located in the neighbourhood of Hamamura Station. The highest radon content of the hot springs is 175.1 × 10⁻¹⁰ Ci/l, and the greater part of the springs belong to the radioactive ones. From the viewpoint of the major ionic constituents, they are also classified under weak salt springs, sulfated salt springs, and simple thermals. Regarding the habitats of the algal flora, the water temperature and pH of the springs are 28.0-68.0 °C and 6.8-7.4, respectively. The thermal algae found by Ikoma and Doi at Hamamura Hot Springs were two species of Cyanophyceae. Nine species and one variety of Cyanophyceae, including Ikoma and Doi's two species, were newly found by the authors at Shikano and Hamamura Hot Springs. No chlorophyceous algae were found. The dominant thermal algae of these hot springs were Mastigocladus laminosus and other algae consisting mainly of Oscillatoriaceous algae. From these points, it seems that the thermal algae of Shikano and Hamamura Hot Springs belong to the normal type of thermal algae, and they differ from the thermal algae of Ikeda Mineral Springs and Masutomi Hot Springs, which belong to strongly radioactive springs. (author)

  20. Spatial big data for disaster management

    Science.gov (United States)

    Shalini, R.; Jayapratha, K.; Ayeshabanu, S.; Chemmalar Selvi, G.

    2017-11-01

    Big data refers to datasets so large and complex that traditional data-processing application software is inadequate to deal with them. Big data is now a well-known domain in research, academia, and industry, used to store very large amounts of information in a single centralized repository. Its challenges include capture, storage, analysis, accuracy, visualization, sharing, transfer, querying, updating, and information security. In this digital world, storing data and retrieving information are enormous tasks for large organizations, and data may be lost because of distributed storage; for this reason, organizations consolidate all of their data into one large database, known as big data. Remote sensing is the science of acquiring information from a distance to detect objects or analyze an area; with sensors, objects can be found easily. Remote sensing produces geographic information from satellite and sensor data, so this paper analyzes the architectures used for remote-sensing big data, how these architectures differ from each other, and how they relate to our studies. The paper describes how disasters occur and how results are computed from a dataset, applying a seismic dataset to estimate earthquake disaster on the basis of classification and clustering methods. The classical data-mining algorithms used are k-nearest neighbor, naive Bayes, and decision table for classification, and hierarchical, make-density-based, and simple k-means for clustering, using the XLMiner and WEKA tools. This paper also helps to predict outcomes on the spatial dataset by applying the XLMiner and WEKA tools and
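
    The clustering step mentioned above (simple k-means in WEKA) can be illustrated with a hand-rolled Lloyd's algorithm on invented 2-D "epicenter" coordinates; this is a didactic stand-in, not WEKA's implementation:

```python
def kmeans(points, k, iters=20):
    """Plain k-means (Lloyd's algorithm); a stand-in for WEKA's SimpleKMeans."""
    centers = list(points[:k])  # naive deterministic seeding, fine for a sketch
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                        + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster
        centers = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Invented epicenter coordinates forming two well-separated spatial groups
pts = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0), (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

    With well-separated groups, the algorithm recovers one cluster per group regardless of the naive seeding; real seismic work would also bring in the classification step (e.g. naive Bayes) on labeled events.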

  1. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, meaning the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data work ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, variety of data and frequent chan...

  2. Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course

    Science.gov (United States)

    Asamoah, Daniel Adomako; Sharda, Ramesh; Hassan Zadeh, Amir; Kalgotra, Pankush

    2017-01-01

    In this article, we present an experiential perspective on how a big data analytics course was designed and delivered to students at a major Midwestern university. In reference to the "MSIS 2006 Model Curriculum," we designed this course as a level 2 course, with prerequisites in databases, computer programming, statistics, and data…

  3. Thermal springs of Malaysia and their potential development

    Science.gov (United States)

    Rahim Samsudin, Abdul; Hamzah, Umar; Rahman, Rakmi Ab.; Siwar, Chamhuri; Fauzi Mohd. Jani, Mohd; Othman, Redzuan

    A study of the potential development of hot springs for the tourism industry in Malaysia was conducted. Of the 40 hot springs covered, the study identified 9 hot springs having high potential for development, 14 having medium potential and the remaining 17 having low or least potential for development. This conclusion was arrived at after considering the technical and economic feasibility of the various hot springs. Technical feasibility criteria include geological factors, water quality, temperature and flow rate. The economic feasibility criteria consider measures such as accessibility, current and potential markets in terms of visitors, surrounding attractions, and existing inventory and facilities available. A geological input indicates that high-potential hot springs are located close to or within the granite body and associated with major permeable fault zones. They normally occur at low elevation adjacent to topographic highs. High-potential hot springs are also characterised by high water temperature, substantial flow rate and very good water quality, which is important for water-body contact activities such as soaking. Economic criteria for high-potential hot springs are associated with good accessibility, good markets, good surrounding attractions such as a rural and village setting, and well-developed facilities and infrastructure.
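
    The technical and economic criteria listed above lend themselves to a weighted-score classification; the weights, ratings, and cut-offs below are purely hypothetical illustrations, not values from the study:

```python
# Hypothetical weighting of the study's technical and economic criteria;
# the weights (summing to 1.0) and cut-offs are invented for illustration.
WEIGHTS = {
    "water_quality": 0.15, "temperature": 0.15, "flow_rate": 0.15, "geology": 0.10,
    "accessibility": 0.15, "market": 0.10, "attractions": 0.10, "facilities": 0.10,
}

def potential_class(scores):
    """scores: criterion -> rating on a 0-10 scale; returns high/medium/low."""
    total = sum(WEIGHTS[c] * scores.get(c, 0) for c in WEIGHTS)
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"

# An invented spring rated well on every criterion
spring = {"water_quality": 9, "temperature": 8, "flow_rate": 8, "geology": 7,
          "accessibility": 9, "market": 8, "attractions": 7, "facilities": 6}
print(potential_class(spring))  # high
```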

  4. Supergene destruction of a hydrothermal replacement alunite deposit at Big Rock Candy Mountain, Utah: Mineralogy, spectroscopic remote sensing, stable-isotope, and argon-age evidences

    Science.gov (United States)

    Cunningham, Charles G.; Rye, Robert O.; Rockwell, Barnaby W.; Kunk, Michael J.; Councell, Terry B.

    2005-01-01

    Big Rock Candy Mountain is a prominent center of variegated altered volcanic rocks in west-central Utah. It consists of the eroded remnants of a hypogene alunite deposit that, at ∼21 Ma, replaced intermediate-composition lava flows. The alunite formed in steam-heated conditions above the upwelling limb of a convection cell that was one of at least six spaced at 3- to 4-km intervals around the margin of a monzonite stock. Big Rock Candy Mountain is horizontally zoned outward from an alunite core to respective kaolinite, dickite, and propylite envelopes. The altered rocks are also vertically zoned from a lower pyrite–propylite assemblage upward through assemblages successively dominated by hypogene alunite, jarosite, and hematite, to a flooded silica cap. This hydrothermal assemblage is undergoing natural destruction in a steep canyon downcut by the Sevier River in Marysvale Canyon. Integrated geological, mineralogical, spectroscopic remote sensing using AVIRIS data, Ar radiometric, and stable isotopic studies trace the hypogene origin and supergene destruction of the deposit and permit distinction of primary (hydrothermal) and secondary (weathering) processes. This destruction has led to the formation of widespread supergene gypsum in cross-cutting fractures and as surficial crusts, and to natrojarosite, which gives the mountain its buff coloration along ridges facing the canyon. A small spring, Lemonade Spring, with a pH of 2.6 and containing Ca, Mg, Si, Al, Fe, Mn, Cl, and SO₄, also occurs near the bottom of the canyon. The ⁴⁰Ar/³⁹Ar age (21.32±0.07 Ma) of the alunite is similar to that for other replacement alunites at Marysvale. However, the age spectrum contains evidence of a 6.6-Ma thermal event that can be related to the tectonic activity responsible for the uplift that led to the downcutting of Big Rock Candy Mountain by the Sevier River. This ∼6.6 Ma event also is present in the age spectrum of supergene natrojarosite forming today, and probably

  5. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c)^2. At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios.

  6. [Big data and their perspectives in radiation therapy].

    Science.gov (United States)

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  7. Current applications of big data in obstetric anesthesiology.

    Science.gov (United States)

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study.

  8. Volume and Value of Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  9. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study is to determine students' interest in learning using the big book medium. A big book is an enlarged version of an ordinary book, containing simple words and images that match the content of the sentences and their spelling. From this, the researchers can gauge students' interest and the development of their knowledge, while also training the researchers to remain creative in developing learning media for students.

  10. Spring harvest of corn stover

    Energy Technology Data Exchange (ETDEWEB)

    Lizotte, P.L. [Laval Univ., Quebec City, PQ (Canada). Dept. des sols et de genie agroalimentaire; Savoie, P. [Agriculture and Agri-Food Canada, Quebec City, PQ (Canada)

    2010-07-01

    Corn stover is typically left behind in the field after grain harvest. Although part of the stover should remain in the field for soil organic matter renewal and erosion protection, half of the stover could be removed sustainably. This represents about one million t dry matter (DM) of stover per year in the province of Quebec. Stover harvested in the fall is very wet. While there are applications for wet stover, the available markets currently require a dry product. Preliminary measurements have shown that stover left in the field throughout the winter becomes very dry, and a considerable amount would still be harvestable in the spring. In the spring of 2009, corn stover was harvested at 2 sites, each subdivided into 2 parcels. The first parcel was cut and raked in the fall of 2008 (fall parcel), while the second parcel was cut and raked in the spring of 2009. Fibre from both parcels was baled in the spring of 2009. At the first site, a large square baler was used in late April to produce bales measuring 0.8 m x 0.9 m x 1.8 m. At the second site, a round baler was used in late May to produce bales 1.2 m in width by 1.45 m in diameter. At the second site, a small square baler was also used to produce bales of 0.35 m x 0.45 m x 0.60 m (spring cutting only). With the large square baler, an average of 3.9 t DM/ha was harvested equally on the fall parcel and the spring parcel, representing a 48 per cent recovery of biomass based on stover yields.
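    As a back-of-the-envelope check on the figures above (this calculation is an editorial illustration, not part of the original report), a harvest of 3.9 t DM/ha at a 48 per cent recovery rate implies a total standing stover yield of roughly 8.1 t DM/ha:

    ```python
    # Back-of-the-envelope check of the stover recovery figures reported above.
    harvested_t_per_ha = 3.9   # t dry matter (DM) per hectare actually baled
    recovery_fraction = 0.48   # reported recovery of the available stover

    # Implied total stover yield standing in the field before harvest
    total_yield_t_per_ha = harvested_t_per_ha / recovery_fraction
    print(round(total_yield_t_per_ha, 1))  # prints 8.1 (t DM/ha)
    ```

    This is consistent with the abstract's statement that about half of the stover can be removed sustainably.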

  11. Work Term Assignment Spring 2017

    Science.gov (United States)

    Sico, Mallory

    2017-01-01

    My tour in the Engineering Robotics directorate exceeded my expectations. I learned lessons about Creo, manufacturing and assembly, collaboration, and troubleshooting. During my first tour, last spring, I used Creo on a smaller project, but had limited experience with it before starting in the Dynamic Systems Test branch this spring. I gained valuable experience learning assembly design, sheet metal design and designing with intent for manufacturing and assembly. These skills came from working both on the hatch and the floor. I also learned to understand the intent of other designers on models I worked with. While redesigning the floor, I was modifying an existing part and worked to understand what the previous designer had done to make it fit with the new model. Through working with the machine shop and in the mock-up, I learned much more about manufacturing and assembly. I used a Dremel, rivet gun, belt sander, and countersink for the first time. Through taking multiple safety trainings for different machine shops, I learned new machine shop safety skills specific to each one. This semester also gave me new collaborative opportunities. I collaborated with engineers within my branch as well as with Human Factors and the building 10 machine shop. This experience helped me learn how to design for functionality and assembly, not only for what would be easiest in my designs. In addition to these experiences, I learned many lessons in troubleshooting. I was the first person in my office to use a Windows 10 computer. This caused unexpected issues with NASA services and programs, such as the Digital Data Management Server (DDMS). Because of this, I gained experience finding solutions to lockout and freeze issues as well as Creo specific settings. These will be useful skills to have in the future and will be implemented in future rotations. This co-op tour has motivated me more to finish my degree and pursue my academic goals. I intend to take a machining Career Gateway

  12. Big Data Analytics Methodology in the Financial Industry

    Science.gov (United States)

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  13. Final Program Report for 2010-2012: Monitoring and evaluation for conserving biological resources of the Spring Mountains National Recreation Area

    Science.gov (United States)

    Stephen J. Solem; Burton K. Pendleton; Casey Giffen; Marc Coles-Ritchie; Jeri Ledbetter; Kevin S. McKelvey; Joy Berg; Jim Menlove; Carly K. Woodlief; Luke A. Boehnke

    2013-01-01

    The Spring Mountains National Recreation Area (SMNRA) includes approximately 316,000 acres of National Forest System (NFS) lands managed by the Humboldt-Toiyabe National Forest in Clark and Nye Counties, Nevada (see fig. 1-1). The Spring Mountains have long been recognized as an island of endemism, harboring flora and fauna found nowhere else in the world. Conservation...

  14. The evolution of risk communication at the Weldon Spring site

    International Nuclear Information System (INIS)

    McCracken, S.; Sizemore, M.; Meyer, L.; MacDonell, M.; Haroun, L.

    1993-01-01

    Clear risk communication is one of the keys to establishing a positive relationship with the public at an environmental restoration site. This effort has been evolving at the Weldon Spring site over the past few years, with considerable input from the local community. The recent signing of the major cleanup decision for this site, which identifies on-site disposal as the remedy, reflects the strength of the communication program that has evolved for the project.

  15. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  17. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    Science.gov (United States)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a major buzzword nowadays. As data generation grows more demanding and scalable, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend toward “big data-as-a-service” is now discussed everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of migrating big data to cloud-based platforms. This article is specially focused on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, together with the ability to perform big data analytics on a cloud platform, is in demand for a new era of growth. The article also surveys the available technologies and techniques for migrating big data to the cloud.

  18. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple "crossover model" without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  19. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seed of new, valuable operational insights for private companies and public organizations. While the optimistic pronouncements are many, research on Big Data in the public sector has so far been limited. This article examines how the public health sector can reuse and exploit an ever-growing volume of data while taking public values into account. The article builds on a case study of the use of large volumes of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)use of data in new contexts is a multifaceted trade-off involving not only economic rationales and quality considerations, but also control over sensitive personal data and the ethical implications for citizens. In the DAMD case, data are used on the one hand "in the service of a good cause" to...

  20. Assessment of High Rates of Precocious Male Maturation in a Spring Chinook Salmon Supplementation Hatchery Program, Annual Report 2002-2003.

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, Donald; Beckman, Brian; Cooper, Kathleen

    2003-08-01

    The Yakima River Spring Chinook Salmon Supplementation Project in Washington State is currently one of the most ambitious efforts to enhance a natural salmon population in the United States. Over the past five years we have conducted research to characterize the developmental physiology of naturally- and hatchery-reared wild-progeny spring chinook salmon (Oncorhynchus tshawytscha) in the Yakima River basin. Fish were sampled at the main hatchery in Cle Elum, at remote acclimation sites and, during smolt migration, at downstream dams. Throughout these studies the maturational state of all fish was characterized using combinations of visual and histological analysis of testes, gonadosomatic index (GSI), and measurement of plasma 11-ketotestosterone (11-KT). We established that a plasma 11-KT threshold of 0.8 ng/ml could be used to designate male fish as either immature or precociously maturing approximately 8 months prior to final maturation (1-2 months prior to release as 'smolts'). Our analyses revealed that 37-49% of the hatchery-reared males from this program undergo precocious maturation at 2 years of age, and a proportion of these fish appear to residualize in the upper Yakima River basin throughout the summer. An unnaturally high incidence of precocious male maturation may result in loss of potential returning anadromous adults, skewing of female:male sex ratios, and ecological and genetic impacts on wild populations and other native species. Precocious male maturation is significantly influenced by growth rate at specific times of year, and future studies will be conducted to alter maturation rates through seasonal growth rate manipulations.
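    The maturation designation described above reduces to a simple threshold rule. A minimal sketch (only the 0.8 ng/ml cutoff comes from the abstract; the function name and sample values are hypothetical, for illustration):

    ```python
    # Threshold rule from the abstract: plasma 11-ketotestosterone (11-KT)
    # above 0.8 ng/ml designates a male as precociously maturing.
    # The sample concentrations below are hypothetical.
    KT_THRESHOLD_NG_ML = 0.8

    def classify_male(kt_ng_ml: float) -> str:
        """Classify maturation status from plasma 11-KT concentration (ng/ml)."""
        return "precociously maturing" if kt_ng_ml > KT_THRESHOLD_NG_ML else "immature"

    hypothetical_samples = [0.3, 0.75, 1.2, 4.8]  # ng/ml
    print([classify_male(kt) for kt in hypothetical_samples])
    # prints ['immature', 'immature', 'precociously maturing', 'precociously maturing']
    ```

    Applied to a sampled cohort, the fraction classified as maturing would correspond to the 37-49% incidence the study reports.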