WorldWideScience

Sample records for grid success big

  1. BIG: a Grid Portal for Biomedical Data and Images

    Directory of Open Access Journals (Sweden)

    Giovanni Aloisio

    2004-06-01

Full Text Available Modern management of biomedical systems involves the use of many distributed resources, such as high performance computational resources to analyze biomedical data, mass storage systems to store them, medical instruments (microscopes, tomographs, etc.), and advanced visualization and rendering tools. Grids offer the computational power, security and availability needed by such novel applications. This paper presents BIG (Biomedical Imaging Grid), a Web-based Grid portal for management of biomedical information (data and images) in a distributed environment. BIG is an interactive environment that deals with complex user requests, regarding the acquisition of biomedical data and the "processing" and "delivering" of biomedical images, using the power and security of Computational Grids.

  2. Intelligent Control of Micro Grid: A Big Data-Based Control Center

    Science.gov (United States)

    Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng

    2018-01-01

In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.

  3. Technical Research on the Electric Power Big Data Platform of Smart Grid

    OpenAIRE

    Ruiguang MA; Haiyan Wang; Quanming Zhang; Yuan Liang

    2017-01-01

Through elaborating on the relationships among electric power big data, cloud computing and the smart grid, this paper puts forward a general framework for an electric power big data platform based on the smart grid. The general framework of the platform is divided into five layers, namely the data source layer, data integration and storage layer, data processing and scheduling layer, data analysis layer and application layer. This paper makes in-depth exploration and studies the integrated manage...

  4. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

This volume aims at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  5. Knowledge Discovery for Smart Grid Operation, Control, and Situation Awareness -- A Big Data Visualization Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Yi; Jiang, Huaiguang; Zhang, Yingchen; Zhang, Jun Jason; Gao, Tianlu; Muljadi, Eduard

    2016-11-21

In this paper, a big data visualization platform is designed to discover hidden useful knowledge for smart grid (SG) operation, control and situation awareness. The spread of smart sensors at both the grid side and the customer side can provide large volumes of heterogeneous data that collect information across all time spectrums. Extracting useful knowledge from this big-data pool is still challenging. In this paper, Apache Spark, an open source cluster computing framework, is used to process the big data and effectively discover the hidden knowledge. A high-speed communication architecture based on the Open System Interconnection (OSI) model is designed to transmit the data to a visualization platform. This visualization platform uses Google Earth, a global geographic information system (GIS), to link geographic information with the SG knowledge and visualize the information in a user-defined fashion. The University of Denver's campus grid is used as an SG test bench and several demonstrations are presented for the proposed platform.
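
    To make the data-processing step concrete, the following is a minimal, purely illustrative PySpark sketch of the kind of pre-aggregation such a platform might perform before handing summaries to a GIS front end; the file name and column names are invented, and this is not the authors' code.

```python
# Hypothetical sketch (not the authors' code): pre-aggregating smart-grid
# sensor readings with Apache Spark. File and column names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sg-knowledge-discovery").getOrCreate()

# readings.csv is assumed to hold (sensor_id, timestamp, voltage, load_kw)
df = spark.read.csv("readings.csv", header=True, inferSchema=True)

# Average hourly load and peak voltage per sensor: a typical reduction step
# before shipping summaries to a GIS-based visualization front end.
hourly = (df.withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
            .groupBy("sensor_id", "hour")
            .agg(F.avg("load_kw").alias("avg_load_kw"),
                 F.max("voltage").alias("max_voltage")))

hourly.write.mode("overwrite").parquet("hourly_profiles.parquet")
```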

  6. Research on big data risk assessment of major transformer defects and faults fusing power grid, equipment and environment based on SVM

    Science.gov (United States)

    Guo, Lijuan; Yan, Haijun; Gao, Wensheng; Chen, Yun; Hao, Yongqi

    2018-01-01

With the development of power big data, and considering the breadth of power system data now available, appropriate big data analysis methods can be used to mine the latent patterns and value of power big data. Drawing on all kinds of monitoring data as well as defect and fault records of main transformers, the paper fuses power grid, equipment and environment data and uses SVM as the main algorithm to evaluate the risk of the main transformer. It obtains and compares evaluation results under different modes, demonstrating that the risk assessment algorithms and schemes are effective. This paper provides a new idea for data fusion in the smart grid, and provides a reference for further big data evaluation of power grid equipment.
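
    As a hedged sketch of the core technique named in the abstract (SVM-based risk assessment over fused features), the following scikit-learn example trains an RBF-kernel SVM on synthetic stand-ins for grid, equipment and environment features; all names and data are invented.

```python
# Illustrative only: an RBF-kernel SVM scoring transformer risk from fused
# grid / equipment / environment features. Features and labels are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # e.g. load ratio, dissolved-gas score,
                                    # ambient temperature, lightning density
y = rng.integers(0, 2, size=200)    # 1 = defect/fault recorded (invented)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)
print(model.predict_proba(X[:5])[:, 1])   # per-unit risk scores
```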

  7. caGrid 1.0: a Grid enterprise architecture for cancer research.

    Science.gov (United States)

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  8. Using Globus GridFTP to Transfer and Share Big Data | Poster

    Science.gov (United States)

    By Ashley DeVine, Staff Writer, and Mark Wance, Guest Writer; photo by Richard Frederickson, Staff Photographer Transferring big data, such as the genomics data delivered to customers from the Center for Cancer Research Sequencing Facility (CCR SF), has been difficult in the past because the transfer systems have not kept pace with the size of the data. However, the situation is changing as a result of the Globus GridFTP project.

  9. Optimizing Hadoop Performance for Big Data Analytics in Smart Grid

    Directory of Open Access Journals (Sweden)

    Mukhtaj Khan

    2017-01-01

Full Text Available The rapid deployment of Phasor Measurement Units (PMUs) in power systems globally is leading to Big Data challenges. New high performance computing techniques are now required to process an ever increasing volume of data from PMUs. To that end, the Hadoop framework, an open source implementation of the MapReduce computing model, is gaining momentum for Big Data analytics in smart grid applications. However, Hadoop has over 190 configuration parameters, which can have a significant impact on its performance. This paper presents an Enhanced Parallel Detrended Fluctuation Analysis (EPDFA) algorithm for scalable analytics on massive volumes of PMU data. The novel EPDFA algorithm builds on an enhanced Hadoop platform whose configuration parameters are optimized by Gene Expression Programming. Experimental results show that the EPDFA is 29 times faster than the sequential DFA in processing PMU data and 1.87 times faster than a parallel DFA that uses the default Hadoop configuration settings.
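
    For orientation, the sequential core that EPDFA parallelizes is detrended fluctuation analysis. Below is a minimal NumPy sketch of plain DFA on a 1-D signal; it is not the authors' Hadoop implementation, and the choice of window scales is arbitrary.

```python
# Minimal detrended fluctuation analysis (DFA) in NumPy; EPDFA parallelizes
# this kind of computation on Hadoop. Window scales here are arbitrary.
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s                        # whole windows of size s
        t = np.arange(s)
        f2 = 0.0
        for w in range(n):
            seg = y[w * s:(w + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            f2 += np.mean((seg - trend) ** 2)
        fluct.append(np.sqrt(f2 / n))
    # scaling exponent alpha: slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

print(dfa_alpha(np.random.randn(4096)))        # ~0.5 for white noise
```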

  10. Achieving privacy-preserving big data aggregation with fault tolerance in smart grid

    OpenAIRE

    Zhitao Guan; Guanlin Si

    2017-01-01

    In a smart grid, a huge amount of data is collected for various applications, such as load monitoring and demand response. These data are used for analyzing the power state and formulating the optimal dispatching strategy. However, these big energy data in terms of volume, velocity and variety raise concern over consumers’ privacy. For instance, in order to optimize energy utilization and support demand response, numerous smart meters are installed at a consumer's home to collect energy consu...

  11. Mini-grid Policy Tool-kit. Policy and business frameworks for successful mini-grid roll-outs

    International Nuclear Information System (INIS)

    Franz, Michael; Hayek, Niklas; Peterschmidt, Nico; Rohrer, Michael; Kondev, Bozhil; Adib, Rana; Cader, Catherina; Carter, Andrew; George, Peter; Gichungi, Henry; Hankins, Mark; Kappiah, Mahama; Mangwengwende, Simbarashe E.

    2014-01-01

The Mini-grid Policy Tool-kit is for policy makers navigating the mini-grid policy design process. It contains information on mini-grid operator models, the economics of mini-grids, and the policy and regulation that must be considered for successful implementation. The publication focuses specifically on Africa. Progress on extending the electricity grid in many countries has remained slow because of the high costs of grid extension and limited utility/state budgets for electrification. Mini-grids provide an affordable and cost-effective option to extend needed electricity services. Putting in place the right policy for mini-grid deployment requires considerable effort but can yield significant improvements in electricity access rates, as examples from Kenya, Senegal and Tanzania illustrate. The tool-kit is available in English, French and Portuguese.

  12. Smart Grids as keys to a successful energy transition

    International Nuclear Information System (INIS)

    Meunier, Stephane

    2013-07-01

This publication addresses several issues related to the role of smart grids in the energy transition. The contributions discuss whether the future of smart grid markets can be found in developing countries, outline that the deployment of smart meters announces the development of smart grids in France, comment on the search for a new business model for the smart grid market, and question the role of power storage as a key to integrating renewable energies into the grid. They also address the case of the French non-interconnected areas, which could serve as a laboratory to develop and test smart grids. They outline that smart grids offer an economic logic against energy poverty, that smart grids in developing countries could be a lever against blackouts and electricity theft, and that they can be a solution for the electrification of rural areas in developing countries. They present energy cooperatives as a successful model for smart grid projects. A last contribution addresses the smart management of water as a solution to preserve the resource while generating profits.

  13. DETERMINANTS AFFECTING THE SUCCESS OF DISTRIBUTION GRID PROJECTS IN BINH THUAN POWER COMPANY, VIETNAM

    OpenAIRE

    Pham Van Tai* & Le Duc Thu

    2017-01-01

The research identified the critical factors affecting the success of distribution grid projects in Binh Thuan Power Company, clarified the mutual relationships between those critical factors, and recommended and rated solutions to enhance the success of such projects. The research found four critical factors: external factors of the project, controlling and coordina...

  14. Kids Enjoy Grids

    CERN Multimedia

    2007-01-01

'I want to come back and work here when I'm older,' was the spontaneous reaction of one of the children invited to CERN by the Enabling Grids for E-sciencE project for a 'Grids for Kids' day at the end of January. The EGEE project is led by CERN, and the EGEE gender action team organized the day to introduce children to grid technology at an early age. The school group included both boys and girls, aged 9 to 11. All of the presenters were women. 'In general, before this visit, the children thought that scientists always wore white coats and were usually male, with wild Einstein-like hair,' said Jackie Beaver, the class's teacher at the Institut International de Lancy, a school near Geneva. 'They were surprised and pleased to see that women became scientists, and that scientists were quite 'normal'.' The half-day event included presentations about why Grids are needed, a visit to the computer centre, some online games, and plenty of time for questions. In the end, everyone agreed that it was a big success a...

  15. Successful social games and their super-power: Big data analytics

    OpenAIRE

    Ioana Roxana STIRCU

    2017-01-01

    This article is a short presentation of big data analysis and game analysis. The paper describes the case of social games, and observes the huge improvement that big data analysis has on social games and their success. It also contains a presentation of Pokemon GO, and its evolution on the market, from launch until today. A set of metrics and algorithms are proposed, that can be used to improve game features and monetization. In the last section, I apply a Naive Bayes classifier, using WEKA, ...

  16. Achieving privacy-preserving big data aggregation with fault tolerance in smart grid

    Directory of Open Access Journals (Sweden)

    Zhitao Guan

    2017-11-01

Full Text Available In a smart grid, a huge amount of data is collected for various applications, such as load monitoring and demand response. These data are used for analyzing the power state and formulating the optimal dispatching strategy. However, these big energy data, in terms of volume, velocity and variety, raise concern over consumers' privacy. For instance, in order to optimize energy utilization and support demand response, numerous smart meters are installed at a consumer's home to collect energy consumption data at a fine granularity, but these fine-grained data may contain information on the appliances and thus the consumer's behaviors at home. In this paper, we propose a privacy-preserving data aggregation scheme based on secret sharing with fault tolerance in a smart grid, which ensures that the control center obtains the integrated data without compromising privacy. Meanwhile, we also consider fault tolerance and resistance to differential attack during the data aggregation. Finally, we perform a security analysis and performance evaluation of our scheme in comparison with other similar schemes. The analysis shows that our scheme can meet the security requirements, and it also shows better performance than other popular methods.
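
    To illustrate the general idea behind such schemes, here is a toy additive secret-sharing aggregation in Python: each meter splits its reading into random shares, and aggregators only ever see sums. The paper's actual construction additionally provides fault tolerance and resistance to differential attack, which this sketch omits.

```python
# Toy additive secret-sharing aggregation: no single aggregator sees any
# individual reading, yet the control center recovers the exact total.
# The paper's scheme also adds fault tolerance, omitted here.
import random

PRIME = 2**61 - 1                 # all arithmetic is modulo a large prime

def share(reading, n):
    """Split a reading into n random shares that sum to it mod PRIME."""
    parts = [random.randrange(PRIME) for _ in range(n - 1)]
    parts.append((reading - sum(parts)) % PRIME)
    return parts

readings = [12, 7, 30]                        # three meters (synthetic)
shares = [share(r, len(readings)) for r in readings]

# Aggregator j sums the j-th share from every meter.
partials = [sum(s[j] for s in shares) % PRIME for j in range(len(readings))]
total = sum(partials) % PRIME                 # only the sum is revealed
assert total == sum(readings)
print(total)
```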

  17. The SIKS/BiGGrid Big Data Tutorial

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Lammerts, Evert; de Vries, A.P.

    2011-01-01

The School for Information and Knowledge Systems (SIKS) and the Dutch e-science grid BiG Grid organized a new two-day tutorial on Big Data at the University of Twente on 30 November and 1 December 2011, just preceding the Dutch-Belgian Database Day. The tutorial is on top of some exciting new

  18. Spatial variability in cost and success of revegetation in a Wyoming big sagebrush community.

    Science.gov (United States)

    Boyd, Chad S; Davies, Kirk W

    2012-09-01

The ecological integrity of the Wyoming big sagebrush (Artemisia tridentata Nutt. ssp. wyomingensis Beetle and A. Young) alliance is being severely interrupted by post-fire invasion of non-native annual grasses. To curtail this invasion, successful post-fire revegetation of perennial grasses is required. Environmental factors impacting post-fire restoration success vary across space within the Wyoming big sagebrush alliance; however, most restorative management practices are applied uniformly. Our objectives were to define the probability of revegetation success over space using relevant soil-related environmental factors, use this information to model the cost of successful revegetation, and compare the importance of vegetation competition and soil factors to revegetation success. We studied a burned Wyoming big sagebrush landscape in southeast Oregon that was reseeded with perennial grasses. We collected soil and vegetation data at plots spaced at 30 m intervals along a 1.5 km transect in the first two years post-burn. Plots were classified as successful (>5 seedlings/m²) or unsuccessful based on density of seeded species. Using logistic regression we found that abundance of competing vegetation correctly predicted revegetation success on 51% of plots, and soil-related variables correctly predicted revegetation performance on 82.4% of plots. Revegetation cost estimates varied from $167.06 to $43,033.94/ha across the 1.5 km transect based on probability of success, but were more homogeneous at larger scales. Our experimental protocol provides managers with a technique to identify important environmental drivers of restoration success, and this process will be of value for spatially allocating logistical and capital expenditures in a variable restoration environment.
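
    As a schematic of the paper's statistical step (logistic regression of plot-level success on soil covariates), the following scikit-learn sketch uses invented variables and synthetic data; it only mirrors the shape of the analysis, not the authors' dataset.

```python
# Schematic of the logistic-regression step: plot-level success (>5
# seedlings per square metre) regressed on soil covariates. All data and
# variable names are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
soil = rng.random((50, 3))                 # e.g. depth, texture, organic matter
success = (soil[:, 0] > 0.4).astype(int)   # 1 = plot met the seedling threshold

clf = LogisticRegression().fit(soil, success)
print("fraction of plots classified correctly:", clf.score(soil, success))
```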

  19. Successful social games and their super-power: Big data analytics

    Directory of Open Access Journals (Sweden)

    Ioana Roxana STIRCU

    2017-07-01

    Full Text Available This article is a short presentation of big data analysis and game analysis. The paper describes the case of social games, and observes the huge improvement that big data analysis has on social games and their success. It also contains a presentation of Pokemon GO, and its evolution on the market, from launch until today. A set of metrics and algorithms are proposed, that can be used to improve game features and monetization. In the last section, I apply a Naive Bayes classifier, using WEKA, on a set of data collected from social media networks, to predict how using a game that implies walking influences the amount of daily steps a player makes.
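
    The abstract applies a Naive Bayes classifier in WEKA; the following is an analogous, purely illustrative sketch in scikit-learn with invented features, shown only to make the classification step concrete.

```python
# Analogous sketch in scikit-learn (the paper itself uses WEKA). Features
# and labels are invented: does playing a walking game raise daily steps?
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = rng.random((300, 3))                   # e.g. sessions/day, minutes, level
y = (X[:, 0] + rng.normal(0, 0.1, 300) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```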

  20. Machine learning of big data in gaining insight into successful treatment of hypertension.

    Science.gov (United States)

    Koren, Gideon; Nordon, Galia; Radinsky, Kira; Shalev, Varda

    2018-06-01

Despite effective medications, rates of uncontrolled hypertension remain high. Treatment protocols are largely based on randomized trials and meta-analyses of these studies. The objective of this study was to test the utility of machine learning of big data in gaining insight into the treatment of hypertension. We applied machine learning techniques, such as decision trees and neural networks, to identify determinants that contribute to the success of hypertension drug treatment on a large set of patients. We also identified concomitant drugs not considered to have antihypertensive activity, which may contribute to blood pressure (BP) control. Higher initial BP predicts lower success rates. Among the medication options and their combinations, treatment with beta blockers appears to be more commonly effective, which is not reflected in contemporary guidelines. Among numerous concomitant drugs taken by hypertensive patients, proton pump inhibitors (PPIs) and HMG-CoA reductase inhibitors (statins) significantly improved the success rate of hypertension treatment. In conclusion, machine learning of big data is a novel method to identify effective antihypertensive therapy and for repurposing medications already on the market for new indications. Our results related to beta blockers, stemming from machine learning of a large and diverse set of big data, in contrast to the much narrower criteria for randomized clinical trials (RCTs), should be corroborated and affirmed by other methods, as they hold potential promise for an old class of drugs which may be presently underutilized. These previously unrecognized effects of PPIs and statins have been very recently identified as effective in lowering BP in preliminary clinical observations, lending credibility to our big data results.

  1. Data privacy for the smart grid

    CERN Document Server

    Herold, Rebecca

    2015-01-01

The Smart Grid and Privacy; What Is the Smart Grid?; Changes from Traditional Energy Delivery; Smart Grid Possibilities; Business Model Transformations; Emerging Privacy Risks; The Need for Privacy Policies; Privacy Laws, Regulations, and Standards; Privacy-Enhancing Technologies; New Privacy Challenges: IoT and Big Data; What Is the Smart Grid?; Market and Regulatory Overview; Traditional Electricity Business Sector; The Electricity Open Market; Classifications of Utilities; Rate-Making Processes; Electricity Consumer

  2. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian

    2013-10-01

Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid and to render the grid via volume ray-casting. In large-scale applications such as astrophysics, however, the required grid resolution can easily exceed 10K samples per spatial dimension, making resampling approaches appear infeasible. In this paper we demonstrate that even in these extreme cases such approaches perform surprisingly well, both in terms of memory requirement and rendering performance. We resample the particle data to a multiresolution multiblock grid, where the resolution of the blocks is dictated by the particle distribution. From this structure we build an octree grid, and we then compress each block in the hierarchy at no visual loss using wavelet-based compression. Since decompression can be performed on the GPU, it can be integrated effectively into GPU-based out-of-core volume ray-casting. We compare our approach to the perspective grid approach, which resamples at run-time into a view-aligned grid. We demonstrate considerably faster rendering times at high quality, at only a moderate memory increase compared to the raw particle set. © 2013 IEEE.

  3. The Grid challenge

    CERN Multimedia

    Lundquest, E

    2003-01-01

At a customer panel discussion during OracleWorld in San Francisco, grid computing was being pushed as the next big thing - even if panellists couldn't quite agree on what it is, what it will cost or when it will appear (1 page).

  4. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  5. Big Data Analytics for Demand Response: Clustering Over Space and Time

    Energy Technology Data Exchange (ETDEWEB)

    Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States); Kolte, Jahanvi [Nirma Univ., Gujarat (India); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-10-29

The pervasive deployment of advanced sensing infrastructure in Cyber-Physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volume and high velocity, two of the three pillars of Big Data, and have a time-series notion, as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks such as identifying the "right" customers among a diverse population to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for Big Data analytics, and propose a clustering technique for determining natural segmentation of customers and identification of temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios, without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.
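
    As a rough sketch of time-series customer segmentation of this kind, the following k-means example clusters normalized daily load profiles; the representation and algorithm here are generic stand-ins, not the paper's method.

```python
# Generic sketch of customer segmentation from daily load profiles; the
# paper's time-series representation differs. Data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
profiles = rng.random((1000, 96))       # 1000 customers x 96 15-min readings

# Normalize each profile so clusters capture shape, not magnitude.
shapes = profiles / profiles.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(shapes)
print(np.bincount(km.labels_))          # customers per consumption pattern
```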

  6. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    Science.gov (United States)

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities.

  7. The research and application of the power big data

    Science.gov (United States)

    Zhang, Suxiang; Zhang, Dong; Zhang, Yaping; Cao, Jinping; Xu, Huiming

    2017-01-01

Facing the worsening environmental crisis, improving energy efficiency is an important problem, and power big data is a key supporting tool for realizing demand-side management and response. With the promotion of smart power consumption, distributed clean energy, electric vehicles and similar technologies are being widely adopted; meanwhile, with the continued development of Internet of Things technology, ever more devices connect to endpoints in the grid, so a large amount of terminal equipment and new energy sources access the smart grid and produce massive heterogeneous, multi-state electricity data. These data constitute a valuable asset for power grid enterprises: the power big data. How to transform it into valuable knowledge and effective operation is an important problem that requires interoperation across the smart grid. In this paper, we research various applications of power big data and integrate cloud computing and big data technology, covering online monitoring of electricity consumption, short-term power load forecasting and energy efficiency analysis. Based on Hadoop, HBase and Hive, we realize the ETL and OLAP functions; we also adopt a parallel computing framework for the power load forecasting algorithms and propose a parallel locally weighted linear regression model; and we study an energy efficiency rating model to comprehensively evaluate the energy consumption level of electricity users, which allows users to understand their real-time energy consumption situation and adjust their electricity behavior to reduce consumption, providing a decision-making basis for the user. Taking an intelligent industrial park as an example, this paper implements electricity management. In the future, power big data will provide decision-making support tools for energy conservation and emissions reduction.
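
    To make the forecasting component concrete, here is a minimal single-machine locally weighted linear regression (LWLR) prediction step of the kind the paper parallelizes; the bandwidth, features and data are illustrative only.

```python
# Minimal locally weighted linear regression (LWLR) prediction at one query
# point; the paper parallelizes this idea. Bandwidth and data are illustrative.
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    # Gaussian weights centred on the query point
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    Xb = np.hstack([np.ones((len(X), 1)), X])        # add intercept column
    theta = np.linalg.pinv(Xb.T @ (w[:, None] * Xb)) @ Xb.T @ (w * y)
    return np.concatenate(([1.0], x_query)) @ theta

rng = np.random.default_rng(4)
X = rng.random((500, 1)) * 24                        # hour of day
y = np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 0.1, 500)
print(lwlr_predict(np.array([12.0]), X, y))          # load estimate at noon
```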

  8. 76 FR 24889 - Submission for OMB Review; Comment Request; Cancer Biomedical Informatics Grid® (caBIG®) Support...

    Science.gov (United States)

    2011-05-03

... to offer to their unique organizational goals and needs, so having this customized support option...; Comment Request; Cancer Biomedical Informatics Grid® (caBIG®) Support Service Provider (SSP... Grid® (caBIG®) Support Service Provider (SSP) Program (NCI). Type of Information...

  9. Implementation of grid-connected to/from off-grid transference for micro-grid inverters

    OpenAIRE

    Heredero Peris, Daniel; Chillón Antón, Cristian; Pages Gimenez, Marc; Gross, Gabriel Igor; Montesinos Miracle, Daniel

    2013-01-01

This paper presents the transfer of a microgrid converter from on-grid to off-grid operation and back when the converter is working in two different modes. In the first transfer method presented, the converter operates as a Current Source Inverter (CSI) when on-grid and as a Voltage Source Inverter (VSI) when off-grid. In the second transfer method, the converter is operated as a VSI both when on-grid and when off-grid. The two methods are implemented successfully in a real pla...

  10. A Guidebook on Grid Interconnection and Islanded Operation of Mini-Grid Power Systems Up to 200 kW

    Energy Technology Data Exchange (ETDEWEB)

    Greacen, Chris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Engel, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Quetchenbach, Thomas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-04-01

A Guidebook on Grid Interconnection and Islanded Operation of Mini-Grid Power Systems Up to 200 kW is intended to help meet the widespread need for guidance, standards, and procedures for interconnecting mini-grids with the central electric grid as rural electrification advances in developing countries, bringing these once separate power systems together. The guidebook aims to help owners and operators of renewable energy mini-grids understand the technical options available, safety and reliability issues, and engineering and administrative costs of different choices for grid interconnection. The guidebook is intentionally brief but includes a number of appendices that point the reader to additional resources for in-depth information. Not included in the scope of the guidebook are policy concerns about "who pays for what," how tariffs should be set, or other financial issues that are also paramount when "the little grid connects to the big grid."

  11. The dynamic management system for grid resources information of IHEP

    International Nuclear Information System (INIS)

    Gu Ming; Sun Gongxing; Zhang Weiyi

    2003-01-01

The Grid information system is an essential basis for building a Grid computing environment: it collects timely information on each resource in a Grid and provides an entire information view of all resources to the other components in a Grid computing system. Grid technology can strongly support the computing of HEP (High Energy Physics), with its big-science and multi-organization features. In this article, the architecture and implementation of a dynamic management system are described, based on the Grid and LDAP (Lightweight Directory Access Protocol), including a Web-based design for resource information collecting, querying and modifying. (authors)
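
    As a hypothetical illustration of querying an LDAP-based grid information service, the sketch below uses the ldap3 Python library; the host, base DN and attribute names are invented placeholders, not IHEP's actual schema.

```python
# Hypothetical ldap3 query against a grid information index; host, base DN
# and attribute names are invented placeholders, not IHEP's actual schema.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://giis.example.org:2135")      # placeholder GIIS host
conn = Connection(server, auto_bind=True)            # anonymous bind

conn.search(search_base="Mds-Vo-name=local,o=grid",  # MDS-style base (assumed)
            search_filter="(objectClass=*)",
            search_scope=SUBTREE,
            attributes=["cn", "freeCPUs", "totalJobs"])  # invented attributes

for entry in conn.entries:
    print(entry.entry_dn)
```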

  12. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

Full-text: The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process. (author)

  13. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies, but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore strives to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements on data management. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.

  14. How to deal with petabytes of data: the LHC Grid project

    International Nuclear Information System (INIS)

    Britton, D; Lloyd, S L

    2014-01-01

We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the 'Big Data' problem and how the Grid has been a forerunner of today's cloud computing. (review article)

  15. Planet-Scale Grid: A particle collider leads data grid developers to unprecedented dimensions

    CERN Multimedia

    Thibodeau, Patrick

    2005-01-01

    In 2007, scientists will begin smashing protons and ions together in a massive, multinational experiment to understand what the universe looked like tiny fractions of a second after the Big Bang. The particle accelerator used in this test will release a vast flood of data on a scale unlike anything seen before, and for that scientists will need a computing grid of equally great capability

  16. Think bigger developing a successful big data strategy for your business

    CERN Document Server

    Van Rijmenam, Mark

    2014-01-01

    Big data--the enormous amount of data that is created as virtually every movement, transaction, and choice we make becomes digitized--is revolutionizing business. Written for a non-technical audience, Think Bigger covers big data trends, best practices, and security concerns--as well as key technologies like Hadoop and MapReduce, and several crucial types of analyses. Offering real-world insight and explanations, this book provides a roadmap for organizations looking to develop a profitable big data strategy...and reveals why it's not something they can leave to the I.T. department.

  17. Big Data Analytics for Dynamic Energy Management in Smart Grids

    OpenAIRE

    Diamantoulakis, Panagiotis D.; Kapinas, Vasileios M.; Karagiannidis, George K.

    2015-01-01

    The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users' participation in order to...

  18. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  19. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  20. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian; Treib, Marc; Westermann, Rudiger

    2013-01-01

    Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid

  1. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, just to mention a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and optimize themselves in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  2. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing, scientists deal with datasets, not individual files. As a result, a task (composed of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...

  3. Addressing Data Veracity in Big Data Applications

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Computer Science; Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Electrical Engineering; Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Electrical Engineering

    2014-10-27

Big data applications such as in smart electric grids, transportation, and remote environment monitoring involve geographically dispersed sensors that periodically send back information to central nodes. In many cases, data from sensors is not available at central nodes at a frequency that is required for real-time modeling and decision-making. This may be due to physical limitations of the transmission networks, or due to consumers limiting frequent transmission of data from sensors located at their premises for security and privacy concerns. Such scenarios lead to a partial data problem and raise the issue of data veracity in big data applications. We describe a novel solution to the problem of making short-term predictions (up to a few hours ahead) in the absence of real-time data from sensors in the Smart Grid. A key implication of our work is that by using real-time data from only a small subset of influential sensors, we are able to make predictions for all sensors. We thus reduce the communication complexity involved in transmitting sensory data in Smart Grids. We use real-world electricity consumption data from smart meters to empirically demonstrate the usefulness of our method. Our dataset consists of data collected at 15-min intervals from 170 smart meters in the USC Microgrid for 7 years, totaling 41,697,600 data points.
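
    The key idea, predicting for all sensors from a small influential subset, can be sketched as follows; the selection rule here is a naive correlation ranking on synthetic data, standing in for (not reproducing) the authors' method.

```python
# Sketch of predicting one sensor's series from a few influential sensors;
# the selection rule below (correlation ranking on synthetic data) is a
# stand-in, not the authors' method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
data = rng.random((1000, 170))          # time steps x sensors (synthetic)
target = data[:, 0]                     # sensor to predict from history

# Rank the remaining sensors by absolute correlation with the target.
corr = np.abs([np.corrcoef(data[:, j], target)[0, 1] for j in range(1, 170)])
top = np.argsort(corr)[-5:] + 1         # five most correlated sensors

model = Ridge().fit(data[:, top], target)
print(model.score(data[:, top], target))   # in-sample fit quality
```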

  4. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  5. Where Big Data and Prediction Meet

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brase, Jim M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hart, Bill [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kusnezov, Dimitri [USDOE, Washington, DC (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-09-11

Our ability to assemble and analyze massive data sets, often referred to under the title of "big data", is an increasingly important tool for shaping national policy. This in turn has introduced issues from privacy concerns to cyber security. But as IBM's John Kelly emphasized in the last Innovation, making sense of the vast arrays of data will require radically new computing tools. In the past, technologies and tools for analysis of big data were viewed as quite different from the traditional realm of high performance computing (HPC) with its huge models of phenomena such as global climate or supporting the nuclear test moratorium. Looking ahead, this will change, with very positive benefits for both worlds. Societal issues such as global security, economic planning and genetic analysis demand increased understanding that goes beyond existing data analysis and reduction. The modeling world often produces simulations that are complex compositions of mathematical models and experimental data. This has resulted in outstanding successes such as the annual assessment of the state of the US nuclear weapons stockpile without underground nuclear testing. Ironically, while there were historically many tests conducted, this body of data provides only modest insight into the underlying physics of the system. A great deal of emphasis was thus placed on the level of confidence we can develop for the predictions. As data analytics and simulation come together, there is a growing need to assess the confidence levels in both data being gathered and the complex models used to make predictions. An example of this is assuring the security or optimizing the performance of critical infrastructure systems such as the power grid. If one wants to understand the vulnerabilities of the system or impacts of predicted threats, full-scale tests of the grid against threat scenarios are unlikely. Preventive measures would need to be predicated on well-defined margins of confidence in order

  6. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    Science.gov (United States)

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-10-10

The Smart Grid (SG) is conceived as the evolution of the current electrical grid, representing a big leap in terms of efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be among the most suitable technologies for the SG due to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators -- mainly very harsh radio propagation conditions and the lack of appropriate systems to empower WSN devices -- making most widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, network and application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validated within real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in an SG: on one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory, and, on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach.

  7. A socio-technical investigation of the smart grid: Implications for demand-side activities of electricity service providers

    Science.gov (United States)

    Corbett, Jacqueline Marie

    Enabled by advanced communication and information technologies, the smart grid represents a major transformation for the electricity sector. Vast quantities of data and two-way communications abilities create the potential for a flexible, data-driven, multi-directional supply and consumption network well equipped to meet the challenges of the next century. For electricity service providers ("utilities"), the smart grid provides opportunities for improved business practices and new business models; however, a transformation of such magnitude is not without risks. Three related studies are conducted to explore the implications of the smart grid on utilities' demand-side activities. An initial conceptual framework, based on organizational information processing theory, suggests that utilities' performance depends on the fit between the information processing requirements and capacities associated with a given demand-side activity. Using secondary data and multiple regression analyses, the first study finds, consistent with OIPT, a positive relationship between utilities' advanced meter deployments and demand-side management performance. However, it also finds that meters with only data collection capacities are associated with lower performance, suggesting the presence of information waste causing operational inefficiencies. In the second study, interviews with industry participants provide partial support for the initial conceptual model, new insights are gained with respect to information processing fit and information waste, and "big data" is identified as a central theme of the smart grid. To derive richer theoretical insights, the third study employs a grounded theory approach examining the experience of one successful utility in detail. Based on interviews and documentary data, the paradox of dynamic stability emerges as an essential enabler of utilities' performance in the smart grid environment. Within this context, the frames of opportunity, control, and data

  8. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which...... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only...... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  9. Visualizing big energy data

    DEFF Research Database (Denmark)

    Hyndman, Rob J.; Liu, Xueqin Amy; Pinson, Pierre

    2018-01-01

Visualization is a crucial component of data analysis. It is always a good idea to plot the data before fitting models, making predictions, or drawing conclusions. As sensors of the electric grid are collecting large volumes of data from various sources, power industry professionals are facing the challenge of visualizing such data in a timely fashion. In this article, we demonstrate several data-visualization solutions for big energy data through three case studies involving smart-meter data, phasor measurement unit (PMU) data, and probabilistic forecasts, respectively.

  10. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2010-01-01

    This book is an introduction to structured and unstructured grid methods in scientific computing, addressing graduate students, scientists as well as practitioners. Basic local and integral grid quality measures are formulated and new approaches to mesh generation are reviewed. In addition to the content of the successful first edition, a more detailed and practice oriented description of monitor metrics in Beltrami and diffusion equations is given for generating adaptive numerical grids. Also, new techniques developed by the author are presented, in particular a technique based on the inverted form of Beltrami’s partial differential equations with respect to control metrics. This technique allows the generation of adaptive grids for a wide variety of computational physics problems, including grid clustering to given function values and gradients, grid alignment with given vector fields, and combinations thereof. Applications of geometric methods to the analysis of numerical grid behavior as well as grid ge...

  11. Semantic web data warehousing for caGrid.

    Science.gov (United States)

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.
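
    For a flavour of the Semantic Web machinery involved, the sketch below loads an OWL file and runs a SPARQL query with the Python rdflib library; the file name and query are hypothetical, and Corvus's actual pipeline (UML-to-OWL transformation plus SETL) is more elaborate.

```python
# Illustrative only: loading an OWL model and running a SPARQL query with
# rdflib. The file name is hypothetical; Corvus's real pipeline first
# transforms caBIG UML models into OWL, which this sketch assumes was done.
from rdflib import Graph

g = Graph()
g.parse("catissue_model.owl", format="xml")  # hypothetical OWL (RDF/XML) file

# List every OWL class in the warehouse model.
q = """
SELECT ?cls WHERE {
  ?cls a <http://www.w3.org/2002/07/owl#Class> .
}
"""
for row in g.query(q):
    print(row.cls)
```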

  12. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Würthwein, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept and a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
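
    glideinWMS itself builds on HTCondor glideins; the standard-library toy below only mimics the pilot idea, i.e., a placeholder job that lands on a worker and then pulls real user jobs from a central queue:

    ```python
    # Toy pilot-job loop: each pilot drains a shared queue of user jobs.
    # Real glideinWMS delegates this to HTCondor glideins.
    import multiprocessing as mp
    from queue import Empty

    def run_user_job(payload):
        # Stand-in for executing a user's job description.
        return f"done({payload})"

    def pilot(job_queue, worker_id):
        # A real pilot first validates its host environment, then pulls work
        # from the central queue until none is left.
        while True:
            try:
                payload = job_queue.get(timeout=0.5)
            except Empty:
                break
            print(f"pilot on worker {worker_id}: {run_user_job(payload)}")

    if __name__ == "__main__":
        jobs = mp.Queue()
        for n in range(6):
            jobs.put(f"job-{n}")
        pilots = [mp.Process(target=pilot, args=(jobs, w)) for w in range(3)]
        for p in pilots:
            p.start()
        for p in pilots:
            p.join()
    ```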

  13. Optimal variable-grid finite-difference modeling for porous media

    International Nuclear Information System (INIS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-01-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs. (paper)
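
    The abstract's coefficient derivation (Taylor expansion checked against a plane-wave dispersion relation) can be illustrated for the standard uniform staggered grid; the variable grid-spacing/time-step case treated in the paper is more involved. A small sketch of the Taylor-matching step:

    ```python
    # Sketch: derive staggered-grid first-derivative FD coefficients by Taylor
    # matching (uniform spacing only; the paper extends this to variable grids).
    import numpy as np

    def staggered_fd_coefficients(N):
        """Coefficients c_1..c_N of the order-2N staggered first derivative:
        f'(0) ~ (1/h) * sum_n c_n * (f((n - 1/2) h) - f(-(n - 1/2) h))."""
        offsets = np.arange(1, N + 1) - 0.5
        # Row k keeps only the first-order Taylor term: sum_n c_n * m_n^(2k-1)
        # equals 1/2 for k = 1 and 0 for k = 2..N.
        A = np.array([offsets ** (2 * k - 1) for k in range(1, N + 1)])
        b = np.zeros(N)
        b[0] = 0.5
        return np.linalg.solve(A, b)

    print(staggered_fd_coefficients(1))  # [1.]
    print(staggered_fd_coefficients(2))  # [ 1.125  -0.0417]  i.e. 9/8, -1/24
    ```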

  14. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  15. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article covers an analysis of 'Big Data', which has been discussed over the last 10 years. The reasons and factors behind the issue are revealed. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis relates the 'Big Data' issue to a form of information barrier. This issue may be solved correctly, and it encourages the development of scientific and computational methods.

  16. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the silver bullet it has been touted to be. Here our main concern is the overall impact of big data: its current manifestation is constructing a Maginot Line in 21st-century science. Big data is no longer "lots of data" as a phenomenon; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Overall, big data is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem defining to address science challenges.

  17. Comprehensive analysis and evaluation of big data for main transformer equipment based on PCA and Apriority

    Science.gov (United States)

    Guo, Lijuan; Yan, Haijun; Hao, Yongqi; Chen, Yun

    2018-01-01

    As urban power grids develop toward high-reliability supply, it is necessary to adopt appropriate methods for the comprehensive evaluation of existing equipment. Given the breadth and dimensionality of power system data, big data mining is used to explore the potential laws and value of power system equipment data. Based on the monitoring data of main transformers and the records of defects and faults, this paper integrates data on the power grid equipment environment. Apriori is used as the association identification algorithm to extract the frequent correlation factors of the main transformer, and the potential dependencies in the big data are analyzed through support and confidence. The integrated data are then analyzed by PCA, and an integrated quantitative scoring model is constructed. The evaluation algorithm and scheme are validated on a test set and proved effective. This paper provides a new idea for data fusion in the smart grid and a reference for the further evaluation of power grid equipment big data.
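
    As a toy illustration of the paper's two ingredients, association mining with Apriori on defect records and PCA on the integrated monitoring matrix, the following sketch uses the mlxtend and scikit-learn libraries; all column names and thresholds are invented:

    ```python
    # Toy pairing of Apriori (association mining on transformer defect records)
    # with PCA (dimension reduction before scoring); names/thresholds invented.
    import pandas as pd
    from sklearn.decomposition import PCA
    from mlxtend.frequent_patterns import apriori, association_rules

    # One-hot defect/environment records for a small fleet of main transformers.
    records = pd.DataFrame({
        "high_oil_temp":   [1, 1, 0, 1, 0, 1],
        "dga_alarm":       [1, 1, 0, 0, 0, 1],
        "heavy_load_area": [1, 0, 0, 1, 1, 1],
        "winding_defect":  [1, 1, 0, 0, 0, 1],
    }, dtype=bool)

    # Frequent correlation factors, then rules ranked by support/confidence.
    frequent = apriori(records, min_support=0.4, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
    print(rules[["antecedents", "consequents", "support", "confidence"]])

    # PCA compresses the (toy) integrated matrix before building a scoring model.
    scores = PCA(n_components=2).fit_transform(records.astype(float).values)
    print(scores[:3])
    ```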

  18. Challenges facing production grids

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Ruth; /Fermilab

    2007-06-01

    Today's global communities of users expect quality of service from distributed Grid systems equivalent to that of their local data centers. This must be coupled with ubiquitous access to the ensemble of processing and storage resources across multiple Grid infrastructures. We are still facing significant challenges in meeting these expectations, especially in the underlying security, a sustainable and successful economic model, and smoothing the boundaries between administrative and technical domains. Using the Open Science Grid as an example, I examine the status and challenges of Grids operating in production today.

  19. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  20. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  1. Macedonian transmission grid capability and development

    International Nuclear Information System (INIS)

    Naumoski, K.; Achkoska, E.; Paunoski, A.

    2015-01-01

    The main task of the transmission grid is to guarantee the evacuation of electricity from production facilities and, at the same time, supply electricity to all customers in a secure, reliable and qualitative manner. In recent years, the transmission grid has been going through a period of fast and important development, as a result of the implementation of renewables and new technologies and the creation of an internal European electricity market. For these reasons, the capacity of the existing grid needs to be upgraded, either by optimizing the existing infrastructure or by constructing new transmission projects. Among the various solutions for strengthening the grid, the one with the minimal investment expenses for construction is selected. While planning the national transmission grid, MEPSO planners apply multi-scenario analyses in order to handle all uncertainties, particularly in the forecasts of loads, production and exchange of electricity, the location and size of new power plants, hydrological conditions, the integration of renewable sources and the evolution of the electricity market. Visions for the development of the European transmission grid are also considered. Special attention in the development plan is paid to modelling the power systems in the region of South-Eastern Europe and covering a wider area of the regional transmission grid with simulations of various market transactions. The Macedonian transmission grid is developed to satisfy all requirements for electricity production/supply and transits, irrespective of which scenario is realized on a long-term basis. The transmission development plan gives the road map for grid evolution from short-term and mid-term periods towards long-term horizons (15-20 years ahead). While creating long-term visions, a big challenge in front of transmission planners is the implementation of a NPP. The paper gives an overview of the planning process of the Macedonian transmission grid, comprising: definition of scenarios, planning methodology and assessment of

  2. Planning and designing smart grids: philosophical considerations

    NARCIS (Netherlands)

    Ribeiro, P.F.; Polinder, H.; Verkerk, M.J.

    2012-01-01

    The electric power grid is a crucial part of societal infrastructure and needs constant attention to maintain its performance and reliability. European grid project investments are currently valued at over 5 billion Euros and are estimated to reach 56 billion by 2020 [2]. Successful smart grid

  3. How people are critical to the success of Big Data

    NARCIS (Netherlands)

    Steen, M.G.D.; Boer, J. de; Beurden, M.H.P.H. van

    2016-01-01

    A buzz has emerged around Big Data: an emerging field that is concerned with capturing, storing, combining, visualizing and analysing large and diverse sets of data. Realizing the societal benefits of Data Driven Innovations requires that the innovations are used and adopted by people. In fact like

  4. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  5. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  6. Numerical analysis of the big bounce in loop quantum cosmology

    International Nuclear Information System (INIS)

    Laguna, Pablo

    2007-01-01

    Loop quantum cosmology (LQC) homogeneous models with a massless scalar field show that the big-bang singularity can be replaced by a big quantum bounce. To gain further insight into the nature of this bounce, we study the semidiscrete loop quantum gravity Hamiltonian constraint equation from the point of view of numerical analysis. For illustration purposes, we establish a numerical analogy between the quantum bounces and reflections in finite difference discretizations of wave equations triggered by the use of nonuniform grids or, equivalently, reflections found when solving numerically wave equations with varying coefficients. We show that the bounce is closely related to the method for the temporal update of the system and demonstrate that explicit time-updates in general yield bounces. Finally, we present an example of an implicit time-update devoid of bounces and show back-in-time, deterministic evolutions that reach and partially jump over the big-bang singularity
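
    The numerical analogy invoked above, spurious reflections triggered by nonuniform grids in explicit finite-difference schemes for wave equations, is easy to reproduce in a toy setting. The sketch below has nothing to do with LQC itself; it merely demonstrates the partial reflection of a pulse at a grid-spacing jump:

    ```python
    # Toy demo: an explicit leapfrog update for u_tt = c^2 u_xx on a grid whose
    # spacing jumps by 4x at x = 5 partially reflects a right-moving pulse.
    import numpy as np

    x = np.concatenate([np.arange(0.0, 5.0, 0.01), np.arange(5.0, 20.0, 0.04)])
    n = len(x)
    c, dt = 1.0, 0.004                                  # obeys CFL on the fine part
    u = np.exp(-((x - 2.0) ** 2) / 0.05)                # right-moving Gaussian pulse
    u_prev = np.exp(-((x + c * dt - 2.0) ** 2) / 0.05)  # same pulse at t = -dt

    dxm = x[1:-1] - x[:-2]
    dxp = x[2:] - x[1:-1]
    for _ in range(1200):                               # evolve to t = 4.8
        # Standard 3-point second derivative on a nonuniform grid.
        uxx = 2 * (dxp * u[:-2] - (dxm + dxp) * u[1:-1] + dxm * u[2:]) / (
            dxm * dxp * (dxm + dxp))
        u_next = np.zeros(n)
        u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + (c * dt) ** 2 * uxx
        u_prev, u = u, u_next

    # By t = 4.8 the transmitted pulse is past x ~ 6.8; anything left of the
    # interface is the spurious reflection caused by the spacing jump.
    print("reflected amplitude ~", np.abs(u[x < 4.5]).max())
    ```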

  7. Smart Grid communication middleware comparison distributed control comparison for the internet of things

    DEFF Research Database (Denmark)

    Petersen, Bo Søborg; Bindner, Henrik W.; Poulsen, Bjarne

    2017-01-01

    are possible by their performance, which is limited by the middleware characteristics, primarily interchangeable serialization and the Publish-Subscribe messaging pattern. The earlier paper "Smart Grid Serialization Comparison" (Petersen et al. 2017) aids in the choice of serialization, which has a big impact...

  8. Astronomy in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Yanxia Zhang

    2015-05-01

    Full Text Available The fields of Astrostatistics and Astroinformatics are vital for dealing with the big data issues now faced by astronomy. Like other disciplines in the big data era, astronomy exhibits the many "V" characteristics of big data. In this paper, we list the different data mining algorithms used in astronomy, along with data mining software and tools related to astronomical applications. We present SDSS, a project often referred to by other astronomical projects, as the most successful sky survey in the history of astronomy, and describe the factors influencing its success. We also discuss the success of Astrostatistics and Astroinformatics organizations and the conferences and summer schools on these issues that are held annually. All of the above indicates that astronomers and scientists from other areas are ready to face the challenges and opportunities provided by massive data volumes.

  9. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    Science.gov (United States)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over its long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different professional power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a power grid panoramic data center, effectively meeting the storage needs of power dispatching big data, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management and so on. It provides a solid foundation for follow-up in-depth analysis of power dispatching big data.
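
    A minimal routing sketch of the hybrid idea, with sqlite3 standing in for the relational store and a plain dict standing in for the NoSQL engine; all schema and field names are invented:

    ```python
    # Minimal hybrid-store router: structured rows go to a relational DB,
    # high-rate raw measurements go to a key-value store.
    import json
    import sqlite3

    relational = sqlite3.connect(":memory:")
    relational.execute("CREATE TABLE dispatch_event (ts TEXT, feeder TEXT, action TEXT)")
    kv_store = {}  # stand-in for an HBase/Cassandra-style engine

    def store(record):
        if record["kind"] == "event":  # structured, queried relationally
            relational.execute(
                "INSERT INTO dispatch_event VALUES (?, ?, ?)",
                (record["ts"], record["feeder"], record["action"]),
            )
        else:  # raw telemetry, keyed by (source, timestamp)
            kv_store[(record["source"], record["ts"])] = json.dumps(record["payload"])

    store({"kind": "event", "ts": "2017-10-01T12:00", "feeder": "F12", "action": "switch"})
    store({"kind": "telemetry", "source": "pmu-7", "ts": "2017-10-01T12:00:00.02",
           "payload": {"freq": 49.98, "angle": 12.4}})
    print(relational.execute("SELECT * FROM dispatch_event").fetchall())
    print(kv_store)
    ```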

  10. Authentication Method for Privacy Protection in Smart Grid Environment

    Directory of Open Access Journals (Sweden)

    Do-Eun Cho

    2014-01-01

    Full Text Available Recently, interest in green energy has been increasing as a means to resolve problems such as the exhaustion of energy sources and to manage energy effectively through the convergence of various fields. Therefore, smart grid projects, i.e., intelligent electrical grids for the accomplishment of low-carbon green growth, are being carried out in a rush. However, as IT is grafted onto the electrical grid, the security shortcomings of IT also appear in the smart grid, and the complexity of convergence aggravates the problem. Also, the various items of personal and payment information within the smart grid are gradually becoming big data and a target for external invasion and attack; thus, concerns about this matter are increasing. The purpose of this study is to analyze the security vulnerabilities and security requirements within the smart grid and the authentication and access control methods for privacy protection within the home network. Therefore, we propose a secure access authentication and remote control method for a user's home devices within the home network environment, and we present its security analysis. The proposed access authentication method blocks unauthorized external access and enables secure remote access to the home network and its devices with a secure message authentication protocol.
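
    The paper's exact protocol is not reproduced above; as a generic illustration of a secure message authentication pattern for remote home-device access, here is a nonce- and timestamp-protected HMAC request built from Python's standard library:

    ```python
    # Generic HMAC message authentication sketch, not the paper's protocol:
    # a shared key authenticates each command; nonce + timestamp limit replay.
    import hashlib
    import hmac
    import os
    import time

    shared_key = os.urandom(32)  # provisioned between user device and home gateway

    def make_request(key, command):
        nonce = os.urandom(16)
        ts = str(int(time.time())).encode()
        tag = hmac.new(key, nonce + ts + command, hashlib.sha256).digest()
        return nonce, ts, command, tag

    def verify_request(key, nonce, ts, command, tag, max_skew=30):
        fresh = abs(time.time() - int(ts)) <= max_skew  # bounds the replay window
        expected = hmac.new(key, nonce + ts + command, hashlib.sha256).digest()
        return fresh and hmac.compare_digest(expected, tag)

    req = make_request(shared_key, b"thermostat:set:21.5")
    print(verify_request(shared_key, *req))      # True: valid, fresh request
    print(verify_request(os.urandom(32), *req))  # False: wrong key
    ```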

  11. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  12. International Symposium on Grids and Clouds (ISGC) 2014

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC).“Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen the phenomenal growth in the production of data in all forms by all research communities to produce a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Application, Virtual Research Environment (including Middleware, tools, services, workflow, ... etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).

  13. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  14. Japan. Superconductivity for Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hayakawa, K.

    2012-11-15

    Currently, many smart grid projects are running or planned worldwide. These aim at controlling the electricity supply more efficiently and more stably in a new power network system. In Japan, superconductivity technology development projects in particular are carried out to contribute to the future smart grid. Japanese cable makers such as Sumitomo Electric and Furukawa Electric are leading in the production of high-temperature superconducting (HTS) power cables. The world's largest electric current and highest voltage superconductivity proving tests have been started this year. Big cities such as Tokyo are expected to introduce HTS power cables to reduce transport losses and to meet the increased electricity demand in the near future. Superconducting devices, HTS power cables, Superconducting Magnetic Energy Storage (SMES) and flywheels are the focus of new developments in cooperations between companies, universities and research institutes, funded by the Japanese research and development funding organization New Energy and Industrial Technology Development Organization (NEDO).

  15. Performance assessment of distributed communication architectures in smart grid.

    OpenAIRE

    Jiang, Jing; Sun, Hongjian

    2016-01-01

    The huge number of smart meters and the growing frequency of data readings have become a big challenge for data acquisition and processing in smart grid advanced metering infrastructure systems. This requires a distributed communication architecture in which multiple distributed meter data management systems (MDMSs) are deployed and meter data are processed locally. In this paper, we present the network model for supporting this distributed communication architecture and propos...

  16. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    Science.gov (United States)

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated across the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiment results are presented and discussed: (1) the load balancer yields a 1.5-fold wall-time improvement compared with a framework with a built-in data allocation strategy; (2) the summary statistic model is empirically verified on the grid framework and, compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reduces wall-clock time 8-fold and resource time 14-fold; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction of wall time compared with a naive scheme when datasets are relative
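
    The abstract names five backend operations (Upload, Retrieve, Remove, Load balancer, MapReduce templates) without giving signatures; the interface-only sketch below is invented to show one plausible shape such an API could take:

    ```python
    # Interface-only sketch of the five operations named in the abstract;
    # all signatures and bodies are invented, not the paper's actual API.
    from abc import ABC, abstractmethod

    class ImageStoreBackend(ABC):
        @abstractmethod
        def upload(self, image_id: str, data: bytes, metadata: dict) -> None: ...

        @abstractmethod
        def retrieve(self, image_id: str) -> bytes: ...

        @abstractmethod
        def remove(self, image_id: str) -> None: ...

        @abstractmethod
        def assign_node(self, image_id: str, nodes: list[str]) -> str:
            """Load balancer: pick a node in a heterogeneous cluster."""

        @abstractmethod
        def map_reduce(self, mapper, reducer, image_ids: list[str]):
            """MapReduce template, e.g. population summary statistics."""

    class ToyBackend(ImageStoreBackend):
        def __init__(self):
            self._store = {}
        def upload(self, image_id, data, metadata):
            self._store[image_id] = (data, metadata)
        def retrieve(self, image_id):
            return self._store[image_id][0]
        def remove(self, image_id):
            del self._store[image_id]
        def assign_node(self, image_id, nodes):
            return nodes[hash(image_id) % len(nodes)]
        def map_reduce(self, mapper, reducer, image_ids):
            return reducer([mapper(self.retrieve(i)) for i in image_ids])

    backend = ToyBackend()
    backend.upload("t1-001", b"\x00" * 64, {"modality": "T1"})
    print(backend.map_reduce(len, sum, ["t1-001"]))  # toy summary statistic: 64
    ```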

  17. The DataGrid Project

    CERN Document Server

    Ruggieri, F

    2001-01-01

    An overview of the objectives and status of the DataGrid Project is presented, together with a brief introduction to the Grid metaphor and some references to the Grid activities and initiatives related to DataGrid. High energy physics experiments have always requested state-of-the-art computing facilities to efficiently perform several computing activities related to the handling of large amounts of data and fairly large computing resources. Some of the ideas born inside the community to enhance the user-friendliness of all the steps in the computing chain have, at times, been successfully applied in other contexts as well: one bright example is the World Wide Web. The LHC computing challenge has triggered, inside the high energy physics community, the start of the DataGrid Project. The objective of the project is to enable next generation scientific exploration requiring intensive computation and analysis of shared large-scale databases. (12 refs).

  18. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  19. Cernavoda NPP integration in the Romanian grid

    International Nuclear Information System (INIS)

    Prodan, R.C.

    2000-01-01

    The intention of this material is to present our point of view on some specific matters that arise from having a relatively large power production unit (706 MW) connected to a national grid in which the second-largest units are only 330 MW. The material consists of three major parts. The first section presents the 'big picture' of the Romanian National Grid. The second section covers the role played by CNPP in the grid power balance and frequency/voltage adjustment. CNPP is located at the base of the daily load curve and thus does not normally participate in frequency adjustment. CNPP also contributes to increasing the dynamic stability of the National Grid. The third section is a more detailed presentation of CNPP behaviour during grid upsets, with reference to the reactor and turbine control systems, and also the types of transients that our plant could induce in the grid due to internal malfunctions. The overall unit control is based on the 'reactor power constant' policy, with all fluctuations in the power output to the grid being compensated by the Boiler Pressure Control System. Some features of the Turbine Electro-Hydraulic Control System and how it interacts with the Boiler Pressure Control System are also presented. The types of transients that CNPP could experience are reactor power setbacks (automatic ramped power reductions), reactor power step-backs (fast controlled power reductions) and unit trips, which are the most severe. From the grid point of view there are two ways to deal with such transients: to compensate for the power loss by increasing production elsewhere, and to disconnect unimportant power consumers. These actions are taken both automatically and manually (some details will be presented). (author)
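
    For scale, a back-of-envelope droop calculation illustrates why losing a 706 MW unit matters in a grid whose next-largest units are 330 MW; the power-frequency characteristic lambda below is an assumed value, not a figure from the paper:

    ```python
    # Linear droop relation: df = -dP / lam. The lambda value is an assumption
    # chosen for illustration, not a parameter of the Romanian grid.
    lam = 1500.0  # assumed network power-frequency characteristic, MW/Hz
    for lost_mw in (330.0, 706.0):
        print(f"loss of {lost_mw:.0f} MW -> quasi-steady df ~ {-lost_mw / lam:+.2f} Hz")
    ```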

  20. Improving the Success of Strategic Management Using Big Data.

    Science.gov (United States)

    Desai, Sapan S; Wilkerson, James; Roberts, Todd

    2016-01-01

    Strategic management involves determining organizational goals, implementing a strategic plan, and properly allocating resources. Poor access to pertinent and timely data misidentifies clinical goals, prevents effective resource allocation, and generates waste from inaccurate forecasting. Loss of operational efficiency diminishes the value stream, adversely impacts the quality of patient care, and hampers effective strategic management. We have pioneered an approach using big data to create competitive advantage by identifying trends in clinical practice, accurately anticipating future needs, and strategically allocating resources for maximum impact.

  1. Grid-Tied Photovoltaic Power System

    Science.gov (United States)

    Eichenberg, Dennis J.

    2011-01-01

    A grid-tied photovoltaic (PV) power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. Operating costs of a PV power system are low compared to conventional power technologies. This method can displace the highest-cost electricity during times of peak demand in most climatic regions, and thus reduce grid loading. Net metering is often used, in which independent power producers such as PV power systems are connected to the utility grid via the customers' main service panels and meters. When the PV power system is generating more power than required at that location, the excess power is provided to the utility grid. The customer pays the net of the power purchased when the on-site power demand is greater than the on-site power production, and the excess power is returned to the utility grid. Power generated by the PV system reduces utility demand, and the surplus power aids the community. Modern PV panels are readily available, reliable, efficient, and economical, with a life expectancy of at least 25 years. Modern electronics have been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical with a life expectancy equal to that of modern PV panels. The grid-tied PV power system was successfully designed and developed, and this served to validate the basic principles developed and the theoretical work that was performed. Grid-tied PV power systems are reliable, maintenance-free, long-life power systems, and are of significant value to NASA and the community. Of particular value are the analytical tools and capabilities that have been successfully developed. Performance predictions can be made confidently for grid-tied PV systems of various scales. The work was done under the NASA Hybrid Power Management (HPM
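
    The net-metering arithmetic described above is straightforward; this toy uses an invented flat, symmetric tariff and synthetic 24-hour profiles:

    ```python
    # Toy net-metering bill: the customer pays for net energy drawn from the
    # grid; surplus PV generation flows back. Tariff and profiles are invented.
    hourly_load_kwh = [1.2] * 8 + [0.8] * 8 + [1.5] * 8   # 24 h of demand
    hourly_pv_kwh   = [0.0] * 6 + [2.0] * 10 + [0.0] * 8  # 24 h of PV output

    net = [load - pv for load, pv in zip(hourly_load_kwh, hourly_pv_kwh)]
    imported = sum(x for x in net if x > 0)   # energy bought from the utility
    exported = -sum(x for x in net if x < 0)  # surplus returned to the grid

    tariff = 0.15  # $/kWh, assumed flat and symmetric for this sketch
    print(f"imported {imported:.1f} kWh, exported {exported:.1f} kWh")
    print(f"net-metered bill: ${tariff * (imported - exported):.2f}")
    ```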

  2. Ambiguities in the grid-inefficiency correction for Frisch-Grid Ionization Chambers

    International Nuclear Information System (INIS)

    Al-Adili, A.; Hambsch, F.-J.; Bencardino, R.; Oberstedt, S.; Pomp, S.

    2012-01-01

    Ionization chambers with Frisch grids have been applied very successfully to neutron-induced fission-fragment studies during the past 20 years. They are radiation resistant and can be easily adapted to the experimental conditions. The use of Frisch grids has the advantage of removing the angular dependency from the charge induced on the anode plate. However, due to the Grid Inefficiency (GI) in shielding the charges, the anode signal remains slightly angular dependent. Correcting for the GI is, however, essential to determine the correct energy of the ionizing particles, and GI corrections can amount to a few percent of the anode signal. Presently, two contradicting correction methods are considered in the literature: the first method adds the angular-dependent part of the signal to the signal pulse height, while the second subtracts the former from the latter. Both the additive and the subtractive approach were investigated in an experiment where a Twin Frisch-Grid Ionization Chamber (TFGIC) was employed to detect the spontaneous fission fragments (FF) emitted by a 252 Cf source. Two parallel-wire grids with different wire spacing (1 and 2 mm, respectively) were used individually on the same chamber side, with all other experimental conditions unchanged. The 2 mm grid featured more than double the GI of the 1 mm grid. The induced charge on the anode in both measurements was compared, before and after GI correction. Before GI correction, the 2 mm grid resulted in a lower pulse-height distribution than the 1 mm grid. After applying both GI corrections to both measurements, only the additive approach led to consistent, grid-independent pulse-height distributions; the subtractive correction, on the contrary, led to inconsistent, grid-dependent results. It is also shown that the impact of either correction method on the FF mass distributions of 235 U(n th , f) is small.

  3. Big Data Management in US Hospitals: Benefits and Barriers.

    Science.gov (United States)

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered as an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate a hospital's ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that there were a number of benefits expected by hospitals when using big data analytics, including cost savings and business intelligence. By using big data, many hospitals have recognized that there have been challenges, including lack of experience and cost of developing the analytics. Many hospitals will need to invest in the acquiring of adequate personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  4. Fixing the Big Bang Theory's Lithium Problem

    Science.gov (United States)

    Kohler, Susanna

    2017-02-01

    How did our universe come into being? The Big Bang theory is a widely accepted and highly successful cosmological model of the universe, but it does introduce one puzzle: the cosmological lithium problem. Have scientists now found a solution? In the Big Bang theory, the universe expanded rapidly from a very high-density and high-temperature state dominated by radiation. This theory has been validated again and again: the discovery of the cosmic microwave background radiation and observations of the large-scale structure of the universe both beautifully support the Big Bang theory, for instance. But one pesky trouble spot remains: the abundance of lithium. According to Big Bang nucleosynthesis theory, primordial nucleosynthesis ran wild during the first half hour of the universe's existence. This produced most of the universe's helium and small amounts of other light nuclides, including deuterium and lithium. But while predictions match the observed primordial deuterium and helium abundances, Big Bang nucleosynthesis theory overpredicts the abundance of primordial lithium by about a factor of three. This inconsistency is known as the cosmological lithium problem, and attempts to resolve it using conventional astrophysics and nuclear physics over the past few decades have not been successful. In a recent publication led by Suqing Hou (Institute of Modern Physics, Chinese Academy of Sciences) and advisor Jianjun He (Institute of Modern Physics and National Astronomical Observatories, Chinese Academy of Sciences), however, a team of scientists has proposed an elegant solution to this problem. [Two figure captions from the source article are omitted here: one showing the primary reactions of Big Bang nucleosynthesis with the flux ratios predicted by the authors' model (Hou et al. 2017), the other showing the time and temperature evolution of the primordial light-element abundances, with the authors' model as dotted lines.]

  5. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  6. Entity information life cycle for big data master data management and information integration

    CERN Document Server

    Talburt, John R

    2015-01-01

    Entity Information Life Cycle for Big Data walks you through the ins and outs of managing entity information so you can successfully achieve master data management (MDM) in the era of big data. This book explains big data's impact on MDM and the critical role of entity information management system (EIMS) in successful MDM. Expert authors Dr. John R. Talburt and Dr. Yinle Zhou provide a thorough background in the principles of managing the entity information life cycle and provide practical tips and techniques for implementing an EIMS, strategies for exploiting distributed processing to hand

  7. Police Spatial Big Data Location Code and Its Application Prospect

    Directory of Open Access Journals (Sweden)

    HU Xiaoguang

    2016-12-01

    Full Text Available Police spatial big data provides a rich decision-making basis for police work, but it also brings challenges, such as the complex integration of large data, difficulties in relating multi-scale information, and non-unique location identification. Thus, how to make the data better serve the reform and development of police work is a problem that needs to be studied. In this paper, we propose a location identification method to solve the existing problems. Based on a subdivision grid, we design a location encoding method for police spatial big data and take domicile location identification as a case study. Finally, the prospects for its application are presented. Thus, a new idea is proposed to solve the problems existing in the organization and application of police spatial data.
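
    The paper's actual coding scheme is not reproduced above; a geohash-style toy shows the general subdivision-grid idea, where recursive bisection of latitude and longitude yields codes whose shared prefixes encode spatial proximity:

    ```python
    # Geohash-style toy, illustrative only: encode a lat/lon into a grid code
    # by recursive bisection; nearby locations share a long code prefix.
    def grid_code(lat, lon, levels=20):
        lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
        bits = []
        for _ in range(levels):
            lon_mid = (lon_lo + lon_hi) / 2
            bits.append("1" if lon >= lon_mid else "0")
            lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
            lat_mid = (lat_lo + lat_hi) / 2
            bits.append("1" if lat >= lat_mid else "0")
            lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        return "".join(bits)

    # Shared prefixes make joins and multi-scale aggregation cheap.
    print(grid_code(39.9042, 116.4074)[:16])  # Beijing
    print(grid_code(39.9087, 116.3975)[:16])  # ~1 km away: same 16-bit prefix
    ```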

  8. Informatics Solutions for Prosumers connected to Smart Grids

    Directory of Open Access Journals (Sweden)

    Simona Vasilica OPREA

    2015-03-01

    Full Text Available This paper gives a brief overview of electricity consumption optimization based on the consumption profiles of electricity prosumers connected to smart grids. The main objective of this approach is the identification of informatics solutions for electricity consumption optimization in order to decrease the electricity bill. In this way, larger-scale integration of renewable energy sources is enabled, so the entire society gains benefits. This paper describes the main objectives of such an informatics system and the stages of its implementation. The system will analyze the specific profile and behaviour of each electricity consumer or prosumer, automatically assist him in making the right decisions and offer optimal advice for the usage of controllable and non-controllable appliances. Based on big data transfers from electricity consumers and prosumers, it will also serve as a powerful tool for grid operators, which will be able to better plan their resources.
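
    One concrete optimization such a system could perform is shifting a controllable appliance to the cheapest hours; the sketch below, with invented prices and loads, finds the cheapest contiguous window of the day:

    ```python
    # Invented day-ahead prices ($/kWh, 24 hourly values) and one controllable
    # appliance (e.g. a 2 kW charger that must run for 3 contiguous hours).
    prices = [0.30] * 7 + [0.15] * 4 + [0.30] * 6 + [0.10] * 3 + [0.30] * 4
    run_hours, appliance_kw = 3, 2.0

    def cheapest_window(prices, hours):
        # Cost of every possible start hour; pick the minimum.
        costs = [sum(prices[h:h + hours]) for h in range(len(prices) - hours + 1)]
        return min(range(len(costs)), key=costs.__getitem__)

    start = cheapest_window(prices, run_hours)
    cost = sum(prices[start:start + run_hours]) * appliance_kw
    base = sum(prices[:run_hours]) * appliance_kw  # naive: start at midnight
    print(f"run from {start}:00 for {run_hours} h: ${cost:.2f} vs ${base:.2f}")
    ```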

  9. The open science grid

    International Nuclear Information System (INIS)

    Pordes, R.

    2004-01-01

    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and active experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The Open Science Grid (OSG) model builds upon the successful approach of last year's joint Grid2003 project. The Grid3 shared infrastructure has for over eight months provided significant computational resources and throughput to a range of applications, including ATLAS and CMS data challenges, SDSS, LIGO, and biology analyses, and computer science demonstrators and experiments. To move towards LHC-scale data management, access and analysis capabilities, we must increase the scale, services, and sustainability of the current infrastructure by an order of magnitude or more. Thus, we must achieve a significant upgrade in its functionalities and technologies. The initial OSG partners will build upon a fully usable, sustainable and robust grid. Initial partners include the US LHC collaborations, DOE and NSF Laboratories and Universities and Trillium Grid projects. The approach is to federate with other application communities in the U.S. to build a shared infrastructure open to other sciences and capable of being modified and improved to respond to needs of other applications, including CDF, D0, BaBar, and RHIC experiments. We describe the application-driven, engineered services of the OSG, short term plans and status, and the roadmap for a consortium, its partnerships and national focus

  10. SLGRID: spectral synthesis software in the grid

    Science.gov (United States)

    Sabater, J.; Sánchez, S.; Verdes-Montenegro, L.

    2011-11-01

    SLGRID (http://www.e-ciencia.es/wiki/index.php/Slgrid) is a pilot project proposed by the e-Science Initiative of Andalusia (eCA) and supported by the Spanish e-Science Network in the frame of the European Grid Initiative (EGI). The aim of the project was to adapt the spectral synthesis software Starlight (Cid-Fernandes et al. 2005) to the Grid infrastructure. Starlight is used to estimate the underlying stellar populations (their ages and metallicities) using an optical spectrum; hence, it is possible to obtain a clean nebular spectrum that can be used for the diagnosis of the presence of an Active Galactic Nucleus (Sabater et al. 2008, 2009). The typically serial execution of the code for big samples of galaxies made it ideal for integration into the Grid. We obtain an improvement in computational time of order N, N being the number of nodes available in the Grid. In a real case we obtained our results in 3 hours with SLGRID instead of the 60 days spent using Starlight on a PC. The code has already been ported to the Grid. The first tests were made within the e-CA infrastructure and, later, it was tested and improved with the collaboration of the CETA-CIEMAT. The SLGRID project has recently been renewed. In the future it is planned to adapt the code for the reduction of data from Integral Field Units, where each dataset is composed of hundreds of spectra. Electronic version of the poster at http://www.iaa.es/~jsm/SEA2010
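
    The order-N speedup comes from the fact that per-galaxy fits are independent; locally, the same pattern looks like the sketch below, with a process pool standing in for Grid nodes and a placeholder function standing in for a Starlight run:

    ```python
    # Order-N sketch: independent per-galaxy fits distributed over N workers.
    # fit_spectrum is only a stand-in for one serial Starlight execution.
    from multiprocessing import Pool

    def fit_spectrum(galaxy_id):
        # Placeholder for one spectral synthesis run (fake heavy work).
        return galaxy_id, sum(i * i for i in range(200_000))

    if __name__ == "__main__":
        galaxies = [f"gal-{i:04d}" for i in range(64)]
        with Pool(processes=8) as pool:  # "N nodes"
            results = pool.map(fit_spectrum, galaxies)
        print(len(results), "fits completed")
    ```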

  11. FermiGrid - experience and future plans

    International Nuclear Information System (INIS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Timm, S.; Yocum, D.

    2007-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems

  12. Big data in energy. Final project

    International Nuclear Information System (INIS)

    Fraysse, Clemence; Plaisance, Brice

    2015-01-01

    Within the context of the development of the use of ever more abundant digital data in energy production, distribution and consumption networks, for instance as real-time input to Smart Grids, the authors describe the present energy sector, its recent evolutions, its actors and its future challenges. They focus on the case of France, but also refer to other countries where these evolutions of the energy sector are already further advanced. They discuss the transformations generated by the emergence of Big Data across the whole value chain. They also discuss the various challenges associated with these transformations, notably for the energy transition and for a better integration of renewable energies into the national energy grid, but also in terms of the emergence of an energy-related data services sector and the upheaval of business models. They finally discuss the various obstacles that the Big Data revolution will have to overcome to deeply transform the energy sector, notably the risk of malevolent use of data and of a loss of confidence from consumers.

  13. MrGrid: a portable grid based molecular replacement pipeline.

    Directory of Open Access Journals (Sweden)

    Jason W Schmidberger

    Full Text Available BACKGROUND: The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. METHODOLOGY/PRINCIPAL FINDINGS: MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. CONCLUSIONS: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success.

  14. Benefits, Challenges and Tools of Big Data Management

    Directory of Open Access Journals (Sweden)

    Fernando L. F. Almeida

    2017-10-01

    Full Text Available Big Data is one of the most prominent fields of knowledge and research, one that has generated high repercussions in the digital transformation of organizations in recent years. Big Data's main goal is to improve work processes through the analysis and interpretation of large amounts of data. Knowing how Big Data works, and knowing its benefits, challenges and tools, is essential for business success. Our study performs a systematic review of the Big Data field adopting a mind-map approach, which allows us to identify its main elements and dependencies easily and visually. The findings identified and mapped a total of 12 main branches of benefits, challenges and tools, and a total of 52 sub-branches across the main areas of the model.

  15. A decision support system using combined-classifier for high-speed data stream in smart grid

    Science.gov (United States)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated continuously by big power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among the decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS so that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier returns more accurate answers than existing ones.
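
    The CBC/MTC combination rule is not detailed above; the hedged stand-in below pairs an incremental scikit-learn model (in the main-classifier role) with a small sliding-window cache (in the cache-based role) under an invented "prefer the confident cache" rule:

    ```python
    # Stand-in for the combined-classifier idea: an incremental "main" model
    # learns from the full stream, a recent-instance cache catches local drift.
    # The combination rule here is a simplification, not the paper's algorithm.
    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    main = SGDClassifier()                 # incremental main classifier (MTC role)
    cache_X, cache_y, CACHE = [], [], 200  # sliding-window cache (CBC role)

    def learn(x, y):
        main.partial_fit(x.reshape(1, -1), [y], classes=[0, 1])
        cache_X.append(x)
        cache_y.append(y)
        del cache_X[:-CACHE], cache_y[:-CACHE]  # keep only recent instances

    def predict(x):
        if len(set(cache_y)) > 1:
            knn = KNeighborsClassifier(n_neighbors=5).fit(cache_X, cache_y)
            proba = knn.predict_proba([x])[0]
            if proba.max() > 0.8:          # confident recent evidence wins
                return int(knn.classes_[proba.argmax()])
        return int(main.predict(x.reshape(1, -1))[0])

    for t in range(1000):                  # simulated grid-measurement stream
        x = rng.normal(size=4)
        y = int(x[0] + x[1] > 0)           # toy label, e.g. "incident risk"
        if t > 10:
            predict(x)
        learn(x, y)
    print("stream processed")
    ```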

  16. Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration; Karpenko, D

    2013-01-01

    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing in the Grid, their benefits come at costs and result in delays making the performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of the Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability...

  18. Study of Security Attributes of Smart Grid Systems- Current Cyber Security Issues

    Energy Technology Data Exchange (ETDEWEB)

    Wayne F. Boyer; Scott A. McBride

    2009-04-01

    This document provides information for a report to Congress on Smart Grid security as required by Section 1309 of Title XIII of the Energy Independence and Security Act of 2007. The security of any future Smart Grid is dependent on successfully addressing the cyber security issues associated with the nation’s current power grid. The Smart Grid will utilize numerous legacy systems and technologies that are currently installed. Therefore, known vulnerabilities in these legacy systems must be remediated and the associated risks mitigated in order to increase the security and success of the Smart Grid. The implementation of the Smart Grid will include the deployment of many new technologies and multiple communication infrastructures. This report describes the main technologies that support the Smart Grid and summarizes the status of implementation into the existing U.S. electrical infrastructure.

  19. FermiGrid-experience and future plans

    International Nuclear Information System (INIS)

    Chadwick, K; Berman, E; Canal, P; Hesselroth, T; Garzoglio, G; Levshina, T; Sergeev, V; Sfiligoi, I; Sharma, N; Timm, S; Yocum, D R

    2008-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and enables movement of work (jobs, data) between the Campus Grid and national grids such as the Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.

  20. Distributed Grid Experiences in CMS DC04

    CERN Document Server

    Fanfani, A; Grandi, C; Legrand, I; Suresh, S; Campana, S; Donno, F; Jank, W; Sinanis, N; Sciabà, A; García-Abia, P; Hernández, J; Ernst, M; Anzar, A; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatine, N; Perelmutov, T; Pordes, R; Ratnikova, N; Weigand, J; Wu, Y; Colling, D J; MacEvoy, B; Tallini, H; Wakefield, L; De Filippis, N; Donvito, G; Maggi, G; Bonacorsi, D; Dell'Agnello, L; Martelli, B; Biasotto, M; Fantinel, S; Corvo, M; Fanzago, F; Mazzucato, M; Tuura, L; Martin, T; Letts, J; Bockjoo, K; Prescott, C; Rodríguez, J; Zahn, A; Bradley, D

    2005-01-01

    In March-April 2004 the CMS experiment undertook a Data Challenge (DC04). During the previous 8 months CMS had undertaken a large simulated event production. The goal of the challenge was to run the CMS reconstruction for a sustained period at a 25 Hz input rate, distribute the data to the CMS Tier-1 centers and analyze them at remote sites. Grid environments developed in Europe by the LHC Computing Grid (LCG) and in the US with Grid2003 were utilized to complete aspects of the challenge. A description of the experiences, successes and lessons learned from both grid infrastructures is presented.

  1. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  2. WE-H-BRB-00: Big Data in Radiation Oncology

    International Nuclear Information System (INIS)

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data to impact patient quality care and to enhance the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  3. Parallel Sn Sweeps on Unstructured Grids: Algorithms for Prioritization, Grid Partitioning, and Cycle Detection

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence

    2005-01-01

    The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.
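
    Algorithm (d) above, which detects and removes the dependency cycles that would otherwise prevent a sweep from completing, can be illustrated sequentially. The sketch below (Python, for illustration only; the paper's algorithms are parallel message-passing codes) finds a cycle in a task graph by depth-first search and deletes one edge per cycle:

        def find_cycle(graph):
            """Return one directed cycle as a node list ending where it
            starts, or None. graph: dict node -> list of successors."""
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {}
            stack = []

            def dfs(v):
                color[v] = GRAY
                stack.append(v)
                for w in graph.get(v, ()):
                    if color.get(w, WHITE) == GRAY:   # back edge closes a cycle
                        return stack[stack.index(w):] + [w]
                    if color.get(w, WHITE) == WHITE:
                        cycle = dfs(w)
                        if cycle:
                            return cycle
                stack.pop()
                color[v] = BLACK
                return None

            for v in list(graph):
                if color.get(v, WHITE) == WHITE:
                    cycle = dfs(v)
                    if cycle:
                        return cycle
            return None

        def break_cycles(graph):
            """Delete the closing edge of each detected cycle so that the
            sweep ordering becomes acyclic."""
            cycle = find_cycle(graph)
            while cycle:
                u, w = cycle[-2], cycle[-1]
                graph[u] = [x for x in graph[u] if x != w]
                cycle = find_cycle(graph)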

  4. From testbed to reality grid computing steps up a gear

    CERN Multimedia

    2004-01-01

    "UK plans for Grid computing changed gear this week. The pioneering European DataGrid (EDG) project came to a successful conclusion at the end of March, and on 1 April a new project, known as Enabling Grids for E-Science in Europe (EGEE), begins" (1 page)

  5. Banking Wyoming big sagebrush seeds

    Science.gov (United States)

    Robert P. Karrfalt; Nancy Shaw

    2013-01-01

    Five commercially produced seed lots of Wyoming big sagebrush (Artemisia tridentata Nutt. var. wyomingensis (Beetle & Young) S.L. Welsh [Asteraceae]) were stored under various conditions for 5 y. Purity, moisture content as measured by equilibrium relative humidity, and storage temperature were all important factors in successful seed storage. Our results indicate...

  6. Addressing the Big-Earth-Data Variety Challenge with the Hierarchical Triangular Mesh

    Science.gov (United States)

    Rilee, Michael L.; Kuo, Kwo-Sen; Clune, Thomas; Oloso, Amidu; Brown, Paul G.; Yu, Honfeng

    2016-01-01

    We have implemented an updated Hierarchical Triangular Mesh (HTM) as the basis for a unified data model and an indexing scheme for geoscience data to address the variety challenge of Big Earth Data. We observe that, in the absence of variety, the volume challenge of Big Data is relatively easily addressable with parallel processing. The more important challenge in achieving optimal value with a Big Data solution for Earth Science (ES) data analysis, however, is being able to achieve good scalability with variety. With HTM unifying at least the three popular data models, i.e. Grid, Swath, and Point, used by current ES data products, data preparation time for integrative analysis of diverse datasets can be drastically reduced and better variety scaling can be achieved. In addition, since HTM is also an indexing scheme, when it is used to index all ES datasets, data placement alignment (or co-location) on the shared nothing architecture, which most Big Data systems are based on, is guaranteed and better performance is ensured. Moreover, our updated HTM encoding turns most geospatial set operations into integer interval operations, gaining further performance advantages.

  7. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  8. Disrupt your mindset to transform your business with Big Data: a guide to strategic thinking

    DEFF Research Database (Denmark)

    Rydén, Pernille; Ringberg, Torsten; Østergaard Jacobsen, Per

    “Mindsets eat Big Data strategy for breakfast!” Billions of dollars are spent investing in Big Data technology, yet the yields on these investments are often unsatisfying. Discover why a successful Big Data strategy and business transformation always starts with the right managerial mindset. We...

  9. Grid-supported Medical Digital Library.

    Science.gov (United States)

    Kosiedowski, Michal; Mazurek, Cezary; Stroinski, Maciej; Weglarz, Jan

    2007-01-01

    Secure, flexible and efficient storing and accessing of digital medical data is one of the key elements for delivering successful telemedical systems. To this end, the grid technologies designed and developed over recent years, and the grid infrastructures deployed with their use, seem to provide an excellent opportunity for the creation of a powerful environment capable of delivering tools and services for medical data storage, access and processing. In this paper we present the early results of our work towards establishing a Medical Digital Library supported by grid technologies and discuss future directions of its development. This work is part of the "Telemedycyna Wielkopolska" project, which aims to develop a telemedical system for the support of regional healthcare.

  10. Simulation Experiments: Better data, not just big data

    OpenAIRE

    Sanchez, Susan M.

    2014-01-01

    Proceedings of the 2014 Winter Simulation Conference (WSC '14), pages 805-816. Data mining tools have been around for several decades, but the term “big data” has only recently captured widespread attention. Numerous success stories have been promulgated as organizations have sifted through massive volumes of data to find interesting patterns that are, in turn, transformed into actionable information. Yet a key drawback to the big data paradigm is that it relies on obser...

  11. Predicting Academic Success: General Intelligence, "Big Five" Personality Traits, and Work Drive

    Science.gov (United States)

    Ridgell, Susan D.; Lounsbury, John W.

    2004-01-01

    General intelligence, Big Five personality traits, and the construct Work Drive were studied in relation to two measures of collegiate academic performance: a single course grade received by undergraduate students in an introductory psychology course, and self-reported GPA. General intelligence and Work Drive were found to be significantly…

  12. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and, specifically, as it relates to patient and clinical care. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than it solves, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  13. Integrating R and Hadoop for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2014-06-01

    Full Text Available Analyzing and working with big data can be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools successfully and widely used for the storage and processing of big data sets on clusters of commodity hardware is Hadoop. The Hadoop framework contains libraries, a distributed file system (HDFS) and a resource-management platform, and implements a version of the MapReduce programming model for large-scale data processing. In this paper we investigate the possibilities of integrating Hadoop with R, which is a popular software package used for statistical computing and data visualization. We present three ways of integrating them: R with Streaming, Rhipe and RHadoop, and we emphasize the advantages and disadvantages of each solution.
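
    Of the three routes, the Streaming one is the easiest to picture, because Hadoop Streaming talks to any executable over stdin/stdout using tab-separated key/value lines - which is exactly how an R script is plugged in. The minimal word-count mapper and reducer below are written in Python purely to show the shape of the protocol; an R version would read and write the same line format:

        #!/usr/bin/env python3
        # mapper.py -- emits one "word<TAB>1" line per token read from stdin.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

        #!/usr/bin/env python3
        # reducer.py -- Hadoop delivers mapper output sorted by key, so equal
        # words arrive consecutively and a single running counter suffices.
        import sys

        current, count = None, 0
        for line in sys.stdin:
            word, value = line.rstrip("\n").split("\t", 1)
            if word == current:
                count += int(value)
            else:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, int(value)
        if current is not None:
            print(f"{current}\t{count}")

    Both scripts are then handed to the streaming jar, e.g. hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input in -output out (the jar's exact path varies by Hadoop distribution).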

  14. A proposed framework of big data readiness in public sectors

    Science.gov (United States)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery in the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of the main items on the national agenda. Regardless of the government's commitment to promote big data amongst government agencies, the degree of readiness of the government agencies as well as their employees is crucial in ensuring successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate the perceived readiness for big data potentials amongst Malaysian government agencies. The perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness for big data potentials among public agencies, and into the outcomes expected from greater or lower change readiness in the public sector.

  15. Big data analytics to improve cardiovascular care: promise and challenges.

    Science.gov (United States)

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  16. BigData as a Driver for Capacity Building in Astrophysics

    Science.gov (United States)

    Shastri, Prajval

    2015-08-01

    Exciting public interest in astrophysics acquires new significance in the era of Big Data. Since Big Data involves advanced technologies of both software and hardware, astrophysics with Big Data has the potential to inspire young minds with diverse inclinations - i.e., not just those attracted to physics but also those pursuing engineering careers. Digital technologies have become steadily cheaper, which can enable considerable expansion of the Big Data user pool, especially to communities that may not yet be in the astrophysics mainstream but have high potential because of access to these technologies. For success, however, capacity building at the early stages becomes key. The development of on-line pedagogical resources in astrophysics, astrostatistics, data-mining and data visualisation that are designed around the big facilities of the future can be an important effort that drives such capacity building, especially if facilitated by the IAU.

  17. Mini-grid based off-grid electrification to enhance electricity access in developing countries: What policies may be required?

    International Nuclear Information System (INIS)

    Bhattacharyya, Subhes C.; Palit, Debajit

    2016-01-01

    With 1.2 billion people still lacking electricity access as of 2013, electricity access remains a major global challenge. Although mini-grid based electrification has received attention in recent times, its full exploitation requires policy support covering a range of areas. Distilling the experience from a five-year research project, OASYS South Asia, this paper presents a summary of the research findings and shares the experience from four demonstration activities. It suggests that cost-effective universal electricity service remains difficult to achieve, and that reaching the universal electrification target by 2030 will remain a challenge for the less developed countries. Financial, organisational and governance weaknesses hinder successful implementation of projects in many countries. The paper then provides 10 policy recommendations to promote mini-grids as a complementary route to grid extension for successful electricity access outcomes. - Highlights: •The academic and action research activities undertaken through the OASYS South Asia Project are reported. •Evidence produced through a multi-dimensional participatory framework supplemented by four demonstration projects. •Funding and regulatory challenges militate against universal electrification objectives by 2030. •Innovative business approaches linking local mini-grids and livelihood opportunities exist. •Enabling policies are suggested to exploit such options.

  18. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon that can no longer be ignored in our society. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are often mentioned in relation to big data entail? As an introduction to

  19. Big Data Analytics in Chemical Engineering.

    Science.gov (United States)

    Chiang, Leo; Lu, Bo; Castillo, Ivan

    2017-06-07

    Big data analytics is the journey to turn data into insights for more informed business and operational decisions. As the chemical engineering community is collecting more data (volume) from different sources (variety), this journey becomes more challenging in terms of using the right data and the right tools (analytics) to make the right decisions in real time (velocity). This article highlights recent big data advancements in five industries, including chemicals, energy, semiconductors, pharmaceuticals, and food, and then discusses technical, platform, and culture challenges. To reach the next milestone in multiplying successes to the enterprise level, government, academia, and industry need to collaboratively focus on workforce development and innovation.

  20. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Background: Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the capacity of a single institution. Results: This survey reviews the latest grid technologies from the viewpoints of the computing grid, the data grid and the knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge. Conclusion: Extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as the time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  1. Simulating Smoke Filling in Big Halls by Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    W. K. Chow

    2011-01-01

    Full Text Available Many tall halls of big space volume have been built, and are to be built, in many construction projects in the Far East, particularly Mainland China, Hong Kong, and Taiwan. Smoke is identified as the key hazard to handle. Consequently, smoke exhaust systems are specified in the fire codes in those areas. An update on applying Computational Fluid Dynamics (CFD) in smoke exhaust design is presented in this paper. Key points to note in CFD simulations of smoke filling due to a fire in a big hall are discussed. Mathematical aspects concerning the discretization of the partial differential equations and the algorithms for solving the velocity-pressure linked equations are briefly outlined. Results predicted by CFD with different free boundary conditions are compared with those from room fire tests. Standards on grid size, relaxation factors, convergence criteria, and false diffusion should be set up for numerical experiments with CFD.

  2. Grid interoperability: joining grid information systems

    International Nuclear Information System (INIS)

    Flechl, M; Field, L

    2008-01-01

    A grid is defined as being 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations'. Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which are using slightly different middleware. Grid interoperation is trying to bridge these differences and enable Virtual Organizations to access resources at the institutions independent of their grid project affiliation. Grid interoperation is usually a bilateral activity between two grid infrastructures. Recently within the Open Grid Forum, the Grid Interoperability Now (GIN) Community Group is trying to build upon these bilateral activities. The GIN group is a focal point where all the infrastructures can come together to share ideas and experiences on grid interoperation. It is hoped that each bilateral activity will bring us one step closer to the overall goal of a uniform grid landscape. A fundamental aspect of a grid is the information system, which is used to find available grid services. As different grids use different information systems, interoperation between these systems is crucial for grid interoperability. This paper describes the work carried out to overcome these differences between a number of grid projects and the experiences gained. It focuses on the different techniques used and highlights the important areas for future standardization

  3. What if the big bang didn't happen?

    International Nuclear Information System (INIS)

    Narlikar, J.

    1991-01-01

    Although it has wide support amongst cosmologists, the big bang theory of the origin of the Universe is brought into question in this article because of several recent observations. The large red shift observed in quasars does not fit with Hubble's Law which is so successful for galaxies. Some quasars appear to be linked to companion galaxies by filaments and, again, anomalous red shifts have been observed. The cosmic microwave background, or relic radiation, seems to be too uniform to fit with the big bang model. Lastly, the dark matter, necessary to explain the coalescing of galaxies and clusters, has yet to be established experimentally. A new alternative to the big bang model is offered based on recent work on cosmic grains. (UK)

  4. Reliability engineering analysis of ATLAS data reprocessing campaigns

    International Nuclear Information System (INIS)

    Vaniachine, A; Golubkov, D; Karpenko, D

    2014-01-01

    During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing in the Grid, their benefits come at costs and result in delays making the performance prediction difficult. Reliability Engineering provides a framework for fundamental understanding of the Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking. The throughput doubled in 2011 vs. 2010 reprocessing, then quadrupled in 2012 vs. 2011 reprocessing. We present the Reliability Engineering analysis of ATLAS data reprocessing campaigns providing the foundation needed to scale up the Big Data processing technologies beyond the petascale.

  5. Adaptive grid generation in a patient-specific cerebral aneurysm

    Science.gov (United States)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce
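
    The abstract's sizing rules can be stated compactly. The sketch below (Python; a loose illustration, not the authors' formulation, which derives element size from kinematic curvature and torsion) computes the velocity-vorticity angle used as a complexity proxy and applies the quoted thresholds of 10% of the artery radius in complex lumen regions and 1% in the boundary layer:

        import numpy as np

        def flow_angle(velocity, vorticity):
            """Angle (radians) between local velocity and vorticity vectors."""
            v = np.asarray(velocity, dtype=float)
            w = np.asarray(vorticity, dtype=float)
            cos_a = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-30)
            return np.arccos(np.clip(cos_a, -1.0, 1.0))

        def target_element_size(artery_radius, in_boundary_layer, complex_flow):
            """Element-size rule following the thresholds quoted in the
            abstract; the coarse default factor is our own assumption."""
            if in_boundary_layer:
                return 0.01 * artery_radius
            if complex_flow:
                return 0.10 * artery_radius
            return 0.25 * artery_radius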

  6. Big data challenges: Impact, potential responses and research needs

    OpenAIRE

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterpr...

  7. Evolutions of the power grid: vector of sustainable development

    International Nuclear Information System (INIS)

    Croguennoc, Alain; Dalle, Bernard; Archambault, R.; Bony, P.E.; Bouneau, C.; Bourguignon, B.; Bouvier, D.; Beltran, A.; Chauvancy, A.; Chevassus-au-Louis, B.; Citi, S.; Claverie, B.; Colomb, B.; Dallet, M.; Deveaux, L.; Dubus, D.; Fresnedo, S.; Herreros, J.; Herz, O.; Isoard, J.; Jaussaud, E.; Laffaye, H.; Laroche, J.P.; Lasserre, D.; Lebas, N.; Lebranchu, D.; Leclerc, F.; Lemoine, D.; Leydier, C.; Malique, F.; Mazoyer, F.; Meudic, M.A.; Nguefeu, S.; Pajot, S.; Pilate, J.M.; Real, G.; Rosso, F.; Waeraas de Saint Martin, G.; Schwartzmann, S.; Serres, E.

    2011-01-01

    Reducing energy consumption and mass-producing renewable energy are the main actions implemented to meet the climate challenge. In this context, the share of electric power in final consumption is continuously growing. Adapting the power transmission grid is necessary if we want to take up the challenge of the new paradigm of new power generation sites and non-dispatchable, intermittent energy. If this grid is the symbol of a modern and well-equipped France, it also reveals our society's tensions: change without disruption, progress without sacrificing heritage and natural landscapes. Industries and grid operators will have to demonstrate innovation and anticipation if they want this deep change to be a success. The French power grid has enabled regional development in the past and will allow the climate challenge to be taken up in the future. Its indispensable role in the success of past transformations makes it an essential vector of sustainable development. (J.S.)

  8. Pengembangan Aplikasi Antarmuka Layanan Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Gede Karya

    2017-11-01

    Full Text Available In the 2016 Higher Competitive Grants Research (Hibah Bersaing Dikti), we successfully developed models, infrastructure and modules for a Hadoop-based big data analysis application. We also developed a virtual private network (VPN) that allows integration with, and access to, the infrastructure from outside the FTIS Computer Laboratory. The infrastructure and analysis application modules are now to be offered as services to small and medium enterprises (SMEs) in Indonesia. This research aims to develop an application interface for big data analysis services integrated with the Hadoop cluster. The research begins with finding appropriate methods and techniques for scheduling jobs, for invoking the ready-made Java MapReduce (MR) application modules, and for tunneling input/output and constructing the metadata of service requests (input) and service outputs. These methods and techniques are then developed into a web-based service application, as well as an executable module that runs in a Java/J2EE-based programming environment and can access the Hadoop cluster in the FTIS Computer Laboratory. The resulting application can be accessed by the public through the site http://bigdata.unpar.ac.id. Based on the test results, the application functions well in accordance with the specifications and can be used to perform big data analysis. Keywords: web based service, big data analysis, Hadoop, J2EE
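
    The service layer's job-invocation step can be pictured as a thin wrapper around the standard hadoop jar command, which is the usual way to launch a pre-built Java MapReduce module from a web backend. The sketch below is a guess at the general shape only - the jar, class and HDFS path names are placeholders, and the paper's actual scheduling layer is not public:

        import subprocess

        def submit_mr_job(jar_path, main_class, hdfs_input, hdfs_output):
            """Launch a ready-made Java MapReduce module via the `hadoop jar`
            CLI and return its stdout; raise on a non-zero exit code."""
            cmd = ["hadoop", "jar", jar_path, main_class, hdfs_input, hdfs_output]
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                raise RuntimeError(f"MapReduce job failed: {result.stderr}")
            return result.stdout

        # e.g. submit_mr_job("analysis.jar", "com.example.WordCount",
        #                    "/user/svc/input", "/user/svc/output")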

  9. Military Simulation Big Data: Background, State of the Art, and Challenges

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Full Text Available Big data technology has undergone rapid development and attained great success in the business field. Military simulation (MS) is another application domain producing massive datasets created by high-resolution models and large-scale simulations. It is used to study complicated problems such as weapon systems acquisition, combat analysis, and military training. This paper first reviews several large-scale military simulations producing big data (MS big data) for a variety of uses and summarizes the main characteristics of the resulting data. We then look at the technical details involving the generation, collection, processing, and analysis of MS big data. Two frameworks are also surveyed to trace the development of the underlying software platform. Finally, we identify some key challenges and propose a framework as a basis for future work. This framework considers both simulation and big data management at the same time, based on layered and service-oriented architectures. The objective of this review is to help interested researchers learn the key points of MS big data and provide references for tackling the big data problem and performing further research.

  10. Creating value in health care through big data: opportunities and policy implications.

    Science.gov (United States)

    Roski, Joachim; Bo-Linn, George W; Andrews, Timothy A

    2014-07-01

    Big data has the potential to create significant value in health care by improving outcomes while lowering costs. Big data's defining features include the ability to handle massive data volume and variety at high velocity. New, flexible, and easily expandable information technology (IT) infrastructure, including so-called data lakes and cloud data storage and management solutions, make big-data analytics possible. However, most health IT systems still rely on data warehouse structures. Without the right IT infrastructure, analytic tools, visualization approaches, work flows, and interfaces, the insights provided by big data are likely to be limited. Big data's success in creating value in the health care sector may require changes in current policies to balance the potential societal benefits of big-data approaches and the protection of patients' confidentiality. Other policy implications of using big data are that many current practices and policies related to data use, access, sharing, privacy, and stewardship need to be revised. Project HOPE—The People-to-People Health Foundation, Inc.

  11. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    Science.gov (United States)

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Big is not always beautiful - small can be a short cut to blue oceans

    DEFF Research Database (Denmark)

    Kvistgaard, Peter

    2007-01-01

    Often it is claimed that big investments are the only way to success in tourism and the experience economy: only by building some of the world's biggest hotels - like the ones in Dubai or Las Vegas, where hotels with 3-4,000 rooms are not uncommon - can success be achieved. It is understandable that hotels have to be big in Las Vegas in order to secure a good return on investment. It is also understandable that they build big hotels when 37 million people came to visit and 22,000 conventions were held in Las Vegas in 2004, according to the official website of Las Vegas (www.lasvegasnevada.gov/factsstatistics/funfacts.htm).

  13. Big Data is a powerful tool for environmental improvements in the construction business

    Science.gov (United States)

    Konikov, Aleksandr; Konikov, Gregory

    2017-10-01

    The work investigates the possibility of applying the Big Data method as a tool to implement environmental improvements in the construction business. The method is recognized as effective in analyzing big volumes of heterogeneous data. It is noted that all the preconditions exist for this method to be successfully used for the resolution of environmental issues in the construction business. It is shown that the principal Big Data techniques (cluster analysis, crowdsourcing, data mixing and integration) can be applied in the sphere in question. It is concluded that Big Data is a truly powerful tool for implementing environmental improvements in the construction business.

  14. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  15. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  16. The Human Genome Project: big science transforms biology and medicine

    OpenAIRE

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called ‘big science’ - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and a...

  17. Smart grid strategy - the future intelligent energy system. [Denmark]; Smart grid-strategi - fremtidens intelligente energisystem

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-04-15

    The Government's Smart Grid Strategy brings Danish consumers a big step closer to managing their own energy consumption. The strategy combines electricity meters read on an hourly basis with variable tariffs and a data hub. It will make it possible for consumers to use the power when it is least expensive. ''Today we set the course for developing a smart energy network that will reduce the cost of converting to sustainable energy, cut electricity bills and create brand new products consumers will welcome,'' says Minister of Climate, Energy and Building Martin Lidegaard. Encouraging consumers to use energy more efficiently is a key aspect of the strategy. The remote-read electricity meters are crucial if consumers are to play a role in optimising the flexible energy network. (LN)

  18. Data Grid tools: enabling science on big distributed data

    Energy Technology Data Exchange (ETDEWEB)

    Allcock, Bill [Mathematics and Computer Science, Argonne National Laboratory, Argonne, IL 60439 (United States); Chervenak, Ann [Information Sciences Institute, University of Southern California, Marina del Rey, CA 90291 (United States); Foster, Ian [Mathematics and Computer Science, Argonne National Laboratory, Argonne, IL 60439 (United States); Department of Computer Science, University of Chicago, Chicago, IL 60615 (United States); Kesselman, Carl [Information Sciences Institute, University of Southern California, Marina del Rey, CA 90291 (United States); Livny, Miron [Department of Computer Science, University of Wisconsin, Madison, WI 53705 (United States)

    2005-01-01

    A particularly demanding and important challenge that we face as we attempt to construct the distributed computing machinery required to support SciDAC goals is the efficient, high-performance, reliable, secure, and policy-aware management of large-scale data movement. This problem is fundamental to diverse application domains including experimental physics (high energy physics, nuclear physics, light sources), simulation science (climate, computational chemistry, fusion, astrophysics), and large-scale collaboration. In each case, highly distributed user communities require high-speed access to valuable data, whether for visualization or analysis. The quantities of data involved (terabytes to petabytes), the scale of the demand (hundreds or thousands of users, data-intensive analyses, real-time constraints), and the complexity of the infrastructure that must be managed (networks, tertiary storage systems, network caches, computers, visualization systems) make the problem extremely challenging. Data management tools developed under the auspices of the SciDAC Data Grid Middleware project have become the de facto standard for data management in projects worldwide. Day in and day out, these tools provide the 'plumbing' that allows scientists to do more science on an unprecedented scale in production environments.

  19. Data Grid tools: enabling science on big distributed data

    International Nuclear Information System (INIS)

    Allcock, Bill; Chervenak, Ann; Foster, Ian; Kesselman, Carl; Livny, Miron

    2005-01-01

    A particularly demanding and important challenge that we face as we attempt to construct the distributed computing machinery required to support SciDAC goals is the efficient, high-performance, reliable, secure, and policy-aware management of large-scale data movement. This problem is fundamental to diverse application domains including experimental physics (high energy physics, nuclear physics, light sources), simulation science (climate, computational chemistry, fusion, astrophysics), and large-scale collaboration. In each case, highly distributed user communities require high-speed access to valuable data, whether for visualization or analysis. The quantities of data involved (terabytes to petabytes), the scale of the demand (hundreds or thousands of users, data-intensive analyses, real-time constraints), and the complexity of the infrastructure that must be managed (networks, tertiary storage systems, network caches, computers, visualization systems) make the problem extremely challenging. Data management tools developed under the auspices of the SciDAC Data Grid Middleware project have become the de facto standard for data management in projects worldwide. Day in and day out, these tools provide the 'plumbing' that allows scientists to do more science on an unprecedented scale in production environments

  20. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    Science.gov (United States)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
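
    The interleaving step is small enough to state exactly. A minimal sketch, assuming y contributes the higher-order bit of each interleaved pair (the convention that reproduces the worked example in the abstract):

        def morton2d(x, y, bits):
            """Z-order / Morton code: interleave the bits of x and y,
            taking y's bit before x's at each position."""
            code = 0
            for i in reversed(range(bits)):
                code = (code << 1) | ((y >> i) & 1)
                code = (code << 1) | ((x >> i) & 1)
            return code

        # Example from the text: x = 1 = 01₂, y = 3 = 11₂ -> 1011₂ = 11.
        assert morton2d(1, 3, bits=2) == 0b1011 == 11

    More bits per dimension shrink the grid cells, so fewer points share a code - which is exactly the partition-size knob the abstract describes.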

  1. Flexible Power Regulation and Current-limited Control of Grid-connected Inverter under Unbalanced Grid Voltage Faults

    DEFF Research Database (Denmark)

    Guo, Xiaoqiang; Liu, Wenzhao; Lu, Zhigang

    2017-01-01

    Grid-connected inverters may experience excessive current stress during Fault Ride Through (FRT) under unbalanced grid voltage, which significantly affects the reliability of the power supply system. In order to solve the problem, the inherent mechanisms of the excessive current phenomenon under the conventional FRT solutions are discussed. A quantitative analysis of the three-phase current peak values is conducted, and a novel current-limited control strategy is proposed to achieve flexible active and reactive power regulation and successful FRT in a safe current operation area with the aim...

  2. What if the big bang didn't happen

    Energy Technology Data Exchange (ETDEWEB)

    Narlikar, J. (Inter-University Centre for Astronomy and Astrophysics, Pune (India))

    1991-03-02

    Although it has wide support amongst cosmologists, the big bang theory of the origin of the Universe is brought into question in this article because of several recent observations. The large red shift observed in quasars does not fit with Hubble's Law which is so successful for galaxies. Some quasars appear to be linked to companion galaxies by filaments and, again, anomalous red shifts have been observed. The cosmic microwave background, or relic radiation, seems to be too uniform to fit with the big bang model. Lastly, the dark matter, necessary to explain the coalescing of galaxies and clusters, has yet to be established experimentally. A new alternative to the big bang model is offered based on recent work on cosmic grains. (UK).

  3. Smart Grids Cyber Security Issues and Challenges

    OpenAIRE

    Imen Aouini; Lamia Ben Azzouz

    2015-01-01

    The energy need is growing rapidly due to population growth and large new uses of power. Several works have put considerable effort into making the electricity grid more intelligent, essentially to reduce energy consumption and to provide efficiency and reliability in power systems. The Smart Grid is a complex architecture that covers critical devices and systems vulnerable to significant attacks. Hence, security is a crucial factor for the success and the wide deployment of...

  4. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  5. Towards Dynamic Authentication in the Grid — Secure and Mobile Business Workflows Using GSet

    Science.gov (United States)

    Mangler, Jürgen; Schikuta, Erich; Witzany, Christoph; Jorns, Oliver; Ul Haq, Irfan; Wanek, Helmut

    Until now, the research community has mainly focused on the technical aspects of Grid computing and neglected commercial issues. However, the community has recently come to accept that the success of the Grid is crucially based on commercial exploitation. In our vision Foster's and Kesselman's statement "The Grid is all about sharing." has to be extended by "... and making money out of it!". To allow for the realization of this vision, the trustworthiness of the underlying technology needs to be ensured. This can be achieved by the use of gSET (Gridified Secure Electronic Transaction) as a basic technology for trust management and secure accounting in the presented Grid based workflow. We present a framework, conceptually and technically, from the area of the Mobile-Grid, which justifies the Grid infrastructure as a viable platform to enable commercially successful business workflows.

  6. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
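
    The refinement step described above can be pictured with a small sketch: take the points of one grid line, fit a parametric cubic through them, and re-evaluate at a clustered set of parameter values. This is only an illustration of the idea, not OVERFLOW's implementation; the quarter-circle grid line and the tanh stretching function are assumptions made for the example.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Original grid line: points on a quarter circle, parameterized by index.
        n = 9
        s = np.arange(n, dtype=float)              # parametric coordinate = point index
        theta = np.linspace(0.0, np.pi / 2, n)
        xy = np.column_stack([np.cos(theta), np.sin(theta)])

        spline = CubicSpline(s, xy)                # parametric cubic through the line

        # One-sided biasing: a tanh stretching function clusters the refined
        # points toward the s = 0 end (standing in for curvature-based biasing).
        beta = 2.0                                 # larger beta -> stronger clustering
        u = np.linspace(0.0, 1.0, 4 * n)
        s_refined = (n - 1) * np.tanh(beta * u) / np.tanh(beta)

        xy_refined = spline(s_refined)             # refined grid line in physical space
        print(xy_refined.shape)                    # (36, 2)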

  7. Modeling regeneration responses of big sagebrush (Artemisia tridentata) to abiotic conditions

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Ecosystems dominated by big sagebrush, Artemisia tridentata Nuttall (Asteraceae), which are the most widespread ecosystems in semiarid western North America, have been affected by land use practices and invasive species. Loss of big sagebrush and the decline of associated species, such as greater sage-grouse, are a concern to land managers and conservationists. However, big sagebrush regeneration remains difficult to achieve by restoration and reclamation efforts and there is no regeneration simulation model available. We present here the first process-based, daily time-step, simulation model to predict yearly big sagebrush regeneration including relevant germination and seedling responses to abiotic factors. We estimated values, uncertainty, and importance of 27 model parameters using a total of 1435 site-years of observation. Our model explained 74% of variability of number of years with successful regeneration at 46 sites. It also achieved 60% overall accuracy predicting yearly regeneration success/failure. Our results identify specific future research needed to improve our understanding of big sagebrush regeneration, including data at the subspecies level and improved parameter estimates for start of seed dispersal, modified wet thermal-time model of germination, and soil water potential influences. We found that relationships between big sagebrush regeneration and climate conditions were site specific, varying across the distribution of big sagebrush. This indicates that statistical models based on climate are unsuitable for understanding range-wide regeneration patterns or for assessing the potential consequences of changing climate on sagebrush regeneration and underscores the value of this process-based model. We used our model to predict potential regeneration across the range of sagebrush ecosystems in the western United States, which confirmed that seedling survival is a limiting factor, whereas germination is not. Our results also suggested that modeled

  8. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  9. Electricity tariff systems for informatics system design regarding consumption optimization in smart grids

    Directory of Open Access Journals (Sweden)

    Simona Vasilica OPREA

    2016-01-01

    Full Text Available A high volume of data is gathered via sensors and recorded by smart meters. These data are processed on the electricity consumers' and grid operators' side by big data analytics. Electricity consumption optimization offers multiple advantages for both consumers and grid operators. At the electricity customer level, the savings from optimizing electricity consumption are significant, but the main benefits come from indirect aspects such as avoiding onerous grid investments, integrating a higher volume of renewable energy sources, a less polluted environment etc. In order to optimize electricity consumption, advanced tariff systems are essential due to the financial incentive they provide for changing electricity consumers' behaviour. In this paper several advanced tariff systems are described in detail. These systems are applied in England, Spain, Italy, France, Norway and Germany, and are compared in terms of their characteristics, advantages and disadvantages. Then, different tariff systems applied in Romania are presented. Romanian tariff systems have been designed for various types of electricity consumers. The different tariff systems applied by grid operators or electricity suppliers will be included in the database model that is part of an informatics system for electricity consumption optimization.
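
    To make the financial incentive concrete, the toy calculation below prices the same daily load profile under a flat tariff and under a hypothetical two-rate time-of-use tariff; all rates and the peak window are invented for illustration and do not come from any of the national tariff systems discussed in the paper.

        # Toy comparison of a flat tariff vs. a time-of-use (TOU) tariff.
        peak_hours = set(range(17, 21))        # 17:00-20:59, assumed peak window
        flat_rate = 0.20                       # currency units per kWh
        tou_rates = {True: 0.35, False: 0.12}  # peak vs. off-peak price per kWh

        # 24 hourly consumption values: night, daytime, evening peak, late evening.
        load_kwh = [0.4] * 7 + [0.8] * 10 + [1.6] * 4 + [0.6] * 3

        flat_cost = flat_rate * sum(load_kwh)
        tou_cost = sum(tou_rates[h in peak_hours] * kwh
                       for h, kwh in enumerate(load_kwh))
        print(f"flat: {flat_cost:.2f}  TOU: {tou_cost:.2f}")

    Shifting part of the evening-peak consumption into off-peak hours lowers the TOU bill while leaving the flat bill unchanged, which is exactly the behaviour change such tariffs are designed to reward.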

  10. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account

  11. NSTAR Smart Grid Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Rabari, Anil [NSTAR Electric, Manchester, NH (United States); Fadipe, Oloruntomi [NSTAR Electric, Manchester, NH (United States)

    2014-03-31

    NSTAR Electric & Gas Corporation (“the Company”, or “NSTAR”) developed and implemented a Smart Grid pilot program beginning in 2010 to demonstrate the viability of leveraging existing automated meter reading (“AMR”) deployments to provide much of the Smart Grid functionality of advanced metering infrastructure (“AMI”), but without the large capital investment that AMI rollouts typically entail. In particular, a central objective of the Smart Energy Pilot was to enable residential dynamic pricing (time-of-use “TOU” and critical peak rates and rebates) and two-way direct load control (“DLC”) by continually capturing AMR meter data transmissions and communicating through customer-sited broadband connections in conjunction with a standards-based home area network (“HAN”). The pilot was supported by the U.S. Department of Energy (“DOE”) through the Smart Grid Demonstration program. NSTAR was very pleased to receive not only the funding support from DOE, but also the guidance and support of the DOE throughout the pilot. NSTAR is also pleased to report to the DOE that it was able to execute and deliver a successful pilot on time and on budget. NSTAR looks for future opportunities to work with the DOE and others in future smart grid projects.

  12. USE OF BIG DATA ANALYTICS FOR CUSTOMER RELATIONSHIP MANAGEMENT: POINT OF PARITY OR SOURCE OF COMPETITIVE ADVANTAGE?

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas; Zablah, Alex R.; Straub, Detmar W.

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (CA use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: 1. What are the key antecedents of big data customer analytics use? 2. How, and to what extent, does big data...

  13. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Clatchey, Richard Mc; Buncic, Predrag; Manset, David; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid, clinicians will be able to harness massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  14. Campus Grids: Bringing Additional Computational Resources to HEP Researchers

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Bockelman, Brian; Swanson, David

    2012-01-01

    It is common at research institutions to maintain multiple clusters that represent different owners or generations of hardware, or that fulfill different needs and policies. Many of these clusters are consistently underutilized, even though researchers on campus could greatly benefit from these unused capabilities. By leveraging principles from the Open Science Grid it is now possible to utilize these resources by forming a lightweight campus grid. The campus grids framework enables jobs that are submitted to one cluster to overflow, when necessary, to other clusters within the campus using whatever authentication mechanisms are available on campus. This framework is currently being used on several campuses to run HEP and other science jobs. Further, the framework has in some cases been expanded beyond the campus boundary by bridging campus grids into a regional grid, and can even be used to integrate resources from a national cyberinfrastructure such as the Open Science Grid. This paper will highlight 18 months of operational experience creating campus grids in the US, and the different campus configurations that have successfully utilized the campus grid infrastructure.

  15. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the incarnation before Big Data it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  16. News: Rannap finally got his money. No Big Silence presents a new album

    Index Scriptorium Estoniae

    2000-01-01

    R. Rannap won the grand prize at the shortest-night song competition held in Pärnu. The band No-Big-Silence will be the warm-up act at the Iron Maiden concert taking place on 2 July at the Tallinn Song Festival Grounds. No-Big-Silence presents its new album "successful, bitch and beautiful"

  17. PInCom project: SaaS Big Data Platform for and Communication Channels

    Directory of Open Access Journals (Sweden)

    Juan Manuel Lombardo

    2016-03-01

    Full Text Available This article addresses the problem of optimization, based on the premise that the successful implementation of Big Data solutions requires as a determining factor not only effectiveness, which is assumed, but also efficiency in the responsiveness of management information, so as to get the best value offered by the digital and technological environment for gaining knowledge. In adopting Big Data strategies, appropriate storage and extraction technologies should be identified to enable professionals and companies from different sectors to realize the full potential of the data. A success story is the PInCom solution, an Intelligent-Communications Platform that aims at customer loyalty by sending multimedia communications across heterogeneous transmission channels.

  18. PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING

    Directory of Open Access Journals (Sweden)

    C. Alis

    2016-06-01

    Full Text Available As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest
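
    The bit interleaving described in the abstract is compact enough to show directly; a minimal sketch (ours, not the authors' Spark implementation) that reproduces the worked example:

        def morton_code(x: int, y: int, bits: int) -> int:
            """Interleave `bits` bits of y and x (y bit first, matching the example)."""
            code = 0
            for i in reversed(range(bits)):
                code = (code << 1) | ((y >> i) & 1)   # next bit of y
                code = (code << 1) | ((x >> i) & 1)   # next bit of x
            return code

        # Grid square (x = 1 = 01₂, y = 3 = 11₂) -> 1011₂ = 11, as in the text.
        print(morton_code(1, 3, bits=2))              # 11

    Using fewer bits per dimension coarsens the grid, so more points share each code and partitions grow; nearby points tend to share code prefixes, which is what makes the code a locality-sensitive partitioning key.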

  19. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    Science.gov (United States)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
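
    For readers unfamiliar with transfinite interpolation (TFI), the basic (unweighted) 2D form blends the four boundary curves of a patch and subtracts the doubly counted bilinear corner terms. Below is a minimal sketch with illustrative boundary curves (the curved lower wall is an assumption for the example), not GENIE's weighted implementation:

        import numpy as np

        def tfi(bottom, top, left, right):
            """2D transfinite interpolation. bottom/top: (nx, 2); left/right: (ny, 2).
            Boundary curves must share corner points."""
            nx, ny = bottom.shape[0], left.shape[0]
            xi = np.linspace(0.0, 1.0, nx)[:, None, None]
            eta = np.linspace(0.0, 1.0, ny)[None, :, None]
            return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
                    + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
                    - ((1 - xi) * (1 - eta) * bottom[0] + xi * eta * top[-1]
                       + xi * (1 - eta) * bottom[-1] + (1 - xi) * eta * top[0]))

        nx, ny = 11, 7
        x = np.linspace(0.0, 1.0, nx)
        bottom = np.column_stack([x, 0.2 * np.sin(np.pi * x)])  # curved lower wall
        top = np.column_stack([x, np.ones(nx)])
        left = np.column_stack([np.zeros(ny), np.linspace(0.0, 1.0, ny)])
        right = np.column_stack([np.ones(ny), np.linspace(0.0, 1.0, ny)])
        grid = tfi(bottom, top, left, right)
        print(grid.shape)                                       # (11, 7, 2)

    GENIE's weighted variant adds blending weights to cluster interior points; the unweighted form above is the starting point that those weights modify.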

  20. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Fields, Brian D.; Olive, Keith A.

    2006-01-01

    We present an overview of the standard model of big bang nucleosynthesis (BBN), which describes the production of the light elements in the early universe. The theoretical prediction for the abundances of D, 3 He, 4 He, and 7 Li is discussed. We emphasize the role of key nuclear reactions and the methods by which experimental cross section uncertainties are propagated into uncertainties in the predicted abundances. The observational determination of the light nuclides is also discussed. Particular attention is given to the comparison between the predicted and observed abundances, which yields a measurement of the cosmic baryon content. The spectrum of anisotropies in the cosmic microwave background (CMB) now independently measures the baryon density to high precision; we show how the CMB data test BBN, and find that the CMB and the D and 4 He observations paint a consistent picture. This concordance stands as a major success of the hot big bang. On the other hand, 7 Li remains discrepant with the CMB-preferred baryon density; possible explanations are reviewed. Finally, moving beyond the standard model, primordial nucleosynthesis constraints on early universe and particle physics are also briefly discussed

  1. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  2. GridPix detectors: Production and beam test results

    International Nuclear Information System (INIS)

    Koppert, W.J.C.; Bakel, N. van; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; Graaf, H. van der; Hartjes, F.; Hessey, N.P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-01-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. By using wafer post-processing techniques an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time enabling the reconstruction of high precision 3D track segments. Recently GridPixes were produced on full wafer scale, to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test the contribution of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap are studied in detail. In addition long term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip

  3. GridPix detectors: Production and beam test results

    Science.gov (United States)

    Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.

    2013-12-01

    The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. By using wafer post-processing techniques an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time enabling the reconstruction of high precision 3D track segments. Recently GridPixes were produced on full wafer scale, to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test the contribution of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap are studied in detail. In addition long term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.

  4. Towards Big Earth Data Analytics: The EarthServer Approach

    Science.gov (United States)

    Baumann, Peter

    2013-04-01

    Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data
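
    To give a feel for the high-level query language idea, the snippet below sends a WCPS-style array query over HTTP. The endpoint URL, coverage name (AvgLandTemp) and axis labels are assumptions invented for the illustration, not details of the actual EarthServer deployment; only the general shape of a WCPS "for ... return encode(...)" expression is intended.

        import requests

        # A WCPS-style query: slice a space-time coverage at one location and
        # one year of monthly time steps, and return the values as CSV. The
        # server, not the client, does the subsetting and processing.
        query = """
        for c in (AvgLandTemp)
        return encode(
          c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")],
          "csv")
        """

        resp = requests.get(
            "https://example.org/rasdaman/ows",       # hypothetical endpoint
            params={"service": "WCS", "version": "2.0.1",
                    "request": "ProcessCoverages", "query": query})
        print(resp.text)                               # values computed server-side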

  5. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
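
    The abstract does not spell out a particular estimator, but the classic area-level (Fay-Herriot) composite estimator illustrates the model-based idea that Big Data proxies plug into; this is a textbook relation, not a formula taken from the article:

        \hat{\theta}_i \;=\; \hat{\gamma}_i \,\bar{y}_i \;+\; \bigl(1-\hat{\gamma}_i\bigr)\,\mathbf{x}_i^{\top}\hat{\boldsymbol{\beta}},
        \qquad
        \hat{\gamma}_i \;=\; \frac{\hat{\sigma}_v^{2}}{\hat{\sigma}_v^{2}+\psi_i}

    Here ȳᵢ is the direct survey estimate for area i with sampling variance ψᵢ, and xᵢ are auxiliary covariates, which is where Big Data proxies (e.g., mobility or social media indicators) enter. The shrinkage weight γ̂ᵢ favors the direct estimate where the survey is precise and the regression-synthetic estimate where it is not.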

  6. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  7. Increasing the value of geospatial informatics with open approaches for Big Data

    Science.gov (United States)

    Percivall, G.; Bermudez, L. E.

    2017-12-01

    Open approaches to big data provide geoscientists with new capabilities to address problems of unmatched size and complexity. Consensus approaches for Big Geo Data have been addressed in multiple international workshops and testbeds organized by the Open Geospatial Consortium (OGC) in the past year. Participants came from government (NASA, ESA, USGS, NOAA, DOE); research (ORNL, NCSA, IU, JPL, CRIM, RENCI); industry (ESRI, Digital Globe, IBM, rasdaman); standards (JTC 1/NIST); and open source software communities. Results from the workshops and testbeds are documented in Testbed reports and a White Paper published by the OGC. The White Paper identifies the following set of use cases. Collection and Ingest: remotely sensed data processing; data stream processing. Prepare and Structure: SQL and NoSQL databases; data linking; feature identification. Analytics and Visualization: spatial-temporal analytics; machine learning; data exploration. Modeling and Prediction: integrated environmental models; urban 4D models. Open implementations were developed in the Arctic Spatial Data Pilot using Discrete Global Grid Systems (DGGS) and in Testbeds using WPS and ESGF to publish climate predictions. Further development activities to advance open implementations of Big Geo Data include the following. Open Cloud Computing: avoid vendor lock-in through API interoperability and application portability. Open Source Extensions: implement geospatial data representations in projects from Apache, LocationTech, and OSGeo; investigate parallelization strategies for N-dimensional spatial data. Geospatial Data Representations: schemas to improve processing and analysis using geospatial concepts (Features, Coverages, DGGS); use geospatial encodings like NetCDF and GeoPackage. Big Linked Geodata: use linked data methods scaled to big geodata. Analysis Ready Data: support "Download as last resort" and "Analytics as a service". Promote elements common to "datacubes."

  8. The Grid-Enabled NMR Spectroscopy

    International Nuclear Information System (INIS)

    Lawenda, M.; Meyer, N.; Stroinski, M.; Popenda, L.; Gdaniec, Z.; Adamiak, R.W.

    2005-01-01

    Laboratory equipment used for experimental work is very expensive and often unique. Only big regional or national centers can afford to purchase and use it, and even then on a very limited scale. This is a real problem that disadvantages all research groups without direct access to these instruments. The proposed framework therefore plays a crucial role in equalizing the chances of all research groups. The Virtual Laboratory (VLab) project focuses on embedding laboratory equipment in grid environments (handling HPC and visualization), touching on some crucial issues not yet solved. In general, these issues concern standardizing the definition of laboratory equipment so it can be treated as an ordinary grid resource, supporting the end user in terms of workflow definition, introducing accounting, and prioritizing jobs that follow experiments on the equipment. Nowadays, much equipment can be accessed remotely via the network, but typically only by forwarding the local management console/display over the network for simpler access. To manage experimental and post-processing data and store them in an organized way, a special Digital Science Library was developed. The project delivers a framework to enable the usage of many different scientific facilities. The physical layer of the architecture includes an existing high-speed network, such as PIONIER in Poland, and the HPC and visualization infrastructure. The framework can be used in all experimental disciplines where access to physical equipment is crucial, e.g., chemistry (spectrometers), radio astronomy (radio telescopes), and medicine (CAT scanners). The poster presentation will show how we deployed the concept in chemistry, supporting the discipline with a grid environment and embedding the Bruker Avance 600 MHz and Varian 300 MHz spectrometers. (author)

  9. Innovating Big Data Computing Geoprocessing for Analysis of Engineered-Natural Systems

    Science.gov (United States)

    Rose, K.; Baker, V.; Bauer, J. R.; Vasylkivska, V.

    2016-12-01

    Big data computing and analytical techniques offer opportunities to improve predictions about subsurface systems while quantifying and characterizing the associated uncertainties of these analyses. Spatial analysis, big data and otherwise, of subsurface natural and engineered systems is based on variable-resolution, discontinuous, and often point-driven data used to represent continuous phenomena. We will present examples from two spatio-temporal methods that have been adapted for use with big datasets and big data geo-processing capabilities. The first approach uses regional earthquake data to evaluate spatio-temporal trends associated with natural and induced seismicity. The second algorithm, the Variable Grid Method (VGM), is a flexible approach that presents spatial trends and patterns, such as those resulting from interpolation methods, while simultaneously visualizing and quantifying uncertainty in the underlying spatial datasets. In this presentation we will show how we are utilizing Hadoop to store large geospatial data and to run these custom analytical algorithms efficiently, through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for the implementation of data reduction and topology generation via custom multi-step Hadoop applications, performance benchmarking comparisons, and Hadoop
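
    The multi-resolution, non-overlapping cells at the heart of a variable-grid reduction can be sketched with a simple quadtree: split a region into quadrants until each cell holds few enough points, then report one value and one uncertainty per variable-size cell. This is an illustration of the general idea only, not the VGM or its Hadoop implementation; the synthetic data and the max_pts threshold are assumptions.

        import numpy as np

        def variable_grid(pts, vals, x0, y0, size, max_pts=8):
            """Yield (x0, y0, size, mean, std) cells covering [x0, x0+size)^2."""
            if len(pts) == 0:
                return
            if len(pts) <= max_pts or size < 1e-3:
                yield (x0, y0, size, vals.mean(), vals.std())  # value + uncertainty
                return
            half = size / 2.0
            for dx in (0.0, half):                 # recurse into the four quadrants
                for dy in (0.0, half):
                    m = ((pts[:, 0] >= x0 + dx) & (pts[:, 0] < x0 + dx + half) &
                         (pts[:, 1] >= y0 + dy) & (pts[:, 1] < y0 + dy + half))
                    yield from variable_grid(pts[m], vals[m],
                                             x0 + dx, y0 + dy, half, max_pts)

        rng = np.random.default_rng(1)
        pts = rng.random((500, 2))                               # synthetic locations
        vals = np.sin(6 * pts[:, 0]) + 0.1 * rng.standard_normal(500)
        cells = list(variable_grid(pts, vals, 0.0, 0.0, 1.0))
        print(len(cells), "variable-size cells")

    Densely sampled areas end up with many small, low-uncertainty cells, while sparse areas are covered by a few large cells whose larger standard deviation makes the weaker data support visible.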

  10. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  11. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  12. Smart grid

    International Nuclear Information System (INIS)

    Choi, Dong Bae

    2001-11-01

    This book describes the smart grid from basics to recent trends. It is divided into ten chapters, which deal with the smart grid as a green revolution in energy, with an introduction, history, fields, applications and techniques needed for the smart grid; trends of the smart grid abroad, such as model smart grid businesses overseas and smart grid policy in the U.S.A.; trends of the smart grid at home, with international smart grid standards, strategy and road map; the smart power grid as infrastructure for smart business, with EMS development, SAS, SCADA, DAS and PQMS; the smart grid for smart consumers; smart renewables, like the Desertec project; convergence with IT through networks and PLC; applications such as the electric car; smart electricity services for real-time electricity pricing; and the arrangement of the smart grid.

  13. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    2011-01-01

    GridCom is a software package for automating access to the facilities of the distributed Grid system (jobs and data). The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations

  14. Kibana dashboards for Grid services

    CERN Document Server

    Di Marino, Emiliano

    2015-01-01

    The Platform Service (PS) section of CERN runs a number of grid services used by thousands of users worldwide. To manage these services even more successfully, plan their future, rapidly address problems and easily provide information to the users, it would be useful to have dashboards reporting technical status data. The team of the PS section has put together a monitoring infrastructure based on Kibana/ElasticSearch and found that dashboards can be designed and collectors written rather easily. The main goal of this summer student project is to create more such dashboards to help the PS section monitor CERN’s grid services.

  15. A Combined Fault Diagnosis Method for Power Transformer in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2017-01-01

    Full Text Available The fault diagnosis method based on dissolved gas analysis (DGA) is of great significance for detecting potential transformer faults and improving the security of the power system. DGA data for transformers in the smart grid are characterized by large volume, multiple types, and low value density. In view of these characteristics of DGA big data, the paper first proposes a new combined fault diagnosis method for transformers, in which a variety of fault diagnosis models make a preliminary diagnosis and a support vector machine then makes the second-stage diagnosis. The method adopts the idea of intelligent complementarity and blending, which overcomes the shortcomings of single diagnosis models in transformer fault diagnosis and improves the diagnostic accuracy and the scope of application of the model. Then, the training and deployment strategy of the combined diagnosis model is designed based on the Storm and Spark platforms, which provides a solution for transformer fault diagnosis in a big data environment.
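
    The two-stage structure described in the abstract is essentially stacking: base models issue preliminary diagnoses, and an SVM learns to combine them. Below is a minimal sketch of that pattern with scikit-learn, using synthetic stand-ins for DGA features and fault classes; the paper's particular base models, features and Storm/Spark deployment are not reproduced here, and in practice the meta-features would be produced with cross-validated predictions to avoid leakage.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((600, 5))            # stand-in for dissolved gas concentrations
        y = rng.integers(0, 4, 600)         # stand-in for fault classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Stage 1: several base diagnosis models produce preliminary
        # class-probability "diagnoses".
        base_models = [RandomForestClassifier(random_state=0),
                       LogisticRegression(max_iter=1000)]
        meta_tr = np.hstack([m.fit(X_tr, y_tr).predict_proba(X_tr)
                             for m in base_models])
        meta_te = np.hstack([m.predict_proba(X_te) for m in base_models])

        # Stage 2: an SVM combines the preliminary diagnoses into the final one.
        svm = SVC().fit(meta_tr, y_tr)
        print("held-out accuracy:", svm.score(meta_te, y_te))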

  16. Optimal operation control of low-voltage grids with a high share of distributed power generation[Dissertation 17063]; Optimierte Betriebsfuehrung von Niederspannungsnetzen mit einem hohen Anteil an dezentraler Erzeugung

    Energy Technology Data Exchange (ETDEWEB)

    Malte, C. T.

    2007-07-01

    targets during development were that the system should be able to autonomously manage a selected low-voltage grid, including the installed (controllable) grid devices, in order to improve power quality as well as to guarantee economically optimised operation of the grid. Therefore, this system simplifies the integration of more and more DG units into already existing distribution grids and at the same time generates an economic and technical benefit for the grid operator concerned. All essential algorithms for the operation of PoMS have been developed within this PhD thesis. The approaches used in this work have been designed specifically for application in limited low-voltage grid segments, e.g. area grids or industrial grids. It is a big advantage that the algorithms have been designed in such a general and scalable way that they can be used, in slightly modified form, also for the optimisation of larger grids. From the very beginning the aim of the project was not only to design the system theoretically but also to test it under real conditions in an existing low-voltage grid. For that a fixed time slot was given that had to be met under all circumstances. Therefore, the big challenge in the framework of this PhD thesis was not only to develop appropriate algorithms, but also to do this in the given time. With the successful test of PoMS it could be demonstrated that the developed algorithms are practical and allow economically optimised grid management under real conditions. Further, it could be shown that PoMS can be used even for the operation of permanently islanded grids as well as for the operation of temporarily islanded grids due to faults or interruptions on higher voltage levels ('Fault Ride Through'). (author)

  17. Optimal operation control of low-voltage grids with a high share of distributed power generation[Dissertation 17063]; Optimierte Betriebsfuehrung von Niederspannungsnetzen mit einem hohen Anteil an dezentraler Erzeugung

    Energy Technology Data Exchange (ETDEWEB)

    Malte, C T

    2007-07-01

    able to autonomously manage a selected low-voltage grid, including the installed (controllable) grid devices, in order to improve power quality as well as to guarantee economically optimised operation of the grid. Therefore, this system simplifies the integration of more and more DG units into already existing distribution grids and at the same time generates an economic and technical benefit for the grid operator concerned. All essential algorithms for the operation of PoMS have been developed within this PhD thesis. The approaches used in this work have been designed specifically for application in limited low-voltage grid segments, e.g. area grids or industrial grids. It is a big advantage that the algorithms have been designed in such a general and scalable way that they can be used, in slightly modified form, also for the optimisation of larger grids. From the very beginning the aim of the project was not only to design the system theoretically but also to test it under real conditions in an existing low-voltage grid. For that a fixed time slot was given that had to be met under all circumstances. Therefore, the big challenge in the framework of this PhD thesis was not only to develop appropriate algorithms, but also to do this in the given time. With the successful test of PoMS it could be demonstrated that the developed algorithms are practical and allow economically optimised grid management under real conditions. Further, it could be shown that PoMS can be used even for the operation of permanently islanded grids as well as for the operation of temporarily islanded grids due to faults or interruptions on higher voltage levels ('Fault Ride Through'). (author)

  18. Big Data in Drug Discovery.

    Science.gov (United States)

    Brown, Nathan; Cambruzzi, Jean; Cox, Peter J; Davies, Mark; Dunbar, James; Plumbley, Dean; Sellwood, Matthew A; Sim, Aaron; Williams-Jones, Bryn I; Zwierzyna, Magdalena; Sheppard, David W

    2018-01-01

    Interpretation of Big Data in the drug discovery community should enhance project timelines and reduce clinical attrition through improved early decision making. The issues we encounter start with the sheer volume of data and how we first ingest it before building an infrastructure to house it to make use of the data in an efficient and productive way. There are many problems associated with the data itself including general reproducibility, but often, it is the context surrounding an experiment that is critical to success. Help, in the form of artificial intelligence (AI), is required to understand and translate the context. On the back of natural language processing pipelines, AI is also used to prospectively generate new hypotheses by linking data together. We explain Big Data from the context of biology, chemistry and clinical trials, showcasing some of the impressive public domain sources and initiatives now available for interrogation. © 2018 Elsevier B.V. All rights reserved.

  19. Enabling Technologies for Smart Grid Integration and Interoperability of Electric Vehicles

    DEFF Research Database (Denmark)

    Martinenas, Sergejus

    Conventional, centralized power plants are being replaced by intermittent, distributed renewable energy sources, raising concern about the stability of the power grid in its current state. All the while, electrification of all forms of transportation is increasing the load...... for successful EV integration into the smart grid, as a smart, mobile distributed energy resource. The work is split into three key topics: enabling technologies, grid service applications and interoperability issues. The current state of e-mobility technologies is surveyed. Technologies and protocols...... EVs to not only mitigate their own effects on the grid, but also provide value to grid operators, locally as well as system wide. Finally, it is shown that active integration of EVs into the smart grid is not only achievable, but is well on its way to becoming a reality....

  20. BIG DATA ANALYTICS USE IN CUSTOMER RELATIONSHIP MANAGEMENT: ANTECEDENTS AND PERFORMANCE IMPLICATIONS

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2016-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study aims to (1) determine whether organizational BD use improves customer-centric and financial outcomes, and (2) identify the factors influencing BD use. Drawing primarily from market...

  1. Astrophysical S-factor for destructive reactions of lithium-7 in big bang nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Komatsubara, Tetsuro; Kwon, YoungKwan; Moon, JunYoung; Kim, Yong-Kyun [Rare Isotope Science Project, Institute for Basic Science, Daejeon (Korea, Republic of); Moon, Chang-Bum [Hoseo University, Asan, Chungnam (Korea, Republic of); Ozawa, Akira; Sasa, Kimikazu; Onishi, Takahiro; Yuasa, Toshiaki; Okada, Shunsuke; Saito, Yuta [Division of Physics, University of Tsukuba, Tsukuba, Ibaraki (Japan); Hayakawa, Takehito; Shizuma, Toshiyuki [Japan Atomic Energy Agency, Shirakata Shirane, Tokai, Ibaraki (Japan); Kubono, Shigeru [RIKEN, Hirosawa, Wako, Saitama (Japan); Kusakabe, Motohiko [School of Liberal Arts and Science, Korea Aerospace University (Korea, Republic of); Kajino, Toshitaka [National Astronomical Observatory, Osawa, Mitaka, Tokyo (Japan)

    2014-05-02

    One of the most prominent successes of the Big Bang models is the precise reproduction of the mass abundance ratio of {sup 4}He. In spite of this success, the abundances of lithium isotopes are still inconsistent between observations and calculations, which is known as the lithium abundance problem. Since the calculations were based on experimental reaction data together with theoretical estimations, more precise experimental measurements may improve our knowledge of Big Bang nucleosynthesis. As one of the destruction processes of lithium-7, we have measured the reaction cross sections of the {sup 7}Li({sup 3}He,p){sup 9}Be reaction.
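
    For context, the astrophysical S-factor named in the title is the standard way such low-energy cross sections are reported: it factors the steep Coulomb-barrier penetration out of the cross section, leaving a slowly varying nuclear part. This is the textbook definition, not a formula quoted from the record:

        \sigma(E) \;=\; \frac{S(E)}{E}\, e^{-2\pi\eta(E)},
        \qquad
        \eta(E) \;=\; \frac{Z_1 Z_2 e^2}{\hbar v}

    where η is the Sommerfeld parameter, Z₁ and Z₂ are the charge numbers of the colliding nuclei (here {sup 7}Li and {sup 3}He), and v is their relative velocity. Because S(E) varies slowly, it can be extrapolated to the low energies relevant to Big Bang nucleosynthesis far more reliably than σ(E) itself.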

  2. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    OpenAIRE

    Murthy, Dhiraj; Bowman, S. A.

    2014-01-01

    Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, the accessibility of the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay as you store/process services like Amazon Web Services have opened up possibilities for smaller scale Big Data projects. There is high dema...

  3. A gridded air counter for measuring exoelectrons

    International Nuclear Information System (INIS)

    Nagase, Makoto; Chiba, Yoshiya; Kirihata, Humiaki.

    1980-01-01

    A gridded air counter with a quenching circuit is described, which serves to detect low-energy electrons such as thermionic electrons, photoelectrons and exoelectrons emitted into the atmospheric air. The air counter consists of a loop-shaped anode and two grids provided for quenching the gas discharge and for protecting the electron emitter from positive ion bombardment. The quenching circuit, with a high input sensitivity of 5 mV, detects the initiation of a gas discharge caused by an incident electron and immediately supplies a rectangular wave pulse of 300 V in amplitude and more than 3 msec in width to the quenching grid near the anode. Simultaneously, the voltage of the suppressor grid is brought down and kept at -30 V against the earthed sample for the same period of time. Performance of the gridded air counter was examined by use of photoelectrons emitted from an abraded aluminum plate. The quenching action was successfully accomplished in the anode voltage range from 3.65 to 3.95 kV. The photoelectrons emitted into the atmosphere could be counted stably by use of this counter. (author)

  4. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  5. Big data: the management revolution.

    Science.gov (United States)

    McAfee, Andrew; Brynjolfsson, Erik

    2012-10-01

    Big data, the authors write, is far more powerful than the analytics of the past. Executives can measure and therefore manage more precisely than ever before. They can make better predictions and smarter decisions. They can target more-effective interventions in areas that so far have been dominated by gut and intuition rather than by data and rigor. The differences between big data and analytics are a matter of volume, velocity, and variety: More data now cross the internet every second than were stored in the entire internet 20 years ago. Nearly real-time information makes it possible for a company to be much more agile than its competitors. And that information can come from social networks, images, sensors, the web, or other unstructured sources. The managerial challenges, however, are very real. Senior decision makers have to learn to ask the right questions and embrace evidence-based decision making. Organizations must hire scientists who can find patterns in very large data sets and translate them into useful business information. IT departments have to work hard to integrate all the relevant internal and external sources of data. The authors offer two success stories to illustrate how companies are using big data: PASSUR Aerospace enables airlines to match their actual and estimated arrival times. Sears Holdings directly analyzes its incoming store data to make promotions much more precise and faster.

  6. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  7. Internet, supercomputing marathon with Grid

    CERN Multimedia

    2005-01-01

    At CERN, a data stream averaging 600 megabytes per second was sustained for 10 consecutive days; the Grid supercomputer passed its first challenge: eight supercomputing centers maintained a continuous flow of data coming from CERN and going to seven different centers in Europe and the USA (1½ page)

  8. The big data phenomenon: The business and public impact

    Directory of Open Access Journals (Sweden)

    Chroneos-Krasavac Biljana

    2016-01-01

    Full Text Available The subject of the research in this paper is the emergence of the big data phenomenon and the application of big data technologies for business needs, with specific emphasis on marketing and trade. The purpose of the research is to make a comprehensive overview of different discussions about the characteristics, application possibilities, achievements, constraints and the future of big data development. Based on the relevant literature, the concept of big data is presented and the potential of a large impact of big data on business activities is discussed. One of the key findings indicates that the most prominent change that big data brings to the business arena is the appearance of new business models, as well as revisions of the existing ones. A substantial part of the paper is devoted to marketing and marketing research, which are under the strong impact of big data. The most exciting outcomes of the research in this domain concern the new abilities in profiling customers. In addition to the vast amount of structured data which have been used in marketing for a long period, big data initiatives suggest the inclusion of semi-structured and unstructured data, opening up room for substantial improvements in customer profile analysis. Considering the usage of information communication technologies (ICT) as a prerequisite for big data project success, the concept of the Networked Readiness Index (NRI) is presented and the position of Serbia and regional countries in the NRI framework is analyzed. The main outcome of the analysis points out that Serbia, with its NRI score, took the lowest position in the region, excluding Albania. Also, Serbia is lagging behind the appropriate EU mean values regarding all observed composite indicators (pillars). Further on, this analysis reveals the domains of ICT usage in Serbia which could be targeted for improvement and where incentives can be made. These domains are: political and regulatory environment, business and

  9. A big data geospatial analytics platform - Physical Analytics Integrated Repository and Services (PAIRS)

    Science.gov (United States)

    Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.

    2015-12-01

    A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and run these models in real time. A geospatial data platform called Physical Analytics Information Repository and Services (PAIRS) is developed on top of an open source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that is a combination of location and time stamp. The indexing allows quick access to data sets that are part of global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while being very robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data
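
    A toy sketch of the indexing idea described above: nested global grid levels whose resolution doubles per level, with each pixel keyed by (level, row, column, timestamp). This is our illustration of the scheme as described, not IBM's implementation; the key layout and the lat/lon-to-cell mapping are assumptions.

        from datetime import datetime, timezone

        def pairs_like_key(lat, lon, level, ts):
            """Key a (location, time) observation at one grid level."""
            cells = 2 ** level                       # resolution doubles per level
            row = min(int((lat + 90.0) / 180.0 * cells), cells - 1)
            col = min(int((lon + 180.0) / 360.0 * cells), cells - 1)
            return (level, row, col, int(ts.timestamp()))

        t = datetime(2015, 7, 1, tzinfo=timezone.utc)
        print(pairs_like_key(40.7, -74.0, level=12, ts=t))

    Because the key starts with (level, row, col), entries for the same cell and level sort together in an ordered store such as HBase, so a spatial-temporal range of interest maps to a contiguous key scan.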

  10. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented, and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and has more grid-relieving effects than the evaluated grid optimisation measures.

  11. Challenges and potential solutions for big data implementations in developing countries.

    Science.gov (United States)

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector, a non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program, including the exponential growth of data, special infrastructure needs, the need for a trained workforce, the need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  12. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  13. A Parallel Multiblock Structured Grid Method with Automated Interblocked Unstructured Grids for Chemically Reacting Flows

    Science.gov (United States)

    Spiegel, Seth Christian

    An automated method for using unstructured grids to patch non-C0 interfaces between structured blocks has been developed in conjunction with a finite-volume method for solving chemically reacting flows on unstructured grids. Although the standalone unstructured solver, FVFLO-NCSU, is capable of resolving flows for high-speed aeropropulsion devices with complex geometries, unstructured-mesh algorithms are inherently inefficient when compared to their structured counterparts. However, the advantages of structured algorithms in developing a flow solution in a timely manner can be negated by the amount of time required to develop a mesh for complex geometries. The global domain can be split up into numerous smaller blocks during the grid-generation process to alleviate some of the difficulties in creating these complex meshes. An even greater abatement can be found by allowing the nodes on abutting block interfaces to be nonmatching or non-C0 continuous. One code capable of solving chemically reacting flows on these multiblock grids is VULCAN, which uses a nonconservative approach for patching non-C0 block interfaces. The developed automated unstructured-grid patching algorithm has been installed within VULCAN to provide it with the capability of a fully conservative approach for patching non-C0 block interfaces. Additionally, the FVFLO-NCSU solver algorithms have been deeply intertwined with the VULCAN source code to solve chemically reacting flows on these unstructured patches. Finally, the CGNS software library was added to the VULCAN postprocessor so structured and unstructured data can be stored in a single compact file. This final upgrade to VULCAN has been successfully installed and verified using test cases with particular interest towards those involving grids with non-C0 block interfaces.

  14. Quality Assurance Framework for Mini-Grids

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States); Burman, Kari [National Renewable Energy Lab. (NREL), Golden, CO (United States); Singh, Mohit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Esterly, Sean [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mutiso, Rose [US Department of Energy, Washington, DC (United States); McGregor, Caroline [US Department of Energy, Washington, DC (United States)

    2016-11-01

    will drive investment and scale-up in this sector. The QAF implementation process also defines a set of implementation guidelines that help the deployment of mini-grids on a regional or national scale, helping to ensure the successful rapid deployment of these relatively new remote energy options. Note that the QAF is technology agnostic, addressing both alternating current (AC) and direct current (DC) mini-grids, and is also applicable to renewable, fossil-fuel, and hybrid systems.

  15. A HYBRID SOLAR WIND MODEL OF THE CESE+HLL METHOD WITH A YIN-YANG OVERSET GRID AND AN AMR GRID

    International Nuclear Information System (INIS)

    Feng Xueshang; Zhang Shaohua; Xiang Changqing; Yang Liping; Jiang Chaowei; Wu, S. T.

    2011-01-01

    A hybrid three-dimensional (3D) MHD model for solar wind study is proposed in the present paper with combined grid systems and solvers. The computational domain from the Sun to Earth space is decomposed into the near-Sun and off-Sun domains, which are respectively constructed with a Yin-Yang overset grid system and a Cartesian adaptive mesh refinement (AMR) grid system and coupled with a domain connection interface in the overlapping region between the near-Sun and off-Sun domains. The space-time conservation element and solution element method is used in the near-Sun domain, while the Harten-Lax-van Leer method is employed in the off-Sun domain. The Yin-Yang overset grid avoids the well-known singularity and polar grid convergence problems, and its body-fitting property helps achieve high-quality resolution near the solar surface. The block-structured AMR Cartesian grid can automatically capture far-field plasma flow features, such as heliospheric current sheets and shock waves, and at the same time it can save significant computational resources compared to a uniformly structured Cartesian grid. A numerical study of the solar wind structure for Carrington rotation 2069 shows that the newly developed hybrid MHD solar wind model successfully produces many realistic features of the background solar wind, in both the solar corona and interplanetary space, by comparisons with multiple solar and interplanetary observations.
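
    For context, the Yin-Yang overset grid consists of two identical low-latitude spherical patches, one rotated relative to the other; in the standard formulation (following Kageyama and Sato, assumed here rather than taken from this paper) the two patches are related in Cartesian coordinates by a simple axis permutation that is its own inverse. A minimal sketch in Python:

        import numpy as np

        def yin_to_yang(xyz: np.ndarray) -> np.ndarray:
            """Transform Cartesian coordinates from the Yin patch to the Yang patch.
            The standard Yin-Yang relation (x', y', z') = (-x, z, y) is an
            involution, so the same map sends Yang coordinates back to Yin."""
            x, y, z = xyz
            return np.array([-x, z, y])

        p_yin = np.array([0.3, 0.9, 0.1])
        p_yang = yin_to_yang(p_yin)
        assert np.allclose(yin_to_yang(p_yang), p_yin)  # applying it twice is the identity

    The symmetry of the transform is what lets both patches share one metric and one solver, with data exchanged only in the overlap region.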

  16. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    Science.gov (United States)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policy required to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to

  17. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  18. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance that the term Big Data has acquired, this research aimed to study and analyze exhaustively the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics in the management of Big Data, in order to cover everything concerning the central topic of the research. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; introducing the technologies of Big Data; presenting some of the NoSQL databases, which are the ones that allow processing data with unstructured formats; and showing the data models and the technologies for analyzing them, finishing with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research begins to explore the environment of Big Data.

  19. Ten years of European Grids: What have we learnt?

    International Nuclear Information System (INIS)

    Burke, Stephen

    2011-01-01

    The European DataGrid project started in 2001, and was followed by the three phases of EGEE and the recent transition to EGI. This paper discusses the history of both middleware development and Grid operations in these projects, and in particular the impact on the development of the LHC Computing Grid. It considers to what extent the initial ambitions have been realised, which aspects have been successful and what lessons can be derived from the things which were less so, both in technical and sociological terms. In particular it considers the middleware technologies used for data management, workload management, information systems and security, and the difficulties of operating a highly distributed worldwide production infrastructure, drawing on practical experience with many aspects of the various Grid projects over the last decade.

  20. Technical and operational organisation of the 'Swiss Marketplace': GridCode CH

    International Nuclear Information System (INIS)

    Imhof, K.; Baumann, R.

    2001-01-01

    This article describes the minimum requirements placed on the operators of electricity grids by the planned Swiss Electricity Market Law. These are compiled in the Swiss Grid Code - GridCode CH. The various players in an open electricity market such as generating companies, power brokers, those responsible for balance groups, grid operators, system co-ordinators, the operators of fine distribution networks and the final consumer and the roles they play are examined. The history of the development of the Grid Code, which contains technical and operational regulations for the successful co-operation of the market players, is reviewed. The contractual obligations of the partners involved and, in particular, regulations concerning metering, measured value designation and the provision of data are discussed

  1. Reclamation after oil and gas development does not speed up succession or plant community recovery in big sagebrush ecosystems in Wyoming

    Science.gov (United States)

    Rottler, Caitlin M.; Burke, Ingrid C.; Palmquist, Kyle A.; Bradford, John B.; Lauenroth, William K.

    2018-01-01

    Article for intended outlet: Restoration Ecology. Abstract: Reclamation is an application of treatment(s) following a disturbance to promote succession and accelerate the return of target conditions. Previous studies have framed reclamation in the context of succession by studying its effectiveness in re-establishing late-successional plant communities. Re-establishment of these plant communities is especially important and potentially challenging in regions such as drylands and shrub steppe ecosystems, where succession proceeds slowly. Dryland shrub steppe ecosystems are frequently associated with areas rich in fossil-fuel energy sources, and as such the need for effective reclamation after disturbance from fossil-fuel-related energy development is great. Past research in this field has focused primarily on coal mines; few researchers have studied reclamation after oil and gas development. To address this research gap and to better understand the effect of reclamation on rates of succession in dryland shrub steppe ecosystems, we sampled oil and gas wellpads and adjacent undisturbed big sagebrush plant communities in Wyoming, USA, and quantified the extent of recovery for major functional groups on reclaimed and unreclaimed (recovered via natural succession) wellpads relative to the undisturbed plant community. Reclamation increased the rate of recovery for all forb and grass species as a group and for perennial grasses, but did not affect other functional groups. Rather, analyses comparing recovery to environmental variables and time since wellpad abandonment showed that the recovery of other groups was affected primarily by soil texture and time since wellpad abandonment. This is consistent with studies in other ecosystems where reclamation has been implemented, suggesting that reclamation may not help re-establish late-successional plant communities more quickly than they would re-establish naturally.

  2. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
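
    To make the "costly pair enumeration" point concrete, the Python sketch below checks a hypothetical functional-dependency-style rule (tuples sharing a zipcode must agree on city) while blocking on the zipcode so that only tuples in the same block are compared. This is a generic illustration of the kind of rule and optimization such systems handle, not BigDansing's actual API; the field names are invented.

        from collections import defaultdict

        def find_violations(tuples):
            """Enumerate violating pairs for the rule zipcode -> city, using one
            shared scan to block on zipcode instead of comparing all pairs."""
            blocks = defaultdict(list)
            for t in tuples:
                blocks[t["zipcode"]].append(t)      # single pass over the data
            violations = []
            for group in blocks.values():
                for i in range(len(group)):
                    for j in range(i + 1, len(group)):
                        if group[i]["city"] != group[j]["city"]:
                            violations.append((group[i], group[j]))
            return violations

        rows = [{"zipcode": "10598", "city": "Yorktown"},
                {"zipcode": "10598", "city": "Yorktwn"},   # likely data-entry error
                {"zipcode": "60439", "city": "Lemont"}]
        print(find_violations(rows))

    Blocking turns an O(n^2) comparison over the whole dataset into quadratic work only within each block, which is also what makes the rule easy to distribute.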

  3. Big Data solutions on a small scale: Evaluating accessible high-performance computing for social research

    Directory of Open Access Journals (Sweden)

    Dhiraj Murthy

    2014-11-01

    Full Text Available Though full of promise, Big Data research success is often contingent on access to the newest, most advanced, and often expensive hardware systems and the expertise needed to build and implement such systems. As a result, the accessibility of the growing number of Big Data-capable technology solutions has often been the preserve of business analytics. Pay-as-you-store/process services like Amazon Web Services have opened up possibilities for smaller-scale Big Data projects. There is high demand for this type of research in the digital humanities and digital sociology, for example. However, scholars are increasingly finding themselves at a disadvantage as available data sets of interest continue to grow in size and complexity. Without a large amount of funding or the ability to form interdisciplinary partnerships, only a select few find themselves in the position to successfully engage Big Data. This article identifies several notable and popular Big Data technologies typically implemented using large and extremely powerful cloud-based systems and investigates the feasibility and utility of developing Big Data analytics systems implemented using low-cost commodity hardware in basic and easily maintainable configurations for use within academic social research. Through our investigation and experimental case study (in the growing field of social Twitter analytics), we found that not only are solutions like Cloudera's Hadoop feasible, but that they can also enable robust, deep, and fruitful research outcomes in a variety of use-case scenarios across the disciplines.
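
    One reason Hadoop stays accessible on commodity hardware is Hadoop Streaming, which runs any executable that reads stdin and writes tab-separated key/value pairs. Below is a minimal hypothetical hashtag counter in that style, offered as an illustration of the approach rather than the study's actual pipeline; the file name and invocation are assumptions.

        #!/usr/bin/env python3
        """Minimal Hadoop Streaming-style job: count hashtags in tweet text.
        Run locally (simulating the shuffle with sort) as:
          cat tweets.txt | python3 tagcount.py map | sort | python3 tagcount.py reduce
        """
        import sys
        from itertools import groupby

        def mapper():
            # Emit one (hashtag, 1) pair per hashtag token.
            for line in sys.stdin:
                for token in line.split():
                    if token.startswith("#"):
                        print(f"{token.lower()}\t1")

        def reducer():
            # Input arrives sorted by key, so equal tags are adjacent.
            pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
            for tag, group in groupby(pairs, key=lambda kv: kv[0]):
                print(f"{tag}\t{sum(int(count) for _, count in group)}")

        if __name__ == "__main__":
            mapper() if sys.argv[1] == "map" else reducer()

    The same two functions run unchanged on a multi-node cluster, which is precisely what makes low-cost configurations viable for social research.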

  4. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can thus be supported by three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, which is related to storage, analytics and visualization of big data; the human aspects of big data; and, in addition, the process management dimension, which addresses the aspects of big data management from both a technological and a business perspective.

  5. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  6. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    OpenAIRE

    Mohd Usama; Mengchen Liu; Min Chen

    2017-01-01

    At present, big data is very popular because it has proved to be highly successful in many fields, such as social media and e-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte or larger-sized datasets having different structures at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open source framework that is used to process large amounts of data in an inexpensive and ...

  7. Big data mining: In-database Oracle data mining over hadoop

    Science.gov (United States)

    Kovacheva, Zlatinka; Naydenova, Ina; Kaloyanova, Kalinka; Markov, Krasimir

    2017-07-01

    Big data challenges different aspects of storing, processing and managing data, as well as analyzing and using data for business purposes. Applying Data Mining methods over Big Data is another challenge because of the huge data volumes, the variety of information, and the dynamics of the sources. Various applications have been developed in this area, but their successful usage depends on understanding many specific parameters. In this paper we present several opportunities for using Data Mining techniques provided by the analytical engine of RDBMS Oracle over data stored in the Hadoop Distributed File System (HDFS). Some experimental results are given and discussed.

  8. Grid Integration of Wind Farms

    Science.gov (United States)

    Giæver Tande, John Olav

    2003-07-01

    This article gives an overview of grid integration of wind farms with respect to the impact on voltage quality and power system stability. The recommended procedure for assessing the impact of wind turbines on voltage quality in distribution grids is presented. The procedure uses the power quality characteristic data of wind turbines to determine the impact on slow voltage variations, flicker, voltage dips and harmonics. The detailed assessment allows for substantially more wind power in distribution grids compared with previously used rule-of-thumb guidelines. Power system stability is a concern in conjunction with large wind farms or very weak grids. Assessment requires the use of power system simulation tools, and wind farm models for inclusion in such tools are presently being developed. A fixed-speed wind turbine model is described. The model may be considered a good starting point for the development of more advanced models; in this context the concept of variable-speed wind turbines with a doubly fed induction generator is briefly explained. The use of dynamic wind farm models as part of power system simulation tools allows for detailed studies and the development of innovative grid integration techniques. It is demonstrated that the use of reactive compensation may relax the short-term voltage stability limit and allow integration of significantly more wind power, and that application of automatic generation control technology may be an efficient means to circumvent thermal transmission capacity constraints. The continuous development of analysis tools and technology for cost-effective and secure grid integration is an important aid to ensure the increasing use of wind energy. A key factor for success, however, is the communication of results and gained experience, and in this regard it is hoped that this article may contribute.

  9. Significance of Supply Logistics in Big Cities

    Directory of Open Access Journals (Sweden)

    Mario Šafran

    2012-10-01

    Full Text Available The paper considers the concept and importance of supply logistics as an element in improving the storage, supply and transport of goods in big cities. There is always room for improvement in this segment of economic activities, and therefore continuous optimisation of the cargo flows from the manufacturer to the end user is important. Due to complex requirements in the cargo supply and the "spoiled" end users, modern cities represent great difficulties and a big challenge for the supply organisers. The consumers' needs in big cities have developed over the recent years in such a way that they require supply of goods several times a day at precisely determined times (orders are received by e-mail, and the information transfer is therefore instantaneous). In order to successfully meet the consumers' needs in advanced economic systems, advanced methods of goods supply have been developed and improved, such as "just in time", "door-to-door", and "desk-to-desk". Regular operation of these systems requires supply logistics which includes the total throughput of materials, from receiving the raw materials or reproduction material to the delivery of final products to the end users.

  10. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  11. Priming the Pump for Big Data at Sentara Healthcare.

    Science.gov (United States)

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

    Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office - and outside the electronic health record (EHR) - has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems seeking to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan, separate from but aligned with corporate system goals, to ensure optimal investment and management of this essential asset.

  12. LexGrid: a framework for representing, storing, and querying biomedical terminologies from simple to sublime.

    Science.gov (United States)

    Pathak, Jyotishman; Solbrig, Harold R; Buntrock, James D; Johnson, Thomas M; Chute, Christopher G

    2009-01-01

    Many biomedical terminologies, classifications, and ontological resources, such as the NCI Thesaurus (NCIT), International Classification of Diseases (ICD), Systematized Nomenclature of Medicine (SNOMED), Current Procedural Terminology (CPT), and Gene Ontology (GO), have been developed and used to build a variety of IT applications in biology, biomedicine, and health care settings. However, virtually all these resources involve incompatible formats, are based on different modeling languages, and lack appropriate tooling and programming interfaces (APIs), which hinders their wide-scale adoption and usage in a variety of application contexts. The Lexical Grid (LexGrid) project introduced in this paper is an ongoing community-driven initiative, coordinated by the Mayo Clinic Division of Biomedical Statistics and Informatics, designed to bridge this gap using a common terminology model called the LexGrid model. The key aspect of the model is to accommodate multiple vocabulary and ontology distribution formats and support multiple data stores for federated vocabulary distribution. The model provides a foundation for building consistent and standardized APIs to access multiple vocabularies that support lexical search queries, hierarchy navigation, and a rich set of features such as recursive subsumption (e.g., get all the children of the concept penicillin). Existing LexGrid implementations include the LexBIG API as well as a reference implementation of the HL7 Common Terminology Services (CTS) specification providing programmatic access via Java, Web, and Grid services.
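
    The recursive subsumption feature mentioned above is, in essence, a transitive closure over the is-a hierarchy. The Python sketch below illustrates the idea on a toy in-memory concept graph; the edges and function are invented for illustration and do not reflect the LexBIG API or any real terminology content.

        from collections import defaultdict, deque

        # Hypothetical parent -> children edges of a terminology (toy data).
        IS_A_CHILDREN = defaultdict(list)
        for child, parent in [("penicillin G", "penicillin"),
                              ("penicillin V", "penicillin"),
                              ("benzathine penicillin G", "penicillin G")]:
            IS_A_CHILDREN[parent].append(child)

        def recursive_subsumption(concept: str) -> set[str]:
            """Return all descendants of a concept, i.e. the transitive closure
            of the is-a hierarchy (e.g. every kind of penicillin)."""
            seen, queue = set(), deque([concept])
            while queue:
                for child in IS_A_CHILDREN[queue.popleft()]:
                    if child not in seen:
                        seen.add(child)
                        queue.append(child)
            return seen

        print(recursive_subsumption("penicillin"))

    A breadth-first walk like this terminates even on terminologies with shared descendants, because each concept is visited at most once.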

  13. GridCom, Grid Commander: graphical interface for Grid jobs and data management; GridCom, Grid Commander: graficheskij interfejs dlya raboty s zadachami i dannymi v gride

    Energy Technology Data Exchange (ETDEWEB)

    Galaktionov, V V

    2011-07-01

    GridCom is a software package that automates access to the resources (jobs and data) of the distributed Grid system. The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  14. Big Data and Data Science in Critical Care.

    Science.gov (United States)

    Sanchez-Pinto, L Nelson; Luo, Yuan; Churpek, Matthew M

    2018-05-09

    The digitalization of the healthcare system has resulted in a deluge of clinical Big Data and has prompted the rapid growth of data science in medicine. Data science, which is the field of study dedicated to the principled extraction of knowledge from complex data, is particularly relevant in the critical care setting. The availability of large amounts of data in the intensive care unit, the need for better evidence-based care, and the complexity of critical illness makes the use of data science techniques and data-driven research particularly appealing to intensivists. Despite the increasing number of studies and publications in the field, so far there have been few examples of data science projects that have resulted in successful implementations of data-driven systems in the intensive care unit. However, given the expected growth in the field, intensivists should be familiar with the opportunities and challenges of Big Data and data science. In this paper, we review the definitions, types of algorithms, applications, challenges, and future of Big Data and data science in critical care. Copyright © 2018. Published by Elsevier Inc.

  15. Progress in Grid Generation: From Chimera to DRAGON Grids

    Science.gov (United States)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach has evolved. The chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high-quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient in generating a grid about complex geometries and has been demonstrated to deliver accurate aerodynamic prediction of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's practice in 3D is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adapts the strengths of the unstructured grid while at the same time keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use non-structured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera thinking is coined the DRAGON grid. The nonstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) preserving the strengths of the chimera grid; (2) eliminating difficulties sometimes encountered in the chimera scheme, such as orphan points and bad quality of interpolation stencils; and (3) making grid communication in a fully conservative and consistent manner insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  16. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of registration, and IT-competent employees and customers, which make a leading position possible - but only if companies get ready for the next big data wave.

  17. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  18. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data - the idea that an ever-larger volume of information is being constantly recorded - suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when they are used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.

  19. Fog Computing: An Overview of Big IoT Data Analytics

    Directory of Open Access Journals (Sweden)

    Muhammad Rizwan Anawar

    2018-01-01

    Full Text Available A huge amount of data, generated by the Internet of Things (IoT), is growing exponentially based on nonstop operational states. Those IoT devices are generating an avalanche of information that is disruptive for predictable data processing and analytics functionality, which was handled perfectly by the cloud before the explosive growth of IoT. The fog computing structure confronts those disruptions with powerful complementary functionality to the cloud framework, based on the deployment of micro clouds (fog nodes) at the proximity edge of data sources. Big IoT data analytics in the fog computing structure in particular is at an emerging phase and requires extensive research to produce more proficient knowledge and smart decisions. This survey summarizes the fog challenges and opportunities in the context of big IoT data analytics on fog networking. In addition, it emphasizes that the key characteristics of some proposed research works make fog computing a suitable platform for new proliferating IoT devices, services, and applications. The most significant fog applications (e.g., health care monitoring, smart cities, connected vehicles, and smart grid) will be discussed here to create a well-organized green computing paradigm to support the next generation of IoT applications.

  20. Impact of Degraded Communication on Interdependent Power Systems: The Application of Grid Splitting

    Directory of Open Access Journals (Sweden)

    Di-An Tian

    2016-08-01

    Full Text Available Communication is increasingly present for managing and controlling critical infrastructures, strengthening their cyber interdependencies. In electric power systems, grid splitting is a topical communication-critical application. It amounts to separating a power system into islands in response to an impending instability, e.g., loss of generator synchronism due to a component fault, by appropriately disconnecting transmission lines and grouping synchronous generators. The successful application of grid splitting depends on the communication infrastructure to collect system-wide synchronized measurements and to relay the command to open line switches. Grid splitting may be ineffective if communication is degraded, and its outcome may also depend on the system loading conditions. This paper investigates the effects of degraded communication and load variability on grid splitting. To this aim, a communication delay model is coupled with a transient electrical model and applied to the IEEE 39-Bus and the IEEE 118-Bus Test Systems. Case studies show that the loss of generator synchronism following a fault is mitigated by timely splitting the network into islands. On the other hand, the results show that communication delays and increased network flows can degrade the performance of grid splitting. The developed framework enables the identification of the requirements of the dedicated communication infrastructure for a successful grid-splitting procedure.
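
    The combinatorial core of grid splitting - opening a set of lines so that each resulting island keeps a coherent generator group together - can be sketched on a toy graph model. The Python snippet below uses the networkx library; the 6-bus system, the candidate line set, and the generator grouping are invented for illustration and are not the paper's model or method.

        import networkx as nx

        def split_grid(lines, lines_to_open, generator_groups):
            """Check whether opening a set of transmission lines separates the
            grid into islands that keep each coherent generator group together."""
            g = nx.Graph(lines)
            g.remove_edges_from(lines_to_open)
            islands = [set(c) for c in nx.connected_components(g)]
            ok = all(any(set(group) <= island for island in islands)
                     for group in generator_groups)
            return islands, ok

        # Toy 6-bus system; generators at buses 1, 2 (group A) and 5, 6 (group B).
        lines = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (3, 5)]
        islands, ok = split_grid(lines, lines_to_open=[(3, 4), (3, 5)],
                                 generator_groups=[[1, 2], [5, 6]])
        print(islands, ok)  # two islands, each containing one generator group

    In a real scheme the choice of lines_to_open also has to respect load-generation balance in each island, which is where the loading conditions studied in the paper come in.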

  1. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate the sub-arrays in a microarray image and find the co-ordinates of the spots within each sub-array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper a fully automatic gridding method is used which enhances spot intensity in the preprocessing step by means of a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method's flexibility and accuracy in finding the spot co-ordinates for different database images.
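
    The profile-based step can be illustrated compactly: grid lines fall in the low-intensity valleys between rows and columns of bright spots, so candidate positions appear as local minima of the axis-wise intensity profiles. The NumPy sketch below shows this basic idea on a synthetic image; it is a minimal illustration, not the authors' preprocessing or refinement technique.

        import numpy as np

        def profile_grid_lines(image: np.ndarray, axis: int) -> np.ndarray:
            """Locate candidate grid-line positions as local minima (valleys)
            of the mean intensity profile taken along one image axis."""
            profile = image.mean(axis=axis)
            interior = profile[1:-1]
            valleys = np.where((interior < profile[:-2]) &
                               (interior <= profile[2:]))[0] + 1
            return valleys

        # Synthetic 2x2-spot image: Gaussian blobs on a dark background.
        img = np.zeros((40, 40))
        y, x = np.ogrid[:40, :40]
        for cy in (10, 30):
            for cx in (10, 30):
                img += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 8.0)

        rows = profile_grid_lines(img, axis=1)  # horizontal grid-line candidates
        cols = profile_grid_lines(img, axis=0)  # vertical grid-line candidates

    A refinement pass, such as the one proposed in the paper, would then adjust these raw valley positions before assigning spots to grid cells.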

  2. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or were simply nonexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  3. On transferring the grid technology to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Sax, Ulrich; Dickmann, Frank; Lippert, Joerg; Solodenko, Juri; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which resulted in the Grid. The inter-domain transfer of this technology has been an intuitive process. Some difficulties facing the life science community can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies that have achieved a certain stability. Grid and Cloud solutions are technologies that are still in flux. We illustrate how Grid computing creates new difficulties for the technology transfer process that are not considered in Bozeman's model. We show why the success of health Grids should be measured by the qualified scientific human capital and opportunities created, and not primarily by the market impact. With two examples we show how the Grid technology transfer theory corresponds to reality. We conclude with recommendations that can help improve the adoption of Grid solutions by the biomedical community. These results give a more concise explanation of the difficulties most life science IT projects are facing in the late funding periods, and show some leveraging steps that can help to overcome the "vale of tears".

  4. Grid3: An Application Grid Laboratory for Science

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    level services required by the participating experiments. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, workloads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. The Grid3 infrastructure was deployed from grid-level services provided by groups and applications within the collaboration. The services were organized into four distinct "grid level services" including: Grid3 Packaging, Monitoring and Information systems, User Authentication and the iGOC Grid Operatio...

  5. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  6. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  7. Grid-connected to/from off-grid transference for micro-grid inverters

    OpenAIRE

    Heredero Peris, Daniel; Chillón Antón, Cristian; Pages Gimenez, Marc; Gross, Gabriel Igor; Montesinos Miracle, Daniel

    2013-01-01

    This paper compares two methods for controlling the on-line transfer from grid-connected to stand-alone mode and vice versa in converters for micro-grids. In the first method, the converter changes from a CSI (Current Source Inverter) in grid-connected mode to a VSI (Voltage Source Inverter) in off-grid mode. In the second method, the inverter always works as a non-ideal voltage source, acting as a VSI, using an AC droop control strategy.

  8. A Riding-through Technique for Seamless Transition between Islanded and Grid-Connected Modes of Droop-Controlled Inverters

    DEFF Research Database (Denmark)

    Hu, Shang-hung; Lee, Tzung-Lin; Kuo, Chun-Yi

    2016-01-01

    This paper presents a seamless transition method for a droop-controlled inverter. The droop control is suitable to make the inverter work as a voltage source in both islanded and grid-connected modes; however, the transfer between these modes can result in a big inrush current that may damage the system. The proposed method allows the droop-controlled inverter to improve the transient response when transferring between modes, by detecting the inrush current, activating a current control loop during transients, and then transferring back to droop-controlled mode smoothly by using a virtual ..., achieving a smooth transition between them and requiring neither synchronization signals nor grid-side information. The control algorithm and design procedure are presented. Experimental results from a laboratory prototype validate the effectiveness of the proposed method.
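
    For context, droop control conventionally mimics the power-frequency and reactive-power-voltage characteristics of synchronous generators. A textbook formulation is given below as an assumption; the paper's exact control law and gains are not stated in this record:

        \omega = \omega^{*} - m\,(P - P^{*}), \qquad
        V = V^{*} - n\,(Q - Q^{*})

    where \omega^{*} and V^{*} are the nominal angular frequency and voltage amplitude, P^{*} and Q^{*} are the active and reactive power set points, and m and n are the droop coefficients. Because the inverter regulates its own frequency and amplitude through these laws, it behaves as a voltage source whether or not the grid is present, which is what makes the mode transfer possible in the first place.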

  9. The GridSite Web/Grid security system

    International Nuclear Information System (INIS)

    McNab, Andrew; Li Yibiao

    2010-01-01

    We present an overview of the current status of the GridSite toolkit, describing the security model for interactive and programmatic uses introduced in the last year. We discuss our experiences of implementing these internal changes and how they and previous rounds of improvements have been prompted by requirements from users and wider security trends in Grids (such as CSRF). Finally, we explain how these have improved the user experience of GridSite-based websites, and wider implications for portals and similar web/grid sites.

  10. Current Grid operation and future role of the Grid

    Science.gov (United States)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts will take place

  11. Current Grid operation and future role of the Grid

    International Nuclear Information System (INIS)

    Smirnova, O

    2012-01-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts will take place

  12. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  13. Reliable Grid Condition Detection and Control of Single-Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai

    standards addressed to the grid-connected systems will harmonize the combination of the DPGS and the classical power plants. Consequently, the major tasks of this thesis were to develop new grid condition detection techniques and intelligent control in order to allow the DPGS not only to deliver power...... to the utility grid but also to sustain it. This thesis was divided into two main parts, namely "Grid Condition Detection" and "Control of Single-Phase DPGS". In the first part, the main focus was on reliable Phase Locked Loop (PLL) techniques for monitoring the grid voltage and on grid impedance estimation...... techniques. Additionally, a new technique for detecting the islanding mode has been developed and successfully tested. In the second part, the main reported research was concentrated around adaptive current controllers based on the information provided by the grid condition detection techniques. To guarantee...
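
    To make the grid-monitoring idea concrete, the sketch below implements a textbook multiplier-type single-phase PLL in Python. It is a generic illustration under assumed gains and sampling rate, not the specific PLL techniques developed in the thesis.

        import numpy as np

        def pll_track(v: np.ndarray, fs: float, f0: float = 50.0,
                      kp: float = 100.0, ki: float = 5000.0) -> np.ndarray:
            """Track the phase of a single-phase grid voltage: the product
            v(t)*cos(theta_hat) contains a term proportional to
            sin(theta - theta_hat), which a PI controller drives to zero by
            adjusting the estimated frequency."""
            dt = 1.0 / fs
            theta, integ = 0.0, 0.0
            thetas = np.empty_like(v)
            for k, vk in enumerate(v):
                err = vk * np.cos(theta)          # phase detector (plus 2f ripple)
                integ += ki * err * dt            # integral part of the PI controller
                omega = 2 * np.pi * f0 + kp * err + integ
                theta = (theta + omega * dt) % (2 * np.pi)
                thetas[k] = theta
            return thetas

        fs = 10_000.0
        t = np.arange(0, 0.2, 1 / fs)
        grid = np.sin(2 * np.pi * 50.0 * t + 0.5)  # 50 Hz grid with a phase offset
        theta_hat = pll_track(grid, fs)

    More elaborate single-phase PLLs add orthogonal-signal generation or filtering to suppress the double-frequency ripple of this simple phase detector, which is one of the design issues such monitoring work addresses.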

  14. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The summary of the ideas in Mayer-Schönberger's and Cukier's book explains that big data is where we use huge quantities of data to make better predictions, based on the fact that we identify patterns in the data rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  15. LES analysis of the flow in a simplified PWR assembly with mixing grid

    Science.gov (United States)

    Bieder, Ulrich; Falk, Francois

    2014-06-01

    The flow in fuel assemblies of PWRs with mixing grids has been analyzed with CFD calculations by numerous authors. The comparisons between calculation and experiment are usually focused on the flow in the near wake of the mixing grid, i.e. on the flow in the first 10 to 20 hydraulic diameters (dh) downstream of the grid. In the study presented here, the comparison between the measurements in the AGATE facility (5x5 tube bundle) and Trio U calculations is done for the whole distance between two successive mixing grids, that is, up to 0.6 m downstream of the grid. The AGATE experiments were originally not designed for CFD validation but to characterize different types of mixing grids. Nevertheless, the quality of the experimental data allows the quantitative comparison between measurement and calculation. The conclusions of the comparison are summarized below. Linear turbulent viscosity models seem to work rather well as long as the cross flow velocity in the rod gaps is advection controlled, that is, directly downstream of the mixing grid. Further downstream, when the cross flow velocity is reduced and isotropic turbulence becomes a more and more important mixing phenomenon, linear viscosity models will fail. The mixing grid affects the cross flow velocity up to the successive grid, at a distance of about 50 dh; the flow in fuel assemblies is never similar to that in undisturbed rod bundles. The test section of the AGATE facility has been discretized with 300 million control volumes using a staggered grid approach on tetrahedral meshes. Twenty days of CPU time on 4600 nodes of the HPC machine CURIE of the CCRT were necessary to calculate the statistics of the turbulent flow, in particular the mean velocity and the RMS of the turbulent fluctuations.
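
    Since distances in these records are expressed in hydraulic diameters dh, it may help to recall the standard definition dh = 4A/P_w (four times the flow area over the wetted perimeter). For an interior subchannel of a square-pitch rod bundle with pitch p and rod diameter D, this gives the general textbook relation below; the actual AGATE value depends on its geometry, which is not given in these records:

        d_h = \frac{4\,\left(p^{2} - \pi D^{2}/4\right)}{\pi D}

    Here the wetted perimeter of the interior subchannel is the rod perimeter \pi D, and p^{2} - \pi D^{2}/4 is the flow area between four neighbouring rods.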

  16. LES analysis of the flow in a simplified PWR assembly with mixing grid

    International Nuclear Information System (INIS)

    Bieder, Ulrich; Fauchet, Gauthier; Falk, Francois

    2014-01-01

The flow in fuel assemblies of Pressurized Water Reactors (PWR) with mixing grids has been analysed with Computational Fluid Dynamics (CFD) by numerous authors. The comparisons between calculation and experiment are mostly focused on the flow in the near wake of the mixing grid, i.e. on the flow in the first 5 to 10 hydraulic diameters (dh) downstream of the grid. In the study presented here, the comparison between the measurements in the AGATE facility (5 * 5 tube bundle) and Trio-U calculations is done for the whole distance between two successive mixing grids, that is up to about 50 dh downstream of the grid. The AGATE experiments were originally designed not for CFD validation but to characterize different types of mixing grids. Nevertheless, the quality of the experimental data allows a quantitative comparison between measurement and calculation. The conclusions of the comparison are summarized as follows: (1) linear turbulent viscosity models seem to work rather well as long as the cross flow velocity in the rod gaps is advection controlled, that is, directly downstream of the mixing grid; (2) further downstream, when the cross flow velocity is reduced and anisotropic turbulence becomes a more and more important mixing phenomenon, linear viscosity models can fail; (3) the mixing grid affects the cross flow velocity up to the successive grid. The flow in fuel assemblies is never similar to that in undisturbed rod bundles. The test section of the AGATE facility has been discretized on 300 million control volumes using a staggered grid approach on tetrahedral meshes. 20 days of CPU time on 4600 cores of the High Performance Computer (HPC) cluster CURIE of the Centre de Calcul, Recherche et Technologie (CCRT) were necessary to converge the statistics of the turbulent fluctuations: the mean velocity converged completely, while the RMS of the turbulent fluctuations converged only incompletely. (authors)
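For readers unfamiliar with the statistics quoted in these records, the time-averaged velocity and the RMS of the turbulent fluctuations follow the standard definitions; a minimal numpy sketch, with synthetic samples standing in for an LES probe signal:

```python
import numpy as np

# Synthetic stand-in for an LES time series of axial velocity at one probe.
rng = np.random.default_rng(0)
u = 5.0 + 0.4 * rng.standard_normal(100_000)   # mean flow plus fluctuations

u_mean = u.mean()                               # time-averaged velocity <u>
u_rms = np.sqrt(np.mean((u - u_mean) ** 2))     # RMS of the fluctuation u' = u - <u>
print(f"<u> = {u_mean:.3f} m/s, u_rms = {u_rms:.3f} m/s, "
      f"turbulence intensity = {u_rms / u_mean:.1%}")
```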

  17. LES analysis of the flow in a simplified PWR assembly with mixing grid

    International Nuclear Information System (INIS)

    Bieder, U.; Falk, F.

    2013-01-01

The flow in fuel assemblies of PWRs with mixing grids has been analyzed with CFD calculations by numerous authors. The comparisons between calculation and experiment are focused on the flow in the near wake of the mixing grid, i.e. on the flow in the first 10 to 20 hydraulic diameters (dh) downstream of the grid. In the study presented here, the comparison between the measurements in the AGATE facility (5*5 tube bundle) and Trio U calculations is done for the whole distance between two successive mixing grids, that is up to 0.6 m downstream of the grid. The AGATE experiments were originally designed not for CFD validation but to characterize different types of mixing grids. Nevertheless, the quality of the experimental data allows a quantitative comparison between measurement and calculation. The conclusions of the comparison are summarized below. First, linear turbulent viscosity models seem to work rather well as long as the cross flow velocity in the rod gaps is advection controlled, that is, directly downstream of the mixing grid. Secondly, further downstream, when the cross flow velocity is reduced and isotropic turbulence becomes a more and more important mixing phenomenon, linear viscosity models will fail. Thirdly, the mixing grid affects the cross flow velocity up to the successive grid at a distance of about 50 dh. The flow in fuel assemblies is never similar to that in undisturbed rod bundles. The test section of the AGATE facility has been discretized on 300 million control volumes using a staggered grid approach on tetrahedral meshes. 20 days of CPU time on 4600 nodes of the HPC machine CURIE of the CCRT (Computer Center for Research and Technology - France) were necessary to calculate the statistics of the turbulent flow, in particular the mean velocity and the RMS of the turbulent fluctuations. (authors)

  18. Ten questions concerning integrating smart buildings into the smart grid

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, Thomas M.; Boudreau, Marie-Claude; Helsen, Lieve; Henze, Gregor; Mohammadpour, Javad; Noonan, Doug; Patteeuw, Dieter; Pless, Shanti; Watson, Richard T.

    2016-11-01

Recent advances in information and communications technology (ICT) have initiated development of a smart electrical grid and smart buildings. Buildings consume a large portion of the total electricity production worldwide, and to fully develop a smart grid they must be integrated with that grid. Buildings can now be 'prosumers' on the grid (both producers and consumers), and the continued growth of distributed renewable energy generation is raising new challenges in terms of grid stability over various time scales. Buildings can contribute to grid stability by managing their overall electrical demand in response to current conditions. Facility managers must balance demand response requests by grid operators with the energy needed to maintain smooth building operations. For example, maintaining thermal comfort within an occupied building requires energy, and thus an optimized solution balancing energy use with indoor environmental quality (adequate thermal comfort, lighting, etc.) is needed. Successful integration of buildings and their systems with the grid also requires interoperable data exchange. However, the adoption and integration of newer control and communication technologies into buildings can be problematic with older legacy HVAC and building control systems. Public policy and economic structures have not kept up with the technical developments that have given rise to the budding smart grid, and further developments are needed in both technical and non-technical areas.

19. BLAST in Grid (BiG): A Grid-Enabled Software Architecture and Implementation of Parallel and Sequential BLAST

    International Nuclear Information System (INIS)

    Aparicio, G.; Blanquer, I.; Hernandez, V.; Segrelles, D.

    2007-01-01

The integration of high-performance computing tools is a key issue in biomedical research. Many computer-based applications, such as BLAST, have been migrated to high-performance computers to deal with their computing and storage needs. However, the use of clusters and computing farms presents problems of scalability. The use of a higher layer of parallelism, which splits the task into highly independent long jobs that can be executed in parallel, can improve the performance while maintaining the efficiency. Grid technologies combined with parallel computing resources are an important enabling technology. This work presents a software architecture for executing BLAST on an international Grid infrastructure that guarantees security, scalability and fault tolerance. The software architecture is modular and adaptable to many other high-throughput applications, both inside the field of biocomputing and outside. (Author)
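A hedged sketch of that higher layer of parallelism: splitting a multi-sequence FASTA query into independent chunks, each of which can be submitted as a separate BLAST grid job (the file names and chunk count are illustrative assumptions, not details from the paper):

```python
def split_fasta(path, n_chunks):
    """Read a multi-sequence FASTA file and split its records into n_chunks
    roughly equal groups, each of which can run as an independent grid job."""
    records, current = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">") and current:
                records.append("".join(current))
                current = []
            current.append(line)
    if current:
        records.append("".join(current))
    return [records[i::n_chunks] for i in range(n_chunks)]

# Each chunk becomes one BLAST job; outputs can simply be concatenated,
# because BLAST treats query sequences independently of one another.
for i, chunk in enumerate(split_fasta("queries.fasta", 16)):
    with open(f"chunk_{i:02d}.fasta", "w") as out:
        out.writelines(chunk)
```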

  20. Ethics, Big Data, and Analytics: A Model for Application.

    OpenAIRE

    Willis, James E, III

    2013-01-01

    The use of big data and analytics to predict student success presents unique ethical questions for higher education administrators relating to the nature of knowledge; in education, "to know" entails an obligation to act on behalf of the student. The Potter Box framework can help administrators address these questions and provide a framework for action.

  1. OGC and Grid Interoperability in enviroGRIDS Project

    Science.gov (United States)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, and Grid oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and the extended features of both technologies. The geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields, especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling the OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as the relations between computational grid and…

  2. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    International Nuclear Information System (INIS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-01-01

We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it 'multi-tier'. The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, and the application characteristics of GUMS and VOMS which enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
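The health monitors that govern routing can be pictured with a much-simplified probe: each back end is tested and only responsive members stay in the pool (the host names are placeholders; the actual system relies on the BIG-IP's built-in monitors rather than custom code):

```python
import socket

# Hypothetical back-end pool; the production system uses BIG-IP monitors.
BACKENDS = [("db1.example.org", 3306), ("db2.example.org", 3306)]

def is_alive(host, port, timeout=2.0):
    """TCP-level probe: can a connection be opened within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_pool():
    """Members passing the probe; traffic is routed only to these."""
    return [(h, p) for h, p in BACKENDS if is_alive(h, p)]

print(healthy_pool())
```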

  3. VOSpace: a Prototype for Grid 2.0

    Science.gov (United States)

    Graham, M. J.; Morris, D.; Rixon, G.

    2007-10-01

    As Grid 1.0 was characterized by distributed computation, so Grid 2.0 will be characterized by distributed data and the infrastructure needed to support and exploit it: the emerging success of Amazon S3 is already testimony to this. VOSpace is the IVOA interface standard for accessing distributed data. Although the base definition (VOSpace 1.0) only relates to flat, unconnected data stores, subsequent versions will add additional layers of functionality. In this paper, we consider how incorporating popular web concepts such as folksonomies (tagging), social networking, and data-spaces could lead to a much richer data environment than provided by a traditional collection of networked data stores.

  4. The impact of big data and business analytics on supply chain management

    Directory of Open Access Journals (Sweden)

    Hans W. Ittmann

    2015-05-01

Objective: This article endeavours to highlight the evolving nature of the supply chain management (SCM) environment, to identify how the two major trends (‘big data’ and analytics) will impact SCM in future, to show the benefits that can be derived if these trends are embraced, and to make recommendations to supply chain managers. Method: The importance of extracting value from the huge amounts of data available in the SCM area is stated. ‘Big data’ and analytics are defined and the impact of these on various SCM applications is clearly illustrated. Results: It is shown, through examples, how the SCM area can be impacted by these new trends and developments. In these examples ‘big data’ analytics have already been embraced, used and implemented successfully. Big data is a reality and using analytics to extract value from the data has the potential to make a huge impact. Conclusion: It is strongly recommended that supply chain managers take note of these two trends, since better use of ‘big data’ analytics can ensure that they keep abreast of developments and changes which can assist in enhancing business competitiveness.

  5. Privacy protection in HealthGrid: distributing encryption management over the VO.

    Science.gov (United States)

    Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente

    2006-01-01

Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could bring to Health applications, privacy leakages in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, compromise their widespread adoption. Privacy control for Grid technology has become a key requirement for the adoption of Grids in the Healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grid, can provide a way to organize key sharing, access control lists and secure encryption management. This paper provides programming models and discusses the value, costs and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
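The distributed key management idea can be illustrated with a toy n-of-n XOR split of a symmetric key across VO members, so that no single party can decrypt the stored data alone (a sketch only; a real deployment would rather use a threshold scheme such as Shamir's secret sharing on top of the Grid security layer):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """n-of-n XOR secret sharing: all n shares are required to rebuild."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share closes the XOR
    return shares

def combine(shares) -> bytes:
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)      # e.g. an AES-256 key protecting stored data
shares = split_key(key, 5)         # one share per VO member
assert combine(shares) == key      # only the full VO reconstructs the key
```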

  6. Smart Grid Enabled EVSE

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2015-01-12

    The combined team of GE Global Research, Federal Express, National Renewable Energy Laboratory, and Consolidated Edison has successfully achieved the established goals contained within the Department of Energy’s Smart Grid Capable Electric Vehicle Supply Equipment funding opportunity. The final program product, shown charging two vehicles in Figure 1, reduces by nearly 50% the total installed system cost of the electric vehicle supply equipment (EVSE) as well as enabling a host of new Smart Grid enabled features. These include bi-directional communications, load control, utility message exchange and transaction management information. Using the new charging system, Utilities or energy service providers will now be able to monitor transportation related electrical loads on their distribution networks, send load control commands or preferences to individual systems, and then see measured responses. Installation owners will be able to authorize usage of the stations, monitor operations, and optimally control their electricity consumption. These features and cost reductions have been developed through a total system design solution.
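A hedged sketch of the utility message exchange described above: the station receives a load-control command, curtails its charging rate, and reports the measured response (the JSON fields are invented for illustration; the project's actual protocol is not reproduced here):

```python
import json

def handle_load_control(message: str, full_rate_kw: float) -> str:
    """Apply a utility load-control command and report the measured response."""
    cmd = json.loads(message)
    limit = cmd.get("max_load_kw", full_rate_kw)
    applied = min(full_rate_kw, limit)            # curtail charging if required
    return json.dumps({
        "station_id": cmd["station_id"],
        "applied_rate_kw": applied,
        "curtailed_kw": round(full_rate_kw - applied, 3),
    })

# A utility asks a 19.2 kW dual-vehicle station to stay below 10 kW.
print(handle_load_control('{"station_id": "evse-42", "max_load_kw": 10}', 19.2))
```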

  7. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions and the various questions related to big data. In this first part I will try to set out a few things concerning Big Data theory and…

  8. Next Generation Workload Management and Analysis System for Big Data

    Energy Technology Data Exchange (ETDEWEB)

    De, Kaushik [Univ. of Texas, Arlington, TX (United States)

    2017-04-24

We report on the activities and accomplishments of a four-year project (a three-year grant followed by a one-year no-cost extension) to develop a next generation workload management system for Big Data. The new system is based on the highly successful PanDA software developed for High Energy Physics (HEP) in 2005. PanDA is used by the ATLAS experiment at the Large Hadron Collider (LHC) and by the AMS experiment on the space station. The program of work described here was carried out by two teams of developers working collaboratively at Brookhaven National Laboratory (BNL) and the University of Texas at Arlington (UTA). These teams worked closely with the original PanDA team; for the sake of clarity, the work of the next generation team is referred to as the BigPanDA project. Their work has led to the adoption of BigPanDA by the COMPASS experiment at CERN, and by many other experiments and science projects worldwide.

  9. Toward developing more realistic groundwater models using big data

    Science.gov (United States)

    Vahdat Aboueshagh, H.; Tsai, F. T. C.; Bhatta, D.; Paudel, K.

    2017-12-01

Rich geological data is the backbone of developing realistic groundwater models for groundwater resources management. However, constructing realistic groundwater models can be challenging due to inconsistency between different sources of geological, hydrogeological and geophysical data and the difficulty of processing big data to characterize the subsurface environment. This study develops a framework to utilize a big geological dataset to create a groundwater model for the Chicot Aquifer in southwestern Louisiana, which borders the Gulf of Mexico to the south. The Chicot Aquifer is the principal source of fresh water in southwest Louisiana, underlying an area of about 9,000 square miles. Agriculture is the largest groundwater consumer in this region, and overpumping has caused significant groundwater head decline and saltwater intrusion from the Gulf and from deep formations. A hydrostratigraphy model was constructed using around 29,000 electrical logs and drillers' logs, as well as the screen lengths of pumping wells, through a natural neighbor interpolation method. These sources of information carry different weights in terms of accuracy and trustworthiness. A data prioritization procedure was developed to filter untrustworthy log information, eliminate redundant data, and establish a consensus of the various lithological information. The constructed hydrostratigraphy model shows 40% sand facies, which is consistent with the well log data. The hydrostratigraphy model confirms outcrop areas of the Chicot Aquifer in the north of the study region. The aquifer sand formation thins eastward to merge into the Atchafalaya River alluvial aquifer and coalesces with the underlying Evangeline aquifer. A grid generator was used to convert the hydrostratigraphy model into a MODFLOW grid with 57 layers. A Chicot groundwater model was constructed using the available hydrologic and hydrogeological data for 2004-2015. Pumping rates for irrigation wells were estimated using the crop type and acreage…
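The data prioritization step can be pictured as a weighted vote over lithology picks, with a trust weight per source type (the weights and categories below are illustrative assumptions, not the study's values):

```python
from collections import defaultdict

# Assumed trust weights per source type; electrical logs ranked highest.
WEIGHTS = {"electrical_log": 3.0, "drillers_log": 1.0, "screen_interval": 2.0}

def consensus_facies(picks):
    """picks: (source_type, facies) pairs for one depth interval.
    Returns the facies with the highest weighted vote."""
    votes = defaultdict(float)
    for source, facies in picks:
        votes[facies] += WEIGHTS.get(source, 0.5)  # unknown sources: low trust
    return max(votes, key=votes.get)

interval = [("electrical_log", "sand"), ("drillers_log", "clay"),
            ("screen_interval", "sand")]
print(consensus_facies(interval))  # -> 'sand'
```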

  10. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  11. A Study on Structural Strength of Irradiated Spacer Grid for PWR Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Y. G.; Baek, S. J.; Kim, D. S.; Yoo, B. O.; Ahn, S. B.; Chun, Y. B. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, J. I.; Kim, Y. H.; Lee, J. J. [KEPCO NF, Daejeon (Korea, Republic of)

    2014-10-15

A fuel assembly consists of an array of fuel rods, spacer grids, guide thimbles, instrumentation tubes, and top and bottom nozzles. In PWR (Pressurized light Water Reactor) fuel assemblies, the spacer grids support the fuel rods by the friction forces between the fuel rods and the springs/dimples. Under irradiation, the spacer grids supporting the fuel rods absorb vibration impacts due to the reactor coolant flow, and also bear static and dynamic loads during operation inside the nuclear reactor and during transportation for spent fuel storage. Thus, it is important to understand the characteristics of the deformation behavior and the change in structural strength of an irradiated spacer grid. In the present study, a static compression test was conducted to investigate the structural strength of an irradiated spacer grid in a hot cell at IMEF (Irradiated Materials Examination Facility) of KAERI. The fuel assembly was dismantled and the irradiated spacer grid was obtained for the compression test. The apparatus for measuring the compression strength of the irradiated spacer grid was developed and installed successfully in the hot cell.

  12. Fiberglass Grids as Sustainable Reinforcement of Historic Masonry

    Science.gov (United States)

    Righetti, Luca; Edmondson, Vikki; Corradi, Marco; Borri, Antonio

    2016-01-01

Fiber-reinforced composite (FRP) materials have gained increasing success, mostly for strengthening, retrofitting and repairing existing historic masonry structures, and may significantly enhance the mechanical properties of the reinforced members. This article summarizes the results of previous experimental activities aimed at investigating the effectiveness of GFRP (Glass Fiber Reinforced Polymer) grids embedded into an inorganic mortar to reinforce historic masonry. The paper also presents innovative results on the relationship between the durability and the governing material properties of GFRP grids. Measurements of the tensile strength were made using specimens cut from GFRP grids before and after ageing in aqueous solution. The tensile strength of a commercially available GFRP grid has been tested after up to 450 days of storage in deionized water and NaCl solution. Degradations in tensile strength and Young’s modulus of up to 30.2% and 13.2%, respectively, were recorded. This degradation indicates that extended storage in a wet environment may cause a decrease in the mechanical properties. PMID:28773725

  13. Grid for Earth Science Applications

    Science.gov (United States)

    Petitdidier, Monique; Schwichtenberg, Horst

    2013-04-01

Civil society at large has addressed to the Earth Science community many strong requirements related in particular to natural and industrial risks, climate change, and new energies. The main critical point is that, on the one hand, civil society and the public ask for certainties, i.e. precise values with small error ranges, concerning predictions at short, medium and long term in all domains; on the other hand, Science can mainly answer only in terms of probability of occurrence. To improve the answers or decrease the uncertainties, (1) new observational networks have been deployed in order to have a better geographical coverage, and more accurate measurements have been carried out in key locations and aboard satellites. Following the OECD recommendations on the openness of research and public sector data, more and more data are available to academic organisations and SMEs; (2) new algorithms and methodologies have been developed to face the huge data processing and assimilation into simulations using new technologies and compute resources. Finally, our total knowledge about the complex Earth system is contained in models and measurements; how we put them together has to be managed cleverly. The technical challenge is to put together databases and computing resources to answer the ES challenges. However, all these applications are very computing-intensive. Different compute solutions are available and depend on the characteristics of the applications. One of them is the Grid, which is especially efficient for independent or embarrassingly parallel jobs related to statistical and parametric studies. Numerous applications in atmospheric chemistry, meteorology, seismology, hydrology, pollution, climate and biodiversity have been deployed successfully on the Grid. In order to fulfill the requirements of risk management, several prototype applications have been deployed using OGC (Open Geospatial Consortium) components with Grid middleware. The Grid has permitted, via a huge number of runs, to…

  14. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    Science.gov (United States)

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and the graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by…
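The two functional-programming tasks named above can be shown in a few lines of plain Python (no Hadoop; in the real framework the shuffle step that groups values by key between the two phases is performed by the system):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (key, value) pairs -- here (word, 1) for a word count."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(key, values):
    """Reduce: fold all values observed for one key into a single result."""
    return key, sum(values)

documents = ["big data needs big tools", "map and reduce are small tools"]

# Shuffle: group mapped pairs by key before reducing.
groups = defaultdict(list)
for key, value in chain.from_iterable(map(map_phase, documents)):
    groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts["big"], counts["tools"])  # -> 2 2
```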

  15. The smart kiosk substation for Smart Grids; Die intelligente Ortsnetzstation fuer das Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Thomas [Schneider Electric Energy GmbH, Frankfurt (Germany); Vaupel, Steffen [Schneider Electric Energy GmbH, Kassel (Germany)

    2012-07-01

The changes in the energy supply towards current and future needs call for new technologies and solutions, resulting in the "Smart Grid". The smart kiosk substation described here is an essential component for the additionally required optimization of the energy distribution networks: a complete functional unit of an economic and efficient compact substation, which has been operating successfully within the framework of a pilot project since the beginning of this year. In addition to an adjustable 630-kVA local distribution transformer, control and signalling functions to manage fault situations are included, allowing for the optimization of outage times. Measurement of network quality and an economic network protection complete the range of services. As is customary during the development of new products, high availability and freedom from maintenance (through the utilization of standard components), as well as compliance with current standards and regulations, were taken into account. Along with the demand for regenerative feed-ins to feed reactive power into the network as required, the regulating device allows for a much improved use of the voltage limits, in the low voltage grid as well. It is to be expected that the network expansion of the low voltage grid can thus be significantly optimized through these possibilities of regulation. (orig.)
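A rough sketch of the voltage-regulation logic such an adjustable distribution transformer implements: choose the tap position that brings the measured low-voltage-side voltage back inside the permitted band (the band width, tap step and tap count are illustrative assumptions):

```python
def select_tap(v_measured, v_nominal=400.0, band=0.03,
               tap_step=0.0125, max_tap=4):
    """Return a tap position in [-max_tap, +max_tap]; each step shifts the
    secondary voltage by tap_step per unit. 0 means the voltage is in band."""
    deviation = (v_measured - v_nominal) / v_nominal
    if abs(deviation) <= band:
        return 0                               # inside the permitted dead band
    steps = round(-deviation / tap_step)       # counteract the deviation
    return max(-max_tap, min(max_tap, steps))

print(select_tap(418.0))  # overvoltage -> negative taps lower the voltage
print(select_tap(403.0))  # within the +/-3 % band -> no tap change
```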

  16. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  17. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  18. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

Big Data technologies are becoming more popular with the constant growth of data generation in different fields, such as social networks, the internet of things, and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move and how is it analyzed? All these questions will be answered during the talk.

  19. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second later), and the fate of the Universe are all discussed. (U.K.)

  20. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the Climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund—ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  1. Gridded Species Distribution, Version 1: Global Amphibians Presence Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Presence Grids of the Gridded Species Distribution, Version 1 is a reclassified version of the original grids of amphibian species distribution...

  2. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  3. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  4. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  5. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unattainable. Big data is generally characterized by three factors: volume, velocity and variety. These three factors distinguish it from traditional data use. The possibilities for utilizing this technology are vast. Big data technology has touch points in differ...

  6. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

In this presentation the experiences of the LHC experiments using grid computing were presented, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  7. Chimera Grid Tools

    Science.gov (United States)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  8. Promotional drivers for grid-connected PV

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Polo, A.; Hass, R.; Suna, D.

    2009-03-15

    This report for the International Energy Agency (IEA) made by Task 10 of the Photovoltaic Power Systems (PVPS) programme takes a look at promotional measures for grid-connected photovoltaic systems. The mission of the Photovoltaic Power Systems Programme is to enhance the international collaboration efforts which accelerate the development and deployment of photovoltaic solar energy. The objective of Task 10 is stated as being to enhance the opportunities for wide-scale, solution-oriented application of photovoltaics in the urban environment. The paper discusses the core objective of this study which was to analyse the success of various governmental regulatory programs and governmental and non-governmental marketing programs for grid-connected PV systems. To meet this objective, a review of the most important past and current programs around the world was conducted. The theoretical bases of supply and demand are explained and the types of existing strategies are documented in a second Section. In Chapter 3, various programs around the world are described. Chapter 4 focuses on defining success criteria which will be used for the analysis of the programs. Finally, the major conclusions drawn complete this analysis.

  9. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  10. Grid: From EGEE to EGI and from INFN-Grid to IGI

    International Nuclear Information System (INIS)

    Giselli, A.; Mazzuccato, M.

    2009-01-01

In the last fifteen years, the approach of the computational Grid has changed the way computing resources are used. Grid computing has raised interest worldwide in academia, industry, and government, with fast development cycles. Great efforts, huge funding and resources have been made available through national, regional and international initiatives aiming at providing Grid infrastructures, Grid core technologies, Grid middleware and Grid applications. The Grid software layers reflect the architecture of the services developed so far by the most important European and international projects. In this paper the story of the Grid e-Infrastructure is given, detailing European, Italian and international projects such as EGEE, INFN-Grid and NAREGI. In addition, the sustainability issue in the long-term perspective is described, presenting the plans of the European and Italian communities for EGI and IGI.

  11. From the grid to the smart grid, topologically

    Science.gov (United States)

    Pagani, Giuliano Andrea; Aiello, Marco

    2016-05-01

    In its more visionary acceptation, the smart grid is a model of energy management in which the users are engaged in producing energy as well as consuming it, while having information systems fully aware of the energy demand-response of the network and of dynamically varying prices. A natural question is then: to make the smart grid a reality will the distribution grid have to be upgraded? We assume a positive answer to the question and we consider the lower layers of medium and low voltage to be the most affected by the change. In our previous work, we analyzed samples of the Dutch distribution grid (Pagani and Aiello, 2011) and we considered possible evolutions of these using synthetic topologies modeled after studies of complex systems in other technological domains (Pagani and Aiello, 2014). In this paper, we take an extra important step by defining a methodology for evolving any existing physical power grid to a good smart grid model, thus laying the foundations for a decision support system for utilities and governmental organizations. In doing so, we consider several possible evolution strategies and apply them to the Dutch distribution grid. We show how increasing connectivity is beneficial in realizing more efficient and reliable networks. Our proposal is topological in nature, enhanced with economic considerations of the costs of such evolutions in terms of cabling expenses and economic benefits of evolving the grid.
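The kind of topological comparison the paper performs can be sketched with networkx (assumed available): a radial feeder modeled as a tree against an "evolved" variant with a few added cross-links, compared on average path length and loop redundancy (the toy graph and added edges are invented for illustration):

```python
import networkx as nx

# Toy stand-in for a radial distribution feeder: a tree has no redundancy.
radial = nx.balanced_tree(r=2, h=4)

# "Evolved" grid: add a few cross-links between branches.
evolved = radial.copy()
evolved.add_edges_from([(7, 12), (9, 14), (17, 24)])

for name, g in [("radial", radial), ("evolved", evolved)]:
    loops = g.number_of_edges() - g.number_of_nodes() + 1  # independent cycles
    print(name,
          "avg path length:", round(nx.average_shortest_path_length(g), 2),
          "independent loops:", loops)
```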

  12. Grid Generation Techniques Utilizing the Volume Grid Manipulator

    Science.gov (United States)

    Alter, Stephen J.

    1998-01-01

    This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy to read rich language of commands that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be efficiently performed to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions to enable flow field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.
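The point-to-point stretching control mentioned above is commonly realized as a geometric distribution of spacings; a minimal sketch of the idea (the ratio and point count are illustrative, and this is not VGM's actual implementation):

```python
def stretched_points(x0, x1, n, ratio=1.1):
    """Distribute n points between x0 and x1 with geometrically growing
    spacing; ratio > 1 clusters points near x0 (e.g. a boundary layer)."""
    # The n-1 spacings d, d*r, ..., d*r^(n-2) must sum to (x1 - x0).
    total = (ratio ** (n - 1) - 1) / (ratio - 1) if ratio != 1.0 else n - 1
    d = (x1 - x0) / total
    pts, x = [x0], x0
    for i in range(n - 1):
        x += d * ratio ** i
        pts.append(x)
    return pts

pts = stretched_points(0.0, 1.0, 11)
print([round(p, 4) for p in pts])  # spacing grows by 10 % per interval
```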

  13. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

Book chapter for "Big Data: Storage, Sharing, and Security (3S)". Distribution A: Public Release.

  14. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  15. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution… and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced, as distinct from the very attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim… that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data…

  16. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust, yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  17. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

The amount of data at the global level has grown exponentially. Along with this phenomenon comes the need for new units of measure, such as the exabyte, zettabyte and yottabyte, to express the amount of data. The growth of data creates a situation where the classic systems for the collection, storage, processing and visualization of data are losing the battle with the large amount, speed and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service, by presenting a big data analytics services-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  18. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
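A compact sketch of the two phases just described, using Python's multiprocessing; for simplicity the grid is a 1-D row of portions and the objects are intervals (all sizes and counts are invented for illustration):

```python
from collections import defaultdict
from multiprocessing import Pool

N = 4            # number of processors, hence of distinct grid portions
SPAN = 100.0     # the grid covers [0, SPAN); portion p covers one N-th of it

def portions_bounding(obj):
    """Phase 1 kernel: which portions at least partially bound interval obj."""
    lo, hi = obj
    width = SPAN / N
    return [(p, obj) for p in range(int(lo // width), int(hi // width) + 1)
            if p < N]

def populate(args):
    """Phase 2 kernel: one processor fills its own portion with its objects."""
    portion, objs = args
    return portion, sorted(objs)

if __name__ == "__main__":
    objects = [(3.0, 30.0), (40.0, 45.0), (60.0, 99.0)]
    with Pool(N) as pool:
        # Phase 1: objects are divided among processors, each determining
        # which portion(s) at least partially bound the objects in its set.
        pairs = [p for sub in pool.map(portions_bounding, objects) for p in sub]
        by_portion = defaultdict(list)
        for portion, obj in pairs:
            by_portion[portion].append(obj)
        # Phase 2: portions are divided among processors; each populates its
        # portion with the objects previously determined to be bounded by it.
        grid = dict(pool.map(populate, list(by_portion.items())))
    print(grid)
```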

  19. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

Big data is data beyond the storage capacity and beyond the processing power of conventional systems. The term is used for data sets so large or complex that traditional tools cannot handle them. The size of big data is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; on social networking sites, for example, the amount of data produced by people is growing rapidly every year. Big data is not only data; rather, it has become a complete subject, which includes various tools, techniques and frameworks. It encompasses the rapid growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to acquire, organize and analyze the various types of data. In this paper we describe the applications, problems and tools of big data and give an overview of big data.

  20. Increased Productivity for Emerging Grid Applications the Application Support System

    CERN Document Server

    Maier, Andrew; Mendez Lorenzo, Patricia; Moscicki, Jakub; Lamanna, Massimo; Muraru, Adrian

    2008-01-01

    Recently a growing number of various applications have been quickly and successfully enabled on the Grid by the CERN Grid application support team. This allowed the applications to achieve and publish large-scale results in a short time which otherwise would not be possible. We present the general infrastructure, support procedures and tools that have been developed. We discuss the general patterns observed in supporting new applications and porting them to the EGEE environment. The CERN Grid application support team has been working with the following real-life applications: medical and particle physics simulation (Geant4, Garfield), satellite imaging and geographic information for humanitarian relief operations (UNOSAT), telecommunications (ITU), theoretical physics (Lattice QCD, Feynman-loop evaluation), Bio-informatics (Avian Flu Data Challenge), commercial imaging processing and classification (Imense Ltd.) and physics experiments (ATLAS, LHCb, HARP). Using the EGEE Grid we created a standard infrastruct...

  1. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  2. The MicroGrid: A Scientific Tool for Modeling Computational Grids

    Directory of Open Access Journals (Sweden)

    H.J. Song

    2000-01-01

The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on micro-benchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2% to 4%).

  3. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  4. WE-H-BRB-02: Where Do We Stand in the Applications of Big Data in Radiation Oncology?

    International Nuclear Information System (INIS)

    Xing, L.

    2016-01-01

Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and on enhancing the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  5. WE-H-BRB-02: Where Do We Stand in the Applications of Big Data in Radiation Oncology?

    Energy Technology Data Exchange (ETDEWEB)

    Xing, L. [Stanford University School of Medicine (United States)

    2016-06-15

Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success. The overriding goal of this trio panel of presentations is to improve awareness of the wide-ranging opportunities for big data impact on patient quality care and on enhancing the potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: (1) to discuss current and future sources of big data for use in radiation oncology research; (2) to optimize our current data collection by adopting new strategies from outside radiation oncology; (3) to determine what new knowledge big data can provide for clinical decision support for personalized medicine. L. Xing, NIH/NCI, Google Inc.

  6. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    acceptable since it makes possible an overall and local error reduction through grid redistribution. SAGE includes the ability to modify the adaption techniques in boundary regions, which substantially improves the flexibility of the adaptive scheme. The vectorial approach used in the analysis also provides flexibility. The user has complete choice of adaption direction and order of sequential adaptions without concern for the computational data structure. Multiple passes are available with no restraint on stepping directions; for each adaptive pass the user can choose a completely new set of adaptive parameters. This facility, combined with the capability of edge boundary control, enables the code to individually adapt multi-dimensional multiple grids. Zonal grids can be adapted while maintaining continuity along the common boundaries. For patched grids, the multiple-pass capability enables complete adaption. SAGE is written in FORTRAN 77 and is intended to be machine independent; however, it requires a FORTRAN compiler which supports NAMELIST input. It has been successfully implemented on Sun series computers, SGI IRIS's, DEC MicroVAX computers, HP series computers, the Cray YMP, and IBM PC compatibles. Source code is provided, but no sample input and output files are provided. The code reads three datafiles: one that contains the initial grid coordinates (x,y,z), one that contains corresponding flow-field variables, and one that contains the user control parameters. It is assumed that the first two datasets are formatted as defined in the plotting software package PLOT3D. Several machine versions of PLOT3D are available from COSMIC. The amount of main memory is dependent on the size of the matrix. The standard distribution medium for SAGE is a 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format or on a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. SAGE was developed in 1989, first

  7. Modelling noise propagation using Grid Resources. Progress within GDI-Grid

    Science.gov (United States)

    Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut

    2010-05-01

    GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures and at identifying the potential of utilizing the superior storage capacities and computational power of Grid infrastructures for geospatial applications, while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces for web mapping (WMS), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing and a noise propagation simulation. The latter scenario is addressed by Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, which is adapting its LimA software to utilize Grid resources. Noise mapping of e.g. traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Input data comprises the road network and traffic, terrain, buildings and noise protection screens, as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on local geometry. For each of the segments, propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation
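
    As a rough illustration of the segment-based scheme described above, the Python sketch below (with hypothetical emission values) splits a road polyline into short segments and energetically sums their free-field contributions at a receiver. It models geometric divergence of a point source only; the diffraction, ground and screening effects handled by tools such as LimA are omitted.

        import math

        def segment_levels(road, lw_per_m, receiver, max_range=2000.0, seg_len=10.0):
            """Split a polyline road source into short segments and return the
            free-field contribution of each segment at the receiver, in dB.
            Only geometric divergence of a point source is modelled here."""
            out = []
            for (x1, y1), (x2, y2) in zip(road, road[1:]):
                length = math.hypot(x2 - x1, y2 - y1)
                n_seg = max(1, int(length / seg_len))
                for i in range(n_seg):
                    t = (i + 0.5) / n_seg                       # segment midpoint
                    sx, sy = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
                    d = math.hypot(receiver[0] - sx, receiver[1] - sy)
                    if 1.0 <= d <= max_range:
                        lw = lw_per_m + 10 * math.log10(length / n_seg)
                        out.append(lw - 20 * math.log10(d) - 11)
            return out

        def total_level(levels):
            """Energetic (power) sum of the per-segment levels."""
            return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

        # Receiver 30 m from a straight 1 km road emitting 80 dB per metre of road:
        road = [(0.0, 0.0), (1000.0, 0.0)]
        print(round(total_level(segment_levels(road, 80.0, (500.0, 30.0))), 1))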

  8. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  9. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors in those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  10. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative...

  11. Application of epidemic algorithms for smart grids control

    International Nuclear Information System (INIS)

    Krkoleva, Aleksandra

    2012-01-01

    the balance of production and demand within a Micro grid. In this case, the implementation of gossip algorithms enables maintaining the overall consumption of the group and facilitates load forecasting. The main contribution of the thesis is the successful selection and adaptation of gossip algorithms for decentralized control in Smart Grids. The thesis presents an innovative concept for organizing consumers to provide ancillary services and participate in demand response actions in Smart Grids. (Author)
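
    The thesis itself is not reproduced here, but the decentralized averaging that such gossip schemes build on can be sketched in a few lines: each exchange replaces two nodes' values with their mean, so every node converges to the group's average consumption without a central coordinator. A minimal Python sketch with hypothetical household loads:

        import random

        def gossip_average(consumption, rounds=50, seed=1):
            """Pairwise (push-pull) gossip averaging: in each step a random pair
            of nodes replaces both local values with their mean, so all nodes
            converge to the group average without central coordination."""
            random.seed(seed)
            values = list(consumption)
            n = len(values)
            for _ in range(rounds * n):
                i, j = random.randrange(n), random.randrange(n)
                if i != j:
                    mean = (values[i] + values[j]) / 2
                    values[i] = values[j] = mean
            return values

        # Household loads in kW; each node ends up knowing the group average.
        loads = [1.2, 3.4, 0.8, 2.6, 5.0]
        print([round(v, 3) for v in gossip_average(loads)])  # all close to 2.6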

  12. Smart grid security

    Energy Technology Data Exchange (ETDEWEB)

    Cuellar, Jorge (ed.) [Siemens AG, Muenchen (Germany). Corporate Technology

    2013-11-01

    The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent observing the results from different projects, standards, committees, etc, in different countries. The different approaches and views of the papers in this collection also witness this fragmentation. This book contains the following papers: 1. IT Security Architecture Approaches for Smart Metering and Smart Grid. 2. Smart Grid Information Exchange - Securing the Smart Grid from the Ground. 3. A Tool Set for the Evaluation of Security and Reliability in Smart Grids. 4. A Holistic View of Security and Privacy Issues in Smart Grids. 5. Hardware Security for Device Authentication in the Smart Grid. 6. Maintaining Privacy in Data Rich Demand Response Applications. 7. Data Protection in a Cloud-Enabled Smart Grid. 8. Formal Analysis of a Privacy-Preserving Billing Protocol. 9. Privacy in Smart Metering Ecosystems. 10. Energy Rate at Home: Leveraging ZigBee to Enable Smart Grid in Residential Environment.

  13. Successes and Challenges of Emerging Economy Multinationals

    DEFF Research Database (Denmark)

    Successes and Challenges of Emerging Economy Multinationals investigates a broad variety of cases presenting clear evidence of fast successful internationalization of emerging economy multinationals originating not only from big economic players such as China, India and Russia but also from other...... successfully internationalizing emerging countries, namely South Africa and Poland. In terms of size, the firms vary from huge multinational firms such as Huawei, Tata and Gazprom, to really small high technology firms. The in-depth analysis conducted in this book leads to the indication of numerous novel...

  14. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  15. Performance Portability Strategies for Grid C++ Expression Templates

    Directory of Open Access Journals (Sweden)

    Boyle Peter A.

    2018-01-01

    Full Text Available One of the key requirements for the Lattice QCD Application Development as part of the US Exascale Computing Project is performance portability across multiple architectures. Using the Grid C++ expression template as a starting point, we report on the progress made with regards to the Grid GPU offloading strategies. We present both the successes and issues encountered in using CUDA, OpenACC and Just-In-Time compilation. Experimentation and performance on GPUs with a SU(3)×SU(3) streaming test will be reported. We will also report on the challenges of using current OpenMP 4.x for GPU offloading in the same code.

  16. Performance Portability Strategies for Grid C++ Expression Templates

    Science.gov (United States)

    Boyle, Peter A.; Clark, M. A.; DeTar, Carleton; Lin, Meifeng; Rana, Verinder; Vaquero Avilés-Casco, Alejandro

    2018-03-01

    One of the key requirements for the Lattice QCD Application Development as part of the US Exascale Computing Project is performance portability across multiple architectures. Using the Grid C++ expression template as a starting point, we report on the progress made with regards to the Grid GPU offloading strategies. We present both the successes and issues encountered in using CUDA, OpenACC and Just-In-Time compilation. Experimentation and performance on GPUs with a SU(3)×SU(3) streaming test will be reported. We will also report on the challenges of using current OpenMP 4.x for GPU offloading in the same code.

  17. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  18. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  19. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  20. Synchronization in single-phase grid-connected photovoltaic systems under grid faults

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2012-01-01

    The highly increasing penetration of single-phase photovoltaic (PV) systems pushes the grid requirements related to the integration of PV power systems to be updated. These upcoming regulations are expected to direct the grid-connected renewable generators to support the grid operation and stability...

  1. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  2. Smart grid technologies in local electric grids

    Science.gov (United States)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to the creation of favorable conditions for the integration of renewable sources of energy into electric grids, which were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences the conditions of their operation, and a conflict of interests arises. The possibility of optimal functioning of electric grids and renewable sources of energy is shown, where the complex optimality criterion is the balance reliability of electric energy in the local electric system together with minimum losses of electric energy in it. A multilevel automated system for power flow control in electric grids by means of changing the distributed generation of power is developed. Optimization of power flows is performed by local systems of automatic control of small hydropower stations and, if possible, solar power plants.
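
    A crude illustration of the loss-minimization part of such a criterion: for a radial feeder with hypothetical loads and one small hydropower plant, the Python sketch below brute-forces the generator output that minimizes I²R line losses (a single-phase approximation; the balance reliability part of the criterion is omitted).

        def feeder_losses(loads_kw, dg_node, dg_kw, r_ohm=0.2, v_kv=10.0):
            """I^2*R losses on a radial feeder fed from node 0.  loads_kw[i] is
            the demand at node i+1 along the feeder; a generator injects dg_kw
            at dg_node.  Flow on each segment is the net downstream demand."""
            total = 0.0
            for seg in range(1, len(loads_kw) + 1):      # segment feeding node `seg`
                downstream = sum(loads_kw[seg - 1:])     # kW served through the segment
                if dg_node >= seg:
                    downstream -= dg_kw                  # generation cancels upstream flow
                i_amp = downstream / v_kv                # crude single-phase current, A
                total += r_ohm * i_amp ** 2 / 1000.0     # segment losses, kW
            return total

        loads = [400.0, 300.0, 500.0]                    # kW demands at nodes 1..3
        best = min(range(0, 1201, 50), key=lambda p: feeder_losses(loads, 3, p))
        print(best, round(feeder_losses(loads, 3, best), 3))   # ~850 kW, ~0.5 kW losses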

  3. Small data in the era of big data

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2015-01-01

    Academic knowledge building has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will however, we argue, continue to be popular and valuable in the fut...

  4. Cost-Benefit Analysis of Smart Grids Implementation

    International Nuclear Information System (INIS)

    Tomsic, Z.; Pongrasic, M.

    2014-01-01

    The paper presents guidelines for conducting a cost-benefit analysis of Smart Grid projects connected to the implementation of advanced technologies in the electric power system. Restrictions of present electric power networks are also mentioned, along with solutions that are offered by advanced electric power networks. From an economic point of view, the main characteristic of an advanced electric power network is a big investment, whose benefits are seen only after some time, with the risk of being smaller than expected. Therefore it is important to make a comprehensive analysis of those projects, consisting of an economic and a qualitative analysis. This report relies on the methodology developed at EPRI, the American Electric Power Research Institute. The methodology is comprehensive and useful, but also simple and easy to understand. The steps of this methodology are explained, together with the main characteristics of methodologies that build on it: the methodology developed at the Joint Research Center and methodologies for analysing the implementation of smart meters in the electricity network. Costs, benefits and the categories in which they can be classified are also defined. As part of the qualitative analysis, the social aspect of Smart Grid projects is described. In defining costs, special attention has to be paid to projects integrating electricity from variable renewable energy sources into the power system because of the additional costs involved. This work summarizes the categories of such additional costs. At the end of the report, an overview is given of what has been done and what will be done in the European Union. (author).
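
    At the core of any such cost-benefit analysis is discounting the yearly streams of costs and benefits to a net present value. The Python sketch below, with purely hypothetical figures, shows only this discounting step, not the full EPRI methodology.

        def npv(cash_flows, rate):
            """Net present value of yearly net cash flows (benefits minus costs);
            year 0 carries the upfront investment as a negative flow."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

        # Hypothetical Smart Grid project, figures in MEUR: 10 MEUR investment,
        # then net benefits (reduced losses, deferred reinforcement) growing yearly.
        flows = [-10.0] + [1.2 + 0.1 * t for t in range(10)]
        print(round(npv(flows, 0.06), 2))  # positive NPV: benefits outweigh costs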

  5. Final Report: Smart Grid Ready PV Inverters with Utility Communication

    Energy Technology Data Exchange (ETDEWEB)

    Seal, Brian [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Huque, Aminul [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Rogers, Lindsey [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Key, Tom [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Riley, Cameron [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Li, Huijuan [Electric Power Research Inst. (EPRI), Knovville, TN (United States); York, Ben [Electric Power Research Inst. (EPRI), Knovville, TN (United States); Purcell, Chris [BPL Global, Inc., Canonsburg, PA (United States); Pacific, Oliver [Spirae, Inc., Fort Collins, CO (United States); Ropp, Michael [Northern Plains Power Technologies, Brookings, SD (United States); Tran, Teresa [DTE Energy, Detroit, MI (United States); Asgeirsson, Hawk [DTE Energy, Detroit, MI (United States); Woodard, Justin [National Grid, Warwick (United Kingdom); Steffel, Steve [Pepco Holdings, Inc., Washington, DC (United States)

    2016-03-30

    In 2011, EPRI began a four-year effort under the Department of Energy (DOE) SunShot Initiative Solar Energy Grid Integration Systems - Advanced Concepts (SEGIS-AC) to demonstrate smart grid ready inverters with utility communication. The objective of the project was to successfully implement and demonstrate effective utilization of inverters with grid support functionality to capture the full value of distributed photovoltaic (PV). The project leveraged ongoing investments and expanded PV inverter capabilities, to enable grid operators to better utilize these grid assets. Developing and implementing key elements of PV inverter grid support capabilities will increase the distribution system’s capacity for higher penetration levels of PV, while reducing the cost. The project team included EPRI, Yaskawa-Solectria Solar, Spirae, BPL Global, DTE Energy, National Grid, Pepco, EDD, NPPT and NREL. The project was divided into three phases: development, deployment, and demonstration. Within each phase, the key areas included: head-end communications for Distributed Energy Resources (DER) at the utility operations center; methods for coordinating DER with existing distribution equipment; back-end PV plant master controller; and inverters with smart-grid functionality. Four demonstration sites were chosen in three regions of the United States with different types of utility operating systems and implementations of utility-scale PV inverters. This report summarizes the project and findings from field demonstration at three utility sites.

  6. A Quantum Universe Before the Big Bang(s)?

    Science.gov (United States)

    Veneziano, Gabriele

    2017-08-01

    The predictions of general relativity have been verified by now in a variety of different situations, setting strong constraints on any alternative theory of gravity. Nonetheless, there are strong indications that general relativity has to be regarded as an approximation of a more complete theory. Indeed theorists have long been looking for ways to connect general relativity, which describes the cosmos and the infinitely large, to quantum physics, which has been remarkably successful in explaining the infinitely small world of elementary particles. These two worlds, however, come closer and closer to each other as we go back in time all the way up to the big bang. Actually, modern cosmology has completely changed the old big bang paradigm: we now have to talk about (at least) two (big?) bangs. While we know quite a lot about the one closer to us, at the end of inflation, we are much more ignorant about the one that may have preceded inflation and possibly marked the beginning of time. No one doubts that quantum mechanics plays an essential role in answering these questions: unfortunately a unified theory of gravity and quantum mechanics is still under construction. Finding such a synthesis and confirming it experimentally will no doubt be one of the biggest challenges of this century's physics.

  7. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  8. Greening the Grid - Advancing Solar, Wind, and Smart Grid Technologies (Spanish Version)

    Energy Technology Data Exchange (ETDEWEB)

    2016-04-01

    This is the Spanish version of 'Greening the Grid - Advancing Solar, Wind, and Smart Grid Technologies'. Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid.

  9. Micro grids toward the smart grid

    International Nuclear Information System (INIS)

    Guerrero, J.

    2011-01-01

    Worldwide electrical grids are expected to become smarter in the near future, and interest in Microgrids is likely to grow. A microgrid can be defined as a part of the grid with elements of prime energy movers, power electronics converters, distributed energy storage systems and local loads, that can operate autonomously but also interact with the main grid. The ability of intelligent Microgrids to operate in island mode or connected to the grid will thus be a key point in coping with new functionalities and the integration of renewable energy resources. The functionalities expected of these small grids are: black start operation, frequency and voltage stability, active and reactive power flow control, active power filter capabilities, and storage energy management. In this presentation, a review of the main concepts related to flexible Microgrids is given, with examples of real Microgrids. AC and DC Microgrids to integrate renewable and distributed energy resources will also be presented, as well as distributed energy storage systems, and standardization issues of these Microgrids. Finally, Microgrid hierarchical control will be analyzed at three different levels: i) a primary control based on the droop method, including a virtual output impedance loop; ii) a secondary control, which enables restoring any deviations produced by the primary control; and iii) a tertiary control to manage the power flow between the microgrid and the external electrical distribution system.
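
    Level i), the droop method, can be illustrated with the standard P-f and Q-V droop relations. The Python sketch below, with toy droop slopes, also shows how two parallel inverters settle at a common frequency and share load in inverse proportion to their slopes; the secondary level would then restore the resulting frequency and voltage deviations.

        def droop(p_kw, q_kvar, f0=50.0, v0=230.0, m=0.01, n=0.05):
            """Primary control, P-f / Q-V droop: f = f0 - m*P, V = V0 - n*Q.
            Secondary control would later remove the resulting deviations."""
            return f0 - m * p_kw, v0 - n * q_kvar

        def share(p_total, m1, m2):
            """Two parallel inverters settle at one common frequency, so
            m1*P1 = m2*P2: load splits inversely to the droop slopes."""
            p1 = p_total * m2 / (m1 + m2)
            return p1, p_total - p1

        print(droop(40.0, 10.0))        # operating point of one unit: (49.6, 229.5)
        print(share(90.0, 0.01, 0.02))  # stiffer unit takes more load: (60.0, 30.0)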

  10. Development and Evaluation of a Methodology for the Generation of Gridded Isotopic Datasets

    International Nuclear Information System (INIS)

    Argiriou, A.A.; Salamalikis, V; Lykoudis, S.P.

    2013-01-01

    The accurate knowledge of the spatial distribution of stable isotopes in precipitation is necessary for several applications. Since the number of rain sampling stations is small and unevenly distributed around the globe, the global distribution of stable isotopes can be calculated via the generation of gridded isotopic data sets. Several methods have been proposed for this purpose. In this work a methodology is proposed for the development of 10' x 10' gridded isotopic data from precipitation in the central and eastern Mediterranean. Statistical models are developed taking into account geographical and meteorological parameters as regressors. The residuals are interpolated onto the grid using ordinary kriging and thin plate splines. The result is added to the model grids to obtain the final isotopic gridded data sets. Models are evaluated using an independent data set. The overall performance of the procedure is satisfactory and the obtained gridded data reproduce the isotopic parameters successfully. (author)

  11. Development and Evaluation of a Methodology for the Generation of Gridded Isotopic Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Argiriou, A. A.; Salamalikis, V [University of Patras, Department of Physics, Laboratory of Atmospheric Physics, Patras (Greece); Lykoudis, S. P. [National Observatory of Athens, Institute of Environmental and Sustainable Development, Athens (Greece)

    2013-07-15

    The accurate knowledge of the spatial distribution of stable isotopes in precipitation is necessary for several applications. Since the number of rain sampling stations is small and unevenly distributed around the globe, the global distribution of stable isotopes can be calculated via the generation of gridded isotopic data sets. Several methods have been proposed for this purpose. In this work a methodology is proposed for the development of 10' x 10' gridded isotopic data from precipitation in the central and eastern Mediterranean. Statistical models are developed taking into account geographical and meteorological parameters as regressors. The residuals are interpolated onto the grid using ordinary kriging and thin plate splines. The result is added to the model grids to obtain the final isotopic gridded data sets. Models are evaluated using an independent data set. The overall performance of the procedure is satisfactory and the obtained gridded data reproduce the isotopic parameters successfully. (author)
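
    A minimal sketch of the two-step procedure described in the records above, assuming the PyKrige package and entirely hypothetical station data: a linear regression on geographic regressors, ordinary kriging of its residuals onto the target grid, and the sum of the two as the final gridded field.

        import numpy as np
        from pykrige.ok import OrdinaryKriging  # assumes the PyKrige package

        # Hypothetical stations: longitude, latitude, altitude and delta-18O.
        lon = np.array([20.1, 22.4, 25.0, 27.3, 23.8])
        lat = np.array([38.0, 39.5, 35.3, 37.1, 40.2])
        alt = np.array([15.0, 210.0, 5.0, 120.0, 450.0])
        d18o = np.array([-5.1, -6.8, -4.3, -5.6, -8.2])

        # 1) Regression model with geographic regressors (altitude, latitude).
        X = np.column_stack([np.ones_like(alt), alt, lat])
        beta, *_ = np.linalg.lstsq(X, d18o, rcond=None)
        residuals = d18o - X @ beta

        # 2) Ordinary kriging of the residuals onto the target grid.
        gridx = np.arange(20.0, 28.0, 1.0)
        gridy = np.arange(35.0, 41.0, 1.0)
        ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
        res_grid, _ = ok.execute("grid", gridx, gridy)

        # 3) Add the kriged residuals back to the model prediction on the grid
        #    (grid-cell altitude set to 0 here for brevity).
        lon2d, lat2d = np.meshgrid(gridx, gridy)
        model_grid = beta[0] + beta[1] * 0.0 + beta[2] * lat2d
        final_grid = model_grid + res_grid
        print(final_grid.shape)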

  12. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  13. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  14. Development and Testing of a Prototype Grid-Tied Photovoltaic Power System

    Science.gov (United States)

    Eichenberg, Dennis J.

    2009-01-01

    The NASA Glenn Research Center (GRC) has developed and tested a prototype 2 kW DC grid-tied photovoltaic (PV) power system at the Center. The PV system has generated in excess of 6700 kWh since operation commenced in July 2006. The PV system is providing power to the GRC grid for use by all. Operation of the prototype PV system has been completely trouble free. A grid-tied PV power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. The project transfers space technology to terrestrial use via nontraditional partners. GRC personnel glean valuable experience with PV power systems that are directly applicable to various space power systems, and provide valuable space program test data. PV power systems help to reduce harmful emissions and reduce the Nation s dependence on fossil fuels. Power generated by the PV system reduces the GRC utility demand, and the surplus power aids the community. Present global energy concerns reinforce the need for the development of alternative energy systems. Modern PV panels are readily available, reliable, efficient, and economical with a life expectancy of at least 25 years. Modern electronics has been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical with a life expectancy of at least 25 years. Based upon the success of the prototype PV system, additional PV power system expansion at GRC is under consideration. The prototype grid-tied PV power system was successfully designed and developed which served to validate the basic principles described, and the theoretical work that was performed. The report concludes that grid-tied photovoltaic power systems are reliable, maintenance free, long life power systems, and are of significant value to NASA and the community.

  15. Experiences with the GLUE information schema in the LCG/EGEE production grid

    International Nuclear Information System (INIS)

    Burke, S; Andreozzi, S; Field, L

    2008-01-01

    A common information schema for the description of Grid resources and services is an essential requirement for interoperating Grid infrastructures, and its implementation interacts with every Grid component. In this context, the GLUE information schema was originally defined in 2002 as a joint project between the European DataGrid and DataTAG projects and the US iVDGL. The schema has major components to describe Computing and Storage Elements, and also generic Service and Site information. It has been used extensively in the LCG/EGEE Grid, for job submission, data management, service discovery and monitoring. In this paper we present the experience gained over the last five years, highlighting both successes and problems. In particular, we consider the importance of having a clear definition of schema attributes; the construction of standard information providers and difficulties encountered in mapping an abstract schema to diverse real systems; the configuration of publication in a way which suits system managers and the varying characteristics of Grid sites; the validation of published information; the ways in which information can be used (and misused) by Grid services and users; and issues related to managing schema upgrades in a large distributed system

  16. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  17. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  18. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  19. Smart grid security

    CERN Document Server

    Goel, Sanjay; Papakonstantinou, Vagelis; Kloza, Dariusz

    2015-01-01

    This book on smart grid security is meant for a broad audience from managers to technical experts. It highlights security challenges that are faced in the smart grid as we widely deploy it across the landscape. It starts with a brief overview of the smart grid and then discusses some of the reported attacks on the grid. It covers network threats, cyber physical threats, smart metering threats, as well as privacy issues in the smart grid. Along with the threats the book discusses the means to improve smart grid security and the standards that are emerging in the field. The second part of the b

  20. The Grid2003 Production Grid Principles and Practice

    CERN Document Server

    Foster, I; Gose, S; Maltsev, N; May, E; Rodríguez, A; Sulakhe, D; Vaniachine, A; Shank, J; Youssef, S; Adams, D; Baker, R; Deng, W; Smith, J; Yu, D; Legrand, I; Singh, S; Steenberg, C; Xia, Y; Afaq, A; Berman, E; Annis, J; Bauerdick, L A T; Ernst, M; Fisk, I; Giacchetti, L; Graham, G; Heavey, A; Kaiser, J; Kuropatkin, N; Pordes, R; Sekhri, V; Weigand, J; Wu, Y; Baker, K; Sorrillo, L; Huth, J; Allen, M; Grundhoefer, L; Hicks, J; Luehring, F C; Peck, S; Quick, R; Simms, S; Fekete, G; Van den Berg, J; Cho, K; Kwon, K; Son, D; Park, H; Canon, S; Jackson, K; Konerding, D E; Lee, J; Olson, D; Sakrejda, I; Tierney, B; Green, M; Miller, R; Letts, J; Martin, T; Bury, D; Dumitrescu, C; Engh, D; Gardner, R; Mambelli, M; Smirnov, Y; Voeckler, J; Wilde, M; Zhao, Y; Zhao, X; Avery, P; Cavanaugh, R J; Kim, B; Prescott, C; Rodríguez, J; Zahn, A; McKee, S; Jordan, C; Prewett, J; Thomas, T; Severini, H; Clifford, B; Deelman, E; Flon, L; Kesselman, C; Mehta, G; Olomu, N; Vahi, K; De, K; McGuigan, P; Sosebee, M; Bradley, D; Couvares, P; De Smet, A; Kireyev, C; Paulson, E; Roy, A; Koranda, S; Moe, B; Brown, B; Sheldon, P

    2004-01-01

    The Grid2003 Project has deployed a multi-virtual organization, application-driven grid laboratory ("Grid3") that has sustained for several months the production-level services required by physics experiments of the Large Hadron Collider at CERN (ATLAS and CMS), the Sloan Digital Sky Survey project, the gravitational wave search experiment LIGO, the BTeV experiment at Fermilab, as well as applications in molecular structure analysis and genome analysis, and computer science research projects in such areas as job and data scheduling. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, workloads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. We describe the principles that have guided the development of this unique infrastructure and the practical experiences that have resulted from its creation and use. We discuss application requirements for grid services deployment and configur...

  1. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  2. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, F.; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    The present report is a part of the research project ''Grid fault and design basis for wind turbine'' supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines which participate actively in the stabilisation of power systems. Practical experience shows that there is a need... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview of the frequency of grid faults and the grid connection requirements in different relevant countries is given in this report. The most relevant study cases for the quantification of the loads...

  3. Block Fusion on Dynamically Adaptive Spacetree Grids for Shallow Water Waves

    KAUST Repository

    Weinzierl, Tobias

    2014-09-01

    Spacetrees are a popular formalism to describe dynamically adaptive Cartesian grids. Even though they directly yield a mesh, it is often computationally reasonable to embed regular Cartesian blocks into their leaves. This promotes stencils working on homogeneous data chunks. The choice of a proper block size is sensitive. While large block sizes foster loop parallelism and vectorisation, they restrict the adaptivity's granularity and hence increase the memory footprint and lower the numerical accuracy per byte. In the present paper, we therefore use a multiscale spacetree-block coupling admitting blocks on all spacetree nodes. We propose to find sets of blocks on the finest scale throughout the simulation and to replace them by fused big blocks. Such a replacement strategy can pick up hardware characteristics, i.e. which block size yields the highest throughput, while the dynamic adaptivity of the fine grid mesh is not constrained - applications can work with fine granular blocks. We study the fusion with a state-of-the-art shallow water solver on an Intel Sandy Bridge and a Xeon Phi processor, where we anticipate their reaction to selected block optimisation and vectorisation.
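
    A one-dimensional analogue of the replacement strategy may help: given the indices of fine-scale unit blocks, runs of adjacent blocks are fused into bigger chunks up to a hardware-preferred size, while isolated blocks stay fine-granular. The Python sketch below illustrates only this bookkeeping, not the spacetree data structure itself.

        def fuse_blocks(blocks, preferred=4):
            """Replace runs of adjacent unit blocks with fused big blocks of up
            to `preferred` units, leaving isolated fine blocks alone.  `blocks`
            is a sorted list of integer block indices on the finest level;
            returns (start, length) descriptors of the fused blocks."""
            fused, run = [], []
            for idx in blocks:
                if run and idx == run[-1] + 1 and len(run) < preferred:
                    run.append(idx)
                else:
                    if run:
                        fused.append((run[0], len(run)))
                    run = [idx]
            if run:
                fused.append((run[0], len(run)))
            return fused

        # Blocks 0-5 are adjacent, 9 is isolated: two fused chunks plus a leftover.
        print(fuse_blocks([0, 1, 2, 3, 4, 5, 9]))  # [(0, 4), (4, 2), (9, 1)]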

  4. Topologically protected loop flows in high voltage AC power grids

    International Nuclear Information System (INIS)

    Coletta, T; Delabays, R; Jacquod, Ph; Adagideli, I

    2016-01-01

    Geographical features such as mountain ranges or big lakes and inland seas often result in large closed loops in high voltage AC power grids. Sizable circulating power flows have been recorded around such loops, which take up transmission line capacity and dissipate but do not deliver electric power. Power flows in high voltage AC transmission grids are dominantly governed by voltage angle differences between connected buses, much in the same way as Josephson currents depend on phase differences between tunnel-coupled superconductors. From this previously overlooked similarity we argue here that circulating power flows in AC power grids are analogous to supercurrents flowing in superconducting rings and in rings of Josephson junctions. We investigate how circulating power flows can be created and how they behave in the presence of ohmic dissipation. We show how changing operating conditions may generate them, how significantly more power is ohmically dissipated in their presence and how they are topologically protected, even in the presence of dissipation, so that they persist when operating conditions are returned to their original values. We identify three mechanisms for creating circulating power flows, (i) by loss of stability of the equilibrium state carrying no circulating loop flow, (ii) by tripping of a line traversing a large loop in the network and (iii) by reclosing a loop that tripped or was open earlier. Because voltages are uniquely defined, circulating power flows can take on only discrete values, much in the same way as circulation around vortices is quantized in superfluids. (paper)
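
    The quantization of circulating flows can be demonstrated in a few lines. Assuming the lossless approximation in which each line carries P = b*sin(theta_i - theta_j), the Python sketch below compares a flat angle profile on a 12-bus ring with one carrying a full 2*pi winding: both produce identical (zero) injections at every bus, yet the wound state circulates a persistent loop flow.

        import math

        def ring_flows(thetas, b=1.0):
            """Line flows P(i,i+1) = b*sin(theta_i - theta_{i+1}) on a closed ring."""
            n = len(thetas)
            return [b * math.sin(thetas[i] - thetas[(i + 1) % n]) for i in range(n)]

        def injections(flows):
            """Net power injected at each bus: outgoing minus incoming line flow."""
            n = len(flows)
            return [round(flows[i] - flows[i - 1], 9) + 0 for i in range(n)]

        n = 12
        flat = [0.0] * n                                   # solution without loop flow
        wound = [2 * math.pi * k / n for k in range(n)]    # one full 2*pi winding

        print(injections(ring_flows(flat)))   # all zeros
        print(injections(ring_flows(wound)))  # still all zeros at every bus ...
        print(ring_flows(wound)[0])           # ... yet each line carries -sin(pi/6) = -0.5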

  5. ENHANCED HYBRID PSO – ACO ALGORITHM FOR GRID SCHEDULING

    Directory of Open Access Journals (Sweden)

    P. Mathiyalagan

    2010-07-01

    Full Text Available Grid computing is a high performance computing environment for solving large-scale computational demands. Grid computing involves resource management, task scheduling, security problems, information management and so on. Task scheduling is a fundamental issue in achieving high performance in grid computing systems. A computational grid is typically heterogeneous in the sense that it combines clusters of varying sizes, and different clusters typically contain processing elements with different levels of performance. Here, heuristic approaches based on particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms are adopted for solving task scheduling problems in the grid environment. Particle swarm optimization is one of the latest nature-inspired evolutionary optimization techniques. It has a good ability of global searching and has been successfully applied to many areas such as neural network training. Due to the linear decrease of the inertia weight in PSO, the convergence rate becomes faster, which leads to a minimal makespan when used for scheduling. To make the convergence rate faster, the PSO algorithm is improved by modifying the inertia parameter such that it produces better performance and gives an optimized result. The ACO algorithm is improved by modifying the pheromone updating rule. The ACO algorithm is hybridized with the PSO algorithm for more efficient results and better convergence.
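
    The paper's own code is not available here, but the inertia-weight idea is easy to illustrate: a minimal PSO for task-to-machine assignment in which w decays linearly from 0.9 to 0.4 over the iterations (assumed values; decoding continuous positions by rounding is also a simplification, not the paper's scheme).

        import random

        def pso_schedule(task_times, n_machines, n_particles=20, iters=100, seed=7):
            """Minimise makespan by PSO over task->machine assignments.
            The inertia weight w decreases linearly (0.9 -> 0.4), the
            modification credited with faster convergence."""
            random.seed(seed)
            n = len(task_times)

            def makespan(pos):
                loads = [0.0] * n_machines
                for t, x in enumerate(pos):
                    loads[min(n_machines - 1, max(0, int(round(x))))] += task_times[t]
                return max(loads)

            swarm = [[random.uniform(0, n_machines - 1) for _ in range(n)]
                     for _ in range(n_particles)]
            vel = [[0.0] * n for _ in range(n_particles)]
            pbest = [p[:] for p in swarm]
            gbest = min(swarm, key=makespan)[:]
            for it in range(iters):
                w = 0.9 - (0.9 - 0.4) * it / iters      # linearly decreasing inertia
                for k in range(n_particles):
                    for d in range(n):
                        vel[k][d] = (w * vel[k][d]
                                     + 2.0 * random.random() * (pbest[k][d] - swarm[k][d])
                                     + 2.0 * random.random() * (gbest[d] - swarm[k][d]))
                        swarm[k][d] += vel[k][d]
                    if makespan(swarm[k]) < makespan(pbest[k]):
                        pbest[k] = swarm[k][:]
                        if makespan(pbest[k]) < makespan(gbest):
                            gbest = pbest[k][:]
            return gbest, makespan(gbest)

        tasks = [4, 8, 3, 7, 5, 6, 2, 9]             # task run times
        print(pso_schedule(tasks, n_machines=3)[1])  # near-optimal makespan (~15)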

  6. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  7. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  8. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    Directory of Open Access Journals (Sweden)

    Mohd Usama

    2017-11-01

    Full Text Available At present, big data is very popular, because it has proved to be highly successful in many fields such as social media, E-commerce transactions, etc. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte or larger-sized datasets having different structures with high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open source framework that is used to process large amounts of data in an inexpensive and efficient way, and job scheduling is a key factor for achieving high performance in big data processing. This paper gives an overview of big data and highlights the problems and challenges in big data. It then highlights the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and the various parameters that affect the performance of job scheduling algorithms in big data, such as Job Tracker, Task Tracker, Name Node, Data Node, etc. The primary purpose of this paper is to present a comparative study of job scheduling algorithms along with their experimental results in the Hadoop environment. In addition, this paper describes the features, advantages, and drawbacks of various Hadoop job schedulers such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study among these schedulers.
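
    The FIFO-versus-Fair contrast the paper examines can be illustrated with a toy single-slot simulation (Python, hypothetical job lengths): under FIFO a short job is stuck behind a long one, while fair (round-robin) sharing lets it finish early at a small cost to the long job. This is only a caricature of the real Hadoop schedulers.

        from collections import deque

        def fifo(jobs):
            """FIFO: the cluster works on jobs strictly in submission order."""
            t, finish = 0, {}
            for name, length in jobs:
                t += length
                finish[name] = t
            return finish

        def fair(jobs):
            """Fair (round-robin) sharing: each pending job gets one time unit
            in turn, so short jobs are not stuck behind a long-running one."""
            queue = deque(jobs)
            t, finish = 0, {}
            while queue:
                name, remaining = queue.popleft()
                t += 1
                if remaining > 1:
                    queue.append((name, remaining - 1))
                else:
                    finish[name] = t
            return finish

        jobs = [("long_etl", 10), ("short_query", 2)]
        print(fifo(jobs))   # short job waits: finishes at t=12
        print(fair(jobs))   # short job overtakes: finishes at t=4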

  9. A Stationary Reference Frame Grid Synchronization System for Three-Phase Grid-Connected Power Converters Under Adverse Grid Conditions

    DEFF Research Database (Denmark)

    Rodríguez, P.; Luna, A.; Muñoz-Aguilar, R. S.

    2012-01-01

    Grid synchronization algorithms are of great importance in the control of grid-connected power converters, as fast and accurate detection of the grid voltage parameters is crucial in order to implement stable control strategies under generic grid conditions. This paper presents a new grid synchronization method for three-phase three-wire networks, namely the dual second-order generalized integrator (SOGI) frequency-locked loop. The method is based on two adaptive filters, implemented by using a SOGI on the stationary αβ reference frame, and it is able to perform an excellent estimation...
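
    The core building block of the method is the SOGI used as a quadrature signal generator. A forward-Euler discretisation of its two coupled integrators is sketched below in Python (the gain k = 1.41, roughly the square root of 2, is a typical textbook choice; the full method additionally wraps two such filters in a frequency-locked loop and a positive-sequence calculation, which are omitted here).

        import math

        def sogi_qsg(v_samples, omega, fs, k=1.41):
            """SOGI as a quadrature signal generator: from input v it produces
            a filtered in-phase signal v' and a 90-degree-delayed version qv'.

                dv'/dt  = k*omega*(v - v') - omega*qv'
                dqv'/dt = omega*v'
            """
            ts = 1.0 / fs
            vp, qvp, out = 0.0, 0.0, []
            for v in v_samples:
                dvp = k * omega * (v - vp) - omega * qvp
                dqvp = omega * vp
                vp += ts * dvp
                qvp += ts * dqvp
                out.append((vp, qvp))
            return out

        # 50 Hz unit sine sampled at 10 kHz: once settled, qv' lags v' by 90 degrees.
        fs, f = 10000.0, 50.0
        sig = [math.sin(2 * math.pi * f * n / fs) for n in range(2000)]
        res = sogi_qsg(sig, 2 * math.pi * f, fs)
        print(round(res[-1][0], 3), round(res[-1][1], 3))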

  10. Big bang nucleosynthesis constraints on bulk neutrinos

    International Nuclear Information System (INIS)

    Goh, H.S.; Mohapatra, R.N.

    2002-01-01

    We examine the constraints imposed by the requirement of successful nucleosynthesis on models with one large extra hidden space dimension and a single bulk neutrino residing in this dimension. We solve the Boltzmann kinetic equation for the thermal distribution of the Kaluza-Klein modes and evaluate their contribution to the energy density at the big bang nucleosynthesis epoch to constrain the size of the extra dimension R⁻¹ ≡ μ and the parameter sin²2θ which characterizes the mixing between the active and bulk neutrinos

  11. The influence of generation mix on the wind integrating capability of North China power grids: A modeling interpretation and potential solutions

    International Nuclear Information System (INIS)

    Yu Dayang; Zhang Bo; Liang Jun; Han Xueshan

    2011-01-01

    The large-scale wind power development in China has reached a bottleneck of grid integrating capability. As a result, excess wind electricity has to be rejected in the nighttime low-demand hours, when the wind power is ramping up. To compensate for the fluctuation of wind power, new coal-fired power plants are being constructed along with the big wind projects in the North China grids. This study analyzes why adding coal-fired generation cannot remove the bottleneck of wind integration by modeling the operating problem of wind integration. The peak-load adjusting factor of the regional grid is defined. Building more coal-fired power plants will not increase the adjusting factor of the current grid. Although it does help to increase the total integrated wind power in the short term, it will add difficulties to the long-term wind integration. Alternatively, coordinated resource utilization is suggested, with a discussion of both effective pumped hydro storage and potential electric vehicle storage. - Highlights: → Adjusting factors indicate the grid's wind integrating capability. → Building coal-fired generation restrains long-term wind integration. → HVDC and nuclear projects should be planned in an integrated way with wind. → Pumped storage and electric vehicles provide potential solutions.

  12. Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach

    Directory of Open Access Journals (Sweden)

    Daniel Peralta

    2015-01-01

    Full Text Available Nowadays, many disciplines have to deal with big datasets that additionally involve a high number of features. Feature selection methods aim at eliminating noisy, redundant, or irrelevant features that may deteriorate the classification performance. However, traditional methods lack the scalability to cope with datasets of millions of instances and extract successful results within a bounded time. This paper presents a feature selection algorithm based on evolutionary computation that uses the MapReduce paradigm to obtain subsets of features from big datasets. The algorithm decomposes the original dataset into blocks of instances to learn from them in the map phase; then, the reduce phase merges the obtained partial results into a final vector of feature weights, which allows a flexible application of the feature selection procedure using a threshold to determine the selected subset of features. The feature selection method is evaluated using three well-known classifiers (SVM, Logistic Regression, and Naive Bayes) implemented within the Spark framework to address big data problems. In the experiments, datasets of up to 67 million instances and up to 2,000 attributes have been managed, showing that this is a suitable framework to perform evolutionary feature selection, improving both the classification accuracy and its runtime when dealing with big data problems.
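
    The map/merge/threshold structure described above can be sketched in a few lines of plain Python. A simple correlation score stands in for the paper's per-block evolutionary search, purely to keep the example short; dataset sizes and the 0.15 threshold are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(10_000, 50))
        y = (X[:, 3] + X[:, 17] + 0.5 * rng.normal(size=10_000) > 0).astype(float)

        def map_phase(bx, by):
            # Per-block feature weights (stand-in for the evolutionary search).
            return np.abs([np.corrcoef(bx[:, j], by)[0, 1] for j in range(bx.shape[1])])

        blocks = np.array_split(np.arange(len(y)), 8)        # 8 "map" tasks
        partial = [map_phase(X[idx], y[idx]) for idx in blocks]

        weights = np.mean(partial, axis=0)                   # "reduce": merge weights
        selected = np.flatnonzero(weights > 0.15)            # flexible threshold
        print("selected features:", selected)                # expect features 3 and 17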

  13. GridICE: monitoring the user/application activities on the grid

    International Nuclear Information System (INIS)

    Aiftimiei, C; Pra, S D; Andreozzi, S; Fattibene, E; Misurelli, G; Cuscela, G; Donvito, G; Dudhalkar, V; Maggi, G; Pierro, A; Fantinel, S

    2008-01-01

    The monitoring of grid user activity and application performance is extremely useful for planning resource usage strategies, particularly in cases of complex applications. Large VOs, such as the LHC VOs, do their monitoring by means of dashboards. Other VOs or communities, for example the BioinfoGRID one, are characterized by a greater diversification of application types, so the effort to provide a dashboard-like monitor is particularly heavy. The main theme of this paper is to show the improvements introduced in GridICE, a web tool built to provide almost complete grid monitoring. These recent improvements allow GridICE to provide new reports on resource usage with details of the VOMS groups, roles and users. By accessing the GridICE web pages, a grid user can get all the information needed to keep track of their activity on the grid. In the same way, the activity of a VOMS group can be distinguished from the activity of the entire VO. In this paper we briefly discuss the features and advantages of this approach and, after discussing the requirements, we describe the software solutions, middleware and prerequisites to manage and retrieve the user's credentials.

  14. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  15. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, Florin; Hansen, A.D.; Sørensen, P.

    The present report is a part of the research project "Grid fault and design basis for wind turbine" supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines which participate actively in the stabilisation of power systems. These requirements pose challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview of the frequency of grid faults and the grid connection requirements in different relevant countries is given in this report, together with the most relevant study cases for the quantification of the loads. Practical experience shows that there is a need...

  16. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. The existing definitions of the term "big data" are analyzed, and the elements of a generalized formal model of big data are proposed and described. The peculiarities of applying the proposed model components are analyzed, and the fundamental differences between Big Data technology and business analytics are described. Big Data is supported by the distributed file system Google File System ...

  17. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating system files, R-trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
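
    For readers who want to consume such files programmatically, the pyBigWig package from the same ecosystem exposes the indexed access pattern described above. The file path below is a placeholder; summary statistics are served from the file's zoom levels, so only the blocks covering the requested window are read.

        import pyBigWig

        bw = pyBigWig.open("example.bw")              # placeholder local/remote path
        print(bw.chroms())                            # {chromosome: length}
        print(bw.stats("chr1", 0, 1_000_000,          # mean signal in 10 bins,
                       type="mean", nBins=10))        # served from zoom levels
        print(bw.values("chr1", 0, 100)[:5])          # base-resolution values
        bw.close()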

  18. A unified grid current control for grid-interactive DG inverters in microgrids

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    This paper proposes a unified grid current control for grid-interactive distributed generation inverters. In the approach, the grid-side current, instead of the inverter-side current, is controlled as an inner loop, while the filter capacitor voltage is indirectly regulated through a virtual admittance in the outer loop. It therefore provides several superior features over traditional control schemes: 1) high-quality grid current in the grid-connected mode, 2) inherent derivative-less virtual output impedance control, and 3) unified active damping for both grid-connected and islanded operations. Root locus analyses in the discrete z-domain are performed for elaborating the controller design. Simulations and experimental results demonstrate the performance of the proposed approach.

  19. Grid-Voltage-Feedforward Active Damping for Grid-Connected Inverter with LCL Filter

    DEFF Research Database (Denmark)

    Lu, Minghui; Wang, Xiongfei; Blaabjerg, Frede

    2016-01-01

    For grid-connected voltage source inverters, a feedforward scheme for the grid voltage is commonly adopted to mitigate the current distortion caused by grid background voltage harmonics. This paper investigates grid-voltage-feedforward active damping for grid-connected inverters with LCL filters. It reveals that proportional feedforward control can not only mitigate grid disturbance, but also offer damping effects on the LCL filter resonance. Digital delays are intrinsic to digitally controlled inverters; with these delays, the feedforward control can be equivalent...

  20. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers, including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  1. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    International Nuclear Information System (INIS)

    Chen, Meili; Cobb, John W.; Hagen, Mark E.; Miller, Stephen D.; Lynch, Vickie E.

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national-scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. By parallelizing traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. Upon successful SNS commissioning, by the end of 2007 three out of five commissioned instruments in the SNS target station will be available for initial users. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which poses further requirements, such as flexibility and high runtime efficiency, on fast instrument simulation. PSoNI has been redesigned to meet these new challenges and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design, and the improved software structure. Further, it describes the realized new features as seen from MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion regarding future work, which targets fast simulation for automated experiment adjustment and comparison of models to data in analysis, is also presented.
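
    Parallelizing a Monte Carlo instrument simulation maps naturally onto independent ray batches per MPI rank followed by a reduction of the detector histograms. The sketch below is a hedged stand-in for the MPI-parallelized McStas runs mentioned here: dummy Gaussian "detector hits" replace the neutron physics, and it would be launched with something like mpiexec -n 4 python sim.py.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rays_per_rank = 1_000_000
        rng = np.random.default_rng(seed=rank)            # independent stream per rank
        hits = rng.normal(0.0, 1.0, size=rays_per_rank)   # dummy "detector hits"
        local_hist, _ = np.histogram(hits, bins=64, range=(-4.0, 4.0))

        total_hist = np.zeros_like(local_hist)
        comm.Reduce(local_hist, total_hist, op=MPI.SUM, root=0)
        if rank == 0:
            print("total rays binned:", total_hist.sum(), "across", size, "ranks")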

  2. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  3. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  4. Grid Data Management and Customer Demands at MeteoSwiss

    Science.gov (United States)

    Rigo, G.; Lukasczyk, Ch.

    2010-09-01

    near-real-time datasets to build up trust in the product in different applications. The implementation of a new method called RSOI for daily production allowed the daily precipitation field to be brought up to the expectations of customers. The main uses of the grids were near-real-time and past-event analysis in areas scarcely covered with stations, and inputs for forecast tools and models. Critical success factors of the product were speed of delivery and, at the same time, accuracy, temporal and spatial resolution, and configuration (coordinate system, projection). To date, grids of archived precipitation data since 1961 and daily/monthly precipitation grid sets with a 4 h delivery lag for Switzerland or subareas are available.

  5. How to build a high-performance compute cluster for the Grid

    CERN Document Server

    Reinefeld, A

    2001-01-01

    The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing and data-storage resources at geographically distributed sites. Currently, much effort is spent on the design of Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high- performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data and the seamless integration of the cluster into the Grid environment. (11 refs).

  6. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  7. Psychosocial Development and the Big Five Personality Traits among Chinese University Students

    Science.gov (United States)

    Zhang, Li-fang

    2013-01-01

    This study explores how psychosocial development and personality traits are related. In particular, the study investigates the predictive power of the successful resolution of the Eriksonian psychosocial crises for the Big Five personality traits beyond age and gender. Four hundred university students in mainland China responded to the Measures of…

  8. Urban micro-grids

    International Nuclear Information System (INIS)

    Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel

    2017-02-01

    ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy-dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production through clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages, for example. However: the islanding capability is an inherent feature of the micro-grid concept that leads to a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production, and in this case a smart grid with local energy production and no islanding capability can be customized to meet relevant sustainability and cost-saving goals at lower cost; for industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand; and micro-grids face strong regulatory challenges that must be overcome for further development. Whether islanding is or is not implemented in the system, end-user demand for greener, more local, cheaper and more reliable energy, as well as additional services to the grid, are strong drivers for local production and consumption. In some specific cases

  9. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  10. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.

  11. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  12. Job execution in virtualized runtime environments in grid

    International Nuclear Information System (INIS)

    Shamardin, Lev; Demichev, Andrey; Gorbunov, Ilya; Ilyin, Slava; Kryukov, Alexander

    2010-01-01

    Grid systems are used for calculations and data processing in various applied areas such as biomedicine, nanotechnology and materials science, cosmophysics and high energy physics, as well as in a number of industrial and commercial areas. The traditional method of executing jobs in a grid is to run them directly on the cluster nodes. This restricts the choice of operational environment to the operating system of the node, and it neither allows resource-sharing policies or job isolation to be enforced nor guarantees a minimal level of available system resources. We propose a new approach to running jobs on cluster nodes in which each grid job runs in its own virtual environment. This makes it possible to use different operating systems for different jobs on the same nodes in a cluster, provides better isolation between running jobs, and allows resource-sharing policies to be enforced. The implementation of the proposed approach was made within the framework of the gLite middleware of the EGEE/WLCG project and was successfully tested at SINP MSU. The implementation is transparent to the grid user and allows binaries compiled for various operating systems to be submitted using exactly the same gLite interface. Virtual machine images with the standard gLite worker node software and a sample MS Windows execution environment were created.

  13. Big data technologies in e-learning

    Directory of Open Access Journals (Sweden)

    Gyulara A. Mamedova

    2017-01-01

    Full Text Available Recently, e-learning around the world has been developing rapidly, and the main problem is to provide students with quality educational information on time. This task cannot be solved without analyzing the large flow of information entering the e-learning information environment from participants in the educational process: students, lecturers, administration, etc. This environment contains a large number of different types of data, both structured and unstructured, whose processing is difficult to implement by traditional statistical methods. The aim of the study is to show that the development and implementation of successful e-learning systems requires new technologies that allow storing and processing large data streams. Storing big data requires a large amount of disk space; it is shown that clustered NAS (Network Area Storage) technology solves this problem efficiently, allowing the information of educational institutions to be stored on NAS servers and shared over the Internet. To process and personalize big data in the e-learning environment, the technologies MapReduce, Hadoop, NoSQL and others are proposed, and the article gives examples of the use of these technologies in the cloud environment. These technologies allow e-learning to achieve flexibility, scalability, availability, quality of service, security, confidentiality and ease of use of educational information. Another important problem of e-learning is the identification of new, sometimes hidden, interconnections in big data, new knowledge (data mining), which can be used to improve the educational process and its management. To classify electronic educational resources, identify groups of students with similar psychological, behavioral and intellectual characteristics, and develop individualized educational programs, methods of big data analysis are proposed. The article shows that at

  14. Smart Grid: Network simulator for smart grid test-bed

    International Nuclear Information System (INIS)

    Lai, L C; Ong, H S; Che, Y X; Do, N Q; Ong, X J

    2013-01-01

    As Smart Grids become more popular, a smaller-scale smart grid test-bed has been set up at UNITEN to investigate the performance and to find future enhancements of smart grids in Malaysia. The fundamental requirement in this project is to design a network with low delay, no packet drops, and a high data rate. Each type of traffic has its own characteristics and is suitable for different types of network and requirements; however, the nature of the traffic in a smart grid is not yet well understood. This paper presents a comparison between different types of traffic to find the most suitable traffic for optimal network performance.

  15. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  16. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    Science.gov (United States)

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.
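
    Since diagnosis is framed here as a classification task, a minimal scikit-learn sketch of the two modes the review discusses may help; the data are a synthetic stand-in for nanotech-derived measurements, not a real clinical dataset.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.0, 1.0, (200, 30)),    # "healthy" profiles
                       rng.normal(0.8, 1.0, (200, 30))])   # "disease" profiles
        y = np.array([0] * 200 + [1] * 200)

        # Unsupervised: do natural clusters emerge without labels?
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        agreement = max((clusters == y).mean(), (clusters != y).mean())
        print(f"cluster/label agreement: {agreement:.2f}")

        # Supervised: diagnosis as classification with cross-validation.
        acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
        print(f"cross-validated accuracy: {acc.mean():.2f}")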

  17. Grid Voltage Modulated Control of Grid-Connected Voltage Source Inverters under Unbalanced Grid Conditions

    DEFF Research Database (Denmark)

    Li, Mingshen; Gui, Yonghao; Quintero, Juan Carlos Vasquez

    2017-01-01

    In this paper, an improved grid voltage modulated control (GVM) with power compensation is proposed for grid-connected voltage source inverters when the grid voltage is unbalanced. The objective of the proposed control is to remove the power ripple and to improve current quality. Three power compensation objectives are selected to eliminate the negative sequence components of the currents. The modified GVM method is designed to obtain two separate second-order systems for not only a fast convergence rate of the instantaneous active and reactive powers but also robust performance. In addition, this method...

  18. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects... The paper shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact.

  19. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big Data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. The information value is defined not only by extracting value from huge data sets as fast and optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner using Big Data analytics. At this point, the main challenge for businesses that use Big Data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big Data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  20. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone "stars", or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  1. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  2. A Big Data Guide to Understanding Climate Change: The Case for Theory-Guided Data Science.

    Science.gov (United States)

    Faghmous, James H; Kumar, Vipin

    2014-09-01

    Global climate change and its impact on human life has become one of our era's greatest challenges. Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data. This is a stark contrast to other fields such as advertising or electronic commerce, where big data has been a great success story. This discrepancy stems from the complex nature of climate data as well as the scientific questions climate science brings forth. This article introduces a data science audience to the challenges and opportunities to mine large climate datasets, with an emphasis on the nuanced difference between mining climate data and traditional big data approaches. We focus on data, methods, and application challenges that must be addressed in order for big data to fulfill their promise with regard to climate science applications. More importantly, we highlight research showing that solely relying on traditional big data techniques results in dubious findings, and we instead propose a theory-guided data science paradigm that uses scientific theory to constrain both the big data techniques as well as the results-interpretation process to extract accurate insight from large climate data.

  3. The performance model of dynamic virtual organization (VO) formations within grid computing context

    International Nuclear Information System (INIS)

    Han Liangxiu

    2009-01-01

    Grid computing aims to enable 'resource sharing and coordinated problem solving in dynamic, multi-institutional virtual organizations (VOs)'. Within the grid computing context, a successful dynamic VO formation means that a number of individuals and institutions associated with certain resources join together and form a new VO in order to effectively execute tasks within given time steps. To date, while the concept of VOs has been accepted, little research has been done on the impact of effective dynamic virtual organization formations. In this paper, we develop a performance model of dynamic VO formation and analyze the effect of different complex organizational structures and their various statistical parameter properties on dynamic VO formations from three aspects: (1) the probability of a successful VO formation under different organizational structures and changes in statistical parameters, e.g. average degree; (2) the effect of task complexity on dynamic VO formations; (3) the impact of network scale on dynamic VO formations. The experimental results show that the proposed model can be used to understand the dynamic VO formation performance of the simulated organizations. The work provides a good path towards understanding how to effectively schedule and utilize resources based on the complex grid network and therefore improve the overall performance within a grid environment.
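
    Aspect (1) can be probed with a hedged Monte Carlo sketch: estimate the probability that a randomly drawn team of resource holders lies within a single connected component, for two organizational structures and varying average degree. The formation rule and all parameters are illustrative, not the paper's exact model.

        import random
        import networkx as nx

        def p_formation(G, vo_size=5, trials=2000, rng=random.Random(0)):
            # A VO "forms" when all sampled members share one component.
            comp = {n: i for i, c in enumerate(nx.connected_components(G)) for n in c}
            nodes = list(G)
            hits = sum(len({comp[n] for n in rng.sample(nodes, vo_size)}) == 1
                       for _ in range(trials))
            return hits / trials

        n = 500
        for avg_deg in (2, 4, 8):
            er = nx.gnp_random_graph(n, avg_deg / (n - 1), seed=0)          # random
            ba = nx.barabasi_albert_graph(n, max(1, avg_deg // 2), seed=0)  # scale-free
            print(f"avg degree {avg_deg}: ER {p_formation(er):.3f}, BA {p_formation(ba):.3f}")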

  4. EASE-Grid 2.0: Incremental but Significant Improvements for Earth-Gridded Data Sets

    Directory of Open Access Journals (Sweden)

    Matthew H. Savoie

    2012-03-01

    Full Text Available Defined in the early 1990s for use with gridded satellite passive microwave data, the Equal-Area Scalable Earth Grid (EASE-Grid was quickly adopted and used for distribution of a variety of satellite and in situ data sets. Conceptually easy to understand, EASE-Grid suffers from limitations that make it impossible to format in the widely popular GeoTIFF convention without reprojection. Importing EASE-Grid data into standard mapping software packages is nontrivial and error-prone. This article defines a standard for an improved EASE-Grid 2.0 definition, addressing how the changes rectify issues with the original grid definition. Data distributed using the EASE-Grid 2.0 standard will be easier for users to import into standard software packages and will minimize common reprojection errors that users had encountered with the original EASE-Grid definition.
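
    Because EASE-Grid 2.0 sits on standard EPSG-registered projections, coordinate conversion becomes a one-liner in common tooling, which is exactly the interoperability gain described above. The sketch uses pyproj with EPSG:6933 (the global cylindrical grid); the 36 km grid constants are quoted from memory of the NSIDC documentation and should be verified before use.

        from pyproj import Transformer

        to_ease2 = Transformer.from_crs("EPSG:4326", "EPSG:6933", always_xy=True)
        lon, lat = -105.0, 40.0
        x, y = to_ease2.transform(lon, lat)            # metres in grid space
        print(f"({lon}, {lat}) -> ({x:.1f} m, {y:.1f} m)")

        # Column/row on the nominal 36 km global grid (constants to be verified).
        cell, x_min, y_max = 36_032.22, -17_367_530.45, 7_314_540.83
        col, row = int((x - x_min) // cell), int((y_max - y) // cell)
        print(f"col {col}, row {row}")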

  5. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
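
    The spurious-correlation point admits a compact demonstration: with a fixed, small sample size, the maximum absolute sample correlation between a response and pure-noise features grows steadily with the number of features, even though every true correlation is zero. Sizes below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        y = rng.normal(size=n)

        for p in (100, 1_000, 10_000):
            X = rng.normal(size=(n, p))                # noise, independent of y
            r = (X - X.mean(0)).T @ (y - y.mean())
            r /= n * X.std(0) * y.std()                # sample correlations
            print(f"p = {p:>6}: max |corr| = {np.abs(r).max():.2f}")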

  6. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  7. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    modern astronomy requires big data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing..., and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  8. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  9. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  10. International Symposium on Grids and Clouds (ISGC) 2016

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations that deal with global challenges as well as smaller and temporary research communities focusing on particular scientific problems. To support these diversified communities and their needs, the e-infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following the call of last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.

  11. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  12. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting, analysing consumer behaviour, preference and emotions. The purpose of this paper is to introduce the term fashion data and why it can be considered as big data. It also gives a broad classification of the types of fashion data and briefly defines them. Also, the methodology and working of a system that will use this data is briefly described.

  13. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    Science.gov (United States)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

    Based on grid vulnerability and uniformity theory, a global network structure and state vulnerability factor model is proposed to measure different grid models, and a multi-objective power grid planning model is established that considers global power network vulnerability, economy, and grid security constraints. An improved genetic algorithm with chaos crossover and mutation is used to search for the optimal plan. In multi-objective optimization the dimensions are not uniform and the weights are not easily assigned, so the principal component analysis (PCA) method is used for a comprehensive assessment of the population in every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.
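
    The PCA-based assessment step can be sketched with scikit-learn: standardize the population's objective values in each generation and use the first principal component as a single comprehensive score. The objective values below are random placeholders, and in practice the component's sign must be checked against the directions of the objectives.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        objectives = rng.random((40, 3))     # 40 candidate plans x 3 objectives
                                             # (vulnerability, cost, security margin)
        Z = StandardScaler().fit_transform(objectives)
        score = PCA(n_components=1).fit_transform(Z).ravel()
        best = np.argsort(score)[:5]         # top 5 after checking the sign
        print("top candidate plans:", best)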

  14. Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids

    Directory of Open Access Journals (Sweden)

    Sudi Mungkasi

    2016-01-01

    Full Text Available This paper presents a numerical entropy production (NEP) scheme for two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP is successful as a refinement/coarsening indicator in the adaptive mesh finite volume method, as the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.

  15. SETI as a part of Big History

    Science.gov (United States)

    Maccone, Claudio

    2014-08-01

    Big History is an emerging academic discipline which examines history scientifically from the Big Bang to the present. It uses a multidisciplinary approach based on combining numerous disciplines from science and the humanities, and explores human existence in the context of this bigger picture. It is taught at some universities. In a series of recent papers ([11] through [15] and [17] through [18]) and in a book [16], we developed a new mathematical model embracing Darwinian Evolution (RNA to Humans; see, in particular, [17]) and Human History (Aztecs to USA; see [16]), and then we extrapolated even that into the future up to ten million years (see [18]), the minimum time required for a civilization to expand to the whole Milky Way (Fermi paradox). In this paper, we further extend that model into the past so as to let it start at the Big Bang (13.8 billion years ago), thus merging Big History, Evolution on Earth and SETI (the modern Search for ExtraTerrestrial Intelligence) into a single body of knowledge of a statistical type. Our idea is that the Geometric Brownian Motion (GBM), so far used as the key stochastic process of financial mathematics (Black-Scholes models and the related 1997 Nobel Prize in Economics!), may be successfully applied to the whole of Big History. In particular, in this paper we show that the Statistical Drake Equation (namely the statistical extension of the classical Drake Equation typical of SETI) can be regarded as the "frozen in time" part of GBM. This makes SETI a subset of our Big History theory based on GBMs: just as the GBM is the "movie" unfolding in time, so the Statistical Drake Equation is its "still picture", static in time, and the GBM is the time-extension of the Drake Equation. Darwinian Evolution on Earth may be easily described as an increasing GBM in the number of living species on Earth over the last 3.5 billion years. The first of them was RNA 3.5 billion years ago, and now 50 million living species or more exist, each
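
    The GBM mentioned above has the closed form S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t); at any fixed t, S_t is lognormally distributed, which is the sense in which a static equation can be read as the "still picture" of the unfolding process. A small simulation sketch with arbitrary parameters:

        import numpy as np

        rng = np.random.default_rng(0)
        S0, mu, sigma = 1.0, 0.05, 0.2
        T, steps, paths = 10.0, 1_000, 10_000
        dt = T / steps

        dW = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
        t = np.linspace(dt, T, steps)
        S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.cumsum(dW, axis=1))

        # The simulated mean at T matches the lognormal mean S0*exp(mu*T).
        print("simulated:", S[:, -1].mean().round(3), "theory:", round(S0 * np.exp(mu * T), 3))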

  16. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.

  17. Questioning the "big assumptions". Part I: addressing personal contradictions that impede professional development.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Armstrong, Elizabeth; Kegan, Robert

    2003-08-01

    The ultimate success of recent medical curriculum reforms is, in large part, dependent upon the faculty's ability to adopt and sustain new attitudes and behaviors. However, like many New Year's resolutions, sincere intent to change may be short lived and followed by a discouraging return to old behaviors. Failure to sustain the initial resolve to change can be misinterpreted as a lack of commitment to one's original goals and eventually lead to greater effort expended in rationalizing the status quo rather than changing it. The present article outlines how a transformative process that has proven to be effective in managing personal change, Questioning the Big Assumptions, was successfully used in an international faculty development program for medical educators to enhance individual personal satisfaction and professional effectiveness. This process systematically encouraged participants to explore and proactively address currently operative mechanisms that could stall their attempts to change at the professional level. The applications of the Big Assumptions process in faculty development helped individuals to recognize and subsequently utilize unchallenged and deep rooted personal beliefs to overcome unconscious resistance to change. This approach systematically led participants away from circular griping about what was not right in their current situation to identifying the actions that they needed to take to realize their individual goals. By thoughtful testing of personal Big Assumptions, participants designed behavioral changes that could be broadly supported and, most importantly, sustained.

  18. Addressing the Complexities of Big Data Analytics in Healthcare: The Diabetes Screening Case

    Directory of Open Access Journals (Sweden)

    Daswin De Silva

    2015-09-01

    Full Text Available The healthcare industry generates a high throughput of medical, clinical and omics data of varying complexity and features. Clinical decision support is gaining widespread attention as medical institutions and governing bodies turn towards better management of this data for effective and efficient healthcare delivery and quality-assured outcomes. A mass of data across all stages, from disease diagnosis to palliative care, is further indication of the opportunities and challenges for effective data management, analysis, prediction and optimization techniques as parts of knowledge management in clinical environments. Big Data analytics (BDA) presents the potential to advance this industry with reforms in clinical decision support and translational research. However, adoption of big data analytics has been slow due to complexities posed by the nature of healthcare data. The success of these systems is hard to predict, so further research is needed to provide a robust framework to ensure investment in BDA is justified. In this paper we investigate these complexities from the perspective of updated Information Systems (IS) participation theory. We present a case study on a large diabetes screening project to integrate, converge and derive expedient insights from such an accumulation of data and make recommendations for a successful BDA implementation grounded in a participatory framework and the specificities of big data in the healthcare context.

  19. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? And what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R and D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  1. Utilizing big data to provide better health at lower cost.

    Science.gov (United States)

    Jones, Laney K; Pulk, Rebecca; Gionfriddo, Michael R; Evans, Michael A; Parry, Dean

    2018-04-01

    The efficient use of big data in order to provide better health at lower cost is described. As data become more usable and accessible in healthcare, organizations need to be prepared to use this information to positively impact patient care. In order to be successful, organizations need teams with expertise in informatics and data management that can build new infrastructure and restructure existing infrastructure to support quality and process improvements in real time, such as creating discrete data fields that can be easily retrieved and used to analyze and monitor care delivery. Organizations should use data to monitor performance (e.g., process metrics) as well as the health of their populations (e.g., clinical parameters and health outcomes). Data can be used to prevent hospitalizations, combat opioid abuse and misuse, improve antimicrobial stewardship, and reduce pharmaceutical spending. These examples also serve to highlight lessons learned to better use data to improve health: for example, data can inform and create efficiencies in care, stakeholders should be engaged and communicated with early and often, and collaboration is necessary to obtain complete data. To truly transform care so that it is delivered in a way that is sustainable, responsible, and patient-centered, health systems need to act on these opportunities, invest in big data, and routinely use big data in the delivery of care. Using data efficiently has the potential to improve the care of our patients and lower cost. Despite early successes, barriers to implementation remain, including data acquisition, integration, and usability. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  2. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  3. Grid Databases for Shared Image Analysis in the MammoGrid Project

    CERN Document Server

    Amendolia, S R; Hauer, T; Manset, D; McClatchey, R; Odeh, M; Reading, T; Rogulin, D; Schottlander, D; Solomonides, T

    2004-01-01

    The MammoGrid project aims to prove that Grid infrastructures can be used for collaborative clinical analysis of database-resident but geographically distributed medical images. This requires: a) the provision of a clinician-facing front-end workstation and b) the ability to service real-world clinician queries across a distributed and federated database. The MammoGrid project will prove the viability of the Grid by harnessing its power to enable radiologists from geographically dispersed hospitals to share standardized mammograms, to compare diagnoses (with and without computer aided detection of tumours) and to perform sophisticated epidemiological studies across national boundaries. This paper outlines the approach taken in MammoGrid to seamlessly connect radiologist workstations across a Grid using an "information infrastructure" and a DICOM-compliant object model residing in multiple distributed data stores in Italy and the UK

  4. Multigrid on unstructured grids using an auxiliary set of structured grids

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, C.C.; Malhotra, S.; Schultz, M.H. [Yale Univ., New Haven, CT (United States)

    1996-12-31

    Unstructured grids do not have a convenient and natural multigrid framework for actually computing and maintaining a high floating point rate on standard computers. In fact, just the coarsening process is expensive for many applications. Since unstructured grids play a vital role in many scientific computing applications, many modifications have been proposed to solve this problem. One suggested solution is to map the original unstructured grid onto a structured grid. This can be used as a fine grid in a standard multigrid algorithm to precondition the original problem on the unstructured grid. We show that unless extreme care is taken, this mapping can lead to a system with a high condition number which eliminates the usefulness of the multigrid method. Theorems with lower and upper bounds are provided. Simple examples show that the upper bounds are sharp.
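
    The pitfall described above can be illustrated with a one-dimensional analogue: precondition a Poisson operator assembled on a non-uniform ("unstructured") grid with its counterpart on a uniform auxiliary grid and inspect the condition number. The sketch below is illustrative only, with invented grids and sizes, and is not the authors' method; mild node perturbations leave the preconditioned system well conditioned, while extreme cell-size ratios reproduce the blow-up the abstract warns about.

        import numpy as np

        def poisson_1d(x):
            # assemble the 1D FEM stiffness matrix on (possibly non-uniform) nodes x,
            # then apply homogeneous Dirichlet boundary conditions
            n = len(x)
            A = np.zeros((n, n))
            for i in range(n - 1):
                h = x[i + 1] - x[i]
                A[i:i + 2, i:i + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
            return A[1:-1, 1:-1]

        rng = np.random.default_rng(0)
        xu = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0, 1, 30))))  # "unstructured" nodes
        xs = np.linspace(0.0, 1.0, len(xu))                                # auxiliary structured grid

        A = poisson_1d(xu)                            # operator on the unstructured grid
        M = poisson_1d(xs)                            # structured-grid preconditioner
        print(np.linalg.cond(A))                      # original condition number
        print(np.linalg.cond(np.linalg.solve(M, A)))  # preconditioned condition number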

  5. Enhancing GIS Capabilities for High Resolution Earth Science Grids

    Science.gov (United States)

    Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.

    2017-12-01

    Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. In addition, we'll provide a general overview of OCGIS
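
    The chunked-interpolation idea — process one spatial tile at a time so that peak memory stays bounded no matter how large the grid — can be sketched in a few lines. Block averaging below is only a stand-in for ESMF's conservative regridding weights, and the grid shape is assumed to divide evenly by the tile size and coarsening factor.

        import numpy as np

        def regrid_chunked(src, factor, tile=1024):
            # coarsen a fine rectilinear grid by block-averaging, one spatial
            # tile at a time; assumes the tile size and grid shape are
            # multiples of `factor` so every block reshapes cleanly
            ny, nx = src.shape
            out = np.empty((ny // factor, nx // factor), dtype=src.dtype)
            for j0 in range(0, ny, tile):
                for i0 in range(0, nx, tile):
                    blk = src[j0:j0 + tile, i0:i0 + tile]
                    h, w = blk.shape
                    blk = blk.reshape(h // factor, factor, w // factor, factor)
                    out[j0 // factor:(j0 + h) // factor,
                        i0 // factor:(i0 + w) // factor] = blk.mean(axis=(1, 3))
            return out

        src = np.random.rand(4096, 8192).astype(np.float32)   # stand-in fine grid
        coarse = regrid_chunked(src, factor=8)                # 512 x 1024 output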

  6. Analysis of Traffic Crashes Involving Pedestrians Using Big Data: Investigation of Contributing Factors and Identification of Hotspots.

    Science.gov (United States)

    Xie, Kun; Ozbay, Kaan; Kurkcu, Abdullah; Yang, Hong

    2017-08-01

    This study aims to explore the potential of using big data in advancing the pedestrian risk analysis including the investigation of contributing factors and the hotspot identification. Massive amounts of data of Manhattan from a variety of sources were collected, integrated, and processed, including taxi trips, subway turnstile counts, traffic volumes, road network, land use, sociodemographic, and social media data. The whole study area was uniformly split into grid cells as the basic geographical units of analysis. The cell-structured framework makes it easy to incorporate rich and diversified data into risk analysis. The cost of each crash, weighted by injury severity, was assigned to the cells based on the relative distance to the crash site using a kernel density function. A tobit model was developed to relate grid-cell-specific contributing factors to crash costs that are left-censored at zero. The potential for safety improvement (PSI) that could be obtained by using the actual crash cost minus the cost of "similar" sites estimated by the tobit model was used as a measure to identify and rank pedestrian crash hotspots. The proposed hotspot identification method takes into account two important factors that are generally ignored, i.e., injury severity and effects of exposure indicators. Big data, on the one hand, enable more precise estimation of the effects of risk factors by providing richer data for modeling, and on the other hand, enable large-scale hotspot identification with higher resolution than conventional methods based on census tracts or traffic analysis zones. © 2017 Society for Risk Analysis.
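
    The cell-structured cost assignment can be sketched directly: each crash spreads its severity-weighted cost over nearby cell centroids with a Gaussian kernel, and the PSI is the observed cost minus a model estimate. All numbers below are invented, and a constant stand-in replaces the fitted tobit model.

        import numpy as np

        def cell_crash_cost(cells, crashes, costs, bandwidth=200.0):
            # distance from every cell centroid to every crash site (metres)
            d = np.linalg.norm(cells[:, None, :] - crashes[None, :, :], axis=2)
            w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
            w /= w.sum(axis=0, keepdims=True)         # each crash distributes its full cost
            return w @ costs

        cells = np.array([[0.0, 0.0], [250.0, 0.0], [500.0, 0.0]])  # hypothetical centroids
        crashes = np.array([[60.0, 0.0], [480.0, 0.0]])             # hypothetical crash sites
        costs = np.array([3.0, 1.0])                                # severity-weighted costs

        observed = cell_crash_cost(cells, crashes, costs)
        predicted = np.array([1.8, 0.9, 0.8])   # stand-in for the tobit model estimate
        psi = observed - predicted              # rank cells by PSI to find hotspots
        print(psi)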

  7. Asymmetrical Grid Fault Ride-Through Strategy of Three-phase Grid-connected Inverter Considering Network Impedance Impact in Low Voltage Grid

    DEFF Research Database (Denmark)

    Guo, Xiaoqiang; Zhang, Xue; Wang, Baocheng

    2014-01-01

This letter presents a new control strategy of three-phase grid-connected inverter for positive sequence voltage recovery and negative sequence voltage reduction under asymmetrical grid faults. Unlike the conventional control strategy, which is based on the assumption that the network impedance is mainly inductive, the proposed control strategy is more flexible and effective by considering the network impedance impact, which is of great importance for the high penetration of grid-connected renewable energy systems into low-voltage grids. Experimental tests are carried out to validate the effectiveness of the proposed solution for flexible voltage support in a low-voltage grid, where the network impedance is mainly resistive.
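
    The positive- and negative-sequence voltages such a controller acts on come from the standard Fortescue decomposition of the phase phasors. A minimal sketch, with an invented asymmetrical sag on phase A:

        import numpy as np

        a = np.exp(2j * np.pi / 3)                 # Fortescue rotation operator
        F = np.array([[1, a, a ** 2],              # row 0 -> positive sequence
                      [1, a ** 2, a],              # row 1 -> negative sequence
                      [1, 1, 1]]) / 3              # row 2 -> zero sequence

        # hypothetical fault: phase A sags to 0.5 pu, phases B and C stay nominal
        v_abc = np.array([0.5, 1.0 * a ** 2, 1.0 * a])
        v_pos, v_neg, v_zero = F @ v_abc
        print(abs(v_pos), abs(v_neg))              # targets: raise |V+|, reduce |V-|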

  8. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
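
    As a taste of the API the book covers, the Python client library can run a query against a public dataset in a few lines (application default credentials are assumed to be configured; the query itself is only illustrative):

        from google.cloud import bigquery

        client = bigquery.Client()
        query = """
            SELECT name, SUM(number) AS total
            FROM `bigquery-public-data.usa_names.usa_1910_2013`
            GROUP BY name
            ORDER BY total DESC
            LIMIT 10
        """
        for row in client.query(query).result():   # blocks until the job finishes
            print(row.name, row.total)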

  9. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  10. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of the data of the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment to the user, organizing all the resources in a coherent virtual computer center. The general description of the elements of the Grid, their interconnections and their use by the experiments will be exposed in this talk. The computational and storage capability of the Grid is attracting other research communities beyond the high energy physics. Examples of these applications will be also exposed during the presentation.

  11. Bigger data for big data: from Twitter to brain-computer interfaces.

    Science.gov (United States)

    Roesch, Etienne B; Stahl, Frederic; Gaber, Mohamed Medhat

    2014-02-01

    We are sympathetic with Bentley et al.'s attempt to encompass the wisdom of crowds in a generative model, but posit that a successful attempt at using big data will include more sensitive measurements, more varied sources of information, and will also build from the indirect information available through technology, from ancillary technical features to data from brain-computer interfaces.

  12. Importance of Grid Center Arrangement

    Science.gov (United States)

    Pasaogullari, O.; Usul, N.

    2012-12-01

In digital elevation modeling, grid size is accepted to be the most important parameter. Regardless of the point density and/or scale of the source data, it is freely decided by the user. Most of the time the arrangement of the grid centers is ignored, and most GIS packages omit the choice of grid center coordinates. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between a raster grid DEM and a bitmap image, the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid, constructed so that they are half a grid size apart from each other. The resulting grid DEMs are investigated through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of different information they carry. The grid DEMs are projected to a finer grid in order to co-center them, and the similarity measures are then applied to each pair of grid DEMs. These similarity measures are adapted to DEMs with band reduction and real-number operations. One of the measures yields a function graph and the others yield measure matrices. Application of the similarity measures to the six grid DEM pairs shows interesting results: although these four grid DEMs were created with the same method for the same area, thirteen out of fourteen measures state that the half-grid-size-apart grid DEMs are different from each other. The results indicate that although the grid DEMs carry mutual information, they also contain additional individual information; in other words, grid DEMs constructed half a grid size apart hold non-redundant information.
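
    Mutual information is a typical example of the image-processing similarity measures such a study adapts: it is computed from the joint histogram of two co-registered grids and captures both shared and non-redundant information. A sketch, with an arbitrary bin count:

        import numpy as np

        def mutual_information(dem1, dem2, bins=64):
            # joint histogram of co-registered elevation grids -> MI in bits
            h, _, _ = np.histogram2d(dem1.ravel(), dem2.ravel(), bins=bins)
            pxy = h / h.sum()
            px = pxy.sum(axis=1, keepdims=True)    # marginal of dem1
            py = pxy.sum(axis=0, keepdims=True)    # marginal of dem2
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())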

  13. Initial results of local grid control using wind farms with grid support

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, Poul; Hansen, Anca D.; Iov, F.; Blaabjerg, F.

    2005-09-01

This report describes initial results from simulations of local grid control using wind farms with grid support. The focus is on the behaviour of the wind farms when they are isolated from the main grid and establish a local grid together with a few other grid components. The isolated subsystems used in the work presented in this report are not intended to simulate a specific subsystem; rather, they are extremely simplified single-busbar systems using only a few more components than the wind farm. This approach has been applied to make it easier to understand the dynamics of the subsystem. The main observation is that the fast dynamics of the wind turbines seem to be able to contribute significantly to grid control, which can be useful where the wind farm is isolated from the main grid in a subsystem with a surplus of generation. Thus, fast down-regulation of the wind farm using automatic frequency control can keep the subsystem in operation and thereby improve the reliability of the grid. (LN)
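
    In its simplest form, the fast down-regulation described above behaves like an over-frequency droop: surplus generation raises the frequency of the isolated subsystem, and the wind farm curtails output in proportion. The constants below are illustrative, not the report's actual controller settings.

        def droop_setpoint(p_avail, f_meas, f_nom=50.0, droop=0.04, deadband=0.01):
            # curtail wind farm output in proportion to over-frequency
            df = f_meas - f_nom
            if df <= deadband:                     # no surplus: run at available power
                return p_avail
            reduction = (df - deadband) / (droop * f_nom)   # per-unit curtailment
            return max(0.0, p_avail * (1.0 - reduction))

        print(droop_setpoint(40.0, 50.5))          # surplus of generation -> ~30.2 MW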

  14. Integrated agent-based home energy management systems for smart grid applications

    NARCIS (Netherlands)

    Asare-Bediako, B.; Kling, W.L.; Ribeiro, P.F.

    2013-01-01

    The participation of residential consumers is vital for a successful implementation of the smart grid vision. The installation of smart meters is envisioned to increase residential consumers' involvement in the electric energy sector. The installations of local energy generations (photovoltaic, and

  15. How should grid operators govern smart grid innovation projects? An embedded case study approach

    International Nuclear Information System (INIS)

    Reuver, Mark de; Lei, Telli van der; Lukszo, Zofia

    2016-01-01

    Grid operators increasingly have to collaborate with other actors in order to realize smart grid innovations. For routine maintenance, grid operators typically acquire technologies in one-off transactions, but the innovative nature of smart grid projects may require more collaborate relationships. This paper studies how a transactional versus relational approach to governing smart grid innovation projects affects incentives for other actors to collaborate. We analyse 34 cases of smart grid innovation projects based on extensive archival data as well as interviews. We find that projects relying on relational governance are more likely to provide incentives for collaboration. Especially non-financial incentives such as reputational benefits and shared intellectual property rights are more likely to be found in projects relying on relational governance. Policy makers that wish to stimulate smart grid innovation projects should consider stimulating long-term relationships between grid operators and third parties, because such relationships are more likely to produce incentives for collaboration. - Highlights: • Smart grids require collaboration between grid operators and other actors. • We contrast transactional and relational governance of smart grid projects. • Long-term relations produce more incentives for smart grid collaboration. • Non-financial incentives are more important in long-term relations. • Policy makers should stimulate long-term relations to stimulate smart grids.

  16. Big Data-Based Approach to Detect, Locate, and Enhance the Stability of an Unplanned Microgrid Islanding

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Li, Yan; Zhang, Yingchen; Zhang, Jun Jason; Gao, David Wenzhong; Muljadi, Eduard; Gu, Yi

    2017-10-01

    In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.
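
    The PSO step is generic enough to sketch on its own. Below, a minimal global-best particle swarm minimizes a stand-in objective; in the paper's setting the objective would be the eigenvalue-based damping metric of the islanded microgrid model, and all constants here are illustrative.

        import numpy as np

        def pso(objective, dim, n=30, iters=100, lo=-1.0, hi=1.0, seed=1):
            # minimal global-best particle swarm optimizer
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pval = np.apply_along_axis(objective, 1, x)
            g = pbest[pval.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(objective, 1, x)
                better = f < pval
                pbest[better], pval[better] = x[better], f[better]
                g = pbest[pval.argmin()]
            return g, pval.min()

        best, val = pso(lambda p: np.sum((p - 0.3) ** 2), dim=4)  # toy objective
        print(best, val)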

  17. Characterization of Slosh Damping for Ortho-Grid and Iso-Grid Internal Tank Structures

    Science.gov (United States)

    Westra, Douglas G.; Sansone, Marco D.; Eberhart, Chad J.; West, Jeffrey S.

    2016-01-01

    Grid stiffened tank structures such as Ortho-Grid and Iso-Grid are widely used in cryogenic tanks for providing stiffening to the tank while reducing mass, compared to tank walls of constant cross-section. If the structure is internal to the tank, it will positively affect the fluid dynamic behavior of the liquid propellant, in regard to fluid slosh damping. As NASA and commercial companies endeavor to explore the solar system, vehicles will by necessity become more mass efficient, and design margin will be reduced where possible. Therefore, if the damping characteristics of the Ortho-Grid and Iso-Grid structure is understood, their positive damping effect can be taken into account in the systems design process. Historically, damping by internal structures has been characterized by rules of thumb and for Ortho-Grid, empirical design tools intended for slosh baffles of much larger cross-section have been used. There is little or no information available to characterize the slosh behavior of Iso-Grid internal structure. Therefore, to take advantage of these structures for their positive damping effects, there is much need for obtaining additional data and tools to characterize them. Recently, the NASA Marshall Space Flight Center conducted both sub-scale testing and computational fluid dynamics (CFD) simulations of slosh damping for Ortho-Grid and Iso-Grid tanks for cylindrical tanks containing water. Enhanced grid meshing techniques were applied to the geometrically detailed and complex Ortho-Grid and Iso-Grid structures. The Loci-STREAM CFD program with the Volume of Fluid Method module for tracking and locating the water-air fluid interface was used to conduct the simulations. The CFD simulations were validated with the test data and new empirical models for predicting damping and frequency of Ortho-Grid and Iso-Grid structures were generated.

  18. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  19. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

In discussing the viability of the big-bang model of the Universe, relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, aspects of the general theory of relativity such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  20. Robust Grid-Current-Feedback Resonance Suppression Method for LCL-Type Grid-Connected Inverter Connected to Weak Grid

    DEFF Research Database (Denmark)

    Zhou, Xiaoping; Zhou, Leming; Chen, Yandong

    2018-01-01

In this paper, a robust grid-current-feedback resonance suppression (GCFRS) method for LCL-type grid-connected inverters is proposed to enhance the system damping without introducing switching noise and to eliminate the impact of control delay on system robustness against grid-impedance variation. It is composed of the GCFRS method, the full duty-ratio and zero-beat-lag PWM method, and the lead-grid-current-feedback resonance suppression (LGCFRS) method. Firstly, the GCFRS is used to suppress the LCL-resonant peak well and avoid introducing switching noise. Secondly, the proposed full duty-ratio and zero-beat-lag PWM method is used to eliminate the one-beat-lag computation delay without introducing duty cycle limitations; it can also realize the smooth switching from the positive to the negative half-wave of the grid current and improve the waveform quality. Thirdly, the proposed LGCFRS is used to further...

  1. Grid connectivity issues and the importance of GCC. [GCC - Grid Code Compliance

    Energy Technology Data Exchange (ETDEWEB)

    Das, A.; Schwartz, M.-K. [GL Renewable Certification, Malleswaram, Bangalore (India)

    2012-07-01

In India, wind energy is concentrated in rural areas with very high penetration. In these cases, wind power has an increasing influence on the power quality of the grids; another aspect is the influence of weak grids on the operation of wind turbines. It is therefore essential to introduce a strong grid code that is specifically applicable to the wind sector and suited to Indian grid conditions. This paper focuses on different international grid codes and their requirements with regard to the connection of wind farms to the electric power system, so as to mitigate grid connectivity issues. The requirements include ways to achieve voltage and frequency stability in the grid-tied wind power system. A comparative overview and analysis of the main grid connection requirements is conducted, comprising several national and regional codes from countries where high wind penetration levels have been achieved or are expected in the future. The objective of these requirements is to provide wind farms with the control and regulation capabilities encountered in conventional power plants, which are necessary for the safe, reliable and economic operation of the power system. The paper also briefly describes the Grid Code Compliance (GCC) certification procedure implemented by accredited certification bodies such as Germanischer Lloyd Renewables Certification (GL RC), which checks the conformity of wind turbines with region-specific grid codes. (Author)

  2. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics of value, volume, velocity, variety, veracity and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding big data characteristics, some directions for using suitable and promising open-source distributed data processing software platforms are given.

  3. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  4. The GridShare solution: a smart grid approach to improve service provision on a renewable energy mini-grid in Bhutan

    International Nuclear Information System (INIS)

    Quetchenbach, T G; Harper, M J; Jacobson, A E; Robinson IV, J; Hervin, K K; Chase, N A; Dorji, C

    2013-01-01

    This letter reports on the design and pilot installation of GridShares, devices intended to alleviate brownouts caused by peak power use on isolated, village-scale mini-grids. A team consisting of the authors and partner organizations designed, built and field-tested GridShares in the village of Rukubji, Bhutan. The GridShare takes an innovative approach to reducing brownouts by using a low cost device that communicates the state of the grid to its users and regulates usage before severe brownouts occur. This demand-side solution encourages users to distribute the use of large appliances more evenly throughout the day, allowing power-limited systems to provide reliable, long-term renewable electricity to these communities. In the summer of 2011, GridShares were installed in every household and business connected to the Rukubji micro-hydro mini-grid, which serves approximately 90 households with a 40 kW nominal capacity micro-hydro system. The installation was accompanied by an extensive education program. Following the installation of the GridShares, the occurrence and average length of severe brownouts, which had been caused primarily by the use of electric cooking appliances during meal preparation, decreased by over 92%. Additionally, the majority of residents surveyed stated that now they are more certain that their rice will cook well and that they would recommend installing GridShares in other villages facing similar problems. (letter)
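
    The essence of the device — signal the grid state to users and restrain large loads before a severe brownout develops — can be caricatured as a voltage-sag traffic light. The thresholds below are invented for illustration and are not the deployed GridShare's.

        def gridshare_state(v_meas, v_nom=230.0):
            # traffic-light style feedback from the measured grid voltage
            sag = (v_nom - v_meas) / v_nom
            if sag < 0.05:
                return "green: normal use"
            if sag < 0.12:
                return "yellow: please defer large appliances (e.g. rice cookers)"
            return "red: large loads temporarily limited"

        print(gridshare_state(195.0))              # deep sag -> "red"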

  5. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; we therefore compare and contrast the two geometries throughout.
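
    For reference, the FRW models mentioned above rest on the metric and Friedmann equation below (units with c = 1); the scale factor a(t) shrinking to zero at a finite past time is the big bang singularity.

        \[
          ds^2 = -dt^2 + a(t)^2\left(\frac{dr^2}{1-kr^2} + r^2\,d\Omega^2\right),
          \qquad
          \left(\frac{\dot a}{a}\right)^{\!2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}.
        \]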

  6. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research, let alone the boundaries of organizations and the established practices of decision-making. Coined 'open data' and 'big data', these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD) by employing the dimensions of 'order' and 'relationality'. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  7. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...

  8. Big data to optimise product strategy in electronic industry

    OpenAIRE

    Khan, Nawaz; Lakshmi Sabih, Vijay; Georgiadou, Elli; Repanovich, Angela

    2016-01-01

This research identifies the success factors for new product development and competitive advantage and argues how big data can expedite the process of launching a new product initiative. By combining the research findings and the patterns of background theories, an inquisitive framework for new product development and competitive advantage is proposed. This model and framework is a prototype which, with the aid of scenarios, recommends a parsimonious and unified way to elucidate...

  9. The LHCb Grid Simulation

    CERN Multimedia

    Baranov, Alexander

    2016-01-01

The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers at different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential for the Grid's work. Moreover, the job scheduling and data distribution strategy have a great impact on grid performance. However, it is hard to choose an appropriate algorithm and strategies, as they need a lot of time to be tested on the real grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites and their number of CPUs, amount of disk storage and connection bandwidth. We demonstrate how well the simulator reproduces the grid's work, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, consider methods ...
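
    A toy version of such a simulator fits in a few lines with a discrete-event library. The sketch below uses SimPy with invented site names, CPU counts and job runtimes, and ignores disk storage and bandwidth entirely:

        import simpy

        def site(env, name, cpus, jobs, runtime):
            # one grid site: a pool of CPUs serving a queue of identical jobs
            pool = simpy.Resource(env, capacity=cpus)

            def job(i):
                with pool.request() as req:
                    yield req                      # wait for a free CPU
                    yield env.timeout(runtime)     # occupy it for the job's runtime
                    print(f"{env.now:6.1f}  {name}: job {i} done")

            for i in range(jobs):
                env.process(job(i))

        env = simpy.Environment()
        site(env, "SiteA", cpus=4, jobs=10, runtime=3.0)   # hypothetical sites
        site(env, "SiteB", cpus=2, jobs=6, runtime=5.0)
        env.run()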

  10. Smart grid in Denmark 2.0. Implementing three key recommendations from the Smart Grid Network. [DanGrid]; Smart Grid i Danmark 2.0. Implementering af tre centrale anbefalinger fra Smart Grid netvaerket

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-01

    In 2011 the Smart Grid Network, established by the Danish Climate and Energy minister in 2010, published a report which identifies 35 recommendations for implementing smart grid in Denmark. The present report was prepared by the Danish Energy Association and Energinet.dk and elaborates three of these recommendations: Concept for controlling the power system; Information model for the dissemination of data; Roadmap for deployment of smart grid. Concept of Smart Grid: The concept mobilizes and enables electric power demand response and production from smaller customers. This is done by customers or devices connected to the power system modify their behavior to meet the needs of the power system. The concept basically distinguishes between two different mechanisms to enable flexibility. One is the use of price signals (variable network tariffs and electricity prices), which gives customers a financial incentive to move their electricity consumption and production to times when it is of less inconvenience to the power system. The second is flexibility products, where a pre-arranged and well-specified performance - for example, a load reduction in a defined network area - can be activated as required by grid operators and / or Energinet.dk at an agreed price. Information Model for Disseminating Data: The future power system is complex with a large number of physical units, companies and individuals are actively involved in the power system. Similarly, the amount of information needed to be collected, communicated and processed grows explosively, and it is therefore essential to ensure a well-functioning IT infrastructure. A crucial element is a standardized information model in the Danish power system. The concept therefore indicates to use international standards to define an information model. Roadmap Focusing on Grid Companies' Role: There is a need to remove two key barriers. The first barrier is that the existing regulation does not support the grid using

  11. The Most Economical Mode of Power Supply for Remote and Less Developed Areas in China: Power Grid Extension or Micro-Grid?

    Directory of Open Access Journals (Sweden)

    Sen Guo

    2017-05-01

There are still residents without access to electricity in some remote and less developed areas of China, which leads to low living standards and hinders sustainable development for these residents. In order to achieve the strategic targets of solving China's energy poverty, realizing basic energy service equalization, and comprehensively building a moderately prosperous society, several policies have been promulgated in recent years that aim to solve the electricity access issue for residents living in remote and less developed areas. It is of great importance to determine the most economical mode of power supply in these areas, which directly affects the economic efficiency of public investment projects. This paper therefore focuses on how to select the most economical power supply mode for rural electrification in China. Firstly, the primary modes of supplying electricity to residents of remote and less developed areas are discussed, namely power grid extension and micro-grids. Secondly, based on the levelized cost of electricity (LCOE) technique, life cycle economic cost accounting models for the different power supply modes are built. Finally, taking a minority nationality village in Yunnan province as an example, an empirical analysis is performed and the LCOEs of the possible modes of rural electrification are calculated. The results show that the photovoltaic (PV)-based independent micro-grid system is the most economical, with the minimum LCOE of 0.658 RMB/kWh, while the other power supply modes have much higher LCOEs: 1.078 RMB/kWh for power grid extension, 0.704 RMB/kWh for the wind-based independent micro-grid and 0.885 RMB/kWh for the biomass-based independent micro-grid. The proposed approach is effective and practical and can provide a reference for rural electrification in China.
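
    The comparison rests on the standard LCOE definition, which discounts lifetime costs and delivered energy alike:

        \[
          \mathrm{LCOE} = \frac{\sum_{t=0}^{T} (I_t + O_t + F_t)/(1+r)^t}
                               {\sum_{t=0}^{T} E_t/(1+r)^t}
        \]

    where I_t, O_t and F_t are the investment, operation-and-maintenance and fuel expenditures in year t, E_t is the energy delivered in year t, r is the discount rate and T the system lifetime.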

  12. ASEAN grid-connected biomass residues fired cogeneration plants

    International Nuclear Information System (INIS)

    Adnan, M.F.; Alias, R.

    2006-01-01

Energy supply is one of the world's major concerns. With uncertainty around the main oil suppliers, the oil price is expected to remain high due to continuous world demand. Since oil is mostly used for electricity and transportation, a shortage would cause major disruptions to daily activities. To counter this scenario and the accelerating depletion of fossil fuel resources, various measures have been taken to find alternative sources of energy such as renewable energy. One renewable energy source is biomass residues, which are plentiful, particularly in ASEAN. Through the EC-ASEAN Cogeneration Programme, one of the collaboration programmes between ASEAN and the EC, a number of Full-Scale Demonstration Projects (FSDP) using biomass residues have been commissioned and implemented successfully. Four of the FSDPs, in Thailand and Malaysia, are connected to the grid. These projects have been operating very well, and since the fuel is commonly available in the ASEAN region, duplication should not be a problem. This paper highlights these success stories in implementing grid-connected biomass residue projects while enhancing cooperation between ASEAN and the EC. (Author)

  13. Synchronization method for grid integrated battery storage systems during asymmetrical grid faults

    Directory of Open Access Journals (Sweden)

    Popadić Bane

    2017-01-01

This paper aims at presenting a robust and reliable synchronization method for battery storage systems during asymmetrical grid faults. For this purpose, a Matlab/Simulink-based model for testing of the power electronic interface between the grid and the battery storage systems has been developed. The synchronization method proposed in the paper is based on the proportional integral resonant controller with the delay signal cancellation. The validity of the synchronization method has been verified using the advanced laboratory station for the control of grid connected distributed energy sources. The proposed synchronization method has eliminated unfavourable components from the estimated grid angular frequency, leading to the more accurate and reliable tracking of the grid voltage vector positive sequence during both the normal operation and the operation during asymmetrical grid faults. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III 042004: Integrated and Interdisciplinary Research entitled: Smart Electricity Distribution Grids Based on Distribution Management System and Distributed Generation]
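
    The delay signal cancellation step extracts the positive-sequence component by combining the stationary-frame voltage with a quarter-period-delayed copy. A numeric sketch, with an invented 30% negative-sequence component:

        import numpy as np

        f, fs = 50.0, 10_000.0                     # grid and sampling frequency
        t = np.arange(0.0, 0.08, 1.0 / fs)

        # unbalanced stationary-frame voltage, alpha + j*beta:
        # 1.0 pu positive sequence plus 0.3 pu negative sequence
        v = np.exp(1j * 2 * np.pi * f * t) + 0.3 * np.exp(-1j * 2 * np.pi * f * t)

        d = int(fs / f / 4)                        # quarter fundamental period in samples
        v_pos = 0.5 * (v[d:] + 1j * v[:-d])        # delay signal cancellation
        print(abs(v_pos[-1]))                      # ~1.0: negative sequence removed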

  14. Development of Mitsubishi high thermal performance grid 2 - overview of the development and Dnb test results

    International Nuclear Information System (INIS)

    Hoshi, M.; Imaizumi, M.; Mori, M.; Hori, K.; Ikeda, K.

    2001-01-01

The spacer grid plays a fundamental role in the thermal performance of a PWR fuel assembly: a spacer grid with higher thermal performance gives a greater DNB (Departure from Nucleate Boiling) margin for the core. Mitsubishi has developed a prototype Zircaloy grid with higher thermal performance. In this paper, the development process and DNB test results of the grid are presented. To achieve the goal of designing a grid with higher DNB performance, CFD (Computational Fluid Dynamics) and Freon DNB testing were employed in the development. It was also required that the grid be hydraulically compatible with the existing grid. CFD was used to examine mixing capability and pressure drop in the early stage of development, and Freon DNB testing was used for preliminary checking of the DNB performance of several grid designs. After the final design was fixed, a DNB test was carried out in a high-pressure/high-temperature water test loop to verify the DNB performance, and a hydraulic test was done in a water test loop. The test results show that the grid has higher DNB performance and a lower pressure loss coefficient compared with the existing grid. It is also concluded that the combination of CFD and Freon DNB testing is a successful tool for grid design and development. (authors)

  15. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  16. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  17. Evaluation on the measurements of Grids for PWR's Spent Fuel in IMEF

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yong Sun; Kim, Gil Soo; Kim, Young Joon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

Four grids were successfully measured to obtain the cell sizes of grids that had been burned for about three cycles in a reactor core. In general, to minimize the standard deviation relative to the mean, a grid is set on a fixing jig and held in a stable position by master-slave manipulators mounted on the M5a hot cell wall. The data were collected in an MS-Excel sheet, processed by a VBA (Visual Basic for Applications) program, and analyzed. The analyzed data were observed to be slightly shifted from the center of a grid, to both the left and right as well as the top and bottom. The results were also evaluated by conventional statistical analysis to understand the dimensional properties and integrity of the grids. The developed 3-dimensional measurement apparatus was applied to measure the cell sizes of the four grids, and the acquired data were evaluated by conventional statistical analysis. As a result, the evaluated data appear to be very close to the population mean, but they show a peculiarity whose cause remains to be studied.

  18. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  19. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

The term Big Data was coined to refer to the massive volumes of data that cannot be managed by traditional data handling methods or techniques. The field of Big Data plays an indispensable role in various areas, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the method of examining big data to reveal hidden patterns, previously incomprehensible relationships and other important information that can be utilized to arrive at better decisions. There has been a perpetually expanding interest in big data because of its fast development and because it covers many areas of application. The Apache Hadoop open source technology, written in Java and running on the Linux operating system, was used. The primary contribution of this research is to present an effective and free solution for big data applications in a distributed environment, with its advantages, and to demonstrate its ease of use. Looking ahead, there appears to be a need for an analytical review of new developments in big data technology. Healthcare is one of the great concerns of the world. Big data in healthcare refers to electronic health data sets that relate to patient healthcare and well-being. Data in the healthcare sector is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.

  20. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  1. Flexible operation of parallel grid-connecting converters under unbalanced grid voltage

    DEFF Research Database (Denmark)

    Lu, Jinghang; Savaghebi, Mehdi; Guerrero, Josep M.

    2017-01-01

Under unbalanced grid voltage, parallel grid-connecting converters face issues such as DC-link voltage ripple and overloading. Moreover, under grid voltage unbalance, the active power delivery ability is decreased due to the converter's current rating limitation. In this paper, a thorough study on the current limitation of the grid-connecting converter under grid voltage unbalance is conducted. In addition, based on the principle that the total output active power should be oscillation free, a coordinated control strategy is proposed for the parallel grid-connecting converters. A case study has been conducted to demonstrate the effectiveness of the proposed control strategy.
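
    One plain way to respect the current rating noted above is to scale the positive- and negative-sequence current references together, bounding the worst-case phase current by |I+| + |I-|. This is a sketch of the idea only, not the paper's coordinated strategy:

        def limit_currents(i_pos, i_neg, i_max):
            # shrink positive- and negative-sequence references together so the
            # worst-case phase current |i+| + |i-| stays inside the rating
            peak = abs(i_pos) + abs(i_neg)
            s = min(1.0, i_max / peak) if peak > 0 else 1.0
            return s * i_pos, s * i_neg

        ip, im = limit_currents(1.0 + 0.2j, 0.4 - 0.1j, i_max=1.2)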

  2. Semantic and syntactic interoperability in online processing of big Earth observation data.

    Science.gov (United States)

    Sudmanns, Martin; Tiede, Dirk; Lang, Stefan; Baraldi, Andrea

    2018-01-01

The challenge of enabling syntactic and semantic interoperability for comprehensive and reproducible online processing of big Earth observation (EO) data is still unsolved. Supporting both types of interoperability is one of the requirements for efficiently extracting valuable information from the large amount of available multi-temporal gridded data sets. The proposed system wraps world models (semantic interoperability) into OGC Web Processing Services (syntactic interoperability) for semantic online analyses. World models describe spatio-temporal entities and their relationships in a formal way. The proposed system serves as an enabler for (1) technical interoperability, using a standardised interface to be used by all types of clients, and (2) allowing experts from different domains to develop complex analyses together as a collaborative effort. Users connect the world models online to the data, which are maintained in centralised storage as 3D spatio-temporal data cubes. This allows even non-experts to extract valuable information from EO data, because data management, low-level interactions and specific software issues can be ignored. We discuss the concept of the proposed system, provide a technical implementation example, describe three use cases for extracting changes from EO images, and demonstrate the usability also for non-EO, gridded, multi-temporal data sets (CORINE land cover).
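
    The flavour of querying a 3D spatio-temporal data cube with a world-model-style rule can be sketched with xarray; the variable, threshold and grid below are invented and do not reflect the proposed system's actual interface:

        import numpy as np
        import xarray as xr

        # toy data cube: twelve monthly NDVI grids
        cube = xr.DataArray(
            np.random.rand(12, 90, 180),
            dims=("time", "lat", "lon"),
            coords={"time": np.arange(12),
                    "lat": np.linspace(-89.0, 89.0, 90),
                    "lon": np.linspace(-179.0, 179.0, 180)},
            name="ndvi",
        )

        # rule: cells vegetated (NDVI > 0.4) in month 0 but no longer in month 11
        change = (cube.isel(time=0) > 0.4) & (cube.isel(time=11) <= 0.4)
        print(int(change.sum()))                   # number of matching cells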

  3. Operation of an InGrid based X-ray detector at the CAST experiment

    Science.gov (United States)

    Krieger, Christoph; Desch, Klaus; Kaminski, Jochen; Lupberger, Michael

    2018-02-01

The CERN Axion Solar Telescope (CAST) is searching for axions and other particles which could be candidates for Dark Matter and even Dark Energy. These particles could be produced in the Sun and detected by conversion into soft X-ray photons inside a strong magnetic field. In order to increase the sensitivity to physics beyond the Standard Model, detectors with a threshold below 1 keV as well as efficient background rejection methods are required to compensate for the low energies and weak couplings, which result in very low detection rates. These criteria are fulfilled by a detector utilizing the combination of a pixelized readout chip with an integrated Micromegas stage. These InGrid (Integrated Grid) devices can be built by photolithographic postprocessing techniques, resulting in a close to perfect match of grid and pixels, facilitating the detection of single electrons on the chip surface. The high spatial resolution allows for energy determination by simple electron counting as well as for an event-shape based analysis as a background rejection method. Tests at an X-ray generator revealed the energy threshold of an InGrid based X-ray detector to be well below the carbon Kα line at 277 eV. After the successful demonstration of the detector's key features, the detector was mounted at one of CAST's four detector stations behind an X-ray telescope in 2014. After several months of successful operation without any detector-related interruptions, the InGrid based X-ray detector continues data taking at CAST in 2015. During operation at the experiment, background rates on the order of 10^-5 keV^-1 cm^-2 s^-1 have been achieved by application of a likelihood-based method discriminating the non-photon background originating mostly from cosmic rays. For continued operation in 2016, an upgraded InGrid based detector is to be installed, among other improvements including decoupling and sampling of the signal induced on the grid as well as a veto scintillator to further lower the

  4. Prospects of Foreign Capital Raising for Russian Power Grid Companies

    Directory of Open Access Journals (Sweden)

    N. N. Shvets

    2015-01-01

The power sector reform in Russia saw capital raising as one of its key objectives. Additional investments are necessary, in particular, for the renovation of fixed assets, which are ca. 70% worn out. The official Strategy for the development of the Russian power grid also provides for the privatization of certain companies, and foreign investors are considered among the target audience. Upon prospective privatization, the sector is expected not only to experience a certain increase in capital expenditure, but also to benefit from foreign expertise and efficiency enhancement. At the moment, however, the privatization plans are hard to implement due to a number of obstacles. Prospective investors are mostly concerned about the lack of transparent regulation and a clear development strategy for the industry. This is particularly relevant to the tariff system, which has been continuously altered in recent years. This might be explained by the need to support other sectors, often provided at the expense of the power industry. Furthermore, the prospects of foreign capital raising are negatively influenced by the conflict in Ukraine and the corresponding negative perception among potential investors. The above factors result in a decrease in the value of power grid companies as well as a lack of visibility regarding the prospects of the sector's development. Privatization thus becomes unreasonable both for the state and for prospective investors. At the same time, despite the sector's specifics, there are precedents of successful sales of power grid assets to private investors by international peers: Vattenfall and Fortum have recently closed such transactions, not to mention the power grid sector of Brazil, which is majority-controlled by private owners. Transparent regulation, clear pricing rules and a well-balanced economic policy are indeed indispensable prerequisites for successful privatization. Those might back up a

  5. Phase-lock loop of Grid-connected Voltage Source Converter under non-ideal grid condition

    DEFF Research Database (Denmark)

    Wang, Haojie; Sun, Hai; Han, Minxiao

    2015-01-01

It is a normal practice that the DC micro-grid is connected to the AC main grid through a Grid-connected Voltage Source Converter (G-VSC) for voltage support. Accurate control of the DC micro-grid voltage is difficult for the G-VSC under unbalanced grid conditions, as the phase information of the fundamental positive-sequence component cannot be accurately tracked. Based on an analysis of the cause of the double-frequency ripple when unbalance exists in the main grid, a phase-locked loop (PLL) detection technique is proposed. Under conditions of unsymmetrical system voltage, varying system frequency, single-phase system and distorted system voltage, the proposed PLL can accurately detect the fundamental positive-sequence component of the grid voltage, so that accurate control of the DC micro-grid voltage can be realized.
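
    For context, a basic synchronous-reference-frame PLL is sketched below; fed with unbalanced voltage, its q-axis input carries exactly the double-frequency ripple whose cause the paper analyses. Gains and test signal are illustrative.

        import numpy as np

        def srf_pll(v_ab, fs, kp=100.0, ki=5000.0, f0=50.0):
            # drive the q-axis (imaginary part after the Park rotation) to zero
            theta, integ = 0.0, 0.0
            angles = np.empty(len(v_ab))
            for k, v in enumerate(v_ab):
                vq = (v * np.exp(-1j * theta)).imag
                integ += ki * vq / fs
                w = 2 * np.pi * f0 + kp * vq + integ   # PI-corrected frequency
                theta = (theta + w / fs) % (2 * np.pi)
                angles[k] = theta
            return angles

        fs = 10_000.0
        t = np.arange(0.0, 0.1, 1.0 / fs)
        v = np.exp(1j * (2 * np.pi * 50.2 * t + 0.5))  # off-nominal-frequency grid
        theta = srf_pll(v, fs)                         # locks onto the voltage angle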

  6. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  7. Smart grid security innovative solutions for a modernized grid

    CERN Document Server

    Skopik, Florian

    2015-01-01

    The Smart Grid security ecosystem is complex and multi-disciplinary, and relatively under-researched compared to the traditional information and network security disciplines. While the Smart Grid has provided increased efficiencies in monitoring power usage, directing power supplies to serve peak power needs and improving efficiency of power delivery, the Smart Grid has also opened the way for information security breaches and other types of security breaches. Potential threats range from meter manipulation to directed, high-impact attacks on critical infrastructure that could bring down regi

  8. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervantes-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  9. Grid Integration Research | Wind | NREL

    Science.gov (United States)

    Researchers at NREL study the grid integration of wind power. NREL's grid integration capabilities help electric power system operators manage wind integration into the grid system more efficiently.

  10. EASE-Grid 2.0: Incremental but Significant Improvements for Earth-Gridded Data Sets

    OpenAIRE

    Brodzik, Mary J.; Billingsley, Brendan; Haran, Terry; Raup, Bruce; Savoie, Matthew H.

    2012-01-01

    Defined in the early 1990s for use with gridded satellite passive microwave data, the Equal-Area Scalable Earth Grid (EASE-Grid) was quickly adopted and used for distribution of a variety of satellite and in situ data sets. Conceptually easy to understand, EASE-Grid suffers from limitations that make it impossible to format in the widely popular GeoTIFF convention without reprojection. Importing EASE-Grid data into standard mapping software packages is nontrivial and error-prone. This article...

  11. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    Science.gov (United States)

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.
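
    The paper's actual traffic classification is not reproduced in the abstract; the sketch below merely illustrates the general idea of mapping grid sensory streams to priority classes with latency budgets (all class names and numbers are assumptions, not the IEEE 802.11ah categories):

      # Illustrative only: invented classes, priorities and latency budgets.
      TRAFFIC_CLASSES = {
          "synchrophasor": {"priority": 0, "latency_budget_ms": 20},
          "fault_alarm":   {"priority": 1, "latency_budget_ms": 100},
          "power_quality": {"priority": 2, "latency_budget_ms": 1000},
          "smart_meter":   {"priority": 3, "latency_budget_ms": 60000},
      }

      def classify(sensor_type):
          """Map a power-grid sensory stream to a scheduling class."""
          return TRAFFIC_CLASSES.get(sensor_type, TRAFFIC_CLASSES["smart_meter"])

      print(classify("synchrophasor"))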

  12. A fusion networking model for smart grid power distribution backbone communication network based on PTN

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2016-01-01

    Full Text Available In the current communication network for distribution in Chinese power grid systems, the fiber communication backbone network for distribution and the TD-LTE power private wireless backhaul network are both carried by the SDH optical transmission network, which also carries the communication networks of transformer substations and the main grid. As the data traffic of distribution communication and the TD-LTE power private wireless network has grown rapidly in recent years, it has a big impact on the bearing capacity of the SDH network, which is mainly used for main-grid communication at a high security level. This paper presents a fusion networking model which uses a multiple-layer PTN network as the unified bearer of the TD-LTE power private wireless backhaul network and the fiber communication backbone network for distribution. Network dataflow analysis shows that this model can greatly reduce the capacity pressure on the traditional SDH network as well as ensure the reliability of transmission for the distribution communication network and the TD-LTE power private wireless network.

  13. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and progress education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education as well as critically explore the perils of applying a data driven approach to education. Despite the claimed value of the...

  14. SuperGrid or SmartGrid: Competing strategies for large-scale integration of intermittent renewables?

    International Nuclear Information System (INIS)

    Blarke, Morten B.; Jenkins, Bryan M.

    2013-01-01

    This paper defines and compares two strategies for integrating intermittent renewables: SuperGrid and SmartGrid. While conventional energy policy suggests that these strategies may be implemented alongside each other, the paper identifies significant technological and socio-economic conflicts of interest between the two. The article identifies differences between a domestic strategy for the integration of intermittent renewables, vis-à-vis the SmartGrid, and a cross-system strategy, vis-à-vis the SuperGrid. Policy makers and transmission system operators must understand the need for both strategies to evolve in parallel, but in different territories, or with strategic integration, avoiding for one strategy to undermine the feasibility of the other. A strategic zoning strategy is introduced from which attentive societies as well as the global community stand to benefit. The analysis includes a paradigmatic case study from West Denmark which supports the hypothesis that these strategies are mutually exclusive. The case study shows that increasing cross-system transmission capacity jeopardizes the feasibility of SmartGrid technology investments. A political effort is required for establishing dedicated SmartGrid innovation zones, while also redefining infrastructure to avoid the narrow focus on grids and cables. SmartGrid Investment Trusts could be supported from reallocation of planned transmission grid investments to provide for the equitable development of SmartGrid strategies. - Highlights: • Compares SuperGrid and SmartGrid strategies for integrating intermittent renewables. • Identifies technological and socio-economic conflicts of interest between the two. • Proposes a strategic zoning strategy allowing for both strategies to evolve. • Presents a paradigmatic case study showing that strategies are mutually exclusive. • Proposes dedicated SmartGrid innovation zones and SmartGrid investment trusts

  15. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurement data – gathered by media industries for profit-making purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008) and the assumption that communication systems, language and behavior appear as texts, outputs, and discourses (data to be 'found'); big data then documents things that in earlier research required interviews and observations (data to be 'made') (Jensen 2014). However, web-measurement enterprises build audiences according to a commercial logic (boyd & Crawford 2011) and are as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and 'thick descriptions' (Geertz 1973), scholars need to question how ethnographic fieldwork might map the 'data not seen...

  16. Big Data over a 100 G network at Fermilab

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-01-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always been relying on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out of the laboratory of about 30 Gbit/s and on the Local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research and Development facility connected to the ESnet 100 G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R and D facility and the continuation of the evaluation program.

  17. Big Data Over a 100G Network at Fermilab

    Science.gov (United States)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; Dykstra, Dave; Slyz, Marko

    2014-06-01

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community as a pioneer in Big Data has always been relying on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this resulted regularly in peaks of data movement on the Wide area network (WAN) in and out of the laboratory of about 30 Gbit/s and on the Local area network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. This work presents the new R&D facility and the continuation of the evaluation program.

  18. Mapping of grid faults and grid codes [Wind turbines]

    Energy Technology Data Exchange (ETDEWEB)

    Iov, F. [Aalborg Univ., Inst. of Energy Technology (Denmark); Hansen, Anca D.; Soerensen, Poul; Cutululis, N.A. [Risoe National Lab. - DTU, Wind Enegy Dept., Roskilde (Denmark)

    2007-06-15

    The objective of this project is to investigate the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines which participate actively in the stabilisation of power systems. Practical experience shows that there is a need for such investigations. The grid connection requirements for wind turbines have increased significantly during the last 5-10 years. Especially the requirements for wind turbines to stay connected to the grid during and after voltage sags imply potential challenges in the design of wind turbines. These requirements pose challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview of the frequency of grid faults and of the grid connection requirements in different relevant countries is given in this report. The goal of this report is to present a mapping of different grid fault types and their frequency in different countries. The report also provides a detailed overview of the Low Voltage Ride-Through Capabilities for wind turbines in different relevant countries. The most relevant study cases for the quantification of the loads' impact on the wind turbines' lifetime are defined. (au)

  19. Bus.py: A GridLAB-D Communication Interface for Smart Distribution Grid Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Timothy M.; Palmintier, Bryan; Suryanarayanan, Siddharth; Maciejewski, Anthony A.; Siegel, Howard Jay

    2015-07-03

    As more Smart Grid technologies (e.g., distributed photovoltaic, spatially distributed electric vehicle charging) are integrated into distribution grids, static distribution simulations are no longer sufficient for performing modeling and analysis. GridLAB-D is an agent-based distribution system simulation environment that allows fine-grained end-user models, including geospatial and network topology detail. A problem exists in that, without outside intervention, once the GridLAB-D simulation begins execution, it will run to completion without allowing the real-time interaction of Smart Grid controls, such as home energy management systems and aggregator control. We address this lack of runtime interaction by designing a flexible communication interface, Bus.py (pronounced bus-dot-pie), that uses Python to pass messages between one or more GridLAB-D instances and a Smart Grid simulator. This work describes the design and implementation of Bus.py, discusses its usefulness in terms of some Smart Grid scenarios, and provides an example of an aggregator-based residential demand response system interacting with GridLAB-D through Bus.py. The small scale example demonstrates the validity of the interface and shows that an aggregator using said interface is able to control residential loads in GridLAB-D during runtime to cause a reduction in the peak load on the distribution system in (a) peak reduction and (b) time-of-use pricing cases.
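
    The abstract does not show the Bus.py API itself; the Python sketch below is a hypothetical reconstruction of the message-passing pattern it describes, with an in-process queue standing in for the real interface and all names invented:

      import queue

      # Hypothetical reconstruction, not the real Bus.py API: a tiny topic bus
      # passing messages between a GridLAB-D instance and an aggregator.
      class Bus:
          def __init__(self):
              self.topics = {}

          def publish(self, topic, message):
              self.topics.setdefault(topic, queue.Queue()).put(message)

          def consume(self, topic):
              q = self.topics.get(topic)
              return None if q is None or q.empty() else q.get()

      bus = Bus()
      # GridLAB-D side: report the feeder load at the current simulation timestep.
      bus.publish("feeder/load_kw", {"t": "12:00", "load_kw": 4200.0})

      # Aggregator side: read the load, send back a demand-response setpoint.
      state = bus.consume("feeder/load_kw")
      if state and state["load_kw"] > 4000.0:
          bus.publish("feeder/dr_signal", {"t": state["t"], "shed_kw": 300.0})
      print(bus.consume("feeder/dr_signal"))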

  20. Profitability of smart grid solutions applied in power grid

    Directory of Open Access Journals (Sweden)

    Katić Nenad A.

    2016-01-01

    Full Text Available The idea of a Smart Grid solution has been developing for years, as a complete solution for a power utility, consisting of different advanced technologies aimed at improving the efficiency of operation. The trend of implementing various smart systems continues, e.g. Energy Management Systems, Grid Automation Systems, Advanced Metering Infrastructure, smart power equipment, Distributed Energy Resources, Demand Response systems, etc. Furthermore, emerging technologies, such as energy storage, electric vehicles or distributed generators, become integrated in distribution networks and systems. Nowadays, the idea of a Smart Grid solution becomes more realistic through the full integration of all advanced operation technologies (OT within the IT environment, providing the complete digitalization of a utility (IT/OT integration. An overview of smart grid solutions and an estimation of investments, operation costs and possible benefits are presented in this article, with a discussion of the profitability of such systems.
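
    The article's own cost and benefit figures are not reproduced here; as a generic illustration of how the profitability of such a system might be screened, consider a net-present-value calculation (all figures invented):

      # Invented figures for illustration; not taken from the article.
      def npv(investment, annual_benefit, annual_opex, years, discount_rate):
          value = -investment
          for t in range(1, years + 1):
              value += (annual_benefit - annual_opex) / (1.0 + discount_rate) ** t
          return value

      value = npv(investment=2_000_000, annual_benefit=450_000,
                  annual_opex=120_000, years=10, discount_rate=0.08)
      print(f"NPV over 10 years: {value:,.0f} EUR")   # positive means profitable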

  1. Application of synchronous grid-connected controller in the wind-solar-storage micro grid

    OpenAIRE

    Li, Hua; Ren, Yongfeng; Li, Le; Luo, Zhenpeng

    2016-01-01

    Recently, there has been increasing interest in using distributed generators (DG) not only to inject power into the grid, but also to enhance the power quality. In this study, a space vector pulse width modulation (SVPWM) control method is proposed for a synchronous grid-connected controller in a wind-solar-storage micro grid. The method is based on an appropriate topology of the synchronous controller. The wind-solar-storage micro grid is controlled to reconnect to the grid synchronous...

  2. Discovery Monday: How to measure success

    CERN Multimedia

    2003-01-01

    The last Discovery Monday, which was carried out by the surveyors at CERN, was a great success, one they could not measure with their usual precision. The various entertaining as well as instructive experiments deserve a big "Thank you" to the SU group at the EST division. Children learn how to measure with the water level, like in Roman times. At CERN, photogrammetric techniques are used to precisely measure positions of complex ensembles like detectors. In Microcosm, photogrammetry is also invaluable to take the measure of visitors, who can no longer cheat about their size. They were measured to a precision of one tenth of a millimetre and received a certificate. The alignment of accelerators is one of the big challenges for the surveyors at CERN. But even with good instruments, you need to have good eyes!

  3. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    Science.gov (United States)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) to discuss the turbine-grid interaction. First, the details of Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect the unessential components. Meanwhile, combined with the vibration function of the turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is implemented to extract characteristic parameters of the different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of the turbine-grid interaction of a grid-connected wind turbine.
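
    Assuming the EMD stage has already produced the intrinsic mode functions, the Hilbert-transform step and the energy-ratio screening described above could look roughly like the following sketch (the IMF is synthetic and the details of the paper's fitting step are omitted):

      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0
      t = np.arange(0, 2.0, 1 / fs)
      # Synthetic IMF standing in for one EMD component of the output power.
      imf = 0.3 * np.sin(2 * np.pi * 15 * t) * (1 + 0.2 * np.sin(2 * np.pi * 0.5 * t))
      power = 5.0 + imf + 0.05 * np.random.default_rng(0).normal(size=t.size)

      analytic = hilbert(imf)                    # analytic signal of the IMF
      inst_amplitude = np.abs(analytic)
      inst_phase = np.unwrap(np.angle(analytic))
      inst_freq_hz = np.diff(inst_phase) * fs / (2 * np.pi)

      # Energy ratio of this IMF relative to the fluctuating part of the signal,
      # used to screen out unessential components (the threshold is an assumption).
      energy_ratio = np.sum(imf ** 2) / np.sum((power - power.mean()) ** 2)
      print(f"energy ratio {energy_ratio:.2f}, "
            f"mean instantaneous frequency {inst_freq_hz.mean():.1f} Hz")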

  4. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2017-01-01

    This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). Also the other chapters have been updated including new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.

  5. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used more and more in the future as a tool for everything that happens both online and offline. Online, of course, big data is ever-present and offers many advantages, being a real help for all consumers. In this paper we discuss Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We have also presented the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefits brought out in this paper are presented in the cloud section.

  6. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  7. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  8. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  9. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, while others warn us about its consequences. This special

  10. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policy makers are also actively engaging with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data

  11. Demonstrating the Superiority of the FCB Grid as a Tool for Students To Write Effective Advertising Strategy.

    Science.gov (United States)

    Yssel, Johan C.

    Although the FCB (Foote, Cone, & Belding) grid was never intended to serve as an educational tool, it can be applied successfully in advertising classes to address the three areas that S. E. Moriarty considers to be the minimum for writing strategy. To demonstrate the superiority of the FCB grid as a pedagogical tool, a study analyzed…

  12. Space weather and power grids: findings and outlook

    Science.gov (United States)

    Krausmann, Elisabeth; Andersson, Emmelie; Murtagh, William; Mitchison, Neil

    2014-05-01

    The impact of space weather on the power grid is a tangible and recurring threat with potentially serious consequences for society. Of particular concern is the long-distance high-voltage power grid, which is vulnerable to the effects of geomagnetic storms that can damage or destroy equipment or lead to grid collapse. In order to launch a dialogue on the topic and encourage authorities, regulators and operators in European countries and North America to learn from each other, the European Commission's Joint Research Centre, the Swedish Civil Contingencies Agency, and NOAA's Space Weather Prediction Centre, with the contribution of the UK Civil Contingencies Secretariat, jointly organised a workshop on the impact of extreme space weather on the power grid on 29-30 October 2013. Structured into six sessions, the workshop addressed space-weather phenomena and the dynamics of their impact on the grid, experiences with prediction and now-casting in the USA and in Europe, risk assessment and preparedness, as well as policy implications arising from increased awareness of the space-weather hazard. The main workshop conclusions are: • There is increasing awareness of the risk of space-weather impact among power-grid operators and regulators and some countries consider it a priority risk to be addressed. • The predictability of space-weather phenomena is still limited and relies, in part, on data from ageing satellites. NOAA is working with NASA to launch the DSCOVR solar wind spacecraft, the replacement for the ACE satellite, in early 2015. • In some countries, models and tools for GIC prediction and grid impact assessment have been developed in collaboration with national power grids but equipment vulnerability models are scarce. • Some countries have successfully hardened their transmission grids to space-weather impact and sustained relatively little or no damage due to currents induced by past moderate space-weather events. • While there is preparedness

  13. A View on Fuzzy Systems for Big Data: Progress and Opportunities

    Directory of Open Access Journals (Sweden)

    Alberto Fernandez

    2016-04-01

    Full Text Available Currently, we are witnessing a growing trend in the study and application of problems in the framework of Big Data. This is mainly due to the great advantages which come from knowledge extraction from a high volume of information. For this reason, we observe a migration of standard Data Mining systems towards a new functional paradigm that allows working with Big Data. By means of the MapReduce model and its different extensions, scalability can be successfully addressed, while maintaining a good fault tolerance during the execution of the algorithms. Among the different approaches used in Data Mining, those models based on fuzzy systems stand out for many applications. Among their advantages, we must stress the use of a representation close to natural language. Additionally, they use an inference model that allows a good adaptation to different scenarios, especially those with a given degree of uncertainty. Despite the success of this type of system, its migration to the Big Data environment in the different learning areas is still at a preliminary stage. In this paper, we will carry out an overview of the main existing proposals on the topic, analyzing the design of these models. Additionally, we will discuss the problems related to the data distribution and parallelization of the current algorithms, as well as their relationship with the fuzzy representation of the information. Finally, we will provide our view on the expectations for the future in this framework, regarding the design of methods based on fuzzy sets, as well as the open challenges on the topic.
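
    As a toy illustration of the MapReduce migration discussed above, the sketch below learns fuzzy rule weights independently on each data chunk in a map phase and merges the partial models in a reduce phase; the triangular membership functions and labels are invented simplifications:

      from collections import Counter
      from functools import reduce

      # Toy triangular memberships over a 0-100 measurement range (invented).
      def mu_low(x):  return max(0.0, min(1.0, (50.0 - x) / 50.0))
      def mu_high(x): return max(0.0, min(1.0, (x - 50.0) / 50.0))

      def map_phase(chunk):
          """Mapper: accumulate fuzzy rule weights on one data partition."""
          weights = Counter()
          for x, label in chunk:
              weights[("low", label)] += mu_low(x)
              weights[("high", label)] += mu_high(x)
          return weights

      def reduce_phase(w1, w2):
          """Reducer: merge partial rule weights (Counter addition)."""
          return w1 + w2

      chunks = [[(10, "normal"), (80, "alarm")],
                [(95, "alarm"), (20, "normal")]]
      model = reduce(reduce_phase, (map_phase(c) for c in chunks))
      print(model.most_common())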

  14. Evaluation of acoustic telemetry grids for determining aquatic animal movement and survival

    Science.gov (United States)

    Kraus, Richard T.; Holbrook, Christopher; Vandergoot, Christopher; Stewart, Taylor R.; Faust, Matthew D.; Watkinson, Douglas A.; Charles, Colin; Pegg, Mark; Enders, Eva C.; Krueger, Charles C.

    2018-01-01

    Acoustic telemetry studies have frequently prioritized linear configurations of hydrophone receivers, such as perpendicular from shorelines or across rivers, to detect the presence of tagged aquatic animals. This approach introduces unknown bias when receivers are stationed for convenience at geographic bottlenecks (e.g., at the mouth of an embayment or between islands) as opposed to deployments following a statistical sampling design. We evaluated two-dimensional acoustic receiver arrays (grids: receivers spread uniformly across space) as an alternative approach to provide estimates of survival, movement, and habitat use. Performance of variably-spaced receiver grids (5–25 km spacing) was evaluated by simulating (1) animal tracks as correlated random walks (speed: 0.1–0.9 m/s; turning angle standard deviation: 5–30 degrees); (2) variable tag transmission intervals along each track (nominal delay: 15–300 seconds); and (3) probability of detection of each transmission based on logistic detection range curves (midpoint: 200–1500 m). From simulations, we quantified i) time between successive detections on any receiver (detection time), ii) time between successive detections on different receivers (transit time), and iii) distance between successive detections on different receivers (transit distance). In the most restrictive detection range scenario (200 m), the 95th percentile of transit time was 3.2 days at 5 km grid spacing, 5.7 days at 7 km, and 15.2 days at 25 km; for the 1500 m detection range scenario, it was 0.1 days at 5 km, 0.5 days at 7 km, and 10.8 days at 25 km. These values represented upper bounds on the expected maximum time that an animal could go undetected. Comparison of the simulations with pilot studies on three fishes (walleye Sander vitreus, common carp Cyprinus carpio, and channel catfish Ictalurus punctatus) from two independent large lake ecosystems (lakes Erie and Winnipeg) revealed shorter detection and transit times than what
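
    The simulation design summarised above translates almost directly into code; in the sketch below the speed, turning-angle, delay and detection-curve midpoint values are taken from the stated parameter ranges, while the logistic slope and receiver position are assumptions:

      import numpy as np

      rng = np.random.default_rng(7)

      speed = 0.5                  # m/s, within the stated 0.1-0.9 range
      turn_sd = np.deg2rad(15)     # turning-angle SD, within 5-30 degrees
      delay = 60.0                 # s between transmissions, within 15-300
      midpoint = 800.0             # m, within the 200-1500 midpoint range
      steepness = 0.01             # assumed logistic slope (not in abstract)
      receiver = np.array([2500.0, 2500.0])   # assumed receiver location

      pos, heading, detections = np.array([0.0, 0.0]), 0.0, 0
      for _ in range(5000):                       # one transmission per step
          heading += rng.normal(0.0, turn_sd)     # correlated random walk
          pos = pos + speed * delay * np.array([np.cos(heading), np.sin(heading)])
          d = np.linalg.norm(pos - receiver)
          p_detect = 1.0 / (1.0 + np.exp(steepness * (d - midpoint)))
          detections += rng.random() < p_detect
      print(f"{detections} detections out of 5000 transmissions")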

  15. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. The challenges faced in big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been carried out. The functionalities of the tools have been demonstrated with examples in order to highlight their advantages and disadvantages.

  16. A Generic Danish Distribution Grid Model for Smart Grid Technology Testing

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Wu, Qiuwei; Østergaard, Jacob

    2012-01-01

    This paper describes the development of a generic Danish distribution grid model for smart grid technology testing based on the Bornholm power system. The frequency dependent network equivalent (FDNE) method has been used in order to accurately preserve the desired properties and characteristics. The developed model has been verified by comparing the transient response of the original Bornholm power system model and the developed generic model under significant fault conditions. The results clearly show that the equivalent generic distribution grid model retains the dynamic characteristics of the original system, and can be used as a generic Smart Grid benchmark model for testing purposes.

  17. Impact of grid impedance variations on harmonic emission of grid-connected inverters

    DEFF Research Database (Denmark)

    Hoseinzadeh, Bakhtyar; Bak, Claus Leth; Blaabjerg, Frede

    2017-01-01

    This paper addresses harmonic magnification due to resonance circuits resulting from the interaction between uncertain grid impedance and the converter. The source of the harmonics may be either the grid or the inverter. It is demonstrated that unknown and unpredictable grid impedance may result in variable

  18. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  19. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  20. Beyond grid security

    International Nuclear Information System (INIS)

    Hoeft, B; Epting, U; Koenig, T

    2008-01-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will present briefly the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls

  1. Grid simulator for power quality assessment of micro-grids

    DEFF Research Database (Denmark)

    Carrasco, Joaquin Eloy Garcia; Vasquez, Juan Carlos; Guerrero, Josep M.

    2013-01-01

    In this study, a grid simulator based on a back-to-back inverter topology with resonant controllers is presented. The simulator is able to generate three-phase voltages for a range of amplitudes and frequencies with different types of perturbations, such as voltage sags, steady-state unbalanced voltages, low-order harmonics and flicker. The aim of this equipment is to test the performance of a given system under such distorted voltages. A prototype of the simulator, consisting of two inverters connected back-to-back to a 380 V three-phase grid and feeding a micro-grid composed of two inverter-interfaced distributed generators and a critical load, was built and tested. A set of experimental results for linear purely resistive loads, non-linear loads and current-controlled inverters is presented to prove the capabilities of the simulator. Finally, a case study is presented by testing a micro-grid.
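
    The controller parameters are not given in the abstract; the sketch below only computes the frequency response of a generic proportional-resonant controller, G(s) = Kp + Kr*s/(s^2 + w0^2), whose large gain at the resonant frequency w0 is what lets such a simulator synthesise or reject selected 50 Hz components (all values assumed):

      import numpy as np
      from scipy import signal

      Kp, Kr = 1.0, 200.0            # assumed proportional and resonant gains
      w0 = 2 * np.pi * 50.0          # resonant frequency, rad/s (50 Hz grid)

      # G(s) = Kp + Kr*s/(s^2 + w0^2) = (Kp*s^2 + Kr*s + Kp*w0^2)/(s^2 + w0^2)
      G = signal.TransferFunction([Kp, Kr, Kp * w0**2], [1.0, 0.0, w0**2])
      w, mag_db, _ = signal.bode(G, w=np.logspace(1, 4, 500))
      print(f"peak gain {mag_db.max():.1f} dB "
            f"near {w[mag_db.argmax()] / (2 * np.pi):.1f} Hz")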

  2. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, as well as the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  3. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  4. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than ten years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher values in empathy and on the Big Five factors, with the exception of the factor Neuroticism. They found relationships between empathy and the domains of Openness, Agreeableness, Conscientiousness and Extraversion. In our data, women have significantly higher values in both empathy and the Big Five...

  5. Smart grids - French Expertise

    International Nuclear Information System (INIS)

    2015-11-01

    The adaptation of electrical systems is the focus of major work worldwide. Bringing electricity to new territories, modernizing existing electricity grids, implementing energy efficiency policies and deploying renewable energies, developing new uses for electricity, introducing electric vehicles - these are the challenges facing a multitude of regions and countries. Smart Grids are the result of the convergence of electrical systems technologies with information and communications technologies. They play a key role in addressing the above challenges. Smart Grid development is a major priority for both public and private-sector actors in France. The experience of French companies has grown with the current French electricity system, a system that already shows extensive levels of 'intelligence', efficiency and competitiveness. French expertise also leverages substantial competence in terms of 'systems engineering', and can provide a tailored response to meet all sorts of needs. French products and services span all the technical and commercial building blocks that make up the Smart Grid value chain. They address the following issues: Improving the use and valuation of renewable energies and decentralized means of production, by optimizing the balance between generation and consumption. Strengthening the intelligence of the transmission and distribution grids: developing 'Supergrid', digitizing substations in transmission networks, and automating the distribution grids are the focus of a great many projects designed to reinforce the 'self-healing' capacity of the grid. Improving the valuation of decentralized flexibilities: this involves, among others, deploying smart meters, reinforcing active energy efficiency measures, and boosting consumers' contribution to grid balancing, via practices such as demand response which implies the aggregation of flexibility among residential, business, and/or industrial sites. Addressing current technological challenges, in

  6. Power grid complex network evolutions for the smart grid

    NARCIS (Netherlands)

    Pagani, Giuliano Andrea; Aiello, Marco

    2014-01-01

    The shift towards an energy grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the electricity distribution infrastructure. Today the grid is a hierarchical one delivering energy from large scale facilities to end-users. Tomorrow it will be a

  7. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing bacterial immunoglobulin-like (Big) domains. The function of proteins containing the Big fold is not known. Based on possible similarities between the immunoglobulin and βγ-crystallin folds, we here explore the important question of whether Ca²+ binds to Big domains, which would indicate a novel functional role for proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th repeats (Lig A10); and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have a β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to a Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, the Big motif binds Ca²+. This work thus sets up a strong possibility for classifying proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
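
    For a simple single-site binding model (an assumption; the abstract does not state the stoichiometry), the reported dissociation constants translate directly into fractional occupancy, theta = [Ca2+]/(Kd + [Ca2+]):

      # theta = [Ca2+] / (Kd + [Ca2+]) for one site; Kd = 3 uM sits in the
      # reported 2-4 uM range.
      def occupancy(ca_uM, kd_uM=3.0):
          return ca_uM / (kd_uM + ca_uM)

      for ca in (0.1, 1.0, 3.0, 10.0, 100.0):    # free Ca2+, micromolar
          print(f"[Ca2+] = {ca:6.1f} uM -> theta = {occupancy(ca):.2f}")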

  8. Scaling-Up Successfully: Pathways to Replication for Educational NGOs

    Science.gov (United States)

    Jowett, Alice; Dyer, Caroline

    2012-01-01

    Non-government organisations (NGOs) are big players in international development, critical to the achievement of the Millennium Development Goals (MDGs) and constantly under pressure to "achieve more". Scaling-up their initiatives successfully and sustainably can be an efficient and cost effective way for NGOs to increase their impact across a…

  9. Earth Science Data Analysis in the Era of Big Data

    Science.gov (United States)

    Kuo, K.-S.; Clune, T. L.; Ramachandran, R.

    2014-01-01

    Anyone with even a cursory interest in information technology cannot help but recognize that "Big Data" is one of the most fashionable catchphrases of late. From accurate voice and facial recognition, language translation, and airfare prediction and comparison, to monitoring the real-time spread of flu, Big Data techniques have been applied to many seemingly intractable problems with spectacular successes. They appear to be a rewarding way to approach many currently unsolved problems. Few fields of research can claim a longer history with problems involving voluminous data than Earth science. The problems we are facing today with our Earth's future are more complex and carry potentially graver consequences than the examples given above. How has our climate changed? Beside natural variations, what is causing these changes? What are the processes involved and through what mechanisms are these connected? How will they impact life as we know it? In attempts to answer these questions, we have resorted to observations and numerical simulations with ever-finer resolutions, which continue to feed the "data deluge." Plausibly, many Earth scientists are wondering: How will Big Data technologies benefit Earth science research? As an example from the global water cycle, one subdomain among many in Earth science, how would these technologies accelerate the analysis of decades of global precipitation to ascertain the changes in its characteristics, to validate these changes in predictive climate models, and to infer the implications of these changes to ecosystems, economies, and public health? Earth science researchers need a viable way to harness the power of Big Data technologies to analyze large volumes and varieties of data with velocity and veracity. Beyond providing speedy data analysis capabilities, Big Data technologies can also play a crucial, albeit indirect, role in boosting scientific productivity by facilitating effective collaboration within an analysis environment

  10. Grid interoperability: the interoperations cookbook

    Energy Technology Data Exchange (ETDEWEB)

    Field, L; Schulz, M [CERN (Switzerland)], E-mail: Laurence.Field@cern.ch, E-mail: Markus.Schulz@cern.ch

    2008-07-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards.

  11. Grid interoperability: the interoperations cookbook

    International Nuclear Information System (INIS)

    Field, L; Schulz, M

    2008-01-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards

  12. Calculation approaches for grid usage fees to influence the load curve in the distribution grid level

    International Nuclear Information System (INIS)

    Illing, Bjoern

    2014-01-01

    Dominated by energy policy, the decentralized German energy market is changing. One major target of the government is to increase the contribution of renewable generation to gross electricity consumption. Achieving this target brings disadvantages such as an increased need for capacity management. Load reduction and variable grid fees offer the grid operator ways to realize capacity management by influencing the load profile. The evolution of current grid fees towards more causality is required to adopt these approaches. Two calculation approaches are developed in this work. On the one hand, multivariable grid fees keep the current components, the demand charge and the energy charge. In addition to grid costs, grid-load-dependent parameters such as the amount of decentralized feed-in, time and local circumstances, as well as grid capacities are considered. On the other hand, the grid fee flat rate represents a demand-based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study, the effects of the grid fees on the load profile in the low-voltage grid are simulated. Consumption is represented by different behaviour models and the results are scaled to the benchmark grid area. The resulting load curve is analyzed concerning the effects of peak load reduction as well as the integration of renewable energy sources. Additionally, the combined effect of grid fees and electricity tariffs is evaluated. Finally, the work discusses the launch of such grid fees amid the tensions of politics, legislation and grid operation. The results of this work are two calculation approaches designed for grid operators to define grid fees. Multivariable grid fees are based on the current calculation scheme, whereby demand and energy charges are weighted by time, locational and load-related dependencies. The grid fee flat rate defines a limit on demand extraction. Different demand levels
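
    The thesis's actual weighting scheme is not reproduced in the abstract; the sketch below only illustrates how a multivariable grid fee might weight the existing demand and energy charges by time, local and load-related factors (all coefficients invented):

      # Illustrative only: invented coefficients, not the thesis's actual scheme.
      def grid_fee(demand_kw, energy_kwh, hour, feeder_load_pct, local_pv_share):
          demand_charge_eur_kw = 58.0        # assumed annual demand charge
          energy_charge_eur_kwh = 0.045      # assumed energy charge
          time_factor = 1.3 if 17 <= hour <= 20 else 0.9    # evening peak window
          load_factor = 1.0 + max(0.0, (feeder_load_pct - 80.0) / 100.0)
          local_factor = 1.0 - 0.2 * local_pv_share         # reward local feed-in
          scale = time_factor * load_factor * local_factor
          return (demand_kw * demand_charge_eur_kw
                  + energy_kwh * energy_charge_eur_kwh) * scale

      print(f"{grid_fee(5.0, 3500.0, 18, 90.0, 0.3):.2f} EUR per year")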

  13. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large-scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data, add value both to the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  14. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  15. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  16. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
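
    The inequality-join bottleneck mentioned above can be illustrated with a toy self-join. The sketch below is only in the spirit of IEJoin, not the published algorithm: sorting on one attribute turns the first inequality into positional order, leaving a single comparison per candidate pair, whereas the real algorithm goes further with permutation and bit arrays. All data are made up.

        # Toy inequality self-join: find pairs (i, j) with a_i < a_j AND
        # b_i > b_j. Sorting on a means every predecessor in the sorted
        # order already satisfies a_i <= a_j, so only b must be compared.
        def inequality_self_join(rows):
            """rows: list of (a, b) tuples; returns qualifying (i, j)."""
            order = sorted(range(len(rows)), key=lambda k: rows[k][0])
            result = []
            for pos_j, j in enumerate(order):
                for i in order[:pos_j]:
                    if rows[i][0] < rows[j][0] and rows[i][1] > rows[j][1]:
                        result.append((i, j))
            return result

        # Example: pairs where the first row has a smaller a but larger b.
        print(inequality_self_join([(100, 6), (140, 11), (80, 10), (90, 5)]))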

  17. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  18. WE-H-BRB-03: Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success

    Energy Technology Data Exchange (ETDEWEB)

    McNutt, T. [Johns Hopkins University (United States)

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research To optimize our current data collection by adopting new strategies from outside radiation oncology To determine what new knowledge big data can provide for clinical decision support for personalized medicine L. Xing, NIH/NCI Google Inc.

  19. WE-H-BRB-03: Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success

    International Nuclear Information System (INIS)

    McNutt, T.

    2016-01-01

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research To optimize our current data collection by adopting new strategies from outside radiation oncology To determine what new knowledge big data can provide for clinical decision support for personalized medicine L. Xing, NIH/NCI Google Inc.

  20. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  1. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  2. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  3. Emissions & Generation Resource Integrated Database (eGRID), eGRID2012

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions & Generation Resource Integrated Database (eGRID) covers emissions; emissions rates; net generation; resource mix; and many other attributes. eGRID2012 Version 1.0 is the eighth edition of eGRID, which contains the...

  4. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, big-gear abrasion and breakage occur often, affecting normal production and causing unnecessary economic loss. An intelligent test method for worn big gears is put forward, aimed mainly at the constraints of high production cost, long production cycles and labour-intensive manual repair welding. The measurement equations of the involute spur gear are transformed from polar coordinates into rectangular coordinates. The measurement principle for big-gear abrasion is introduced, a detection principle diagram is given, and the realization of the detection route is described. An OADM12 laser sensor is selected, and detection of the big-gear abrasion area is realized by the detection mechanism. Measured data of unworn and worn gears are fed into a calculation program written in Visual Basic, from which the gear abrasion quantity can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
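
    The abstract refers to transforming the involute measurement equations from polar to rectangular coordinates without reproducing them; for reference, the standard parametrization of the involute of a base circle of radius r_b (which the paper presumably builds on) is

        \[
          x(t) = r_b(\cos t + t\sin t), \qquad
          y(t) = r_b(\sin t - t\cos t),
        \]
        with the equivalent polar (involute-function) form
        \[
          r(\alpha) = \frac{r_b}{\cos\alpha}, \qquad
          \theta(\alpha) = \tan\alpha - \alpha = \operatorname{inv}\alpha .
        \]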

  5. The case for the relativistic hot big bang cosmology

    Science.gov (United States)

    Peebles, P. J. E.; Schramm, D. N.; Kron, R. G.; Turner, E. L.

    1991-01-01

    What has become the standard model in cosmology is described, and some highlights are presented of the now substantial range of evidence that most cosmologists believe convincingly establishes this model, the relativistic hot big bang cosmology. It is shown that this model has yielded a set of interpretations and successful predictions that substantially outnumber the elements used in devising the theory, with no well-established empirical contradictions. Brief speculations are made on how the open puzzles and work in progress might affect future developments in this field.

  6. Small data, data infrastructures and big data (Working Paper 1)

    OpenAIRE

    Kitchin, Rob; Lauriault, Tracey P.

    2014-01-01

    The production of academic knowledge has progressed for the past few centuries using small data studies characterized by sampled data generated to answer specific questions. It is a strategy that has been remarkably successful, enabling the sciences, social sciences and humanities to advance in leaps and bounds. This approach is presently being challenged by the development of big data. Small data studies will, however, continue to be important in the future because of their utility in answer...

  7. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves - e.g. in social networks - and digitalization keeps increasing. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  8. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  9. Project Scheduling Heuristics-Based Standard PSO for Task-Resource Assignment in Heterogeneous Grid

    Directory of Open Access Journals (Sweden)

    Ruey-Maw Chen

    2011-01-01

    Full Text Available The task scheduling problem has been widely studied for assigning resources to tasks in heterogeneous grid environments. Effective task scheduling is important for the performance of grid computing, yet the problem is NP-complete. Hence, this investigation introduces a so-called "standard" particle swarm optimization (PSO) metaheuristic approach to solve task scheduling problems in grids efficiently. Two promising heuristics based on multimode project scheduling are proposed to aid the search: the best-performance-resource heuristic and the latest-finish-time heuristic. Applied to the PSO scheme, these two heuristics speed up the particles' search and improve the ability to find a sound schedule. Both a global communication topology and a local ring communication topology are investigated. Simulation results demonstrate that the proposed approach can successfully solve task-resource assignment problems in grid computing and similar scheduling problems.
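
    A minimal sketch of PSO for task-resource assignment follows, assuming the common trick of rounding continuous particle positions to resource indices and scoring a particle by makespan. It is not the paper's exact scheme: the two heuristics and the communication topologies are omitted, and all task sizes and resource speeds are hypothetical.

        import random

        TASK_LOAD = [4, 7, 2, 9, 5, 3]   # hypothetical task sizes
        SPEED = [1.0, 2.0, 4.0]          # hypothetical resource speeds

        def makespan(assign):
            """Completion time of the busiest resource."""
            busy = [0.0] * len(SPEED)
            for task, res in enumerate(assign):
                busy[res] += TASK_LOAD[task] / SPEED[res]
            return max(busy)

        def decode(pos):
            """Round a continuous position to a resource index per task."""
            return [min(int(p), len(SPEED) - 1) for p in pos]

        def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            dim, hi = len(TASK_LOAD), len(SPEED)
            X = [[random.uniform(0, hi) for _ in range(dim)] for _ in range(n_particles)]
            V = [[0.0] * dim for _ in range(n_particles)]
            pbest = [x[:] for x in X]
            gbest = min(pbest, key=lambda x: makespan(decode(x)))[:]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        V[i][d] = (w * V[i][d]
                                   + c1 * random.random() * (pbest[i][d] - X[i][d])
                                   + c2 * random.random() * (gbest[d] - X[i][d]))
                        X[i][d] = min(max(X[i][d] + V[i][d], 0.0), hi - 1e-9)
                    if makespan(decode(X[i])) < makespan(decode(pbest[i])):
                        pbest[i] = X[i][:]
                gbest = min(pbest + [gbest], key=lambda x: makespan(decode(x)))[:]
            return decode(gbest), makespan(decode(gbest))

        print(pso())  # e.g. an assignment routing big tasks to fast resources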

  10. Raman spectroscopy, "big data", and local heterogeneity of solid state synthesized lithium titanate

    Science.gov (United States)

    Pelegov, Dmitry V.; Slautin, Boris N.; Gorshkov, Vadim S.; Zelenovskiy, Pavel S.; Kiselev, Evgeny A.; Kholkin, Andrei L.; Shur, Vladimir Ya.

    2017-04-01

    Existence of defects is an inherent property of real materials. Due to the explicit correlation between defect concentration and conductivity, it is important to understand the level and origins of structural heterogeneity for any particulate electrode material. Poorly conductive lithium titanate Li4Ti5O12 (LTO), widely used in batteries for grids and electric buses, needs this like no other material. In this work, the structural heterogeneity of compacted lithium titanate is measured locally at 100 different points by the conventional micro-Raman technique, characterized in terms of the variation of Raman spectral parameters, and interpreted using our version of "big data" analysis. This very simple approach with automated measurement and treatment has allowed us to demonstrate the inherent heterogeneity of solid-state synthesized LTO and attribute it to the existence of lithium and oxygen vacancies. The proposed approach can be used as a fast, convenient, and cost-effective defect-probing tool for a wide range of materials with defect-sensitive properties. In the case of LTO, such an approach can be used to increase charge/discharge rates by synthesizing materials with controlled nonstoichiometry. New approaches to the solid state synthesis of LTO, suitable for high-power applications, will help to significantly reduce the cost of batteries for heavy-duty electric vehicles and smart grids.
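
    As a hedged illustration of this point-by-point treatment (not the authors' pipeline), the sketch below reduces a map of spectra to the spread of one peak parameter, the kind of heterogeneity measure the abstract describes; the band position and noise level are synthetic.

        import random, statistics

        random.seed(0)
        # Hypothetical positions (cm^-1) of one LTO Raman band at 100 points.
        peak_positions = [random.gauss(232.0, 1.5) for _ in range(100)]

        mean_pos = statistics.mean(peak_positions)
        spread = statistics.stdev(peak_positions)
        # A larger spread across points indicates stronger local
        # heterogeneity, e.g. from varying vacancy concentrations.
        print(f"mean position {mean_pos:.1f} cm^-1, spread {spread:.2f} cm^-1")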

  11. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.
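
    As a hedged illustration of the read-across idea itself (similarity-based intrapolation), and not of the REACH-across tool, the sketch below predicts a property of a query substance as the similarity-weighted mean over its most similar neighbours. Fingerprints and values are hypothetical.

        def tanimoto(a, b):
            """Similarity of two binary fingerprint sets."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def read_across(query_fp, dataset, k=3):
            """dataset: list of (fingerprint_set, measured_value)."""
            ranked = sorted(dataset, key=lambda d: tanimoto(query_fp, d[0]),
                            reverse=True)[:k]
            weights = [tanimoto(query_fp, fp) for fp, _ in ranked]
            if sum(weights) == 0:
                return None  # no similar substance: no prediction
            return sum(w * v for w, (_, v) in zip(weights, ranked)) / sum(weights)

        data = [({1, 2, 3}, 0.8), ({2, 3, 4}, 0.6), ({7, 8}, 0.1)]
        print(read_across({1, 2, 4}, data))  # weighted mean of the two neighbours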

  12. Performance studies of CMS workflows using Big Data technologies

    OpenAIRE

    Ambroz, Luca; Bonacorsi, Daniele; Grandi, Claudio

    2017-01-01

    At the Large Hadron Collider (LHC), more than 30 petabytes of data are produced from particle collisions every year of data taking. The data processing requires large volumes of simulated events through Monte Carlo techniques. Furthermore, physics analysis implies daily access to derived data formats by hundreds of users. The Worldwide LHC Computing Grid (WLCG) - an international collaboration involving personnel and computing centers worldwide - is successfully coping with these challenges, ...

  13. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  14. Study of the magnetic spectrograph BIG KARL on image errors and their causes

    International Nuclear Information System (INIS)

    Paul, D.

    1987-12-01

    The ion-optical aberrations of the QQDDQ spectrograph BIG KARL are measured and analyzed in order to improve resolution and transmission at large acceptance. The entrance phase space is scanned in a Cartesian grid by means of a narrowly collimated beam of scattered deuterons, and the distortions due to the nonlinear transformation by the system are measured in the detector plane. A model is developed which describes the measured distortions and allows the nonlinearities in the system responsible for them to be located. It gives a good understanding of geometrical nonlinearities up to fifth order and chromatic nonlinearities up to third order. To confirm the model, the magnetic field in the quadrupoles is measured, including the fringe field region. Furthermore, nonlinearities appearing in ideal magnets are discussed and compared to experimental data. (orig.) [de]

  15. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  16. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing worldwide, and Latin America has been no different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and their use in the private sector. (author)

  17. High density grids

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Aina E.; Baxter, Elizabeth L.

    2018-01-16

    An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.

  18. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  19. Comparison tomography relocation hypocenter grid search and guided grid search method in Java island

    International Nuclear Information System (INIS)

    Nurdian, S. W.; Adu, N.; Palupi, I. R.; Raharjo, W.

    2016-01-01

    The main data in this research are earthquake records from 1952 to 2012, comprising 9162 P-wave arrivals from 2426 events recorded by 30 stations located around Java island. Hypocenters are relocated using the grid search and the guided grid search methods, and the relocated hypocenters then serve as input to a pseudo-bending tomographic inversion, which can be used to identify the subsurface velocity distribution. The relocation results of the two methods after tomography are examined both locally and globally. In the local area, the grid search method gives better results than the guided grid search, judged against the geology of the research area. For a broad area, however, the guided grid search method performs better, because the velocity variation is more diverse and in better accordance with local geological conditions. (paper)
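
    A minimal sketch of grid-search hypocenter location follows, assuming a uniform velocity and flat geometry; the paper's actual implementation (and the guided variant, which refines around the current best node) is far more elaborate. Station coordinates, the velocity, and the picks are hypothetical.

        import math

        V_P = 6.0  # assumed uniform P velocity, km/s

        def travel_time(src, sta):
            return math.dist(src, sta) / V_P

        def grid_search(stations, picks, x_range, y_range, z_range, step=1.0):
            """Return the grid node minimizing the RMS residual between
            observed picks and predicted travel times."""
            best, best_rms = None, float("inf")
            x = x_range[0]
            while x <= x_range[1]:
                y = y_range[0]
                while y <= y_range[1]:
                    z = z_range[0]
                    while z <= z_range[1]:
                        src = (x, y, z)
                        # Estimate origin time as mean observed-minus-predicted.
                        res = [t - travel_time(src, s)
                               for s, t in zip(stations, picks)]
                        t0 = sum(res) / len(res)
                        rms = math.sqrt(sum((r - t0) ** 2 for r in res) / len(res))
                        if rms < best_rms:
                            best, best_rms = src, rms
                        z += step
                    y += step
                x += step
            return best, best_rms

        stations = [(0, 0, 0), (30, 0, 0), (0, 30, 0), (30, 30, 0)]
        picks = [travel_time((12, 9, 10), s) for s in stations]  # synthetic event
        print(grid_search(stations, picks, (0, 30), (0, 30), (0, 20), step=3.0))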

  20. Photovoltaic system connected to electric grid in Brazil; Sistemas fotovoltaicos conectados a rede eletrica no Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Fabiana Karla de Oliveira Martins [Universidade Federal Rural do Semi-Arido (UFERSA), Mossoro, RN (Brazil)], email: fkv@ufersa.edu.br; Gomes, Rodolfo Dourado Maia; Jannuzzi, Gilberto De Martino [International Energy Initiative (IEI), Campinas, SP (Brazil)], email: rodolfo@iei-la.org

    2010-07-01

    Brazil faces in the coming decades the big challenge of finding solutions to meet its growing need for energy services while satisfying criteria of economics, security of supply, public health, universal energy access and environmental sustainability. Growing environmental pressure on the exploitation of the hydropower potential of the Amazon region, and energy sources ever more distant from the customers' load centers, are some of the aspects driving the search for alternatives. Several countries are betting on grid-connected photovoltaic (PV) systems. In Brazil, initiatives to promote the use of PV energy are still few: even though the country is endowed with great solar energy potential, initiatives to create and consolidate a market for the technology and to develop a national industry for equipment and services remain incipient, and the lack of legislation and regulation is one of the barriers pointed out. The objective of this report is therefore to assess why the country does not have specific legislation promoting grid-connected PV systems. To this end, the grid-connected PV systems installed in Brazil and the existing incentives are identified. The methodology was based on a literature review and on specific questionnaires sent to the Ministry of Mines and Energy, researchers and one power distribution company. (author)