WorldWideScience

Sample records for big wells field

  1. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it re-expands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included, the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime.
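
    For orientation, the big-crunch-big-bang background in this kind of model is commonly taken to be a compactified Milne spacetime; the metric below is that standard choice, stated here as an assumption since the record itself does not spell it out:

    ```latex
    % Compactified Milne background: one extra dimension of circumference
    % |t|\theta_0 shrinks linearly to zero at t = 0 and re-expands at the
    % same rate, as the abstract describes.
    ds^2 = -dt^2 + t^2\, d\theta^2 + \delta_{ij}\, dx^i dx^j,
    \qquad \theta \sim \theta + \theta_0, \quad -\infty < t < \infty .
    ```

    Quantum fields are then evolved through t = 0, and the matching rule relates the mode functions on the two sides of the singularity.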

  2. 2013 strategic petroleum reserve big hill well integrity grading report.

    Energy Technology Data Exchange (ETDEWEB)

    Lord, David L.; Roberts, Barry L.; Lord, Anna C. Snider; Bettin, Giorgia; Sobolik, Steven Ronald; Park, Byoung Yoon; Rudeen, David Keith; Eldredge, Lisa; Wynn, Karen; Checkai, Dean; Perry, James Thomas

    2014-02-01

    This report summarizes the work performed in developing a framework for prioritizing cavern access wells for remediation and monitoring at the Big Hill Strategic Petroleum Reserve site. The framework was then applied to all 28 wells at the Big Hill site, with each well receiving a grade for remediation and monitoring. Numerous factors affecting well integrity were incorporated into the grading framework, including casing survey results, cavern pressure history, results from geomechanical simulations, and site geologic factors. The framework was developed so as to be applicable to all four of the Strategic Petroleum Reserve sites.

  3. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    Energy Technology Data Exchange (ETDEWEB)

    Kamenshchik, A. Yu. [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46, 40126 Bologna (Italy) and L.D. Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Kosygin str. 2, 119334 Moscow (Russian Federation)]; Manti, S. [Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa (Italy)]

    2013-02-21

    We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing a Big Brake singularity: a model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus there is a kind of classical-quantum correspondence: soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not.

  4. Classical and quantum Big Brake cosmology for scalar field and tachyonic models

    International Nuclear Information System (INIS)

    Kamenshchik, A. Yu.; Manti, S.

    2013-01-01

    We study the relation between cosmological singularities in classical and quantum theory, comparing the classical and quantum dynamics in some models possessing a Big Brake singularity: a model based on a scalar field and two models based on a tachyon-pseudo-tachyon field. It is shown that the effect of quantum avoidance is absent for soft singularities of the Big Brake type, while it is present for the Big Bang and Big Crunch singularities. Thus there is a kind of classical-quantum correspondence: soft singularities are traversable in classical cosmology, while the strong Big Bang and Big Crunch singularities are not.

  5. The Big Five Personality Factors and Application Fields

    Directory of Open Access Journals (Sweden)

    Agnė Matuliauskaitė

    2011-07-01

    The Big Five factors are used in many research fields. The literature survey showed that personality trait theory has been used to study and explain relations with a variety of variables. The article briefly describes methods that can help identify the Big Five factors and considers a model for applying them in personnel selection. The paper reviews scientific studies assessing relations between the Big Five factors and variables such as job performance, academic performance, student knowledge management and evaluation. (Article in Lithuanian.)

  6. Unique Associations Between Big Five Personality Aspects and Multiple Dimensions of Well-Being.

    Science.gov (United States)

    Sun, Jessie; Kaufman, Scott Barry; Smillie, Luke D

    2018-04-01

    Personality traits are associated with well-being, but the precise correlates vary across well-being dimensions and within each Big Five domain. This study is the first to examine the unique associations between the Big Five aspects (rather than facets) and multiple well-being dimensions. Two samples of U.S. participants (total N = 706; mean age = 36.17; 54% female) recruited via Amazon's Mechanical Turk completed measures of the Big Five aspects and subjective, psychological, and PERMA well-being. Within each domain, one aspect was more strongly associated with the well-being variables than the other. Enthusiasm and Withdrawal were strongly associated with a broad range of well-being variables, but other aspects of personality also had idiosyncratic associations with distinct forms of positive functioning (e.g., Compassion with positive relationships, Industriousness with accomplishment, and Intellect with personal growth). An aspect-level analysis provides an optimal (i.e., parsimonious yet sufficiently comprehensive) framework for describing the relation between personality traits and multiple ways of thriving in life. © 2016 Wiley Periodicals, Inc.

  7. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back-reaction and light winding modes when interactions are taken into account.

  8. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns for your bottom line. Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values, and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportunities.

  9. Neural network prediction of carbonate lithofacies from well logs, Big Bow and Sand Arroyo Creek fields, Southwest Kansas

    Science.gov (United States)

    Qi, L.; Carr, T.R.

    2006-01-01

    In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation of network parameters, and were then used to predict lithofacies on the whole data set of 2023 half-foot intervals from the 10 cored wells, with a selected network size of 35 and a damping parameter of 0.01. Compared with the actual lithofacies, the predictions display absolute accuracies of 70.37-90.82%; incorporating adjoining (within-one) lithofacies improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies, and the probability of each lithofacies, at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections, and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and distribution of rock properties. Future work
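
    A minimal sketch of the kind of classifier the abstract describes, using scikit-learn as a stand-in for the authors' code; the six predictor curves, the hidden-layer size of 35, and the damping (here L2 regularization) parameter of 0.01 come from the abstract, while the file names and data layout are assumptions:

    ```python
    # Sketch: single hidden-layer network predicting lithofacies from logs.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical layout: one row per half-foot interval in the 10 cored
    # wells, six predictor log curves per row, one core-described facies code.
    X = np.loadtxt("cored_well_logs.csv", delimiter=",")  # shape (2023, 6)
    y = np.loadtxt("core_facies.csv", dtype=int)          # facies per interval

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(35,),  # network size 35
                      alpha=0.01,                # damping parameter
                      max_iter=2000, random_state=0),
    )
    # Cross-validation stands in for the authors' automated model selection.
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
    ```

    Trained this way, the model can be batch-applied to the log curves of the 90 uncored wells to emit a facies code and probability per half-foot interval.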

  10. BigFoot Field Data for North American Sites, 1999-2003

    Data.gov (United States)

    National Aeronautics and Space Administration — The BigFoot project gathered field data for selected EOS Land Validation Sites in North America from 1999 to 2003. Data collected and derived for varying intervals...

  11. BigFoot Field Data for North American Sites, 1999-2003

    Data.gov (United States)

    National Aeronautics and Space Administration — The BigFoot project gathered field data for selected EOS Land Validation Sites in North America from 1999 to 2003. Data collected and derived for varying...

  12. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys, using a Schmidt telescope with an objective prism, produced a list of about 3000 UV-excess Markarian galaxies, and these objects have stimulated an enormous amount of further study, appearing in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys have been mentioned in over 1100 publications since 2002. Both ground- and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage, and therefore tends to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management, and big data places additional challenging requirements on it. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science: the current model of data centres is obsolete. In the era of big data, the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A standalone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation, our successes and failures, and how we plan in the next decade to create a workable and adaptable solution to support big data science.

  13. The coal deposits of the Alkali Butte, the Big Sand Draw, and the Beaver Creek fields, Fremont County, Wyoming

    Science.gov (United States)

    Thompson, Raymond M.; White, Vincent L.

    1952-01-01

    Large coal reserves are present in three areas located between 12 and 20 miles southeast of Riverton, Fremont County, central Wyoming. Coal in two of these areas, the Alkali Butte coal field and the Big Sand Draw coal field, is exposed at the surface and has been developed to some extent by underground mining. The Beaver Creek coal field is known only from drill cuttings and cores from wells drilled for oil and gas in the Beaver Creek oil and gas field. These three coal areas can be reached most readily from Riverton, Wyo. State Route 320 crosses Wind River about 1 mile south of Riverton. A few hundred yards south of the river a graveled road branches off the highway and extends south across the Popo Agie River toward the Sand Draw oil and gas field. About 8 miles south of the highway along the Sand Draw road, a dirt road bears east, and along this road it is about 12 miles to the Bell coal mine in the Alkali Butte coal field. Three miles southeast of the Alkali Butte turn-off, 3 miles of oiled road extends southwest into the Beaver Creek oil and gas field. About 6 miles southeast of the Beaver Creek turn-off, in the valley of Little Sand Draw Creek, a dirt road extends east 1 mile and then southeast 1 mile to the Downey mine in the Big Sand Draw coal field. The location of these coal fields is shown on figure 1, along with their relationship to the Wind River basin and other coal fields, place localities, and wells mentioned in this report. The coal in the Alkali Butte coal field is exposed partly on the Wind River Indian Reservation in Tps. 1 and 2 S., R. 6 E., and partly on public land. Coal in the Beaver Creek and Big Sand Draw coal fields is mainly on public land. The region has a semiarid climate, with rainfall averaging less than 10 in. per year. When rain does fall, the sandy-bottomed stream channels fill rapidly and are frequently impassable for a few hours. Beaver Creek, Big Sand Draw, Little Sand Draw, and Kirby Draw and their smaller tributaries drain the area and flow

  14. Dirac fields in loop quantum gravity and big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Bojowald, Martin; Das, Rupam; Scherrer, Robert J.

    2008-01-01

    Big bang nucleosynthesis requires a fine balance between equations of state for photons and relativistic fermions. Several corrections to equation of state parameters arise from classical and quantum physics, which are derived here from a canonical perspective. In particular, loop quantum gravity allows one to compute quantum gravity corrections for Maxwell and Dirac fields. Although the classical actions are very different, quantum corrections to the equation of state are remarkably similar. To lowest order, these corrections take the form of an overall expansion-dependent multiplicative factor in the total density. We use these results, along with the predictions of big bang nucleosynthesis, to place bounds on these corrections and especially the patch size of discrete quantum gravity states.

  15. The Big Five personality factors and psychological well-being following stroke: a systematic review.

    Science.gov (United States)

    Dwan, Toni; Ownsworth, Tamara

    2017-12-22

    To identify and appraise studies investigating the relationship between the Big Five personality factors and psychological well-being following stroke, and evidence for personality change. Systematic searches of six databases (PsycINFO, CINAHL, Ovid Medline, Cochrane, PubMed, and Web of Science) were conducted from inception to June 2017. Studies involving adult stroke samples that employed a validated measure of at least one of the Big Five personality factors were included. Two reviewers independently assessed the eligibility and methodological quality of studies. Eleven studies were identified that assessed associations between personality and psychological well-being after stroke (nine studies) or post-stroke personality change (two studies). A consistent finding was that higher neuroticism was significantly related to poorer psychological well-being. The evidence for the other Big Five factors was mixed. In terms of personality change, two cross-sectional studies reported high rates of elevated neuroticism (38-48%) and low extraversion (33-40%) relative to normative data. Different questionnaires and approaches to measuring personality (i.e., self vs. informant ratings, premorbid vs. current personality) complicated comparisons between studies. People high on neuroticism are at increased risk of poor psychological well-being after stroke. Prospective longitudinal studies are needed to address the limited research on post-stroke personality change. Implications for rehabilitation: High neuroticism is associated with poorer psychological well-being after stroke. Assessing personality characteristics early after stroke may help to identify those at risk of poor psychological outcomes.

  16. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  17. Automated disposal of produced water from a coalbed methane well field, a case history

    International Nuclear Information System (INIS)

    Luckianow, B.J.; Findley, M.L.; Paschal, W.T.

    1994-01-01

    This paper provides an overview of the automated disposal system for produced water designed and operated by Taurus Exploration, Inc. The presentation draws on Taurus' case study in the planning, design, construction, and operation of production-water disposal facilities for the Mt. Olive well field, located in the Black Warrior Basin of Alabama. The common method for disposing of water produced from coalbed methane wells in the Warrior Basin is to discharge it into a receiving stream. The limiting factor in the discharge method is the capability of the receiving stream to assimilate the chloride component of the water discharged. During the winter and spring, the major tributaries of the Black Warrior River are capable of assimilating far more production water than operations can generate. During the summer and fall months, however, these same tributaries can approach near-zero flow, resulting in insufficient flow for dilution. During such periods, pumping shut-downs within the well field can be avoided by routing production waters into a storage facility. This paper discusses the automated production-water disposal system on Big Sandy Creek designed and operated by Taurus. This system allows for continuous discharge to the receiving stream, thus taking full advantage of Big Sandy Creek's assimilative capacity, while providing for excess produced-water storage and future stream discharge.
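
    The chloride constraint described here reduces to a steady-state dilution mass balance; a minimal sketch with assumed variable names and purely illustrative numbers (none of these figures appear in the record):

    ```python
    # Sketch: allowable produced-water discharge under a chloride limit.
    def allowable_discharge(q_stream, c_stream, c_produced, c_limit):
        """Largest produced-water rate q_d (same flow units as q_stream)
        that keeps the fully mixed stream at or below c_limit:
            (q_s*c_s + q_d*c_d) / (q_s + q_d) <= c_limit
        Concentrations in consistent units, e.g. mg/L chloride."""
        if c_produced <= c_limit:
            return float("inf")  # discharge can never violate the limit
        return max(0.0, q_stream * (c_limit - c_stream)
                        / (c_produced - c_limit))

    # High winter flow vs. near-zero summer flow (illustrative values).
    for q_s in (5.0, 0.05):  # stream flow, m3/s
        q_d = allowable_discharge(q_s, c_stream=50.0,
                                  c_produced=8000.0, c_limit=230.0)
        print(f"stream {q_s} m3/s -> allowable discharge {q_d:.4f} m3/s")
    ```

    When the allowable rate drops below the well field's production rate, the excess is routed to storage; that is the decision the automated system makes continuously.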

  18. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  19. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general, and on patient and clinical care specifically. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small-data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  20. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have their own tribune on the topic. Perspectives and debates are flourishing, while a consensus definition of big data is still lacking. The 3Vs paradigm is frequently invoked to define big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On one hand, techniques usually presented as specific to big data, such as machine learning, are supposed to support the ambition of personalized, predictive and preventive medicine; yet most of these techniques are far from new, and the oldest date back more than 50 years. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention, since they delineate what is at stake. In this context, forensic science is still awaiting its position papers, as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a truly interdisciplinary, evidence-based approach in forensic science and medicine. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book comprises 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  2. A theoretical model of subsidence caused by petroleum production: Big Hill Field, Jefferson County, Texas

    International Nuclear Information System (INIS)

    Hill, D.W.; Sharp, J.M. Jr.

    1993-01-01

    In the Texas Gulf Coastal Plain, there is a history of oil and gas production extending over 2 to 5 decades. Concurrent with this production history, there has been unprecedented population growth accompanied by vastly increased groundwater demands. Land subsidence on both local and regional scales in this geologic province has been measured and predicted in several studies. The vast majority of these studies have addressed the problem from the standpoint of groundwater usage, while only a few have considered the effects of oil and gas production. Based upon field-based computational techniques (Helm, 1984), a model has been developed to predict land subsidence caused by oil and gas production. This method is applied to the Big Hill Field in Jefferson County, Texas. Inputs include production data from a series of wells in this field and lithologic data from electric logs of these same wells. Outputs include predicted amounts of subsidence, the time frame of subsidence, and sensitivity analyses of compressibility and hydraulic conductivity estimates. Depending upon the estimated compressibility, subsidence to date is predicted to be as high as 20 cm. Similarly, depending upon the estimated vertical hydraulic conductivity, the time frame for this subsidence may be decades. These same methods can be applied to other oil/gas fields with established production histories, as well as to new fields when production scenarios are assumed. Where subsidence has been carefully measured above a petroleum reservoir, the model may be used inversely to calculate sediment compressibilities.
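
    The simplest version of such a prediction is a one-dimensional compaction estimate; a hedged sketch with illustrative numbers, since the record does not reproduce the model's equations:

    ```latex
    % 1-D compaction of a depleted interval of thickness H under a
    % pore-pressure drop \Delta p, with uniaxial compressibility c_m
    % (all values below are illustrative, not the paper's inputs):
    \Delta h \approx c_m\, \Delta p\, H,
    \qquad \text{e.g. } c_m = 10^{-4}\,\mathrm{MPa}^{-1},\;
    \Delta p = 10\,\mathrm{MPa},\; H = 200\,\mathrm{m}
    \;\Rightarrow\; \Delta h = 0.2\,\mathrm{m} = 20\,\mathrm{cm}.
    ```

    Surface subsidence is some fraction of this compaction, and its delay is governed by how quickly the pressure drawdown diffuses through the section, which is why the abstract's time frame is sensitive to vertical hydraulic conductivity.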

  3. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  4. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.
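
    "Placing inventories in competition" amounts to comparing how well each questionnaire's scores predict the same outcome; a minimal sketch of that comparison with placeholder data (not the study's data, measures, or results):

    ```python
    # Sketch: compare inventories by cross-validated outcome prediction.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 227                          # sample size from the abstract
    gpa = rng.normal(3.0, 0.5, n)    # placeholder outcome (e.g. GPA)

    inventories = {                  # placeholder domain scores
        "Big Five inventory (5 domains)": rng.normal(size=(n, 5)),
        "Big Six inventory (6 domains)":  rng.normal(size=(n, 6)),
    }
    for name, scores in inventories.items():
        r2 = cross_val_score(LinearRegression(), scores, gpa,
                             cv=5, scoring="r2").mean()
        print(f"{name}: cross-validated R^2 = {r2:.3f}")
    ```

    With random placeholders the R^2 values are meaningless; on real scores the same comparison shows which model, and which test length, predicts outcomes most parsimoniously.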

  5. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms' use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data, we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as for the market success of the firms' product innovations.

  6. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study.

  7. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields, such as social networks, the internet of things, and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  8. Generating ekpyrotic curvature perturbations before the big bang

    International Nuclear Information System (INIS)

    Lehners, Jean-Luc; Turok, Neil; McFadden, Paul; Steinhardt, Paul J.

    2007-01-01

    We analyze a general mechanism for producing a nearly scale-invariant spectrum of cosmological curvature perturbations during a contracting phase preceding a big bang, which can be entirely described using 4D effective field theory. The mechanism, based on first producing entropic perturbations and then converting them to curvature perturbations, can be naturally incorporated in cyclic and ekpyrotic models in which the big bang is modeled as a brane collision, as well as other types of cosmological models with a pre-big bang phase. We show that the correct perturbation amplitude can be obtained and that the spectral tilt n_s tends to range from slightly blue to red, with 0.97 < n_s < 1.02 for the simplest models, a range compatible with current observations but shifted by a few percent towards the blue compared to the prediction of the simplest, large-field inflationary models.
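
    For reference, the quoted tilt follows the standard power-spectrum convention (not spelled out in the record itself):

    ```latex
    % n_s = 1 is scale-invariant; n_s < 1 is "red", n_s > 1 is "blue".
    \mathcal{P}_\zeta(k) \propto k^{\,n_s - 1},
    \qquad n_s - 1 \equiv \frac{d \ln \mathcal{P}_\zeta}{d \ln k}.
    ```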

  9. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length.

  10. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics, with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges that we have only just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics, some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today; explore its strengths, weaknesses, and risks; discuss emerging trends, tools, and applications; and stimulate the development of the field through the aggregation of excellent survey papers and working-group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. Also for the first time, thanks to the online format, the Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  11. Data-driven analysis of collections of big datasets by the Bi-CoPaM method yields field-specific novel insights

    DEFF Research Database (Denmark)

    Abu-Jamous, Basel; Liu, Chao; Roberts, David, J.

    2017-01-01

    Massive amounts of data have recently been, and are increasingly being, generated in various fields, such as bioinformatics, neuroscience and social networks. Many of these big datasets were generated to answer specific research questions, and were analysed accordingly. However, the scope ... not commonly considered. To bridge this gap between the fast pace of data generation and the slower pace of data analysis, and to exploit the massive amounts of existing data, we suggest employing data-driven explorations to analyse collections of related big datasets. This approach aims at extracting field-specific ... clusters of consistently correlated objects. We demonstrate the power of data-driven explorations by applying the Bi-CoPaM method to two collections of big datasets from two distinct fields, namely bioinformatics and neuroscience. In the first application, the collective analysis of forty yeast gene expression ...
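
    Bi-CoPaM builds on consensus clustering across datasets that cover the same objects; below is a minimal sketch of that underlying idea (co-association matrix plus re-clustering, with scikit-learn as a stand-in). This is not the authors' implementation, and the tunable membership-binarization step that gives Bi-CoPaM its name is omitted:

    ```python
    # Sketch: consensus clustering across related datasets.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering

    def consensus_partition(datasets, k=3, seed=0):
        """datasets: list of (n_objects, n_features) arrays over the SAME
        objects (e.g. the same genes across several expression datasets)."""
        n = datasets[0].shape[0]
        coassoc = np.zeros((n, n))
        for d in datasets:
            lab = KMeans(n_clusters=k, n_init=10,
                         random_state=seed).fit_predict(d)
            coassoc += (lab[:, None] == lab[None, :])
        coassoc /= len(datasets)  # fraction of datasets co-clustering i, j
        # Re-cluster with (1 - co-association) as a precomputed distance.
        final = AgglomerativeClustering(
            n_clusters=k, metric="precomputed",
            linkage="average").fit_predict(1.0 - coassoc)
        return final, coassoc

    # Toy usage: three noisy views of the same 3-cluster structure.
    rng = np.random.default_rng(0)
    centers = 4 * rng.normal(size=(3, 5))
    objs = np.repeat(np.arange(3), 20)
    views = [centers[objs] + rng.normal(size=(60, 5)) for _ in range(3)]
    labels, M = consensus_partition(views, k=3)
    print(labels)
    ```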

  12. Current applications of big data in obstetric anesthesiology.

    Science.gov (United States)

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  13. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in education.

  14. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  15. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field crops.

  16. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data, such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  17. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of US farmland. This power asymmetry may be rebalanced through open-sourced data and publicly funded data-analytic tools, which rival Climate Corp. in complexity and innovation, for use in the public domain.

  18. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field and individual to individual.

  19. Alternative mechanism of avoiding the big rip or little rip for a scalar phantom field

    International Nuclear Information System (INIS)

    Xi Ping; Zhai Xianghua; Li Xinzhou

    2012-01-01

    Depending on the choice of its potential, the scalar phantom field φ (with equation of state parameter w < -1) ... The singularity is avoidable under all these potentials. Hence, we conclude that the avoidance of the big rip or little rip hardly depends on the special potential.

  20. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  1. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  2. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.
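
    The bounce found in this line of work is commonly summarized by the effective Friedmann equation of loop quantum cosmology, quoted here for orientation as a standard result of the approach rather than text from this record:

    ```latex
    % Effective LQC dynamics: H = 0 when \rho reaches the critical density
    % \rho_c, so the classical big bang is replaced by a big bounce.
    H^2 = \frac{8\pi G}{3}\,\rho \left( 1 - \frac{\rho}{\rho_c} \right),
    \qquad \rho_c \approx 0.41\,\rho_{\mathrm{Pl}} .
    ```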

  3. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies by the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of manipulative field experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations across the nation, will generate large volumes of ecological data every day. The raw data from the sensors of those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite greatly simplifying the real systems, are still complex enough to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big-data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple data sources.

  4. A Big Five facet analysis of sub-clinical narcissism: understanding boldness in terms of well-known personality traits.

    Science.gov (United States)

    Furnham, Adrian; Crump, John

    2014-08-01

    This study aimed to examine a Big Five 'bright-side' analysis of a sub-clinical personality disorder, i.e. narcissism. A total of 6957 British adults completed the NEO-PI-R, which measures the Big Five Personality factors at the domain and the facet level, as well as the Hogan Development Survey (HDS), which has a measure of Narcissism called Bold as one of its dysfunctional interpersonal tendencies. Correlation and regression results confirmed many of the associations between the Big Five domains and facets (NEO-PI-R) and sub-clinical narcissism. The Bold (Narcissism) scale from the HDS was the criterion variable in all analyses. Bold individuals are disagreeable extraverts with very low scores on facet Modesty but moderately high scores on Assertiveness, Competence and Achievement Striving. The study confirmed work using different population groups and different measures. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Big Five personality traits: are they really important for the subjective well-being of Indians?

    Science.gov (United States)

    Tanksale, Deepa

    2015-02-01

    This study empirically examined the relationship between the Big Five personality traits and subjective well-being (SWB) in India. SWB variables used were life satisfaction, positive affect and negative affect. A total of 183 participants in the age range 30-40 years from Pune, India, completed the personality and SWB measures. Backward stepwise regression analysis showed that the Big Five traits accounted for 17% of the variance in life satisfaction, 35% variance in positive affect and 28% variance in negative affect. Conscientiousness emerged as the strongest predictor of life satisfaction. In line with the earlier research findings, neuroticism and extraversion were found to predict negative affect and positive affect, respectively. Neither openness to experience nor agreeableness contributed to SWB. The research emphasises the need to revisit the association between personality and SWB across different cultures, especially non-western cultures. © 2014 International Union of Psychological Science.

  6. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary. Objectives: To select the best papers published in 2013 in the field of big data and smart health strategies, and to summarize outstanding research efforts. Methods: A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, followed by a peer-review process operated by external reviewers recognized as experts in the field. Results: The complete review process selected four best papers illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics; and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions: The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of the current scientific literature illustrated a variety of interesting methods and applications in the field, but the promises still exceed the current outcomes. As we get closer to a solid foundation with respect to a common understanding of the relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate reaching the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  7. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    The term Big Data was coined to refer to the extensive volumes of data that cannot be managed by traditional data-handling methods or techniques. Big Data plays an indispensable role in many areas, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, health care and stocks. Big data analytics is the process of examining big data to reveal hidden patterns, unseen relationships and other important information that can be used to arrive at better decisions. There has been a perpetually expanding demand for big data because of its fast growth and because it covers many areas of application. Apache Hadoop, an open-source technology written in Java that runs on the Linux operating system, was used. The primary contribution of this work is to present an effective and free solution for big data applications in a distributed environment, with its advantages, and to demonstrate its ease of use. Looking ahead, there appears to be a need for an analytical review of new developments in big data technology. Healthcare is one of the great concerns of the world. Big data in healthcare refers to electronic health data sets related to patient healthcare and well-being. Data in the healthcare sector is growing beyond the management capacity of healthcare organizations and is expected to increase significantly in the coming years.
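
    As an illustration of the Hadoop approach the abstract points to, here is a minimal Hadoop Streaming job in Python, the classic word count; the streaming-jar path, file names, and invocation shown in the comments are placeholders:

    ```python
    #!/usr/bin/env python3
    # wordcount.py -- mapper and reducer for Hadoop Streaming in one file.
    # Hypothetical invocation:
    #   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
    #     -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
    #     -file wordcount.py
    import sys

    def mapper():
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")              # emit (word, 1)

    def reducer():
        current, count = None, 0
        for line in sys.stdin:                   # Hadoop sorts by key
            word, n = line.rstrip("\n").rsplit("\t", 1)
            if word != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, 0
            count += int(n)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()
    ```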

  8. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data refers to analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. The methodology and working of a system that will use this data are also briefly described.

  9. Big data analytics in healthcare: promise and potential.

    Science.gov (United States)

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however there remain challenges to overcome.

  10. Big Data: Philosophy, Emergence, Crowdledge, and Science Education

    Science.gov (United States)

    dos Santos, Renato P.

    2015-01-01

    Big Data has already passed out of the hype stage and is now a field that deserves serious academic investigation, and natural scientists should also become familiar with analytics. On the other hand, there is little empirical evidence that any science taught in school is helping people to lead happier, more prosperous, or more politically well-informed lives. In…

  11. Big Data as Governmentality in International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    2017-01-01

    Statistics have long shaped the field of visibility for the governance of development projects. The introduction of big data has altered this field of visibility. Employing Dean's "analytics of government" framework, we analyze two cases: malaria tracking in Kenya and monitoring of food prices in Indonesia. Our analysis shows that big data introduces a bias toward particular types of visualizations. What problems are made visible through big data depends to some degree on how the underlying data is visualized and who is captured in the visualizations. It is also influenced by technical factors...

  12. [Embracing medical innovation in the era of big data].

    Science.gov (United States)

    You, Suning

    2015-01-01

    Along with the worldwide advent of the big data era, the medical field inevitably has to place itself within it. This article thoroughly introduces the basic knowledge of big data and points out the coexistence of its advantages and disadvantages. Although innovations in the medical field are struggling, the current medical pattern will be changed fundamentally by big data. The article also shows the rapid change of relevant analysis in the big data era, depicts the promise of digital medicine, and offers some advice to surgeons.

  13. Big data and technology assessment: research topic or competitor?

    DEFF Research Database (Denmark)

    Rieder, Gernot; Simon, Judith

    2017-01-01

    With its promise to transform how we live, work, and think, Big Data has captured the imaginations of governments, businesses, and academia. However, the grand claims of Big Data advocates have been accompanied with concerns about potential detrimental implications for civil rights and liberties......, leading to a climate of clash and mutual distrust between different stakeholders. Throughout the years, the interdisciplinary field of technology assessment (TA) has gained considerable experience in studying socio-technical controversies and as such is exceptionally well equipped to assess the premises...... considerations on how TA might contribute to more responsible data-based research and innovation....

  14. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
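
    The record does not reproduce the governing equations. As a hedged illustration, filamentation studies of this kind typically solve a (3+1)-dimensional nonlinear envelope equation for the field A(x, y, τ, z), coupled to a rate equation for the free-electron density ρ; a schematic form (the authors' exact model may differ) is:

    ```latex
    \frac{\partial A}{\partial z}
      = \frac{i}{2k_0}\nabla_\perp^{2}A
      - \frac{i k''}{2}\frac{\partial^{2}A}{\partial \tau^{2}}
      + i k_0 n_2 |A|^{2}A
      - \frac{i k_0}{2}\frac{\rho}{\rho_c}A
      - \frac{\beta_K}{2}|A|^{2K-2}A,
    \qquad
    \frac{\partial \rho}{\partial \tau} = W\!\left(|A|^{2}\right)\left(\rho_{\mathrm{nt}}-\rho\right)
    ```

    The terms model diffraction, group-velocity dispersion, Kerr self-focusing, plasma defocusing and K-photon absorption; W is the intensity-dependent ionization rate and ρ_nt the neutral density. Discretizing A on a fine (x, y, τ) grid at high peak power is what produces the billion-dimensional problem mentioned above.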

  15. Visualizing the knowledge structure and evolution of big data research in healthcare informatics.

    Science.gov (United States)

    Gu, Dongxiao; Li, Jingjing; Li, Xingguo; Liang, Changyong

    2017-02-01

    In recent years, the literature associated with healthcare big data has grown rapidly, but few studies have used bibliometrics and a visualization approach to conduct deep mining and reveal a panorama of the healthcare big data field. To explore the foundational knowledge and research hotspots of big data research in the field of healthcare informatics, this study conducted a series of bibliometric analyses on the related literature, including trends in paper production in the field and in the number of co-authors per paper, the distribution of core institutions and countries, the core literature distribution, the related information of prolific authors and innovation paths in the field, a keyword co-occurrence analysis, and research hotspots and trends for the future. By conducting a literature content analysis and structure analysis, we found the following: (a) In the early stage, researchers from the United States, the People's Republic of China, the United Kingdom, and Germany made the most contributions to the literature associated with healthcare big data research and the innovation path in this field. (b) The innovation path in healthcare big data consists of three stages: the early disease detection, diagnosis, treatment, and prognosis phase, the life and health promotion phase, and the nursing phase. (c) Research hotspots are mainly concentrated in three dimensions: the disease dimension (e.g., epidemiology, breast cancer, obesity, and diabetes), the technical dimension (e.g., data mining and machine learning), and the health service dimension (e.g., customized service and elderly nursing). This study will provide scholars in the healthcare informatics community with panoramic knowledge of healthcare big data research, as well as research hotspots and future research directions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
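
    As a hedged sketch of the keyword co-occurrence step described above (the study's own bibliometric tooling is not specified; the records and field names below are invented for illustration):

    ```python
    from itertools import combinations
    from collections import Counter

    # Illustrative records: each paper contributes its author keywords.
    papers = [
        {"keywords": ["big data", "machine learning", "healthcare"]},
        {"keywords": ["big data", "data mining", "epidemiology"]},
        {"keywords": ["machine learning", "healthcare", "diabetes"]},
    ]

    cooccurrence = Counter()
    for paper in papers:
        # Sort so that (a, b) and (b, a) count as the same undirected pair.
        for pair in combinations(sorted(set(paper["keywords"])), 2):
            cooccurrence[pair] += 1

    # The most frequent pairs approximate the research hotspots.
    for pair, count in cooccurrence.most_common(5):
        print(pair, count)
    ```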

  16. Echolocation behavior of big brown bats, Eptesicus fuscus, in the field and the laboratory

    DEFF Research Database (Denmark)

    Surlykke, Annemarie; Moss, Cynthia F.

    2000-01-01

    Echolocation signals were recorded from big brown bats, Eptesicus fuscus, flying in the field and the laboratory. In open field areas the interpulse intervals (IPI) of search signals were either around 134 ms or twice that value, 270 ms. At long IPIs the signals were of long duration (14 to 18...–20 ms), narrow bandwidth, and low frequency, sweeping down to a minimum frequency (Fmin) of 22–25 kHz. At short IPIs the signals were shorter (6–13 ms), of higher frequency, and broader bandwidth. In wooded areas only short (6–11 ms) relatively broadband search signals were emitted at a higher rate...

  17. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  18. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  19. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  20. [Applications of eco-environmental big data: Progress and prospect].

    Science.gov (United States)

    Zhao, Miao Miao; Zhao, Shi Cheng; Zhang, Li Yun; Zhao, Fen; Shao, Rui; Liu, Li Xiang; Zhao, Hai Feng; Xu, Ming

    2017-05-18

    With the advance of internet and wireless communication technology, the fields of ecology and environment have entered a new digital era, with the amount of data growing explosively and big data technologies attracting more and more attention. Eco-environmental big data is based on airborne, space- and land-based observations of ecological and environmental factors, and its ultimate goal is to integrate multi-source and multi-scale data for information mining by taking advantage of cloud computing, artificial intelligence, and modeling technologies. In comparison with other fields, eco-environmental big data has its own characteristics, such as diverse data formats and sources, data collected with various protocols and standards, and serving different clients and organizations with special requirements. Big data technology has been applied worldwide in ecological and environmental fields, including global climate prediction, ecological network observation and modeling, and regional air pollution control. The development of eco-environmental big data in China is facing many problems, such as data sharing issues, outdated monitoring facilities and technologies, and insufficient data mining capacity. Despite all this, big data technology is critical to solving eco-environmental problems, improving prediction and warning accuracy for eco-environmental catastrophes, and boosting scientific research in the field in China. We expect that eco-environmental big data will contribute significantly to policy making and environmental services and management, and thus to sustainable development and eco-civilization construction in China in the coming decades.

  1. Big data in complex systems challenges and opportunities

    CERN Document Server

    Azar, Ahmad; Snasael, Vaclav; Kacprzyk, Janusz; Abawajy, Jemal

    2015-01-01

    This volume provides updated, in-depth material on the application of big data to complex systems, in order to find solutions for the challenges and problems facing applications of big data sets. Much data today is not natively in structured format; for example, tweets and blogs are weakly structured pieces of text, while images and video are structured for storage and display, but not for semantic content and search. Transforming such content into a structured format for later analysis is therefore a major challenge. Data analysis, organization, retrieval, and modeling are other foundational challenges treated in this book. The material of this book will be useful for researchers and practitioners in the field of big data as well as advanced undergraduate and graduate students. Each of the 17 chapters in the book opens with a chapter abstract and key terms list. The chapters are organized along the lines of problem description, related works, and analysis of the results and ...
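
    The structuring challenge described above can be illustrated with a small sketch (purely illustrative, not from the book): extracting hashtags, mentions, and a timestamp from weakly structured, tweet-like text so it can be analyzed later.

    ```python
    import re
    from datetime import datetime, timezone

    def structure_post(raw: str) -> dict:
        """Turn a weakly structured post into a structured record."""
        return {
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "hashtags": re.findall(r"#(\w+)", raw),
            "mentions": re.findall(r"@(\w+)", raw),
            # Strip the markup to leave analyzable plain text.
            "text": re.sub(r"[#@]\w+", "", raw).strip(),
        }

    print(structure_post("Loving the #BigData keynote by @dataconf today"))
    ```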

  2. Optimization of well field management

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine

    Groundwater is a limited but important resource for fresh water supply. Different conflicting objectives are important when operating a well field. This study investigates how the management of a well field can be improved with respect to different objectives simultaneously. A framework...... for optimizing well field management using multi-objective optimization is developed. The optimization uses the Strength Pareto Evolutionary Algorithm 2 (SPEA2) to find the Pareto front between the conflicting objectives. The Pareto front is a set of non-inferior optimal points and provides an important tool...... for the decision-makers. The optimization framework is tested on two case studies. Both abstract around 20,000 cubic meters of water per day, but are otherwise rather different. The first case study concerns the management of Hardhof waterworks, Switzerland, where artificial infiltration of river water
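
    The thesis itself is not reproduced here, but the Pareto-front notion at the core of SPEA2 can be sketched: a solution is non-inferior if no other solution is at least as good on every objective and strictly better on at least one. A minimal sketch, assuming two minimization objectives (hypothetical pumping cost and drawdown):

    ```python
    from typing import List, Tuple

    def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
        """True if objective vector a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
        """Keep only the non-dominated (non-inferior) solutions."""
        return [s for s in solutions
                if not any(dominates(other, s) for other in solutions if other != s)]

    # Hypothetical (pumping_cost, drawdown) pairs for candidate pumping schedules.
    candidates = [(3.0, 1.2), (2.5, 1.5), (4.0, 0.9), (3.5, 1.4), (2.8, 1.1)]
    print(pareto_front(candidates))  # the trade-off set offered to decision-makers
    ```

    SPEA2 additionally assigns each candidate a strength- and density-based fitness to guide the evolutionary search toward a well-spread front; the dominance test above is the building block of that fitness.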

  3. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview...... of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big...... data....

  4. Modeling and Analysis in Marine Big Data: Advances and Challenges

    Directory of Open Access Journals (Sweden)

    Dongmei Huang

    2015-01-01

    Full Text Available It is well known that big data has gathered tremendous attention from academic research institutes, governments, and enterprises in all aspects of information sciences. With the development of diverse marine data acquisition techniques, marine data have grown exponentially in the last decade, forming marine big data. As an innovation, marine big data is a double-edged sword. On the one hand, there are many potential and highly useful values hidden in the huge volume of marine data, which is widely used in marine-related fields, such as tsunami and red-tide warning, prevention, and forecasting, disaster inversion, and visualization modeling after disasters. There is no doubt that future competitions in marine sciences and technologies will surely converge on marine data exploration. On the other hand, marine big data also brings about many new challenges in data management, such as the difficulties in data capture, storage, analysis, and applications, as well as data quality control and data security. To highlight theoretical methodologies and practical applications of marine big data, this paper illustrates a broad view of marine big data and its management, surveys key methods and models, introduces an engineering instance that demonstrates the management architecture, and discusses the existing challenges.

  5. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data......’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD......) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  6. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  7. Field data provide estimates of effective permeability, fracture spacing, well drainage area and incremental production in gas shales

    KAUST Repository

    Eftekhari, Behzad

    2018-05-23

    About half of US natural gas comes from gas shales. It is valuable to study field production well by well. We present a field data-driven solution for long-term shale gas production from a horizontal, hydrofractured well far from other wells and reservoir boundaries. Our approach is a hybrid between an unstructured big-data approach and physics-based models. We extend a previous two-parameter scaling theory of shale gas production by adding a third parameter that incorporates gas inflow from the external unstimulated reservoir. This allows us to estimate for the first time the effective permeability of the unstimulated shale and the spacing of fractures in the stimulated region. From an analysis of wells in the Barnett shale, we find that on average stimulation fractures are spaced every 20 m, and the effective permeability of the unstimulated region is 100 nanodarcy. We estimate that over 30 years on production the Barnett wells will produce on average about 20% more gas because of inflow from the outside of the stimulated volume. There is a clear tradeoff between production rate and ultimate recovery in shale gas development. In particular, our work has strong implications for well spacing in infill drilling programs.
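
    As a hedged sketch of the kind of physics-constrained curve fitting such a hybrid approach implies (the authors' actual three-parameter scaling function is not reproduced in the record; the functional forms and numbers below are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cumulative_gas(t, M, tau, c_ext):
        """Illustrative model: interference-limited recovery from the stimulated
        volume (saturating in t/tau) plus slow inflow from the unstimulated shale."""
        rf = np.tanh(np.sqrt(t / tau))          # stand-in for the scaling-theory curve
        return M * rf + c_ext * np.sqrt(t)      # external inflow grows diffusively

    t_years = np.arange(1, 16)                            # 15 years of history
    observed = cumulative_gas(t_years, 2.0, 5.0, 0.05)    # synthetic "field data"
    observed += np.random.default_rng(0).normal(0, 0.02, t_years.size)

    params, _ = curve_fit(cumulative_gas, t_years, observed, p0=(1.0, 1.0, 0.01))
    M, tau, c_ext = params
    extra = c_ext * np.sqrt(30) / cumulative_gas(30, *params)
    print(f"M={M:.2f}, tau={tau:.2f} yr, external share at 30 yr ≈ {extra:.0%}")
    ```

    The third fitted parameter plays the role of the external-inflow term the paper adds to its scaling theory, and its contribution at 30 years is the analogue of the roughly 20% figure quoted above.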

  8. Big data: een zoektocht naar instituties [Big data: a search for institutions]

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us about their consequences. This special

  9. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic studies have only recently paid attention to Big Data applications in water resources. As a result, water resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical framework, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies such as data mining and web crawling are proposed. Finally, the prospect of applying big data in water resources is discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data Driven Decision) will be used more in water resources management in the future.
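
    The HBase-backed classification described above can be illustrated with a small sketch (the row-key layout and category names are assumptions inferred from the abstract, not the authors' schema):

    ```python
    from datetime import date

    CATEGORIES = {"hydrology", "ecology", "socio-economic"}

    def row_key(category: str, station: str, day: date) -> bytes:
        """Compose an HBase-style row key: category|station|reverse-date,
        so a scan over one category and station returns newest records first."""
        assert category in CATEGORIES
        reverse_stamp = 99999999 - int(day.strftime("%Y%m%d"))
        return f"{category}|{station}|{reverse_stamp:08d}".encode()

    print(row_key("hydrology", "station-042", date(2017, 4, 1)))
    ```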

  10. Feasibility of Geothermal Energy Extraction from Non-Activated Petroleum Wells in Arun Field

    Science.gov (United States)

    Syarifudin, M.; Octavius, F.; Maurice, K.

    2016-09-01

    A big obstacle to developing geothermal energy frequently comes from the economics, to which drilling costs are the main contributor. However, it can potentially be tackled by converting existing decommissioned petroleum wells to geothermal use. In Arun Field, Aceh, there are 188 wells, and 62% of them are inactive (2013). The major limitation is that the outlet water temperature from this conversion setup will not be as high as that of a conventional geothermal well: it will only range from 60 to 180°C, depending on several key parameters such as the ground temperature, the local geothermal gradient, the flow inside the tubes, and the type of tubes (the effects of these parameters are studied). It is thus classified as low to medium temperature, according to geothermal well classification. Several adjustments have to be made, such as pulling the pipes that were used to lift the oil/gas out of the well and replacing them with long coiled tubing that acts as a heat exchanger. Cold water from the surface is then indirectly heated by the hot rock at the bottom of the well in a closed-loop system. For power production, a binary cycle system is used so that the low- to medium-temperature fluid can generate electricity. Based on this study, producing geothermal energy for direct use and electricity generation in Arun Field is technically possible. In this case study, we conclude that 2900 kW of electricity could be generated, while for direct use, many local industries in Northern Sumatera could benefit from this innovation.
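
    A back-of-the-envelope version of the underlying energy balance (all numbers are illustrative assumptions, not the paper's well data): the heat duty of the downhole coiled-tubing exchanger is Q = ṁ·cp·(T_out − T_in), and a binary cycle converts roughly 10% of that heat to electricity at these temperatures.

    ```python
    # Illustrative closed-loop well heat balance; the numbers are assumptions,
    # not values reported for the Arun Field wells.
    m_dot = 10.0        # water circulation rate, kg/s
    cp = 4186.0         # specific heat of water, J/(kg*K)
    t_in, t_out = 30.0, 120.0   # surface inlet / downhole outlet temperature, degC
    eta_binary = 0.10   # typical binary-cycle conversion efficiency at ~120 degC

    q_thermal = m_dot * cp * (t_out - t_in)   # W
    p_electric = eta_binary * q_thermal

    print(f"Heat extracted : {q_thermal / 1e6:.2f} MW_th")
    print(f"Electric output: {p_electric / 1e3:.0f} kW_e per well")
    ```

    On assumptions like these, a single converted well yields a few hundred kW of electricity, so several wells together could plausibly reach the ~2900 kW scale the study reports.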

  11. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  12. Application and Exploration of Big Data Mining in Clinical Medicine.

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
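
    As a toy illustration of one technique on this list, a decision tree for disease risk assessment (the features, data, and labels are invented for the example, not drawn from the review):

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical patient features: [age, systolic_bp, bmi]
    X = [[45, 130, 24], [62, 160, 31], [38, 118, 22],
         [70, 155, 29], [29, 110, 20], [55, 145, 33]]
    y = [0, 1, 0, 1, 0, 1]  # invented high-risk labels for illustration

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(model.predict([[50, 150, 30]]))  # risk class for a new patient
    ```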

  13. Benefits, Challenges and Tools of Big Data Management

    Directory of Open Access Journals (Sweden)

    Fernando L. F. Almeida

    2017-10-01

    Full Text Available Big Data is one of the most prominent fields of knowledge and research, and it has had a strong impact on the digital transformation of organizations in recent years. Big Data's main goal is to improve work processes through the analysis and interpretation of large amounts of data. Knowing how Big Data works, and its benefits, challenges and tools, is essential for business success. Our study performs a systematic review of the Big Data field adopting a mind map approach, which allows us to easily and visually identify its main elements and dependencies. The findings identified and mapped a total of 12 main branches of benefits, challenges and tools, and also a total of 52 sub-branches within the main areas of the model.

  14. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In everyday terms we call the current era the Modern Era, which in the field of information technology can also be called the era of Big Data. Daily life today advances rapidly, never quenching its thirst for data. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, including but not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of the healthcare area using big data and analytics. The main emphasis is that big data is being stored all the time and helps us look back at history, but it is now time to emphasize analysis in order to improve medication and services. Although many big data implementations happen to be in-house developments, the implementation proposed here aims at a broader extent using Hadoop, which just happens to be the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also covers the strengths and drawbacks compared to the conventional techniques available.

  15. Big bounce, slow-roll inflation, and dark energy from conformal gravity

    Science.gov (United States)

    Gegenberg, Jack; Rahmati, Shohreh; Seahra, Sanjeev S.

    2017-02-01

    We examine the cosmological sector of a gauge theory of gravity based on the SO(4,2) conformal group of Minkowski space. We allow for conventional matter coupled to the spacetime metric as well as matter coupled to the field that gauges special conformal transformations. An effective vacuum energy appears as an integration constant, and this allows us to recover the late time acceleration of the Universe. Furthermore, gravitational fields sourced by ordinary cosmological matter (i.e. dust and radiation) are significantly weakened in the very early Universe, which has the effect of replacing the big bang with a big bounce. Finally, we find that this bounce is followed by a period of nearly exponential slow roll inflation that can last long enough to explain the large scale homogeneity of the cosmic microwave background.

  16. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.

  17. Magnetic moment calculation for the p+d→3He+γ process in big-bang nucleosynthesis with effective field theory

    International Nuclear Information System (INIS)

    Bayegan, S.; Sadeghi, H.

    2004-01-01

    In big-bang nucleosynthesis, processes relevant to the increase of the nucleon density are more important. Effective Field Theory is one of the theories whose solutions explain the experimental results more accurately. In this paper, the magnetic moment (χM1) for the radiative capture of protons by deuterons, p + d → 3He + γ, is calculated using Effective Field Theory. The calculation includes the Coulomb interaction up to next-to-next-to-leading order (N2LO).

  18. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  19. Can companies benefit from Big Science? Science and Industry

    CERN Document Server

    Autio, Erkko; Bianchi-Streit, M

    2003-01-01

    Several studies have indicated that there are significant returns on financial investment via "Big Science" centres. Financial multipliers ranging from 2.7 (ESA) to 3.7 (CERN) have been found, meaning that each Euro invested in industry by Big Science generates a two- to fourfold return for the supplier. Moreover, laboratories such as CERN are proud of their record in technology transfer, where research developments lead to applications in other fields - for example, with particle accelerators and detectors. Less well documented, however, is the effect of the experience that technological firms gain through working in the arena of Big Science. Indeed, up to now there has been no explicit empirical study of such benefits. Our findings reveal a variety of outcomes, which include technological learning, the development of new products and markets, and impact on the firm's organization. The study also demonstrates the importance of technologically challenging projects for staff at CERN. Together, these findings i...

  20. Research on information security in big data era

    Science.gov (United States)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the challenges to, and causes of, data security brought about by big data, discusses the development trend of network attacks against the background of big data, and puts forward the authors' own opinions on the development of security defenses in technology, strategy and products.

  1. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research and advance education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, and critically explores the perils of applying a data-driven approach to education. Despite the claimed value of the...

  2. BIG-DATA and the Challenges for Statistical Inference and Economics Teaching and Learning

    Directory of Open Access Journals (Sweden)

    J.L. Peñaloza Figueroa

    2017-04-01

    Full Text Available The increasing automation in data collection, whether in structured or unstructured formats, as well as the development of reading, concatenation and comparison algorithms and the growing analytical skills which characterize the era of Big Data, can not only be considered a technological achievement, but also an organizational, methodological and analytical challenge for knowledge, which must be met in order to generate opportunities and added value. In fact, exploiting the potential of Big Data touches all fields of community activity; and given its ability to extract behaviour patterns, we are interested in the challenges for the field of teaching and learning, particularly in the fields of statistical inference and economic theory. Big Data can improve the understanding of concepts, models and techniques used in both statistical inference and economic theory, and it can also generate reliable and robust short- and long-term predictions. These facts have led to a demand for analytical capabilities, which in turn encourages teachers and students to demand access to the massive information produced by individuals, companies and public and private organizations in their transactions and inter-relationships. Mass data (Big Data) is changing the way people access, understand and organize knowledge, which in turn is causing a shift in the approach to statistics and economics teaching, treating them as a real way of thinking rather than just operational and technical disciplines. Hence, the question is how teachers can use automated collection and analytical skills to their advantage when teaching statistics and economics, and whether this will lead to a change in what is taught and how it is taught.

  3. Big Data Analytics, Infectious Diseases and Associated Ethical Impacts

    OpenAIRE

    Garattini, C.; Raffle, J.; Aisyah, D. N.; Sartain, F.; Kozlakidis, Z.

    2017-01-01

    The exponential accumulation, processing and accrual of big data in healthcare are only possible through an equally rapidly evolving field of big data analytics. The latter offers the capacity to rationalize, understand and use big data to serve many different purposes, from improved services modelling to prediction of treatment outcomes, to greater patient and disease stratification. In the area of infectious diseases, the application of big data analytics has introduced a number of changes ...

  4. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  5. Application and Exploration of Big Data Mining in Clinical Medicine

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-01-01

    Objective: To review theories and technologies of big data mining and their application in clinical medicine. Data Sources: Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine were obtained from PubMed and Chinese Hospital Knowledge Database from 1975 to 2015. Study Selection: Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. Results: This review characterized the basic theories and technologies of big data mining including fuzzy theory, rough set theory, cloud theory, Dempster–Shafer theory, artificial neural network, genetic algorithm, inductive learning theory, Bayesian network, decision tree, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance of rational use of drugs, medical management, and evidence-based medicine. Conclusion: Big data mining has the potential to play an important role in clinical medicine. PMID:26960378

  6. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  7. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  8. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure, such as the exabyte, zettabyte, and yottabyte, to express the amount of data. The growth of data creates a situation where classic systems for the collection, storage, processing, and visualization of data are losing the battle with the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. Our challenge is to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. Moreover, Big Data is recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics services-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  9. A Big Data Guide to Understanding Climate Change: The Case for Theory-Guided Data Science.

    Science.gov (United States)

    Faghmous, James H; Kumar, Vipin

    2014-09-01

    Global climate change and its impact on human life has become one of our era's greatest challenges. Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data. This is a stark contrast from other fields such as advertising or electronic commerce where big data has been a great success story. This discrepancy stems from the complex nature of climate data as well as the scientific questions climate science brings forth. This article introduces a data science audience to the challenges and opportunities to mine large climate datasets, with an emphasis on the nuanced difference between mining climate data and traditional big data approaches. We focus on data, methods, and application challenges that must be addressed in order for big data to fulfill their promise with regard to climate science applications. More importantly, we highlight research showing that solely relying on traditional big data techniques results in dubious findings, and we instead propose a theory-guided data science paradigm that uses scientific theory to constrain both the big data techniques as well as the results-interpretation process to extract accurate insight from large climate data.

  10. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big Data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. The information value is defined not only by extracting value from huge data sets, as quickly and optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner using Big Data analytics. At this point, the main challenge for businesses that use Big Data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big Data concept, its various classification criteria and architecture, as well as its impact on processes worldwide.

  11. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming-based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.
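
    The review itself points to R packages; as a language-neutral sketch of the "unsupervised" versus "supervised" distinction it draws (toy data, illustrative only):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Toy "expression profiles": two groups of samples, five features each.
    X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
    y = np.array([0] * 20 + [1] * 20)  # known labels, used only by the supervised model

    # Unsupervised: discover structure without labels.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Supervised: learn a labeled outcome and predict it for new samples.
    clf = LogisticRegression().fit(X, y)
    print("cluster assignments:", clusters[:5], clusters[-5:])
    print("predicted class for a new sample:", clf.predict(rng.normal(3, 1, (1, 5))))
    ```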

  12. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

    In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects. I also give an overview of ho...

  13. Scalar field cosmologies with inverted potentials

    Energy Technology Data Exchange (ETDEWEB)

    Boisseau, B.; Giacomini, H. [Université de Tours, Laboratoire de Mathématiques et Physique Théorique, CNRS/UMR 7350, 37200 Tours (France); Polarski, D., E-mail: bruno.boisseau@lmpt.univ-tours.fr, E-mail: hector.giacomini@lmpt.univ-tours.fr, E-mail: david.polarski@umontpellier.fr [Université Montpellier and CNRS, Laboratoire Charles Coulomb, UMR 5221, F-34095 Montpellier (France)

    2015-10-01

    Regular bouncing solutions in the framework of a scalar-tensor gravity model were found in a recent work. We reconsider the problem in the Einstein frame (EF) in the present work. Singularities arising at the limit of physical viability of the model in the Jordan frame (JF) are either of the Big Bang or of the Big Crunch type in the EF. As a result we obtain integrable scalar field cosmological models in general relativity (GR) with inverted double-well potentials unbounded from below which possess solutions regular in the future, tending to a de Sitter space, and starting with a Big Bang. The existence of the two fixed points for the field dynamics at late times found earlier in the JF becomes transparent in the EF.

  14. Scalar field cosmologies with inverted potentials

    International Nuclear Information System (INIS)

    Boisseau, B.; Giacomini, H.; Polarski, D.

    2015-01-01

    Regular bouncing solutions in the framework of a scalar-tensor gravity model were found in a recent work. We reconsider the problem in the Einstein frame (EF) in the present work. Singularities arising at the limit of physical viability of the model in the Jordan frame (JF) are either of the Big Bang or of the Big Crunch type in the EF. As a result we obtain integrable scalar field cosmological models in general relativity (GR) with inverted double-well potentials unbounded from below which possess solutions regular in the future, tending to a de Sitter space, and starting with a Big Bang. The existence of the two fixed points for the field dynamics at late times found earlier in the JF becomes transparent in the EF

  15. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  16. Observational hints on the Big Bounce

    International Nuclear Information System (INIS)

    Mielczarek, Jakub; Kurek, Aleksandra; Szydłowski, Marek; Kamionka, Michał

    2010-01-01

    In this paper we study possible observational consequences of bouncing cosmology. We consider a model where a phase of inflation is preceded by a cosmic bounce. While we consider here only a bounce due to loop quantum gravity, most of the results presented can be applied to different bouncing cosmologies. We concentrate on the scenario where the scalar field, as the result of the contraction of the universe, is driven from the bottom of the potential well. The field is amplified, and finally the phase of standard slow-roll inflation is realized. Such an evolution modifies the standard inflationary spectrum of perturbations by additional oscillations and damping on large scales. We extract the parameters of the model from observations of the cosmic microwave background radiation. In particular, the value of the inflaton mass is equal to m = (1.7±0.6)·10¹³ GeV. Our considerations are based on the seven years of observations made by the WMAP satellite. We propose a new observational consistency check for the phase of slow-roll inflation. We investigate the conditions which have to be fulfilled to make observations of Big Bounce effects possible. We translate them into requirements on the parameters of the model and then put observational constraints on the model. Based on assumptions usually made in loop quantum cosmology, the Barbero-Immirzi parameter was shown to be constrained by γ < 1100 from the cosmological observations. We have compared the Big Bounce model with the standard Big Bang scenario and showed that the present observational data are not informative enough to distinguish between these models.

  17. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.
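
    For context, a standard result not derived in the record: with equation of state p = wρ and constant w < −1, the Friedmann equations give a scale factor diverging at a finite time t_rip,

    ```latex
    a(t) \propto \left(t_{\mathrm{rip}} - t\right)^{\frac{2}{3(1+w)}}, \qquad w < -1,
    ```

    so a, H and the energy density all blow up as t → t_rip; the paper asks how spatial gradients and anisotropies behave as this limit is approached.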

  18. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  19. Rotational inhomogeneities from pre-big bang?

    International Nuclear Information System (INIS)

    Giovannini, Massimo

    2005-01-01

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric

  20. Rotational inhomogeneities from pre-big bang?

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, Massimo [Department of Physics, Theory Division, CERN, 1211 Geneva 23 (Switzerland)

    2005-01-21

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric.

  1. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  2. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  3. Editorial: Big data through the power lens: Marker for regulating innovation

    OpenAIRE

    Ulbricht, Lena; von Grafenstein, Maximilian

    2016-01-01

    Facing general conceptions of the power effects of big data, this thematic edition is interested in studies that scrutinise big data and power in concrete fields of application. It brings together scholars from different disciplines who analyse the fields of agriculture, education, border control and consumer policy. As will be made explicit in the following, each of the articles tells us something about, firstly, what big data is and how it relates to power. Secondly, they also shed light on how ...

  4. Big Data Comes to School

    Directory of Open Access Journals (Sweden)

    Bill Cope

    2016-03-01

    Full Text Available The prospect of “big data” at once evokes optimistic views of an information-rich future and concerns about surveillance that adversely impacts our personal and private lives. This overview article explores the implications of big data in education, focusing by way of example on data generated by student writing. We have chosen writing because it presents particular complexities, highlighting the range of processes for collecting and interpreting evidence of learning in the era of computer-mediated instruction and assessment as well as the challenges. Writing is significant not only because it is central to the core subject area of literacy; it is also an ideal medium for the representation of deep disciplinary knowledge across a number of subject areas. After defining what big data entails in education, we map emerging sources of evidence of learning that separately and together have the potential to generate unprecedented amounts of data: machine assessments, structured data embedded in learning, and unstructured data collected incidental to learning activity. Our case is that these emerging sources of evidence of learning have significant implications for the traditional relationships between assessment and instruction. Moreover, for educational researchers, these data are in some senses quite different from traditional evidentiary sources, and this raises a number of methodological questions. The final part of the article discusses implications for practice in an emerging field of education data science, including publication of data, data standards, and research ethics.

  5. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  6. Big data, surveillance and crisis management

    NARCIS (Netherlands)

    Boersma, F.K.; Fonio, C.

    2018-01-01

    Big data, surveillance, crisis management. Three largely different and richly researched fields, however, the interplay amongst these three domains is rarely addressed. In this enlightening title, the link between these three fields is explored in a consequential order through a variety of

  7. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges, which ultimately seems to be of greater importance than the sheer data volume.

  8. A practical guide to big data research in psychology.

    Science.gov (United States)

    Chen, Eric Evan; Wojcik, Sean P

    2016-12-01

    The massive volume of data that now covers a wide variety of human behaviors offers researchers in psychology an unprecedented opportunity to conduct innovative theory- and data-driven field research. This article is a practical guide to conducting big data research, covering data management, acquisition, processing, and analytics (including key supervised and unsupervised learning data mining methods). It is accompanied by walkthrough tutorials on data acquisition, text analysis with latent Dirichlet allocation topic modeling, and classification with support vector machines. Big data practitioners in academia, industry, and the community have built a comprehensive base of tools and knowledge that makes big data research accessible to researchers in a broad range of fields. However, big data research does require knowledge of software programming and a different analytical mindset. For those willing to acquire the requisite skills, innovative analyses of unexpected or previously untapped data sources can offer fresh ways to develop, test, and extend theories. When conducted with care and respect, big data research can become an essential complement to traditional research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
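
    A minimal sketch of the two analyses the guide's tutorials walk through - LDA topic modeling followed by SVM classification - is given below; the toy corpus, labels, and parameter choices are hypothetical and are not the article's own code.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import LinearSVC

    # Hypothetical toy corpus with two themes (sports vs. gadgets).
    docs = [
        "team wins the championship game tonight",
        "new phone battery lasts two days",
        "coach praises players after the match",
        "laptop screen and battery reviewed",
    ]
    labels = [0, 1, 0, 1]  # hypothetical: 0 = sports, 1 = gadgets

    # Bag-of-words counts feed the LDA topic model.
    counts = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    topic_weights = lda.fit_transform(counts)  # per-document topic proportions

    # The topic proportions then serve as features for an SVM classifier.
    clf = LinearSVC().fit(topic_weights, labels)
    print(clf.predict(topic_weights))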

  9. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This article gives an overview of the salient features of Big Data and how these features impact paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in high-confidence sets and point out that exogenous assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
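
    The spurious-correlation point lends itself to a quick numerical illustration (our own construction, not the article's): with far more variables than observations, pure noise features can correlate strongly with the outcome by chance alone.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 10_000                # small sample, high dimensionality
    X = rng.standard_normal((n, p))  # features independent of y by construction
    y = rng.standard_normal(n)

    # Pearson correlation of every feature with y; all true correlations are zero.
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    corrs = Xc.T @ yc / n

    print(f"max |corr| among {p} noise features: {np.abs(corrs).max():.2f}")
    # Typically prints about 0.5-0.6 despite zero true correlation.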

  10. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  11. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today are often too large and heterogeneous and change too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers produce more and more data themselves - e.g. in social networks - and digitalization keeps increasing. Currently, several new trends towards new data sources and innovative data analysis are appearing in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and to organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions will be coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  12. Quantum particle in a potential well field and in an electric field

    International Nuclear Information System (INIS)

    Gyunter, U.; Olejnik, V.P.

    1990-01-01

    Solutions of the Dirac equation in the field of a δ-like potential well with arbitrary symmetry and in a uniform electric field were obtained and analyzed. It is shown that the wave function and energy of an electron in a bound state in the absence of an electric field depend substantially on the type of potential well symmetry. 1 ref

  13. Investigating Seed Longevity of Big Sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Wijayratne, Upekala C.; Pyke, David A.

    2009-01-01

    from each site, as well as several environmental variables, were used to evaluate seed viability within the context of habitat variation. Initial viability of seeds used in the seed retrieval experiment was 81 and 92 percent for mountain and Wyoming big sagebrush, respectively. After remaining in the field for 24 months, buried Wyoming big sagebrush seeds retained 28-58 percent viability, 11-23 percent of seeds under litter remained viable, and no seeds remained viable on the surface (estimates are 95-percent confidence intervals). The odds of remaining viable did not change from 12 to 24 months. However, after 24 months the odds of seeds beneath litter being viable decreased to 75 percent of the odds of viability at 12 months. Similar to Wyoming big sagebrush, buried seeds of mountain big sagebrush were 31-68 percent viable, seeds under litter retained 10-22 percent of their viability, and no surface seeds were viable after 24 months. Both subspecies of big sagebrush had some portion of seed that remained viable for more than one growing season provided they were buried or under litter. Although seeds beneath litter may remain viable in intact communities, seeds are susceptible to incineration during fires. Nine months after seed dispersal, seed bank estimates for Wyoming big sagebrush ranged from 19 to 49 viable seeds/m2 in litter samples and 19-57 viable seeds/m2 in soil samples (95-percent confidence interval). For mountain big sagebrush, estimates were 27-75 viable seeds/m2 in litter samples and 54-139 viable seeds/m2 in soil (95-percent confidence interval). The number of viable seeds present in the seed bank 9 months after seed dispersal was not significantly different from the number present immediately after seed dispersal. Seed viability was highest in mountain big sagebrush sites for seeds on the surface and beneath litter, but decreased after one season. Buried seeds of both subspecies were in equal abundances and may be insulated from the effect

  14. Dalhart's only Permian field gets best oil well

    International Nuclear Information System (INIS)

    Land, R.

    1992-01-01

    This paper reports that activity is picking up in the Proctor Ranch oil field in the northwestern Texas panhandle, the only Permian producing field in the lightly drilled Dalhart basin. During the last 2 1/2 months, the field has gained a new operator and a new producing well, the best of five drilled since discovery in 1990. Corlena Oil Co., Amarillo, acquired the field from McKinney Oil Co. in May and tested its first well in early July. The 1-64 Proctor, 18 miles west of Channing, pumped at rates as high as 178 b/d of oil and 6 b/d of water from Permian Wolfcamp dolomite perforations at 4,016-29 ft. Corlena plans to drill another well south of the field soon. The lease requires that the next well be spudded by early November. The field appears to be a combination structural-stratigraphic trap in which the dolomite pinches out against the Bravo Domes-Oldham nose to the west.

  15. Will Big Data Mean the End of Privacy?

    Science.gov (United States)

    Pence, Harry E.

    2015-01-01

    Big Data is currently a hot topic in the field of technology, and many campuses are considering the addition of this topic into their undergraduate courses. Big Data tools are not just playing an increasingly important role in many commercial enterprises; they are also combining with new digital devices to dramatically change privacy. This article…

  16. Functional connectomics from a "big data" perspective.

    Science.gov (United States)

    Xia, Mingrui; He, Yong

    2017-10-15

    In the last decade, explosive growth regarding functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinic) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. A short story about a big magic bug

    OpenAIRE

    Bunk, Boyke; Schulz, Arne; Stammen, Simon; Münch, Richard; Warren, Martin J; Rohde, Manfred; Jahn, Dieter; Biedendieck, Rebekka

    2010-01-01

    Bacillus megaterium, the "big beast," is a Gram-positive bacterium with a size of 4 × 1.5 µm. In recent years, it has become more and more popular in the field of biotechnology for its recombinant protein production capacity. For the purpose of intra- as well as extracellular protein synthesis, several vectors were constructed and commercialized (MoBiTec GmbH, Germany). On the basis of two compatible vectors, a T7 RNA polymerase-driven protein production system was established. Vectors for c...

  18. The big data processing platform for intelligent agriculture

    Science.gov (United States)

    Huang, Jintao; Zhang, Lichen

    2017-08-01

    Big data technology is another popular technology after the Internet of Things and cloud computing. Big data is widely used in many fields, such as social platforms, e-commerce, and financial analysis. Intelligent agriculture will produce large amounts of data of complex structure in the course of its operation, and fully mining the value of these data will be very meaningful for the development of agriculture. This paper proposes an intelligent data processing platform based on Storm and Cassandra to realize the storage and management of big data for intelligent agriculture.
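
    The record names Storm for processing and Cassandra for storage without implementation detail, so the following is only a minimal sketch of the storage side, written with the DataStax Python driver; the keyspace, table, sensor id, and localhost contact point are hypothetical assumptions, and the Storm topology is omitted.

    from datetime import datetime
    from cassandra.cluster import Cluster  # DataStax Python driver

    session = Cluster(["127.0.0.1"]).connect()
    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS agri WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 1}"
    )
    # One partition per field sensor, rows clustered by time: a common
    # layout for write-heavy sensor streams.
    session.execute(
        "CREATE TABLE IF NOT EXISTS agri.readings ("
        " sensor_id text, ts timestamp, soil_moisture double,"
        " PRIMARY KEY (sensor_id, ts))"
    )
    session.execute(
        "INSERT INTO agri.readings (sensor_id, ts, soil_moisture)"
        " VALUES (%s, %s, %s)",
        ("field-07", datetime.utcnow(), 0.31),
    )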

  19. Big Data and Health Economics: Opportunities, Challenges and Risks

    Directory of Open Access Journals (Sweden)

    Diego Bodas-Sagi

    2018-03-01

    Full Text Available Big Data offers opportunities in many fields. Healthcare is not an exception. In this paper we summarize the possibilities of Big Data and Big Data technologies to offer useful information to policy makers. In a world with tight public budgets and ageing populations, we feel it necessary to save costs in any production process. The use of outcomes from Big Data could in the future be a way to improve decisions at a lower cost than today. In addition to listing the advantages of properly using data and technologies from Big Data, we also show some challenges and risks that analysts could face. We also present a hypothetical example of the use of administrative records with health information for both diagnoses and patients.

  20. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon in our society that is here to stay. It is past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's, which are often mentioned in relation to big data, entail? As an introduction to

  1. 3rd International Symposium on Big Data and Cloud Computing Challenges

    CERN Document Server

    Neelanarayanan, V

    2016-01-01

    This proceedings volume contains selected papers that were presented at the 3rd International Symposium on Big Data and Cloud Computing Challenges, 2016, held at VIT University, India, on March 10 and 11. New research issues, challenges and opportunities shaping the future agenda in the field of Big Data and Cloud Computing are identified and presented throughout the book, which is intended for researchers, scholars, students, software developers and practitioners working at the forefront in their field. This book acts as a platform for exchanging ideas, setting questions for discussion, and sharing experience in the Big Data and Cloud Computing domain.

  2. Big Data and Data Science in Critical Care.

    Science.gov (United States)

    Sanchez-Pinto, L Nelson; Luo, Yuan; Churpek, Matthew M

    2018-05-09

    The digitalization of the healthcare system has resulted in a deluge of clinical Big Data and has prompted the rapid growth of data science in medicine. Data science, which is the field of study dedicated to the principled extraction of knowledge from complex data, is particularly relevant in the critical care setting. The availability of large amounts of data in the intensive care unit, the need for better evidence-based care, and the complexity of critical illness makes the use of data science techniques and data-driven research particularly appealing to intensivists. Despite the increasing number of studies and publications in the field, so far there have been few examples of data science projects that have resulted in successful implementations of data-driven systems in the intensive care unit. However, given the expected growth in the field, intensivists should be familiar with the opportunities and challenges of Big Data and data science. In this paper, we review the definitions, types of algorithms, applications, challenges, and future of Big Data and data science in critical care. Copyright © 2018. Published by Elsevier Inc.

  3. Toward a Literature-Driven Definition of Big Data in Healthcare.

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
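
    The threshold is easy to apply; a worked check follows, assuming the base-10 logarithm implied by the magnitudes the definition targets (the dataset sizes below are hypothetical).

    import math

    def is_big_data(n: int, p: int) -> bool:
        """n = number of statistical individuals, p = number of variables."""
        return math.log10(n * p) >= 7

    # Hypothetical EMR dataset: 2 million patients, 50 variables each.
    print(is_big_data(2_000_000, 50))  # log10(1e8) = 8.0 -> True
    # Hypothetical clinical trial: 500 subjects, 30 variables.
    print(is_big_data(500, 30))        # log10(15000) ~ 4.2 -> False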

  4. Towards a big crunch dual

    Energy Technology Data Exchange (ETDEWEB)

    Hertog, Thomas E-mail: hertog@vulcan2.physics.ucsb.edu; Horowitz, Gary T

    2004-07-01

    We show there exist smooth asymptotically anti-de Sitter initial data which evolve to a big crunch singularity in a low energy supergravity limit of string theory. This opens up the possibility of using the dual conformal field theory to obtain a fully quantum description of the cosmological singularity. A preliminary study of this dual theory suggests that the big crunch is an endpoint of evolution even in the full string theory. We also show that any theory with scalar solitons must have negative energy solutions. The results presented here clarify our earlier work on cosmic censorship violation in N=8 supergravity. (author)

  5. RESEARCH ON THE CONSTRUCTION OF REMOTE SENSING AUTOMATIC INTERPRETATION SYMBOL BIG DATA

    Directory of Open Access Journals (Sweden)

    Y. Gao

    2018-04-01

    Full Text Available Remote sensing automatic interpretation symbol (RSAIS) is an inexpensive and fast method for providing precise in-situ information for image interpretation and accuracy assessment. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-architecture massive data storage method. Additionally, it introduced an offline and online data update mode and a dynamic data evaluation mechanism, with the aim of creating an efficient approach for RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013-2015 based on the National Geographic Conditions Monitoring Project of China and has been updated annually since 2016. The RSAIS big data has proven to be a good method for large-scale image interpretation and field validation. It is also notable that it has the potential to solve automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.

  6. Research on the Construction of Remote Sensing Automatic Interpretation Symbol Big Data

    Science.gov (United States)

    Gao, Y.; Liu, R.; Liu, J.; Cheng, T.

    2018-04-01

    Remote sensing automatic interpretation symbol (RSAIS) is an inexpensive and fast method for providing precise in-situ information for image interpretation and accuracy assessment. This study designed a scientific and precise RSAIS data characterization method, as well as a distributed, cloud-architecture massive data storage method. Additionally, it introduced an offline and online data update mode and a dynamic data evaluation mechanism, with the aim of creating an efficient approach for RSAIS big data construction. Finally, a national RSAIS database with more than 3 million samples covering 86 land types was constructed during 2013-2015 based on the National Geographic Conditions Monitoring Project of China and has been updated annually since 2016. The RSAIS big data has proven to be a good method for large-scale image interpretation and field validation. It is also notable that it has the potential to solve automatic image interpretation with the assistance of deep learning technology in the remote sensing big data era.

  7. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study is to determine students' interest in learning using big book media. A big book is an enlarged version of an ordinary book; it contains simple words and images that match the content of its sentences and spelling. From this, researchers can gauge students' interest and the development of their knowledge. It also trains researchers to remain creative in developing learning media for students.

  8. Clinical research of traditional Chinese medicine in big data era.

    Science.gov (United States)

    Zhang, Junhua; Zhang, Boli

    2014-09-01

    With the advent of the big data era, our thinking, technology and methodology are being transformed. Data-intensive scientific discovery based on big data, named "The Fourth Paradigm," has become a new paradigm of scientific research. Along with the development and application of Internet information technology in the field of healthcare, individual health records, clinical data of diagnosis and treatment, and genomic data have accumulated dramatically, generating big data in the medical field for clinical research and assessment. With the support of big data, the defects and weaknesses of the conventional sampling-based methodology of clinical evaluation may be overcome. Our research target shifts from "causality inference" to "correlation analysis." This not only facilitates the evaluation of individualized treatment, disease prediction, prevention and prognosis, but is also suitable for the practice of preventive healthcare and symptom pattern differentiation for treatment in terms of traditional Chinese medicine (TCM), and for the post-marketing evaluation of Chinese patent medicines. To conduct clinical studies involving big data in the TCM domain, top-level design is needed and should be carried out in an orderly manner. Fundamental construction and innovation studies should be strengthened in data platform creation, data analysis technology, and the fostering and training of big data professionals.

  9. Big Data, Big Responsibility! Building best-practice privacy strategies into a large-scale neuroinformatics platform

    Directory of Open Access Journals (Sweden)

    Christina Popovich

    2017-04-01

    OBI’s rigorous approach to data sharing in the field of neuroscience maintains the accessibility of research data for big discoveries without compromising patient privacy and security. We believe that Brain-CODE is a powerful and advantageous tool, moving neuroscience research from independent silos to an integrative, system-wide approach for improving patient health. OBI’s vision of improved brain health for patients living with neurological disorders, paired with Brain-CODE’s best-practice strategies for privacy protection of patient data, offers a novel and innovative approach to “big data” initiatives aimed at improving public health and society worldwide.

  10. Big (Bio)Chemical Data Mining Using Chemometric Methods: A Need for Chemists.

    Science.gov (United States)

    Tauler, Roma; Parastar, Hadi

    2018-03-23

    This review aims to demonstrate the ability of multivariate chemometric methods to analyze Big (Bio)Chemical Data (BBCD) and to show some of the more important challenges of modern analytical research. In this review, the capabilities and versatility of chemometric methods will be discussed in light of the BBCD challenges that are being encountered in chromatographic, spectroscopic and hyperspectral imaging measurements, with an emphasis on their application to omics sciences. In addition, insights and perspectives on how to address the analysis of BBCD are provided, along with a discussion of the procedures necessary to obtain more reliable qualitative and quantitative results. In this review, the importance of Big Data and its relevance to (bio)chemistry is first discussed. Then, analytical tools which can produce BBCD are presented, and some basics needed to understand the prospects and limitations of chemometric techniques applied to BBCD are given. Finally, the significance of the combination of chemometric approaches with BBCD analysis in different chemical disciplines is highlighted with some examples. In this paper, we have tried to cover some of the applications of big data analysis in the (bio)chemistry field. However, this coverage is not exhaustive. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
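
    As a flavor of what such multivariate methods do, here is an illustrative sketch (ours, not the review's) of principal component analysis applied to a hypothetical matrix of spectra, with rows as samples and columns as wavelength channels.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(400, 700, 200)          # nm, hypothetical grid
    band = np.exp(-((wavelengths - 550) ** 2) / 800)  # one shared spectral band

    # 60 samples whose band intensity varies with analyte concentration,
    # plus measurement noise.
    concentrations = rng.uniform(0.5, 2.0, size=60)
    spectra = (np.outer(concentrations, band)
               + 0.02 * rng.standard_normal((60, 200)))

    pca = PCA(n_components=3).fit(spectra)
    print(pca.explained_variance_ratio_)  # the first component should dominate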

  11. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background to work with big data by introducing some novel optimization algorithms and codes capable of working in the big data setting, as well as some applications of big data optimization, for interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyze large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  12. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  13. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

    Full Text Available Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.

  14. Toward a Literature-Driven Definition of Big Data in Healthcare

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data. PMID:26137488

  15. Reheating and dangerous relics in pre-big-bang string cosmology

    International Nuclear Information System (INIS)

    Buonanno, Alessandra; Lemoine, Martin; Olive, Keith A.

    2000-01-01

    We discuss the mechanism of reheating in pre-big-bang string cosmology and we calculate the amount of moduli and gravitinos produced gravitationally and in scattering processes of the thermal bath. We find that this abundance always exceeds the limits imposed by big-bang nucleosynthesis, and significant entropy production is required. The exact amount of entropy needed depends on the details of the high curvature phase between the dilaton-driven inflationary era and the radiation era. We show that the domination and decay of the zero-mode of a modulus field, which could well be the dilaton, or of axions, suffices to dilute moduli and gravitinos. In this context, baryogenesis can be accommodated in a simple way via the Affleck-Dine mechanism and in some cases the Affleck-Dine condensate could provide both the source of entropy and the baryon asymmetry

  16. The case for the relativistic hot big bang cosmology

    Science.gov (United States)

    Peebles, P. J. E.; Schramm, D. N.; Kron, R. G.; Turner, E. L.

    1991-01-01

    What has become the standard model in cosmology is described, and some highlights are presented of the now substantial range of evidence that most cosmologists believe convincingly establishes this model, the relativistic hot big bang cosmology. It is shown that this model has yielded a set of interpretations and successful predictions that substantially outnumber the elements used in devising the theory, with no well-established empirical contradictions. Brief speculations are made on how the open puzzles and work in progress might affect future developments in this field.

  17. EKALAVYA MODEL OF HIGHER EDUCATION – AN INNOVATION OF IBM’S BIG DATA UNIVERSITY

    OpenAIRE

    Dr. P. S. Aithal; Shubhrajyotsna Aithal

    2016-01-01

    Big Data Science is a new multi-disciplinary subject in society, comprising business intelligence, data analytics, and related fields, which have become increasingly important in both the academic and the business communities during the 21st century. Many organizations and business intelligence experts have foreseen significant development in the big data field as the next big wave of future research in many industry sectors and in society. To become an expert and skilled in this n...

  18. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been unleashed for big data storage and processing. While big data users face the choice of which system best suits their needs, big data system developers face the question of how to evaluate their systems with regard to general big data processing needs. System b...

  19. Big Data and HPC: A Happy Marriage

    KAUST Repository

    Mehmood, Rashid

    2016-01-25

    International Data Corporation (IDC) defines Big Data technologies as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data produced every day, by enabling high velocity capture, discovery, and/or analysis”. High Performance Computing (HPC) most generally refers to “the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business”. Big data platforms are built primarily considering the economics and capacity of the system for dealing with the 4V characteristics of data. HPC traditionally has been more focussed on the speed of digesting (computing) the data. For these reasons, the two domains (HPC and Big Data) have developed their own paradigms and technologies. However, recently, these two have grown fond of each other. HPC technologies are needed by Big Data to deal with the ever increasing Vs of data in order to forecast and extract insights from existing and new domains, faster, and with greater accuracy. Increasingly more data is being produced by scientific experiments from areas such as bioscience, physics, and climate, and therefore, HPC needs to adopt data-driven paradigms. Moreover, there are synergies between them with unimaginable potential for developing new computing paradigms, solving long-standing grand challenges, and making new explorations and discoveries. Therefore, they must get married to each other. In this talk, we will trace the HPC and big data landscapes through time including their respective technologies, paradigms and major applications areas. Subsequently, we will present the factors that are driving the convergence of the two technologies, the synergies between them, as well as the benefits of their convergence to the biosciences field. The opportunities and challenges of the

  20. Is happiness good for your personality? Concurrent and prospective relations of the big five with subjective well-being.

    Science.gov (United States)

    Soto, Christopher J

    2015-02-01

    The present research examined longitudinal relations of the Big Five personality traits with three core aspects of subjective well-being: life satisfaction, positive affect, and negative affect. Latent growth models and autoregressive models were used to analyze data from a large, nationally representative sample of 16,367 Australian residents. Concurrent and change correlations indicated that higher levels of subjective well-being were associated with higher levels of Extraversion, Agreeableness, and Conscientiousness, and with lower levels of Neuroticism. Moreover, personality traits prospectively predicted change in well-being, and well-being levels prospectively predicted personality change. Specifically, prospective trait effects indicated that individuals who were initially extraverted, agreeable, conscientious, and emotionally stable subsequently increased in well-being. Prospective well-being effects indicated that individuals with high initial levels of well-being subsequently became more agreeable, conscientious, emotionally stable, and introverted. These findings challenge the common assumption that associations of personality traits with subjective well-being are entirely, or almost entirely, due to trait influences on well-being. They support the alternative hypothesis that personality traits and well-being aspects reciprocally influence each other over time. © 2013 Wiley Periodicals, Inc.

  1. Quantum well electronic states in a tilted magnetic field.

    Science.gov (United States)

    Trallero-Giner, C; Padilha, J X; Lopez-Richard, V; Marques, G E; Castelano, L K

    2017-08-16

    We report the energy spectrum and the eigenstates of the conduction and uncoupled valence bands of a quantum well under the influence of a tilted magnetic field. In the framework of the envelope approximation, we implement two analytical approaches to obtain the nontrivial solutions for the tilted magnetic field: (a) the Bubnov-Galerkin spectral method and (b) perturbation theory. We discuss the validity of each method for a broad range of magnetic field intensities and orientations as well as quantum well thicknesses. By estimating the accuracy of the perturbation method, we provide explicit analytical solutions for quantum wells in a tilted magnetic field configuration that can be employed to study several quantitative phenomena.
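
    To fix ideas, the one-band envelope Hamiltonian for a well grown along z in a field tilted by an angle θ in the x-z plane can be written as below; the gauge choice A = (0, xB cos θ - zB sin θ, 0) is our illustrative assumption, not necessarily the paper's.

    \[
      H \;=\; \frac{p_x^{2}+p_z^{2}}{2m^{*}}
        \;+\; \frac{\bigl[p_y + e\,(x B\cos\theta - z B\sin\theta)\bigr]^{2}}{2m^{*}}
        \;+\; V(z)
    \]

    Expanding the square produces a cross term proportional to xz that couples the in-plane Landau motion to the confined motion along z; this coupling, which vanishes for a purely perpendicular field (θ = 0), is what the Bubnov-Galerkin and perturbative treatments must handle.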

  2. Toward a Literature-Driven Definition of Big Data in Healthcare

    OpenAIRE

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. ...

  3. The structure of the big magnetic storms

    International Nuclear Information System (INIS)

    Mihajlivich, J. Spomenko; Chop, Rudi; Palangio, Paolo

    2010-01-01

    The records of geomagnetic activity during Solar Cycles 22 and 23 (1986 to 2006) indicate several extremely intensive A-class geomagnetic storms, classified in the category of Big Magnetic Storms. In a year of maximum solar activity during Solar Cycle 23, or more precisely during the phase designated as the post-maximum phase of solar activity (PPM - Phase Post Maximum), near the autumn equinox, on 29 October 2003, an extremely strong and intensive magnetic storm was recorded. In the first half of November 2004 (7 November 2004), another intensive magnetic storm of the Big Magnetic Storm class was recorded. The level of geomagnetic field variations recorded for the selected Big Magnetic Storms was ΔDst = 350 nT, and the corresponding three-hour geomagnetic activity index was Kp = 9. This study presents the spectral composition of the Di-variations recorded during the magnetic storms of October 2003 and November 2004. (Author)

  4. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  5. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

    Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete excavations of major sites throughout the region. Although there are considerable advantages to such large-scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large-scale excavations in the southeast in light of traditional views within the discipline that the region has contributed little to the ‘big questions’ of American archaeology. Recently published analyses of decades-old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre-existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  6. Phantom inflation and the 'Big Trip'

    International Nuclear Information System (INIS)

    Gonzalez-Diaz, Pedro F.; Jimenez-Madrid, Jose A.

    2004-01-01

    Primordial inflation is regarded to be driven by a phantom field, here implemented as a scalar field satisfying an equation of state p = ωρ, with ω < -1. Aggravated by the weird properties of phantom energy, this poses a serious problem for the exit from the inflationary phase. We argue, however, in favor of the speculation that a smooth exit from the phantom inflationary phase can still be tentatively recovered by considering a multiverse scenario where the primordial phantom universe would travel in time toward a future universe filled with usual radiation, before reaching the big rip. We call this transition the 'Big Trip' and assume it to take place with the help of some form of anthropic principle which chooses our current universe as the final destination of the time transition.

  7. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than lifetime prevalence. This study was designed to assess the usability of the DSM-IV-based Berlin Inventory of Gambling Behavior Screening tool (BIG-S) in a clinical sample and to adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of the DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.
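
    A worked check of the reported accuracy figures, using hypothetical confusion-matrix counts chosen only to be consistent with the paper's sample sizes (n=300 patients, n=132 comparison) and reported rates:

    tp, fn = 299, 1   # of 300 gambling-disorder patients, 299 screen positive
    tn, fp = 127, 5   # of 132 comparison subjects, 127 screen negative

    sensitivity = tp / (tp + fn)  # 299/300 = 0.9967 -> 99.7%
    specificity = tn / (tn + fp)  # 127/132 = 0.9621 -> 96.2%
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")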

  8. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Full Text Available Farming is undergoing a digital revolution. Our review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. "app"? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and allows for a focus on the material consequences of big data in society.

  9. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  10. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple 'crossover model' without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  11. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing need among practitioners to better understand the phenomenon, starting from a clearly stated definition. This book aims to be a starting read for executives who want (and need) to keep pace with the technological breakthroughs introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general - such as ethics, privacy, and ownership - while others concern more specific business situations (e.g., initial public offering, growth st...

  12. Designing Cloud Infrastructure for Big Data in E-government

    Directory of Open Access Journals (Sweden)

    Jelena Šuh

    2015-03-01

    Full Text Available The development of new information services and technologies, especially in the domains of mobile communications, the Internet of Things, and social media, has led to the appearance of large quantities of unstructured data. Pervasive computing also affects e-government systems, where big data emerges and cannot be processed and analyzed in a traditional manner due to its complexity, heterogeneity and size. The subject of this paper is the design of a cloud infrastructure for big data storage and processing in e-government. The goal is to analyze the potential of cloud computing for big data infrastructure and to propose a model for effectively storing, processing and analyzing big data in e-government. The paper provides an overview of current relevant concepts related to cloud infrastructure design that should provide support for big data. The second part of the paper gives a model of the cloud infrastructure based on the concepts of software-defined networks and multi-tenancy. The final goal is to support projects in the field of big data in e-government.

  13. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    Science.gov (United States)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems of discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
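
    For readers unfamiliar with the metaheuristic, a minimal continuous-variable sketch of the big bang-big crunch idea follows; it is a toy, not the article's refined method, and it omits the discrete member sizing and AISC-ASD constraint handling the article addresses.

    import numpy as np

    def bb_bc(objective, dim, n_pop=50, n_iter=100, bound=10.0, seed=0):
        rng = np.random.default_rng(seed)
        center = rng.uniform(-bound, bound, dim)  # initial "center of mass"
        for k in range(1, n_iter + 1):
            # Big bang: scatter candidates around the center with a
            # radius that shrinks as 1/k.
            pop = center + rng.standard_normal((n_pop, dim)) * bound / k
            # Big crunch: contract to a fitness-weighted center of mass
            # (lower cost -> larger weight).
            cost = np.array([objective(x) for x in pop])
            weights = 1.0 / (cost + 1e-12)
            center = weights @ pop / weights.sum()
        return center

    # Toy usage: minimize a shifted sphere function; the optimum is [3, 3, 3, 3].
    print(bb_bc(lambda x: np.sum((x - 3.0) ** 2), dim=4))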

  14. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is increasingly growing, and traditional methods cannot manage it efficiently. In biomedical computation, the continuous challenges are: management, analysis, and storage of the biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques. It also allows quick access to data using NoSQL databases. Thus, big data technologies include new frameworks to process medical data in a manner similar to biomedical images. It becomes very important to develop methods and/or architectures based on big data technologies for complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work methods in the field of big data software, using the Hadoop and Spark frameworks. These provide an optimal and efficient architecture for biomedical image analysis. This paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithms for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier, and are adaptable in all of the steps from conception. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big
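
    As a hedged illustration of the Spark side of such a workflow, the sketch below trains a distributed classifier on image-derived feature vectors with Spark's built-in MLlib; the tiny hard-coded features are dummy stand-ins for descriptors that a real pipeline would extract from biomedical images, and the app name is arbitrary.

        from pyspark.sql import SparkSession
        from pyspark.ml.linalg import Vectors
        from pyspark.ml.classification import LogisticRegression

        spark = SparkSession.builder.appName("biomed-image-classify").getOrCreate()

        rows = [  # (label, feature vector) -- hypothetical image features
            (0.0, Vectors.dense([0.1, 0.9, 0.3])),
            (1.0, Vectors.dense([0.8, 0.2, 0.7])),
            (0.0, Vectors.dense([0.2, 0.8, 0.4])),
            (1.0, Vectors.dense([0.9, 0.1, 0.6])),
        ]
        df = spark.createDataFrame(rows, ["label", "features"])

        # Training is distributed across the cluster by Spark itself.
        model = LogisticRegression(maxIter=50).fit(df)
        print("training accuracy:", model.summary.accuracy)
        spark.stop()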

  15. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available

    In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  16. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice have much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Emerging data science methods ensure that these data can be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator, and that possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science have the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  17. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  18. Big Data: Survey, Technologies, Opportunities, and Challenges

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Mahmoud Ali, Waleed Kamaleldin; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data. PMID:25136682

  19. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  20. Interactive Exploration of Big Scientific Data: New Representations and Techniques.

    Science.gov (United States)

    Hjelmervik, Jon M; Barrowclough, Oliver J D

    2016-01-01

    Although splines have been in popular use in CAD for more than half a century, spline research is still an active field, driven by the challenges we are facing today within isogeometric analysis and big data. Splines are likely to play a vital future role in enabling effective big data exploration techniques in 3D, 4D, and beyond.

  1. BIG: a large-scale data integration tool for renal physiology.

    Science.gov (United States)

    Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A

    2016-10-01

    Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.

  2. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data and the development and application of analysis tools into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economics or epidemiology. Molecular data are structured, in the first instance, by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, also in individual patients, is generating personal big data and requires new strategies for management in order to develop data-driven individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  3. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

    The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. Two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 x 10^12 /cc. Several arc lines which connect both bright regions of different magnetic polarity were seen in the H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed into a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed into a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects the two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  4. Big Data and Biomedical Informatics: A Challenging Opportunity

    Science.gov (United States)

    2014-01-01

    Summary Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand the reason why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions or over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations. PMID:24853034

  5. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations, in the formation before Big Data it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of a meaningful public deliberation. Understanding, and challenging, Big Data requires an attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  6. Ensemble control of the Hardhof well field under constraints

    Science.gov (United States)

    Marti, Beatrice; McLaughlin, Dennis; Kinzelbach, Wolfgang; Kaiser, Hans-Peter

    2013-04-01

    Practical control of flow in aquifers has been based on deterministic models, not including stochastic information in the optimization (Bauser et al., 2010 or Marti et al., 2012). Only recently has robust ensemble control of aquatic systems been analyzed in linear and synthetic problems (Lin, B., 2012). We propose a control under constraints which takes into account the stochastic information contained in an ensemble of realizations of a groundwater flow model with uncertain parameters, boundary and initial conditions. This control is applied to a real-life problem setting (the Hardhof well field in Zurich) and analyzed with regard to efficiency compared to a similar control based on a deterministic model. The Hardhof well field, which lies in the city of Zurich, Switzerland, provides roughly 15% of the town's drinking water demand from the Limmat valley aquifer. Groundwater and river filtrate are withdrawn in four large horizontal wells, each with a capacity of up to 48'000 m³ per day. The well field is threatened by potential pollution from leachate of a nearby landfill, possible accidents on the adjacent rail and road lines, and by diffuse pollution from former industrial sites and sewers located upstream of the well field. A line of recharge wells and basins forms a hydraulic barrier against the potentially contaminated water and increases the capacity of the well field. The amount and distribution of the artificial recharge to 3 infiltration basins and 12 infiltration wells has to be controlled on a daily basis to guarantee the effectiveness of the hydraulic barrier in the highly dynamic flow field. The Hardhof well field is simulated with a 2D real-time groundwater flow model. The model is coupled to a controller, minimizing the inflow of potentially contaminated groundwater to the drinking water wells under various constraints (i.e., keeping the groundwater level between given thresholds, guaranteeing production of the drinking water demand
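
    A minimal sketch of the idea of ensemble control under constraints is given below. It assumes, purely for illustration, that each ensemble member responds linearly to the recharge allocation; the coefficients, thresholds, and budget are synthetic, and a linear program stands in for the real controller coupled to the groundwater model.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        n_u, n_ens = 15, 30              # 3 basins + 12 wells; 30 model realizations

        B = rng.uniform(0.5, 1.5, (n_ens, n_u))   # reduction of polluted inflow per unit recharge
        H = rng.uniform(0.0, 0.2, (n_ens, n_u))   # head response to recharge, per realization
        h_max = 1.0                               # groundwater-level threshold (scaled)

        # Objective: minimize the ensemble-mean polluted inflow, i.e. maximize mean(B) @ u.
        c = -B.mean(axis=0)
        # Robust constraints: head threshold must hold in *every* realization, plus a budget.
        A_ub = np.vstack([H, np.ones((1, n_u))])
        b_ub = np.concatenate([np.full(n_ens, h_max), [10.0]])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 2.0)] * n_u, method="highs")
        print("recharge allocation:", np.round(res.x, 2))

    The point of the ensemble formulation is visible in the constraint block: the controller must respect the level thresholds in every realization, not just in a single deterministic model.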

  7. Hot big bang or slow freeze?

    Energy Technology Data Exchange (ETDEWEB)

    Wetterich, C.

    2014-09-07

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  8. Hot big bang or slow freeze?

    International Nuclear Information System (INIS)

    Wetterich, C.

    2014-01-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe

  9. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    Full Text Available We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  10. Hydrogenic donor in a quantum well with an electric field

    International Nuclear Information System (INIS)

    Jayakumar, K.; Balasubramanian, S.; Tomak, M.

    1985-08-01

    Variational calculations of the binding energy of a hydrogenic donor in a quantum well formed by GaAs and Ga(1-x)Al(x)As with a constant electric field are performed for different electric fields and well widths. A critical electric field is defined and its variation with well width is presented. (author)

  11. Harnessing Big Data for Systems Pharmacology.

    Science.gov (United States)

    Xie, Lei; Draizen, Eli J; Bourne, Philip E

    2017-01-06

    Systems pharmacology aims to holistically understand mechanisms of drug actions to support drug discovery and clinical practice. Systems pharmacology modeling (SPM) is data driven. It integrates an exponentially growing amount of data at multiple scales (genetic, molecular, cellular, organismal, and environmental). The goal of SPM is to develop mechanistic or predictive multiscale models that are interpretable and actionable. The current explosions in genomics and other omics data, as well as the tremendous advances in big data technologies, have already enabled biologists to generate novel hypotheses and gain new knowledge through computational models of genome-wide, heterogeneous, and dynamic data sets. More work is needed to interpret and predict a drug response phenotype, which is dependent on many known and unknown factors. To gain a comprehensive understanding of drug actions, SPM requires close collaborations between domain experts from diverse fields and integration of heterogeneous models from biophysics, mathematics, statistics, machine learning, and semantic webs. This creates challenges in model management, model integration, model translation, and knowledge integration. In this review, we discuss several emergent issues in SPM and potential solutions using big data technology and analytics. The concurrent development of high-throughput techniques, cloud computing, data science, and the semantic web will likely allow SPM to be findable, accessible, interoperable, reusable, reliable, interpretable, and actionable.

  12. Pre-big bang in M-theory

    OpenAIRE

    Cavaglia, Marco

    2001-01-01

    We discuss a simple cosmological model derived from M-theory. Three assumptions lead naturally to a pre-big bang scenario: (a) 11-dimensional supergravity describes the low-energy world; (b) non-gravitational fields live on a three-dimensional brane; and (c) asymptotically past triviality.

  13. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c)². At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios

  14. Big data - smart health strategies. Findings from the yearbook 2014 special theme.

    Science.gov (United States)

    Koutkias, V; Thiessard, F

    2014-08-15

    To select the best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate reaching the potential that big data offer for personalized medicine and smart health strategies in the near future.

  15. Field Testing of Activated Carbon Injection Options for Mercury Control at TXU's Big Brown Station

    Energy Technology Data Exchange (ETDEWEB)

    John Pavlish; Jeffrey Thompson; Christopher Martin; Mark Musich; Lucinda Hamre

    2009-01-07

    The primary objective of the project was to evaluate the long-term feasibility of using activated carbon injection (ACI) options to effectively reduce mercury emissions from Texas electric generation plants in which a blend of lignite and subbituminous coal is fired. Field testing of ACI options was performed on one-quarter of Unit 2 at TXU's Big Brown Steam Electric Station. Unit 2 has a design output of 600 MW and burns a blend of 70% Texas Gulf Coast lignite and 30% subbituminous Powder River Basin coal. Big Brown employs a COHPAC configuration, i.e., high air-to-cloth baghouses following cold-side electrostatic precipitators (ESPs), for particulate control. When sorbent injection is added between the ESP and the baghouse, the combined technology is referred to as TOXECON™ and is patented by the Electric Power Research Institute in the United States. Key benefits of the TOXECON configuration include better mass transfer characteristics of a fabric filter compared to an ESP for mercury capture and contamination of only a small percentage of the fly ash with AC. The field testing consisted of a baseline sampling period, a parametric screening of three sorbent injection options, and a month-long test with a single mercury control technology. During the baseline sampling, native mercury removal was observed to be less than 10%. Parametric testing was conducted for three sorbent injection options: injection of standard AC alone; injection of an EERC sorbent enhancement additive, SEA4, with ACI; and injection of an EERC enhanced AC. Injection rates were determined for all of the options to achieve the minimum target of 55% mercury removal as well as for higher removals approaching 90%. Some of the higher injection rates were not sustainable because of increased differential pressure across the test baghouse module. After completion of the parametric testing, a month-long test was conducted using the enhanced AC at a nominal rate of 1.5 lb/Macf. During

  16. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  17. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  18. The High Five: Associations of the Five Positive Factors with the Big Five and Well-being

    Directory of Open Access Journals (Sweden)

    Alejandro C. Cosentino

    2017-07-01

    Full Text Available The study of individual differences in positive characteristics has mainly focused on moral traits. The objectives of this research were to study individual differences in positive characteristics from the point of view of the layperson, including non-moral individual characteristics, and to generate a replicable model of positive factors. Three studies based on a lexical approach were conducted. The first study generated a corpus of words which resulted in a refined list of socially shared positive characteristics. The second study produced a five-factor model of positive characteristics: erudition, peace, cheerfulness, honesty, and tenacity. The third study confirmed the model with a different sample. The five-positive-factor model not only showed positive associations with emotional, psychological and social well-being, but it also explained variance beyond that accounted for by the Big Five factors in predicting these well-being dimensions. In addition, convergent and divergent validity of the five positive factors is shown in relation to the Values-in-Action (VIA) classification of character strengths proposed by Peterson and Seligman (2004).

  19. How to use Big Data technologies to optimize operations in Upstream Petroleum Industry

    Directory of Open Access Journals (Sweden)

    Abdelkader Baaziz

    2013-12-01

    Full Text Available “Big Data is the oil of the new economy” is the most famous citation of the last three years. It has even been adopted by the World Economic Forum in 2011. In fact, Big Data is like crude! It’s valuable, but if unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about Big Data generated by the Petroleum Industry and particularly its upstream segment? Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development. Oil & Gas Companies conduct advanced geophysics modeling and simulation to support operations where 2D, 3D & 4D Seismic generate significant data during exploration phases. They closely monitor the performance of their operational assets. To do this, they use tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous and real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in various and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron’s internal IT traffic alone exceeds 1.5 terabytes a day. Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations. Another invaluable effect of that would be shared learning. The aim of this paper is to explain how to use Big Data technologies to optimize operations. How can Big Data help experts make decisions that lead to the desired outcomes? Keywords: Big Data; Analytics

  20. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.
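
    To make the reported affinity concrete, the short sketch below evaluates the standard single-site binding isotherm, fraction bound = [Ca²+] / (Kd + [Ca²+]), over the 2-4 µM Kd range quoted in the abstract; the chosen Ca²+ concentrations are arbitrary illustration points.

        def fraction_bound(ca_uM, kd_uM):
            # Single-site equilibrium binding: theta = [Ca] / (Kd + [Ca]).
            return ca_uM / (kd_uM + ca_uM)

        for kd in (2.0, 4.0):                   # reported Kd range for the Big domains
            for ca in (0.1, 1.0, 10.0, 100.0):  # free Ca2+ concentrations in uM
                print(f"Kd={kd} uM, [Ca2+]={ca} uM -> {fraction_bound(ca, kd):.2f} bound")

    At 1 µM free Ca²+ roughly a fifth to a third of the sites are occupied, while at 10 µM the domains are about 70-80% occupied, consistent with binding in the low-micromolar regime.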

  1. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  2. Military Simulation Big Data: Background, State of the Art, and Challenges

    Directory of Open Access Journals (Sweden)

    Xiao Song

    2015-01-01

    Full Text Available Big data technology has undergone rapid development and attained great success in the business field. Military simulation (MS) is another application domain producing massive datasets created by high-resolution models and large-scale simulations. It is used to study complicated problems such as weapon systems acquisition, combat analysis, and military training. This paper first reviews several large-scale military simulations producing big data (MS big data) for a variety of usages and summarizes the main characteristics of the resulting data. Then we look at the technical details involving the generation, collection, processing, and analysis of MS big data. Two frameworks are also surveyed to trace the development of the underlying software platform. Finally, we identify some key challenges and propose a framework as a basis for future work. This framework considers both the simulation and big data management at the same time, based on layered and service-oriented architectures. The objective of this review is to help interested researchers learn the key points of MS big data and provide references for tackling the big data problem and performing further research.

  3. A Survey on Domain-Specific Languages for Machine Learning in Big Data

    OpenAIRE

    Portugal, Ivens; Alencar, Paulo; Cowan, Donald

    2016-01-01

    The amount of data generated in the modern society is increasing rapidly. New problems and novel approaches of data capture, storage, analysis and visualization are responsible for the emergence of the Big Data research field. Machine Learning algorithms can be used in Big Data to make better and more accurate inferences. However, because of the challenges Big Data imposes, these algorithms need to be adapted and optimized to specific applications. One important decision made by software engi...

  4. Personalizing Medicine Through Hybrid Imaging and Medical Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Laszlo Papp

    2018-06-01

    Full Text Available Medical imaging has evolved from a pure visualization tool to representing a primary source of analytic approaches toward in vivo disease characterization. Hybrid imaging is an integral part of this approach, as it provides complementary visual and quantitative information in the form of morphological and functional insights into the living body. As such, non-invasive imaging modalities no longer provide images only, but data, as stated recently by pioneers in the field. Today, such information, together with other, non-imaging medical data, creates highly heterogeneous data sets that underpin the concept of medical big data. While the exponential growth of medical big data challenges their processing, they inherently contain information that benefits a patient-centric personalized healthcare. Novel machine learning approaches combined with high-performance distributed cloud computing technologies help explore medical big data. Such exploration and subsequent generation of knowledge require a profound understanding of the technical challenges. These challenges increase in complexity when employing hybrid, aka dual- or even multi-modality, image data as input to big data repositories. This paper provides a general insight into medical big data analysis in light of the use of hybrid imaging information. First, hybrid imaging is introduced (see further contributions to this special Research Topic), also in the context of medical big data; then the technological background of machine learning as well as state-of-the-art distributed cloud computing technologies are presented, followed by the discussion of data preservation and data sharing trends. Joint data exploration endeavors in the context of in vivo radiomics and hybrid imaging will be presented. Standardization challenges of imaging protocol, delineation, feature engineering, and machine learning evaluation will be detailed. Last, the paper will provide an outlook into the future role of hybrid

  5. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  6. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...
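
    As a hedged illustration of the kind of knowledge-production-function estimation described, the sketch below fits a logit of a product-innovation indicator on big data analytics use plus typical firm controls; all data are synthetic and the coefficients invented, so it only mirrors the structure of such a regression, not the paper's results.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 500
        bda = rng.binomial(1, 0.3, n)            # firm uses big data analytics (0/1)
        log_size = rng.normal(4.0, 1.0, n)       # log employment, a standard control
        rd = rng.binomial(1, 0.5, n)             # in-house R&D activity (0/1)
        latent = -2.0 + 0.8 * bda + 0.3 * log_size + 0.6 * rd + rng.logistic(size=n)
        innovate = (latent > 0).astype(int)      # product innovation indicator

        X = sm.add_constant(np.column_stack([bda, log_size, rd]))
        fit = sm.Logit(innovate, X).fit(disp=False)
        print(fit.summary(xname=["const", "bda", "log_size", "rd"]))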

  7. From big data to deep insight in developmental science.

    Science.gov (United States)

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  8. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
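
    The core trick behind such fast inequality joins can be shown in a few lines. The sketch below is a simplified illustration of the sorted-array plus bit-array idea (not the full IEJoin algorithm): it finds all pairs (i, j) with x_i < x_j and y_i > y_j, assuming distinct attribute values.

        def ineq_self_join(rows):
            # rows: list of (row_id, x, y); returns pairs (i, j) with x_i < x_j and y_i > y_j.
            n = len(rows)
            by_x = sorted(range(n), key=lambda k: rows[k][1])    # ascending x
            x_rank = {k: r for r, k in enumerate(by_x)}          # permutation array
            by_y = sorted(range(n), key=lambda k: -rows[k][2])   # descending y
            seen = [False] * n                                   # bit-array over x-ranks
            out = []
            for k in by_y:                       # rows already visited have larger y
                r = x_rank[k]
                for pos in range(r):             # set bits left of r: smaller x, larger y
                    if seen[pos]:
                        out.append((rows[by_x[pos]][0], rows[k][0]))
                seen[r] = True
            return out

        print(ineq_self_join([("a", 1, 9), ("b", 2, 5), ("c", 3, 7)]))
        # [('a', 'c'), ('a', 'b')]

    The bit-array scan only ever visits candidates that already satisfy the y-predicate, which is the search-space reduction the dissertation refers to; the published IEJoin adds further optimizations and handles ties and two-table joins on top of this idea.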

  10. [Big Data- challenges and risks].

    Science.gov (United States)

    Krauß, Manuela; Tóth, Tamás; Hanika, Heinrich; Kozlovszky, Miklós; Dinya, Elek

    2015-12-06

    The term "Big Data" is commonly used to describe the growing mass of information being created recently. New conclusions can be drawn and new services can be developed by the connection, processing and analysis of these information. This affects all aspects of life, including health and medicine. The authors review the application areas of Big Data, and present examples from health and other areas. However, there are several preconditions of the effective use of the opportunities: proper infrastructure, well defined regulatory environment with particular emphasis on data protection and privacy. These issues and the current actions for solution are also presented.

  11. Semantic Web technologies for the big data in life sciences.

    Science.gov (United States)

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies are continuing to grow in not only size but also variety and complexity, with great speed. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in life sciences, big data-related projects and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps to understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for the big data in life sciences.
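
    As a small, hedged illustration of what semantically processing life sciences data looks like in practice, the sketch below loads a hand-made RDF snippet and runs a SPARQL query over it with rdflib; the ex: vocabulary (ex:encodes, ex:expressedIn) is invented purely for this example.

        from rdflib import Graph

        ttl = """
        @prefix ex: <http://example.org/> .
        ex:BRCA1 a ex:Gene ; ex:encodes ex:BRCA1_protein ; ex:expressedIn ex:Breast .
        ex:TP53  a ex:Gene ; ex:encodes ex:P53_protein  ; ex:expressedIn ex:Breast .
        """
        g = Graph()
        g.parse(data=ttl, format="turtle")

        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?gene ?protein WHERE {
            ?gene a ex:Gene ; ex:encodes ?protein ; ex:expressedIn ex:Breast .
        }
        """
        for gene, protein in g.query(q):
            print(gene, protein)

    The same query pattern works unchanged whether the graph holds two triples or is federated across large public life sciences endpoints, which is the integration property the survey emphasizes.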

  12. Intense laser field effects on a Woods-Saxon potential quantum well

    Science.gov (United States)

    Restrepo, R. L.; Morales, A. L.; Akimov, V.; Tulupenko, V.; Kasapoglu, E.; Ungan, F.; Duque, C. A.

    2015-11-01

    This paper presents the results of a theoretical study of the effects of a non-resonant intense laser field and of electric and magnetic fields on the optical properties of a quantum well (QW) made with a Woods-Saxon potential profile. The electric field and the intense laser field are applied along the growth direction of the Woods-Saxon quantum well, and the magnetic field is oriented perpendicularly. To calculate the energy and the wave functions of the electron in the Woods-Saxon quantum well, the effective mass approximation and the envelope wave function method are used. The confinement in the Woods-Saxon quantum well is changed drastically by the application of the intense laser field or by the effect of the electric and magnetic fields. The optical properties are calculated using the compact density matrix approach.
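
    A minimal numerical sketch of the confinement problem described here is given below: the Woods-Saxon well V(z) = -V0 / (1 + exp((|z| - R)/a)) with a constant electric-field tilt F·z, solved on a grid by finite differences (dimensionless units, hbar = m* = 1). All parameter values are illustrative, not taken from the paper, and the laser and magnetic fields are omitted.

        import numpy as np

        N, L = 800, 40.0                       # grid points, box size
        z = np.linspace(-L / 2, L / 2, N)
        dz = z[1] - z[0]

        V0, R, a, F = 10.0, 5.0, 0.8, 0.05     # depth, half-width, edge softness, field
        V = -V0 / (1.0 + np.exp((np.abs(z) - R) / a)) + F * z   # Woods-Saxon + tilt

        # Hamiltonian H = -(1/2) d^2/dz^2 + V(z), discretized with central differences.
        main = 1.0 / dz**2 + V
        off = np.full(N - 1, -0.5 / dz**2)
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

        E = np.linalg.eigvalsh(H)[:3]          # lowest confinement levels
        print("lowest levels:", np.round(E, 4))

    Re-running the sketch with different F shows how the field tilts the well and shifts the levels, which is the qualitative mechanism behind the field-dependent optical properties the paper computes.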

  13. Modeling and processing for next-generation big-data technologies with applications and case studies

    CERN Document Server

    Barolli, Leonard; Barolli, Admir; Papajorgji, Petraq

    2015-01-01

    This book covers the latest advances in Big Data technologies and provides the readers with a comprehensive review of the state-of-the-art in Big Data processing, analysis, analytics, and other related topics. It presents new models, algorithms, software solutions and methodologies, covering the full data cycle, from data gathering to their visualization and interaction, and includes a set of case studies and best practices. New research issues, challenges and opportunities shaping the future agenda in the field of Big Data are also identified and presented throughout the book, which is intended for researchers, scholars, advanced students, software developers and practitioners working at the forefront in their field.

  14. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  15. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    Full Text Available BACKGROUND: Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of the immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to a Big domain, which would provide a novel functional role for the proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All four selected domains bind Ca²+ with dissociation constants of 2-4 µM. The Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of the Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of the selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with the Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function of these proteins via Ca²+ binding.

  16. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  17. Summary of field operations, well TRN-1

    International Nuclear Information System (INIS)

    Fritts, J.E.; Thomas, E.; McCord, J.P.

    1996-03-01

    TRN-1 was drilled near the SE corner of Kirtland Air Force Base to a depth of 510 feet. This well is part of the Site-Wide Hydrogeologic Characterization task field program, which belongs to Sandia's Environmental Restoration Project. After drilling, the borehole was logged, plugged to a depth of 352 ft, and completed as a monitoring well. The sand pack interval is from 305 to 352 ft and the screen interval is from 320 to 340 ft. During field operations, important subsurface geologic and hydrologic data were obtained (drill cuttings, geophysical logs of the alluvial cover). Identification of the Abo formation in the subsurface will be useful. The subsurface hydrologic data will help define the local hydrostratigraphic framework within the bedrock. Future aquifer testing will be conducted to determine transmissivity and other hydraulic properties.

  18. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  19. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  20. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Given the importance the term Big Data has acquired, this study set out to examine and analyze exhaustively the state of the art of Big Data; as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics in the management of Big Data, so that everything concerning the central topic of the research can be understood. The methodology used included reviewing the state of the art of Big Data and presenting its current situation; describing Big Data technologies; presenting some of the NoSQL databases, which are those that allow the processing of data in unstructured formats; and showing the data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, since this research begins to map out the Big Data landscape.

  1. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach

    Directory of Open Access Journals (Sweden)

    Mike W.-L. Cheung

    2016-05-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists, and probably the most crucial one, is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
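
    The gist of the approach is easy to sketch: split the data into manageable chunks, run the same analysis on each chunk, then pool the chunk-level estimates as in a meta-analysis. The paper demonstrates this in R; the following is an independent Python analogue on simulated data, using generic fixed-effect inverse-variance pooling rather than the authors' exact procedure.

      import numpy as np

      def analyze(cx, cy):
          # OLS slope and its sampling variance for one chunk
          X = np.column_stack([np.ones(len(cx)), cx])
          beta, *_ = np.linalg.lstsq(X, cy, rcond=None)
          resid = cy - X @ beta
          sigma2 = resid @ resid / (len(cy) - 2)
          return beta[1], sigma2 * np.linalg.inv(X.T @ X)[1, 1]

      rng = np.random.default_rng(0)
      x = rng.normal(size=100_000)
      y = 0.5 * x + rng.normal(size=100_000)          # true slope: 0.5

      # Split / analyze / meta-analyze:
      chunks = zip(np.array_split(x, 10), np.array_split(y, 10))
      b, v = np.array([analyze(cx, cy) for cx, cy in chunks]).T
      w = 1.0 / v                                     # inverse-variance weights
      pooled, se = (w * b).sum() / w.sum(), (1.0 / w.sum()) ** 0.5
      print(f"pooled slope = {pooled:.3f} +/- {se:.3f}")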

  2. Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach.

    Science.gov (United States)

    Cheung, Mike W-L; Jak, Suzanne

    2016-01-01

    Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.

  3. Big and complex data analysis methodologies and applications

    CERN Document Server

    2017-01-01

    This volume conveys some of the surprises, puzzles and success stories in high-dimensional and complex data analysis and related fields. Its peer-reviewed contributions showcase recent advances in variable selection, estimation and prediction strategies for a host of useful models, as well as essential new developments in the field. The continued and rapid advancement of modern technology now allows scientists to collect data of increasingly unprecedented size and complexity. Examples include epigenomic data, genomic data, proteomic data, high-resolution image data, high-frequency financial data, functional and longitudinal data, and network data. Simultaneous variable selection and estimation is one of the key statistical problems involved in analyzing such big and complex data. The purpose of this book is to stimulate research and foster interaction between researchers in the area of high-dimensional data analysis. More concretely, its goals are to: 1) highlight and expand the breadth of existing methods in...

  4. Quantum nature of the big bang: An analytical and numerical investigation

    International Nuclear Information System (INIS)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-01-01

    Analytical and numerical methods are developed to analyze the quantum nature of the big bang in the setting of loop quantum cosmology. They enable one to explore the effects of quantum geometry on both the gravitational and matter sectors and significantly extend the known results on the resolution of the big bang singularity. Specifically, the following results are established for the homogeneous isotropic model with a massless scalar field: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the 'emergent time' idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background-independent methods, the quantum evolution is, unlike in other approaches, deterministic across the deep Planck regime. Our constructions also provide a conceptual framework and technical tools which can be used in more general models. In this sense, they provide foundations for analyzing physical issues associated with the Planck regime of loop quantum cosmology as a whole.

  5. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
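
    To make the idea of a declarative data quality rule concrete, here is a generic, self-contained sketch (not BigDansing's actual API or rule syntax): a functional dependency zip → city, checked by grouping rather than by the naive enumeration of all tuple pairs that the paper identifies as costly.

      from collections import defaultdict

      # Toy relation; in a real system these rows would live on a distributed platform.
      rows = [
          {"zip": "10001", "city": "New York"},
          {"zip": "10001", "city": "NYC"},      # violates zip -> city
          {"zip": "60601", "city": "Chicago"},
      ]

      # Rule: tuples that agree on zip must agree on city (a functional dependency).
      cities_by_zip = defaultdict(set)
      for r in rows:
          cities_by_zip[r["zip"]].add(r["city"])

      violations = {z: c for z, c in cities_by_zip.items() if len(c) > 1}
      print(violations)    # {'10001': {'New York', 'NYC'}} (set order may vary)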

  6. Entering the ‘big data’ era in medicinal chemistry: molecular promiscuity analysis revisited

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-01-01

    The ‘big data’ concept plays an increasingly important role in many scientific fields. Big data involves more than the unprecedentedly large volumes of data that are becoming available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on the chosen data selection and confidence criteria, as we also demonstrate. PMID:28670471

  7. Big data business models: Challenges and opportunities

    Directory of Open Access Journals (Sweden)

    Ralph Schroeder

    2016-12-01

    This paper, based on 28 interviews with a range of business leaders and practitioners, examines the current state of big data use in business, as well as the main opportunities and challenges presented by big data. It begins with an account of the current landscape and what is meant by big data. Next, it draws distinctions between the ways organisations use data and provides a taxonomy of big data business models. We observe a variety of different business models, depending not only on sector, but also on whether the main advantages derive from analytics capabilities or from having ready access to valuable data sources. Some major challenges emerge from this account, including data quality and protectiveness about sharing data. The conclusion discusses these challenges and points to the tensions and differing perceptions about how data should be governed between business practitioners, the promoters of open data, and the wider public.

  8. Central America : Big Data in Action for Development

    OpenAIRE

    World Bank

    2014-01-01

    This report stemmed from a World Bank pilot activity to explore the potential of big data to address development challenges in Central American countries. As part of this activity we collected and analyzed a number of examples of leveraging big data for development. Because of the growing interest in this topic this report makes available to a broader audience those examples as well as the...

  9. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, related to the storage, analytics and visualization of big data; the human aspects of big data; and the process management dimension, which addresses big data management from both a technological and a business perspective.

  10. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  11. How should we do the history of Big Data?

    OpenAIRE

    David Beer

    2016-01-01

    Taking its lead from Ian Hacking’s article ‘How should we do the history of statistics?’, this article reflects on how we might develop a sociologically informed history of big data. It argues that within the history of social statistics we have a relatively well-developed history of the material phenomenon of big data. Yet, this article argues, we now need to take the concept of ‘big data’ seriously: there is a pressing need to explore the type of work that is being done by that concept. ...

  12. Phantom inflation and the 'Big Trip'

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Diaz, Pedro F. [Colina de los Chopos, Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)]. E-mail: p.gonzalezdiaz@imaff.cfmac.csic.es; Jimenez-Madrid, Jose A. [Colina de los Chopos, Instituto de Matematicas y Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 121, 28006 Madrid (Spain)

    2004-08-19

    Primordial inflation is regarded as being driven by a phantom field, which is here implemented as a scalar field satisfying an equation of state p = ωρ, with ω < -1. Aggravated by the weird properties of phantom energy, this poses a serious problem for the exit from the inflationary phase. We argue, however, in favor of the speculation that a smooth exit from the phantom inflationary phase can still tentatively be recovered by considering a multiverse scenario in which the primordial phantom universe travels in time toward a future universe filled with usual radiation, before reaching the big rip. We call this transition the 'Big Trip' and assume it to take place with the help of some form of anthropic principle which chooses our current universe as the final destination of the time transition.
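
    For context, a standard consequence of this equation of state (a textbook relation, not a result specific to this paper): energy conservation for a fluid with p = ωρ in an expanding universe gives

      \[
        \dot{\rho} + 3\,\frac{\dot{a}}{a}\,(1+\omega)\,\rho = 0
        \quad\Longrightarrow\quad
        \rho \propto a^{-3(1+\omega)},
      \]

    so for ω < -1 the energy density grows as the universe expands, which is what drives phantom cosmologies toward a big rip in finite time.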

  13. Big Data: an exploration of research, technologies and application cases

    Directory of Open Access Journals (Sweden)

    Emilcy J. Hernández-Leal

    2017-05-01

    Big Data has become a worldwide trend and, although it still lacks a consensual scientific or academic definition, it portends ever greater growth of the market surrounding it and of the associated research areas. This paper reports a systematic review of the literature on Big Data, including a state of the art of the techniques and technologies associated with Big Data, covering data capture, processing, analysis and visualization. The characteristics, strengths, weaknesses and opportunities of some applications and Big Data models, mainly those supporting modeling, analysis and data mining, are explored. Likewise, some of the future trends for the development of Big Data are introduced through the basic aspects, scope and importance of each one. The methodology used for the exploration involves two strategies: the first is a scientometric analysis, and the second is a categorization of documents through a web tool supporting the literature review process. As results, a summary and conclusions about the subject are generated, and possible scenarios for research work in the field emerge.

  14. A proposed framework of big data readiness in public sectors

    Science.gov (United States)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following the private sector's moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance service delivery of the public sector within its financial resource constraints. The Malaysian government, in particular, has made big data one of the main items on the national agenda. Notwithstanding the government's commitment to promoting big data amongst government agencies, the degree of readiness of the agencies, as well as of their employees, is crucial in ensuring the successful deployment of big data. This paper, therefore, proposes a conceptual framework to investigate the perceived readiness for big data potentials amongst Malaysian government agencies. The perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness among public agencies regarding big data potentials, and into the outcomes expected from greater or lower change readiness among the public sectors.

  15. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  16. Big Science, co-publication and collaboration: getting to the core

    Energy Technology Data Exchange (ETDEWEB)

    Kahn, M.

    2016-07-01

    International collaboration in science has risen considerably in the last two decades (UNESCO, 2010). In the same period Big Science collaborations have proliferated in physics, astronomy, astrophysics, and medicine. Publications that use Big Science data draw on the expertise of those who design and build the equipment and software, as well as the scientific community. Over time a set of ‘rules of use’ has emerged that protects their intellectual property but that may have the unintended consequence of enhancing co-publication counts. This in turn distorts the use of co-publication data as a proxy for collaboration. The distorting effects are illustrated by means of a case study of the BRICS countries that recently issued a declaration on scientific and technological cooperation with specific fields allocated to each country. It is found that with a single exception the dominant research areas of collaboration are different to individual country specializations. The disjuncture between such ‘collaboration’ and the intent of the declaration raises questions of import to science policy, for the BRICS in particular and the measurement of scientific collaboration more generally. (Author)

  17. What Is Big Data and Why Is It Important?

    Science.gov (United States)

    Pence, Harry E.

    2014-01-01

    Big Data Analytics is a topic fraught with both positive and negative potential. Big Data is defined not just by the amount of information involved but also its variety and complexity, as well as the speed with which it must be analyzed or delivered. The amount of data being produced is already incredibly great, and current developments suggest…

  18. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a culture of registration, and IT-competent employees and customers, which make a leading position possible, but only if companies get themselves ready for the next big data wave.

  19. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an ever-larger volume of information is being constantly recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods increases when they are used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
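
    A minimal simulation of the first pitfall, with invented numbers: a very large but selection-biased sample yields a precise-looking estimate that is systematically wrong.

      import numpy as np

      rng = np.random.default_rng(0)
      # Population: 90% "offline" (mean 0), 10% "online" (mean 1); true mean ~ 0.1.
      pop = np.concatenate([rng.normal(0, 1, 900_000), rng.normal(1, 1, 100_000)])

      # A big data source that mostly observes the online group (selection bias):
      sample = np.concatenate([rng.normal(0, 1, 50_000), rng.normal(1, 1, 950_000)])

      print(f"true mean   ~ {pop.mean():+.3f}")
      print(f"biased mean ~ {sample.mean():+.3f}  (large n, small spread, wrong answer)")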

  20. Burgernomics: a Big Mac™ guide to purchasing power parity

    OpenAIRE

    Michael R. Pakko; Patricia S. Pollard

    2003-01-01

    The theory of purchasing power parity (PPP) has long been a staple of international economic analysis. Recent years have seen the rise in popularity of a tongue-in-cheek, fast-food version of PPP: The Big Mac™ index. In this article, Michael Pakko and Patricia Pollard describe how comparisons of Big Mac prices around the world contain the ingredients necessary to demonstrate the fundamental principles of PPP. They show that the Big Mac index does nearly as well as more comprehensive measures ...
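
    The index's arithmetic is simple enough to sketch. The prices and market exchange rate below are hypothetical placeholders, not actual Economist data.

      def implied_ppp(price_local, price_us):
          """Implied PPP exchange rate: units of local currency per US dollar."""
          return price_local / price_us

      def valuation(market_rate, price_local, price_us):
          """Over-(+) or under-(-) valuation of the local currency vs. the dollar."""
          return implied_ppp(price_local, price_us) / market_rate - 1.0

      price_us = 5.00        # Big Mac price in the US, USD (hypothetical)
      price_jp = 450.0       # Big Mac price in Japan, JPY (hypothetical)
      market_rate = 150.0    # market exchange rate, JPY per USD (hypothetical)

      print(f"implied PPP rate: {implied_ppp(price_jp, price_us):.1f} JPY/USD")
      print(f"yen valuation:    {valuation(market_rate, price_jp, price_us):+.1%}")

    With these numbers the implied PPP rate is 90 JPY/USD, so at a market rate of 150 the yen would be about 40% undervalued against the dollar — exactly the kind of comparison the article builds its exposition of PPP on.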

  1. Introduction to big bang nucleosynthesis and modern cosmology

    Science.gov (United States)

    Mathews, Grant J.; Kusakabe, Motohiko; Kajino, Toshitaka

    Primordial nucleosynthesis remains as one of the pillars of modern cosmology. It is the testing ground upon which many cosmological models must ultimately rest. It is our only probe of the universe during the important radiation-dominated epoch in the first few minutes of cosmic expansion. This paper reviews the basic equations of space-time, cosmology, and big bang nucleosynthesis. We also summarize the current state of observational constraints on primordial abundances along with the key nuclear reactions and their uncertainties. We summarize which nuclear measurements are most crucial during the big bang. We also review various cosmological models and their constraints. In particular, we analyze the constraints that big bang nucleosynthesis places upon the possible time variation of fundamental constants, along with constraints on the nature and origin of dark matter and dark energy, long-lived supersymmetric particles, gravity waves, and the primordial magnetic field.
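
    For orientation, two of the standard relations such a review rests on (textbook results, not claims specific to this paper): the Friedmann equation governing the expansion rate during the radiation-dominated epoch, and the neutron-to-proton ratio that largely fixes the primordial helium abundance,

      \[
        H^{2} = \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho,
        \qquad
        \left.\frac{n}{p}\right|_{T_f} \simeq e^{-\Delta m\,c^{2}/kT_{f}} \approx \frac{1}{6},
        \qquad
        Y_{p} \simeq \frac{2\,(n/p)}{1 + n/p} \approx 0.25,
      \]

    where the freeze-out ratio of about 1/6 decays to roughly 1/7 by the onset of nucleosynthesis, yielding the quoted helium mass fraction.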

  2. Investigation of well redevelopment techniques for the MWD Well Field, Savannah River Site, South Carolina

    International Nuclear Information System (INIS)

    Kroening, D.E.; Snipes, D.S.; Falta, R.W.; Benson, S.M.

    1994-01-01

    Clemson University, in cooperation with the Savannah River Site (SRS), is investigating well treatment techniques at the Mixed Waste Disposal (MWD) Well Field at SRS. This well field consists of fifteen wells screened in three aquifers with a downward-trending head gradient. Based on aquifer performance tests of the MWD wells, it has been determined that many of the wells exhibit low well efficiencies and high skin factors, indicative of damaged wells. Bacterial investigations show that the biological activity in these wells is low, probably due to a high-pH environment. Evaluation of the Calcite Saturation Index for each well indicates that nearly all of the MWD wells have the potential for precipitating calcite, and calcite deposits have been observed on downhole equipment. The calcite deposits may occur due to the dissolution of the grout mixtures by waters infiltrating down the well annulus, driven by the downward head gradient, with subsequent precipitation of calcite in the higher-pH sand pack. Well rehabilitation techniques currently under investigation include acidification, hydraulic fracturing and traditional physical methods. In addition to treating the wells at MWD, the authors plan to perform aquifer performance tests and evaluate post-treatment skin factors. Further research into the long-term effects of well treatment will be conducted, focusing on long-term chemical changes brought about by the treatments.
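
    For reference, the calcite saturation index invoked above is conventionally defined (a standard geochemical relation, not notation introduced by this report) as

      \[
        SI = \log_{10}\frac{\mathrm{IAP}}{K_{sp}}
           = \log_{10}\frac{a_{\mathrm{Ca^{2+}}}\; a_{\mathrm{CO_3^{2-}}}}{K_{sp,\,\mathrm{calcite}}},
      \]

    where IAP is the ion activity product; SI > 0 indicates supersaturation and hence a tendency for calcite to precipitate, consistent with the deposits observed on the downhole equipment.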

  3. Accessibility of the pre-big-bang models to LIGO

    International Nuclear Information System (INIS)

    Mandic, Vuk; Buonanno, Alessandra

    2006-01-01

    The recent search for a stochastic background of gravitational waves with the LIGO interferometers has produced a new upper bound on the amplitude of this background in the 100 Hz region. We investigate the implications of the current and future LIGO results for pre-big-bang models of the early Universe, determining the exclusion regions in the parameter space of the minimal pre-big-bang scenario. Although the current LIGO reach is still weaker than the indirect bound from big bang nucleosynthesis, future runs by LIGO, in the coming year, and by Advanced LIGO (∼2009) should further constrain the parameter space, and in some parts surpass the big bang nucleosynthesis bound. It will be more difficult to constrain the parameter space in nonminimal pre-big-bang models, which are characterized by multiple cosmological phases in the not yet well understood stringy phase, and where the higher-order curvature and/or quantum-loop corrections in the string effective action should be included.

  4. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating, we may find information, facts, social insights and benchmarks that were once virtually impossible to find or simply did not exist. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  5. An atomic model of the Big Bang

    Science.gov (United States)

    Lasukov, V. V.

    2013-03-01

    An atomic model of the Big Bang has been developed on the basis of quantum geometrodynamics with a nonzero Hamiltonian and on the concept of gravitation developed by Logunov, asymptotically combined with Gliner's idea of a material interpretation of the cosmological constant. The Lemaître primordial atom in superspace-time, whose spatial coordinate is the so-called scaling factor of the Logunov metric of the effective Riemann space, acts as the Big Bang model. The primordial atom in superspace-time corresponds to space-time structures (spheres, lines, and level surfaces) of the Minkowski space-time that are real within the Logunov gravitation theory, the foregoing structures being filled with a scalar field with a negative density of potential energy.

  6. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe.

  7. About the Big Graphs Arising when Forming the Diagnostic Models in a Reconfigurable Computing Field of Functional Monitoring and Diagnostics System of the Spacecraft Onboard Control Complex

    Directory of Open Access Journals (Sweden)

    L. V. Savkin

    2015-01-01

    One of the problems in implementing multipurpose complete systems based on reconfigurable computing fields (RCF) is the optimal redistribution of logical-arithmetic resources over a growing scope of functional tasks. Irrespective of complexity, each task is transformed into an oriented graph whose functional and topological structure is appropriately imposed on the RCF, which is based, as a rule, on a field-programmable gate array (FPGA). Due to the limited hardware configurations and functions realizable by means of the switched logic blocks (SLB), the above problem becomes even more critical when there is a need, within a strictly allocated RCF fragment, to implement a task even more complex than the one solved in the previous computing step. In such cases it is possible to speak of graphs of big dimension with respect to the allocated RCF fragment. The article considers this problem through the development of algorithms for diagnostics and control of the onboard control complex of a spacecraft using an RCF. It gives examples of big graphs arising, with respect to the allocated RCF fragment, when forming the hardware levels of a diagnostic model, which in this case is any hardware-based diagnostic algorithm in the RCF. The article reviews examples of big graphs arising when complicated diagnostic models are formed due to drastic differences in the formation of hardware levels on closely located RCF fragments. It also pays attention to big graphs emerging when multichannel diagnostic models are formed. Three main ways to solve the problem of big graphs with respect to the allocated RCF fragment are given: splitting the graph into fragments; using pop-up windows with relocation and memorization of intermediate values of functions of the high hardware levels of diagnostic models; and deep adaptive updating of the diagnostic model. It is shown that the last of the three ways is the most efficient.

  8. Consideration of the direction for improving RI-biomics information system for using big data in radiation field

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Hyun; Kim, Joo Yeon; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of); Lim, Young Khi [Dept. of Radiological Science, Gachon University, Incheon (Korea, Republic of)

    2017-03-15

    RI-Biomics is a fusion technology in radiation fields for evaluating the in-vivo dynamics, such as absorption, distribution, metabolism and excretion (RI-ADME), of new drugs and materials using radioisotopes, and for the quantitative evaluation of their efficacy. RI-Biomics information is provided by RIBio-Info, the information system developed for distributing this information, and in this study three requirements for improving the RIBio-Info system have been derived through a review of recent big data trends. The three requirements are defined as resource, technology and manpower, and some considerations for applying big data in the RIBio-Info system are suggested: first, applicable external big data have to be obtained; second, the infrastructure for applying big data has to be expanded; and finally, data scientists able to analyze large-scale information have to be trained. In this way, an original technology for analyzing atypical, large-scale data can be created, and this technology can contribute to establishing a basis for creating new value in the RI-Biomics field.

  9. Consideration of the direction for improving RI-biomics information system for using big data in radiation field

    International Nuclear Information System (INIS)

    Lee, Seung Hyun; Kim, Joo Yeon; Park, Tai Jin; Lim, Young Khi

    2017-01-01

    RI-Biomics is a fusion technology in radiation fields for evaluating the in-vivo dynamics, such as absorption, distribution, metabolism and excretion (RI-ADME), of new drugs and materials using radioisotopes, and for the quantitative evaluation of their efficacy. RI-Biomics information is provided by RIBio-Info, the information system developed for distributing this information, and in this study three requirements for improving the RIBio-Info system have been derived through a review of recent big data trends. The three requirements are defined as resource, technology and manpower, and some considerations for applying big data in the RIBio-Info system are suggested: first, applicable external big data have to be obtained; second, the infrastructure for applying big data has to be expanded; and finally, data scientists able to analyze large-scale information have to be trained. In this way, an original technology for analyzing atypical, large-scale data can be created, and this technology can contribute to establishing a basis for creating new value in the RI-Biomics field.

  10. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    Science.gov (United States)

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are matched by the challenge of understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  11. Bohmian quantization of the big rip

    International Nuclear Information System (INIS)

    Pinto-Neto, Nelson; Pantoja, Diego Moraes

    2009-01-01

    It is shown in this paper that minisuperspace quantization of homogeneous and isotropic geometries with phantom scalar fields, when examined in the light of the Bohm-de Broglie interpretation of quantum mechanics, does not eliminate, in general, the classical big rip singularity present in the classical model. For some values of the Hamilton-Jacobi separation constant present in a class of quantum state solutions of the Wheeler-De Witt equation, the big rip can be either completely eliminated or may still constitute a future attractor for all expanding solutions. This is contrary to the conclusion presented in [M. P. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D 74, 044022 (2006).], using a different interpretation of the wave function, where the big rip singularity is completely eliminated ('smoothed out') through quantization, independently of such a separation constant and for all members of the above mentioned class of solutions. This is an example of the very peculiar situation where different interpretations of the same quantum state of a system are predicting different physical facts, instead of just giving different descriptions of the same observable facts: in fact, there is nothing more observable than the fate of the whole Universe.

  12. Big Data Application in Biomedical Research and Health Care: A Literature Review.

    Science.gov (United States)

    Luo, Jake; Wu, Min; Gopukumar, Deepika; Zhao, Yiqing

    2016-01-01

    Big data technologies are increasingly used for biomedical and health-care informatics research. Large amounts of biological and clinical data have been generated and collected at an unprecedented speed and scale. For example, the new generation of sequencing technologies enables the processing of billions of DNA sequence data per day, and the application of electronic health records (EHRs) is documenting large amounts of patient data. The cost of acquiring and analyzing biomedical data is expected to decrease dramatically with the help of technology upgrades, such as the emergence of new sequencing machines, the development of novel hardware and software for parallel computing, and the extensive expansion of EHRs. Big data applications present new opportunities to discover new knowledge and create novel methods to improve the quality of health care. The application of big data in health care is a fast-growing field, with many new discoveries and methodologies published in the last five years. In this paper, we review and discuss big data application in four major biomedical subdisciplines: (1) bioinformatics, (2) clinical informatics, (3) imaging informatics, and (4) public health informatics. Specifically, in bioinformatics, high-throughput experiments facilitate the research of new genome-wide association studies of diseases, and with clinical informatics, the clinical field benefits from the vast amount of collected patient data for making intelligent decisions. Imaging informatics is now more rapidly integrated with cloud platforms to share medical image data and workflows, and public health informatics leverages big data techniques for predicting and monitoring infectious disease outbreaks, such as Ebola. In this paper, we review the recent progress and breakthroughs of big data applications in these health-care domains and summarize the challenges, gaps, and opportunities to improve and advance big data applications in health care.

  13. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  14. Big Data for Infectious Disease Surveillance and Modeling

    OpenAIRE

    Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile

    2016-01-01

    We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as socia...

  15. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The book explains that big data is where we use huge quantities of data to make better predictions, based on identifying patterns in the data rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  16. Maximizing probable oil field profit: uncertainties on well spacing

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1997-01-01

    The influence of uncertainties in field development costs, well costs, lifting costs, selling price, discount factor, and oil field reserves is evaluated for its impact on assessing probable ranges of uncertainty on present-day worth (PDW), oil field lifetime τ2/3, optimum number of wells (OWI), and the minimum (n₋) and maximum (n₊) number of wells that produce a PDW ≥ 0. The relative importance of the different factors contributing to the uncertainties in PDW, τ2/3, OWI, n₋ and n₊ is also analyzed. Numerical illustrations indicate how the maximum PDW depends on the ranges of parameter values, drawn from probability distributions using Monte Carlo simulations. In addition, the procedure illustrates the relative importance of the contributions of individual factors to the total uncertainty, so that one can assess where to place effort to improve the ranges of uncertainty, while the volatility of each estimate allows one to determine when such effort is needed. (author)
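
    A minimal sketch of this kind of Monte Carlo assessment follows. The cash-flow model and every parameter range below are invented for illustration; they are not the authors' model or data.

      import numpy as np

      rng = np.random.default_rng(42)
      N_TRIALS = 5_000
      WELLS = np.arange(1, 41)          # candidate development sizes
      T = np.arange(30)                 # 30-year production horizon

      def pdw(n, reserves, price, lift, well_cost, dev_cost, disc):
          """Present-day worth of an n-well development (toy decline-curve model)."""
          rate0 = 0.05 * reserves * n / (n + 10.0)       # initial annual rate, bbl/yr
          q = rate0 * np.exp(-0.1 * T)                   # exponentially declining output
          cash = (price - lift) * q / (1.0 + disc) ** T  # discounted net revenue
          return cash.sum() - dev_cost - well_cost * n

      samples = np.empty((N_TRIALS, WELLS.size))
      for i in range(N_TRIALS):
          # One draw of every uncertain input (all ranges are invented):
          reserves = rng.uniform(50e6, 150e6)   # recoverable reserves, bbl
          price    = rng.uniform(15.0, 35.0)    # selling price, $/bbl
          lift     = rng.uniform(3.0, 8.0)      # lifting cost, $/bbl
          wcost    = rng.uniform(2e6, 6e6)      # cost per well, $
          dcost    = rng.uniform(5e7, 1.5e8)    # field development cost, $
          disc     = rng.uniform(0.08, 0.15)    # discount rate
          samples[i] = [pdw(n, reserves, price, lift, wcost, dcost, disc) for n in WELLS]

      mean_pdw = samples.mean(axis=0)
      print("optimal number of wells (by mean PDW):", WELLS[mean_pdw.argmax()])
      print("P(PDW >= 0) at the optimum: %.2f" % (samples[:, mean_pdw.argmax()] >= 0).mean())
      # Rough analogue of the n- .. n+ range: well counts with majority-positive PDW.
      print("wells with P(PDW >= 0) > 0.5:", WELLS[(samples >= 0).mean(axis=0) > 0.5])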

  17. Time-depth and velocity trend analysis of the Wasagu field ...

    African Journals Online (AJOL)

    From this study of data sets from the Wasagu field in the Niger Delta, it has been found ... relief) cause big differences in bed velocities or where anisotropy is severe. ... of seismic data and checkshot data sets, on which lie three wells, a relationship ...

  18. Big Bang Darkleosynthesis

    OpenAIRE

    Krnjaic, Gordan; Sigurdson, Kris

    2014-01-01

    In a popular class of models, dark matter comprises an asymmetric population of composite particles with short-range interactions arising from a confined nonabelian gauge group. We show that coupling this sector to a well-motivated light mediator particle yields efficient darkleosynthesis, a dark-sector version of big-bang nucleosynthesis (BBN), in generic regions of parameter space. Dark matter self-interaction bounds typically require the confinement scale to be above Λ_QCD, which generica...

  19. Data privacy foundations, new developments and the big data challenge

    CERN Document Server

    Torra, Vicenç

    2017-01-01

    This book offers a broad, cohesive overview of the field of data privacy. It discusses, from a technological perspective, the problems and solutions of the three main communities working on data privacy: statistical disclosure control (those with a statistical background), privacy-preserving data mining (those working with data bases and data mining), and privacy-enhancing technologies (those involved in communications and security) communities. Presenting different approaches, the book describes alternative privacy models and disclosure risk measures as well as data protection procedures for respondent, holder and user privacy. It also discusses specific data privacy problems and solutions for readers who need to deal with big data.

  20. Big-hole drilling - the state of the art

    International Nuclear Information System (INIS)

    Lackey, M.D.

    1983-01-01

    The art of big-hole drilling has been in a continual state of evolution at the Nevada Test Site since the start of underground testing in 1961. Emplacement holes for nuclear devices are still being drilled by the rotary-drilling process, but almost all the hardware and systems have undergone many changes during the intervening years. The current design of bits, cutters, and other big-hole-drilling hardware results from contributions of manufacturers and Test Site personnel. The dual-string, air-lift, reverse-circulation system was developed at the Test Site. Necessity was really the Mother of this invention, but this circulation system is worthy of consideration under almost any condition. Drill rigs for big-hole drilling are usually adaptations of large oil-well drill rigs with minor modifications required to handle the big bits and drilling assemblies. Steel remains the favorite shaft lining material, but a lot of thought is being given to concrete linings, especially precast concrete

  1. Game, cloud architecture and outreach for The BIG Bell Test

    Science.gov (United States)

    Abellan, Carlos; Tura, Jordi; Garcia, Marta; Beduini, Federica; Hirschmann, Alina; Pruneri, Valerio; Acin, Antonio; Marti, Maria; Mitchell, Morgan

    The BIG Bell test uses input from the Bellsters, self-selected human participants contributing zeros and ones through an online video game, to perform a suite of quantum physics experiments. In this talk, we will explore the video game, the data infrastructure, and the outreach efforts of the BIG Bell test collaboration. First, we will discuss how the game was designed so as to eliminate possible feedback mechanisms that could influence people's behavior. Second, we will discuss the cloud architecture, designed for scalability, and explain how we sent each individual bit from the users to the labs. Using all the bits collected via the BIG Bell test interface, we will also show a data analysis of human randomness, e.g. are younger Bellsters more random than older Bellsters? Finally, we will talk about the outreach and communication efforts of the BIG Bell test collaboration, exploring the social media campaigns as well as the close interaction with teachers and educators to bring the project into classrooms.
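
    As a toy version of the human-randomness question raised above (simulated bits, not actual Bellster data), one could compare the per-participant Shannon entropy of the contributed bit streams across age groups:

      import numpy as np

      def shannon_entropy(bits):
          """Entropy in bits per symbol of a 0/1 sequence."""
          p1 = bits.mean()
          if p1 in (0.0, 1.0):
              return 0.0
          p0 = 1.0 - p1
          return -(p0 * np.log2(p0) + p1 * np.log2(p1))

      rng = np.random.default_rng(1)
      # 500 simulated participants per group, 200 bits each; the "older" group is
      # given a slight bias toward 1s purely to illustrate the comparison.
      young = rng.integers(0, 2, size=(500, 200))
      older = (rng.random((500, 200)) < 0.55).astype(int)

      print("young mean entropy:", round(np.mean([shannon_entropy(b) for b in young]), 4))
      print("older mean entropy:", round(np.mean([shannon_entropy(b) for b in older]), 4))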

  2. Pengembangan Aplikasi Antarmuka Layanan Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Gede Karya

    2017-11-01

    In the 2016 Higher Education Competitive Research Grant (Hibah Bersaing Dikti), we successfully developed models, infrastructure and application modules for Hadoop-based big data analysis. We also developed a virtual private network (VPN) that allows integration with, and access to, this infrastructure from outside the FTIS Computer Laboratory. The infrastructure and analysis modules are now to be offered as services to small and medium enterprises (SMEs) in Indonesia. This research aims to develop an application serving as the interface to the big data analysis services, integrated with the Hadoop cluster. The research began with finding appropriate methods and techniques for scheduling jobs, for invoking ready-made Java MapReduce (MR) application modules, and for tunneling input/output and constructing the metadata of service requests (inputs) and service outputs. These methods and techniques were then developed into a web-based service application, as well as an executable module that runs in a Java/J2EE-based programming environment and can access the Hadoop cluster in the FTIS Computer Lab. The resulting application can be accessed by the public through the site http://bigdata.unpar.ac.id. Based on the test results, the application functions well, in accordance with the specifications, and can be used to perform big data analysis. Keywords: web based service, big data analysis, Hadoop, J2EE
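
    The paper's analysis modules are Java MapReduce; purely as a generic illustration of the kind of job such a service dispatches to a Hadoop cluster, here is a minimal word-count pair for Hadoop Streaming written in Python (file names and paths are hypothetical):

      #!/usr/bin/env python3
      # mapper.py -- emit "word<TAB>1" for every whitespace-separated token on stdin
      import sys
      for line in sys.stdin:
          for word in line.split():
              print(word + "\t1")

      #!/usr/bin/env python3
      # reducer.py -- sum counts per word; Hadoop feeds the input sorted by key
      import sys
      current, count = None, 0
      for line in sys.stdin:
          word, _, n = line.rstrip("\n").partition("\t")
          if word != current:
              if current is not None:
                  print(current + "\t" + str(count))
              current, count = word, 0
          count += int(n)
      if current is not None:
          print(current + "\t" + str(count))

    A job like this would be submitted along the lines of: hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (all paths hypothetical).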

  3. Particle localization in a double-well potential by pseudo-supersymmetric fields

    International Nuclear Information System (INIS)

    Bagrov, V. G.; Samsonov, B. F.; Shamshutdinova, V. V.

    2011-01-01

    We study the properties of a particle moving in a double-well potential in the two-level approximation, placed in an additional external time-dependent field. Using the previously established property (J. Phys. A 41, 244023 (2008)) that any two-level system possesses a pseudo-supersymmetry, we introduce the notion of a pseudo-supersymmetric field. It is shown that these fields, even if their time dependence is not periodic, may produce the effect of localizing the particle in one of the wells of the double-well potential.

  4. Getting started with Greenplum for big data analytics

    CERN Document Server

    Gollapudi, Sunila

    2013-01-01

    Standard tutorial-based approach. "Getting Started with Greenplum for Big Data Analytics" is great for data scientists and data analysts with a basic knowledge of Data Warehousing and Business Intelligence platforms who are new to Big Data and who are looking to get a good grounding in how to use the Greenplum Platform. It's assumed that you will have some experience with database design and programming as well as be familiar with analytics tools like R and Weka.

  5. a New Look at the Big Bang

    Science.gov (United States)

    Wesson, Paul S.

    We give a mathematically exact and physically faithful embedding of curved 4D cosmology in a flat 5D space, thereby enabling visualization of the big bang in a new and informative way. In fact, in unified theories of fields and particles with real extra dimensions, it is possible to dispense with the initial singularity.

  6. The Case for "Big History."

    Science.gov (United States)

    Christian, David

    1991-01-01

    Urges an approach to the teaching of history that takes the largest possible perspective, crossing time as well as space. Discusses the problems and advantages of such an approach. Describes a course on "big" history that begins with time, creation myths, and astronomy, and moves on to paleontology and evolution. (DK)

  7. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions, and the various questions connected with big data. In this first part I will try to set out the essentials of Big Data theory and

  8. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  9. [Big data in imaging].

    Science.gov (United States)

    Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd

    2018-04-01

    Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.

  10. [Big data approaches in psychiatry: examples in depression research].

    Science.gov (United States)

    Bzdok, D; Karrer, T M; Habel, U; Schneider, F

    2017-11-29

    The exploration and therapy of depression are hampered by heterogeneous etiological mechanisms and various comorbidities. With the growing trend towards big data in psychiatry, research and therapy can increasingly target the individual patient. This novel objective requires special methods of analysis. The possibilities and challenges of the application of big data approaches in depression are examined in closer detail. Examples are given to illustrate the possibilities of big data approaches in depression research. Modern machine learning methods are compared to traditional statistical methods in terms of their potential in applications to depression. Big data approaches are particularly suited to the analysis of detailed observational data, the prediction of single data points or several clinical variables, and the identification of endophenotypes. A current challenge lies in the transfer of results into the clinical treatment of patients with depression. Big data approaches enable biological subtypes in depression to be identified and predictions in individual patients to be made. They have enormous potential for the prevention, early diagnosis, treatment choice and prognosis of depression as well as for treatment development.

  11. Astronomy in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Yanxia Zhang

    2015-05-01

    The fields of Astrostatistics and Astroinformatics are vital for dealing with the big data issues now faced by astronomy. Like other disciplines in the big data era, astronomy exhibits the characteristic "V"s of big data. In this paper, we list the different data mining algorithms used in astronomy, along with data mining software and tools related to astronomical applications. We present SDSS, a project often referred to by other astronomical projects, as the most successful sky survey in the history of astronomy and describe the factors influencing its success. We also discuss the success of Astrostatistics and Astroinformatics organizations and the conferences and summer schools on these issues that are held annually. All the above indicates that astronomers and scientists from other areas are ready to face the challenges and opportunities provided by massive data volumes.

  12. Phantom dark energy and cosmological solutions without the Big Bang singularity

    International Nuclear Information System (INIS)

    Baushev, A.N.

    2010-01-01

    The hypothesis is rapidly gaining popularity that the dark energy pervading our universe is extra-repulsive (-p>ρ). The density of such a substance (usually called phantom energy) grows with the cosmological expansion and may become infinite in a finite time, producing a Big Rip. In this Letter we analyze the late stages of the universe's evolution and demonstrate that the presence of phantom energy in the universe is not in itself enough to produce a Big Rip; the occurrence of this singularity requires the fulfillment of some additional, rather strong conditions. A more probable outcome of the cosmological evolution is the decay of the phantom field into 'normal' matter. The second, more intriguing consequence of the presence of the phantom field is the possibility of introducing a cosmological scenario that does not contain a Big Bang. In the framework of this model the universe expands eternally, while its density and other physical parameters oscillate over a wide range, never reaching the Planck values. Thus, the universe's evolution has no singularities at all.
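
    For orientation, the textbook scaling behind the Big Rip (standard background, not the Letter's own refined analysis): for a constant equation of state w = p/ρ with w < -1, the Friedmann equations give

    ```latex
    \[
    \rho \;\propto\; a^{-3(1+w)},
    \qquad
    a(t) \;\propto\; \left(t_{\mathrm{rip}} - t\right)^{\frac{2}{3(1+w)}},
    \]
    ```

    so the density grows as the universe expands and the scale factor diverges at the finite time t_rip. The Letter's point is that a dynamical phantom field need not keep w < -1 all the way to this singularity, which is why the additional conditions mentioned above are required.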

  13. Zero field spin splitting in asymmetric quantum wells

    International Nuclear Information System (INIS)

    Hao Yafei

    2012-01-01

    Spin splitting of asymmetric quantum wells is theoretically investigated in the absence of any electric field, including the contribution of interface-related Rashba spin-orbit interaction as well as linear and cubic Dresselhaus spin-orbit interaction. The effect of interface asymmetry on three types of spin-orbit interaction is discussed. The results show that interface-related Rashba and linear Dresselhaus spin-orbit interaction can be increased and cubic Dresselhaus spin-orbit interaction can be decreased by well structure design. For wide quantum wells, the cubic Dresselhaus spin-orbit interaction dominates under certain conditions, resulting in decreased spin relaxation time.
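
    For reference, the three contributions discussed above have standard two-dimensional forms (textbook expressions for a [001]-grown well; the interface-related Rashba term studied in the paper generalizes the first):

    ```latex
    \[
    H_{\mathrm{R}} = \alpha\left(\sigma_x k_y - \sigma_y k_x\right),
    \qquad
    H_{\mathrm{D}}^{(1)} = \beta\left(\sigma_x k_x - \sigma_y k_y\right),
    \qquad
    H_{\mathrm{D}}^{(3)} = \gamma\left(\sigma_x k_x k_y^{2} - \sigma_y k_y k_x^{2}\right),
    \]
    ```

    where the σ's are Pauli matrices, k is the in-plane wave vector, and α, β, γ are the Rashba, linear Dresselhaus, and cubic Dresselhaus coefficients. Because β scales with the quantized ⟨k_z²⟩ ≈ (π/L)² of a well of width L, widening the well suppresses the linear term, consistent with the cubic contribution dominating in wide wells.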

  14. Challenges and potential solutions for big data implementations in developing countries.

    Science.gov (United States)

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  15. Big data and information management: modeling the context decisional supported by sistemography

    Directory of Open Access Journals (Sweden)

    William Barbosa Vianna

    2016-04-01

    Introduction: The study is justified by the scarcity of studies in the field of information science addressing the phenomenon of big data from the perspective of information management. Objective: To identify and represent the general elements of the decision-making process in the context of big data. Methodology: An exploratory study of a theoretical and deductive nature. Results: The main elements involved in decision-making in a big data environment were identified and given a sistemographic representation. Conclusions: It was possible to develop a representation that will allow the further development of computer simulation.

  16. Underbalance well completion - a modern approach for mature gas fields

    Directory of Open Access Journals (Sweden)

    Tătaru Argentina

    2017-01-01

    The exploitation of natural gas fields in the Transylvanian Basin started a century ago. Most of these fields were discovered and developed between 1950 and 1970, so these reservoirs have over 50 years of production history. They are mature fields with very low reservoir pressure; for some of them the pressure is now 10-20% of its initial value. The biggest challenge for a production company is to perform completions and recompletions in depleted-reservoir wells. At the beginning it was not a problem to do a workover in these wells, because the completion fluids were lost to the reservoir and the pressure helped the wells to clean up. Now, because the reservoir pressure is very low, it takes time for a well to clean up. From time to time, some net pays have to be bypassed because fluid would be lost to the reservoir. The best method for recompletion work in these wells is therefore to work underbalanced, or even with the well under pressure. This paper presents some of the technologies used by Romgaz to accomplish this goal.
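
    As a rough illustration of the underbalance condition described above (a sketch with assumed, hypothetical numbers, not field data from the paper): a workover fluid column is underbalanced when its hydrostatic pressure at the perforations is below the reservoir pressure.

    ```python
    # Minimal sketch of an underbalance check for a depleted gas well.
    # All numbers are illustrative assumptions, not data from the paper.

    G = 9.81  # gravitational acceleration, m/s^2

    def hydrostatic_pressure(density_kg_m3: float, depth_m: float) -> float:
        """Hydrostatic pressure (Pa) of a fluid column of given density and height."""
        return density_kg_m3 * G * depth_m

    # Assumed example: 2000 m well, reservoir depleted to ~15% of an initial 200 bar.
    depth = 2000.0               # m
    p_reservoir = 0.15 * 200e5   # Pa (~30 bar)
    fluid_density = 1000.0       # kg/m^3 (fresh water)

    p_column = hydrostatic_pressure(fluid_density, depth)  # ~196 bar
    print(f"column: {p_column/1e5:.0f} bar, reservoir: {p_reservoir/1e5:.0f} bar")
    print("underbalanced" if p_column < p_reservoir else
          "overbalanced: fluid will be lost to the formation")
    ```

    A full water column far exceeds such a depleted reservoir pressure, which is why lightened fluids (nitrogen, foam) or working with the well under pressure become necessary.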

  17. Big Bang or vacuum fluctuation

    International Nuclear Information System (INIS)

    Zel'dovich, Ya.B.

    1980-01-01

    Some general properties of vacuum fluctuations in quantum field theory are described. The connection between the ''energy dominance'' of the energy density of vacuum fluctuations in curved space-time and the presence of singularity is discussed. It is pointed out that a de-Sitter space-time (with the energy density of the vacuum fluctuations in the Einstein equations) that matches the expanding Friedman solution may describe the history of the Universe before the Big Bang. (P.L.)

  18. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  19. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration could include that temperature increases may not have a large direct influence on regeneration due to the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate; whereas, the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  20. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  1. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second later), and the fate of the Universe are all discussed. (U.K.)

  2. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  3. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  4. Bigfoot Field Manual, Version 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.L.; Burrows, S.; Gower, S.T.; Cohen, W.B.

    1999-09-01

    The BigFoot Project is funded by the Earth Science Enterprise to collect and organize data to be used in the National Aeronautics and Space Administration's Earth Observing System (EOS) Validation Program. The data collected by the BigFoot Project are unique in being ground-based observations coincident with satellite overpasses. In addition to collecting data, the BigFoot project will develop and test new algorithms for scaling point measurements to the same spatial scales as the EOS satellite products. This BigFoot Field Manual will be used to achieve completeness and consistency of data collected at four initial BigFoot sites and at future sites that may collect similar validation data. Therefore, validation datasets submitted to the Oak Ridge National Laboratory Distributed Active Archive Center that have been compiled in a manner consistent with the field manual will be especially valuable in the validation program.

  5. Regularization of the big bang singularity with random perturbations

    Science.gov (United States)

    Belbruno, Edward; Xue, BingKan

    2018-03-01

    We show how to regularize the big bang singularity in the presence of random perturbations modeled by Brownian motion using stochastic methods. We prove that the physical variables in a contracting universe dominated by a scalar field can be continuously and uniquely extended through the big bang as a function of time to an expanding universe only for a discrete set of values of the equation of state satisfying special co-prime number conditions. This result significantly generalizes a previous result (Xue and Belbruno 2014 Class. Quantum Grav. 31 165002) that did not model random perturbations. This result implies that the extension from a contracting to an expanding universe for the discrete set of co-prime equation of state is robust, which is a surprising result. Implications for a purely expanding universe are discussed, such as a non-smooth, randomly varying scale factor near the big bang.

  6. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unreached. Big data is generally characterized by factors such as volume, velocity, and variety, which distinguish it from traditional data use. The possibilities for utilizing this technology are vast. Big data technology has touch points in differ...

  7. The Human Genome Project: big science transforms biology and medicine.

    Science.gov (United States)

    Hood, Leroy; Rowen, Lee

    2013-01-01

    The Human Genome Project has transformed biology through its integrated big science approach to deciphering a reference human genome sequence along with the complete sequences of key model organisms. The project exemplifies the power, necessity and success of large, integrated, cross-disciplinary efforts - so-called 'big science' - directed towards complex major objectives. In this article, we discuss the ways in which this ambitious endeavor led to the development of novel technologies and analytical tools, and how it brought the expertise of engineers, computer scientists and mathematicians together with biologists. It established an open approach to data sharing and open-source software, thereby making the data resulting from the project accessible to all. The genome sequences of microbes, plants and animals have revolutionized many fields of science, including microbiology, virology, infectious disease and plant biology. Moreover, deeper knowledge of human sequence variation has begun to alter the practice of medicine. The Human Genome Project has inspired subsequent large-scale data acquisition initiatives such as the International HapMap Project, 1000 Genomes, and The Cancer Genome Atlas, as well as the recently announced Human Brain Project and the emerging Human Proteome Project.

  8. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  9. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil ... Email: arkady@ll.mit.edu.

  10. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  11. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper "New games, new rules: big data and the changing context of strategy" as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution ... and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced, as distinct from the attributes of big data often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim ... that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  12. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Data beyond storage capacity and beyond processing power is called big data. The term is used for data sets so large or complex that traditional tools cannot handle them. Big data size is a constantly moving target, ranging year by year from a few dozen terabytes to many petabytes; for example, the amount of data produced by people on social networking sites grows rapidly every year. Big data is not only data; it has become a complete subject that includes various tools, techniques and frameworks. It covers the explosive growth and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex, and of massive scale. Such data are difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. Big data environments are used to capture, organize, and resolve various types of data. In this paper we describe applications, problems, and tools of big data and give an overview of the field.

  13. The isotope hydrology of the northern well field, Jwaneng diamond mine, Botswana

    International Nuclear Information System (INIS)

    Verhagen, B.Th.; Brook, M.C.

    1989-01-01

    The northern well field supplies the Jwaneng diamond mine with 4.8×10⁶ m³/a of good quality ground water. Model predictions, based on hydraulic parameters obtained during initial development, were found to have overestimated well field drawdown and have been twice updated in the past seven years. Since 1983, various surveys of environmental isotope levels in the well field ground water have been conducted. Over the period of observation, there has been very little change in the initial conclusion that part of the well field contains surprisingly recent ground water. Active recharge, which was suspected from the chemical composition of the ground water, was therefore confirmed, as well as the fact that the ground water body as a whole is dynamic. The isotopic data are discussed in terms of regional information from the surrounding Karoo aquifers. It is shown that earlier theories of remote recharge to the well field are untenable. More recent data on first-strike and other water samples obtained during exploratory drilling are incorporated. Estimates based on the hydrological picture presented by the isotope data indicate economically significant local rain recharge. A simple analytical model of well field behaviour shows that this calculated recharge rate brings the modelled drawdowns into the range of actually observed values. 10 refs., 7 figs
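
    A minimal sketch of how local recharge enters a simple lumped well-field balance (my own illustration with hypothetical parameter values; not the analytical model used in the report): the water-table decline follows the difference between abstraction and recharge divided by the aquifer's storage.

    ```python
    # Lumped water-balance sketch for a well field (illustrative assumptions only).

    def drawdown_rate(abstraction_m3_a: float, recharge_mm_a: float,
                      area_km2: float, storativity: float) -> float:
        """Annual water-table decline (m/a) from a simple storage balance."""
        recharge_m3_a = recharge_mm_a * 1e-3 * area_km2 * 1e6  # mm/a over km^2 -> m^3/a
        net_m3_a = abstraction_m3_a - recharge_m3_a            # net withdrawal
        return net_m3_a / (storativity * area_km2 * 1e6)       # spread over storage

    # Hypothetical values: 4.8e6 m^3/a abstraction, 5 mm/a recharge over 500 km^2.
    print(f"{drawdown_rate(4.8e6, 5.0, 500.0, 0.05):.2f} m/a")  # ~0.09 m/a
    ```

    Even a few millimetres per year of recharge over a large catchment offsets a substantial fraction of the abstraction, which is the sense in which the isotope-derived recharge brings the modelled drawdowns down to the observed range.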

  14. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  15. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  16. D-branes in a big bang/big crunch universe: Nappi-Witten gauged WZW model

    Energy Technology Data Exchange (ETDEWEB)

    Hikida, Yasuaki [School of Physics and BK-21 Physics Division, Seoul National University, Seoul 151-747 (Korea, Republic of)]; Nayak, Rashmi R. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy)]; Panigrahi, Kamal L. [Dipartimento di Fisica and INFN, Sezione di Roma 2, 'Tor Vergata', Rome 00133 (Italy)]

    2005-05-01

    We study D-branes in the Nappi-Witten model, which is a gauged WZW model based on (SL(2,R) x SU(2))/(U(1) x U(1)). The model describes a four dimensional space-time consisting of cosmological regions with big bang/big crunch singularities and static regions with closed time-like curves. The aim of this paper is to investigate by D-brane probes whether there are pathologies associated with the cosmological singularities and the closed time-like curves. We first classify D-branes in a group theoretical way, and then examine DBI actions for effective theories on the D-branes. In particular, we show that D-brane metric from the DBI action does not include singularities, and wave functions on the D-branes are well behaved even in the presence of closed time-like curves.

  17. Think bigger developing a successful big data strategy for your business

    CERN Document Server

    Van Rijmenam, Mark

    2014-01-01

    Big data--the enormous amount of data that is created as virtually every movement, transaction, and choice we make becomes digitized--is revolutionizing business. Written for a non-technical audience, Think Bigger covers big data trends, best practices, and security concerns--as well as key technologies like Hadoop and MapReduce, and several crucial types of analyses. Offering real-world insight and explanations, this book provides a roadmap for organizations looking to develop a profitable big data strategy...and reveals why it's not something they can leave to the I.T. department.

  18. Adiabatic CMB perturbations in pre-big bang string cosmology

    DEFF Research Database (Denmark)

    Enqvist, Kari; Sloth, Martin Snoager

    2001-01-01

    We consider the pre-big bang scenario with a massive axion field which starts to dominate energy density when oscillating in an instanton-induced potential and subsequently reheats the universe as it decays into photons, thus creating adiabatic CMB perturbations. We find that the fluctuations...

  19. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  20. Molecular evolution of colorectal cancer: from multistep carcinogenesis to the big bang.

    Science.gov (United States)

    Amaro, Adriana; Chiara, Silvana; Pfeffer, Ulrich

    2016-03-01

    Colorectal cancer is characterized by exquisite genomic instability either in the form of microsatellite instability or chromosomal instability. Microsatellite instability is the result of mutation of mismatch repair genes or their silencing through promoter methylation as a consequence of the CpG island methylator phenotype. The molecular causes of chromosomal instability are less well characterized. Genomic instability and field cancerization lead to a high degree of intratumoral heterogeneity and determine the formation of cancer stem cells and epithelial-mesenchymal transition mediated by the TGF-β and APC pathways. Recent analyses using integrated genomics reveal different phases of colorectal cancer evolution. An initial phase of genomic instability that yields many clones with different mutations (big bang) is followed by an important, previously not detected phase of cancer evolution that consists in the stabilization of several clones and a relatively flat outgrowth. The big bang model can best explain the coexistence of several stable clones and is compatible with the fact that the analysis of the bulk of the primary tumor yields prognostic information.

  1. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative ...

  2. Electron screening and its effects on big-bang nucleosynthesis

    International Nuclear Information System (INIS)

    Wang Biao; Bertulani, C. A.; Balantekin, A. B.

    2011-01-01

    We study the effects of electron screening on nuclear reaction rates occurring during the big-bang nucleosynthesis epoch. The sensitivity of the predicted elemental abundances to electron screening is studied in detail. It is shown that electron screening does not produce noticeable changes in the abundances unless the traditional Debye-Hückel model for the treatment of electron screening in stellar environments is enhanced by several orders of magnitude. This work rules out electron screening as a relevant ingredient in big-bang nucleosynthesis, confirming a previous study [see Itoh et al., Astrophys. J. 488, 507 (1997)] and ruling out exotic possibilities for the treatment of screening beyond the mean-field theoretical approach.
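
    For orientation (standard weak-screening background in Gaussian units, not a result derived in the paper): in the Salpeter limit, a Debye-Hückel screening cloud of radius R_D multiplies the bare rate for a reaction between charges Z₁e and Z₂e by the enhancement factor

    ```latex
    \[
    f_{\mathrm{scr}} \;\approx\; \exp\!\left(\frac{Z_1 Z_2 e^{2}}{R_D\, k_B T}\right),
    \qquad
    R_D \;=\; \left(\frac{k_B T}{4\pi e^{2}\, n\, \xi}\right)^{1/2},
    \]
    ```

    with n the relevant number density and ξ a composition factor of the plasma. The abstract's conclusion is that during big-bang nucleosynthesis this factor stays so close to unity that only an unphysical enhancement of several orders of magnitude would alter the abundances.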

  3. Nonlinear optical rectification in semiparabolic quantum wells with an applied electric field

    International Nuclear Information System (INIS)

    Karabulut, ibrahim; Safak, Haluk

    2005-01-01

    The optical rectification (OR) in a semiparabolic quantum well with an applied electric field has been theoretically investigated. The electronic states in a semiparabolic quantum well with an applied electric field are calculated exactly, within the envelope function and the displaced harmonic oscillator approach. Numerical results are presented for the typical AlₓGa₁₋ₓAs/GaAs quantum well. These results show that the applied electric field and the confining potential frequency of the semiparabolic quantum well have a great influence on the OR coefficient. Moreover, the OR coefficient also depends sensitively on the relaxation rate of the semiparabolic quantum well system
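
    For concreteness, the confinement profile usually meant by a "semiparabolic quantum well with an applied electric field F" is the one-sided parabola below (a standard form consistent with the abstract; the paper's exact conventions may differ):

    ```latex
    \[
    V(z) \;=\;
    \begin{cases}
    \dfrac{1}{2}\, m^{*} \omega_0^{2}\, z^{2} \;+\; eFz, & z \ge 0, \\[6pt]
    \infty, & z < 0,
    \end{cases}
    \]
    ```

    with ω₀ the confining potential frequency. Completing the square shows that the field term merely displaces the parabola and shifts its minimum, which is why the displaced-harmonic-oscillator approach yields the electronic states exactly.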

  4. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  5. HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.

    Science.gov (United States)

    Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael

    2017-01-01

    Precision medicine is a health management approach that accounts for individual differences in genetic backgrounds and environmental exposures. With the recent advancements in high-throughput omics profiling technologies, collections of large study cohorts, and the development of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, the petabytes of biomedical data generated by multiple measurement modalities pose significant challenges for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, as well as discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine. Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.

  6. What would be outcome of a Big Crunch?

    CERN Document Server

    Hajdukovic, Dragan Slavkov

    2010-01-01

    I suggest the existence of a still undiscovered interaction: repulsion between matter and antimatter. The simplest and most elegant candidate for such a force is gravitational repulsion between particles and antiparticles. I argue that such a force may give birth to a new Universe by transforming an eventual Big Crunch of our universe into an event similar to the Big Bang. In fact, when a collapsing Universe is reduced to a supermassive black hole of small size, a very strong field of the conjectured force may create particle-antiparticle pairs from the surrounding vacuum. The amount of antimatter created from the physical vacuum is equal to the decrease in mass of the "black hole Universe" and is violently repelled from it. When the size of the black hole is sufficiently small, the creation of antimatter may become so huge and fast that the matter of our Universe may disappear in a fraction of the Planck time. So fast a transformation of matter to antimatter may look like a Big Bang with an initial size of about 30 o...

  7. Big Rock Point: 35 years of electrical generation

    International Nuclear Information System (INIS)

    Petrosky, T.D.

    1998-01-01

    On September 27, 1962, the 75 MWe boiling water reactor of the Big Rock Point Nuclear Power Station, designed and built by General Electric, went critical for the first time. The US Atomic Energy Commission (AEC) and the plant operator, Consumers Power, had also designed the plant as a research reactor. The first studies were devoted to fuel behavior, higher burnup, and materials research. The reactor was also used for medical technology: Co-60 radiation sources were produced for the treatment of more than 120,000 cancer patients. After the accident at the Three Mile Island-2 nuclear generating unit in 1979, Big Rock Point went through an extensive backfitting phase. Personnel from numerous other American nuclear power plants were trained at the simulator of Big Rock Point. The plant was decommissioned permanently on August 29, 1997 after more than 35 years of operation and a cumulative electric power production of 13,291 GWh. A period of five to seven years is estimated for decommissioning and demolition work up to the 'green field' stage. (orig.)

  8. BIG DATA IN SUPPLY CHAIN MANAGEMENT: AN EXPLORATORY STUDY

    Directory of Open Access Journals (Sweden)

    Gheorghe MILITARU

    2015-12-01

    The objective of this paper is to set out a framework for examining the conditions under which big data can create long-term profitability through developing dynamic operations and digital supply networks in the supply chain. We investigate the extent to which big data analytics has the power to change the competitive landscape of industries and could offer operational, strategic and competitive advantages. This paper is based upon a qualitative study of the convergence of predictive analytics and big data in the field of supply chain management. Our findings indicate a need for manufacturers to introduce analytics tools, real-time data, and more flexible production techniques to improve their productivity in line with the new business model. By gathering and analysing vast volumes of data, analytics tools help companies allocate resources and capital more effectively based on risk assessment. Finally, implications and directions for future research are discussed.

  9. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  10. How Does National Scientific Funding Support Emerging Interdisciplinary Research: A Comparison Study of Big Data Research in the US and China

    Science.gov (United States)

    Huang, Ying; Zhang, Yi; Youtie, Jan; Porter, Alan L.; Wang, Xuefeng

    2016-01-01

    How do funding agencies ramp up their capabilities to support research in a rapidly emerging area? This paper addresses this question through a comparison of research proposals awarded by the US National Science Foundation (NSF) and the National Natural Science Foundation of China (NSFC) in the field of Big Data. Big data is characterized by its size and the difficulties in capturing, curating, managing and processing it in reasonable periods of time. Although Big Data has its legacy in longstanding information technology research, the field grew very rapidly over a short period. We find that the extent of interdisciplinarity is a key aspect in how these funding agencies address the rise of Big Data. Our results show that both agencies have been able to marshal funding to support Big Data research in multiple areas, but the NSF relies to a greater extent on multi-program funding from different fields. We discuss how these interdisciplinary approaches reflect the research hot-spots and innovation pathways in these two countries. PMID:27219466

  11. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593

  12. Statistical methods and computing for big data.

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
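
    The online-updating idea for stream data can be illustrated with a minimal sketch (an SGD-style update for logistic regression; a simplified stand-in, not the cumulative-estimating-equation method developed in the article):

    ```python
    import numpy as np

    # Minimal online (streaming) logistic regression via stochastic gradient descent.
    # An illustrative stand-in for online-updating approaches to stream data.

    class OnlineLogit:
        def __init__(self, n_features: int, lr: float = 0.01):
            self.beta = np.zeros(n_features)
            self.lr = lr

        def partial_fit(self, X: np.ndarray, y: np.ndarray) -> None:
            """Update coefficients from one incoming batch without storing old data."""
            p = 1.0 / (1.0 + np.exp(-X @ self.beta))   # predicted probabilities
            grad = X.T @ (p - y) / len(y)              # gradient of the log-loss
            self.beta -= self.lr * grad

    # Simulated stream: batches arrive one at a time, e.g. daily flight records.
    rng = np.random.default_rng(0)
    true_beta = np.array([1.0, -2.0, 0.5])
    model = OnlineLogit(n_features=3)
    for _ in range(200):  # 200 batches of 100 observations each
        X = rng.normal(size=(100, 3))
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
        model.partial_fit(X, y)
    print(model.beta)  # approaches true_beta as batches accumulate
    ```

    The key property is that each batch is discarded after its update, so memory use stays constant no matter how long the stream runs.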

  13. A Big-Data-based platform of workers' behavior: Observations from the field.

    Science.gov (United States)

    Guo, S Y; Ding, L Y; Luo, H B; Jiang, X Y

    2016-08-01

    Behavior-Based Safety (BBS) has been used in construction to observe, analyze and modify workers' behavior. However, studies have identified that BBS has several limitations, which have hindered its effective implementation. To mitigate the negative impact of these limitations, this paper uses a case study approach to develop a Big-Data-based platform to classify, collect and store data about workers' unsafe behavior derived from a metro construction project. In developing the platform, three processes were undertaken: (1) a behavioral risk knowledge base was established; (2) images reflecting workers' unsafe behavior were collected from intelligent video surveillance and a mobile application; and (3) images with semantic information were stored via the Hadoop Distributed File System (HDFS). The platform was implemented during construction of the metro system, and it is demonstrated that it can effectively analyze the semantic information contained in images, automatically extract workers' unsafe behavior, and support quick retrieval from HDFS. The research presented in this paper provides construction organizations with the ability to visualize unsafe acts in real time and to identify patterns of behavior that can jeopardize safety outcomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
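
    As a rough sketch of step (3) (my own illustration; the paths, filenames, and JSON layout are hypothetical, and the paper does not give its ingestion code), an image and its semantic annotation can be pushed to HDFS with the standard `hdfs dfs` command-line tool:

    ```python
    import json
    import subprocess
    import tempfile

    def store_observation(image_path: str, behavior_label: str, site: str) -> None:
        """Copy an image and its semantic annotation into HDFS (illustrative only)."""
        # Hypothetical HDFS layout: /bbs/<site>/images and /bbs/<site>/meta
        subprocess.run(["hdfs", "dfs", "-put", "-f", image_path,
                        f"/bbs/{site}/images/"], check=True)
        meta = {"image": image_path, "behavior": behavior_label, "site": site}
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
            json.dump(meta, f)
        subprocess.run(["hdfs", "dfs", "-put", "-f", f.name,
                        f"/bbs/{site}/meta/"], check=True)

    # Example: an unsafe-behavior frame captured by site surveillance.
    # store_observation("frame_0421.jpg", "no_helmet", "metro_line2")
    ```

    Keeping the annotation alongside the image is what makes the semantic retrieval described in the abstract possible.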

  14. Big Data and Public Health Systems: Issues and Opportunities

    Directory of Open Access Journals (Sweden)

    David Rojas

    2018-03-01

    Over the last few years, the need to change the current model of European public health systems has been repeatedly addressed in order to ensure their sustainability. Along this line, IT has always been referred to as one of the key instruments for enhancing the information management processes of healthcare organizations, thus contributing to the improvement and evolution of health systems. In the IT field, Big Data solutions are expected to play a main role, since they are designed for handling huge amounts of information in a fast and efficient way, allowing users to make important decisions quickly. This article reviews the main features of the European public health system model and the corresponding healthcare and management-related information systems, the challenges that these health systems are currently facing, and the possible contributions of Big Data solutions to this field. To that end, the authors share their professional experience of the Spanish public health system and review the existing literature related to this topic.

  15. The effects of intense laser field and applied electric and magnetic fields on optical properties of an asymmetric quantum well

    Energy Technology Data Exchange (ETDEWEB)

    Restrepo, R.L., E-mail: pfrire@eia.edu.co [Department of Physics, Cumhuriyet University, 58140 Sivas (Turkey); Escuela de Ingeniería de Antioquia-EIA, Envigado (Colombia); Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia-UdeA, Calle 70 No. 52-21, Medellín (Colombia); Ungan, F.; Kasapoglu, E. [Department of Physics, Cumhuriyet University, 58140 Sivas (Turkey); Mora-Ramos, M.E. [Facultad de Ciencias, Universidad Autónoma del Estado de Morelos, Ave. Universidad 1001, CP 62209, Cuernavaca, Morelos (Mexico); Morales, A.L.; Duque, C.A. [Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia-UdeA, Calle 70 No. 52-21, Medellín (Colombia)]

    2015-01-15

    This paper presents the results of a theoretical study of the effects of a non-resonant intense laser field and of electric and magnetic fields on the optical properties (the linear and third-order nonlinear refractive index and absorption coefficients) of an asymmetric quantum well. The electric field and the intense laser field are applied along the growth direction of the asymmetric quantum well, and the magnetic field is oriented perpendicularly. To calculate the energy and the wave functions of the electron in the asymmetric quantum well, the effective mass approximation and the envelope wave function method are used. The asymmetric quantum well is constructed by using different aluminium concentrations in the right and left barriers. The confinement in the quantum well is changed drastically either by the electric and magnetic fields or by the application of the intense laser field. The optical properties are calculated using the compact density matrix approach. The results show that the effect of the intense laser field competes with the effects of the electric and magnetic fields: the peak position shifts to lower photon energies under the intense laser field and to higher photon energies under the electric and magnetic fields. In general, the aluminium concentration, the electric and magnetic fields, and the intense laser field are external agents that modify the optical responses of the asymmetric quantum well.
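
    For background, non-resonant intense laser fields are usually handled in this literature by the laser-dressing approach, in which the bare confinement potential is replaced by its average over one field cycle (a standard form; the paper's exact conventions may differ):

    ```latex
    \[
    \langle V \rangle(z,\alpha_0) \;=\; \frac{1}{2\pi}\int_0^{2\pi}
    V\!\left(z + \alpha_0 \sin\theta\right)\,\mathrm{d}\theta,
    \qquad
    \alpha_0 \;=\; \frac{e F_0}{m^{*}\,\omega^{2}},
    \]
    ```

    where F₀ and ω are the laser field strength and frequency. Increasing the dressing parameter α₀ flattens and widens the effective well, which is the sense in which the laser field "competes" with the localizing static electric and magnetic fields.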

  16. The universe before the Big Bang cosmology and string theory

    CERN Document Server

    Gasperini, Maurizio

    2008-01-01

    Terms such as "expanding Universe", "big bang", and "initial singularity" are nowadays part of our common language. The idea that the Universe we observe today originated from an enormous explosion (big bang) is now well known and widely accepted, at all levels, in modern popular culture. But what happens to the Universe before the big bang? And would it make any sense at all to ask such a question? In fact, recent progress in theoretical physics, and in particular in String Theory, suggests answers to the above questions, providing us with mathematical tools able in principle to reconstruct the history of the Universe even for times before the big bang. In the emerging cosmological scenario the Universe, at the epoch of the big bang, instead of being a "new born baby" was actually a rather "aged" creature in the middle of its possibly infinitely enduring evolution. The aim of this book is to convey this picture in non-technical language accessible also to non-specialists. The author, himself a leading cosm...

  17. Big data and visual analytics in anaesthesia and health care.

    Science.gov (United States)

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics--the systematic use of data combined with quantitative and qualitative analysis to make decisions--can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences. © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, and medical journals; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations, and reduction of the cost of care; (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of

  19. Application of oil-field well log interpretation techniques to the Cerro Prieto Geothermal Field

    Energy Technology Data Exchange (ETDEWEB)

    Ershaghi, I.; Phillips, L.B.; Dougherty, E.L.; Handy, L.L.

    1979-10-01

    An example is presented of the application of oil-field techniques to the Cerro Prieto Field, Mexico. The sand-shale lithology in this field is relatively similar to that of oil-field systems. The study was undertaken as part of the first series of case studies supported by the Geothermal Log Interpretation Program (GLIP) of the US Department of Energy. The suites of logs for individual wells were far from complete, partly because of adverse borehole conditions but mostly because of the unavailability of high-temperature tools. The most complete set of logs was a combination of Dual Induction Laterolog, Compensated Formation Density Gamma Ray, Compensated Neutron Log, and Saraband. Temperature data for the wells were sketchy, and the logs had been run under pre-cooled mud conditions. A system of interpretation consisting of a combination of graphic and numerical studies was used to study the logs. From the graphical studies, evidence of hydrothermal alteration may be established from trend analysis of the SP (self potential) and ILD (deep induction) logs. Furthermore, cross-plot techniques using data from the density and neutron logs may help in establishing a compaction as well as a rock-density profile with depth. In the numerical method, R_wa values from three different resistivity logs were computed and brought into agreement. From this approach, values of formation temperature and mud filtrate resistivity effective at the time of logging were established.
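
    The apparent-water-resistivity calculation mentioned above follows a standard petrophysical relation (a generic sketch using Archie's formation factor; the study's exact constants are not given in the abstract):

    ```python
    # Apparent water resistivity R_wa from a resistivity log and porosity
    # (standard Archie relation; constants a and m are illustrative defaults).

    def formation_factor(porosity: float, a: float = 1.0, m: float = 2.0) -> float:
        """Archie formation factor F = a / phi^m."""
        return a / porosity**m

    def r_wa(r_t: float, porosity: float, a: float = 1.0, m: float = 2.0) -> float:
        """Apparent formation-water resistivity R_wa = R_t / F."""
        return r_t / formation_factor(porosity, a, m)

    # Example: deep-induction reading of 2.0 ohm-m in a 25%-porosity sand.
    print(f"R_wa = {r_wa(2.0, 0.25):.3f} ohm-m")  # ~0.125 ohm-m
    ```

    Bringing the R_wa values from three resistivity logs into agreement is what constrains the formation temperature and mud filtrate resistivity, since both enter the resistivity readings.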

  20. A glossary for big data in population and public health: discussion and commentary on terminology and research methods.

    Science.gov (United States)

    Fuller, Daniel; Buote, Richard; Stanley, Kevin

    2017-11-01

    The volume and velocity of data are growing rapidly and big data analytics are being applied to these data in many fields. Population and public health researchers may be unfamiliar with the terminology and statistical methods used in big data. This creates a barrier to the application of big data analytics. The purpose of this glossary is to define terms used in big data and big data analytics and to contextualise these terms. We define the five Vs of big data and provide definitions and distinctions for data mining, machine learning and deep learning, among other terms. We provide key distinctions between big data and statistical analysis methods applied to big data. We contextualise the glossary by providing examples where big data analysis methods have been applied to population and public health research problems and provide brief guidance on how to learn big data analysis methods. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    OpenAIRE

    Mohd Usama; Mengchen Liu; Min Chen

    2017-01-01

    At present, big data is very popular because it has proved highly successful in many fields, such as social media and E-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte-scale or larger datasets with varied structures at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open source framework that is used to process large amounts of data in an inexpensive and ...

  2. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    Science.gov (United States)

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU) computing), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by
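
    To make the Map and Reduce roles concrete, here is a minimal word count in MapReduce style (a generic illustration mimicking the mapper/reducer contract, not code from the review):

    ```python
    # Minimal word count in MapReduce style (plain Python, mimicking the
    # mapper/reducer contract used by frameworks such as Hadoop Streaming).
    from collections import defaultdict
    from typing import Iterable, Iterator

    def mapper(line: str) -> Iterator[tuple[str, int]]:
        """Map: emit a (key, value) pair for each word in the input line."""
        for word in line.split():
            yield word, 1

    def reducer(word: str, counts: Iterable[int]) -> tuple[str, int]:
        """Reduce: combine all values that share the same key."""
        return word, sum(counts)

    def run(lines: Iterable[str]) -> dict[str, int]:
        # The framework's shuffle phase groups mapper output by key;
        # here we simulate it in memory with a dictionary of lists.
        groups: dict[str, list[int]] = defaultdict(list)
        for line in lines:
            for word, n in mapper(line):
                groups[word].append(n)
        return dict(reducer(w, ns) for w, ns in groups.items())

    print(run(["big data big analysis", "data pipeline"]))
    # {'big': 2, 'data': 2, 'analysis': 1, 'pipeline': 1}
    ```

    In a real Hadoop deployment, the mapper and reducer run on different nodes and the shuffle is performed by the framework over HDFS, which is what provides the fault tolerance and throughput described above.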

  3. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks, rather than conventional statistical methods, resulting in systems that over time capture insights implicit in data but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to that of a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval.

  4. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlight its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  5. Big data, advanced analytics and the future of comparative effectiveness research.

    Science.gov (United States)

    Berger, Marc L; Doban, Vitalii

    2014-03-01

    The intense competition that accompanied the growth of internet-based companies ushered in the era of 'big data', characterized by major innovations in the processing of very large amounts of data and the application of advanced analytics, including data mining and machine learning. Healthcare is on the cusp of its own era of big data, catalyzed by changing regulatory and competitive environments, fueled by the growing adoption of electronic health records, as well as by efforts to integrate medical claims, electronic health records and other novel data sources. Applying the lessons of the big data pioneers will require healthcare and life science organizations to invest in new hardware and software, as well as in individuals with different skills. For life science companies, this will impact the entire pharmaceutical value chain from early research to postcommercialization support. More generally, it will revolutionize comparative effectiveness research.

  6. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  7. Phantom cosmology without Big Rip singularity

    Energy Technology Data Exchange (ETDEWEB)

    Astashenok, Artyom V. [Baltic Federal University of I. Kant, Department of Theoretical Physics, 236041, 14, Nevsky st., Kaliningrad (Russian Federation); Nojiri, Shin'ichi, E-mail: nojiri@phys.nagoya-u.ac.jp [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602 (Japan); Odintsov, Sergei D. [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Institucio Catalana de Recerca i Estudis Avancats - ICREA and Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Facultat de Ciencies, Torre C5-Par-2a pl, E-08193 Bellaterra (Barcelona) (Spain); Tomsk State Pedagogical University, Tomsk (Russian Federation); Yurov, Artyom V. [Baltic Federal University of I. Kant, Department of Theoretical Physics, 236041, 14, Nevsky st., Kaliningrad (Russian Federation)

    2012-03-23

    We construct phantom energy models with an equation of state parameter w less than -1 (w<-1) in which a finite-time future singularity nevertheless does not occur. Such models can be divided into two classes: (i) energy density increases with time ('phantom energy' without 'Big Rip' singularity) and (ii) energy density tends to a constant value with time ('cosmological constant' with asymptotically de Sitter evolution). The disintegration of bound structures is confirmed in Little Rip cosmology. Surprisingly, we find that such disintegration (using the example of the Sun-Earth system) may occur even in an asymptotically de Sitter phantom universe consistent with observational data. We also demonstrate that non-singular phantom models admit wormhole solutions as well as the possibility of a Big Trip via wormholes.

  8. Phantom cosmology without Big Rip singularity

    International Nuclear Information System (INIS)

    Astashenok, Artyom V.; Nojiri, Shin'ichi; Odintsov, Sergei D.; Yurov, Artyom V.

    2012-01-01

    We construct phantom energy models with an equation of state parameter w less than -1 (w<-1) in which a finite-time future singularity nevertheless does not occur. Such models can be divided into two classes: (i) energy density increases with time (“phantom energy” without “Big Rip” singularity) and (ii) energy density tends to a constant value with time (“cosmological constant” with asymptotically de Sitter evolution). The disintegration of bound structures is confirmed in Little Rip cosmology. Surprisingly, we find that such disintegration (using the example of the Sun-Earth system) may occur even in an asymptotically de Sitter phantom universe consistent with observational data. We also demonstrate that non-singular phantom models admit wormhole solutions as well as the possibility of a Big Trip via wormholes.

  9. Low field Monte-Carlo calculations in heterojunctions and quantum wells

    NARCIS (Netherlands)

    Hall, van P.J.; Rooij, de R.; Wolter, J.H.

    1990-01-01

    We present results of low-field Monte-Carlo calculations and compare them with experimental results. The negative absolute mobility of minority electrons in p-type quantum wells, as found in recent experiments, is described quite well.

  10. Traffic measurement for big network data

    CERN Document Server

    Chen, Shigang; Xiao, Qingjun

    2017-01-01

    This book presents several compact and fast methods for online traffic measurement of big network data. It describes challenges of online traffic measurement, discusses the state of the field, and provides an overview of the potential solutions to major problems. The authors introduce the problem of per-flow size measurement for big network data and present a fast and scalable counter architecture, called Counter Tree, which leverages a two-dimensional counter sharing scheme to achieve far better memory efficiency and significantly extend estimation range. Unlike traditional approaches to cardinality estimation problems that allocate a separated data structure (called estimator) for each flow, this book takes a different design path by viewing all the flows together as a whole: each flow is allocated with a virtual estimator, and these virtual estimators share a common memory space. A framework of virtual estimators is designed to apply the idea of sharing to an array of cardinality estimation solutions, achi...
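
    The virtual-estimator idea can be made concrete with a toy Python sketch of randomized counter sharing: each flow owns a few counters drawn from one shared pool, and the noise that other flows deposit in those counters is subtracted at query time. This conveys the flavour of the approach only; it is not the book's Counter Tree architecture, and the pool size, hashing scheme and traffic below are invented.

    ```python
    import hashlib
    import random

    M = 1024            # physical counters in the shared pool
    K = 4               # virtual counters per flow
    counters = [0] * M
    total_packets = 0

    def virtual_indices(flow_id: str):
        # The K physical counters forming this flow's virtual estimator.
        digest = hashlib.sha256(flow_id.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % M
                for i in range(K)]

    def record_packet(flow_id: str):
        # Online update: credit the packet to one random counter of the flow.
        global total_packets
        counters[random.choice(virtual_indices(flow_id))] += 1
        total_packets += 1

    def estimate_size(flow_id: str):
        # Sum the flow's counters, then remove the contribution other flows
        # are expected to have made to those same shared counters.
        own = sum(counters[i] for i in virtual_indices(flow_id))
        noise = K * total_packets / M
        return max(0.0, own - noise)

    for _ in range(5000):
        record_packet("10.0.0.1->10.0.0.2")      # one heavy flow
    for i in range(5000):
        record_packet(f"background-{i % 500}")   # many small flows

    print(round(estimate_size("10.0.0.1->10.0.0.2")))  # roughly 5000
    ```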

  11. Numerical analysis of the big bounce in loop quantum cosmology

    International Nuclear Information System (INIS)

    Laguna, Pablo

    2007-01-01

    Loop quantum cosmology (LQC) homogeneous models with a massless scalar field show that the big-bang singularity can be replaced by a big quantum bounce. To gain further insight on the nature of this bounce, we study the semidiscrete loop quantum gravity Hamiltonian constraint equation from the point of view of numerical analysis. For illustration purposes, we establish a numerical analogy between the quantum bounces and reflections in finite difference discretizations of wave equations triggered by the use of nonuniform grids or, equivalently, reflections found when solving numerically wave equations with varying coefficients. We show that the bounce is closely related to the method for the temporal update of the system and demonstrate that explicit time-updates in general yield bounces. Finally, we present an example of an implicit time-update devoid of bounces and show back-in-time, deterministic evolutions that reach and partially jump over the big-bang singularity

  12. The Opportunity and Challenge of The Age of Big Data

    Science.gov (United States)

    Yunguo, Hong

    2017-11-01

    The arrival of the big data age has gradually expanded the scale of China's information industry, creating favorable conditions for the growth of information technology and computer networks. Based on big data, computer system services are becoming more complete and the efficiency of data processing is improving, which provides an important guarantee for the implementation of production plans in various industries. At the same time, the rapid development of fields such as the Internet of Things, social tools and cloud computing, together with the widening of information channels, is increasing the amount of data and extending the reach of the big data age. We therefore need to face the opportunities and challenges of the big data age correctly and use data resources effectively. On this basis, this paper studies the opportunities and challenges of the era of big data.

  13. Hybridization of electron states in a step quantum well in a magnetic field

    International Nuclear Information System (INIS)

    Barseghyan, M.G.; Kirakosyan, A.A.

    2005-01-01

    The quantum states and energy levels of an electron in a rectangular step quantum well in a magnetic field parallel to the plane of the two-dimensional electron gas are investigated. It is shown that the joint effect of the magnetic field and the confining potential of the quantum well results in a radical change of the electron spectrum. The dependence of the electron energy levels on the quantum well parameters, the magnetic field induction and the projection of the wave-vector along the magnetic field induction is calculated. Numerical calculations are carried out for an AlAs/GaAlAs/GaAs/AlAs step quantum well

  14. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is indispensable for addressing the most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on-demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, and is thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.
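
    The community-module pattern described above can be sketched in a few lines of Python. This is a toy illustration rather than IC4R's actual code: the module names, gene identifier and payloads are all invented, and real modules would answer over web services instead of in-process calls.

    ```python
    from typing import Callable, Dict

    # Each community module registers a handler for one data type; the core
    # knowledgebase only routes requests and merges the results on demand.
    MODULES: Dict[str, Callable[[str], dict]] = {}

    def register(data_type: str):
        def wrap(fn: Callable[[str], dict]):
            MODULES[data_type] = fn
            return fn
        return wrap

    @register("expression")
    def expression_module(gene_id: str) -> dict:
        # A real module would query its own web service; this returns a stub.
        return {"gene": gene_id, "rpkm_by_tissue": {"leaf": 12.3, "root": 4.1}}

    @register("variation")
    def variation_module(gene_id: str) -> dict:
        return {"gene": gene_id, "snps": ["chr1:1023:A>G"]}

    def integrate(gene_id: str) -> dict:
        """On-demand integration: ask every registered module about one gene."""
        return {dtype: fn(gene_id) for dtype, fn in MODULES.items()}

    print(integrate("LOC_Os01g01010"))
    ```

    Adding a new data type then amounts to contributing one more registered module, which is what makes the architecture scalable and cheap to maintain.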

  15. DEVELOPING THE TRANSDISCIPLINARY AGING RESEARCH AGENDA: NEW DEVELOPMENTS IN BIG DATA.

    Science.gov (United States)

    Callaghan, Christian William

    2017-07-19

    In light of dramatic advances in big data analytics and the application of these advances in certain scientific fields, new potentialities exist for breakthroughs in aging research. Translating these new potentialities into research outcomes for aging populations, however, remains a challenge, as the underlying technologies which have enabled exponential increases in 'big data' have not yet enabled a commensurate era of 'big knowledge', or similarly exponential increases in biomedical breakthroughs. Debates also reveal differences in the literature, with some arguing that big data analytics heralds a new era associated with the 'end of theory', one which makes the scientific method obsolete, where correlation supersedes causation and science can advance without theory and hypothesis testing. On the other hand, others argue that theory cannot be subordinate to data, no matter how comprehensive data coverage can ultimately become. Given these two tensions, namely between exponential increases in data absent exponential increases in biomedical research outputs, and between the promise of comprehensive data coverage and data-driven inductive versus theory-driven deductive modes of enquiry, this paper seeks to provide a critical review of theory and literature that offers useful perspectives on certain developments in big data analytics and their theoretical implications for aging research.

  16. The big data phenomenon: The business and public impact

    Directory of Open Access Journals (Sweden)

    Chroneos-Krasavac Biljana

    2016-01-01

    The subject of the research in this paper is the emergence of the big data phenomenon and the application of big data technologies for business needs, with specific emphasis on marketing and trade. The purpose of the research is to make a comprehensive overview of different discussions about the characteristics, application possibilities, achievements, constraints and the future of big data development. Based on the relevant literature, the concept of big data is presented and the potential for a large impact of big data on business activities is discussed. One of the key findings indicates that the most prominent change that big data brings to the business arena is the appearance of new business models, as well as revisions of existing ones. A substantial part of the paper is devoted to marketing and marketing research, which are under the strong impact of big data. The most exciting outcomes of the research in this domain concern new abilities in profiling customers. In addition to the vast amount of structured data which has been used in marketing for a long period, big data initiatives suggest the inclusion of semi-structured and unstructured data, opening up room for substantial improvements in customer profile analysis. Considering the usage of information communication technologies (ICT) as a prerequisite for big data project success, the concept of the Networked Readiness Index (NRI) is presented and the position of Serbia and regional countries in the NRI framework is analyzed. The main outcome of the analysis points out that Serbia, with its NRI score, took the lowest position in the region, excluding Albania. Also, Serbia is lagging behind the appropriate EU mean values regarding all observed composite indicators - pillars. Further on, this analysis reveals the domains of ICT usage in Serbia which could be targeted for improvement and where incentives can be made. These domains are: political and regulatory environment, business and

  17. A field guide for well site geologists: Cable tool drilling

    International Nuclear Information System (INIS)

    Last, G.V.; Liikala, T.L.

    1987-12-01

    This field guide is intended for use by Pacific Northwest Laboratory well site geologists who are responsible for data collection during the drilling and construction of monitoring wells on the Hanford Site. This guide presents standardized methods for geologic sample collection and description, and well construction documentation. 5 refs., 5 figs., 2 tabs

  18. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are inadequate. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, addressing privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  19. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. It analyzes the existing definitions of the term "big data". The article proposes and describes the elements of a generalized formal model of big data and analyzes the peculiarities of applying the proposed model's components. It describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  20. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating system files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
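
    As a concrete illustration of the random access these indexed files permit, the sketch below uses the third-party pyBigWig package, a Python binding for the same file format. The URL, chromosome and coordinates are placeholders, so treat this as a sketch of the access pattern rather than a guaranteed recipe.

    ```python
    # pip install pyBigWig
    import pyBigWig

    # BigWig files are indexed, so they can be opened locally or over HTTP
    # and only the queried regions are actually transferred.
    bw = pyBigWig.open("http://example.org/coverage.bw")  # hypothetical URL

    print(bw.chroms())                    # chromosome names and lengths
    print(bw.header()["nBasesCovered"])   # summary statistics from the index

    # Zoom-level summary: mean signal over a region, without reading raw data.
    print(bw.stats("chr1", 1_000_000, 1_010_000, type="mean"))

    # Per-base values for a small window.
    print(bw.values("chr1", 1_000_000, 1_000_010))

    bw.close()
    ```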

  1. Linking Metatraits of the Big Five to Well-Being and Ill-Being: Do Basic Psychological Needs Matter?

    Science.gov (United States)

    Simsek, Omer Faruk; Koydemir, Selda

    2013-01-01

    There is considerable evidence that two higher order factors underlie the Big-Five dimensions and that these two factors provide a parsimonious taxonomy. However, not much empirical evidence has been documented as to the extent to which these traits relate to certain psychological constructs. In this study, we tested a structural model to…

  2. Physical properties of superbulky lanthanide metallocenes: synthesis and extraordinary luminescence of [Eu(II)(Cp(BIG))2] (Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl).

    Science.gov (United States)

    Harder, Sjoerd; Naglav, Dominik; Ruspic, Christian; Wickleder, Claudia; Adlung, Matthias; Hermes, Wilfried; Eul, Matthias; Pöttgen, Rainer; Rego, Daniel B; Poineau, Frederic; Czerwinski, Kenneth R; Herber, Rolfe H; Nowik, Israel

    2013-09-09

    The superbulky deca-aryleuropocene [Eu(Cp(BIG))2], Cp(BIG) = (4-nBu-C6H4)5-cyclopentadienyl, was prepared by reaction of [Eu(dmat)2(thf)2], DMAT = 2-Me2N-α-Me3Si-benzyl, with two equivalents of Cp(BIG)H. Recrystallization from cold hexane gave the product, which shows a surprisingly bright and efficient orange emission (45% quantum yield). The crystal structure is isomorphic to those of [M(Cp(BIG))2] (M = Sm, Yb, Ca, Ba) and shows the typical distortions that arise from Cp(BIG)⋅⋅⋅Cp(BIG) attraction as well as an excessively large displacement parameter for the heavy Eu atom (U(eq) = 0.075). In order to gain information on the true oxidation state of the central metal in superbulky metallocenes [M(Cp(BIG))2] (M = Sm, Eu, Yb), several physical analyses were applied. Temperature-dependent magnetic susceptibility data for [Yb(Cp(BIG))2] show diamagnetism, indicating stable divalent ytterbium. Temperature-dependent (151)Eu Mössbauer effect spectroscopy of [Eu(Cp(BIG))2] was carried out over the temperature range 93-215 K, and the hyperfine and dynamical properties of the Eu(II) species are discussed in detail. The mean square amplitude of vibration of the Eu atom as a function of temperature was determined and compared to the value extracted from the single-crystal X-ray data at 203 K. The large difference between these two values was ascribed to the presence of static disorder and/or low-frequency torsional and librational modes in [Eu(Cp(BIG))2]. X-ray absorption near edge spectroscopy (XANES) showed that all three [Ln(Cp(BIG))2] (Ln = Sm, Eu, Yb) compounds are divalent. The XANES white-line spectra are at 8.3, 7.3, and 7.8 eV for Sm, Eu, and Yb, respectively, lower than the Ln2O3 standards. No XANES temperature dependence was found from room temperature to 100 K. XANES also showed that the [Ln(Cp(BIG))2] complexes had less trivalent impurity than a [EuI2(thf)x] standard. The complex [Eu(Cp(BIG))2] shows already at room temperature

  3. Detection and Characterisation of Meteors as a Big Data Citizen Science project

    Science.gov (United States)

    Gritsevich, M.

    2017-12-01

    Out of a total of around 50,000 meteorites currently known to science, the atmospheric passage was recorded instrumentally in only 30 cases with the potential to derive their atmospheric trajectories and pre-impact heliocentric orbits. Similarly, while observations of meteors add thousands of new entries per month to existing databases, it is extremely rare that they lead to meteorite recovery. Meteor studies thus represent an excellent example of a Big Data citizen science project, where progress in the field largely depends on the prompt identification and characterisation of meteor events as well as on extensive and valuable contributions by amateur observers. Over the last couple of decades, technological advancements in observational techniques have yielded drastic improvements in the quality, quantity and diversity of meteor data, while even more ambitious instruments are about to become operational. This empowers meteor science to broaden its experimental and theoretical horizons and seek more advanced scientific goals. We review some of the developments that push meteor science into the Big Data era, which requires more complex methodological approaches through interdisciplinary collaborations with other branches of physics and computer science. We argue that meteor science should become an integral part of large surveys in astronomy, aeronomy and space physics, and tackle the complexity of the micro-physics of meteor plasma and its interaction with the atmosphere. The recent increased interest in meteor science triggered by the Chelyabinsk fireball helps in building the case for technologically and logistically more ambitious meteor projects. This requires developing new methodological approaches in meteor research, with Big Data science and close collaboration between citizen science, geoscience and astronomy as critical elements. We discuss possibilities for improvements and promote an opportunity for collaboration in meteor science within the currently

  4. The emerging role of Big Data in key development issues: Opportunities, challenges, and concerns

    Directory of Open Access Journals (Sweden)

    Nir Kshetri

    2014-12-01

    This paper presents a review of academic literature, policy documents from government organizations and international agencies, and reports from industries and popular media on the trends in Big Data utilization in key development issues and its worthwhileness, usefulness, and relevance. By looking at Big Data deployment in a number of key economic sectors, it seeks to provide a better understanding of the opportunities and challenges of using it for addressing key issues facing the developing world. It reviews the uses of Big Data in agriculture and farming activities in developing countries to assess the capabilities required at various levels to benefit from Big Data. It also provides insights into how the current digital divide is associated with and facilitated by the pattern of Big Data diffusion and its effective use in key development areas. It also discusses the lessons that developing countries can learn from the utilization of Big Data in big corporations as well as in other activities in industrialized countries.

  5. Calculating Production Rate of each Branch of a Multilateral Well Using Multi-Segment Well Model: Field Example

    Directory of Open Access Journals (Sweden)

    Mohammed S. Al-Jawad

    2017-11-01

    Multilateral wells require a sophisticated type of well model in reservoir simulators to represent them. The model must be able to determine the flow rate of each fluid and the pressure throughout the well. The production rate calculations are very important because they give an indication of the main issues associated with multilateral wells, such as one branch producing water or gas before the others, one branch producing nothing at all, and the selection of the best location for a new branch during field development. This paper presents a method for calculating the production rate of each branch of a multilateral well using a multi-segment well model. The pressure behaviour of each branch is simulated once its production rate is known. The model divides a multilateral well into an arbitrary number of segments depending on the required degree of accuracy and the run time of the simulator. The model was applied to a field example (multilateral well HF-65ML in the Halfaya Oil Field/Mishrif formation). The production rate and pressure behaviour of each branch are simulated during the producing interval of the multilateral well. The conclusion is that the production rate of the main branch is slightly larger than that of a lateral branch.
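
    To illustrate the segment bookkeeping such a model performs, here is a minimal Python sketch under strong simplifying assumptions: single-phase flow, a linear productivity index per segment and a fixed friction pressure drop per segment. All names and numbers are hypothetical; this is not the paper's actual formulation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Segment:
        productivity_index: float  # m3/day/bar of inflow per unit drawdown
        reservoir_pressure: float  # bar, local reservoir pressure
        dp_friction: float         # bar, friction loss across the segment

    def branch_rate(segments, heel_pressure):
        """Walk from heel to toe, tracking pressure and summing segment inflow."""
        p = heel_pressure
        q_total = 0.0
        for seg in segments:
            p += seg.dp_friction  # pressure rises toward the toe of the branch
            drawdown = max(0.0, seg.reservoir_pressure - p)
            q_total += seg.productivity_index * drawdown
        return q_total

    # Hypothetical discretization: 10 segments on the main bore, 8 on a lateral.
    main_bore = [Segment(2.0, 250.0, 0.5) for _ in range(10)]
    lateral = [Segment(1.8, 248.0, 0.4) for _ in range(8)]

    p_junction = 200.0  # bar, flowing pressure at the junction (assumed)
    print("main bore rate:", branch_rate(main_bore, p_junction))
    print("lateral rate  :", branch_rate(lateral, p_junction))
    ```

    Comparing the two totals is exactly the kind of diagnostic the abstract mentions: a branch whose computed rate collapses to zero flags a problem, and repeating the calculation with alternative segment sets mimics screening locations for a new branch.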

  6. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
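
    The combination the abstract describes can be illustrated with a toy composite estimator in the Fay-Herriot spirit: direct survey estimates are shrunk toward a regression on a Big-Data-derived covariate, with weights driven by the sampling variances. All figures, including the assumed model variance, are invented for illustration.

    ```python
    import numpy as np

    direct = np.array([0.22, 0.31, 0.18, 0.40])          # survey estimates per area
    var_direct = np.array([0.004, 0.010, 0.003, 0.015])  # their sampling variances
    proxy = np.array([0.5, 0.8, 0.4, 1.0])               # big-data covariate per area

    # Synthetic component: regress the direct estimates on the proxy.
    X = np.column_stack([np.ones_like(proxy), proxy])
    beta, *_ = np.linalg.lstsq(X, direct, rcond=None)
    synthetic = X @ beta

    # Shrinkage: trust the direct estimate where its variance is small,
    # and the regression prediction where the survey is noisy.
    sigma2_model = 0.002  # assumed between-area model variance
    gamma = sigma2_model / (sigma2_model + var_direct)
    composite = gamma * direct + (1 - gamma) * synthetic
    print(composite.round(3))
    ```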

  7. Utilizing big data to provide better health at lower cost.

    Science.gov (United States)

    Jones, Laney K; Pulk, Rebecca; Gionfriddo, Michael R; Evans, Michael A; Parry, Dean

    2018-04-01

    The efficient use of big data in order to provide better health at a lower cost is described. As data become more usable and accessible in healthcare, organizations need to be prepared to use this information to positively impact patient care. In order to be successful, organizations need teams with expertise in informatics and data management that can build new infrastructure and restructure existing infrastructure to support quality and process improvements in real time, such as creating discrete data fields that can be easily retrieved and used to analyze and monitor care delivery. Organizations should use data to monitor performance (e.g., process metrics) as well as the health of their populations (e.g., clinical parameters and health outcomes). Data can be used to prevent hospitalizations, combat opioid abuse and misuse, improve antimicrobial stewardship, and reduce pharmaceutical spending. These examples also serve to highlight lessons learned to better use data to improve health. For example, data can inform care and create efficiencies, stakeholders should be engaged early and often, and collaboration is necessary to obtain complete data. To truly transform care so that it is delivered in a way that is sustainable, responsible, and patient-centered, health systems need to act on these opportunities, invest in big data, and routinely use big data in the delivery of care. Using data efficiently has the potential to improve the care of our patients and lower cost. Despite early successes, barriers to implementation remain, including data acquisition, integration, and usability.

  8. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  9. Did the Big Bang begin?

    International Nuclear Information System (INIS)

    Levy-Leblond, J.

    1990-01-01

    It is argued that the age of the universe may well be numerically finite (20 billion years or so) and conceptually infinite. A new and natural time scale is defined on a physical basis using group-theoretical arguments. An additive notion of time is obtained according to which the age of the universe is indeed infinite. In other words, never did the Big Bang begin. This new time scale is not supposed to replace the ordinary cosmic time scale, but to supplement it (in the same way as rapidity has taken a place by the side of velocity in Einsteinian relativity). The question is discussed within the framework of conventional (big-bang) and classical (nonquantum) cosmology, but could easily be extended to more elaborate views, as the purpose is not so much to modify present theories as to reach a deeper understanding of their meaning

  10. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  11. The Big Data Tools Impact on Development of Simulation-Concerned Academic Disciplines

    Directory of Open Access Journals (Sweden)

    A. A. Sukhobokov

    2015-01-01

    The article gives a definition of Big Data on the basis of the 5Vs (Volume, Variety, Velocity, Veracity, Value) and shows examples of tasks that require Big Data tools in a diversity of areas, namely: health, education, financial services, industry, agriculture, logistics, retail, information technology, telecommunications and others. An overview of Big Data tools is delivered, including open source products and the IBM Bluemix and SAP HANA platforms. Examples of the architecture of corporate data processing and management systems using Big Data tools are shown for big Internet companies and for enterprises in traditional industries. Within the overview, a classification of Big Data tools is proposed that fills gaps in previously developed similar classifications. The new classification contains 19 classes and embraces several hundred existing and emerging products. The rise and use of Big Data tools, in addition to solving practical problems, affects the development of scientific disciplines concerned with the simulation of technical, natural or socioeconomic systems and the solution of practical problems based on the developed models. New schools arise in these disciplines. These new schools address the tasks peculiar to each discipline, but for systems with a much bigger number of internal elements and connections between them. The characteristics of the problems to be solved under the new schools do not always meet the criteria for Big Data. It is suggested that Big Data be identified as a part of the theory of sorting and searching algorithms. In other disciplines the new schools are named by analogy with Big Data: Big Calculation in numerical methods, Big Simulation in imitational modeling, Big Management in the management of socio-economic systems, Big Optimal Control in optimal control theory. The paper shows examples of tasks and methods to be developed within the new schools. The observed tendency is not limited to the considered disciplines: there are

  12. Low-energy photodisintegration of the deuteron and Big-Bang nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Tornow, W.; Czakon, N.G.; Howell, C.R.; Hutcheson, A.; Kelley, J.H.; Litvinenko, V.N.; Mikhailov, S.F.; Pinayev, I.V.; Weisel, G.J.; Witala, H

    2003-11-06

    The photon analyzing power for the photodisintegration of the deuteron was measured for seven gamma-ray energies between 2.39 and 4.05 MeV using the linearly polarized gamma-ray beam of the high-intensity gamma-ray source at the Duke Free-Electron Laser Laboratory. The data provide a stringent test of theoretical calculations for the inverse reaction, the neutron-proton radiative capture reaction, at energies important for Big-Bang nucleosynthesis. Our data are in excellent agreement with potential model and effective field theory calculations. Therefore, the uncertainty in the baryon density Ω_Bh² obtained from Big-Bang nucleosynthesis can be reduced by at least 20%.

  13. Low-energy photodisintegration of the deuteron and Big-Bang nucleosynthesis

    International Nuclear Information System (INIS)

    Tornow, W.; Czakon, N.G.; Howell, C.R.; Hutcheson, A.; Kelley, J.H.; Litvinenko, V.N.; Mikhailov, S.F.; Pinayev, I.V.; Weisel, G.J.; Witala, H.

    2003-01-01

    The photon analyzing power for the photodisintegration of the deuteron was measured for seven gamma-ray energies between 2.39 and 4.05 MeV using the linearly polarized gamma-ray beam of the high-intensity gamma-ray source at the Duke Free-Electron Laser Laboratory. The data provide a stringent test of theoretical calculations for the inverse reaction, the neutron-proton radiative capture reaction, at energies important for Big-Bang nucleosynthesis. Our data are in excellent agreement with potential model and effective field theory calculations. Therefore, the uncertainty in the baryon density Ω_Bh² obtained from Big-Bang nucleosynthesis can be reduced by at least 20%

  14. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  15. Field trial of a pulsed limestone diversion well

    Science.gov (United States)

    Sibrell, Philip L.; Denholm, C.; Dunn, Margaret

    2013-01-01

    The use of limestone diversion wells to treat acid mine drainage (AMD) is well-known, but in many cases, acid neutralization is not as complete as would be desired. Reasons for this include channeling of the water through the limestone bed, and the slow reaction rate of the limestone gravel. A new approach to improve the performance of the diversion well was tested in the field at the Jennings Environmental Education Center, near Slippery Rock, PA. In this approach, a finer size distribution of limestone was used so as to allow fluidization of the limestone bed, thus eliminating channeling and increasing particle surface area for faster reaction rates. Also, water flow was regulated through the use of a dosing siphon, so that consistent fluidization of the limestone sand could be achieved. Testing began late in the summer of 2010, and continued through November of 2011. Initial system performance during the 2010 field season was good, with the production of net alkaline water, but hydraulic problems involving air release and limestone sand retention were observed. In the summer of 2011, a finer size of limestone sand was procured for use in the system. This material fluidized more readily, but acid neutralization tapered off after several days. Subsequent observations indicated that the hydraulics of the system was compromised by the formation of iron oxides in the pipe leading to the limestone bed, which affected water distribution and flow through the bed. Although results from the field trial were mixed, it is believed that without the formation of iron oxides and plugging of the pipe, better acid neutralization and treatment would have occurred. Further tests are being considered using a different hydraulic configuration for the limestone sand fluidized bed.

  16. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  17. Big sized players on the European Union’s financial advisory market

    Directory of Open Access Journals (Sweden)

    Nicolae, C.

    2013-06-01

    The paper presents the activity and the objectives of "The Big Four" group of financial advisory firms. The "Big Four" are the four largest international professional services networks in accountancy and professional services, offering audit, assurance, tax, consulting, advisory, actuarial, corporate finance and legal services. They handle the vast majority of audits for publicly traded companies as well as many private companies, creating an oligopoly in auditing large companies. It is reported that the Big Four audit all but one of the companies that constitute the FTSE 100, and 240 of the companies in the FTSE 250, an index of the leading mid-cap listed companies.

  18. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which ... they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only ... framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies....

  19. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  20. How people are critical to the success of Big Data

    NARCIS (Netherlands)

    Steen, M.G.D.; Boer, J. de; Beurden, M.H.P.H. van

    2016-01-01

    A buzz has emerged around Big Data: an emerging field that is concerned with capturing, storing, combining, visualizing and analysing large and diverse sets of data. Realizing the societal benefits of Data Driven Innovations requires that the innovations are used and adopted by people. In fact like

  1. IDENTIFYING AND ANALYZING THE TRANSIENT AND PERMANENT BARRIERS FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    SARFRAZ NAWAZ BROHI

    2016-12-01

    Auspiciously, big data analytics has made it possible to generate value from immense amounts of raw data. Organizations are able to seek incredible insights which assist them in effective decision making and in providing quality of service by establishing innovative strategies to recognize, examine and address customers' preferences. However, organizations are reluctant to adopt big data solutions due to several barriers such as data storage and transfer, scalability, data quality, data complexity, timeliness, security, privacy, trust, data ownership, and transparency. Despite the discussion on big data opportunities, in this paper we present the findings of our in-depth review process, which was focused on identifying and analyzing the transient and permanent barriers to adopting big data. Although the transient barriers to big data can be eliminated in the near future with the advent of innovative technical contributions, it is challenging to eliminate the permanent barriers entirely, though their impact can be repeatedly reduced with the efficient and effective use of technology, standards, policies, and procedures.

  2. Big Data, data integrity, and the fracturing of the control zone

    Directory of Open Access Journals (Sweden)

    Carl Lagoze

    2014-11-01

    Despite all the attention to Big Data and the claims that it represents a “paradigm shift” in science, we lack an understanding of the qualities of Big Data that may contribute to this revolutionary impact. In this paper, we look beyond the quantitative aspects of Big Data (i.e. lots of data) and examine it from a sociotechnical perspective. We argue that a key factor that distinguishes “Big Data” from “lots of data” lies in changes to the traditional, well-established “control zones” that facilitated clear provenance of scientific data, thereby ensuring data integrity and providing the foundation for credible science. The breakdown of these control zones is a consequence of the manner in which our network technology and culture enable and encourage open, anonymous sharing of information, participation regardless of expertise, and collaboration across geographic, disciplinary, and institutional barriers. We are left with the conundrum—how to reap the benefits of Big Data while re-creating a trust fabric and an accountable chain of responsibility that make credible science possible.

  3. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data's impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects ... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  4. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: Big data will give you new insights, allow you to become more efficient, and/or will solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the Silver Bullet that it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is not "lots of data" as a phenomenon anymore; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Big data overall is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem definition to address science challenges.

  5. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  6. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  7. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpora of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  8. Field performance of timber bridges. 9, Big Erick's stress-laminated deck bridge

    Science.gov (United States)

    J. A. Kainz; J. P. Wacker; M. Nelson

    The Big Erick's bridge was constructed during September 1992 in Baraga County, Michigan. The bridge is 72 ft long, 16 ft wide, and consists of three simple spans: two stress-laminated deck approach spans and a stress-laminated box center span. The bridge is unique in that it is one of the first known stress-laminated timber bridge applications to use Eastern Hemlock...

  9. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    , modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing ..., and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  10. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  11. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  12. "Beyond the Big Bang: a new view of cosmology"

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    and parameters? Can one conceive of a completion of the scenario which resolves the big bang singularity and explains the dark energy now coming to dominate? Are we forced to resort to anthropic explanations? In this talk, I will develop an alternate picture, in which the big bang singularity is resolved and in which the value of the dark energy might be fixed by physical processes. The key is a resolution of the singularity. Using a combination of arguments, involving M theory and holography as well as analytic continuation in time within the low energy effective theory, I argue that there is a unique way to match cosmic evolution across the big bang singularity. The latter is no longer the beginning of time but is instead the gateway to an eternal, cyclical universe. If time permits, I shall describe new work c...

  13. The Shadow of Big Data: Data-Citizenship and Exclusion

    DEFF Research Database (Denmark)

    Rossi, Luca; Hjelholt, Morten; Neumayer, Christina

    2016-01-01

    The shadow of Big Data: data-citizenship and exclusion Big data are understood as being able to provide insights on human behaviour at an individual as well as at an aggregated societal level (Manyka et al. 2011). These insights are expected to be more detailed and precise than anything before, thanks to the large volume of digital data and to the unobtrusive nature of the data collection (Fishleigh 2014). Within this perspective, these two dimensions (volume and unobtrusiveness) define contemporary big data techniques as a socio-technical offering to society, a live representation of itself ... this process "data-citizenship" emerges. Data-citizenship assumes that citizens will be visible to the state through the data they produce. On a general level data-citizenship shifts citizenship from an intrinsic status of a group of people to a status achieved through action. This approach assumes equal...

  14. Priming the Pump for Big Data at Sentara Healthcare.

    Science.gov (United States)

    Kern, Howard P; Reagin, Michael J; Reese, Bertram S

    2016-01-01

    Today's healthcare organizations are facing significant demands with respect to managing population health, demonstrating value, and accepting risk for clinical outcomes across the continuum of care. The patient's environment outside the walls of the hospital and physician's office-and outside the electronic health record (EHR)-has a substantial impact on clinical care outcomes. The use of big data is key to understanding factors that affect the patient's health status and enhancing clinicians' ability to anticipate how the patient will respond to various therapies. Big data is essential to delivering sustainable, high-quality, value-based healthcare, as well as to the success of new models of care such as clinically integrated networks (CINs) and accountable care organizations. Sentara Healthcare, based in Norfolk, Virginia, has been an early adopter of the technologies that have readied us for our big data journey: EHRs, telehealth-supported electronic intensive care units, and telehealth primary care support through MDLIVE. Although we would not say Sentara is at the cutting edge of the big data trend, it certainly is among the fast followers. Use of big data in healthcare is still at an early stage compared with other industries. Tools for data analytics are maturing, but traditional challenges such as heightened data security and limited human resources remain the primary focus for regional health systems seeking to improve care and reduce costs. Sentara primarily makes actionable use of big data in our CIN, Sentara Quality Care Network, and at our health plan, Optima Health. Big data projects can be expensive, and justifying the expense organizationally has often been easier in times of crisis. We have developed an analytics strategic plan separate from but aligned with corporate system goals to ensure optimal investment and management of this essential asset.

  15. Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges

    DEFF Research Database (Denmark)

    Li, Songnian; Dragicevic, Suzana; Anton, François

    2016-01-01

    Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume...... for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current...... developments as well as recommending what needs to be developed further in the near future....

  16. Population-based imaging biobanks as source of big data.

    Science.gov (United States)

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  17. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  18. Using Big Data to Discover Diagnostics and Therapeutics for Gastrointestinal and Liver Diseases

    Science.gov (United States)

    Wooden, Benjamin; Goossens, Nicolas; Hoshida, Yujin; Friedman, Scott L.

    2016-01-01

    Technologies such as genome sequencing, gene expression profiling, proteomic and metabolomic analyses, electronic medical records, and patient-reported health information have produced large amounts of data, from various populations, cell types, and disorders (big data). However, these data must be integrated and analyzed if they are to produce models or concepts about physiologic function or mechanisms of pathogenesis. Many of these data are available to the public, allowing researchers anywhere to search for markers of specific biologic processes or therapeutic targets for specific diseases or patient types. We review recent advances in the fields of computational and systems biology, and highlight opportunities for researchers to use big data sets in the fields of gastroenterology and hepatology, to complement traditional means of diagnostic and therapeutic discovery. PMID:27773806

  19. Internet of Things and big data technologies for next generation healthcare

    CERN Document Server

    Dey, Nilanjan; Ashour, Amira

    2017-01-01

    This comprehensive book focuses on better big-data security for healthcare organizations. Following an extensive introduction to the Internet of Things (IoT) in healthcare including challenging topics and scenarios, it offers an in-depth analysis of medical body area networks with the 5th generation of IoT communication technology along with its nanotechnology. It also describes a novel strategic framework and computationally intelligent model to measure possible security vulnerabilities in the context of e-health. Moreover, the book addresses healthcare systems that handle large volumes of data driven by patients’ records and health/personal information, including big-data-based knowledge management systems to support clinical decisions. Several of the issues faced in storing/processing big data are presented along with the available tools, technologies and algorithms to deal with those problems as well as a case study in healthcare analytics. Addressing trust, privacy, and security issues as well as the I...

  20. Pre-big-bang cosmology and circles in the cosmic microwave background

    International Nuclear Information System (INIS)

    Nelson, William; Wilson-Ewing, Edward

    2011-01-01

    We examine the possibility that circles in the cosmic microwave background could be formed by the interaction of a gravitational wave pulse emitted in some pre-big-bang phase of the universe with the last scattering surface. We derive the expected size distribution of such circles, as well as their typical ring width and (for concentric circles) angular separation. We apply these results, in particular, to conformal cyclic cosmology, ekpyrotic cosmology as well as loop quantum cosmology with and without inflation in order to determine how the predicted geometric properties of these circles would vary from one model to the other, and thus, if detected, could allow us to differentiate between various pre-big-bang cosmological models. We also obtain a relation between the angular ring width and the angular radius of such circles that can be used in order to determine whether or not circles observed in the cosmic microwave background are due to energetic pre-big-bang events.

  1. Recent Development in Big Data Analytics for Business Operations and Risk Management.

    Science.gov (United States)

    Choi, Tsan-Ming; Chan, Hing Kai; Yue, Xiaohang

    2017-01-01

    "Big data" is an emerging topic and has attracted the attention of many researchers and practitioners in industrial systems engineering and cybernetics. Big data analytics would definitely lead to valuable knowledge for many organizations. Business operations and risk management can be a beneficiary as there are many data collection channels in the related industrial systems (e.g., wireless sensor networks, Internet-based systems, etc.). Big data research, however, is still in its infancy. Its focus is rather unclear and related studies are not well amalgamated. This paper aims to present the challenges and opportunities of big data analytics in this unique application domain. Technological development and advances for industrial-based business systems, reliability and security of industrial systems, and their operational risk management are examined. Important areas for future research are also discussed and revealed.

  2. Big Data Challenges for Large Radio Arrays

    Science.gov (United States)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  3. Analysis of Big Data in Gait Biomechanics: Current Trends and Future Directions.

    Science.gov (United States)

    Phinyomark, Angkoon; Petri, Giovanni; Ibáñez-Marcelo, Esther; Osis, Sean T; Ferber, Reed

    2018-01-01

    The increasing amount of data in biomechanics research has greatly increased the importance of developing advanced multivariate analysis and machine learning techniques, which are better able to handle "big data". Consequently, advances in data science methods will expand the knowledge for testing new hypotheses about biomechanical risk factors associated with walking and running gait-related musculoskeletal injury. This paper begins with a brief introduction to an automated three-dimensional (3D) biomechanical gait data collection system: 3D GAIT, followed by how the studies in the field of gait biomechanics fit the quantities in the 5 V's definition of big data: volume, velocity, variety, veracity, and value. Next, we provide a review of recent research and development in multivariate and machine learning methods-based gait analysis that can be applied to big data analytics. These modern biomechanical gait analysis methods include several main modules such as initial input features, dimensionality reduction (feature selection and extraction), and learning algorithms (classification and clustering). Finally, a promising big data exploration tool called "topological data analysis" and directions for future research are outlined and discussed.
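
    The processing chain described here (input features, dimensionality reduction, then a learning algorithm) can be made concrete with a minimal sketch. The snippet below is an illustration under assumed inputs, not the 3D GAIT system itself: the synthetic feature matrix, labels, and the PCA-plus-SVM choices are stand-ins.

```python
# Minimal gait-style pipeline: feature matrix -> PCA -> classifier.
# Synthetic data stands in for real 3D gait features (an assumption).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 60))     # 300 trials x 60 kinematic features
y = rng.integers(0, 2, size=300)   # e.g. injured vs. healthy runners

pipe = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```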

  4. Technology and Pedagogy: Using Big Data to Enhance Student Learning

    Science.gov (United States)

    Brinton, Christopher Greg

    2016-01-01

    The "big data revolution" has penetrated many fields, from network monitoring to online retail. Education and learning are quickly becoming part of it, too, because today, course delivery platforms can collect unprecedented amounts of behavioral data about students as they interact with learning content online. This data includes, for…

  5. Big data in psychology: Introduction to the special issue.

    Science.gov (United States)

    Harlow, Lisa L; Oswald, Frederick L

    2016-12-01

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes that emerge in the articles with respect to psychological research conducted in the area of big data are mentioned, including: (a) The benefits of collaboration across disciplines, such as those in the social sciences, applied statistics, and computer science. Doing so assists in grounding big data research in sound theory and practice, as well as in affording effective data retrieval and analysis. (b) Availability of large data sets on Facebook, Twitter, and other social media sites that provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. (c) Identifying, addressing, and being sensitive to ethical considerations when analyzing large data sets gained from public or private sources. (d) The unavoidable necessity of validating predictive models in big data by applying a model developed on 1 dataset to a separate set of data or hold-out sample. Translational abstracts that summarize the articles in very clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
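
    Theme (d), hold-out validation, is concrete enough to sketch. The example below assumes scikit-learn and uses synthetic data in place of an actual psychological data set; it only illustrates fitting a model on one sample and validating it on a held-out sample.

```python
# Fit a predictive model on one portion of the data and validate it
# on a held-out portion, as theme (d) above recommends.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_hold, model.predict(X_hold)))
```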

  6. Environment and Personal Well-being in Urban China

    Institute of Scientific and Technical Information of China (English)

    Yang Yuwen; Yang Wenya

    2011-01-01

    The aim of this study is to examine the relationship between environment and personal well-being using a sample of 562 urban employees from three cities in Liaoning province in the People's Republic of China. In contrast to previous studies, this study controlled for positive affectivity (PA), negative affectivity (NA), job satisfaction, and the Big Five personality traits. In addition, the research variables of the personal well-being index (PWI), positive affectivity, negative affectivity, job satisfaction, the Big Five, and environmental satisfaction are measured with multi-item scales. The research finds that environmental satisfaction is positively related to personal well-being, suggesting that improvement of the natural surroundings in cities can improve people's well-being.

  7. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived-notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter? And what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R&D support by the PASAG report. The second highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  9. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  10. Big Data, Deep Learning and Tianhe-2 at Sun Yat-Sen University, Guangzhou

    Science.gov (United States)

    Yuen, D. A.; Dzwinel, W.; Liu, J.; Zhang, K.

    2014-12-01

    In this decade the big data revolution has permeated many fields, ranging from financial transactions to medical surveys and scientific endeavors, because of the big opportunities people see ahead. What to do with all this data remains an intriguing question. This is where computer scientists, together with applied mathematicians, have made significant inroads in developing deep learning techniques for unraveling new relationships among different variables by means of correlation analysis and data-assimilation methods. Deep learning and big data taken together pose a grand-challenge task in high-performance computing, demanding both ultrafast speed and large memory. The Tianhe-2, recently installed at Sun Yat-Sen University in Guangzhou, is well positioned to take up this challenge because it is currently the world's fastest computer at 34 petaflops. Each compute node of Tianhe-2 has two Intel Xeon E5-2600 CPUs and three Xeon Phi accelerators. The Tianhe-2 has a very large, fast RAM of 88 gigabytes on each node, and the system has a total memory of 1,375 terabytes. All of these technical features will allow very high-dimensional (more than 10 dimensions) deep learning problems to be explored carefully on the Tianhe-2. Problems in seismology which can be solved include three-dimensional seismic wave simulations of the whole Earth at a few kilometers' resolution and the recognition of new phases in seismic waveforms from assemblages of large data sets.

  11. Results of investigations at the Zunil geothermal field, Guatemala: Well logging and brine geochemistry

    Energy Technology Data Exchange (ETDEWEB)

    Adams, A.; Dennis, B.; Van Eeckhout, E.; Goff, F.; Lawton, R.; Trujillo, P.E.; Counce, D.; Archuleta, J. (Los Alamos National Lab., NM (USA)); Medina, V. (Instituto Nacional de Electrificacion, Guatemala City (Guatemala). Unidad de Desarollo Geotermico)

    1991-07-01

    The well logging team from Los Alamos and its counterpart from Central America were tasked to investigate the condition of four producing geothermal wells in the Zunil Geothermal Field. The information obtained would be used to help evaluate the Zunil geothermal reservoir in terms of possible additional drilling and future power plant design. The field activities focused on downhole measurements in four production wells (ZCQ-3, ZCQ-4, ZCQ-5, and ZCQ-6). The teams took measurements of the wells in both static (shut-in) and flowing conditions, using the high-temperature well logging tools developed at Los Alamos National Laboratory. Two well logging missions were conducted in the Zunil field. In October 1988 measurements were made in well ZCQ-3, ZCQ-5, and ZCQ-6. In December 1989 the second field operation logged ZCQ-4 and repeated logs in ZCQ-3. Both field operations included not only well logging but the collecting of numerous fluid samples from both thermal and nonthermal waters. 18 refs., 22 figs., 7 tabs.

  12. Radiative neutron capture on a proton at big-bang nucleosynthesis energies

    International Nuclear Information System (INIS)

    Ando, S.; Cyburt, R. H.; Hong, S. W.; Hyun, C. H.

    2006-01-01

    The total cross section for radiative neutron capture on a proton, np→dγ, is evaluated at big-bang nucleosynthesis (BBN) energies. The electromagnetic transition amplitudes are calculated up to next-to-leading order within the framework of pionless effective field theory with dibaryon fields. We also calculate the dγ→np cross section and the photon analyzing power for the dγ⃗→np process from the amplitudes. The values of low-energy constants that appear in the amplitudes are estimated by a Markov Chain Monte Carlo analysis using the relevant low-energy experimental data. Our result agrees well with those of other theoretical calculations except for the np→dγ cross section at some energies estimated by an R-matrix analysis. We also study the uncertainties in our estimation of the np→dγ cross section at relevant BBN energies and find that the estimated cross section is reliable to within ∼1% error.

  13. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
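
    For orientation, a basic query round-trip against the BigQuery API looks roughly like the sketch below. It assumes the google-cloud-bigquery Python client and application-default credentials for an existing Google Cloud project; the public dataset queried is chosen only for illustration.

```python
# Minimal BigQuery round-trip (assumes `pip install google-cloud-bigquery`
# and a configured Google Cloud project with default credentials).
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():  # result() blocks until the job finishes
    print(row.name, row.total)
```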

  14. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  15. Big data algorithms, analytics, and applications

    CERN Document Server

    Li, Kuan-Ching; Yang, Laurence T; Cuzzocrea, Alfredo

    2015-01-01

    Data are generated at an exponential rate all over the world. Through advanced algorithms and analytics techniques, organizations can harness this data, discover hidden patterns, and use the findings to make meaningful decisions. Containing contributions from leading experts in their respective fields, this book bridges the gap between the vastness of big data and the appropriate computational methods for scientific and social discovery. It also explores related applications in diverse sectors, covering technologies for media/data communication, elastic media/data storage, cross-network media/

  16. Scalar field cosmology in three-dimensions

    International Nuclear Information System (INIS)

    Oliveira Neto, G.

    2001-01-01

    We study an analytical solution to the Einstein equations in 2+1 dimensions. The space-time is dynamical and has a line symmetry. The matter content is a minimally coupled, massless scalar field. Depending on the value of certain parameters, this solution represents three distinct space-times. The first one is flat space-time. Then, we have a big bang model with a negative curvature scalar and a real scalar field. The last case is a big bang model with event horizons, where the curvature scalar vanishes and the scalar field changes from real to purely imaginary. (author)

  17. A Matrix Big Bang

    OpenAIRE

    Craps, Ben; Sethi, Savdeep; Verlinde, Erik

    2005-01-01

    The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matr...

  18. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, the relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general-relativistic aspects such as the change of the gravitational constant with time, and quantum-theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as it is usually claimed to be, that the cosmological problem is still wide open, and that alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  19. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  20. Intra-well relaxation process in magnetic fluids subjected to strong polarising fields

    Energy Technology Data Exchange (ETDEWEB)

    Marin, C.N., E-mail: cmarin@physics.uvt.ro [West University of Timisoara, Faculty of Physics, B-dul V. Parvan, No. 4, Timisoara 300223 (Romania); Fannin, P.C. [Department of Electronic and Electrical Engineering, Trinity College, Dublin 2 (Ireland); Malaescu, I.; Barvinschi, P.; Ercuta, A. [West University of Timisoara, Faculty of Physics, B-dul V. Parvan, No. 4, Timisoara 300223 (Romania)

    2012-02-15

    We report on frequency- and field-dependent complex magnetic susceptibility measurements of a kerosene-based magnetic fluid with iron oxide nanoparticles, stabilized with oleic acid, in the frequency range 0.1-6 GHz and over the polarising field range 0-168.4 kA/m. With increasing polarising field, H, a subsidiary loss peak clearly emerges in the vicinity of the ferromagnetic resonance peak, from which it remains distinct even in strong polarising fields of 168.4 kA/m. This is in contrast to other reported cases, in which the intra-well relaxation process is manifested only as a shoulder of the resonance peak that vanishes in polarising fields larger than 100 kA/m. The XRD results, in connection with the anisotropy field results, confirm that the investigated sample contains particles of magnetite and of the tetragonal phase of maghemite. Taking into account the characteristics of our sample, the theoretical analysis revealed that the intra-well relaxation process of the small particles of the tetragonal phase of maghemite may be responsible for the subsidiary loss peak of the investigated magnetic fluid. Highlights: the intra-well relaxation process in a magnetic fluid is studied; the sample consists of particles of magnetite and of the tetragonal phase of maghemite; a subsidiary relaxation peak is observed in the vicinity of the resonance peak; this peak is correlated with the intra-well relaxation process and assigned to the tetragonal-phase maghemite particles.

  1. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics (value, volume, velocity, variety, veracity, and variability) are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data, and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding the big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given; one such platform is sketched below.
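
    As a hedged illustration of such a platform, the sketch below runs a toy aggregation over mock patient records with PySpark (Apache Spark's Python API). The column names and values are invented for demonstration only.

```python
# Toy health-record aggregation on Apache Spark (assumes `pip install pyspark`).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ehr-demo").getOrCreate()
df = spark.createDataFrame(
    [("p1", "ICD10:E11", 7.2), ("p2", "ICD10:I10", 5.9), ("p1", "ICD10:I10", 6.1)],
    ["patient_id", "diagnosis", "hba1c"],  # invented columns
)
summary = (
    df.groupBy("diagnosis")
      .agg(F.count("*").alias("n_records"), F.avg("hba1c").alias("mean_hba1c"))
)
summary.show()
spark.stop()
```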

  2. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  3. Advances in mobile cloud computing and big data in the 5G era

    CERN Document Server

    Mastorakis, George; Dobre, Ciprian

    2017-01-01

    This book reports on the latest advances on the theories, practices, standards and strategies that are related to the modern technology paradigms, the Mobile Cloud computing (MCC) and Big Data, as the pillars and their association with the emerging 5G mobile networks. The book includes 15 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of Big Data and Mobile Cloud Computing, from basic concepts to advanced findings, reporting the state-of-the-art on Big Data management. It demonstrates and discusses methods and practices to improve multi-source Big Data manipulation techniques, as well as the integration of resources availability through the 3As (Anywhere, Anything, Anytime) paradigm, using the 5G access technologies.

  4. The 'Big Karl' magnetic spectrometer - studies of the 103Ru transition nucleus with (d,p) and (p,d) reactions

    International Nuclear Information System (INIS)

    Huerlimann, W.

    1981-04-01

    The paper describes the structure and characteristics of the spectrometer and its application in a study of the 102Ru(d,p)103Ru and 104Ru(p,d)103Ru reactions. The study is structured as follows: to begin with, the theoretical fundamentals, ion-optical characteristics, and layout of BIG KARL are described. Field measurements and analyses carried out on the magnets of the spectrometer are described, as well as the functioning of the 'Ht correction coils' used here for the first time to prevent faulty imaging. Chapter IV then describes the methods employed so far to optimize resolution for large aperture angles of the spectrometer. Finally, chapter V investigates the 103Ru transition nucleus on the basis of the 102Ru(d,p)103Ru and 104Ru(p,d)103Ru transfer reactions measured with BIG KARL. (orig./HSI) [de]

  5. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetrical assumptions to deduce the FRW cosmological models which predict a big bang singularity. Next we prove a couple theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetrical assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds, therefore we compare and contrast the two geometries throughout.

  6. Binding energy of impurity states in an inverse parabolic quantum well under magnetic field

    International Nuclear Information System (INIS)

    Kasapoglu, E.; Sari, H.; Soekmen, I.

    2007-01-01

    We have investigated the effects of a magnetic field directed perpendicular to the well on the binding energy of hydrogenic impurities in an inverse parabolic quantum well (IPQW) with different widths as well as different Al concentrations at the well center. The Al concentration at the barriers was always x_max = 0.3. The calculations were performed within the effective mass approximation, using a variational method. We observe that the IPQW structure turns into a parabolic quantum well under the inverting effect of the magnetic field, and that the donor impurity binding energy in the IPQW strongly depends on the magnetic field, the Al concentration at the well center, and the well dimensions.
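
    As a toy illustration of the variational method invoked here (not the authors' IPQW calculation), one can bound the ground-state energy of the hydrogen atom with a Gaussian trial wavefunction; in atomic units the trial energy is E(α) = 3α/2 − 2√(2α/π), minimized numerically below.

```python
# Toy variational calculation: hydrogen ground state with a Gaussian trial
# wavefunction psi(r) ~ exp(-alpha*r^2), in atomic units. This illustrates
# the variational method only; it is not the authors' IPQW model.
import numpy as np
from scipy.optimize import minimize_scalar

def trial_energy(alpha):
    # <T> = 3*alpha/2 and <V> = -2*sqrt(2*alpha/pi) for this trial state
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(trial_energy, bounds=(1e-4, 5.0), method="bounded")
print(f"optimal alpha = {res.x:.4f}, E_min = {res.fun:.4f} hartree")
# The bound -4/(3*pi) = -0.4244 hartree lies above the exact -0.5 hartree,
# as the variational principle guarantees.
```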

  7. The measurement equivalence of Big Five factor markers for persons with different levels of education.

    Science.gov (United States)

    Rammstedt, Beatrice; Goldberg, Lewis R; Borg, Ingwer

    2010-02-01

    Previous findings suggest that the Big-Five factor structure is not guaranteed in samples with lower educational levels. The present study investigates the Big-Five factor structure in two large samples representative of the German adult population. In both samples, the Big-Five factor structure emerged only in a blurry way at lower educational levels, whereas for highly educated persons it emerged with textbook-like clarity. Because well-educated persons are most comparable to the usual subjects of psychological research, it might be asked if the Big Five are limited to such persons. Our data contradict this conclusion. There are strong individual differences in acquiescence response tendencies among less highly educated persons. After controlling for this bias the Big-Five model holds at all educational levels.

  8. Outcome Prediction after Radiotherapy with Medical Big Data.

    Science.gov (United States)

    Magome, Taiki

    2016-01-01

    Data science is becoming more important in many fields. In the medical physics field, we face huge amounts of data every day. Treatment outcomes after radiation therapy are determined by complex interactions between clinical, biological, and dosimetric factors. A key concept of recent radiation oncology research is to predict the outcome based on medical big data for personalized medicine. Here, several reports that analyze medical databases with machine learning techniques are reviewed, and the feasibility of outcome prediction after radiation therapy is discussed. In addition, some strategies for reducing the manual labor needed to analyze huge data sets in medical physics are discussed.

  9. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models, and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  10. Personalized medicine beyond genomics: alternative futures in big data-proteomics, environtome and the social proteome.

    Science.gov (United States)

    Özdemir, Vural; Dove, Edward S; Gürsoy, Ulvi K; Şardaş, Semra; Yıldırım, Arif; Yılmaz, Şenay Görücü; Ömer Barlas, I; Güngör, Kıvanç; Mete, Alper; Srivastava, Sanjeeva

    2017-01-01

    No field in science and medicine today remains untouched by Big Data, and psychiatry is no exception. Proteomics is a Big Data technology and a next-generation biomarker, supporting novel system diagnostics and therapeutics in psychiatry. Proteomics technology is, in fact, much older than genomics and dates to the 1970s, well before the launch of the international Human Genome Project. While the genome has long been framed as the master or "elite" executive molecule in cell biology, the proteome by contrast is humble. Yet the proteome is critical for life: it ensures the daily functioning of cells and whole organisms. In short, proteins are the blue-collar workers of biology, the down-to-earth molecules that we cannot live without. Since 2010, proteomics has found renewed meaning and international attention with the launch of the Human Proteome Project and the growing interest in Big Data technologies such as proteomics. This article presents an interdisciplinary technology foresight analysis and conceptualizes the terms "environtome" and "social proteome". We define "environtome" as the entire complement of elements external to the human host, from microbiome, ambient temperature and weather conditions to government innovation policies, stock market dynamics, human values, political power and social norms that collectively shape the human host spatially and temporally. The "social proteome" is the subset of the environtome that influences the transition of proteomics technology to innovative applications in society. The social proteome encompasses, for example, new reimbursement schemes and business innovation models for proteomics diagnostics that depart from the "once-a-life-time" genotypic tests and the anticipated hype attendant to context- and time-sensitive proteomics tests. Building on the "nesting principle" for governance of complex systems as discussed by Elinor Ostrom, we propose here a 3-tiered organizational architecture for Big Data science such as

  11. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as being hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and they share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.
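
    Of the remedies named, propensity score analysis lends itself to a short sketch. The example below, a toy nearest-neighbour propensity-score matching with simulated data (assuming NumPy and scikit-learn), stands in for real clinical records.

```python
# Toy propensity-score matching: estimate a treatment effect after
# matching treated units to controls with similar propensity scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # confounders
t = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # treatment assignment
y = 2 * t + X[:, 0] + rng.normal(size=200)             # outcome (true effect = 2)

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
# 1-nearest-neighbour match on the propensity score
nearest = np.abs(ps[control][None, :] - ps[treated][:, None]).argmin(axis=1)
att = (y[treated] - y[control[nearest]]).mean()
print(f"estimated effect on the treated: {att:.2f}")
```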

  12. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes aspects of data analysis, such as being hypothesis-generating rather than hypothesis-testing. Big data focuses on the temporal stability of associations rather than on causal relationships, and underlying probability distribution assumptions are frequently not required. Medical big data, as material to be analyzed, has various features that are not only distinct from big data of other disciplines but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, the curse of dimensionality, and bias control, and they share the inherent limitations of observational studies, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcomes and reduce waste in areas including nephrology.

  13. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.

  14. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  15. Properties of uranium and thorium in host rocks of multi-metal (Ag, Pb, U, Cu, Bi, Z, F) Big Kanimansur deposit (Tajikistan)

    International Nuclear Information System (INIS)

    Fayziev, A.R.

    2007-01-01

    Host rocks of the multi-metal Big Kanimansur deposit contain elevated average uranium and thorium contents, exceeding the clarke values by factors of 7 and 2.5, respectively. A second property of the radioactive-element distribution is the low thorium-to-uranium ratio. These criteria can be used as prospecting signs for the flanks and depths of known ore fields, as well as for new areas of multi-metal mineralisation.

  16. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  17. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schelgel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervanted-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broad band power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  18. Big Data Challenges

    Directory of Open Access Journals (Sweden)

    Alexandru Adrian TOLE

    2013-10-01

    The amount of data traveling across the internet today is not only large but complex as well. Companies, institutions, healthcare systems, and others all use piles of data, which are further used to create reports that ensure the continuity of the services they offer. The process behind the results that these entities request represents a challenge for software developers and for the companies that provide the IT infrastructure. The challenge is how to manipulate an impressive volume of data that has to be securely delivered through the internet and reach its destination intact. This paper treats the challenges that Big Data creates.

  19. Big Data and Intelligence: Applications, Human Capital, and Education

    Directory of Open Access Journals (Sweden)

    Michael Landon-Murray

    2016-06-01

    The potential for big data to contribute to the US intelligence mission goes beyond bulk collection, social media and counterterrorism. Applications will speak to a range of issues of major concern to intelligence agencies, from military operations to climate change to cyber security. There are challenges too: procurement lags, data stovepiping, separating signal from noise, sources and methods, a range of normative issues, and central to managing these challenges, human capital. These potential applications and challenges are discussed and a closer look at what data scientists do in the Intelligence Community (IC) is offered. Effectively filling the ranks of the IC’s data science workforce will depend on the provision of well-trained data scientists from the higher education system. Program offerings at America’s top fifty universities will thus be surveyed (just a few years ago there were reportedly no degrees in data science). One Master’s program that has melded data science with intelligence is examined as well as a university big data research center focused on security and intelligence. This discussion goes a long way to clarify the prospective uses of data science in intelligence while probing perhaps the key challenge to optimal application of big data in the IC.

  20. Big Five personality traits, job satisfaction and subjective wellbeing in China.

    Science.gov (United States)

    Zhai, Qingguo; Willis, Mike; O'Shea, Bob; Zhai, Yubo; Yang, Yuwen

    2013-01-01

    This paper examines the effect of the Big Five personality traits on job satisfaction and subjective wellbeing (SWB). The paper also examines the mediating role of job satisfaction on the Big Five-SWB relationship. Data were collected from a sample of 818 urban employees from five Chinese cities: Harbin, Changchun, Shenyang, Dalian, and Fushun. All the study variables were measured with well-established multi-item scales that have been validated both in English-speaking populations and in China. The study found only extraversion to have an effect on job satisfaction, suggesting that there could be cultural difference in the relationships between the Big Five and job satisfaction in China and in the West. The study found that three factors in the Big Five--extraversion, conscientiousness, and neuroticism--have an effect on SWB. This finding is similar to findings in the West, suggesting convergence in the relationship between the Big Five and SWB in different cultural contexts. The research found that only the relationship between extraversion and SWB is partially mediated by job satisfaction, implying that the effect of the Big Five on SWB is mainly direct, rather than indirect via job satisfaction. The study also found that extraversion was the strongest predictor of both job satisfaction and SWB. This finding implies that extraversion could be more important than other factors in the Big Five in predicting job satisfaction and SWB in a "high collectivism" and "high power distance" country such as China. The research findings are discussed in the Chinese cultural context. The study also offers suggestions on the directions for future research.
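
    The partial mediation reported (extraversion → job satisfaction → SWB) corresponds to the standard indirect-effect decomposition. Below is a minimal simulated sketch, assuming statsmodels; the coefficients are invented and are not the study's estimates.

```python
# Simple mediation sketch: indirect effect a*b and direct effect c'.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
extraversion = rng.normal(size=500)
job_sat = 0.4 * extraversion + rng.normal(size=500)               # a path
swb = 0.3 * extraversion + 0.5 * job_sat + rng.normal(size=500)   # c' and b paths

a = sm.OLS(job_sat, sm.add_constant(extraversion)).fit().params[1]
full = sm.OLS(swb, sm.add_constant(np.column_stack([extraversion, job_sat]))).fit()
c_prime, b = full.params[1], full.params[2]
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```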

  1. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurements data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and is as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  2. The theory and method of two-well field test for in-situ leaching uranium

    International Nuclear Information System (INIS)

    Yao Yixuan; Huo Jiandang; Xiang Qiulin; Tang Baobin

    2007-01-01

    Because the leaching area in a field test for in-situ uranium leaching cannot be determined exactly, the reliability of the parameters obtained by calculation cannot be ensured, and the whole test requires a long time and great investment. In a two-well field test, lixiviant is injected into one well and pregnant solution is pumped out of the other; the flow rate of the production well exceeds that of the injection well, and uranium is not recovered. If the ratio of pumping capacity to injection capacity is kept constant during the test, the leaching area remains fixed and can be calculated exactly. A full field test takes six months to one year. The two-well test is a scientific, rapid, and low-cost field test method, widely used in the Commonwealth of Independent States. (authors)

  3. Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs

    Directory of Open Access Journals (Sweden)

    Mohd Usama

    2017-11-01

    At present, big data is very popular because it has proved to be very successful in many fields, such as social media and E-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte-sized or larger datasets with different structures at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open-source framework that is used to process large amounts of data in an inexpensive and efficient way, and job scheduling is a key factor for achieving high performance in big data processing. This paper gives an overview of big data and highlights the problems and challenges in big data. It then highlights the Hadoop Distributed File System (HDFS), Hadoop MapReduce (illustrated in the sketch below), and various parameters that affect the performance of job scheduling algorithms in big data, such as Job Tracker, Task Tracker, Name Node, Data Node, etc. The primary purpose of this paper is to present a comparative study of job scheduling algorithms, along with their experimental results, in a Hadoop environment. In addition, this paper describes the advantages, disadvantages, features, and drawbacks of various Hadoop job schedulers such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study among these schedulers.
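
    The sketch below makes the MapReduce model concrete with a word-count mapper and reducer in the Hadoop Streaming style. The file names and the streaming invocation in the comments are illustrative assumptions, not taken from the paper.

```python
#!/usr/bin/env python3
# Word count in the Hadoop Streaming style. Locally, the same pipeline is:
#   cat input.txt | python3 wc.py map | sort | python3 wc.py reduce
# On a cluster it would run via Hadoop Streaming (paths are hypothetical):
#   hadoop jar hadoop-streaming.jar -mapper "wc.py map" -reducer "wc.py reduce" \
#       -input /data/in -output /data/out
import sys

def mapper(lines):
    # Emit a (word, 1) pair for every word seen.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Expects input sorted by key; sums counts per word.
    current, count = None, 0
    for line in lines:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield f"{current}\t{count}"
            current, count = word, 0
        count += int(n)
    if current is not None:
        yield f"{current}\t{count}"

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    stage = mapper if role == "map" else reducer
    for record in stage(sys.stdin):
        print(record)
```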

  4. Entomological Collections in the Age of Big Data.

    Science.gov (United States)

    Short, Andrew Edward Z; Dikow, Torsten; Moreau, Corrie S

    2018-01-07

    With a million described species and more than half a billion preserved specimens, the large scale of insect collections is unequaled by those of any other group. Advances in genomics, collection digitization, and imaging have begun to more fully harness the power that such large data stores can provide. These new approaches and technologies have transformed how entomological collections are managed and utilized. While genomic research has fundamentally changed the way many specimens are collected and curated, advances in technology have shown promise for extracting sequence data from the vast holdings already in museums. Efforts to mainstream specimen digitization have taken root and have accelerated traditional taxonomic studies as well as distribution modeling and global change research. Emerging imaging technologies such as microcomputed tomography and confocal laser scanning microscopy are changing how morphology can be investigated. This review provides an overview of how the realization of big data has transformed our field and what may lie in store.

  5. An overview of big data and data science education at South African universities

    Directory of Open Access Journals (Sweden)

    Eduan Kotzé

    2016-02-01

    Full Text Available Man and machine are generating data electronically at an astronomical speed and in such a way that society is experiencing cognitive challenges to analyse this data meaningfully. Big data firms, such as Google and Facebook, identified this problem several years ago and are continuously developing new technologies or improving existing technologies in order to facilitate the cognitive analysis process of these large data sets. The purpose of this article is to contribute to our theoretical understanding of the role that big data might play in creating new training opportunities for South African universities. The article investigates emerging literature on the characteristics and main components of big data, together with the Hadoop application stack as an example of big data technology. Due to the rapid development of big data technology, a paradigm shift of human resources is required to analyse these data sets; therefore, this study examines the state of big data teaching at South African universities. This article also provides an overview of possible big data sources for South African universities, as well as relevant big data skills that data scientists need. The study also investigates existing academic programs in South Africa, where the focus is on teaching advanced database systems. The study found that big data and data science topics are introduced to students on a postgraduate level, but that the scope is very limited. This article contributes by proposing important theoretical topics that could be introduced as part of the existing academic programs. More research is required, however, to expand these programs in order to meet the growing demand for data scientists with big data skills.

  6. Infectious Disease Surveillance in the Big Data Era: Towards Faster and Locally Relevant Systems

    Science.gov (United States)

    Simonsen, Lone; Gog, Julia R.; Olson, Don; Viboud, Cécile

    2016-01-01

    While big data have proven immensely useful in fields such as marketing and earth sciences, public health is still relying on more traditional surveillance systems and awaiting the fruits of a big data revolution. A new generation of big data surveillance systems is needed to achieve rapid, flexible, and local tracking of infectious diseases, especially for emerging pathogens. In this opinion piece, we reflect on the long and distinguished history of disease surveillance and discuss recent developments related to use of big data. We start with a brief review of traditional systems relying on clinical and laboratory reports. We then examine how large-volume medical claims data can, with great spatiotemporal resolution, help elucidate local disease patterns. Finally, we review efforts to develop surveillance systems based on digital and social data streams, including the recent rise and fall of Google Flu Trends. We conclude by advocating for increased use of hybrid systems combining information from traditional surveillance and big data sources, which seems the most promising option moving forward. Throughout the article, we use influenza as an exemplar of an emerging and reemerging infection which has traditionally been considered a model system for surveillance and modeling. PMID:28830112
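    A minimal sketch of the hybrid idea advocated above: rescale a fast but noisy digital signal to the clinical scale, then blend the two streams. The weekly counts and the blending weight are invented for illustration; they are not from the article.

    ```python
    import numpy as np

    # Hypothetical weekly signals: lab-confirmed case counts (slow but
    # reliable) and a digital proxy such as search-query volume (fast
    # but noisy). All numbers are made up.
    clinical = np.array([12, 15, 22, 30, 41, 55], dtype=float)
    digital  = np.array([900, 1400, 2100, 2600, 4000, 5200], dtype=float)

    # Rescale the digital stream to the clinical scale with least squares,
    # then blend the two with a fixed weight.
    beta = np.linalg.lstsq(digital[:, None], clinical, rcond=None)[0][0]
    w = 0.7  # trust placed in the clinical signal (assumed)
    hybrid = w * clinical + (1 - w) * beta * digital
    print(np.round(hybrid, 1))
    ```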

  7. How to Generate Economic and Sustainability Reports from Big Data? Qualifications of Process Industry

    Directory of Open Access Journals (Sweden)

    Esa Hämäläinen

    2017-11-01

    Full Text Available Big Data may introduce new opportunities, and for this reason it has become a mantra among most industries. This paper focuses on examining how to develop cost and sustainable reporting by utilizing Big Data that covers economic values, production volumes, and emission information. We assume strongly that this use supports cleaner production, while at the same time offers more information for revenue and profitability development. We argue that Big Data brings company-wide business benefits if data queries and interfaces are built to be interactive, intuitive, and user-friendly. The amount of information related to operations, costs, emissions, and the supply chain would increase enormously if Big Data was used in various manufacturing industries. It is essential to expose the relevant correlations between different attributes and data fields. Proper algorithm design and programming are key to making the most of Big Data. This paper introduces ideas on how to refine raw data into valuable information, which can serve many types of end users, decision makers, and even external auditors. Concrete examples are given through an industrial paper mill case, which covers environmental aspects, cost-efficiency management, and process design.
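    Exposing the relevant correlations between data fields, as the abstract suggests, can start as simply as the following pandas sketch; the mill variables and figures are hypothetical placeholders, not data from the case study.

    ```python
    import pandas as pd

    # Illustrative paper-mill records; column names and values are invented.
    df = pd.DataFrame({
        "production_t":  [410, 395, 430, 450, 405],
        "energy_mwh":    [310, 300, 335, 352, 318],
        "co2_t":         [95, 92, 103, 108, 97],
        "unit_cost_eur": [512, 530, 498, 487, 525],
    })

    # Correlation matrix across production, emission, and cost fields:
    # a first step before building interactive reports on top of them.
    print(df.corr().round(2))
    ```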

  8. Seeing the "Big" Picture: Big Data Methods for Exploring Relationships Between Usage, Language, and Outcome in Internet Intervention Data.

    Science.gov (United States)

    Carpenter, Jordan; Crutchley, Patrick; Zilca, Ran D; Schwartz, H Andrew; Smith, Laura K; Cobb, Angela M; Parks, Acacia C

    2016-08-31

    Assessing the efficacy of Internet interventions that are already in the market introduces both challenges and opportunities. While vast, often unprecedented amounts of data may be available (hundreds of thousands, and sometimes millions, of participants with high dimensions of assessed variables), the data are observational in nature, are partly unstructured (eg, free text, images, sensor data), do not include a natural control group to be used for comparison, and typically exhibit high attrition rates. New approaches are therefore needed to use these existing data and derive new insights that can augment traditional smaller-group randomized controlled trials. Our objective was to demonstrate how emerging big data approaches can help explore questions about the effectiveness and process of an Internet well-being intervention. We drew data from the user base of a well-being website and app called Happify. To explore effectiveness, multilevel models focusing on within-person variation explored whether greater usage predicted higher well-being in a sample of 152,747 users. In addition, to explore the underlying processes that accompany improvement, we analyzed language for 10,818 users who had a sufficient volume of free-text response and timespan of platform usage. A topic model constructed from this free text provided language-based correlates of individual user improvement in outcome measures, providing insights into the beneficial underlying processes experienced by users. On a measure of positive emotion, the average user improved 1.38 points per week (SE 0.01, t(122,455)=113.60, P<.001). Particular topics also showed an effect on change in well-being over time, illustrating which topics may be more beneficial than others when engaging with the interventions. In particular, topics that are related to addressing negative thoughts and feelings were correlated with improvement over time. Using observational analyses on naturalistic big data, we can explore the relationship between usage and well-being among the users of an Internet intervention.
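    A sketch of the within-person multilevel approach described here, using a synthetic stand-in for the Happify data (statsmodels' MixedLM with a random intercept per user); the variable names, sample size, and effect size are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic repeated well-being assessments nested within users,
    # with cumulative platform usage as the predictor (all simulated).
    rng = np.random.default_rng(0)
    n_users, n_obs = 50, 6
    df = pd.DataFrame({
        "user":  np.repeat(np.arange(n_users), n_obs),
        "usage": np.tile(np.arange(n_obs), n_users),
    })
    df["wellbeing"] = 3 + 0.2 * df["usage"] + rng.normal(0, 0.5, len(df))

    # A random intercept per user isolates within-person change, the
    # focus of the multilevel models described in the abstract.
    model = smf.mixedlm("wellbeing ~ usage", df, groups=df["user"]).fit()
    print(model.params["usage"])  # average within-person slope
    ```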

  9. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  10. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used more in the future as a tool for everything that happens both online and offline. Online is, of course, where it is most at home: Big Data is found in this medium, offering many advantages and being a real help for all consumers. In this paper we discussed Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We have also presented the key aspects of real-time monitoring and the architecture principles of this technology. The most important contribution of this paper is presented in the cloud section.

  11. MACHINE LEARNING TECHNIQUES USED IN BIG DATA

    Directory of Open Access Journals (Sweden)

    STEFANIA LOREDANA NITA

    2016-07-01

    Full Text Available The classical tools used in data analysis are not enough to benefit from all the advantages of big data. The amount of information is too large for a complete investigation, and possible connections and relations between data could be missed, because it is difficult or even impossible to verify every assumption over the information. Machine learning is a great solution for finding concealed correlations or relationships between data, because it runs at machine scale and works very well with large data sets. The more data we have, the more useful the machine learning algorithm becomes, because it "learns" from the existing data and applies the found rules to new entries. In this paper, we present some machine learning algorithms and techniques used in big data.
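    One common pattern for learning at this scale is out-of-core training: stream the data in chunks and update the model incrementally. The sketch below uses scikit-learn's partial_fit on synthetic chunks; the chunking scheme and data are illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # When the data set is too large for memory, stream it in chunks and
    # update the model incrementally with partial_fit.
    clf = SGDClassifier(loss="log_loss", random_state=0)  # scikit-learn >= 1.1
    classes = np.array([0, 1])

    rng = np.random.default_rng(0)
    for _ in range(100):                      # 100 chunks stand in for a stream
        X = rng.normal(size=(1_000, 20))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels
        clf.partial_fit(X, y, classes=classes)

    X_test = rng.normal(size=(5, 20))
    print(clf.predict(X_test))
    ```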

  12. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policy makers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data…

  13. A conceptual review of XBRL in relation to Big Data

    DEFF Research Database (Denmark)

    Krisko, Adam

    The technological developments of the last couple of decades have led to remarkable changes in the role of data within society. Data continues to grow, and advances in the field of information technology (IT) further accelerate the process. This unfolding phenomenon has resulted in the emergence of Big Data. The paper addresses the increasingly important role of Big Data within the accounting domain by discussing the potential benefits and challenges that the expansion in the volume, velocity, and variety of accounting data carries. One of the possible responses to the changes Big Data might foster within accounting is the utilisation of the eXtensible Business Reporting Language (XBRL), an open-standard, license-fee-free electronic language for communicating financial and business information. Storing data in XBRL format makes it machine-readable and standardises financial terms through the XBRL taxonomy…

  14. Data Management and Preservation Planning for Big Science

    Directory of Open Access Journals (Sweden)

    Juan Bicarregui

    2013-06-01

    Full Text Available ‘Big Science’ - that is, science which involves large collaborations with dedicated facilities, large data volumes, and multinational investments - is often seen as different when it comes to data management and preservation planning. Big Science handles its data differently from other disciplines and has data management problems that are qualitatively different. In part, these differences arise from the quantities of data involved, but possibly more importantly from the cultural, organisational and technical distinctiveness of these academic cultures. Consequently, the data management systems are typically, and rationally, bespoke, but this means that the planning for data management and preservation (DMP) must also be bespoke. These differences are such that ‘just read and implement the OAIS specification’ is reasonable Data Management and Preservation (DMP) advice, but this bald prescription can and should be usefully supported by a methodological ‘toolkit’, including overviews, case studies and costing models, to provide guidance on developing best practice in DMP policy and infrastructure for these projects, as well as considering OAIS validation, audit and cost modelling. In this paper, we build on previous work with the LIGO collaboration to consider the role of DMP planning within these big science scenarios, and discuss how to apply current best practice. We discuss the results of the MaRDI-Gross project (Managing Research Data Infrastructures – Big Science), which has been developing a toolkit to provide guidelines on the application of best practice in DMP planning within big science projects. This is targeted primarily at projects’ engineering managers, but is intended also to help funders collaborate on DMP plans which satisfy the requirements imposed on them.

  15. Hydrogeochemical and stream sediment reconnaissance basic data for Big Delta Quadrangle, Alaska

    International Nuclear Information System (INIS)

    1981-01-01

    Field and laboratory data are presented for 1380 water samples from the Big Delta Quadrangle, Alaska. The samples were collected by Los Alamos Scientific Laboratory; laboratory analysis and data reporting were performed by the Uranium Resource Evaluation Project at Oak Ridge, Tennessee

  16. Crystal orientation effects on wurtzite quantum well electromechanical fields

    DEFF Research Database (Denmark)

    Duggen, Lars; Willatzen, Morten

    2010-01-01

    A one-dimensional continuum model for calculating strain and electric field in wurtzite semiconductor heterostructures with arbitrary crystal orientation is presented and applied to GaN/AlGaN and ZnO/MgZnO heterostructure combinations. The model is self-consistent, involving feedback couplings of spontaneous polarization, strain, and electric field. Significant differences between fully coupled and semicoupled models are found for the longitudinal and shear-strain components as a function of the crystal-growth direction. In particular, we find that the semicoupled model, typically used in the literature for semiconductors, is inaccurate for ZnO/MgZnO heterostructures where shear-strain components play an important role. An interesting observation is that a growth direction apart from [1̅ 21̅ 0] exists for which the electric field in the quantum well region becomes zero. This is important for, e.g., …

  17. Space Time Quantization and the Big Bang

    OpenAIRE

    Sidharth, B. G.

    1998-01-01

    A recent cosmological model is recapitulated which deduces the correct mass, radius, and age of the universe, as well as the Hubble constant and other well-known, apparently coincidental relations. It also predicts an ever-expanding, accelerating universe, as is confirmed by the latest supernova observations. Finally, the Big Bang model is recovered as a suitable limiting case.

  18. Ten aspects of the Big Five in the Personality Inventory for DSM-5.

    Science.gov (United States)

    DeYoung, Colin G; Carey, Bridget E; Krueger, Robert F; Ross, Scott R

    2016-04-01

    Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) includes a dimensional model of personality pathology, operationalized in the Personality Inventory for DSM-5 (PID-5), with 25 facets grouped into 5 higher order factors resembling the Big Five personality dimensions. The present study tested how well these 25 facets could be integrated with the 10-factor structure of traits within the Big Five that is operationalized by the Big Five Aspect Scales (BFAS). In 2 healthy adult samples, 10-factor solutions largely confirmed our hypothesis that each of the 10 BFAS scales would be the highest-loading BFAS scale on 1 and only 1 factor. Varying numbers of PID-5 scales were additional markers of each factor, and the overall factor structure in the first sample was well replicated in the second. Our results allow Cybernetic Big Five Theory (CB5T) to be brought to bear on manifestations of personality disorder, because CB5T offers mechanistic explanations of the 10 factors measured by the BFAS. Future research, therefore, may begin to test hypotheses derived from CB5T regarding the mechanisms that are dysfunctional in specific personality disorders. (c) 2016 APA, all rights reserved.

  19. 10 Aspects of the Big Five in the Personality Inventory for DSM-5

    Science.gov (United States)

    DeYoung, Colin. G.; Carey, Bridget E.; Krueger, Robert F.; Ross, Scott R.

    2015-01-01

    DSM-5 includes a dimensional model of personality pathology, operationalized in the Personality Inventory for DSM-5 (PID-5), with 25 facets grouped into five higher-order factors resembling the Big Five personality dimensions. The present study tested how well these 25 facets could be integrated with the 10-factor structure of traits within the Big Five that is operationalized by the Big Five Aspect Scales (BFAS). In two healthy adult samples, 10-factor solutions largely confirmed our hypothesis that each of the 10 BFAS scales would be the highest loading BFAS scale on one and only one factor. Varying numbers of PID-5 scales were additional markers of each factor, and the overall factor structure in the first sample was well replicated in the second. Our results allow Cybernetic Big Five Theory (CB5T) to be brought to bear on manifestations of personality disorder, because CB5T offers mechanistic explanations of the 10 factors measured by the BFAS. Future research, therefore, may begin to test hypotheses derived from CB5T regarding the mechanisms that are dysfunctional in specific personality disorders. PMID:27032017

  20. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  1. Quantifying the vulnerability of well fields towards anthropogenic pollution: The Netherlands as an example

    NARCIS (Netherlands)

    Mendizabal, I.; Stuijfzand, P.J.; Wiersma, A.

    2011-01-01

    A new method is presented to assess the vulnerability of public supply well fields (PSWFs), other well fields, or individual wells. The Intrinsic Vulnerability Index towards Pollution (VIP) is based on the age, redox level, alkalinity (or acidity), and surface water fraction of the pumped water.

  2. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in the presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  3. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  4. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample scored higher on empathy and on the Big Five factors, with the exception of Neuroticism. They found associations between empathy and Openness, Agreeableness, Conscientiousness, and Extraversion. In our data, women likewise score significantly higher on both empathy and the Big Five…

  5. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    It has been claimed that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer, not least because conventional systems do not scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent…

  6. Interaction of Aquifer and River-Canal Network near Well Field.

    Science.gov (United States)

    Ghosh, Narayan C; Mishra, Govinda C; Sandhu, Cornelius S S; Grischek, Thomas; Singh, Vikrant V

    2015-01-01

    The article presents semi-analytical mathematical models to assess (1) enhancement of seepage from a canal and (2) induced flow from a partially penetrating river in an unconfined aquifer, consequent to groundwater withdrawal in a well field in the vicinity of the river and canal. The nonlinear exponential relation between seepage from a canal reach and the hydraulic head in the aquifer beneath that reach is used to quantify seepage from the canal reach. Hantush's (1967) basic solution for water table rise due to recharge from a rectangular spreading basin, in the absence of a pumping well, is used to generate unit pulse response function coefficients for water table rise in the aquifer. Duhamel's convolution theory and the method of superposition are applied to obtain the water table position due to pumping and recharge from different canal reaches. Hunt's (1999) basic solution for river depletion due to constant pumping from a well in the vicinity of a partially penetrating river is used to generate unit pulse response function coefficients. Applying the convolution technique and superposition, and treating recharge from canal reaches as recharge through conceptual injection wells, river depletion consequent to variable pumping and recharge is quantified. The integrated model is applied to a case study in Haridwar (India), where the well field consists of 22 pumping wells located in the vicinity of a perennial river and a canal network. The bank-filtrate portion of the pumped water is quantified. © 2014, National GroundWater Association.
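    The Duhamel convolution step can be sketched in a few lines: convolve a unit-pulse response with the net stress history. The response coefficients and flow figures below are invented placeholders; in the paper they would come from Hunt's and Hantush's solutions.

    ```python
    import numpy as np

    # Unit-pulse response coefficients delta[j]: fraction of a unit pumping
    # pulse that appears as river depletion j steps later (values made up;
    # Hunt's solution would supply them in practice).
    delta = np.array([0.05, 0.12, 0.18, 0.20, 0.18, 0.12])

    # Net stress per time step: pumping minus canal-seepage recharge treated
    # as injection through conceptual wells, as in the paper (units m3/day,
    # numbers illustrative).
    pumping  = np.array([100.0, 120.0, 90.0, 110.0])
    recharge = np.array([20.0, 25.0, 18.0, 22.0])
    net = pumping - recharge

    # Duhamel convolution: superpose the responses of all past pulses.
    depletion = np.convolve(net, delta)[: len(net)]
    print(np.round(depletion, 1))
    ```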

  7. The role of administrative data in the big data revolution in social science research.

    Science.gov (United States)

    Connelly, Roxanne; Playford, Christopher J; Gayle, Vernon; Dibben, Chris

    2016-09-01

    The term big data is currently a buzzword in social science; however, its precise meaning is ambiguous. In this paper we focus on administrative data, which is a distinctive form of big data. Exciting new opportunities for social science research will be afforded by new administrative data resources, but these are currently underappreciated by the research community. The central aim of this paper is to discuss the challenges associated with administrative data. We emphasise that it is critical for researchers to carefully consider how administrative data has been produced. We conclude that administrative datasets have the potential to contribute to the development of high-quality and impactful social science research, and should not be overlooked in the emerging field of big data. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large-scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data, adds value to both the Semantic Web and the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data stores. The technical challenges involved…
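    A federated SPARQL query of the kind discussed can be issued from Python with SPARQLWrapper; the local endpoint URL and the Observation class below are placeholders (only the DBpedia endpoint is a real public service), so this is a sketch of the pattern rather than the paper's system.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    # Federated query: the SERVICE clause forwards part of the graph
    # pattern to a second endpoint.
    endpoint = SPARQLWrapper("http://localhost:8890/sparql")  # placeholder
    endpoint.setQuery("""
        SELECT ?s ?label WHERE {
          ?s a <http://example.org/Observation> .
          SERVICE <http://dbpedia.org/sparql> {
            ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
          }
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    for row in endpoint.query().convert()["results"]["bindings"]:
        print(row["s"]["value"], row["label"]["value"])
    ```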

  9. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  10. Workshop C2 Report - Big Data Interoperability for Enterprises

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Iacob, Maria Eugenia; Zelm, Martin; Doumeingts, Guy; Mendonca, Joao Pedro

    Cost-efficient use of big data by enterprises is challenging. Data from multiple heterogeneous sources are typically combined. This data needs to be handled by various interacting components in different systems for automated transformation, filtering, processing and analysis, as well as for

  11. Combined effects of intense laser field, electric and magnetic fields on the nonlinear optical properties of the step-like quantum well

    Energy Technology Data Exchange (ETDEWEB)

    Kasapoglu, E., E-mail: ekasap@cumhuriyet.edu.tr [Department of Physics, Cumhuriyet University, 58140 Sivas (Turkey); Duque, C.A. [Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia-UdeA, Calle 70 No. 52-21, Medellín (Colombia); Mora-Ramos, M.E. [Facultad de Ciencias, Universidad Autónoma del Estado de Morelos, Ave. Universidad 1001, CP 62209, Cuernavaca, Morelos (Mexico); Restrepo, R.L. [Department of Physics, Cumhuriyet University, 58140 Sivas (Turkey); Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia-UdeA, Calle 70 No. 52-21, Medellín (Colombia); Escuela de Ingeniería de Antioquia-EIA, Medellín (Colombia); Ungan, F.; Yesilgul, U.; Sari, H. [Department of Physics, Cumhuriyet University, 58140 Sivas (Turkey); Sökmen, I. [Department of Physics, Dokuz Eylül University, 35160 Buca, İzmir (Turkey)

    2015-03-15

    In the present work, the effects of the intense laser field on total optical absorption coefficient (the linear and third-order nonlinear) and total refractive index change for transition between two lower-lying electronic levels in the step-like GaAs/Ga{sub 1−x}Al{sub x}As quantum well under external electric and magnetic fields are investigated. The calculations were performed within the compact density-matrix formalism with the use of the effective mass and parabolic band approximations. The obtained results show that both total absorption coefficient and refractive index change are sensitive to the well dimensions and the effects of external fields. By changing the intensities of the electric, magnetic and non-resonant intense laser fields together with the well dimensions, we can obtain the blue or red shift, without the need for the growth of many different samples. - Highlights: • Augmentation of laser-field results in red shift in total AC spectra. • Magnetic field induces a blue-shift in the resonant peak. • Resonant peak position shifts to red with effect of electric field. • Resonant peak of total AC shifts to the higher photon energies with increasing well width.

  12. Combined effects of intense laser field, electric and magnetic fields on the nonlinear optical properties of the step-like quantum well

    International Nuclear Information System (INIS)

    Kasapoglu, E.; Duque, C.A.; Mora-Ramos, M.E.; Restrepo, R.L.; Ungan, F.; Yesilgul, U.; Sari, H.; Sökmen, I.

    2015-01-01

    In the present work, the effects of the intense laser field on total optical absorption coefficient (the linear and third-order nonlinear) and total refractive index change for transition between two lower-lying electronic levels in the step-like GaAs/Ga 1−x Al x As quantum well under external electric and magnetic fields are investigated. The calculations were performed within the compact density-matrix formalism with the use of the effective mass and parabolic band approximations. The obtained results show that both total absorption coefficient and refractive index change are sensitive to the well dimensions and the effects of external fields. By changing the intensities of the electric, magnetic and non-resonant intense laser fields together with the well dimensions, we can obtain the blue or red shift, without the need for the growth of many different samples. - Highlights: • Augmentation of laser-field results in red shift in total AC spectra. • Magnetic field induces a blue-shift in the resonant peak. • Resonant peak position shifts to red with effect of electric field. • Resonant peak of total AC shifts to the higher photon energies with increasing well width

  13. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining industries, where they play an important role. In practical production, abrasion and breakage of big gears occur often, affecting normal production and causing unnecessary economic loss. An intelligent test method for worn big gears is put forward, aimed mainly at the constraints of high production cost, long production cycle, and labour-intensive manual repair welding. The measurement equations of the involute spur gear are transformed from polar coordinates into rectangular coordinates. The measuring principle for big gear abrasion is introduced, a detection principle diagram is given, and the method for realizing the detection route is described. An OADM12 laser sensor was selected, and detection of the gear abrasion area was realized by the detection mechanism. The measured data of unworn and worn gears are fed into a calculation program written in Visual Basic, from which the abrasion quantity of the big gear is obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.
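    The polar-to-rectangular transformation of the involute profile mentioned above follows the standard parametrization x(t) = rb(cos t + t sin t), y(t) = rb(sin t - t cos t). The sketch below generates nominal profile points for comparison with measured ones; the base-circle radius and sampling step are illustrative values, not from the paper.

    ```python
    import math

    # Involute of a base circle in rectangular coordinates:
    #   x(t) = rb*(cos t + t*sin t),  y(t) = rb*(sin t - t*cos t)
    # Nominal points like these can be compared point by point with laser
    # measurements of a worn flank to estimate the abrasion quantity.
    def involute_point(rb, t):
        x = rb * (math.cos(t) + t * math.sin(t))
        y = rb * (math.sin(t) - t * math.cos(t))
        return x, y

    rb = 120.0  # base-circle radius, mm (illustrative)
    nominal = [involute_point(rb, i * 0.05) for i in range(20)]
    print(nominal[:3])
    ```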

  14. Kasner asymptotics of mixmaster Horava-Witten and pre-big-bang cosmologies

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2001-01-01

    We discuss various superstring effective actions and, in particular, their common sector, which leads to the so-called pre-big-bang cosmology (cosmology in a weak coupling limit of heterotic superstring theory). Using the conformal relationship between these two theories, we present Kasner asymptotic solutions of Bianchi type IX geometries within these theories and make predictions about the possible emergence of chaos. Finally, we present a possible method of generating Horava-Witten cosmological solutions out of the well-known general relativistic or pre-big-bang solutions.

  15. Learning Analytics: The next frontier for computer assisted language learning in big data age

    Directory of Open Access Journals (Sweden)

    Yu Qinglan

    2015-01-01

    Full Text Available Learning analytics (LA) has been applied to various learning environments, though it is quite new in the field of computer assisted language learning (CALL). This article examines the application of learning analytics in the upcoming big data age. It starts with an introduction to learning analytics and its application in other fields, followed by a retrospective review of the historical interaction between learning and media in CALL, and an analysis of why people turn to learning analytics to increase the efficiency of foreign language education. As shown in previous research, new technology, including big data mining and analysis, will inevitably enhance the learning of foreign languages. Potential changes that learning analytics may bring to Chinese foreign language education and research are also presented in the article.

  16. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  17. Drosophila Big bang regulates the apical cytocortex and wing growth through junctional tension.

    Science.gov (United States)

    Tsoumpekos, Giorgos; Nemetschke, Linda; Knust, Elisabeth

    2018-03-05

    Growth of epithelial tissues is regulated by a plethora of components, including signaling and scaffolding proteins, but also by junctional tension, mediated by the actomyosin cytoskeleton. However, how these players are spatially organized and functionally coordinated is not well understood. Here, we identify the Drosophila melanogaster scaffolding protein Big bang as a novel regulator of growth in epithelial cells of the wing disc by ensuring proper junctional tension. Loss of big bang results in the reduction of the regulatory light chain of nonmuscle myosin, Spaghetti squash. This is associated with an increased apical cell surface, decreased junctional tension, and smaller wings. Strikingly, these phenotypic traits of big bang mutant discs can be rescued by expressing constitutively active Spaghetti squash. Big bang colocalizes with Spaghetti squash in the apical cytocortex and is found in the same protein complex. These results suggest that in epithelial cells of developing wings, the scaffolding protein Big bang controls apical cytocortex organization, which is important for regulating cell shape and tissue growth. © 2018 Tsoumpekos et al.

  18. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.

  19. Do container volume, site preparation, and field fertilization affect restoration potential of Wyoming big sagebrush?

    Science.gov (United States)

    Kayla R. Herriman; Anthony S. Davis; Kent G. Apostol; Olga. A. Kildisheva; Amy L. Ross-Davis; Kas Dumroese

    2016-01-01

    Land management practices, invasive species expansion, and changes in the fire regime greatly impact the distribution of native plants in natural areas. Wyoming big sagebrush (Artemisia tridentata ssp. wyomingensis), a keystone species in the Great Basin, has seen a 50% reduction in its distribution. For many dryland species, reestablishment efforts have...

  20. Current bistability in a weakly coupled multi-quantum well structure: a magnetic field induced 'memory effect'

    International Nuclear Information System (INIS)

    Feu, W H M; Villas-Boas, J M; Cury, L A; Guimaraes, P S S; Vieira, G S; Tanaka, R Y; Passaro, A; Pires, M P; Landi, S M; Souza, P L

    2009-01-01

    A study of magnetotunnelling in weakly coupled multi-quantum wells reveals a new phenomenon which constitutes a kind of memory effect in the sense that the electrical resistance of the sample after application of the magnetic field is different from before and contains the information that a magnetic field was applied previously. The change in the electric field domain configuration triggered by the magnetic field was compared for two samples, one strictly periodic and another with a thicker quantum well inserted into the periodic structure. For applied biases at which two electric field domains are present in the sample, as the magnetic field is increased a succession of discontinuous reductions in the electrical resistance is observed due to the magnetic field-induced rearrangement of the electric field domains, i.e. the domain boundary jumps from well to well as the magnetic field is changed. The memory effect is revealed for the aperiodic structure as the electric field domain configuration triggered by the magnetic field remains stable after the field is reduced back to zero. This effect is related to the multi-stability in the current-voltage characteristics observed in some weakly coupled multi-quantum well structures.

  1. Separating method factors and higher order traits of the Big Five: a meta-analytic multitrait-multimethod approach.

    Science.gov (United States)

    Chang, Luye; Connelly, Brian S; Geeza, Alexis A

    2012-02-01

    Though most personality researchers now recognize that ratings of the Big Five are not orthogonal, the field has been divided about whether these trait intercorrelations are substantive (i.e., driven by higher order factors) or artifactual (i.e., driven by correlated measurement error). We used a meta-analytic multitrait-multirater study to estimate trait correlations after common method variance was controlled. Our results indicated that common method variance substantially inflates trait correlations, and, once controlled, correlations among the Big Five became relatively modest. We then evaluated whether two different theories of higher order factors could account for the pattern of Big Five trait correlations. Our results did not support Rushton and colleagues' (Rushton & Irwing, 2008; Rushton et al., 2009) proposed general factor of personality, but Digman's (1997) α and β metatraits (relabeled by DeYoung, Peterson, and Higgins (2002) as Stability and Plasticity, respectively) produced viable fit. However, our models showed considerable overlap between Stability and Emotional Stability and between Plasticity and Extraversion, raising the question of whether these metatraits are redundant with their dominant Big Five traits. This pattern of findings was robust when we included only studies whose observers were intimately acquainted with targets. Our results underscore the importance of using a multirater approach to studying personality and the need to separate the causes and outcomes of higher order metatraits from those of the Big Five. We discussed the implications of these findings for the array of research fields in which personality is studied.

  2. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
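    As a caricature of selection under tight significance levels (not Autometrics itself, which also searches multiple paths and checks model diagnostics), the sketch below backward-eliminates regressors until all survivors pass alpha = 0.001, on synthetic data with two true signals.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic data: 30 candidate regressors, 2 genuinely relevant.
    rng = np.random.default_rng(1)
    n, k = 500, 30
    X = rng.normal(size=(n, k))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

    # Backward elimination at a tight significance level (alpha = 0.001):
    # repeatedly drop the least significant variable until all pass.
    keep = list(range(k))
    while keep:
        res = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        pvals = res.pvalues[1:]            # skip the constant
        worst = int(np.argmax(pvals))
        if pvals[worst] < 0.001:
            break
        keep.pop(worst)
    print("selected columns:", keep)       # ideally [0, 3]
    ```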

  3. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  4. Efficient data management tools for the heterogeneous big data warehouse

    Science.gov (United States)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    The traditional RDBMS has served well for decades with normalized data structures, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have recently been raised concerning the scalability of data-warehouse-like workloads against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation depends upon the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as the description of the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
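    The RDBMS-to-NoSQL migration path described here can be sketched as a batched copy; the connection strings, table, and field names below are placeholders, not the back-end application from the paper.

    ```python
    import psycopg2
    from pymongo import MongoClient

    # Minimal batched copy from a relational table into a MongoDB
    # collection; all names and connection strings are illustrative.
    pg = psycopg2.connect("dbname=warehouse user=etl")
    mongo = MongoClient("mongodb://localhost:27017")
    target = mongo["warehouse"]["measurements"]

    with pg.cursor() as cur:
        cur.execute("SELECT id, sensor, ts, value FROM measurements")
        batch = []
        for id_, sensor, ts, value in cur:
            batch.append({"_id": id_, "sensor": sensor, "ts": ts, "value": value})
            if len(batch) >= 1000:        # bulk inserts keep throughput up
                target.insert_many(batch)
                batch = []
        if batch:
            target.insert_many(batch)
    pg.close()
    ```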

  5. Big data mining: In-database Oracle data mining over hadoop

    Science.gov (United States)

    Kovacheva, Zlatinka; Naydenova, Ina; Kaloyanova, Kalinka; Markov, Krasimir

    2017-07-01

    Big data challenges different aspects of storing, processing and managing data, as well as analyzing and using data for business purposes. Applying Data Mining methods over Big Data is another challenge because of huge data volumes, variety of information, and the dynamic of the sources. Different applications are made in this area, but their successful usage depends on understanding many specific parameters. In this paper we present several opportunities for using Data Mining techniques provided by the analytical engine of RDBMS Oracle over data stored in Hadoop Distributed File System (HDFS). Some experimental results are given and they are discussed.

  6. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  7. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article analyses the 'Big Data' phenomenon, which has been discussed over the last 10 years, and reveals the reasons and factors behind the issue. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The analysis presented refers the 'Big Data' issue to a form of information barrier. On this basis the issue may be solved correctly, encouraging the development of scientific and computational methods.

  8. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  9. Potential of big data analytics in the French in vitro diagnostics market.

    Science.gov (United States)

    Dubois, Nicolas; Garnier, Nicolas; Meune, Christophe

    2017-12-01

    The new paradigm of big data raises many expectations, particularly in the field of health. Curiously, even though medical biology laboratories generate a great amount of data, the opportunities offered by this new field are poorly documented. For a better understanding of the clinical context of chronic disease follow-up, and for leveraging preventive and/or personalized medicine, the contribution of big data analytics seems very promising. It is within this framework that we explored the use of data from a Breton group of medical biology laboratories to analyze the possible contributions of their exploitation to the improvement of clinical practices and to anticipating the evolution of pathologies for the benefit of patients. We report here three practical applications derived from five years of routine laboratory data (February 2010-August 2015): follow-up of patients treated with AVK according to the recommendations of the French High Authority of Health (HAS), and use of the new HS troponin and NT-proBNP markers in cardiology. While the risks and difficulties of using algorithms in the health domain should not be underestimated - quality, accessibility, and protection of personal data in particular - these first results show that the use of tools and technologies from the big data repository could provide decisive support for the concept of "evidence based medicine".

  10. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Summary Objectives To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  11. Big-data-based edge biomarkers: study on dynamical drug sensitivity and resistance in individuals.

    Science.gov (United States)

    Zeng, Tao; Zhang, Wanwei; Yu, Xiangtian; Liu, Xiaoping; Li, Meiyi; Chen, Luonan

    2016-07-01

    Big-data-based edge biomarker is a new concept to characterize disease features based on biomedical big data in a dynamical and network manner, which also provides alternative strategies to indicate disease status in single samples. This article gives a comprehensive review of big-data-based edge biomarkers for complex diseases in an individual patient, which are defined as biomarkers based on network information and high-dimensional data. Specifically, we first introduce the sources and structures of biomedical big data publicly accessible for edge biomarker and disease study. We show that biomedical big data are typically 'small-sample size in high-dimension space', i.e. small samples but with high dimensions on features (e.g. omics data) for each individual, in contrast to traditional big data in many other fields characterized as 'large-sample size in low-dimension space', i.e. big samples but with low dimensions on features. Then, we demonstrate the concept, model and algorithm for edge biomarkers and further big-data-based edge biomarkers. Unlike conventional biomarkers, edge biomarkers, e.g. module biomarkers in module network rewiring-analysis, are able to predict the disease state by learning differential associations between molecules rather than differential expressions of molecules during disease progression or treatment in individual patients. In particular, in contrast to using the information of the common molecules or edges (i.e., molecule-pairs) across a population in traditional biomarkers, including network and edge biomarkers, big-data-based edge biomarkers are specific for each individual and thus can accurately evaluate the disease state by considering the individual heterogeneity. Therefore, the measurement of big data in a high-dimensional space is required not only in the learning process but also in the diagnosing or predicting process of the tested individual. Finally, we provide a case study on analyzing the temporal expression…

  12. THE 2H(ALPHA, GAMMA)6LI REACTION AT LUNA AND BIG BANG NUCLEOSYNTHESIS

    Directory of Open Access Journals (Sweden)

    Carlo Gustavino

    2013-12-01

    Full Text Available The 2H(α, γ)6Li reaction is the leading process for the production of 6Li in standard Big Bang Nucleosynthesis. Recent observations of lithium abundance in metal-poor halo stars suggest that there might be a 6Li plateau, similar to the well-known Spite plateau of 7Li. This calls for a re-investigation of the standard production channel for 6Li. As the 2H(α, γ)6Li cross section drops steeply at low energy, it had never before been studied directly at Big Bang energies. For the first time, the reaction has been studied directly at Big Bang energies at the LUNA accelerator. The preliminary data and their implications for Big Bang nucleosynthesis and the purported 6Li problem will be shown.

  13. Data mining and knowledge discovery for big data methodologies, challenge and opportunities

    CERN Document Server

    2014-01-01

    The field of data mining has made significant and far-reaching advances over the past three decades.  Because of its potential power for solving complex problems, data mining has been successfully applied to diverse areas such as business, engineering, social media, and biological science. Many of these applications search for patterns in complex structural information. In biomedicine for example, modeling complex biological systems requires linking knowledge across many levels of science, from genes to disease.  Further, the data characteristics of the problems have also grown from static to dynamic and spatiotemporal, complete to incomplete, and centralized to distributed, and grow in their scope and size (this is known as big data). The effective integration of big data for decision-making also requires privacy preservation. The contributions to this monograph summarize the advances of data mining in the respective fields. This volume consists of nine chapters that address subjects ranging from mining da...

  14. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, the use of predictive analytics, machine learning, and artificial intelligence are touted as transformational tools to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME such as when is a resident ready for independent practice.The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  15. The research of approaches of applying the results of big data analysis in higher education

    Science.gov (United States)

    Kochetkov, O. T.; Prokhorov, I. V.

    2017-01-01

    This article briefly discusses approaches to the use of Big Data in the educational process of higher educational institutions. A brief description of the nature of big data and its distribution in the education industry is given, and new ways to use Big Data as part of the educational process are offered as well. The article describes a method for analysing relevant requests by using Yandex.Wordstat (for laboratory work on data processing) and Google Trends (for an actual picture of interest and preferences in a higher education institution).

  16. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  17. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  18. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches.

  19. BIG GEO DATA MANAGEMENT: AN EXPLORATION WITH SOCIAL MEDIA AND TELECOMMUNICATIONS OPEN DATA

    Directory of Open Access Journals (Sweden)

    C. Arias Munoz

    2016-06-01

    Full Text Available The term Big Data has recently been used to define big, highly varied, complex data sets, which are created and updated at a high speed and require faster processing, namely, a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed) made public by governments, agencies, private enterprises and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets: firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN could offer different functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.

  20. Big Geo Data Management: AN Exploration with Social Media and Telecommunications Open Data

    Science.gov (United States)

    Arias Munoz, C.; Brovelli, M. A.; Corti, S.; Zamboni, G.

    2016-06-01

    The term Big Data has recently been used to define big, highly varied, complex data sets, which are created and updated at a high speed and require faster processing, namely, a reduced time to filter and analyse relevant data. These data are also increasingly becoming Open Data (data that can be freely distributed) made public by governments, agencies, private enterprises and others. There are at least two issues that can obstruct the availability and use of Open Big Datasets: firstly, the gathering and geoprocessing of these datasets are very computationally intensive; hence, it is necessary to integrate high-performance solutions, preferably internet based, to achieve the goals. Secondly, the problems of heterogeneity and inconsistency in geospatial data are well known and affect the data integration process, but are particularly problematic for Big Geo Data. Therefore, Big Geo Data integration will be one of the most challenging issues to solve. With these applications, we demonstrate that it is possible to provide processed Big Geo Data to common users, using open geospatial standards and technologies. NoSQL databases like MongoDB and frameworks like RASDAMAN could offer different functionalities that facilitate working with larger volumes and more heterogeneous geospatial data sources.
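
    Neither record includes code, but the MongoDB capability both abstracts point to can be sketched concretely. The following minimal Python example is only our illustration of the general technique; the database, collection, and field names are hypothetical. It stores a GeoJSON point and answers a proximity query through a 2dsphere index:

    ```python
    # Minimal sketch: geospatial storage and querying with MongoDB (pymongo).
    # Assumes a MongoDB server on localhost; all names below are hypothetical.
    from pymongo import MongoClient, GEOSPHERE

    client = MongoClient("mongodb://localhost:27017")
    tweets = client["big_geo_demo"]["tweets"]

    # A 2dsphere index lets MongoDB answer spherical-geometry queries on GeoJSON.
    tweets.create_index([("location", GEOSPHERE)])

    tweets.insert_one({
        "text": "sample geotagged message",
        "location": {"type": "Point", "coordinates": [9.19, 45.46]},  # lon, lat
    })

    # Find documents within ~5 km of a query point.
    nearby = tweets.find({
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [9.19, 45.46]},
                "$maxDistance": 5000,  # metres
            }
        }
    })
    for doc in nearby:
        print(doc["text"])
    ```

    RASDAMAN would play the complementary role for raster coverages, which is not shown here.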

  1. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...
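
    The abstract's analogy between TCP/IP retransmission and automatic job re-tries can be made concrete with a generic retry wrapper. This is our own sketch, not ATLAS production code, and the transient-failure model is invented purely for illustration:

    ```python
    # Sketch of job-level fault tolerance via automatic re-tries: a task is
    # split into many jobs, and transient job failures are retried, analogous
    # to re-sending dropped TCP/IP packets.
    import random

    def process_job(job_id: int) -> str:
        """Stand-in for one Grid job; fails transiently 20% of the time."""
        if random.random() < 0.2:
            raise RuntimeError(f"transient failure in job {job_id}")
        return f"output-{job_id}"

    def run_task(num_jobs: int, max_retries: int = 3) -> list:
        """Run a task composed of many jobs, re-trying each failed job."""
        results = []
        for job_id in range(num_jobs):
            for attempt in range(1, max_retries + 1):
                try:
                    results.append(process_job(job_id))
                    break
                except RuntimeError:
                    if attempt == max_retries:
                        raise  # persistent failure: give up on the whole task
        return results

    print(len(run_task(100)), "jobs completed")
    ```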

  2. Effect of the magnetic field on optical properties of GaN/AlN multiple quantum wells

    International Nuclear Information System (INIS)

    Solaimani, M.; Izadifard, Morteza; Arabshahi, H.; Mohammad Reza, Sarkardei

    2013-01-01

    In this paper, the effect of the magnetic field and the number of wells on the optical properties and energy levels of GaN/AlN MQWs has been investigated. Our results showed that as the magnetic field increases, the values of the absorption coefficient also increase, while a blue shift in their peak positions is observed. The blue shift for MQWs with an odd number of wells was larger than for systems with an even number of wells, and the biggest blue shifts were found for MQWs with three and four wells. As the magnetic field changed, the changes in the refractive index shifted towards higher energies. Finally, the effect of the magnetic field on the oscillator strength showed that as the magnetic field increases the oscillator strength decreases; it is also proportional to the number of wells. - Highlights: ► An increase of the absorption coefficient with increasing magnetic field shows a blue shift. ► As the magnetic field increased the oscillator strength decreased. ► Total effective intersubband oscillator strength was proportional to the number of wells. ► Minibands form after 10 wells, thus our results are valid for systems with well widths up to 3 nm.

  3. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    Science.gov (United States)

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.

  4. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  5. Librarians' Information Literacy at the Library of the Faculty of Engineering UGM Using a Development of the BIG6 Model

    Directory of Open Access Journals (Sweden)

    Yudistira Yudistira

    2017-06-01

    Full Text Available This research aims to assess librarians' information literacy in the Library of the Faculty of Engineering UGM based on the BIG6 model, and to examine each stage of librarians' information literacy according to that model. The method used in this research is descriptive quantitative. This is a population study, in which the entire population serves as the research sample; the population consists of all 4 librarians in the Library of the Faculty of Engineering UGM. Data were analysed using the mean and grand mean formulas (written out below). Based on the processed data, librarians' information literacy in the Library of the Faculty of Engineering UGM falls into the good category, as shown by a grand mean value of 3.20. The problem-definition stage is very good, with a value of 3.28; the information-seeking-strategy stage is very good, with 3.27; the location-and-access stage is very good, with 3.38; the use-of-information stage is fairly good, with 2.88; the synthesis stage is fairly good, with 3.21; and the evaluation stage is good, with 3.20. This research is expected to provide input to the library to maintain and improve skills in the field of information literacy.
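
    The two statistics mentioned are standard; written out (in our own notation, since the paper's symbols are not reproduced here), they are:

    ```latex
    \bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}
    \quad \text{(mean score of stage $j$ over the $n = 4$ librarians)},
    \qquad
    \text{grand mean} = \frac{1}{m}\sum_{j=1}^{m} \bar{x}_j .
    ```

    The reported figures are internally consistent: (3.28 + 3.27 + 3.38 + 2.88 + 3.21 + 3.20)/6 ≈ 3.20, the stated grand mean.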

  6. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    Science.gov (United States)

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
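
    As a toy illustration of the kind of predictive modeling described (not the authors' method, and with entirely synthetic data and invented feature names), a classifier can be fit to claims-style features to rank patients by nonadherence risk:

    ```python
    # Toy sketch (synthetic data): predicting medication nonadherence from
    # claims-style features with a logistic regression model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 15, n),   # number of active prescriptions
        rng.integers(18, 90, n),  # age
        rng.random(n),            # prior refill-gap ratio
    ])
    # Synthetic ground truth: risk grows with drug count and refill gaps.
    p = 1 / (1 + np.exp(-(0.2 * X[:, 0] + 3 * X[:, 2] - 2.5)))
    y = rng.random(n) < p

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```

    A real deployment would, as the article stresses, draw these features from EHR and claims data and validate the model prospectively before tailoring interventions.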

  7. Big Data for Infectious Disease Surveillance and Modeling.

    Science.gov (United States)

    Bansal, Shweta; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro; Viboud, Cécile

    2016-12-01

    We devote a special issue of the Journal of Infectious Diseases to review the recent advances of big data in strengthening disease surveillance, monitoring medical adverse events, informing transmission models, and tracking patient sentiments and mobility. We consider a broad definition of big data for public health, one encompassing patient information gathered from high-volume electronic health records and participatory surveillance systems, as well as mining of digital traces such as social media, Internet searches, and cell-phone logs. We introduce nine independent contributions to this special issue and highlight several cross-cutting areas that require further research, including representativeness, biases, volatility, and validation, and the need for robust statistical and hypotheses-driven analyses. Overall, we are optimistic that the big-data revolution will vastly improve the granularity and timeliness of available epidemiological information, with hybrid systems augmenting rather than supplanting traditional surveillance systems, and better prospects for accurate infectious diseases models and forecasts. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  8. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe that is filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that the single singular events predicted by the corresponding quintessential (phantom) models appear twice, in a manner which can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future are connected in a non-paradoxical manner in the universes described by means of the new symmetric solutions.

  9. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  10. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  11. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of 'Big Data', the collection, storage, transmission and analysis of big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
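
    The Sca-PBDA protocol itself is not reproduced in this record; the sketch below only illustrates the general principle behind privacy-preserving additive aggregation, using pairwise-cancelling masks so that the sink recovers the exact aggregate without seeing any raw reading. All names and parameters are ours:

    ```python
    # Illustrative sketch of privacy-preserving additive aggregation
    # (generic technique, not the paper's Sca-PBDA protocol).
    import random

    def pairwise_masks(node_ids, seed=42):
        """For each pair (a, b) with a < b, node a adds +m and node b adds -m,
        so all masks cancel in the network-wide sum."""
        rng = random.Random(seed)
        masks = {i: 0.0 for i in node_ids}
        for a in node_ids:
            for b in node_ids:
                if a < b:
                    m = rng.uniform(-100, 100)
                    masks[a] += m
                    masks[b] -= m
        return masks

    readings = {1: 20.5, 2: 21.3, 3: 19.8, 4: 22.1}  # private sensor values
    masks = pairwise_masks(readings.keys())
    reports = {i: readings[i] + masks[i] for i in readings}  # what nodes send

    # Individual reports hide the raw values, but the aggregate is preserved
    # (up to floating-point error):
    print(sum(reports.values()))
    print(sum(readings.values()))
    ```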

  12. EMR implementation: big bang or a phased approach?

    Science.gov (United States)

    Owens, Kathleen

    2008-01-01

    There are two major strategies to implementing an EMR: the big-bang approach and the phased, or incremental, approach. Each strategy has pros and cons that must be considered. This article discusses these approaches and the risks and benefits of each as well as some training strategies that can be used with either approach.

  13. Ethical aspects of big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only led to challenging technical questions; it also comes with all kinds of new ethical and moral issues. To handle big data responsibly, these issues must be considered as well, because poor use of data can have adverse consequences for...

  14. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable or combination of variables suitable for prediction of disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact of the influx of Big Data and computerized medicine on clinical practices and the doctor-patient relation; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  15. Modeling of virtual particles of the Big Bang

    Science.gov (United States)

    Corral-Bustamante, L. R.; Rodriguez-Corral, A. R.; Amador-Parra, T.; Martinez-Loera, E.; Irigoyen-Chavez, G.

    2012-01-01

    In this work, a four-dimensional mathematical model is proposed to predict the behavior of the transport phenomena of mass (energy) in the space-time continuum through a metric tensor at the Planck scale. The Ricci tensor was determined with the aim of measuring the turbulent flow of a mass with a large gravitational field similar to that which is believed to have existed in the Big Bang. Computing the curvature of space-time through tensor analysis, we predict a vacuum solution of the Einstein field equations through numerical integration with approximate solutions. A quantum vacuum is filled with virtual particles with the enormous surface gravity of black holes and wormholes, as predicted by other authors. By generating the geodesic equations, we obtain the relativistic equation, which carries the information pertaining to the behavior of the entropy of matter. The results of the measurements of the evolution of the mass during its collapse and evaporation allow us to argue for the existence of virtual particles covering all the values (and beyond) of the experimental searches by other authors for gauge and Higgs bosons. We conclude that matter behaves as virtual particles, which appear and disappear within the Planck time at speeds greater than that of light, representing those that probably existed during the Big Bang.

  16. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  17. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In this research, some of the most promising Big Data usage domains are connected with distinguished player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the major types of business ecosystem players disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players were explained through new Big Data opportunities and threats and by players' responsive strategies. System dynamics was used to visualize relationships in the provided model.

  18. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  19. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  20. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  1. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and has tried to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper has also made an original contribution by proposing 'big data' terminology in Tamil. The paper further suggests a few areas to explore using big data Tamil on the lines of the Tamil Nadu Government 'vision 2023'. Whilst big data has something to offer everyone, it ...

  2. [Big Data and Public Health - Results of the Working Group 1 of the Forum Future Public Health, Berlin 2016].

    Science.gov (United States)

    Moebus, Susanne; Kuhn, Joseph; Hoffmann, Wolfgang

    2017-11-01

    Big Data is a diffuse term, which can be described as an approach to linking gigantic and often unstructured data sets. Big Data is used in many corporate areas. For Public Health (PH), however, Big Data is not a well-developed topic. In this article, Big Data is explained according to the intention of use, information efficiency, prediction and clustering. Using the example of application in science, patient care, equal opportunities and smart cities, typical challenges and open questions of Big Data for PH are outlined. In addition to the inevitable use of Big Data, networking is necessary, especially with knowledge-carriers and decision-makers from politics and health care practice. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  5. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future enquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors and an attempt is made to tackle the problem that participation in a public inquiry is far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  6. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over Big data with R packages. It would be helpful if readers have basic knowledge of R.

  7. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group with the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the TF including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  8. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.
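
    The "classical biological sequence alignment problem" used in the paper's case study can be stated concretely. Below is a minimal single-threaded Needleman-Wunsch global alignment scorer, the standard textbook dynamic program, not the paper's optimized HPC implementations; the scoring parameters are illustrative defaults:

    ```python
    # Minimal Needleman-Wunsch global alignment score (standard DP kernel;
    # HPC platforms accelerate parallelized variants of this computation).
    def nw_score(a, b, match=1, mismatch=-1, gap=-2):
        # prev holds row i-1 of the DP matrix; row 0 is pure gap penalties.
        prev = [j * gap for j in range(len(b) + 1)]
        for i in range(1, len(a) + 1):
            curr = [i * gap]
            for j in range(1, len(b) + 1):
                diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
            prev = curr
        return prev[-1]

    print(nw_score("GATTACA", "GCATGCU"))
    ```

    The quadratic time and memory cost of this kernel over billions of reads is exactly why the surveyed platforms (GPUs, clusters, and so on) matter for big biological data.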

  9. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities for diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  10. Trapping of dilute ion components in wells and double wells in higher equatorial magnetic regions: A kinetic theory including collisions, varying background and additional fields

    Energy Technology Data Exchange (ETDEWEB)

    Oeien, Alf H.

    2001-08-01

    The component of the ambipolar field along the magnetic field B, though weak, may, acting together with the gravitational field, give rise to along-B "ambipolar wells" where light ions (test particles) in the ionosphere in equatorial regions are trapped. We also take into account magnetic field wells, especially in cases when the along-B velocity of test particles is much less than the transverse-B velocities. For heavy ions, or for light ions high up, when the ambipolar trap ceases to function, the along-B ambipolar- and gravitational field effects may combine with the magnetic field trap to form a double well for the along-B movement of test particles. The magnetic field trap and its contribution to the double well may be nearly stationary for particles obeying the same velocity condition as above, even when collisional effects between the test particles and the background plasma are incorporated. Ions trapped in wells like this may "feel" a varying background, for instance because of Earth rotation, that may be incorporated as time-variation of parameters in the along-B motion. An along-B kinetic equation for groups of test particles is solved both for the case of simple wells and for double wells, including time-varying collisional coefficients and additional fields, and in some cases analytic solutions are obtained. Peculiar along-B distribution functions may arise due to the time-dependency of coefficients and to various combinations of collision- and field parameter values. In particular, "breathing" distributions that alternate between wide and narrow forms in phase-space may arise, and also distributions where strange attractors may play some role.

  11. Using Horizontal Wells for Chemical EOR: Field Cases

    Directory of Open Access Journals (Sweden)

    E. Delamaide

    2017-09-01

    Full Text Available Primary production of heavy oil in general only achieves a recovery of less than 10% OOIP. Waterflooding has been applied for a number of years in heavy oil pools and can yield much higher recovery, but the efficiency of the process diminishes when viscosity is above a few hundred cp, with high water-cuts and the need to recycle significant volumes of water; in addition, significant quantities of oil are still left behind. To increase recovery beyond that, Enhanced Oil Recovery methods are needed. Thermal methods such as steam injection or Steam-Assisted Gravity Drainage (SAGD) are not always applicable, in particular when the pay is thin, and in that case chemical EOR can be an alternative. The two main chemical EOR processes are polymer and Alkali-Surfactant-Polymer (ASP) flooding. The earliest records of field application of polymer injection in heavy oil fields date from the 1970s; however, the process had seen very few applications until recently. ASP in heavy oil has seen even fewer applications. A major specificity of chemical EOR in heavy oil is that the highly viscous oil bank is difficult to displace and that injectivity with vertical wells can be limited, particularly in thin reservoirs, which are the prime target for chemical EOR. This situation has changed with the development of horizontal drilling, and as a result several chemical floods in heavy oil have been implemented in the past 10 years using horizontal wells. The goal of this paper is to present some of the best documented field cases. The most successful and largest of these is the Pelican Lake polymer flood in Canada, operated by CNRL and Cenovus, which is currently producing over 60,000 bbl/d. The Patos Marinza polymer flood by Bankers Petroleum in Albania and the Mooney project (polymer, ASP) by BlackPearl (again in Canada) are also worthy of discussion.

  12. Astronomical Surveys and Big Data

    Directory of Open Access Journals (Sweden)

    Mickaelian Areg M.

    2016-03-01

    Full Text Available Recent all-sky and large-area astronomical surveys and their catalogued data over the whole range of the electromagnetic spectrum, from γ-rays to radio waves, are reviewed, including such as Fermi-GLAST and INTEGRAL in γ-rays, ROSAT, XMM and Chandra in X-rays, GALEX in UV, SDSS and several POSS I and POSS II-based catalogues (APM, MAPS, USNO, GSC) in the optical range, 2MASS in NIR, WISE and AKARI IRC in MIR, IRAS and AKARI FIS in FIR, NVSS and FIRST in the radio range, and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS), and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). An overall understanding of the coverage along the whole wavelength range and comparisons between various surveys are given: galaxy redshift surveys, QSO/AGN, radio, Galactic structure, and Dark Energy surveys. Astronomy has entered the Big Data era, with Astrophysical Virtual Observatories and Computational Astrophysics playing an important role in using and analyzing big data for new discoveries.

  13. Room-temperature near-field reflection spectroscopy of single quantum wells

    DEFF Research Database (Denmark)

    Langbein, Wolfgang Werner; Hvam, Jørn Marcher; Madsen, Steen

    1997-01-01

    ...This technique efficiently suppresses the otherwise dominating far-field background and reduces topographic artifacts. We demonstrate its performance on a thin, strained near-surface CdS/ZnS single quantum well at room temperature. The optical structure of these topographically flat samples is due to Cd...

  14. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn; Liu, Yan, E-mail: ztduan@chd.edu.cn; Dai, Jiting, E-mail: ztduan@chd.edu.cn; Kang, Jun, E-mail: ztduan@chd.edu.cn [Chang' an University School of Information Engineering, Xi' an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi' an (China)

    2014-10-06

    The big data environment creates data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  15. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates data conditions for improving the quality of traffic information services. The target of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be offered to traffic information users.

  16. The agriculture of the future will be big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    Agriculture's external conditions and competitive environment are changing, and this will necessitate a development towards "big business", in which farms become even larger, more industrialized and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  17. Electromagnetic holographic sensitivity field of two-phase flow in horizontal wells

    Science.gov (United States)

    Zhang, Kuo; Wu, Xi-Ling; Yan, Jing-Fu; Cai, Jia-Tie

    2017-03-01

    Electromagnetic holographic data are characterized by two modes, suggesting that image reconstruction requires a dual-mode sensitivity field as well. We analyze the electromagnetic holographic field based on tomography theory and the inverse Radon transform to derive the expression of the electromagnetic holographic sensitivity field (EMHSF). Then, we apply the EMHSF calculated by finite-element methods to flow simulations and holographic imaging. The results suggest that the EMHSF, based on the partial derivative of the complex electric potential φ with respect to radius, is closely linked to the inverse Radon transform and encompasses the sensitivities of both the amplitude and phase data. The flow images obtained by inversion using the EMHSF agree better with the actual flow patterns. The EMHSF overcomes the limitations of traditional single-mode sensitivity fields.
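
    The inverse Radon transform at the core of the authors' derivation is the standard tomography operation; a generic numerical round-trip (entirely unrelated to their electromagnetic code, with an invented phantom) can be demonstrated as follows:

    ```python
    # Generic tomography round-trip with the (inverse) Radon transform using
    # scikit-image; illustrates the reconstruction principle only.
    import numpy as np
    from skimage.transform import radon, iradon

    image = np.zeros((64, 64))
    image[24:40, 28:36] = 1.0                # a simple rectangular phantom

    theta = np.linspace(0.0, 180.0, 60, endpoint=False)
    sinogram = radon(image, theta=theta)      # forward projection (measurements)
    recon = iradon(sinogram, theta=theta)     # filtered back-projection

    print("mean reconstruction error:", np.abs(recon - image).mean())
    ```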

  18. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  19. Limits to the primordial helium abundance in the baryon-inhomogeneous big bang

    Science.gov (United States)

    Mathews, G. J.; Schramm, D. N.; Meyer, B. S.

    1993-01-01

    The parameter space for baryon-inhomogeneous big bang models is explored with the goal of determining the minimum helium abundance obtainable in such models while still satisfying the other light-element constraints. We find that the constraint of (D + He-3)/H < 10^-4 restricts the primordial helium mass fraction from baryon-inhomogeneous big bang models to be greater than 0.231, even for a scenario which optimizes the effects of the inhomogeneities and destroys the excess lithium production. Thus, this modification to the standard big bang, as well as the standard homogeneous big bang model itself, would be falsifiable by observation if the primordial He-4 abundance were observed to be less than 0.231. Furthermore, a present upper limit to the observed helium mass fraction of Y(obs)(p) < 0.24 implies that the maximum baryon-to-photon ratio allowable in the inhomogeneous models corresponds to eta < 2.3 x 10^-9 (omega(b) h^2 < 0.088), even if all conditions are optimized.

  20. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important and most difficult aspect to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
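
    The Lambda architecture the abstract mentions can be reduced to a schematic: a batch layer periodically recomputes an exact view over all historical data, a speed layer maintains an incremental view of recent events, and queries merge the two. The following is only our minimal sketch of that idea, not code from the paper:

    ```python
    # Minimal Lambda-architecture sketch: a batch view plus a speed-layer
    # view, merged at query time.
    from collections import Counter

    events = ["click", "view", "click", "view", "view"]   # historical data
    batch_view = Counter(events)                          # recomputed in bulk

    speed_view = Counter()                                # incremental updates
    for event in ["click", "purchase"]:                   # new streaming events
        speed_view[event] += 1

    def query(key):
        """Serve a query by merging the batch and real-time views."""
        return batch_view[key] + speed_view[key]

    print(query("click"))     # 3 = 2 (batch) + 1 (speed)
    print(query("purchase"))  # 1
    ```

    A Kappa architecture would instead drop the batch layer and treat everything as a replayable stream, which simplifies operations at the cost of reprocessing through the log.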