WorldWideScience

Sample records for big losses lead

  1. Big losses lead to irrational decision-making in gambling situations: relationship between deliberation and impulsivity.

    Directory of Open Access Journals (Sweden)

    Yuji Takano

    Full Text Available In gambling situations, we found a paradoxical reinforcing effect of high-risk decision-making after repeated big monetary losses. The computerized version of the Iowa Gambling Task (Bechara et al., 2000), which contained six big loss cards in deck B', was conducted on normal healthy college students. The results indicated that the combined number of selections from deck A' and deck B' decreased across trials; however, there was no decrease in selections from deck B' alone. Detailed analysis of the card selections revealed that some people persisted in selecting from the "risky" deck B' as the number of big losses increased. This tendency was prominent in self-rated deliberative people. However, they were implicitly impulsive, as revealed by the matching familiar figure test. These results suggest that the gap between explicit deliberation and implicit impulsivity drew them into pathological gambling.

  2. Poker Player Behavior After Big Wins and Big Losses

    OpenAIRE

    Gary Smith; Michael Levere; Robert Kurtzman

    2009-01-01

    We find that experienced poker players typically change their style of play after winning or losing a big pot--most notably, playing less cautiously after a big loss, evidently hoping for lucky cards that will erase their loss. This finding is consistent with Kahneman and Tversky's (Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 263-292) break-even hypothesis and suggests that when investors incur a large loss, it might be time to take ...

  3. A Brief Review on Leading Big Data Models

    Directory of Open Access Journals (Sweden)

    Sugam Sharma

    2014-11-01

    Full Text Available Today, science is passing through an era of transformation, where the inundation of data, dubbed the data deluge, is influencing the decision-making process. Science is now driven by data and is being termed data science. In this internet age, the volume of data has grown to petabytes, and this large, complex, structured or unstructured, and heterogeneous data in the form of "Big Data" has gained significant attention. The rapid pace of data growth through various disparate sources, especially social media such as Facebook, has seriously challenged the data analytic capabilities of traditional relational databases. The velocity of the expansion of the amount of data gives rise to a complete paradigm shift in how new-age data is processed. Confidence in the data engineering of the existing data processing systems is gradually fading, whereas the capabilities of the new techniques for capturing, storing, visualizing, and analyzing data are evolving. In this review paper, we discuss some of the modern Big Data models that are leading contributors in the NoSQL era and claim to address Big Data challenges in reliable and efficient ways. Also, we take the potential of Big Data into consideration and try to reshape the original operation-oriented definition of "Big Science" (Furner, 2003) into a new data-driven definition and rephrase it as "The science that deals with Big Data is Big Science."

  4. Simulated big sagebrush regeneration supports predicted changes at the trailing and leading edges of distribution shifts

    Science.gov (United States)

    Schlaepfer, Daniel R.; Taylor, Kyle A.; Pennington, Victoria E.; Nelson, Kellen N.; Martin, Trace E.; Rottler, Caitlin M.; Lauenroth, William K.; Bradford, John B.

    2015-01-01

    Many semi-arid plant communities in western North America are dominated by big sagebrush. These ecosystems are being reduced in extent and quality due to economic development, invasive species, and climate change. These pervasive modifications have generated concern about the long-term viability of sagebrush habitat and sagebrush-obligate wildlife species (notably greater sage-grouse), highlighting the need for better understanding of the future big sagebrush distribution, particularly at the species' range margins. These leading and trailing edges of potential climate-driven sagebrush distribution shifts are likely to be areas most sensitive to climate change. We used a process-based regeneration model for big sagebrush, which simulates potential germination and seedling survival in response to climatic and edaphic conditions and tested expectations about current and future regeneration responses at trailing and leading edges that were previously identified using traditional species distribution models. Our results confirmed expectations of increased probability of regeneration at the leading edge and decreased probability of regeneration at the trailing edge below current levels. Our simulations indicated that soil water dynamics at the leading edge became more similar to the typical seasonal ecohydrological conditions observed within the current range of big sagebrush ecosystems. At the trailing edge, an increased winter and spring dryness represented a departure from conditions typically supportive of big sagebrush. Our results highlighted that minimum and maximum daily temperatures as well as soil water recharge and summer dry periods are important constraints for big sagebrush regeneration. Overall, our results confirmed previous predictions, i.e., we see consistent changes in areas identified as trailing and leading edges; however, we also identified potential local refugia within the trailing edge, mostly at sites at higher elevation. Decreasing

  5. Blood lead levels and chronic blood loss

    Energy Technology Data Exchange (ETDEWEB)

    Manci, E.A.; Cabaniss, M.L.; Boerth, R.C.; Blackburn, W.R.

    1986-03-01

    Over 90% of lead in blood is bound to the erythrocytes. This high affinity of lead for red cells may mean that chronic blood loss is a significant means for excretion of lead. This study sought correlations between blood lead levels and clinical conditions involving chronic blood loss. During May, June and July, 146 patients with normal hematocrits and red cell indices were identified from the hospital and clinic populations. For each patient, age, race, sex and medical history were noted, and a whole blood sample was analyzed by flameless atomic absorption spectrophotometry. Age- and race-matched pairs showed a significant correlation of chronic blood loss with lead levels. Patients with the longest history of blood loss (menstruating women) had the lowest level (mean 6.13 µg/dl, range 3.6-10.3 µg/dl). Post-menopausal women had levels (7.29 µg/dl, 1.2-14 µg/dl) comparable to men with peptic ulcer disease, or colon carcinoma (7.31 µg/dl, 5.3-8.6 µg/dl). The highest levels were among men who had no history of bleeding problems (12.39 µg/dl, 2.08-39.35 µg/dl). Chronic blood loss may be a major factor responsible for sexual differences in blood lead levels. Since tissue deposition of environmental pollutants is implicated in diseases, menstruation may represent a survival advantage for women.

  6. Snapping turtles (Chelydra serpentina) as biomonitors of lead contamination of the Big River in Missouri's Old Lead Belt

    Energy Technology Data Exchange (ETDEWEB)

    Overmann, S.R.; Krajicek, J.J. [Southeast Missouri State Univ., Cape Girardeau, MO (United States). Dept. of Biology

    1995-04-01

    The usefulness of common snapping turtles (Chelydra serpentina) as biomonitors of lead (Pb) contamination of aquatic ecosystems was assessed. Thirty-seven snapping turtles were collected from three sites on the Big River, an Ozarkian stream contaminated with Pb mine tailings. Morphometric measurements, tissue Pb concentrations (muscle, blood, bone, carapace, brain, and liver), δ-aminolevulinic acid dehydratase (δ-ALAD) activity, hematocrit, hemoglobin, plasma glucose, osmolality, and chloride ion content were measured. The data showed no effects of Pb contamination on capture success or morphological measurements. Tissue Pb concentrations were related to capture location. Hematocrit, plasma osmolality, plasma glucose, and plasma chloride ion content were not significantly different with respect to capture location. The δ-ALAD activity levels were decreased in turtles taken from contaminated sites. Lead levels in the Big River do not appear to be adversely affecting the snapping turtles of the river. Chelydra serpentina is a useful species for biomonitoring of Pb-contaminated aquatic environments.

  7. An analysis of cross-sectional differences in big and non-big public accounting firms' audit programs

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans); Drieenhuizen, F.; Stein, M.T.; Simunic, D.A.

    2006-01-01

    A significant body of prior research has shown that audits by the Big 5 (now Big 4) public accounting firms are quality differentiated relative to non-Big 5 audits. This result can be derived analytically by assuming that Big 5 and non-Big 5 firms face different loss functions for "audit failures"

  8. Losses as ecological guides: minor losses lead to maximization and not to avoidance.

    Science.gov (United States)

    Yechiam, Eldad; Retzer, Matan; Telpaz, Ariel; Hochman, Guy

    2015-06-01

    Losses are commonly thought to result in a neuropsychological avoidance response. We suggest that losses also provide ecological guidance by increasing focus on the task at hand, and that this effect may override the avoidance response. This prediction was tested in a series of studies. In Study 1a we found that minor losses did not lead to an avoidance response. Instead, they guided participants to make advantageous choices (in terms of expected value) and to avoid disadvantageous choices. Moreover, losses were associated with less switching between options after the first block of exploration. In Study 1b we found that this effect was not simply a by-product of the increase in visual contrast with losses. In Study 1c we found that the effect of losses did not emerge when alternatives did not differ in their expected value but only in their risk level. In Study 2 we investigated the autonomic arousal dynamics associated with this behavioral pattern via pupillometric responses. The results showed increased pupil diameter following losses compared to gains. However, this increase was not associated with a tendency to avoid losses, but rather with a tendency to select more advantageously. These findings suggest that attention and reasoning processes induced by losses can outweigh the influence of affective processes leading to avoidance. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Cumulative exergy losses associated with the production of lead metal

    Energy Technology Data Exchange (ETDEWEB)

    Szargut, J [Technical Univ. of Silesia, Gliwice (PL). Inst. of Thermal-Engineering; Morris, D R [New Brunswick Univ., Fredericton, NB (Canada). Dept. of Chemical Engineering

    1990-08-01

    Cumulative exergy losses result from the irreversibility of the links of a technological network leading from raw materials and fuels extracted from nature to the product under consideration. The sum of these losses can be apportioned into partial exergy losses (associated with particular links of the technological network) or into constituent exergy losses (associated with constituent subprocesses of the network). The methods of calculation of the partial and constituent exergy losses are presented, taking into account the useful byproducts substituting the major products of other processes. Analyses of partial and constituent exergy losses are made for the technological network of lead metal production. (author).
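
    As a rough illustration of the bookkeeping the abstract describes, the sketch below apportions a cumulative exergy loss across the links of a small technological network, with a credit for a useful byproduct that substitutes the major product of another process. All link names and numbers are invented placeholders, not values from the paper.

      # Hypothetical sketch: apportioning cumulative exergy losses across
      # the links of a simple technological network, crediting useful
      # byproducts. All figures are illustrative only.
      links = {
          "mining":        {"exergy_in": 120.0, "exergy_out": 95.0, "byproduct_credit": 0.0},
          "concentration": {"exergy_in":  95.0, "exergy_out": 70.0, "byproduct_credit": 5.0},
          "smelting":      {"exergy_in":  70.0, "exergy_out": 40.0, "byproduct_credit": 8.0},
          "refining":      {"exergy_in":  40.0, "exergy_out": 30.0, "byproduct_credit": 0.0},
      }

      # partial loss of a link = exergy destroyed there, less byproduct credit
      partial_losses = {
          name: d["exergy_in"] - d["exergy_out"] - d["byproduct_credit"]
          for name, d in links.items()
      }
      cumulative_loss = sum(partial_losses.values())

      for name, loss in partial_losses.items():
          print(f"{name:14s} partial exergy loss: {loss:6.1f} MJ "
                f"({100 * loss / cumulative_loss:.1f}% of cumulative)")
      print(f"cumulative exergy loss: {cumulative_loss:.1f} MJ")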

  10. Analysis of total productive maintenance (TPM) implementation using overall equipment effectiveness (OEE) and six big losses: A case study

    Science.gov (United States)

    Martomo, Zenithia Intan; Laksono, Pringgo Widyo

    2018-02-01

    In improving the productivity of a machine, maintenance decisions and policy must be appropriate. In the Spinning II unit at PT Apac Inti Corpora, there are 124 ring frame machines that frequently break down and cause high downtime, so that production targets are not achieved; this research was therefore conducted on the ring frame machines. This study aims to measure the value of equipment effectiveness, find the root cause of the problem and provide suggestions for improvement. The research begins by measuring the overall equipment effectiveness (OEE) value, then identifying the six big losses that occur. The results show that the average OEE value for the ring frame machines is 79.96%, an effectiveness value that is quite low given that the ideal OEE standard for a world-class company is 85%. The biggest factor behind the low OEE value is the performance rate, with the largest of the six big losses being reduced-speed losses at 17.303% of all time loss. Proposed improvement actions are the application of autonomous maintenance, training for operators and maintenance technicians, and supervision of operators in the workplace.
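
    A minimal numerical sketch of the standard OEE decomposition used in studies like this one (OEE = availability × performance × quality, with reduced-speed losses entering through the performance rate). The shift data below are invented for illustration; only the 85% world-class benchmark comes from the abstract.

      # Minimal OEE sketch: OEE = availability x performance x quality.
      # All shift numbers are assumptions, not the study's data.
      planned_time_min = 480.0   # one shift
      downtime_min     = 45.0    # breakdowns + setup (availability losses)
      ideal_cycle_min  = 0.50    # minutes per unit at rated speed
      total_units      = 800.0
      defective_units  = 12.0

      run_time     = planned_time_min - downtime_min
      availability = run_time / planned_time_min
      # reduced-speed losses show up in the performance rate
      performance  = (ideal_cycle_min * total_units) / run_time
      quality      = (total_units - defective_units) / total_units

      oee = availability * performance * quality
      print(f"availability={availability:.3f} performance={performance:.3f} "
            f"quality={quality:.3f} OEE={oee:.1%} (world-class benchmark: 85%)")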

  11. AC losses for the various voltage-leads in a semi-triple layer BSCCO conductor

    International Nuclear Information System (INIS)

    Li, Z.; Ryu, K.; Hwang, S.D.; Cha, G.; Song, H.J.

    2011-01-01

    In order to investigate the AC loss of the multilayer conductor in a high temperature superconductor cable, a voltage-lead is generally attached to the outermost layer of the conductor. But the conductor's AC loss has not been completely clarified, owing to the various contact positions and arrangements of the voltage-lead. In this paper, we prepared a semi-triple layer conductor consisting of an inner layer and an outer layer with a double layer structure. To measure the AC loss of the conductor, two voltage-leads (inner-lead, outer-lead) were soldered to the wires in each layer and arranged along their surfaces, and another voltage-lead (total-lead) was soldered to the inner layer and arranged on the surface of the outer layer. The results show that the AC losses for each layer measured from the inner-lead and the outer-lead, respectively, are identical to the sum of the wire losses. The AC losses in the semi-triple layer conductor measured from the total-lead and the outer-lead are identical for a uniform layer current density, and similar to the sum of the wire losses in both layers. However, the losses measured for a non-uniform layer current density from the three voltage-leads are unequal to each other, and the loss from the total-lead significantly differs from the sum of the wire losses.

  12. Big Data, Big Problems: A Healthcare Perspective.

    Science.gov (United States)

    Househ, Mowafa S; Aldosari, Bakheet; Alanazi, Abdullah; Kushniruk, Andre W; Borycki, Elizabeth M

    2017-01-01

    Much has been written on the benefits of big data for healthcare, such as improving patient outcomes, public health surveillance, and healthcare policy decisions. Over the past five years, Big Data, and the data sciences field in general, has been hyped as the "Holy Grail" for the healthcare industry, promising a more efficient healthcare system with improved healthcare outcomes. More recently, however, healthcare researchers have been exposing the potentially harmful effects Big Data can have on patient care, associating it with increased medical costs, patient mortality, and misguided decision making by clinicians and healthcare policy makers. In this paper, we review the current Big Data trends with a specific focus on the inadvertent negative impacts that Big Data could have on healthcare in general and on patient and clinical care in particular. Our study results show that although Big Data is built up to be the "Holy Grail" for healthcare, small data techniques using traditional statistical methods are, in many cases, more accurate and can lead to better healthcare outcomes than Big Data methods. In sum, Big Data for healthcare may cause more problems for the healthcare industry than solutions, and in short, when it comes to the use of data in healthcare, "size isn't everything."

  13. Global fluctuation spectra in big-crunch-big-bang string vacua

    International Nuclear Information System (INIS)

    Craps, Ben; Ovrut, Burt A.

    2004-01-01

    We study big-crunch-big-bang cosmologies that correspond to exact world-sheet superconformal field theories of type II strings. The string theory spacetime contains a big crunch and a big bang cosmology, as well as additional 'whisker' asymptotic and intermediate regions. Within the context of free string theory, we compute, unambiguously, the scalar fluctuation spectrum in all regions of spacetime. Generically, the big crunch fluctuation spectrum is altered while passing through the bounce singularity. The change in the spectrum is characterized by a function Δ, which is momentum and time dependent. We compute Δ explicitly and demonstrate that it arises from the whisker regions. The whiskers are also shown to lead to 'entanglement' entropy in the big bang region. Finally, in the Milne orbifold limit of our superconformal vacua, we show that Δ→1 and, hence, the fluctuation spectrum is unaltered by the big-crunch-big-bang singularity. We comment on, but do not attempt to resolve, subtleties related to gravitational back reaction and light winding modes when interactions are taken into account.

  14. Wildfire and forest disease interaction lead to greater loss of soil nutrients and carbon.

    Science.gov (United States)

    Cobb, Richard C; Meentemeyer, Ross K; Rizzo, David M

    2016-09-01

    Fire and forest disease have significant ecological impacts, but the interactions of these two disturbances are rarely studied. We measured soil C, N, Ca, P, and pH in forests of the Big Sur region of California impacted by the exotic pathogen Phytophthora ramorum, cause of sudden oak death, and the 2008 Basin wildfire complex. In Big Sur, overstory tree mortality following P. ramorum invasion has been extensive in redwood and mixed evergreen forests, where the pathogen kills true oaks and tanoak (Notholithocarpus densiflorus). Sampling was conducted across a full-factorial combination of disease/no disease and burned/unburned conditions in both forest types. Forest floor organic matter and associated nutrients were greater in unburned redwood compared to unburned mixed evergreen forests. Post-fire element pools were similar between forest types, but lower in burned-invaded compared to burned-uninvaded plots. We found evidence that disease-generated fuels led to increased loss of forest floor C, N, Ca, and P. The same effects were associated with lower %C and higher PO4-P in the mineral soil. Fire-disease interactions were linear functions of pre-fire host mortality, which was similar between the forest types. Our analysis suggests that these effects increased forest floor C loss by as much as 24.4% and 21.3% in redwood and mixed evergreen forests, respectively, with similar maximum losses for the other forest floor elements. Accumulation of sudden oak death generated fuels has the potential to increase fire-related loss of soil nutrients at the region-scale of this disease, and similar patterns are likely in other forests where fire and disease overlap.

  15. Application of the hybrid Big Bang-Big Crunch algorithm to optimal reconfiguration and distributed generation power allocation in distribution systems

    International Nuclear Information System (INIS)

    Sedighizadeh, Mostafa; Esmaili, Masoud; Esmaeili, Mobin

    2014-01-01

    In this paper, a multi-objective framework is proposed for simultaneous optimal network reconfiguration and DG (distributed generation) power allocation. The proposed method encompasses objective functions of power losses, voltage stability, DG cost, and greenhouse gas emissions, and is optimized subject to power system operational and technical constraints. In order to solve the optimization problem, the HBB-BC (Hybrid Big Bang-Big Crunch) algorithm, one of the most recent heuristic tools, is modified and employed here by introducing a mutation operator to enhance its exploration capability. To resolve the scaling problem of differently-scaled objective functions, a fuzzy membership is used to bring them onto the same scale, and the fuzzy fitness of the final objective function is then utilized to measure the satisfaction level of the obtained solution. The proposed method is tested on balanced and unbalanced test systems and its results are comprehensively compared with previous methods considering different scenarios. According to the results, the proposed method not only offers an enhanced exploration capability but also has a better convergence rate than previous methods. In addition, simultaneous network reconfiguration and DG power allocation leads to a more optimal result than performing reconfiguration and DG power allocation separately. - Highlights: • Hybrid Big Bang-Big Crunch algorithm is applied to the network reconfiguration problem. • Joint reconfiguration and DG power allocation leads to a more optimal solution. • A mutation operator is used to improve the exploration capability of the HBB-BC method. • The HBB-BC has a better convergence rate than the compared algorithms.
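
    A toy sketch of a Big Bang-Big Crunch loop with a simple mutation operator, in the spirit of the HBB-BC variant described above. It minimizes a stand-in objective; the real method optimizes reconfiguration and DG allocation under power-system constraints, all of which are omitted here, and every parameter value is an assumption.

      # Toy Big Bang-Big Crunch with mutation; the objective is a
      # placeholder, not the paper's multi-objective power-system model.
      import random

      def loss(x):                      # stand-in objective (sphere function)
          return sum(v * v for v in x)

      def bbbc(dim=4, pop=40, iters=100, alpha=1.0, p_mut=0.1, seed=1):
          rng = random.Random(seed)
          pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
          best = min(pts, key=loss)
          for k in range(1, iters + 1):
              # Big Crunch: fitness-weighted center of mass
              w = [1.0 / (1e-12 + loss(p)) for p in pts]
              wsum = sum(w)
              c = [sum(wi * p[d] for wi, p in zip(w, pts)) / wsum
                   for d in range(dim)]
              # Big Bang: scatter around the center, radius shrinking with k
              pts = []
              for _ in range(pop):
                  x = [ci + alpha * rng.gauss(0, 1) / k for ci in c]
                  if rng.random() < p_mut:        # mutation aids exploration
                      x[rng.randrange(dim)] = rng.uniform(-5, 5)
                  pts.append(x)
              best = min(pts + [best], key=loss)
          return best

      b = bbbc()
      print("best point:", [round(v, 4) for v in b], "loss:", round(loss(b), 8))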

  16. Big Egos in Big Science

    DEFF Research Database (Denmark)

    Andersen, Kristina Vaarst; Jeppesen, Jacob

    In this paper we investigate the micro-mechanisms governing the structural evolution and performance of scientific collaboration. Scientific discovery tends not to be led by so-called lone 'stars', or big egos, but instead by collaboration among groups of researchers, from a multitude of institutions...

  17. Measuring the Promise of Big Data Syllabi

    Science.gov (United States)

    Friedman, Alon

    2018-01-01

    Growing interest in Big Data is leading industries, academics and governments to accelerate Big Data research. However, how teachers should teach Big Data has not been fully examined. This article suggests criteria for redesigning Big Data syllabi in public and private degree-awarding higher education establishments. The author conducted a survey…

  18. "small problems, Big Trouble": An Art and Science Collaborative Exhibition Reflecting Seemingly small problems Leading to Big Threats

    Science.gov (United States)

    Waller, J. L.; Brey, J. A.

    2014-12-01

    disasters continues to inspire new chapters in their "Layers: Places in Peril" exhibit! A slide show includes images of paintings for "small problems, Big Trouble". Brey and Waller will lead a discussion on their process of incorporating broader collaboration with geoscientists and others in an educational art exhibition.

  19. Big data challenges

    DEFF Research Database (Denmark)

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterprises, associations and research institutions, and focused literature reviews allowed not only … framework are also relevant. For large enterprises and startups specialized in big data, it is typically easier to overcome the challenges than it is for other enterprises and public administration bodies.

  20. Big data bioinformatics.

    Science.gov (United States)

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. © 2014 Wiley Periodicals, Inc.

  1. Transient characteristics of current lead losses for the large scale high-temperature superconducting rotating machine

    International Nuclear Information System (INIS)

    Le, T. D.; Kim, J. H.; Park, S. I.; Kim, D. J.; Kim, H. M.; Lee, H. G.; Yoon, Y. S.; Jo, Y. S.; Yoon, K. Y.

    2014-01-01

    To minimize the heat loss through the current leads of a high-temperature superconducting (HTS) rotating machine, the conductor properties and the lead geometry - such as length, cross section, and cooling surface area - are among the significant factors that must be selected. An optimal lead for a large-scale HTS rotating machine has therefore been presented before. Following this line of work, this paper continues the effort to diminish the heat loss of the HTS part according to a different model. It also determines the simplifying conditions for an evaluation of the transient characteristics of the main flux-flow loss and eddy-current loss during the charging and discharging periods.
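
    As a point of reference for the trade-off the abstract describes, here is a back-of-the-envelope sketch of the classic Wiedemann-Franz estimate for the minimum conduction heat leak per ampere of an optimally proportioned, conduction-cooled normal-metal lead. This is textbook scaling under stated assumptions, not the paper's model, which also treats flux-flow and eddy-current losses during charge and discharge.

      # Wiedemann-Franz estimate of the minimum conduction heat leak per
      # ampere for an ideally sized normal-metal current lead. Textbook
      # scaling only; vapor cooling and HTS sections change the numbers.
      import math

      L0 = 2.45e-8          # Lorenz number, W*Ohm/K^2

      def q_min_per_amp(t_hot, t_cold):
          """Minimum heat leak per ampere (W/A) for an optimal lead."""
          return math.sqrt(L0 * (t_hot**2 - t_cold**2))

      for th, tc in [(300.0, 77.0), (77.0, 20.0), (300.0, 4.2)]:
          print(f"{th:5.1f} K -> {tc:4.1f} K : "
                f"{1e3 * q_min_per_amp(th, tc):6.1f} mW/A")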

  2. Identify too big to fail banks and capital insurance: An equilibrium approach

    OpenAIRE

    Katerina Ivanov

    2017-01-01

    The objective of this paper is to develop a rational expectation equilibrium model of capital insurance to identify too big to fail banks. The main results of this model include (1) too big to fail banks can be identified explicitly by a systemic risk measure, loss betas, of all banks in the entire financial sector; (2) the too big to fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; ...

  3. Intelligent Test Mechanism Design of Worn Big Gear

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available With the continuous development of the national economy, big gears are widely applied in the metallurgy and mining domains, where they play an important role. In practical production, big gear abrasion and breakage often take place, affecting normal production and causing unnecessary economic loss. A kind of intelligent test method for worn big gears was put forward, aimed mainly at the restrictions of high production cost, long production cycle and high-intensity manual repair welding work. The measurement equations were transformed for the involute spur gear: the original polar coordinate equations were transformed into rectangular coordinate equations. The big gear abrasion measurement principle is introduced, a detection principle diagram is given, and the realization of the detection route is described. An OADM12 laser sensor was selected, and detection of the big gear abrasion area was realized by the detection mechanism. Measured data from the unworn and worn gear were fed into a calculation program written in Visual Basic, from which the big gear abrasion quantity can be obtained. This provides a feasible method for intelligent testing and intelligent repair welding of worn big gears.

  4. Big Data: Implications for Health System Pharmacy.

    Science.gov (United States)

    Stokes, Laura B; Rogers, Joseph W; Hertig, John B; Weber, Robert J

    2016-07-01

    Big Data refers to datasets that are so large and complex that traditional methods and hardware for collecting, sharing, and analyzing them are not possible. Big Data that is accurate leads to more confident decision making, improved operational efficiency, and reduced costs. The rapid growth of health care information results in Big Data around health services, treatments, and outcomes, and Big Data can be used to analyze the benefit of health system pharmacy services. The goal of this article is to provide a perspective on how Big Data can be applied to health system pharmacy. It will define Big Data, describe the impact of Big Data on population health, review specific implications of Big Data in health system pharmacy, and describe an approach for pharmacy leaders to effectively use Big Data. A few strategies involved in managing Big Data in health system pharmacy include identifying potential opportunities for Big Data, prioritizing those opportunities, protecting privacy concerns, promoting data transparency, and communicating outcomes. As health care information expands in its content and becomes more integrated, Big Data can enhance the development of patient-centered pharmacy services.

  5. Challenges of Big Data Analysis.

    Science.gov (United States)

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinctive and require a new computational and statistical paradigm. This article gives an overview of the salient features of Big Data and how these features force a paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
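
    A small self-contained illustration of the spurious correlation challenge mentioned above: holding the sample size fixed while the number of independent noise features grows, the largest sample correlation with the response becomes large purely by chance. All numbers are simulated; this is an illustrative sketch, not the article's experiment.

      # Spurious correlation demo: the max |Pearson r| between a response
      # and p *independent* noise features grows with p at fixed n.
      import random, statistics

      def max_abs_corr(n=60, p=2000, seed=7):
          rng = random.Random(seed)
          y = [rng.gauss(0, 1) for _ in range(n)]
          ybar, ysd = statistics.mean(y), statistics.stdev(y)
          best = 0.0
          for _ in range(p):
              x = [rng.gauss(0, 1) for _ in range(n)]
              xbar, xsd = statistics.mean(x), statistics.stdev(x)
              r = sum((xi - xbar) * (yi - ybar)
                      for xi, yi in zip(x, y)) / ((n - 1) * xsd * ysd)
              best = max(best, abs(r))
          return best

      for p in (10, 100, 1000, 10000):
          print(f"p={p:6d}  max |r| = {max_abs_corr(p=p):.2f}")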

  6. Storage and Database Management for Big Data

    Science.gov (United States)

    2015-07-27

    … cloud models that satisfy different problem … replication. Data loss can only occur if three drives fail prior to any one of the failures being corrected. Hadoop is written in Java and is installed in a … visible view into a dataset. There are many popular database management systems such as MySQL [4], PostgreSQL [63], and Oracle [5]. Most commonly

  7. Big data governance an emerging imperative

    CERN Document Server

    Soares, Sunil

    2012-01-01

    Written by a leading expert in the field, this guide focuses on the convergence of two major trends in information management-big data and information governance-by taking a strategic approach oriented around business cases and industry imperatives. With the advent of new technologies, enterprises are expanding and handling very large volumes of data; this book, nontechnical in nature and geared toward business audiences, encourages the practice of establishing appropriate governance over big data initiatives and addresses how to manage and govern big data, highlighting the relevant processes,

  8. Big data analytics turning big data into big money

    CERN Document Server

    Ohlhorst, Frank J

    2012-01-01

    Unique insights to implement big data analytics and reap big returns to your bottom line Focusing on the business and financial value of big data analytics, respected technology journalist Frank J. Ohlhorst shares his insights on the newly emerging field of big data analytics in Big Data Analytics. This breakthrough book demonstrates the importance of analytics, defines the processes, highlights the tangible and intangible values and discusses how you can turn a business liability into actionable material that can be used to redefine markets, improve profits and identify new business opportuni

  9. Loss of mitochondrial SIRT4 shortens lifespan and leads to a ...

    Indian Academy of Sciences (India)

    Sweta Parik

    2018-04-21

    Apr 21, 2018 … determinant of longevity and its loss leads to early aging. Keywords: activity; aging … manner, using control and dSir2/Sirt1 perturbed flies, which were otherwise … on Sarcopenia in Older People. Age Ageing 39 412–423.

  10. Big Opportunities and Big Concerns of Big Data in Education

    Science.gov (United States)

    Wang, Yinying

    2016-01-01

    Against the backdrop of the ever-increasing influx of big data, this article examines the opportunities and concerns over big data in education. Specifically, this article first introduces big data, followed by delineating the potential opportunities of using big data in education in two areas: learning analytics and educational policy. Then, the…

  11. Big nuclear accidents

    International Nuclear Information System (INIS)

    Marshall, W.; Billingon, D.E.; Cameron, R.F.; Curl, S.J.

    1983-09-01

    Much of the debate on the safety of nuclear power focuses on the large number of fatalities that could, in theory, be caused by extremely unlikely but just imaginable reactor accidents. This, along with the nuclear industry's inappropriate use of vocabulary during public debate, has given the general public a distorted impression of the risks of nuclear power. The paper reviews the way in which the probability and consequences of big nuclear accidents have been presented in the past and makes recommendations for the future, including the presentation of the long-term consequences of such accidents in terms of 'loss of life expectancy', 'increased chance of fatal cancer' and 'equivalent pattern of compulsory cigarette smoking'. The paper presents mathematical arguments, which show the derivation and validity of the proposed methods of presenting the consequences of imaginable big nuclear accidents. (author)

  12. Determining tissue-lead levels in large game mammals harvested with lead bullets: human health concerns.

    Science.gov (United States)

    Tsuji, L J S; Wainman, B C; Jayasinghe, R K; VanSpronsen, E P; Liberda, E N

    2009-04-01

    Recently, the use of lead isotope ratios has definitively identified lead ammunition as a source of lead exposure for First Nations people, but the isotope ratios for lead pellets and bullets were indistinguishable. Thus, lead-contaminated meat from game harvested with lead bullets may also be contributing to the lead body burden; however, few studies have determined whether lead bullet fragments are present in big game carcasses. We found elevated tissue-lead concentrations (up to 5,726.0 µg/g ww) in liver (5/9) and muscle (6/7) samples of big game harvested with lead bullets, and radiographic evidence of lead fragments. Thus, we would advise that the tissue surrounding the wound channel be removed and discarded, as this tissue may be contaminated by lead bullet fragments.

  13. The ethics of biomedical big data

    CERN Document Server

    Mittelstadt, Brent Daniel

    2016-01-01

    This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...

  14. Identify too big to fail banks and capital insurance: An equilibrium approach

    Directory of Open Access Journals (Sweden)

    Katerina Ivanov

    2017-09-01

    Full Text Available The objective of this paper is to develop a rational expectation equilibrium model of capital insurance to identify too big to fail banks. The main results of this model include (1) too big to fail banks can be identified explicitly by a systemic risk measure, loss betas, of all banks in the entire financial sector; (2) the too big to fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; (4) the implicit guarantee subsidy can be estimated endogenously; and lastly, (5) the capital insurance proposal can be used to resolve the moral hazard issue. We implement this model and document that the too big to fail issue has been considerably reduced in the pro-crisis period. As a result, the capital insurance proposal could be a useful macro-regulation innovation policy tool.
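
    An illustrative sketch of a "loss beta" in the spirit described above: the sensitivity of one bank's loss to the aggregate loss of the financial sector, computed like a market beta but on loss series. The simulated data and the plain covariance/variance estimator are assumptions; the paper's equilibrium definition may differ.

      # Illustrative loss beta: cov(bank loss, sector loss) / var(sector loss).
      # All series are simulated; a large loss beta flags a candidate
      # too-big-to-fail bank in this toy setup.
      import random, statistics

      rng = random.Random(42)
      T = 250
      sector_shock = [rng.gauss(0, 1) for _ in range(T)]
      # bank losses = exposure to the common shock + idiosyncratic noise
      exposures = [0.2, 0.5, 0.9, 1.4, 2.2]
      bank_losses = [[e * s + rng.gauss(0, 0.5) for s in sector_shock]
                     for e in exposures]
      total_loss = [sum(b[t] for b in bank_losses) for t in range(T)]

      var_total = statistics.variance(total_loss)
      mean_total = statistics.mean(total_loss)
      for i, b in enumerate(bank_losses):
          mb = statistics.mean(b)
          cov = sum((b[t] - mb) * (total_loss[t] - mean_total)
                    for t in range(T)) / (T - 1)
          print(f"bank {i}: loss beta = {cov / var_total:.2f}")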

  15. Legacy sediment, lead, and zinc storage in channel and floodplain deposits of the Big River, Old Lead Belt Mining District, Missouri, USA

    Science.gov (United States)

    Pavlowsky, Robert T.; Lecce, Scott A.; Owen, Marc R.; Martin, Derek J.

    2017-12-01

    The Old Lead Belt of southeastern Missouri was one of the leading producers of Pb ore for more than a century (1869-1972). Large quantities of contaminated mine waste have been, and continue to be, supplied to local streams. This study assessed the magnitude and spatial distribution of mining-contaminated legacy sediment stored in channel and floodplain deposits of the Big River in the Ozark Highlands of southeastern Missouri. Although metal concentrations decline downstream from the mine sources, the channel and floodplain sediments are contaminated above background levels with Pb and Zn along its entire 171-km length below the mine sources. Mean concentrations in floodplain cores > 2000 mg/kg for Pb and > 1000 mg/kg for Zn extend 40-50 km downstream from the mining area in association with the supply of fine tailings particles that were easily dispersed downstream in the suspended load. Mean concentrations in channel bed and bar sediments ranging from 1400 to 1700 mg/kg for Pb extend 30 km below the mines, while Zn concentrations of 1000-3000 mg/kg extend 20 km downstream. Coarse dolomite fragments in the 2-16 mm channel sediment fraction provide significant storage of Pb and Zn, representing 13-20% of the bulk sediment storage mass in the channel, and can contain concentrations of > 4000 mg/kg for Pb and > 1000 mg/kg for Zn. These coarse tailings have been transported a maximum distance of only about 30 km from the source over a period of 120 years, for an average of about 250 m/y. About 37% of the Pb and 9% of the Zn that was originally released to the watershed in tailings wastes is still stored in the Big River. A total of 157 million Mg of contaminated sediment is stored along the Big River, with 92% of it located in floodplain deposits that are typically contaminated to depths of 1.5-3.5 m. These contaminated sediments store a total of 188,549 Mg of Pb and 34,299 Mg of Zn, of which 98% of the Pb and 95% of the Zn are stored in floodplain

  16. Dose estimates in a loss of lead shielding truck accident.

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, Matthew L.; Osborn, Douglas M.; Weiner, Ruth F.; Heames, Terence John (Alion Science & Technology Albuquerque, NM)

    2009-08-01

    The radiological transportation risk & consequence program RADTRAN has recently added an updated loss of lead shielding (LOS) model to its most recent version, RADTRAN 6.0. The LOS model was used to determine dose estimates to first responders during a spent nuclear fuel transportation accident. Results varied according to the type of accident scenario, the percent of lead slump, the distance to the shipment, and the time spent in the area. This document presents a method of creating dose estimates for first responders using RADTRAN with potential accident scenarios. This may be of particular interest in the event of high-speed accidents or fires involving cask punctures.

  17. The relationship between blood lead levels and periodontal bone loss in the United States, 1988-1994.

    OpenAIRE

    Dye, Bruce A; Hirsch, Rosemarie; Brody, Debra J

    2002-01-01

    An association between bone disease and bone lead has been reported. Studies have suggested that lead stored in bone may adversely affect bone mineral metabolism and blood lead (PbB) levels. However, the relationship between PbB levels and bone loss attributed to periodontal disease has never been reported. In this study we examined the relationship between clinical parameters that characterize bone loss due to periodontal disease and PbB levels in the U.S. population. We used data from the T...

  18. New 'bigs' in cosmology

    International Nuclear Information System (INIS)

    Yurov, Artyom V.; Martin-Moruno, Prado; Gonzalez-Diaz, Pedro F.

    2006-01-01

    This paper contains a detailed discussion of new cosmic solutions describing the early and late evolution of a universe that is filled with a kind of dark energy that may or may not satisfy the energy conditions. The main distinctive property of the resulting space-times is that the single singular events predicted by the corresponding quintessential (phantom) models appear twice, in a manner that can be made symmetric with respect to the origin of cosmic time. Thus, the big bang and big rip singularities are shown to take place twice, once on the positive branch of time and once on the negative one. We have also considered dark energy and phantom energy accretion onto black holes and wormholes in the context of these new cosmic solutions. It is seen that the space-times of these holes would then undergo swelling processes leading to big trip and big hole events taking place at distinct epochs along the evolution of the universe. In this way, the possibility is considered that the past and future may be connected in a non-paradoxical manner in the universes described by means of the new symmetric solutions.

  19. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  20. [Overall digitalization: leading innovation of endodontics in big data era].

    Science.gov (United States)

    Ling, J Q

    2016-04-09

    In the big data era, digital technologies bring great challenges and opportunities to modern stomatology. The applications of digital technologies, such as cone-beam CT (CBCT), computer-aided design (CAD) and computer-aided manufacture (CAM), 3D printing and digital approaches to education, provide new concepts and patterns for the treatment and study of endodontic diseases. This review provides an overview of the application and prospects of commonly used digital technologies in the development of endodontics.

  1. Cool horizons lead to information loss

    Science.gov (United States)

    Chowdhury, Borun D.

    2013-10-01

    There are two evidences for information loss during black hole evaporation: (i) a pure state evolves to a mixed state and (ii) the map from the initial state to final state is non-invertible. Any proposed resolution of the information paradox must address both these issues. The firewall argument focuses only on the first and this leads to order one deviations from the Unruh vacuum for maximally entangled black holes. The nature of the argument does not extend to black holes in pure states. It was shown by Avery, Puhm and the author that requiring the initial state to final state map to be invertible mandates structure at the horizon even for pure states. The proof works if black holes can be formed in generic states and in this paper we show that this is indeed the case. We also demonstrate how models proposed by Susskind, Papadodimas et al. and Maldacena et al. end up making the initial to final state map non-invertible and thus make the horizon "cool" at the cost of unitarity.

  2. Big bang nucleosynthesis - Predictions and uncertainties

    International Nuclear Information System (INIS)

    Krauss, L.M.; Romanelli, P.

    1990-01-01

    A detailed reexamination is made of primordial big-bang nucleosynthesis (BBN), concentrating on the data for the main nuclear reactions leading to the production of Li-7, He-3 and D, and on the neutron half-life, relevant for He-4 production. The new values for reaction rates and uncertainties are then used as input in a Monte Carlo analysis of big bang nucleosynthesis of light elements. This allows confidence levels to be assigned to the predictions of the standard BBN model. 70 refs
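
    A toy Monte Carlo in the spirit of the analysis above: assumed fractional 1-sigma uncertainties on a few reaction rates (and the neutron lifetime) are propagated through assumed power-law sensitivities to yield a confidence band on a predicted abundance. Every number below is an invented placeholder, not one of the paper's evaluated rates.

      # Toy Monte Carlo propagation of rate uncertainties into an abundance
      # prediction via assumed log-log sensitivities. Placeholder values only.
      import random, math

      # {input: (fractional 1-sigma error, d ln(abundance)/d ln(rate))}
      inputs = {
          "p(n,g)d":          (0.05, -0.2),
          "d(p,g)3He":        (0.10,  0.4),
          "3He(a,g)7Be":      (0.15,  0.9),
          "neutron_lifetime": (0.01,  0.7),
      }

      def one_draw(rng):
          # lognormal rate draws -> power-law response in the abundance
          log_y = sum(sens * rng.gauss(0.0, err)
                      for err, sens in inputs.values())
          return math.exp(log_y)   # abundance relative to central value

      rng = random.Random(0)
      draws = sorted(one_draw(rng) for _ in range(20000))
      lo, mid, hi = (draws[int(q * len(draws))] for q in (0.025, 0.5, 0.975))
      print(f"relative abundance: median {mid:.3f}, "
            f"95% interval [{lo:.3f}, {hi:.3f}]")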

  3. How Big Are "Martin's Big Words"? Thinking Big about the Future.

    Science.gov (United States)

    Gardner, Traci

    "Martin's Big Words: The Life of Dr. Martin Luther King, Jr." tells of King's childhood determination to use "big words" through biographical information and quotations. In this lesson, students in grades 3 to 5 explore information on Dr. King to think about his "big" words, then they write about their own…

  4. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    Full Text Available The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  5. BIG DATA IN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Logica BANICA

    2015-06-01

    Full Text Available In recent years, dealing with large amounts of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and, thereby, to generate market advantages. So, companies that turn to Big Data have a competitive advantage over other firms. Looking from the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper aims to discuss aspects of the Big Data concept and the principles for building, organizing and analysing huge datasets in the business environment, offering a three-layer architecture based on actual software solutions. The article also covers graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

  6. The Relationship between Occupational Exposure to Lead and Hearing Loss in a Cross-Sectional Survey of Iranian Workers.

    Science.gov (United States)

    Ghiasvand, Masoumeh; Mohammadi, Saber; Roth, Brett; Ranjbar, Mostafa

    2016-01-01

    Ototoxic effects of exposure to lead have been reported by many researchers. This study was undertaken to investigate the relationship between blood lead level (BLL) and hearing loss among workers in a lead-acid battery manufacturing plant in Tehran, Iran. In a cross-sectional study, 609 male workers were recruited from different locations in the factory. Associations between BLL and hearing loss at different frequencies were measured, and relationships were analyzed by logistic regression. Statistical significance was defined as p-value < 0.05. Workers with mean age 40 ± 7 years and mean noise exposure level of 80 (75-85) dB were evaluated. BLLs were categorized into four quartiles, and hearing loss in each quartile was compared to the first one. In our regression models, BLL was associated significantly with high frequency hearing loss; adjusted odds ratios for the comparison of the fourth, third, and second quartiles to the first one are, respectively: 3.98 (95% CI: 1.63-9.71, p < …) … hearing loss, after adjusting for potential confounders (age, body mass index, work duration, smoking, and occupational noise exposure) in logistic regressions. It is concluded that periodic hearing assessment by pure tone audiometry in workers exposed to lead should be recommended. However, additional studies are required to clarify the mechanisms of lead ototoxicity.
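
    A hedged sketch of the quartile comparison described above: unadjusted odds ratios of hearing loss for each blood-lead quartile versus the lowest, with Woolf confidence intervals. The counts are made up, and the study itself used logistic regression adjusted for age, BMI, work duration, smoking and noise, so this only mirrors the shape of the analysis.

      # Unadjusted quartile odds ratios with Woolf 95% CIs; counts are
      # invented, not the study's data.
      import math

      # (cases with hearing loss, subjects without) per BLL quartile
      counts = {"Q1": (12, 140), "Q2": (21, 131), "Q3": (30, 122), "Q4": (44, 109)}

      a0, b0 = counts["Q1"]
      odds_ref = a0 / b0
      for q in ("Q2", "Q3", "Q4"):
          a, b = counts[q]
          or_q = (a / b) / odds_ref
          se = math.sqrt(1/a + 1/b + 1/a0 + 1/b0)   # SE of the log odds ratio
          lo = math.exp(math.log(or_q) - 1.96 * se)
          hi = math.exp(math.log(or_q) + 1.96 * se)
          print(f"{q} vs Q1: OR = {or_q:.2f} (95% CI {lo:.2f}-{hi:.2f})")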

  7. How should we do the history of Big Data?

    OpenAIRE

    David Beer

    2016-01-01

    Taking its lead from Ian Hacking’s article ‘How should we do the history of statistics?’, this article reflects on how we might develop a sociologically informed history of big data. It argues that within the history of social statistics we have a relatively well developed history of the material phenomenon of big data. Yet this article argues that we now need to take the concept of ‘big data’ seriously, there is a pressing need to explore the type of work that is being done by that concept. ...

  8. The association between low levels of lead in blood and occupational noise-induced hearing loss in steel workers

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Yaw-Huei (Institute of Occupational Medicine and Industrial Hygiene, and Department of Public Health, College of Public Health, National Taiwan University, Taipei, Taiwan, ROC); Chiang, Han-Yueh (Institute of Occupational Medicine and Industrial Hygiene, College of Public Health, National Taiwan University, Taipei, Taiwan, ROC); Yen-Jean, Mei-Chu (Division of Family Medicine, E-Da Hospital, and I-Shou University, Kaohsiung County, Taiwan, ROC); Wang, Jung-Der, E-mail: jdwang@ntu.edu.tw (Institute of Occupational Medicine and Industrial Hygiene, and Department of Public Health, College of Public Health, National Taiwan University, Taipei, Taiwan, ROC; Department of Internal Medicine, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan, ROC)

    2009-12-15

    As the use of leaded gasoline has ceased in the last decade, background lead exposure has generally been reduced. The aim of this study was to examine the effect of low-level lead exposure on human hearing loss. This study was conducted in a steel plant and 412 workers were recruited from all over the plant. Personal information such as demographics and work history was obtained through a questionnaire. All subjects took part in an audiometric examination of hearing thresholds, for both ears, with air-conducted pure tones at frequencies of 500, 1000, 2000, 3000, 4000, 6000 and 8000 Hz. Subjects' blood samples were collected and analyzed for levels of manganese, copper, zinc, arsenic, cadmium and lead with inductively coupled plasma-mass spectrometry. Meanwhile, noise levels in different working zones were determined using a sound level meter with an A-weighting network. Only subjects with a hearing loss difference of no more than 15 dB between both ears and no congenital abnormalities were included in further data analysis. Lead was the only metal in blood found significantly correlated with hearing loss for most tested sound frequencies (p < 0.05 to p < 0.0001). After adjustment for age and noise level, the logistic regression model analysis indicated that elevated blood lead over 7 µg/dL was significantly associated with hearing loss at the sound frequencies of 3000 through 8000 Hz, with odds ratios ranging from 3.06 to 6.26 (p < 0.05 ∼ p < 0.005). We concluded that elevated blood lead at levels below 10 µg/dL might enhance noise-induced hearing loss. Future research needs to further explore the detailed mechanism.

  9. The association between low levels of lead in blood and occupational noise-induced hearing loss in steel workers

    International Nuclear Information System (INIS)

    Hwang, Yaw-Huei; Chiang, Han-Yueh; Yen-Jean, Mei-Chu; Wang, Jung-Der

    2009-01-01

    As the use of leaded gasoline has ceased in the last decade, background lead exposure has generally been reduced. The aim of this study was to examine the effect of low-level lead exposure on human hearing loss. This study was conducted in a steel plant and 412 workers were recruited from all over the plant. Personal information such as demographics and work history was obtained through a questionnaire. All subjects took part in an audiometric examination of hearing thresholds, for both ears, with air-conducted pure tones at frequencies of 500, 1000, 2000, 3000, 4000, 6000 and 8000 Hz. Subjects' blood samples were collected and analyzed for levels of manganese, copper, zinc, arsenic, cadmium and lead with inductively coupled plasma-mass spectrometry. Meanwhile, noise levels in different working zones were determined using a sound level meter with an A-weighting network. Only subjects with a hearing loss difference of no more than 15 dB between both ears and no congenital abnormalities were included in further data analysis. Lead was the only metal in blood found significantly correlated with hearing loss for most tested sound frequencies (p < 0.05 to p < 0.0001). After adjustment for age and noise level, the logistic regression model analysis indicated that elevated blood lead over 7 μg/dL was significantly associated with hearing loss at the sound frequencies of 3000 through 8000 Hz, with odds ratios ranging from 3.06 to 6.26 (p < 0.05 ∼ p < 0.005). We concluded that elevated blood lead at levels below 10 μg/dL might enhance noise-induced hearing loss. Future research needs to further explore the detailed mechanism.

  10. Big Surveys, Big Data Centres

    Science.gov (United States)

    Schade, D.

    2016-06-01

    Well-designed astronomical surveys are powerful and have consistently been keystones of scientific progress. The Byurakan Surveys using a Schmidt telescope with an objective prism produced a list of about 3000 UV-excess Markarian galaxies, but these objects have stimulated an enormous amount of further study and appear in over 16,000 publications. The CFHT Legacy Surveys used a wide-field imager to cover thousands of square degrees, and those surveys are mentioned in over 1100 publications since 2002. Both ground and space-based astronomy have been increasing their investments in survey work. Survey instrumentation strives toward fair samples and large sky coverage and therefore tends to produce massive datasets. Thus we are faced with the "big data" problem in astronomy. Survey datasets require specialized approaches to data management. Big data places additional challenging requirements on data management. If the term "big data" is defined as data collections that are too large to move, then there are profound implications for the infrastructure that supports big data science. The current model of data centres is obsolete. In the era of big data the central problem is how to create architectures that effectively manage the relationship between data collections, networks, processing capabilities, and software, given the science requirements of the projects that need to be executed. A stand-alone data silo cannot support big data science. I'll describe the current efforts of the Canadian community to deal with this situation and our successes and failures. I'll talk about how we are planning in the next decade to try to create a workable and adaptable solution to support big data science.
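
    The "too large to move" framing is easy to sanity-check with back-of-envelope arithmetic; the short Python snippet below estimates the transfer time for a petabyte-scale survey archive over a fast research link. The archive size, link speed and sustained-throughput fraction are illustrative assumptions, not figures from this record.

      # Rough transfer-time estimate for a "too large to move" survey archive.
      # Archive size, link speed and efficiency are illustrative assumptions.
      data_bytes = 1e15        # 1 PB archive
      link_bps = 10e9          # 10 Gb/s research link
      efficiency = 0.5         # realistic sustained-throughput fraction

      seconds = data_bytes * 8 / (link_bps * efficiency)
      print(f"~{seconds / 86400:.1f} days per petabyte")   # about 18.5 days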

  11. Recht voor big data, big data voor recht

    NARCIS (Netherlands)

    Lafarre, Anne

    Big data is a phenomenon in our society that can no longer be ignored. It has moved past the hype cycle, and the first implementations of big data techniques are being carried out. But what exactly is big data? What do the five V's that are so often mentioned in relation to big data stand for? As an introduction to

  12. Improving Healthcare Using Big Data Analytics

    Directory of Open Access Journals (Sweden)

    Revanth Sonnati

    2017-03-01

    Full Text Available In everyday terms we call the current era the Modern Era, which in the field of Information Technology can also be called the era of Big Data. Our daily lives in today's world are advancing rapidly, never quenching one's thirst. The fields of science, engineering and technology are producing data at an exponential rate, leading to exabytes of data every day. Big data helps us to explore and re-invent many areas, not limited to education, health and law. The primary purpose of this paper is to provide an in-depth analysis of the area of healthcare using big data and analytics. The main purpose is to emphasize that, while the big data being stored all the time makes it possible to look back at history, it is now time to emphasize analysis in order to improve medication and services. Although many big data implementations happen to be in-house developments, this proposed implementation aims at a broader extent using Hadoop, which just happens to be the tip of the iceberg. The focus of this paper is not limited to the improvement and analysis of the data; it also focuses on the strengths and drawbacks compared to the conventional techniques available.
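
    As a concrete illustration of the Hadoop-based approach this record gestures at, the sketch below is a minimal Hadoop Streaming job in Python that counts diagnosis codes in line-oriented patient records. The file layout, field positions and cluster paths are assumptions made for illustration, not details from the paper.

      #!/usr/bin/env python3
      # Minimal Hadoop Streaming job (illustrative): count diagnosis codes in
      # CSV patient records of the assumed form "patient_id,visit_date,diagnosis_code".
      import sys
      from itertools import groupby

      def mapper():
          for line in sys.stdin:
              fields = line.rstrip("\n").split(",")
              if len(fields) >= 3:
                  print(f"{fields[2]}\t1")   # emit (diagnosis_code, 1)

      def reducer():
          # Hadoop sorts mapper output by key, so equal codes arrive consecutively
          pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
          for code, group in groupby(pairs, key=lambda kv: kv[0]):
              print(f"{code}\t{sum(int(v) for _, v in group)}")

      if __name__ == "__main__":
          mapper() if sys.argv[1:] == ["map"] else reducer()

    A run would look roughly like hadoop jar hadoop-streaming.jar -input /ehr/records -output /ehr/diag_counts -mapper "count_diag.py map" -reducer "count_diag.py reduce" -file count_diag.py, with the jar location and HDFS paths depending on the installation.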

  13. Big bang nucleosynthesis: The strong nuclear force meets the weak anthropic principle

    International Nuclear Information System (INIS)

    MacDonald, J.; Mullan, D. J.

    2009-01-01

    Contrary to a common argument that a small increase in the strength of the strong force would lead to destruction of all hydrogen in the big bang, due to binding of the diproton and the dineutron, with a catastrophic impact on life as we know it, we show that provided the increase in the strong force coupling constant is less than about 50%, substantial amounts of hydrogen remain. The reason is that an increase in strong force strength leads to tighter binding of the deuteron, permitting nucleosynthesis to occur earlier in the big bang, at higher temperature than in the standard big bang. Photodestruction of the less tightly bound diproton and dineutron delays their production to after the bulk of nucleosynthesis is complete. The decay of the diproton can, however, lead to relatively large abundances of deuterium.
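
    The "earlier, at higher temperature" argument can be made quantitative with the standard deuterium-bottleneck estimate (a textbook relation, not a formula quoted from this record): nucleosynthesis begins roughly once photons energetic enough to dissociate the deuteron become too scarce,

      % Standard deuterium-bottleneck estimate (textbook relation, not from the paper):
      % nucleosynthesis starts once photons above the deuteron binding energy B_D
      % are too scarce to destroy deuterons, i.e. roughly when
      \[
        \eta^{-1} e^{-B_D/T} \sim 1
        \quad\Longrightarrow\quad
        T_{\rm nuc} \approx \frac{B_D}{\ln \eta^{-1}}
        \approx \frac{2.22\,\mathrm{MeV}}{\ln\!\left(1.6\times 10^{9}\right)}
        \approx 0.1\,\mathrm{MeV},
      \]
      % with \eta \approx 6\times 10^{-10} the baryon-to-photon ratio.

    Since T_nuc scales with the deuteron binding energy B_D, a tighter-bound deuteron moves nucleosynthesis to higher temperature and hence earlier times, before the weakly bound diproton and dineutron can survive photodestruction.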

  14. Environmental lead pollution and its possible influence on tooth loss and hard dental tissue lesions.

    Science.gov (United States)

    Cenić-Milosević, Desanka; Mileusnić, Ivan; Kolak, Veljko; Pejanović, Djordje; Ristić, Tamara; Jakovljević, Ankica; Popović, Milica; Pesić, Dragana; Melih, Irena

    2013-08-01

    Environmental lead (Pb) pollution is a global problem. Hard dental tissue is capable of accumulating lead and other heavy metals from the environment. The aim of this study was to investigate any correlation between the concentration of lead in teeth extracted from inhabitants of Pancevo and Belgrade, Serbia, belonging to different age groups and the occurrence of tooth loss, caries and non-carious lesions. A total of 160 volunteers were chosen consecutively from Pancevo (the experimental group) and Belgrade (the control group) and divided into 5 age subgroups of 32 subjects each. Clinical examination consisted of caries and hard dental tissue diagnostics. The Decayed Missing Filled Teeth (DMFT) Index and Significant Caries Index were calculated. Extracted teeth were freed of any organic residue by UV digestion and subjected to voltammetric analysis for the content of lead. The average DMFT scores in Pancevo (20.41) were higher than in Belgrade (16.52); in the patients aged 31-40 and 41-50 years the difference was significant (p < 0.05). Considering the correlation between lead concentration and the number of extracted teeth, carious lesions and non-carious lesions found in the patients living in Pancevo, one possible cause of tooth loss and hard dental tissue damage could be a long-term environmental exposure to lead.
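
    Both caries indices named here are simple to compute; the Python sketch below shows the standard definitions on made-up scores (a generic illustration, not the study's data): DMFT is the per-subject count of decayed, missing and filled teeth, and the Significant Caries (SiC) Index is the mean DMFT of the third of subjects with the highest scores.

      # Standard DMFT and Significant Caries (SiC) Index computations.
      # Scores below are made up for illustration, not the study's data.
      def dmft(decayed, missing, filled):
          return decayed + missing + filled

      def sic_index(dmft_scores):
          """Mean DMFT of the third of subjects with the highest DMFT."""
          ranked = sorted(dmft_scores, reverse=True)
          top_third = ranked[: max(1, len(ranked) // 3)]
          return sum(top_third) / len(top_third)

      scores = [dmft(*t) for t in [(2, 1, 4), (0, 0, 1), (5, 6, 3), (1, 0, 2), (3, 8, 6), (0, 1, 0)]]
      print(sum(scores) / len(scores))   # mean DMFT
      print(sic_index(scores))           # SiC Index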

  15. Concentration Trends for Lead and Calcium-Normalized Lead in Fish Fillets from the Big River, a Mining-Contaminated Stream in Southeastern Missouri USA.

    Science.gov (United States)

    Schmitt, Christopher J; McKee, Michael J

    2016-11-01

    Lead (Pb) and calcium (Ca) concentrations were measured in fillet samples of longear sunfish (Lepomis megalotis) and redhorse suckers (Moxostoma spp.) collected in 2005-2012 from the Big River, which drains a historical mining area in southeastern Missouri and where a consumption advisory is in effect due to elevated Pb concentrations in fish. Lead tends to accumulate in Ca-rich tissues such as bone and scale. Concentrations of Pb in fish muscle are typically low, but can become elevated in fillets from Pb-contaminated sites, depending in part on how much bone, scale, and skin is included in the sample. We used analysis-of-covariance to normalize Pb concentrations to the geometric mean Ca concentration (415 µg/g wet weight, ww), which reduced variation between taxa, sites, and years, as well as the number of samples that exceeded the Missouri consumption advisory threshold (300 ng/g ww). Concentrations of Pb in 2005-2012 were lower than in the past, especially after Ca-normalization, but the consumption advisory is still warranted because concentrations were >300 ng/g ww in samples of both taxa from contaminated sites. For monitoring purposes, a simple linear regression model is proposed for estimating Ca-normalized Pb concentrations in fillets from Pb:Ca molar ratios, as a way of reducing the effects of differing preparation methods on fillet Pb variation.
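
    A minimal Python sketch of the two statistical steps described, run on synthetic data (the coefficients, the taxon coding and all values are invented; this is not the authors' code): an ANCOVA-style regression of log Pb on log Ca plus a taxon term, used to adjust each fillet's Pb to the geometric-mean Ca of 415 µg/g ww, followed by a simple linear regression of the normalized Pb on the Pb:Ca molar ratio.

      # Sketch of Ca-normalization of fillet Pb (synthetic data; not the authors' code).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 200
      ca = rng.lognormal(np.log(415), 0.5, n)      # fillet Ca, ug/g ww
      taxon = rng.integers(0, 2, n)                # 0 = sunfish, 1 = sucker (assumed coding)
      pb = np.exp(0.9 * np.log(ca) - 4.0 + 0.3 * taxon + rng.normal(0, 0.3, n))  # ng/g ww

      # ANCOVA-style model: log(Pb) ~ log(Ca) + taxon
      X = sm.add_constant(np.column_stack([np.log(ca), taxon]))
      fit = sm.OLS(np.log(pb), X).fit()

      # Ca-normalized Pb: adjust each sample to the geometric-mean Ca (415 ug/g ww)
      b_ca = fit.params[1]                         # coefficient on log(Ca)
      pb_norm = np.exp(np.log(pb) - b_ca * (np.log(ca) - np.log(415.0)))

      # Simple regression of normalized Pb on the Pb:Ca molar ratio
      molar_ratio = (pb / 207.2e3) / (ca / 40.08)  # ng/g vs ug/g gives the 1e3 factor
      slope = sm.OLS(pb_norm, sm.add_constant(molar_ratio)).fit().params[1]
      print(f"normalized-Pb vs Pb:Ca slope: {slope:.3g}")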

  16. Concentration trends for lead and calcium-normalized lead in fish fillets from the Big River, a mining-contaminated stream in southeastern Missouri USA

    Science.gov (United States)

    Schmitt, Christopher J.; McKee, Michael J.

    2016-01-01

    Lead (Pb) and calcium (Ca) concentrations were measured in fillet samples of longear sunfish (Lepomis megalotis) and redhorse suckers (Moxostoma spp.) collected in 2005–2012 from the Big River, which drains a historical mining area in southeastern Missouri and where a consumption advisory is in effect due to elevated Pb concentrations in fish. Lead tends to accumulate in Ca-rich tissues such as bone and scale. Concentrations of Pb in fish muscle are typically low, but can become elevated in fillets from Pb-contaminated sites, depending in part on how much bone, scale, and skin is included in the sample. We used analysis-of-covariance to normalize Pb concentrations to the geometric mean Ca concentration (415 µg/g wet weight, ww), which reduced variation between taxa, sites, and years, as well as the number of samples that exceeded the Missouri consumption advisory threshold (300 ng/g ww). Concentrations of Pb in 2005–2012 were lower than in the past, especially after Ca-normalization, but the consumption advisory is still warranted because concentrations were >300 ng/g ww in samples of both taxa from contaminated sites. For monitoring purposes, a simple linear regression model is proposed for estimating Ca-normalized Pb concentrations in fillets from Pb:Ca molar ratios, as a way of reducing the effects of differing preparation methods on fillet Pb variation.

  17. Clinical validation of a public health policy-making platform for hearing loss (EVOTION): protocol for a big data study.

    Science.gov (United States)

    Dritsakis, Giorgos; Kikidis, Dimitris; Koloutsou, Nina; Murdin, Louisa; Bibas, Athanasios; Ploumidou, Katherine; Laplante-Lévesque, Ariane; Pontoppidan, Niels Henrik; Bamiou, Doris-Eva

    2018-02-15

    The holistic management of hearing loss (HL) requires an understanding of factors that predict hearing aid (HA) use and benefit beyond the acoustics of listening environments. Although several predictors have been identified, no study has explored the role of audiological, cognitive, behavioural and physiological data nor has any study collected real-time HA data. This study will collect 'big data', including retrospective HA logging data, prospective clinical data and real-time data via smart HAs, a mobile application and biosensors. The main objective is to enable the validation of the EVOTION platform as a public health policy-making tool for HL. This will be a big data international multicentre study consisting of retrospective and prospective data collection. Existing data from approximately 35 000 HA users will be extracted from clinical repositories in the UK and Denmark. For the prospective data collection, 1260 HA candidates will be recruited across four clinics in the UK and Greece. Participants will complete a battery of audiological and other assessments (measures of patient-reported HA benefit, mood, cognition, quality of life). Patients will be offered smart HAs and a mobile phone application and a subset will also be given wearable biosensors, to enable the collection of dynamic real-life HA usage data. Big data analytics will be used to detect correlations between contextualised HA usage and effectiveness, and different factors and comorbidities affecting HL, with a view to informing public health decision-making. Ethical approval was received from the London South East Research Ethics Committee (17/LO/0789), the Hippokrateion Hospital Ethics Committee (1847) and the Athens Medical Center's Ethics Committee (KM140670). Results will be disseminated through national and international events in Greece and the UK, scientific journals, newsletters, magazines and social media. Target audiences include HA users, clinicians, policy-makers and the

  18. Characterization and Architectural Implications of Big Data Workloads

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Jia, Zhen; Han, Rui

    2015-01-01

    Big data areas are expanding quickly in terms of workloads and runtime systems, and this situation poses a serious challenge to workload characterization, which is the foundation of innovative system and architecture design. Previous major efforts on big data benchmarking either propose a comprehensive but very large set of workloads, or select only a few workloads according to so-called popularity, which may lead to partial or even biased observations. In this paper, o...

  19. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies.

    Science.gov (United States)

    de Brevern, Alexandre G; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

    Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries.
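
    To make the NoSQL point concrete, here is a minimal document-store sketch in Python using pymongo that stores schema-free NGS variant records. It assumes a MongoDB server on localhost, and the field names are invented; it illustrates the storage style discussed, not the Amadea platform itself.

      # Minimal document-store sketch for NGS variant records (pymongo).
      # Assumes a MongoDB server on localhost:27017; field names are invented.
      from pymongo import MongoClient, ASCENDING

      client = MongoClient("mongodb://localhost:27017")
      variants = client["ngs"]["variants"]
      variants.drop()   # start clean for the illustration

      # Schema-free inserts: records may carry different annotation fields
      variants.insert_one({"chrom": "chr7", "pos": 55249071, "ref": "C", "alt": "T",
                           "gene": "EGFR", "depth": 143})
      variants.insert_one({"chrom": "chr17", "pos": 7674220, "ref": "G", "alt": "A",
                           "gene": "TP53", "clinvar": "pathogenic"})

      # An index keeps queries fast as the collection grows
      variants.create_index([("gene", ASCENDING)])
      for doc in variants.find({"gene": "EGFR", "depth": {"$gt": 100}}):
          print(doc["chrom"], doc["pos"], doc["alt"])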

  20. Pre-big bang in M-theory

    OpenAIRE

    Cavaglia, Marco

    2001-01-01

    We discuss a simple cosmological model derived from M-theory. Three assumptions lead naturally to a pre-big bang scenario: (a) 11-dimensional supergravity describes the low-energy world; (b) non-gravitational fields live on a three-dimensional brane; and (c) asymptotically past triviality.

  1. Really big numbers

    CERN Document Server

    Schwartz, Richard Evan

    2014-01-01

    In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...

  2. A Model-Driven Methodology for Big Data Analytics-as-a-Service

    OpenAIRE

    Damiani, Ernesto; Ardagna, Claudio Agostino; Ceravolo, Paolo; Bellandi, Valerio; Bezzi, Michele; Hebert, Cedric

    2017-01-01

    The Big Data revolution has promised to build a data-driven ecosystem where better decisions are supported by enhanced analytics and data management. However, critical issues still need to be solved on the road that leads to commoditization of Big Data Analytics, such as the management of Big Data complexity and the protection of data security and privacy. In this paper, we focus on the first issue and propose a methodology based on Model Driven Engineering (MDE) that aims to substantially lowe...

  3. BigOP: Generating Comprehensive Big Data Workloads as a Benchmarking Framework

    OpenAIRE

    Zhu, Yuqing; Zhan, Jianfeng; Weng, Chuliang; Nambiar, Raghunath; Zhang, Jinchao; Chen, Xingzhen; Wang, Lei

    2014-01-01

    Big Data is considered a proprietary asset of companies, organizations, and even nations. Turning big data into real treasure requires the support of big data systems. A variety of commercial and open source products have been released for big data storage and processing. While big data users are facing the choice of which system best suits their needs, big data system developers are facing the question of how to evaluate their systems with regard to general big data processing needs. System b...

  4. Capture reactions on C-14 in nonstandard big bang nucleosynthesis

    Science.gov (United States)

    Wiescher, Michael; Gorres, Joachim; Thielemann, Friedrich-Karl

    1990-01-01

    Nonstandard big bang nucleosynthesis leads to the production of C-14. The further reaction path depends on the depletion of C-14 by either proton, alpha, or neutron capture reactions. The nucleus C-14 is of particular importance in these scenarios because it forms a bottleneck for the production of heavier nuclei (A > 14). The reaction rates of all three capture reactions at big bang conditions are discussed, and it is shown that the resulting reaction path, leading to the production of heavier elements, is dominated by the (p, gamma) and (n, gamma) rates, contrary to earlier suggestions.
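
    For reference, the three competing depletion channels can be written out explicitly; the product nuclei follow from standard nuclear bookkeeping rather than from the abstract itself:

      % The three capture channels competing to deplete 14C:
      \[
        ^{14}\mathrm{C}(p,\gamma)\,^{15}\mathrm{N}, \qquad
        ^{14}\mathrm{C}(n,\gamma)\,^{15}\mathrm{C}, \qquad
        ^{14}\mathrm{C}(\alpha,\gamma)\,^{18}\mathrm{O},
      \]

    with the abstract's conclusion being that the (p,γ) and (n,γ) rates dominate the flow to A > 14 under big bang conditions.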

  5. How Big Is Too Big?

    Science.gov (United States)

    Cibes, Margaret; Greenwood, James

    2016-01-01

    Media Clips appears in every issue of Mathematics Teacher, offering readers contemporary, authentic applications of quantitative reasoning based on print or electronic media. This issue features "How Big is Too Big?" (Margaret Cibes and James Greenwood) in which students are asked to analyze the data and tables provided and answer a…

  6. Big Data and reality

    Directory of Open Access Journals (Sweden)

    Ryan Shaw

    2015-11-01

    Full Text Available DNA sequencers, Twitter, MRIs, Facebook, particle accelerators, Google Books, radio telescopes, Tumblr: what do these things have in common? According to the evangelists of "data science," all of these are instruments for observing reality at unprecedentedly large scales and fine granularities. This perspective overlooks the social reality of these very different technological systems, ignoring how they are made, how they work, and what they mean, in favor of an exclusive focus on what they generate: Big Data. But no data, big or small, can be interpreted without an understanding of the process that generated them. Statistical data science is applicable to systems that have been designed as scientific instruments, but is likely to lead to confusion when applied to systems that have not. In those cases, a historical inquiry is preferable.

  7. Rotational inhomogeneities from pre-big bang?

    International Nuclear Information System (INIS)

    Giovannini, Massimo

    2005-01-01

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric

  8. Rotational inhomogeneities from pre-big bang?

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, Massimo [Department of Physics, Theory Division, CERN, 1211 Geneva 23 (Switzerland)

    2005-01-21

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric.

  9. Hearing loss in children with e-waste lead and cadmium exposure.

    Science.gov (United States)

    Liu, Yu; Huo, Xia; Xu, Long; Wei, Xiaoqin; Wu, Wengli; Wu, Xianguang; Xu, Xijin

    2018-05-15

    Environmental chemical exposure can cause neurotoxicity and has recently been linked to hearing loss in the general population, but data are limited on early-life exposure to lead (Pb) and cadmium (Cd), especially for children. We aimed to evaluate the association of their exposure with pediatric hearing ability. Blood Pb and urinary Cd were collected from 234 preschool children aged 3-7 years from an electronic waste (e-waste) recycling area and a matched reference area in Shantou, southern China. Pure-tone air conduction (PTA) was used to test child hearing thresholds at frequencies of 0.25, 0.5, 1, 2, 4 and 8 kHz. A PTA ≥ 25 dB was defined as hearing loss. A higher median blood Pb level was found in the exposed group (4.94±0.20 vs 3.85±1.81 μg/dL, p < 0.05), as was a higher rate of hearing loss (28.8% vs 13.6%, p < 0.05). Hearing thresholds at the average low and high frequencies, and at the single frequencies of 0.5, 1 and 2 kHz, were all increased in the exposed group. Positive correlations of child age and nail-biting habit with Pb, and negative correlations of parent education level and child hand-washing before dinner with Pb and Cd exposure, were observed. Logistic regression analyses showed the adjusted OR of hearing loss for Pb exposure was 1.24 (95% CI: 1.029, 1.486). Our data suggest that early childhood exposure to Pb may be an important risk factor for hearing loss, and the developmental auditory system might be affected in e-waste polluted areas. Copyright © 2017 Elsevier B.V. All rights reserved.
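
    The pure-tone-average criterion used here is straightforward to compute; the Python sketch below averages thresholds over tested frequencies and applies the ≥ 25 dB cutoff. The threshold values and the exact frequency groupings for the "low" and "high" averages are assumptions for illustration.

      # Pure-tone average (PTA) and the >= 25 dB hearing-loss criterion.
      # Thresholds and frequency groupings below are assumptions for illustration.
      thresholds = {250: 15, 500: 20, 1000: 25, 2000: 30, 4000: 35, 8000: 40}  # dB HL, one ear

      def pta(thr, freqs):
          return sum(thr[f] for f in freqs) / len(freqs)

      low = pta(thresholds, (250, 500, 1000, 2000))   # low-frequency average
      high = pta(thresholds, (4000, 8000))            # high-frequency average
      overall = pta(thresholds, tuple(thresholds))
      print(low, high, overall, "hearing loss" if overall >= 25 else "normal")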

  10. Tax Expert Offers Ideas for Monitoring Big Spending on College Sports

    Science.gov (United States)

    Sander, Libby

    2009-01-01

    The federal government could take a cue from its regulation of charitable organizations in monitoring the freewheeling fiscal habits of big-time college athletics, a leading tax lawyer says. The author reports on the ideas offered by John D. Colombo, a professor at the University of Illinois College of Law, for monitoring big spending on college…

  11. Nursing Needs Big Data and Big Data Needs Nursing.

    Science.gov (United States)

    Brennan, Patricia Flatley; Bakken, Suzanne

    2015-09-01

    Contemporary big data initiatives in health care will benefit from greater integration with nursing science and nursing practice; in turn, nursing science and nursing practice has much to gain from the data science initiatives. Big data arises secondary to scholarly inquiry (e.g., -omics) and everyday observations like cardiac flow sensors or Twitter feeds. Data science methods that are emerging ensure that these data be leveraged to improve patient care. Big data encompasses data that exceed human comprehension, that exist at a volume unmanageable by standard computer systems, that arrive at a velocity not under the control of the investigator and possess a level of imprecision not found in traditional inquiry. Data science methods are emerging to manage and gain insights from big data. The primary methods included investigation of emerging federal big data initiatives, and exploration of exemplars from nursing informatics research to benchmark where nursing is already poised to participate in the big data revolution. We provide observations and reflections on experiences in the emerging big data initiatives. Existing approaches to large data set analysis provide a necessary but not sufficient foundation for nursing to participate in the big data revolution. Nursing's Social Policy Statement guides a principled, ethical perspective on big data and data science. There are implications for basic and advanced practice clinical nurses in practice, for the nurse scientist who collaborates with data scientists, and for the nurse data scientist. Big data and data science has the potential to provide greater richness in understanding patient phenomena and in tailoring interventional strategies that are personalized to the patient. © 2015 Sigma Theta Tau International.

  12. BIG Data - BIG Gains? Understanding the Link Between Big Data Analytics and Innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance for product innovations. Since big data technologies provide new data information practices, they create new decision-making possibilities, which firms can use to realize innovations. Applying German firm-level data we find suggestive evidence that big data analytics matters for the likelihood of becoming a product innovator as well as the market success of the firms’ product innovat...

  13. Networking for big data

    CERN Document Server

    Yu, Shui; Misic, Jelena; Shen, Xuemin (Sherman)

    2015-01-01

    Networking for Big Data supplies an unprecedented look at cutting-edge research on the networking and communication aspects of Big Data. Starting with a comprehensive introduction to Big Data and its networking issues, it offers deep technical coverage of both theory and applications.The book is divided into four sections: introduction to Big Data, networking theory and design for Big Data, networking security for Big Data, and platforms and systems for Big Data applications. Focusing on key networking issues in Big Data, the book explains network design and implementation for Big Data. It exa

  14. Loss of thymidine kinase 2 alters neuronal bioenergetics and leads to neurodegeneration.

    Science.gov (United States)

    Bartesaghi, Stefano; Betts-Henderson, Joanne; Cain, Kelvin; Dinsdale, David; Zhou, Xiaoshan; Karlsson, Anna; Salomoni, Paolo; Nicotera, Pierluigi

    2010-05-01

    Mutations of thymidine kinase 2 (TK2), an essential component of the mitochondrial nucleotide salvage pathway, can give rise to mitochondrial DNA (mtDNA) depletion syndromes (MDS). These clinically heterogeneous disorders are characterized by severe reduction in mtDNA copy number in affected tissues and are associated with progressive myopathy, hepatopathy and/or encephalopathy, depending in part on the underlying nuclear genetic defect. Mutations of TK2 have previously been associated with an isolated myopathic form of MDS (OMIM 609560). However, more recently, neurological phenotypes have been demonstrated in patients carrying TK2 mutations, thus suggesting that loss of TK2 results in neuronal dysfunction. Here, we directly address the role of TK2 in neuronal homeostasis using a knockout mouse model. We demonstrate that in vivo loss of TK2 activity leads to a severe ataxic phenotype, accompanied by reduced mtDNA copy number and decreased steady-state levels of electron transport chain proteins in the brain. In TK2-deficient cerebellar neurons, these abnormalities are associated with impaired mitochondrial bioenergetic function, aberrant mitochondrial ultrastructure and degeneration of selected neuronal types. Overall, our findings demonstrate that TK2 deficiency leads to neuronal dysfunction in vivo, and have important implications for understanding the mechanisms of neurological impairment in MDS.

  15. From big data to deep insight in developmental science.

    Science.gov (United States)

    Gilmore, Rick O

    2016-01-01

    The use of the term 'big data' has grown substantially over the past several decades and is now widespread. In this review, I ask what makes data 'big' and what implications the size, density, or complexity of datasets have for the science of human development. A survey of existing datasets illustrates how existing large, complex, multilevel, and multimeasure data can reveal the complexities of developmental processes. At the same time, significant technical, policy, ethics, transparency, cultural, and conceptual issues associated with the use of big data must be addressed. Most big developmental science data are currently hard to find and cumbersome to access, the field lacks a culture of data sharing, and there is no consensus about who owns or should control research data. But, these barriers are dissolving. Developmental researchers are finding new ways to collect, manage, store, share, and enable others to reuse data. This promises a future in which big data can lead to deeper insights about some of the most profound questions in behavioral science. © 2016 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  16. Big data challenges: Impact, potential responses and research needs

    OpenAIRE

    Bachlechner, Daniel; Leimbach, Timo

    2016-01-01

    Although reports on big data success stories have been accumulating in the media, most organizations dealing with high-volume, high-velocity and high-variety information assets still face challenges. Only a thorough understanding of these challenges puts organizations into a position in which they can make an informed decision for or against big data, and, if the decision is positive, overcome the challenges smoothly. The combination of a series of interviews with leading experts from enterpr...

  17. TELECOM BIG DATA FOR URBAN TRANSPORT ANALYSIS – A CASE STUDY OF SPLIT-DALMATIA COUNTY IN CROATIA

    OpenAIRE

    M. Baučić; N. Jajac; M. Bućan

    2017-01-01

    Today, big data has become widely available, and new technologies are being developed for big data storage architectures and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting various domains. The International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms, leads to new insights and operational improvements in transport. Based on the telecom customer data, the...

  18. ALICE: Simulated lead-lead collision

    CERN Multimedia

    2003-01-01

    This track is an example of simulated data modelled for the ALICE detector on the Large Hadron Collider (LHC) at CERN, which will begin taking data in 2008. ALICE will focus on the study of collisions between nuclei of lead, a heavy element that produces many different particles when collided. It is hoped that these collisions will produce a new state of matter known as the quark-gluon plasma, which existed billionths of a second after the Big Bang.

  19. Loss of Nfkb1 leads to early onset aging.

    Science.gov (United States)

    Bernal, Giovanna M; Wahlstrom, Joshua S; Crawley, Clayton D; Cahill, Kirk E; Pytel, Peter; Liang, Hua; Kang, Shijun; Weichselbaum, Ralph R; Yamini, Bakhtiar

    2014-11-01

    NF-κB is a major regulator of age-dependent gene expression and the p50/NF-κB1 subunit is an integral modulator of NF-κB signaling. Here, we examined Nfkb1-/- mice to investigate the relationship between this subunit and aging. Although Nfkb1-/- mice appear similar to littermates at six months of age, by 12 months they have a higher incidence of several observable age-related phenotypes. In addition, aged Nfkb1-/- animals have increased kyphosis, decreased cortical bone, increased brain GFAP staining and a decrease in overall lifespan compared to Nfkb1+/+. In vitro, serially passaged primary Nfkb1-/- MEFs have more senescent cells than comparable Nfkb1+/+ MEFs. Also, Nfkb1-/- MEFs have greater amounts of phospho-H2AX foci and lower levels of spontaneous apoptosis than Nfkb1+/+, findings that are mirrored in the brains of Nfkb1-/- animals compared to Nfkb1+/+. Finally, in wildtype animals a substantial decrease in p50 DNA binding is seen in aged tissue compared to young. Together, these data show that loss of Nfkb1 leads to early animal aging that is associated with reduced apoptosis and increased cellular senescence. Moreover, loss of p50 DNA binding is a prominent feature of aged mice relative to young. These findings support the strong link between the NF-κB pathway and mammalian aging.

  20. Big Argumentation?

    Directory of Open Access Journals (Sweden)

    Daniel Faltesek

    2013-08-01

    Full Text Available Big Data is nothing new. Public concern regarding the mass diffusion of data has appeared repeatedly with computing innovations; in the form it took before Big Data, it was most recently referred to as the information explosion. In this essay, I argue that the appeal of Big Data is not a function of computational power, but of a synergistic relationship between aesthetic order and a politics evacuated of meaningful public deliberation. Understanding, and challenging, Big Data requires attention to the aesthetics of data visualization and the ways in which those aesthetics would seem to depoliticize information. The conclusion proposes an alternative argumentative aesthetic as the appropriate response to the depoliticization posed by the popular imaginary of Big Data.

  1. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies

    Directory of Open Access Journals (Sweden)

    Alexandre G. de Brevern

    2015-01-01

    Full Text Available Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries.

  2. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies

    Science.gov (United States)

    de Brevern, Alexandre G.; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

    Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries. PMID:26125026

  3. Big Data in Caenorhabditis elegans: quo vadis?

    Science.gov (United States)

    Hutter, Harald; Moerman, Donald

    2015-11-05

    A clear definition of what constitutes "Big Data" is difficult to identify, but we find it most useful to define Big Data as a data collection that is complete. By this criterion, researchers on Caenorhabditis elegans have a long history of collecting Big Data, since the organism was selected with the idea of obtaining a complete biological description and understanding of development. The complete wiring diagram of the nervous system, the complete cell lineage, and the complete genome sequence provide a framework to phrase and test hypotheses. Given this history, it might be surprising that the number of "complete" data sets for this organism is actually rather small--not because of lack of effort, but because most types of biological experiments are not currently amenable to complete large-scale data collection. Many are also not inherently limited, so that it becomes difficult to even define completeness. At present, we only have partial data on mutated genes and their phenotypes, gene expression, and protein-protein interaction--important data for many biological questions. Big Data can point toward unexpected correlations, and these unexpected correlations can lead to novel investigations; however, Big Data cannot establish causation. As a result, there is much excitement about Big Data, but there is also a discussion on just what Big Data contributes to solving a biological problem. Because of its relative simplicity, C. elegans is an ideal test bed to explore this issue and at the same time determine what is necessary to build a multicellular organism from a single cell. © 2015 Hutter and Moerman. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  4. Surface-water quality and suspended-sediment quantity and quality within the Big River Basin, southeastern Missouri, 2011-13

    Science.gov (United States)

    Barr, Miya N.

    2016-01-28

    Missouri was the leading producer of lead in the United States—as well as the world—for more than a century. One of the lead sources is known as the Old Lead Belt, located in southeast Missouri. The primary ore mineral in the region is galena, which can be found both in surface deposits and underground as deep as 200 feet. More than 8.5 million tons of lead were produced from the Old Lead Belt before operations ceased in 1972. Although active lead mining has ended, the effects of mining activities still remain in the form of large mine waste piles on the landscape, typically near tributaries and the main stem of the Big River, which drains the Old Lead Belt. Six large mine waste piles, encompassing more than 2,800 acres, exist within the Big River Basin. These six mine waste piles have been an available source of trace element-rich suspended sediments, transported by natural erosional processes downstream into the Big River.

  5. Where Are the Logical Errors in the Theory of Big Bang?

    Science.gov (United States)

    Kalanov, Temur Z.

    2015-04-01

    The critical analysis of the foundations of the theory of Big Bang is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. It is argued that the starting point of the theory of Big Bang contains three fundamental logical errors. The first error is the assumption that a macroscopic object (having qualitative determinacy) can have an arbitrarily small size and can be in the singular state (i.e., in the state that has no qualitative determinacy). This assumption implies that the transition, (macroscopic object having the qualitative determinacy) --> (singular state of matter that has no qualitative determinacy), leads to loss of the information contained in the macroscopic object. The second error is the assumption that there are the void and the boundary between matter and void. But if such a boundary existed, then it would mean that the void has dimensions and can be measured. The third error is the assumption that the singular state of matter can make a transition into the normal state without the existence of a program of qualitative and quantitative development of the matter, and without the controlling influence of another (independent) object. However, these assumptions conflict with practice and, consequently, with formal logic, rational dialectics, and cybernetics. Indeed, from the point of view of cybernetics, the transition, (singular state of the Universe) --> (normal state of the Universe), would be possible only if there was the Managed Object that is outside the Universe and has full, complete, and detailed information about the Universe. Thus, the theory of Big Bang is a scientific fiction.

  6. Big data

    DEFF Research Database (Denmark)

    Madsen, Anders Koed; Flyverbom, Mikkel; Hilbert, Martin

    2016-01-01

    The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Second, it suggests a research agenda built around a set of subthemes that each deserve dedicated scrutiny when studying the interplay between big data...

  7. Big data computing

    CERN Document Server

    Akerkar, Rajendra

    2013-01-01

    Due to market forces and technological evolution, Big Data computing is developing at an increasing rate. A wide variety of novel approaches and tools have emerged to tackle the challenges of Big Data, creating both more opportunities and more challenges for students and professionals in the field of data computation and analysis. Presenting a mix of industry cases and theory, Big Data Computing discusses the technical and practical issues related to Big Data in intelligent information management. Emphasizing the adoption and diffusion of Big Data tools and technologies in industry, the book i

  8. Loss of FTO antagonises Wnt signaling and leads to developmental defects associated with ciliopathies.

    Directory of Open Access Journals (Sweden)

    Daniel P S Osborn

    Full Text Available Common intronic variants in the human fat mass and obesity-associated gene (FTO) are found to be associated with an increased risk of obesity. Overexpression of FTO correlates with increased food intake and obesity, whilst loss-of-function results in lethality and severe developmental defects. Despite intense scientific discussions around the role of FTO in energy metabolism, the function of FTO during development remains undefined. Here, we show that loss of Fto leads to developmental defects such as growth retardation, craniofacial dysmorphism and aberrant neural crest cell migration in zebrafish. We find that the important developmental pathway, Wnt, is compromised in the absence of FTO, both in vivo (zebrafish) and in vitro (Fto(-/-) MEFs and HEK293T). Canonical Wnt signalling is downregulated by abrogated β-Catenin translocation to the nucleus, whilst the non-canonical Wnt/Ca(2+) pathway is activated via its key signal mediators CaMKII and PKCδ. Moreover, we demonstrate that loss of Fto results in short, absent or disorganised cilia, leading to situs inversus, renal cystogenesis, neural crest cell defects and microcephaly in zebrafish. Congruently, Fto knockout mice display aberrant tissue-specific cilia. These data identify FTO as a protein-regulator of the balanced activation between canonical and non-canonical branches of the Wnt pathway. Furthermore, we present the first evidence that FTO plays a role in development and cilia formation/function.

  9. Water Loss in Small Settlements

    OpenAIRE

    Mindaugas Rimeika; Anželika Jurkienė

    2014-01-01

    The main performance indicators of a water supply system include the quality and safety of water, continuous work, relevant pressure and small water loss. The majority of foreign and local projects on reducing water loss have been carried out in the water supply systems of metropolitans; however, the specificity of small settlements differs from that of big cities. Differences can be observed not only in the development of infrastructure and technical indicators but also in the features of wa...

  10. Natural regeneration processes in big sagebrush (Artemisia tridentata)

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Big sagebrush, Artemisia tridentata Nuttall (Asteraceae), is the dominant plant species of large portions of semiarid western North America. However, much of the historical big sagebrush vegetation has been removed or modified. Thus, regeneration is recognized as an important component for land management. Limited knowledge about key regeneration processes, however, represents an obstacle to identifying successful management practices and to gaining greater insight into the consequences of increasing disturbance frequency and global change. Therefore, our objective is to synthesize knowledge about natural big sagebrush regeneration. We identified and characterized the controls of big sagebrush seed production, germination, and establishment. The largest knowledge gaps and associated research needs include quiescence and dormancy of embryos and seedlings; variation in seed production and germination percentages; the wet-thermal time model of germination; responses to frost events (including freezing/thawing of soils), CO2 concentration, and nutrients in combination with water availability; suitability of microsite vs. site conditions; competitive ability as well as seedling growth responses; and differences among subspecies and ecoregions. Potential impacts of climate change on big sagebrush regeneration include the possibility that temperature increases will have little direct influence on regeneration, given the broad temperature optimum for regeneration, whereas indirect effects could include selection for populations with less stringent seed dormancy. Drier conditions will have direct negative effects on germination and seedling survival and could also lead to lighter seeds, which lowers germination success further. The short seed dispersal distance of big sagebrush may limit its tracking of suitable climate, whereas the low competitive ability of big sagebrush seedlings may limit successful competition with species that track climate. An improved understanding of the

  11. Repeated lifestyle interventions lead to progressive weight loss

    DEFF Research Database (Denmark)

    Dandanell, Sune; Ritz, Christian; Verdich, Elisabeth

    2017-01-01

    in one to four 11-12 week lifestyle interventions (residential weight loss programme, mixed activities). Weight loss was promoted through a hypocaloric diet (-500 to -700 kcal/day) and daily physical activity (1-3 hours/day). Primary outcomes were weight loss and change in body composition (bioimpedance...
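
    For scale, a rough expectation for such a deficit can be computed in Python with the common approximation of about 7,700 kcal per kilogram of body fat; this is a rule of thumb, not a figure from the study, and it overstates long-term loss because energy expenditure adapts during weight loss.

      # Rough expected weight loss from a sustained energy deficit.
      # 7,700 kcal/kg fat is a common rule of thumb, not a figure from the study.
      deficit_kcal_per_day = 600     # midpoint of the -500 to -700 kcal/day range
      weeks = 11.5                   # midpoint of the 11-12 week intervention
      kcal_per_kg_fat = 7700

      kg_lost = deficit_kcal_per_day * weeks * 7 / kcal_per_kg_fat
      print(f"~{kg_lost:.1f} kg per intervention")   # about 6.3 kg, before adaptation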

  12. Toward effective software solutions for big biology

    NARCIS (Netherlands)

    Prins, Pjotr; de Ligt, Joep; Tarasov, Artem; Jansen, Ritsert C; Cuppen, Edwin; Bourne, Philip E

    2015-01-01

    Leading scientists tell us that the problem of large data and data integration, referred to as 'big data', is acute and hurting research. Recently, Snijder et al.1 suggested a culture change in which scientists would aim to share high-dimensional data among laboratories. It is important to realize

  13. From big bang to big crunch and beyond

    International Nuclear Information System (INIS)

    Elitzur, Shmuel; Rabinovici, Eliezer; Giveon, Amit; Kutasov, David

    2002-01-01

    We study a quotient Conformal Field Theory, which describes a 3+1 dimensional cosmological spacetime. Part of this spacetime is the Nappi-Witten (NW) universe, which starts at a 'big bang' singularity, expands and then contracts to a 'big crunch' singularity at a finite time. The gauged WZW model contains a number of copies of the NW spacetime, with each copy connected to the preceding one and to the next one at the respective big bang/big crunch singularities. The sequence of NW spacetimes is further connected at the singularities to a series of non-compact static regions with closed timelike curves. These regions contain boundaries, on which the observables of the theory live. This suggests a holographic interpretation of the physics. (author)

  14. BIG data - BIG gains? Empirical evidence on the link between big data analytics and innovation

    OpenAIRE

    Niebel, Thomas; Rasel, Fabienne; Viete, Steffen

    2017-01-01

    This paper analyzes the relationship between firms’ use of big data analytics and their innovative performance in terms of product innovations. Since big data technologies provide new data information practices, they create novel decision-making possibilities, which are widely believed to support firms’ innovation process. Applying German firm-level data within a knowledge production function framework we find suggestive evidence that big data analytics is a relevant determinant for the likel...

  15. The universe before the Big Bang cosmology and string theory

    CERN Document Server

    Gasperini, Maurizio

    2008-01-01

    Terms such as "expanding Universe", "big bang", and "initial singularity" are nowadays part of our common language. The idea that the Universe we observe today originated from an enormous explosion (big bang) is now well known and widely accepted, at all levels, in modern popular culture. But what happens to the Universe before the big bang? And would it make any sense at all to ask such a question? In fact, recent progress in theoretical physics, and in particular in String Theory, suggests answers to the above questions, providing us with mathematical tools able in principle to reconstruct the history of the Universe even for times before the big bang. In the emerging cosmological scenario the Universe, at the epoch of the big bang, instead of being a "new born baby" was actually a rather "aged" creature in the middle of its possibly infinitely enduring evolution. The aim of this book is to convey this picture in non-technical language accessible also to non-specialists. The author, himself a leading cosm...

  16. Chronic lead exposure induces cochlear oxidative stress and potentiates noise-induced hearing loss.

    Science.gov (United States)

    Jamesdaniel, Samson; Rosati, Rita; Westrick, Judy; Ruden, Douglas M

    2018-08-01

    Acquired hearing loss is caused by complex interactions of multiple environmental risk factors, such as elevated levels of lead and noise, which are prevalent in urban communities. This study delineates the mechanism underlying lead-induced auditory dysfunction and its potential interaction with noise exposure. Young-adult C57BL/6 mice were exposed to: 1) control conditions; 2) 2 mM lead acetate in drinking water for 28 days; 3) 90 dB broadband noise 2 h/day for two weeks; and 4) both lead and noise. Blood lead levels were measured by inductively coupled plasma mass spectrometry (ICP-MS), lead-induced cochlear oxidative stress signaling was assessed using targeted gene arrays, and hearing thresholds were assessed by recording auditory brainstem responses. Chronic lead exposure downregulated cochlear Sod1, Gpx1, and Gstk1, which encode critical antioxidant enzymes, and upregulated ApoE, Hspa1a, Ercc2, Prnp, Ccl5, and Sqstm1, which are indicative of cellular apoptosis. Isolated exposure to lead or noise induced 8-12 dB and 11-25 dB shifts in hearing thresholds, respectively. Combined exposure induced 18-30 dB shifts, which were significantly higher than those observed with isolated exposures. This study suggests that chronic exposure to lead induces cochlear oxidative stress and potentiates noise-induced hearing impairment, possibly through parallel pathways. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Quantum Big Bang without fine-tuning in a toy-model

    International Nuclear Information System (INIS)

    Znojil, Miloslav

    2012-01-01

    The question of possible physics before Big Bang (or after Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to a mere finite number. The Hilbert space of states is then allowed to be time-dependent and singular at the critical time t = tc. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown interruptible, without any fine-tuning, at the instant of its bang or collapse t = tc.

  18. Quantum Big Bang without fine-tuning in a toy-model

    Science.gov (United States)

    Znojil, Miloslav

    2012-02-01

    The question of possible physics before Big Bang (or after Big Crunch) is addressed via a schematic non-covariant simulation of the loss of observability of the Universe. Our model is drastically simplified by the reduction of its degrees of freedom to a mere finite number. The Hilbert space of states is then allowed to be time-dependent and singular at the critical time t = tc. This option circumvents several traditional theoretical difficulties in a way illustrated via solvable examples. In particular, the unitary evolution of our toy-model quantum Universe is shown interruptible, without any fine-tuning, at the instant of its bang or collapse t = tc.

  19. Particle Physics Catalysis of Thermal Big Bang Nucleosynthesis

    International Nuclear Information System (INIS)

    Pospelov, Maxim

    2007-01-01

    We point out that the existence of metastable (τ > 10³ s), negatively charged, electroweak-scale particles (X⁻) alters the predictions for lithium and other primordial elemental abundances for A > 4 via the formation of bound states with nuclei during big bang nucleosynthesis. In particular, we show that the bound states of X⁻ with helium, formed at temperatures of about T = 10⁸ K, lead to the catalytic enhancement of ⁶Li production, which is 8 orders of magnitude more efficient than the standard channel. In particle physics models where the subsequent decay of X⁻ does not lead to large nonthermal big bang nucleosynthesis effects, this directly translates into the level of sensitivity to the number density of long-lived X⁻ particles (τ > 10⁵ s) relative to entropy, n(X⁻)/s ≲ 10⁻¹⁷, which is one of the most stringent probes of electroweak-scale remnants known to date.
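
    The formation temperature quoted above can be checked against the point-charge Coulombic estimate of the binding energy of a heavy negative particle to ⁴He (a standard back-of-envelope estimate, not the paper's detailed treatment, which accounts for the helium charge radius and obtains a somewhat smaller value):

      % Point-charge (X^- He) binding energy for m_X >> m_He, so the reduced mass is ~ m_He:
      \[
        E_b \simeq \frac{Z_{\mathrm{He}}^2 \alpha^2 m_{\mathrm{He}}}{2}
            = \frac{4 \times (1/137)^2 \times 3727\,\mathrm{MeV}}{2}
            \approx 0.4\,\mathrm{MeV},
      \]

    so bound-state formation at T ≈ 10⁸ K ≈ 9 keV, a few dozen times below E_b, parallels ordinary recombination, where bound states survive only at temperatures well below the binding energy.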

  20. Big Bear Exploration Ltd. 1998 annual report

    International Nuclear Information System (INIS)

    1999-01-01

    During the first quarter of 1998 Big Bear completed a purchase of additional assets in the Rainbow Lake area of Alberta, a light oil purchase that was financed with new equity and bank debt. The business plan was to immediately exploit these light oil assets, the result of which would be increased reserves, production and cash flow. Although drilling results in the first quarter on the Rainbow Lake properties were mixed, oil prices started to free-fall and drilling costs were much higher than expected. As a result, the company completed a reduced program which resulted in less incremental loss and cash flow than it had budgeted for. On April 29, 1998, Big Bear entered into an agreement with Belco Oil and Gas Corp. and Moan Investments Ltd. for the issuance of convertible preferred shares at a gross value of $15,750,000, which shares were eventually converted at 70 cents per share to common equity. As a result of the continued plunge in oil prices, the lending value of the company's assets continued to fall, requiring it to take action in order to meet its financial commitments. Late in the third quarter Big Bear issued equity for proceeds of $11,032,000, which further reduced the company's debt. Although the company has been extremely active in identifying and pursuing acquisition opportunities, it became evident that Belco Oil and Gas Corp. and Big Bear did not share common criteria for acquisitions, which resulted in the restructuring of their relationship in the fourth quarter. With the future of oil prices in question, Big Bear decided that it would change its focus to natural gas and would refocus its efforts to acquire natural gas assets to fuel its growth. The purchase of Blue Range put Big Bear in a difficult position in terms of the latter's growth. In summary, what started as a difficult year ended in disappointment

  1. Big sized players on the European Union’s financial advisory market

    Directory of Open Access Journals (Sweden)

    Nicolae, C.

    2013-06-01

    Full Text Available The paper presents the activity and the objectives of “The Big Four” Group of Financial Advisory Firms. The “Big Four” are the four largest international professional services networks in accountancy, offering audit, assurance, tax, consulting, advisory, actuarial, corporate finance and legal services. They handle the vast majority of audits for publicly traded companies as well as many private companies, creating an oligopoly in auditing large companies. It is reported that the Big Four audit all but one of the companies that constitute the FTSE 100, and 240 of the companies in the FTSE 250, an index of the leading mid-cap listed companies.

  2. Drosophila Big bang regulates the apical cytocortex and wing growth through junctional tension.

    Science.gov (United States)

    Tsoumpekos, Giorgos; Nemetschke, Linda; Knust, Elisabeth

    2018-03-05

    Growth of epithelial tissues is regulated by a plethora of components, including signaling and scaffolding proteins, but also by junctional tension, mediated by the actomyosin cytoskeleton. However, how these players are spatially organized and functionally coordinated is not well understood. Here, we identify the Drosophila melanogaster scaffolding protein Big bang as a novel regulator of growth in epithelial cells of the wing disc by ensuring proper junctional tension. Loss of big bang results in the reduction of the regulatory light chain of nonmuscle myosin, Spaghetti squash. This is associated with an increased apical cell surface, decreased junctional tension, and smaller wings. Strikingly, these phenotypic traits of big bang mutant discs can be rescued by expressing constitutively active Spaghetti squash. Big bang colocalizes with Spaghetti squash in the apical cytocortex and is found in the same protein complex. These results suggest that in epithelial cells of developing wings, the scaffolding protein Big bang controls apical cytocortex organization, which is important for regulating cell shape and tissue growth. © 2018 Tsoumpekos et al.

  3. What does big data mean for personalized medicine?

    NARCIS (Netherlands)

    Sieverink, Floor; Tjin-Kam-Jet-Siemons, Liseth; Braakman-Jansen, Louise Marie Antoinette; van Gemert-Pijnen, Julia E.W.C.

    2015-01-01

    Background The rapid and ongoing digitalization of society leads to an exponential growth of both structured and unstructured data, so-called Big Data. This wealth of information opens the door to the development of more sophisticated personalized health technologies. The analysis of log data from

  4. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  5. Big data, big knowledge: big data for personalized healthcare.

    Science.gov (United States)

    Viceconti, Marco; Hunter, Peter; Hose, Rod

    2015-07-01

    The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organism scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine become the research priority.

  6. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  7. Conociendo Big Data

    Directory of Open Access Journals (Sweden)

    Juan José Camargo-Vega

    2014-12-01

    Full Text Available Given the importance the term Big Data has acquired, this research sought to study and analyze exhaustively the state of the art of Big Data; in addition, as a second objective, it analyzed the characteristics, tools, technologies, models and standards related to Big Data; and finally it sought to identify the most relevant characteristics in Big Data management, so that everything concerning the central topic of the research may be known. The methodology used included reviewing the state of the art of Big Data and showing its current situation; describing Big Data technologies; presenting some of the NoSQL databases, which are those that allow processing data in unstructured formats; and showing data models and the technologies for analyzing them, ending with some benefits of Big Data. The methodological design used for the research was non-experimental, since no variables are manipulated, and exploratory, because this research begins to explore the Big Data environment.

  8. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system to tackle efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, with no requirement of being aware of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
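
    The rule-based cleansing BigDansing describes can be illustrated with a toy example. The sketch below (our illustration in pandas, not BigDansing's actual API or rule language) detects violations of a hypothetical functional dependency "zipcode determines city"; a system like BigDansing would evaluate such a rule with distributed scans and specialized join operators rather than a single-machine groupby.

      import pandas as pd

      # Toy table containing a deliberate violation of "zipcode -> city".
      df = pd.DataFrame({
          "zipcode": ["10001", "10001", "94105", "94105"],
          "city":    ["New York", "Newark", "San Francisco", "San Francisco"],
      })

      # Group tuples by the left-hand side of the dependency; any zipcode
      # mapped to more than one distinct city violates the rule.
      violations = (
          df.groupby("zipcode")["city"]
            .nunique()
            .loc[lambda counts: counts > 1]
      )
      print(violations)  # -> 10001 (mapped to two distinct cities)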

  9. Particle physics catalysis of thermal big bang nucleosynthesis.

    Science.gov (United States)

    Pospelov, Maxim

    2007-06-08

    We point out that the existence of metastable, τ > 10³ s, negatively charged electroweak-scale particles (X⁻) alters the predictions for lithium and other primordial elemental abundances for A > 4 via the formation of bound states with nuclei during big bang nucleosynthesis. In particular, we show that the bound states of X⁻ with helium, formed at temperatures of about T = 10⁸ K, lead to the catalytic enhancement of ⁶Li production, which is 8 orders of magnitude more efficient than the standard channel. In particle physics models where subsequent decay of X⁻ does not lead to large nonthermal big bang nucleosynthesis effects, this directly translates to the level of sensitivity to the number density of long-lived X⁻ particles (τ > 10⁵ s) relative to entropy of n_X⁻/s ≲ 3×10⁻¹⁷, which is one of the most stringent probes of electroweak-scale remnants known to date.

  10. Historical Trauma, Substance Use, and Indigenous Peoples: Seven Generations of Harm From a "Big Event".

    Science.gov (United States)

    Nutton, Jennifer; Fast, Elizabeth

    2015-01-01

    Indigenous peoples the world over have experienced and continue to experience the devastating effects of colonialism, including loss of life, land, language, culture, and identity. Indigenous peoples suffer disproportionately across many health risk factors, including an increased risk of substance use. We use the term "Big Event" to describe the historical trauma attributed to colonial policies as a potential pathway to explain the disparity in rates of substance use among many Indigenous populations. We present "Big Solutions" that have the potential to buffer the negative effects of the Big Event, including: (1) decolonizing strategies, (2) identity development, and (3) culturally adapted interventions. Study limitations are noted and needed future research is suggested.

  11. Characterizing Big Data Management

    Directory of Open Access Journals (Sweden)

    Rogério Rossi

    2015-06-01

    Full Text Available Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: technology, people and processes. Hence, this article discusses these dimensions: the technological dimension, which is related to storage, analytics and visualization of big data; the human dimension of big data; and the process management dimension, which approaches big data management from both technological and business perspectives.

  12. Big science

    CERN Multimedia

    Nadis, S

    2003-01-01

    " "Big science" is moving into astronomy, bringing large experimental teams, multi-year research projects, and big budgets. If this is the wave of the future, why are some astronomers bucking the trend?" (2 pages).

  13. Hackers ring up big phone bills for hospitals.

    Science.gov (United States)

    Gardner, E

    1992-11-02

    A big telephone bill--possibly in six figures--can be a painful way for a hospital to find out its phone system isn't secure. When unusually large long-distance bills start to show up, chances are a professional telephone hacker has broken in. According to experts, one in 15 businesses has been victimized by long-distance toll fraud, and loss estimates range from $900 million to $4 billion a year.

  14. Recent Development in Big Data Analytics for Business Operations and Risk Management.

    Science.gov (United States)

    Choi, Tsan-Ming; Chan, Hing Kai; Yue, Xiaohang

    2017-01-01

    "Big data" is an emerging topic and has attracted the attention of many researchers and practitioners in industrial systems engineering and cybernetics. Big data analytics would definitely lead to valuable knowledge for many organizations. Business operations and risk management can be a beneficiary as there are many data collection channels in the related industrial systems (e.g., wireless sensor networks, Internet-based systems, etc.). Big data research, however, is still in its infancy. Its focus is rather unclear and related studies are not well amalgamated. This paper aims to present the challenges and opportunities of big data analytics in this unique application domain. Technological development and advances for industrial-based business systems, reliability and security of industrial systems, and their operational risk management are examined. Important areas for future research are also discussed and revealed.

  15. Reducing Racial Disparities in Breast Cancer Care: The Role of 'Big Data'.

    Science.gov (United States)

    Reeder-Hayes, Katherine E; Troester, Melissa A; Meyer, Anne-Marie

    2017-10-15

    Advances in a wide array of scientific technologies have brought data of unprecedented volume and complexity into the oncology research space. These novel big data resources are applied across a variety of contexts-from health services research using data from insurance claims, cancer registries, and electronic health records, to deeper and broader genomic characterizations of disease. Several forms of big data show promise for improving our understanding of racial disparities in breast cancer, and for powering more intelligent and far-reaching interventions to close the racial gap in breast cancer survival. In this article we introduce several major types of big data used in breast cancer disparities research, highlight important findings to date, and discuss how big data may transform breast cancer disparities research in ways that lead to meaningful, lifesaving changes in breast cancer screening and treatment. We also discuss key challenges that may hinder progress in using big data for cancer disparities research and quality improvement.

  16. Big Data Analytics Platforms analyze from startups to traditional database players

    Directory of Open Access Journals (Sweden)

    Ionut TARANU

    2015-07-01

    Full Text Available Big data analytics enables organizations to analyze a mix of structured, semi-structured and unstructured data in search of valuable business information and insights. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits. With so many emerging trends around big data and analytics, IT organizations need to create conditions that will allow analysts and data scientists to experiment. "You need a way to evaluate, prototype and eventually integrate some of these technologies into the business," says Chris Curran [1]. In this paper we review 10 top big data analytics platforms and compare their key features.

  17. Big bang and big crunch in matrix string theory

    OpenAIRE

    Bedford, J; Papageorgakis, C; Rodríguez-Gómez, D; Ward, J

    2007-01-01

    Following the holographic description of linear dilaton null Cosmologies with a Big Bang in terms of Matrix String Theory put forward by Craps, Sethi and Verlinde, we propose an extended background describing a Universe including both Big Bang and Big Crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using Matrix String Theory. We provide a simple theory capable of...

  18. Big data and technology assessment: research topic or competitor?

    DEFF Research Database (Denmark)

    Rieder, Gernot; Simon, Judith

    2017-01-01

    With its promise to transform how we live, work, and think, Big Data has captured the imaginations of governments, businesses, and academia. However, the grand claims of Big Data advocates have been accompanied with concerns about potential detrimental implications for civil rights and liberties......, leading to a climate of clash and mutual distrust between different stakeholders. Throughout the years, the interdisciplinary field of technology assessment (TA) has gained considerable experience in studying socio-technical controversies and as such is exceptionally well equipped to assess the premises...... considerations on how TA might contribute to more responsible data-based research and innovation....

  19. Advances in mobile cloud computing and big data in the 5G era

    CERN Document Server

    Mastorakis, George; Dobre, Ciprian

    2017-01-01

    This book reports on the latest advances on the theories, practices, standards and strategies that are related to the modern technology paradigms, the Mobile Cloud computing (MCC) and Big Data, as the pillars and their association with the emerging 5G mobile networks. The book includes 15 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of Big Data and Mobile Cloud Computing, from basic concepts to advanced findings, reporting the state-of-the-art on Big Data management. It demonstrates and discusses methods and practices to improve multi-source Big Data manipulation techniques, as well as the integration of resources availability through the 3As (Anywhere, Anything, Anytime) paradigm, using the 5G access technologies.

  20. Bliver big data til big business?

    DEFF Research Database (Denmark)

    Ritter, Thomas

    2015-01-01

    Denmark has a digital infrastructure, a registration culture, and IT-competent employees and customers that make a leading position possible, but only if companies prepare themselves for the next big data wave.

  1. Big data uncertainties.

    Science.gov (United States)

    Maugis, Pierre-André G

    2018-07-01

    Big data, the idea that an ever-larger volume of information is constantly being recorded, suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusions obtained by statistical methods is increased when used on big data, either because of a systematic error (bias) or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but how to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
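
    The bias pitfall described in this abstract is easy to demonstrate numerically. The following sketch (our illustration; the population and the selection rule are hypothetical) shows a large, selection-biased sample estimating a population mean worse than a far smaller random sample.

      import numpy as np

      rng = np.random.default_rng(0)
      population = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

      # "Big" sample whose selection probability grows with the value itself,
      # mimicking a dataset that over-represents one part of the population.
      weights = np.exp(population)
      weights /= weights.sum()
      big_biased = rng.choice(population, size=100_000, p=weights)

      # Small but properly randomized sample.
      small_random = rng.choice(population, size=1_000)

      print(f"true mean:           {population.mean():+.3f}")
      print(f"biased big sample:   {big_biased.mean():+.3f}")    # systematic error
      print(f"small random sample: {small_random.mean():+.3f}")  # near the truth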

  2. Comments on Thomas Wartenberg's "Big Ideas for Little Kids"

    Science.gov (United States)

    Goering, Sara

    2012-01-01

    This short commentary offers praise for Tom Wartenberg's book "Big Ideas for Little Kids" and raises questions about who is best qualified to lead a philosophy discussion with children, and how we are to assess the benefits of doing philosophy with children.

  3. Telecom Big Data for Urban Transport Analysis - a Case Study of Split-Dalmatia County in Croatia

    Science.gov (United States)

    Baučić, M.; Jajac, N.; Bućan, M.

    2017-09-01

    Today, big data has become widely available and the new technologies are being developed for big data storage architecture and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms lead to new insights and operational improvements of transport. Based on the telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia is carried out as a part of the "IPA Adriatic CBC//N.0086/INTERMODAL" project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes the further use of big data used in the study.

  4. Challenges and potential solutions for big data implementations in developing countries.

    Science.gov (United States)

    Luna, D; Mayan, J C; García, M J; Almerares, A A; Househ, M

    2014-08-15

    The volume of data, the velocity with which they are generated, and their variety and lack of structure hinder their use. This creates the need to change the way information is captured, stored, processed, and analyzed, leading to the paradigm shift called Big Data. To describe the challenges and possible solutions for developing countries when implementing Big Data projects in the health sector. A non-systematic review of the literature was performed in PubMed and Google Scholar. The following keywords were used: "big data", "developing countries", "data mining", "health information systems", and "computing methodologies". A thematic review of selected articles was performed. There are challenges when implementing any Big Data program including exponential growth of data, special infrastructure needs, need for a trained workforce, need to agree on interoperability standards, privacy and security issues, and the need to include people, processes, and policies to ensure their adoption. Developing countries have particular characteristics that hinder further development of these projects. The advent of Big Data promises great opportunities for the healthcare field. In this article, we attempt to describe the challenges developing countries would face and enumerate the options to be used to achieve successful implementations of Big Data programs.

  5. HARNESSING BIG DATA VOLUMES

    Directory of Open Access Journals (Sweden)

    Bogdan DINU

    2014-04-01

    Full Text Available Big Data can revolutionize humanity. Hidden within the huge amounts and variety of the data we are creating we may find information, facts, social insights and benchmarks that were once virtually impossible to find or were simply nonexistent. Large volumes of data allow organizations to tap in real time the full potential of all the internal or external information they possess. Big data calls for quick decisions and innovative ways to assist customers and the society as a whole. Big data platforms and product portfolios will help customers harness the full value of big data volumes. This paper deals with technical and technological issues related to handling big data volumes in the Big Data environment.

  6. Big bang and big crunch in matrix string theory

    International Nuclear Information System (INIS)

    Bedford, J.; Ward, J.; Papageorgakis, C.; Rodriguez-Gomez, D.

    2007-01-01

    Following the holographic description of linear dilaton null cosmologies with a big bang in terms of matrix string theory put forward by Craps, Sethi, and Verlinde, we propose an extended background describing a universe including both big bang and big crunch singularities. This belongs to a class of exact string backgrounds and is perturbative in the string coupling far away from the singularities, both of which can be resolved using matrix string theory. We provide a simple theory capable of describing the complete evolution of this closed universe

  7. Big data a primer

    CERN Document Server

    Bhuyan, Prachet; Chenthati, Deepak

    2015-01-01

    This book is a collection of chapters written by experts on various aspects of big data. The book aims to explain what big data is and how it is stored and used. The book starts from  the fundamentals and builds up from there. It is intended to serve as a review of the state-of-the-practice in the field of big data handling. The traditional framework of relational databases can no longer provide appropriate solutions for handling big data and making it available and useful to users scattered around the globe. The study of big data covers a wide range of issues including management of heterogeneous data, big data frameworks, change management, finding patterns in data usage and evolution, data as a service, service-generated data, service management, privacy and security. All of these aspects are touched upon in this book. It also discusses big data applications in different domains. The book will prove useful to students, researchers, and practicing database and networking engineers.

  8. Discovery of an Unexplored Protein Structural Scaffold of Serine Protease from Big Blue Octopus (Octopus cyanea): A New Prospective Lead Molecule.

    Science.gov (United States)

    Panda, Subhamay; Kumari, Leena

    2017-01-01

    Serine proteases are a group of enzymes that hydrolyse the peptide bonds in proteins. In mammals, these enzymes help in the regulation of several major physiological functions such as digestion, blood clotting, responses of the immune system, reproductive functions and the complement system. Serine proteases obtained from the venom of the Octopodidae family are a relatively unexplored area of research. In the present work, we tried to effectively utilize a comparative composite molecular modeling technique. Our key aim was to propose the first molecular model structure of the unexplored serine protease 5 derived from the big blue octopus. The other objective of this study was to analyze the distribution of negatively and positively charged amino acids over the modeled structure, the distribution of secondary structural elements, the hydrophobic molecular surface and the electrostatic potential, with the aid of different bioinformatics tools. In the present study, the molecular model was generated with the help of the I-TASSER suite. Afterwards the refined structural model was validated with standard methods. For functional annotation of the protein molecule we used the Protein Information Resource (PIR) database. Serine protease 5 of the big blue octopus was analyzed with different bioinformatics algorithms for the distribution of negatively and positively charged amino acids over the modeled structure, the distribution of secondary structural elements, the hydrophobic molecular surface and the electrostatic potential. The functionally critical amino acids and ligand-binding site (LBS) of the modeled protein were determined using the COACH program. The molecular model data, together with other pertinent post-model analysis data, put forward molecular insight into the proteolytic activity of serine protease 5, which helps in the clear understanding of the procoagulant and anticoagulant characteristics of this natural lead molecule. Our approach was to investigate the octopus

  9. Enhanced Store-Operated Calcium Entry Leads to Striatal Synaptic Loss in a Huntington's Disease Mouse Model.

    Science.gov (United States)

    Wu, Jun; Ryskamp, Daniel A; Liang, Xia; Egorova, Polina; Zakharova, Olga; Hung, Gene; Bezprozvanny, Ilya

    2016-01-06

    In Huntington's disease (HD), mutant Huntingtin (mHtt) protein causes striatal neuron dysfunction, synaptic loss, and eventual neurodegeneration. To understand the mechanisms responsible for synaptic loss in HD, we developed a corticostriatal coculture model that features age-dependent dendritic spine loss in striatal medium spiny neurons (MSNs) from YAC128 transgenic HD mice. Age-dependent spine loss was also observed in vivo in YAC128 MSNs. To understand the causes of spine loss in YAC128 MSNs, we performed a series of mechanistic studies. We previously discovered that mHtt protein binds to type 1 inositol (1,4,5)-trisphosphate receptor (InsP3R1) and increases its sensitivity to activation by InsP3. We now report that the resulting increase in steady-state InsP3R1 activity reduces endoplasmic reticulum (ER) Ca²⁺ levels. Depletion of ER Ca²⁺ leads to overactivation of the neuronal store-operated Ca²⁺ entry (nSOC) pathway in YAC128 MSN spines. The synaptic nSOC pathway is controlled by the ER resident protein STIM2. We discovered that STIM2 expression is elevated in aged YAC128 striatal cultures and in YAC128 mouse striatum. Knock-down of InsP3R1 expression by antisense oligonucleotides or knock-down or knock-out of STIM2 resulted in normalization of nSOC and rescue of spine loss in YAC128 MSNs. The selective nSOC inhibitor EVP4593 was identified in our previous studies. We now demonstrate that EVP4593 reduces synaptic nSOC and rescues spine loss in YAC128 MSNs. Intraventricular delivery of EVP4593 in YAC128 mice rescued age-dependent striatal spine loss in vivo. Our results suggest EVP4593 and other inhibitors of the STIM2-dependent nSOC pathway as promising leads for HD therapeutic development. In Huntington's disease (HD) mutant Huntingtin (mHtt) causes early corticostriatal synaptic dysfunction and eventual neurodegeneration of medium spine neurons (MSNs) through poorly understood mechanisms. We report here that corticostriatal cocultures prepared from

  10. Microsoft big data solutions

    CERN Document Server

    Jorgensen, Adam; Welch, John; Clark, Dan; Price, Christopher; Mitchell, Brian

    2014-01-01

    Tap the power of Big Data with Microsoft technologies Big Data is here, and Microsoft's new Big Data platform is a valuable tool to help your company get the very most out of it. This timely book shows you how to use HDInsight along with HortonWorks Data Platform for Windows to store, manage, analyze, and share Big Data throughout the enterprise. Focusing primarily on Microsoft and HortonWorks technologies but also covering open source tools, Microsoft Big Data Solutions explains best practices, covers on-premises and cloud-based solutions, and features valuable case studies. Best of all,

  11. Exploring how pain leads to productivity loss in primary care consulters for osteoarthritis: a prospective cohort study.

    Science.gov (United States)

    Wilkie, Ross; Hay, Elaine M; Croft, Peter; Pransky, Glenn

    2015-01-01

    Osteoarthritis pain has become a leading cause of decreased productivity and work disability in older workers, a major concern in primary care. How osteoarthritis pain leads to decreased productivity at work is unclear; the aim of this study was to elucidate causal mechanisms and thus identify potential opportunities for intervention. Population-based prospective cohort study of primary care consulters with osteoarthritis. Path analysis was used to test proposed mechanisms by examining the association between pain at baseline and onset of work productivity loss at three years, for mediation by physical limitation, depression, poor sleep and poor coping mechanisms. High pain intensity was associated with onset of work productivity loss (adjusted odds ratio 2.5; 95% CI 1.3, 4.8). About half of the effect of pain on work productivity was a direct effect, and half was mediated by the impact of pain on physical function. Depression, poor sleep quality and poor coping did not mediate the association between high pain intensity and onset of work productivity loss. As pain is a major cause of work productivity loss, results suggest that decreasing pain should be a major focus. However, successfully improving function may have an indirect effect by decreasing the impact of pain on work productivity, especially important as significant pain reduction is often difficult to achieve. Although depression, sleep problems, and coping strategies may be directly related to work productivity loss, addressing these issues may not have much effect on the significant impact of pain on work productivity.
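
    For readers unfamiliar with path analysis, the sketch below shows the core mediation logic on simulated data: the indirect effect of pain on productivity loss is the product of the pain-to-limitation path and the limitation-to-outcome path. Variable names and coefficients are hypothetical, and the study's actual model (logistic outcomes, covariate adjustment) is more elaborate.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 500
      pain = rng.normal(size=n)
      limitation = 0.6 * pain + rng.normal(size=n)        # path a
      productivity_loss = (0.3 * pain                     # path c' (direct)
                           + 0.4 * limitation             # path b (mediator)
                           + rng.normal(size=n))

      # Path a: pain -> physical limitation.
      a = sm.OLS(limitation, sm.add_constant(pain)).fit().params[1]

      # Paths c' (direct) and b (mediator), estimated jointly.
      X = sm.add_constant(np.column_stack([pain, limitation]))
      fit = sm.OLS(productivity_loss, X).fit()
      direct, b = fit.params[1], fit.params[2]

      print(f"direct effect c'      = {direct:.2f}")
      print(f"indirect effect a * b = {a * b:.2f}")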

  12. Summary big data

    CERN Document Server

    2014-01-01

    This work offers a summary of the book "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier. The book explains that big data is where we use huge quantities of data to make better predictions based on identifying patterns in the data, rather than trying to understand the underlying causes in more detail. This summary highlights that big data will be a source of new economic value and innovation in the future. Moreover, it shows that it will

  13. Kasner asymptotics of mixmaster Horava-Witten and pre-big-bang cosmologies

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2001-01-01

    We discuss various superstring effective actions and, in particular, their common sector, which leads to the so-called pre-big-bang cosmology (cosmology in a weak coupling limit of heterotic superstring theory). Using the conformal relationship between these two theories we present Kasner asymptotic solutions of Bianchi type IX geometries within these theories and make predictions about the possible emergence of chaos. Finally, we present a possible method of generating Horava-Witten cosmological solutions out of the well-known general relativistic or pre-big-bang solutions

  14. Big Data in Health: a Literature Review from the Year 2005.

    Science.gov (United States)

    de la Torre Díez, Isabel; Cosgaya, Héctor Merino; Garcia-Zapirain, Begoña; López-Coronado, Miguel

    2016-09-01

    The information stored in healthcare systems has increased over the last ten years, leading it to be considered Big Data. There is a wealth of health information ready to be analysed. However, the sheer volume raises a challenge for traditional methods. The aim of this article is to conduct a cutting-edge study on Big Data in healthcare from 2005 to the present. This literature review will help researchers to know how Big Data has developed in the health industry and open up new avenues for research. Information searches have been made on various scientific databases such as Pubmed, Science Direct, Scopus and Web of Science for Big Data in healthcare. The search criteria were "Big Data" and "health" with a date range from 2005 to the present. A total of 9724 articles were found on the databases. 9515 articles were discarded as duplicates or for not having a title of interest to the study. 209 articles were read, with the resulting decision that 46 were useful for this study. 52.6 % of the articles used were found in Science Direct, 23.7 % in Pubmed, 22.1 % through Scopus and the remaining 2.6 % through the Web of Science. Big Data has undergone extremely high growth since 2011 and its use is becoming compulsory in developed nations and in an increasing number of developing nations. Big Data is a step forward and a cost reducer for public and private healthcare.

  15. APC loss in breast cancer leads to doxorubicin resistance via STAT3 activation.

    Science.gov (United States)

    VanKlompenberg, Monica K; Leyden, Emily; Arnason, Anne H; Zhang, Jian-Ting; Stefanski, Casey D; Prosperi, Jenifer R

    2017-11-28

    Resistance to chemotherapy is one of the leading causes of death from breast cancer. We recently established that loss of Adenomatous Polyposis Coli (APC) in the Mouse Mammary Tumor Virus - Polyoma middle T (MMTV-PyMT) transgenic mouse model results in resistance to cisplatin- or doxorubicin-induced apoptosis. Herein, we aim to establish the mechanism that is responsible for APC-mediated chemotherapeutic resistance. Our data demonstrate that MMTV-PyMT; Apc(Min/+) cells have increased signal transducer and activator of transcription 3 (STAT3) activation. STAT3 can be constitutively activated in breast cancer, maintains the tumor initiating cell (TIC) population, and upregulates multidrug resistance protein 1 (MDR1). The activation of STAT3 in the MMTV-PyMT; Apc(Min/+) model is independent of interleukin 6 (IL-6); however, enhanced EGFR expression in the MMTV-PyMT; Apc(Min/+) cells may be responsible for the increased STAT3 activation. Inhibiting STAT3 with the small molecule inhibitor A69 in combination with doxorubicin, but not cisplatin, restores drug sensitivity. A69 also decreases doxorubicin-enhanced MDR1 gene expression and the TIC population enhanced by loss of APC. In summary, these results have revealed the molecular mechanisms of APC loss in breast cancer that can guide future treatment plans to counteract chemotherapeutic resistance.

  16. Big Data: Philosophy, Emergence, Crowdledge, and Science Education

    Science.gov (United States)

    dos Santos, Renato P.

    2015-01-01

    Big Data already passed out of hype, is now a field that deserves serious academic investigation, and natural scientists should also become familiar with Analytics. On the other hand, there is little empirical evidence that any science taught in school is helping people to lead happier, more prosperous, or more politically well-informed lives. In…

  17. Loss-Aversion or Loss-Attention: The Impact of Losses on Cognitive Performance

    Science.gov (United States)

    Yechiam, Eldad; Hochman, Guy

    2013-01-01

    Losses were found to improve cognitive performance, and this has been commonly explained by increased weighting of losses compared to gains (i.e., loss aversion). We examine whether effects of losses on performance could be modulated by two alternative processes: an attentional effect leading to increased sensitivity to task incentives; and a…

  18. TELECOM BIG DATA FOR URBAN TRANSPORT ANALYSIS – A CASE STUDY OF SPLIT-DALMATIA COUNTY IN CROATIA

    Directory of Open Access Journals (Sweden)

    M. Baučić

    2017-09-01

    Full Text Available Today, big data has become widely available and the new technologies are being developed for big data storage architecture and big data analytics. An ongoing challenge is how to incorporate big data into GIS applications supporting the various domains. International Transport Forum explains how the arrival of big data and real-time data, together with new data processing algorithms lead to new insights and operational improvements of transport. Based on the telecom customer data, the Study of Tourist Movement and Traffic in Split-Dalmatia County in Croatia is carried out as a part of the “IPA Adriatic CBC//N.0086/INTERMODAL” project. This paper briefly explains the big data used in the study and the results of the study. Furthermore, this paper investigates the main considerations when using telecom customer big data: data privacy and data quality. The paper concludes with GIS visualisation and proposes the further use of big data used in the study.

  19. The use of big data in transfusion medicine.

    Science.gov (United States)

    Pendry, K

    2015-06-01

    'Big data' refers to the huge quantities of digital information now available that describe much of human activity. The science of data management and analysis is rapidly developing to enable organisations to convert data into useful information and knowledge. Electronic health records and new developments in Pathology Informatics now support the collection of 'big laboratory and clinical data', and these digital innovations are now being applied to transfusion medicine. To use big data effectively, we must address concerns about confidentiality and the need for a change in culture and practice, remove barriers to adopting common operating systems and data standards and ensure the safe and secure storage of sensitive personal information. In the UK, the aim is to formulate a single set of data and standards for communicating test results and so enable pathology data to contribute to national datasets. In transfusion, big data has been used for benchmarking, detection of transfusion-related complications, determining patterns of blood use and definition of blood order schedules for surgery. More generally, rapidly available information can monitor compliance with key performance indicators for patient blood management and inventory management leading to better patient care and reduced use of blood. The challenges of enabling reliable systems and analysis of big data and securing funding in the restrictive financial climate are formidable, but not insurmountable. The promise is that digital information will soon improve the implementation of best practice in transfusion medicine and patient blood management globally. © 2015 British Blood Transfusion Society.

  20. Can companies benefit from Big Science? Science and Industry

    CERN Document Server

    Autio, Erkko; Bianchi-Streit, M

    2003-01-01

    Several studies have indicated that there are significant returns on financial investment via "Big Science" centres. Financial multipliers ranging from 2.7 (ESA) to 3.7 (CERN) have been found, meaning that each Euro invested in industry by Big Science generates a two- to fourfold return for the supplier. Moreover, laboratories such as CERN are proud of their record in technology transfer, where research developments lead to applications in other fields - for example, with particle accelerators and detectors. Less well documented, however, is the effect of the experience that technological firms gain through working in the arena of Big Science. Indeed, up to now there has been no explicit empirical study of such benefits. Our findings reveal a variety of outcomes, which include technological learning, the development of new products and markets, and impact on the firm's organization. The study also demonstrates the importance of technologically challenging projects for staff at CERN. Together, these findings i...

  1. Big Data en surveillance, deel 1 : Definities en discussies omtrent Big Data

    NARCIS (Netherlands)

    Timan, Tjerk

    2016-01-01

    Following a (fairly short) lecture on surveillance and Big Data, I was asked to go somewhat deeper into the theme, the definitions, and the various questions related to big data. In this first part I will try to lay out a few things concerning Big Data theory and

  2. Superhorizon curvaton amplitude in inflation and pre-big bang cosmology

    DEFF Research Database (Denmark)

    Sloth, Martin Snoager

    2002-01-01

    We follow the evolution of the curvaton on superhorizon scales and check that the spectral tilt of the curvaton perturbations is unchanged as the curvaton becomes non-relativistic. Both inflation and pre-big bang cosmology can be treated since the curvaton mechanism within the two scenarios works...... the same way. We also discuss the amplitude of the density perturbations, which leads to some interesting constraints on the pre-big bang scenario. It is shown that within a SL(3,R) non-linear sigma model one of the three axions has the right coupling to the dilaton and moduli to yield a flat spectrum

  3. Characterizing Big Data Management

    OpenAIRE

    Rogério Rossi; Kechi Hirama

    2015-01-01

    Big data management is a reality for an increasing number of organizations in many areas and represents a set of challenges involving big data modeling, storage and retrieval, analysis and visualization. However, technological resources, people and processes are crucial to facilitate the management of big data in any kind of organization, allowing information and knowledge from a large volume of data to support decision-making. Big data management can be supported by these three dimensions: t...

  4. Does Implementation of Big Data Analytics Improve Firms’ Market Value? Investors’ Reaction in Stock Market

    Directory of Open Access Journals (Sweden)

    Hansol Lee

    2017-06-01

    Full Text Available Recently, due to the development of social media, multimedia, and the Internet of Things (IoT), various types of data have increased. As existing data analytics tools cannot cover this huge volume of data, big data analytics has become one of the emerging technologies for business today. Considering that big data analytics is an up-to-date term, in the present study, we investigated the impact of implementing big data analytics from a short-term perspective. We used an event study methodology to investigate the changes in stock price caused by announcements of big data analytics solution investments. A total of 54 investment announcements of firms publicly traded on NASDAQ and NYSE from 2010 to 2015 were collected. Our results empirically demonstrate that announcements of firms' investments in big data solutions lead to positive stock market reactions. In addition, we also found that investments in small vendors' solutions with industry-oriented functions tend to result in higher abnormal returns than those in big vendors' solutions with general functions. Finally, our results also suggest that stock market investors evaluate big data analytics investments of big firms more highly than those of small firms.
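
    As a rough sketch of the event-study methodology the authors describe, the snippet below fits a market model over an estimation window and sums abnormal returns over a short event window; the return series are simulated placeholders rather than the study's data.

      import numpy as np

      rng = np.random.default_rng(2)
      market = rng.normal(0.0005, 0.01, size=120)            # daily market returns
      stock = 0.001 + 1.2 * market + rng.normal(0, 0.01, 120)

      est, event = slice(0, 100), slice(100, 103)            # estimation vs event window
      beta, alpha = np.polyfit(market[est], stock[est], 1)   # market-model fit

      # Abnormal return = actual return minus the market-model prediction.
      abnormal = stock[event] - (alpha + beta * market[event])
      car = abnormal.sum()                                   # cumulative abnormal return
      print(f"CAR over the event window: {car:+.4f}")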

  5. Popular Weight Loss Strategies: a Review of Four Weight Loss Techniques.

    Science.gov (United States)

    Obert, Jonathan; Pearlman, Michelle; Obert, Lois; Chapin, Sarah

    2017-11-09

    The purpose of this paper is to review the epidemiology of obesity and the most recent literature on popular fad diets and exercise regimens that are used for weight loss. The weight loss plans that will be discussed in this article include juicing or detoxification diets, intermittent fasting, the paleo diet, and high intensity training. Despite the growing popularity of fad diets and exercise plans for weight loss, there are limited studies that actually suggest these particular regimens are beneficial and lead to long-term weight loss. Juicing or detoxification diets tend to work because they lead to extremely low caloric intake for short periods of time, however tend to lead to weight gain once a normal diet is resumed. Both intermittent fasting and the paleo diet lead to weight loss because of overall decreased caloric intake as well. Lastly, studies on short bursts of high intensity training have shown remarkable weight loss and improvements in cardiovascular health. Review of the literature does suggest that some fad diets and exercise plans do lead to weight loss; however, the studies are quite limited and are all based on the concept of caloric restriction.

  6. Algorithmic design considerations for geospatial and/or temporal big data

    CSIR Research Space (South Africa)

    Van Zyl, T

    2014-02-01

    Full Text Available Mining. In addition, ignoring the spatiotemporal autocorrelation in the data can lead to spurious results, for instance, the salt and pepper effect when clustering. The solution to the big data challenge is simple to describe yet in most cases...
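
    The spatial autocorrelation this fragment warns about is commonly quantified with Moran's I. Below is a minimal sketch (our illustration, not the paper's code): spatially clustered values on a tiny four-node lattice yield a clearly positive statistic.

      import numpy as np

      def morans_i(values, weights):
          """values: (n,) attribute vector; weights: (n, n) spatial weight matrix."""
          n = len(values)
          z = values - values.mean()
          numerator = n * (weights * np.outer(z, z)).sum()
          denominator = weights.sum() * (z ** 2).sum()
          return numerator / denominator

      # Four locations on a line with rook-style neighbor weights.
      w = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      clustered = np.array([1.0, 1.2, 5.0, 5.3])
      print(f"Moran's I: {morans_i(clustered, w):+.2f}")  # positive => clustering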

  7. Modeling regeneration responses of big sagebrush (Artemisia tridentata) to abiotic conditions

    Science.gov (United States)

    Schlaepfer, Daniel R.; Lauenroth, William K.; Bradford, John B.

    2014-01-01

    Ecosystems dominated by big sagebrush, Artemisia tridentata Nuttall (Asteraceae), which are the most widespread ecosystems in semiarid western North America, have been affected by land use practices and invasive species. Loss of big sagebrush and the decline of associated species, such as greater sage-grouse, are a concern to land managers and conservationists. However, big sagebrush regeneration remains difficult to achieve by restoration and reclamation efforts and there is no regeneration simulation model available. We present here the first process-based, daily time-step, simulation model to predict yearly big sagebrush regeneration including relevant germination and seedling responses to abiotic factors. We estimated values, uncertainty, and importance of 27 model parameters using a total of 1435 site-years of observation. Our model explained 74% of variability of number of years with successful regeneration at 46 sites. It also achieved 60% overall accuracy predicting yearly regeneration success/failure. Our results identify specific future research needed to improve our understanding of big sagebrush regeneration, including data at the subspecies level and improved parameter estimates for start of seed dispersal, modified wet thermal-time model of germination, and soil water potential influences. We found that relationships between big sagebrush regeneration and climate conditions were site specific, varying across the distribution of big sagebrush. This indicates that statistical models based on climate are unsuitable for understanding range-wide regeneration patterns or for assessing the potential consequences of changing climate on sagebrush regeneration and underscores the value of this process-based model. We used our model to predict potential regeneration across the range of sagebrush ecosystems in the western United States, which confirmed that seedling survival is a limiting factor, whereas germination is not. Our results also suggested that modeled
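
    The "modified wet thermal-time model of germination" mentioned in the abstract can be sketched in outline: thermal time accumulates only on days that are both warm enough and wet enough, and germination is predicted once a threshold is reached. All parameter values below are illustrative placeholders, not the paper's estimates.

      # Illustrative thresholds (placeholders, not the paper's estimates).
      T_BASE = 2.0        # deg C, base temperature for thermal-time accumulation
      SWP_MIN = -1.5      # MPa, driest soil water potential still counted as "wet"
      THERMAL_REQ = 80.0  # degree-days required for germination

      def germination_day(daily_temp, daily_swp):
          """Return the first day index meeting the thermal-time requirement, or None."""
          accumulated = 0.0
          for day, (temp, swp) in enumerate(zip(daily_temp, daily_swp)):
              if temp > T_BASE and swp > SWP_MIN:  # warm *and* wet day
                  accumulated += temp - T_BASE
              if accumulated >= THERMAL_REQ:
                  return day
          return None

      # Example: constant 12 deg C and moist soil accumulate 10 degree-days per
      # day, so the threshold is met on day index 7 (the eighth day).
      print(germination_day([12.0] * 30, [-0.5] * 30))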

  8. Big Data in der Cloud

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2014-01-01

    Technology assessment of big data, in particular cloud based big data services, for the Office for Technology Assessment at the German federal parliament (Bundestag).

  9. Big Data is invading big places as CERN

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Big Data technologies are becoming more popular with the constant growth of data generation in different fields such as social networks, the internet of things and laboratories like CERN. How is CERN making use of such technologies? How is machine learning applied at CERN with Big Data technologies? How much data do we move, and how is it analyzed? All these questions will be answered during the talk.

  10. The big bang

    International Nuclear Information System (INIS)

    Chown, Marcus.

    1987-01-01

    The paper concerns the 'Big Bang' theory of the creation of the Universe 15 thousand million years ago, and traces events which physicists predict occurred soon after the creation. The unified theory of the moment of creation, evidence of an expanding Universe, the X-boson (the particle produced very soon after the big bang, which vanished from the Universe one-hundredth of a second after the big bang), and the fate of the Universe are all discussed. (U.K.)

  11. Baryon symmetric big bang cosmology

    Science.gov (United States)

    Stecker, F. W.

    1978-01-01

    Both the quantum theory and Einstein's theory of special relativity lead to the supposition that matter and antimatter were produced in equal quantities during the big bang. It is noted that local matter/antimatter asymmetries may be reconciled with universal symmetry by assuming (1) a slight imbalance of matter over antimatter in the early universe, annihilation, and a subsequent remainder of matter; (2) localized regions of excess for one or the other type of matter as an initial condition; and (3) an extremely dense, high temperature state with zero net baryon number, i.e., matter/antimatter symmetry. Attention is given to the third assumption, which is the simplest and the most in keeping with current knowledge of the cosmos, especially as pertains to the universality of the 3 K background radiation. Mechanisms of galaxy formation are discussed, whereby matter and antimatter might have collided and annihilated each other, or have coexisted (and continue to coexist) at vast distances. It is pointed out that baryon symmetric big bang cosmology could probably be proved if an antinucleus could be detected in cosmic radiation.

  12. Occurrence and transport of nitrogen in the Big Sunflower River, northwestern Mississippi, October 2009-June 2011

    Science.gov (United States)

    Barlow, Jeannie R.B.; Coupe, Richard H.

    2014-01-01

    The Big Sunflower River Basin, located within the Yazoo River Basin, is subject to large annual inputs of nitrogen from agriculture, atmospheric deposition, and point sources. Understanding how nutrients are transported in, and downstream from, the Big Sunflower River is key to quantifying their eutrophying effects on the Gulf. Recent results from two Spatially Referenced Regressions on Watershed attributes (SPARROW models), which include the Big Sunflower River, indicate minimal losses of nitrogen in stream reaches typical of the main channels of major river systems. If SPARROW assumptions of relatively conservative transport of nitrogen are correct and surface-water losses through the bed of the Big Sunflower River are negligible, then options for managing nutrient loads to the Gulf of Mexico may be limited. Simply put, if every pound of nitrogen entering the Delta is eventually delivered to the Gulf, then the only effective nutrient management option in the Delta is to reduce inputs. If, on the other hand, it can be shown that processes within river channels of the Mississippi Delta act to reduce the mass of nitrogen in transport, other hydrologic approaches may be designed to further limit nitrogen transport. Direct validation of existing SPARROW models for the Delta is a first step in assessing the assumptions underlying those models. In order to characterize spatial and temporal variability of nitrogen in the Big Sunflower River Basin, water samples were collected at four U.S. Geological Survey gaging stations located on the Big Sunflower River between October 1, 2009, and June 30, 2011. Nitrogen concentrations were generally highest at each site during the spring of the 2010 water year and the fall and winter of the 2011 water year. Additionally, the dominant form of nitrogen varied between sites. For example, in samples collected from the most upstream site (Clarksdale), the concentration of organic nitrogen was generally higher than the concentrations of

  13. Small Big Data Congress 2017

    NARCIS (Netherlands)

    Doorn, J.

    2017-01-01

    TNO, in collaboration with the Big Data Value Center, presents the fourth Small Big Data Congress! Our congress aims at providing an overview of practical and innovative applications based on big data. Do you want to know what is happening in applied research with big data? And what can already be

  14. Big data opportunities and challenges

    CERN Document Server

    2014-01-01

    This ebook aims to give practical guidance for all those who want to understand big data better and learn how to make the most of it. Topics range from big data analysis, mobile big data and managing unstructured data to technologies, governance and intellectual property and security issues surrounding big data.

  15. Big Data and Neuroimaging.

    Science.gov (United States)

    Webb-Vargas, Yenny; Chen, Shaojie; Fisher, Aaron; Mejia, Amanda; Xu, Yuting; Crainiceanu, Ciprian; Caffo, Brian; Lindquist, Martin A

    2017-12-01

    Big Data are of increasing importance in a variety of areas, especially in the biosciences. There is an emerging critical need for Big Data tools and methods, because of the potential impact of advancements in these areas. Importantly, statisticians and statistical thinking have a major role to play in creating meaningful progress in this arena. We would like to emphasize this point in this special issue, as it highlights both the dramatic need for statistical input for Big Data analysis and for a greater number of statisticians working on Big Data problems. We use the field of statistical neuroimaging to demonstrate these points. As such, this paper covers several applications and novel methodological developments of Big Data tools applied to neuroimaging data.

  16. Big Data; A Management Revolution : The emerging role of big data in businesses

    OpenAIRE

    Blasiak, Kevin

    2014-01-01

    Big data is a term that was coined in 2012 and has since then emerged as one of the top trends in business and technology. Big data is an agglomeration of different technologies resulting in data processing capabilities that were previously unattainable. Big data is generally characterized by three factors: volume, velocity, and variety. These three factors distinguish it from traditional data use. The possibilities to utilize this technology are vast. Big data technology has touch points in differ...

  17. Endless universe beyond the big bang

    CERN Document Server

    Steinhardt, Paul J

    2007-01-01

    The Big Bang theory—widely regarded as the leading explanation for the origin of the universe—posits that space and time sprang into being about 14 billion years ago in a hot, expanding fireball of nearly infinite density. Over the last three decades the theory has been repeatedly revised to address such issues as how galaxies and stars first formed and why the expansion of the universe is speeding up today. Furthermore, an explanation has yet to be found for what caused the Big Bang in the first place. In Endless Universe, Paul J. Steinhardt and Neil Turok, both distinguished theoretical physicists, present a bold new cosmology. Steinhardt and Turok “contend that what we think of as the moment of creation was simply part of an infinite cycle of titanic collisions between our universe and a parallel world” (Discover). They recount the remarkable developments in astronomy, particle physics, and superstring theory that form the basis for their groundbreaking “Cyclic Universe” theory. According to t...

  18. Loss of ATF2 function leads to cranial motoneuron degeneration during embryonic mouse development.

    Directory of Open Access Journals (Sweden)

    Julien Ackermann

    2011-04-01

    Full Text Available The AP-1 family transcription factor ATF2 is essential for development and tissue maintenance in mammals. In particular, ATF2 is highly expressed and activated in the brain and previous studies using mouse knockouts have confirmed its requirement in the cerebellum as well as in vestibular sense organs. Here we present the analysis of the requirement for ATF2 in CNS development in mouse embryos, specifically in the brainstem. We discovered that neuron-specific inactivation of ATF2 leads to significant loss of motoneurons of the hypoglossal, abducens and facial nuclei. While the generation of ATF2 mutant motoneurons appears normal during early development, they undergo caspase-dependent and independent cell death during later embryonic and foetal stages. The loss of these motoneurons correlates with increased levels of stress activated MAP kinases, JNK and p38, as well as aberrant accumulation of phosphorylated neurofilament proteins, NF-H and NF-M, known substrates for these kinases. This, together with other neuropathological phenotypes, including aberrant vacuolisation and lipid accumulation, indicates that deficiency in ATF2 leads to neurodegeneration of subsets of somatic and visceral motoneurons of the brainstem. It also confirms that ATF2 has a critical role in limiting the activities of stress kinases JNK and p38 which are potent inducers of cell death in the CNS.

  19. THE 2H(α,γ)6Li REACTION AT LUNA AND BIG BANG NUCLEOSYNTHESIS

    Directory of Open Access Journals (Sweden)

    Carlo Gustavino

    2013-12-01

    Full Text Available The 2H(α,γ)6Li reaction is the leading process for the production of 6Li in standard Big Bang Nucleosynthesis. Recent observations of lithium abundance in metal-poor halo stars suggest that there might be a 6Li plateau, similar to the well-known Spite plateau of 7Li. This calls for a re-investigation of the standard production channel for 6Li. As the 2H(α,γ)6Li cross section drops steeply at low energy, it has never before been studied directly at Big Bang energies. For the first time the reaction has been studied directly at Big Bang energies at the LUNA accelerator. The preliminary data and their implications for Big Bang nucleosynthesis and the purported 6Li problem will be shown.
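
    For context (a standard textbook relation, not taken from the record), the steep low-energy fall-off of such charged-particle cross sections is conventionally factored into a slowly varying astrophysical S-factor and a Coulomb-barrier penetration term,

        \sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta(E)}, \qquad \eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},

    which is why direct measurements at Big Bang energies, as at the LUNA underground accelerator, are so demanding.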

  20. Supporting diagnosis and treatment in medical care based on Big Data processing.

    Science.gov (United States)

    Lupşe, Oana-Sorina; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara; Bernard, Elena

    2014-01-01

    With information and data in all domains growing every day, it is difficult to manage and extract useful knowledge for specific situations. This paper presents an integrated system architecture to support the activity of Ob-Gyn departments, with further developments in using new technology to manage Big Data processing - using Google BigQuery - in the medical domain. The data collected and processed with Google BigQuery comes from different sources: two Obstetrics & Gynaecology departments, the TreatSuggest application - an application for suggesting treatments - and a home foetal surveillance system. Data is uploaded to Google BigQuery from Bega Hospital Timişoara, Romania. The analysed data is useful for medical staff, researchers and statisticians from the public health domain. The current work describes the technological architecture and its processing possibilities, which in the future will be validated against quality criteria and should lead to a better decision process in diagnosis and public health.
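
    To make the BigQuery step concrete, here is a minimal sketch of the kind of query such a system might run; it assumes the google-cloud-bigquery Python client is installed and authenticated, and the project, dataset, table, and column names are hypothetical, not those of the system described above.

        # Hypothetical aggregation over collected obstetrics records in Google BigQuery.
        from google.cloud import bigquery

        client = bigquery.Client(project="example-hospital-project")  # assumes configured credentials

        query = """
            SELECT diagnosis, COUNT(*) AS n_cases
            FROM `example-hospital-project.obgyn.admissions`
            GROUP BY diagnosis
            ORDER BY n_cases DESC
        """

        for row in client.query(query).result():  # submits the job and waits for rows
            print(row.diagnosis, row.n_cases)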

  1. Social big data mining

    CERN Document Server

    Ishikawa, Hiroshi

    2015-01-01

    Social Media. Big Data and Social Data. Hypotheses in the Era of Big Data. Social Big Data Applications. Basic Concepts in Data Mining. Association Rule Mining. Clustering. Classification. Prediction. Web Structure Mining. Web Content Mining. Web Access Log Mining, Information Extraction and Deep Web Mining. Media Mining. Scalability and Outlier Detection.

  2. Cryptography for Big Data Security

    Science.gov (United States)

    2015-07-13

    Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release.

  3. Data: Big and Small.

    Science.gov (United States)

    Jones-Schenk, Jan

    2017-02-01

    Big data is a big topic in all leadership circles. Leaders in professional development must develop an understanding of what data are available across the organization that can inform effective planning for forecasting. Collaborating with others to integrate data sets can increase the power of prediction. Big data alone is insufficient to make big decisions. Leaders must find ways to access small data and triangulate multiple types of data to ensure the best decision making. J Contin Educ Nurs. 2017;48(2):60-61. Copyright 2017, SLACK Incorporated.

  4. Big Data Revisited

    DEFF Research Database (Denmark)

    Kallinikos, Jannis; Constantiou, Ioanna

    2015-01-01

    We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper’s commentators. We initially deal with the issue of social data and the role it plays in the current data revolution and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.

  5. Big Data in industry

    Science.gov (United States)

    Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.

    2016-08-01

    The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure such as the exabyte, zettabyte and yottabyte to express the amount of data. The growth of data has created a situation where classic systems for the collection, storage, processing and visualization of data are losing the battle with the large amount, speed and variety of data that is generated continuously. Much of this data is created by the Internet of Things, IoT (cameras, satellites, cars, GPS navigation, etc.). It is our challenge to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years. However, Big Data is also recognized in the business world, and increasingly in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics services-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach in this paper might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.

  6. Loss of the homologous recombination gene rad51 leads to Fanconi anemia-like symptoms in zebrafish.

    Science.gov (United States)

    Botthof, Jan Gregor; Bielczyk-Maczyńska, Ewa; Ferreira, Lauren; Cvejic, Ana

    2017-05-30

    RAD51 is an indispensable homologous recombination protein, necessary for strand invasion and crossing over. It has recently been designated as a Fanconi anemia (FA) gene, following the discovery of two patients carrying dominant-negative mutations. FA is a hereditary DNA-repair disorder characterized by various congenital abnormalities, progressive bone marrow failure, and cancer predisposition. In this report, we describe a viable vertebrate model of RAD51 loss. Zebrafish rad51 loss-of-function mutants developed key features of FA, including hypocellular kidney marrow, sensitivity to cross-linking agents, and decreased size. We show that some of these symptoms stem from both decreased proliferation and increased apoptosis of embryonic hematopoietic stem and progenitor cells. Co-mutation of p53 was able to rescue the hematopoietic defects seen in the single mutants, but led to tumor development. We further demonstrate that prolonged inflammatory stress can exacerbate the hematological impairment, leading to an additional decrease in kidney marrow cell numbers. These findings strengthen the assignment of RAD51 as a Fanconi gene and provide more evidence for the notion that aberrant p53 signaling during embryogenesis leads to the hematological defects seen later in life in FA. Further research on this zebrafish FA model will lead to a deeper understanding of the molecular basis of bone marrow failure in FA and the cellular role of RAD51.

  7. Big Data Analytics An Overview

    Directory of Open Access Journals (Sweden)

    Jayshree Dwivedi

    2015-08-01

    Full Text Available Data beyond the storage capacity and the processing power of traditional systems is called big data. The term is used for data sets so large or complex that traditional data tools cannot handle them. Big data size is a constantly moving target, year by year ranging from a few dozen terabytes to many petabytes; with sources like social networking sites, the amount of data produced by people is growing rapidly every year. Big data is not only data; rather, it has become a complete subject that includes various tools, techniques and frameworks. It covers the rapid proliferation and evolution of data, both structured and unstructured. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex and of a massive scale. Such data is difficult to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds or even thousands of servers. A big data environment is used to capture, organize and resolve the various types of data. In this paper we describe applications, problems and tools of big data and give an overview of big data.

  8. Big Data Analytics for Industrial Process Control

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schioler, Henrik; Kulahci, Murat

    2017-01-01

    Today, in modern factories, each step in manufacturing produces a bulk of valuable as well as highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and utilization of advanced analytical methods can lead towards more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing and relevant solutions to some of these challenges.

  9. Environmental lead pollution and its possible influence on tooth loss and hard dental tissue lesions

    Directory of Open Access Journals (Sweden)

    Cenić-Milošević Desanka

    2013-01-01

    Full Text Available Background/Aim. Environmental lead (Pb) pollution is a global problem. Hard dental tissue is capable of accumulating lead and other heavy metals from the environment. The aim of this study was to investigate any correlation between the concentration of lead in teeth extracted from inhabitants of Pančevo and Belgrade, Serbia, belonging to different age groups and the occurrence of tooth loss, caries and non-carious lesions. Methods. A total of 160 volunteers were chosen consecutively from Pančevo (the experimental group) and Belgrade (the control group) and divided into 5 age subgroups of 32 subjects each. Clinical examination consisted of caries and hard dental tissue diagnostics. The Decayed Missing Filled Teeth (DMFT) Index and Significant Caries Index were calculated. Extracted teeth were freed of any organic residue by UV digestion and subjected to voltammetric analysis for the content of lead. Results. The average DMFT scores in Pančevo (20.41) were higher than in Belgrade (16.52); in the patients aged 31-40 and 41-50 years the difference was significant (p < 0.05) and highly significant in the patients aged 51-60 (23.69 vs 18.5, p < 0.01). Non-carious lesions were diagnosed in 71 (44%) patients from Pančevo and 39 (24%) patients from Belgrade. The concentrations of Pb in extracted teeth in all the groups from Pančevo were statistically significantly (p < 0.05) higher than in all the groups from Belgrade. In the patients from Pančevo correlations between Pb concentration in extracted teeth and the number of extracted teeth, the number of carious lesions and the number of non-carious lesions showed statistical significance (p < 0.001, p < 0.01 and p < 0.001, respectively). Conclusion. According to the correlations between lead concentration and the number of extracted teeth, number of carious lesions and non-carious lesions found in the patients living in Pančevo, one possible cause of tooth loss and hard dental tissue damage could be a long

  10. Urbanising Big

    DEFF Research Database (Denmark)

    Ljungwall, Christer

    2013-01-01

    Development in China raises the question of how big a city can become, and at the same time be sustainable, writes Christer Ljungwall of the Swedish Agency for Growth Policy Analysis.

  11. Big bang nucleosynthesis

    International Nuclear Information System (INIS)

    Boyd, Richard N.

    2001-01-01

    The precision of measurements in modern cosmology has made huge strides in recent years, with measurements of the cosmic microwave background and the determination of the Hubble constant now rivaling the level of precision of the predictions of big bang nucleosynthesis. However, these results are not necessarily consistent with the predictions of the Standard Model of big bang nucleosynthesis. Reconciling these discrepancies may require extensions of the basic tenets of the model, and possibly of the reaction rates that determine the big bang abundances

  12. Breaks in the 45S rDNA Lead to Recombination-Mediated Loss of Repeats

    Directory of Open Access Journals (Sweden)

    Daniël O. Warmerdam

    2016-03-01

    Full Text Available rDNA repeats constitute the most heavily transcribed region in the human genome. Tumors frequently display elevated levels of recombination in rDNA, indicating that the repeats are a liability to the genomic integrity of a cell. However, little is known about how cells deal with DNA double-stranded breaks in rDNA. Using selective endonucleases, we show that human cells are highly sensitive to breaks in 45S but not the 5S rDNA repeats. We find that homologous recombination inhibits repair of breaks in 45S rDNA, and this results in repeat loss. We identify the structural maintenance of chromosomes protein 5 (SMC5 as contributing to recombination-mediated repair of rDNA breaks. Together, our data demonstrate that SMC5-mediated recombination can lead to error-prone repair of 45S rDNA repeats, resulting in their loss and thereby reducing cellular viability.

  13. The ethics of big data in big agriculture

    OpenAIRE

    Carbonell (Isabelle M.)

    2016-01-01

    This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique in...

  14. The big data-big model (BDBM) challenges in ecological research

    Science.gov (United States)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple

  15. A Big Video Manifesto

    DEFF Research Database (Denmark)

    Mcilvenny, Paul Bruce; Davidsen, Jacob

    2017-01-01

    For the last few years, we have witnessed a hype about the potential results and insights that quantitative big data can bring to the social sciences. The wonder of big data has moved into education, traffic planning, and disease control with a promise of making things better with big numbers and beautiful visualisations. However, we also need to ask what the tools of big data can do both for the Humanities and for more interpretative approaches and methods. Thus, we prefer to explore how the power of computation, new sensor technologies and massive storage can also help with video-based qualitative research.

  16. Matter sources for a null big bang

    International Nuclear Information System (INIS)

    Bronnikov, K A; Zaslavskii, O B

    2008-01-01

    We consider the properties of stress-energy tensors compatible with a null big bang, i.e., cosmological evolution starting from a Killing horizon rather than a singularity. For Kantowski-Sachs cosmologies, it is shown that if matter satisfies the null energy condition, then (i) regular cosmological evolution can only start from a Killing horizon, (ii) matter is absent at the horizon and (iii) matter can only appear in the cosmological region due to interaction with vacuum. The latter is understood phenomenologically as a fluid whose stress tensor is insensitive to boosts in a particular direction. We also argue that matter is absent in a static region beyond the horizon. All this generalizes the observations recently obtained for a mixture of dust and a vacuum fluid. If, however, we admit the existence of phantom matter, certain special kinds of it (with the parameter w ≤ -3) are consistent with a null big bang without interaction with vacuum (or without vacuum fluid at all). Then in the static region there is matter with w ≥ -1/3. Alternatively, the evolution can begin from a horizon in an infinitely remote past, leading to a scenario combining the features of a null big bang and an emergent universe.
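
    For orientation (standard cosmology conventions, added here rather than taken from the record), a perfect fluid with equation of state p = wρ satisfies the null energy condition only for w ≥ -1 at positive energy density, so the w ≤ -3 fluids mentioned above lie deep in the phantom regime:

        p = w\rho, \qquad \text{NEC:}\; \rho + p \ge 0 \;\Longleftrightarrow\; w \ge -1 \quad (\rho > 0).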

  17. Identifying Dwarfs Workloads in Big Data Analytics

    OpenAIRE

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  18. Loss of MeCP2 From Forebrain Excitatory Neurons Leads to Cortical Hyperexcitation and Seizures

    Science.gov (United States)

    Zhang, Wen; Peterson, Matthew; Beyer, Barbara; Frankel, Wayne N.

    2014-01-01

    Mutations of MECP2 cause Rett syndrome (RTT), a neurodevelopmental disorder leading to loss of motor and cognitive functions, impaired social interactions, and seizure at young ages. Defects of neuronal circuit development and function are thought to be responsible for the symptoms of RTT. The majority of RTT patients show recurrent seizures, indicating that neuronal hyperexcitation is a common feature of RTT. However, mechanisms underlying hyperexcitation in RTT are poorly understood. Here we show that deletion of Mecp2 from cortical excitatory neurons but not forebrain inhibitory neurons in the mouse leads to spontaneous seizures. Selective deletion of Mecp2 from excitatory but not inhibitory neurons in the forebrain reduces GABAergic transmission in layer 5 pyramidal neurons in the prefrontal and somatosensory cortices. Loss of MeCP2 from cortical excitatory neurons reduces the number of GABAergic synapses in the cortex, and enhances the excitability of layer 5 pyramidal neurons. Using single-cell deletion of Mecp2 in layer 2/3 pyramidal neurons, we show that GABAergic transmission is reduced in neurons without MeCP2, but is normal in neighboring neurons with MeCP2. Together, these results suggest that MeCP2 in cortical excitatory neurons plays a critical role in the regulation of GABAergic transmission and cortical excitability. PMID:24523563

  19. Over-harvesting driven by consumer demand leads to population decline: big-leaf mahogany in South America

    Science.gov (United States)

    James Grogan; Arthur G. Blundell; R. Matthew Landis; Ani Youatt; Raymond E. Gullison; Martha Martinez; Roberto Kometter; Marco Lentini; Richard E. Rice

    2010-01-01

    Consumer demand for the premier neotropical luxury timber, big-leaf mahogany (Swietenia macrophylla), has driven boom-and-bust logging cycles for centuries, depleting local and regional supplies from Mexico to Bolivia. We revise the standard historic range map for mahogany in South America and estimate the extent to which commercial stocks have been depleted using...

  20. Applications of Big Data in Education

    OpenAIRE

    Faisal Kalota

    2015-01-01

    Big Data and analytics have gained a huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA) that may allow academic institutions to better understand the learners' needs and proactively address them. Hence, it is important to have an understanding of Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of the applications of Big Data in educa...

  1. Big Data Semantics

    NARCIS (Netherlands)

    Ceravolo, Paolo; Azzini, Antonia; Angelini, Marco; Catarci, Tiziana; Cudré-Mauroux, Philippe; Damiani, Ernesto; Mazak, Alexandra; van Keulen, Maurice; Jarrar, Mustafa; Santucci, Giuseppe; Sattler, Kai-Uwe; Scannapieco, Monica; Wimmer, Manuel; Wrembel, Robert; Zaraket, Fadi

    2018-01-01

    Big Data technology has discarded traditional data modeling approaches as no longer applicable to distributed data processing. It is, however, largely recognized that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be

  2. Comparative validity of brief to medium-length Big Five and Big Six personality questionnaires

    NARCIS (Netherlands)

    Thalmayer, A.G.; Saucier, G.; Eigenhuis, A.

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five

  3. Breaks in the 45S rDNA Lead to Recombination-Mediated Loss of Repeats.

    Science.gov (United States)

    Warmerdam, Daniël O; van den Berg, Jeroen; Medema, René H

    2016-03-22

    rDNA repeats constitute the most heavily transcribed region in the human genome. Tumors frequently display elevated levels of recombination in rDNA, indicating that the repeats are a liability to the genomic integrity of a cell. However, little is known about how cells deal with DNA double-stranded breaks in rDNA. Using selective endonucleases, we show that human cells are highly sensitive to breaks in 45S but not the 5S rDNA repeats. We find that homologous recombination inhibits repair of breaks in 45S rDNA, and this results in repeat loss. We identify the structural maintenance of chromosomes protein 5 (SMC5) as contributing to recombination-mediated repair of rDNA breaks. Together, our data demonstrate that SMC5-mediated recombination can lead to error-prone repair of 45S rDNA repeats, resulting in their loss and thereby reducing cellular viability. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Big data need big theory too.

    Science.gov (United States)

    Coveney, Peter V; Dougherty, Edward R; Highfield, Roger R

    2016-11-13

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2015 The Authors.

  5. Chronic Conductive Hearing Loss Leads to Cochlear Degeneration.

    Science.gov (United States)

    Liberman, M Charles; Liberman, Leslie D; Maison, Stéphane F

    2015-01-01

    Synapses between cochlear nerve terminals and hair cells are the most vulnerable elements in the inner ear in both noise-induced and age-related hearing loss, and this neuropathy is exacerbated in the absence of efferent feedback from the olivocochlear bundle. If age-related loss is dominated by a lifetime of exposure to environmental sounds, reduction of acoustic drive to the inner ear might improve cochlear preservation throughout life. To test this, we removed the tympanic membrane unilaterally in one group of young adult mice, removed the olivocochlear bundle in another group and compared their cochlear function and innervation to age-matched controls one year later. Results showed that tympanic membrane removal, and the associated threshold elevation, was counterproductive: cochlear efferent innervation was dramatically reduced, especially the lateral olivocochlear terminals to the inner hair cell area, and there was a corresponding reduction in the number of cochlear nerve synapses. This loss led to a decrease in the amplitude of the suprathreshold cochlear neural responses. Similar results were seen in two cases with conductive hearing loss due to chronic otitis media. Outer hair cell death was increased only in ears lacking medial olivocochlear innervation following olivocochlear bundle cuts. Results suggest the novel ideas that 1) the olivocochlear efferent pathway has a dramatic use-dependent plasticity even in the adult ear and 2) a component of the lingering auditory processing disorder seen in humans after persistent middle-ear infections is cochlear in origin.

  6. Big Data and medicine: a big deal?

    Science.gov (United States)

    Mayer-Schönberger, V; Ingelsson, E

    2018-05-01

    Big Data promises huge benefits for medical research. Looking beyond superficial increases in the amount of data collected, we identify three key areas where Big Data differs from conventional analyses of data samples: (i) data are captured more comprehensively relative to the phenomenon under study; this reduces some bias but surfaces important trade-offs, such as between data quantity and data quality; (ii) data are often analysed using machine learning tools, such as neural networks rather than conventional statistical methods resulting in systems that over time capture insights implicit in data, but remain black boxes, rarely revealing causal connections; and (iii) the purpose of the analyses of data is no longer simply answering existing questions, but hinting at novel ones and generating promising new hypotheses. As a consequence, when performed right, Big Data analyses can accelerate research. Because Big Data approaches differ so fundamentally from small data ones, research structures, processes and mindsets need to adjust. The latent value of data is being reaped through repeated reuse of data, which runs counter to existing practices not only regarding data privacy, but data management more generally. Consequently, we suggest a number of adjustments such as boards reviewing responsible data use, and incentives to facilitate comprehensive data sharing. As data's role changes to a resource of insight, we also need to acknowledge the importance of collecting and making data available as a crucial part of our research endeavours, and reassess our formal processes from career advancement to treatment approval. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  7. Assessing Big Data

    DEFF Research Database (Denmark)

    Leimbach, Timo; Bachlechner, Daniel

    2015-01-01

    In recent years, big data has been one of the most controversially discussed technologies in terms of its possible positive and negative impact. Therefore, the need for technology assessments is obvious. This paper first provides, based on the results of a technology assessment study, an overview of the potential and challenges associated with big data and then describes the problems experienced during the study as well as methods found helpful to address them. The paper concludes with reflections on how the insights from the technology assessment study may have an impact on the future governance of big data.

  8. Big data, big responsibilities

    Directory of Open Access Journals (Sweden)

    Primavera De Filippi

    2014-01-01

    Full Text Available Big data refers to the collection and aggregation of large quantities of data produced by and about people, things or the interactions between them. With the advent of cloud computing, specialised data centres with powerful computational hardware and software resources can be used for processing and analysing a humongous amount of aggregated data coming from a variety of different sources. The analysis of such data is all the more valuable to the extent that it allows for specific patterns to be found and new correlations to be made between different datasets, so as to eventually deduce or infer new information, as well as to potentially predict behaviours or assess the likelihood for a certain event to occur. This article will focus specifically on the legal and moral obligations of online operators collecting and processing large amounts of data, to investigate the potential implications of big data analysis on the privacy of individual users and on society as a whole.

  9. Comparative validity of brief to medium-length Big Five and Big Six Personality Questionnaires.

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-12-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are faced with a variety of options as to inventory length. Furthermore, a 6-factor model has been proposed to extend and update the Big Five model, in part by adding a dimension of Honesty/Humility or Honesty/Propriety. In this study, 3 popular brief to medium-length Big Five measures (NEO Five Factor Inventory, Big Five Inventory [BFI], and International Personality Item Pool), and 3 six-factor measures (HEXACO Personality Inventory, Questionnaire Big Six Scales, and a 6-factor version of the BFI) were placed in competition to best predict important student life outcomes. The effect of test length was investigated by comparing brief versions of most measures (subsets of items) with original versions. Personality questionnaires were administered to undergraduate students (N = 227). Participants' college transcripts and student conduct records were obtained 6-9 months after data was collected. Six-factor inventories demonstrated better predictive ability for life outcomes than did some Big Five inventories. Additional behavioral observations made on participants, including their Facebook profiles and cell-phone text usage, were predicted similarly by Big Five and 6-factor measures. A brief version of the BFI performed surprisingly well; across inventory platforms, increasing test length had little effect on predictive validity. Comparative validity of the models and measures in terms of outcome prediction and parsimony is discussed.

  10. Big Machines and Big Science: 80 Years of Accelerators at Stanford

    Energy Technology Data Exchange (ETDEWEB)

    Loew, Gregory

    2008-12-16

    Longtime SLAC physicist Greg Loew will present a trip through SLAC's origins, highlighting its scientific achievements, and provide a glimpse of the lab's future in 'Big Machines and Big Science: 80 Years of Accelerators at Stanford.'

  11. Dual of big bang and big crunch

    International Nuclear Information System (INIS)

    Bak, Dongsu

    2007-01-01

    Starting from the Janus solution and its gauge theory dual, we obtain the dual gauge theory description of the cosmological solution by the procedure of double analytic continuation. The coupling is driven either to zero or to infinity at the big-bang and big-crunch singularities, which are shown to be related by the S-duality symmetry. In the dual Yang-Mills theory description, these are nonsingular as the coupling goes to zero in the N=4 super Yang-Mills theory. The cosmological singularities simply signal the failure of the supergravity description of the full type IIB superstring theory

  12. Big data algorithms, analytics, and applications

    CERN Document Server

    Li, Kuan-Ching; Yang, Laurence T; Cuzzocrea, Alfredo

    2015-01-01

    Data are generated at an exponential rate all over the world. Through advanced algorithms and analytics techniques, organizations can harness this data, discover hidden patterns, and use the findings to make meaningful decisions. Containing contributions from leading experts in their respective fields, this book bridges the gap between the vastness of big data and the appropriate computational methods for scientific and social discovery. It also explores related applications in diverse sectors, covering technologies for media/data communication, elastic media/data storage, cross-network media/

  13. Comparative Validity of Brief to Medium-Length Big Five and Big Six Personality Questionnaires

    Science.gov (United States)

    Thalmayer, Amber Gayle; Saucier, Gerard; Eigenhuis, Annemarie

    2011-01-01

    A general consensus on the Big Five model of personality attributes has been highly generative for the field of personality psychology. Many important psychological and life outcome correlates with Big Five trait dimensions have been established. But researchers must choose between multiple Big Five inventories when conducting a study and are…

  14. Big data for health.

    Science.gov (United States)

    Andreu-Perez, Javier; Poon, Carmen C Y; Merrifield, Robert D; Wong, Stephen T C; Yang, Guang-Zhong

    2015-07-01

    This paper provides an overview of recent developments in big data in the context of biomedical and health informatics. It outlines the key characteristics of big data and how medical and health informatics, translational bioinformatics, sensor informatics, and imaging informatics will benefit from an integrated approach of piecing together different aspects of personalized information from a diverse range of data sources, both structured and unstructured, covering genomics, proteomics, metabolomics, as well as imaging, clinical diagnosis, and long-term continuous physiological sensing of an individual. It is expected that recent advances in big data will expand our knowledge for testing new hypotheses about disease management from diagnosis to prevention to personalized treatment. The rise of big data, however, also raises challenges in terms of privacy, security, data ownership, data stewardship, and governance. This paper discusses some of the existing activities and future opportunities related to big data for health, outlining some of the key underlying issues that need to be tackled.

  15. Distributed Coordinate Descent Method for Learning with Big Data

    OpenAIRE

    Richtárik, Peter; Takáč, Martin

    2013-01-01

    In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bound...
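
    The scheme lends itself to a compact illustration. Below is a minimal serial simulation of the idea, applied to ridge-regularized least squares, f(x) = (1/2)||Ax - b||² + (λ/2)||x||²; the per-coordinate closed-form step is standard, but the function names, the tiny synthetic problem, and the serial execution of the "nodes" are illustrative assumptions, not the paper's Hydra implementation.

        # Minimal serial simulation of partitioned randomized coordinate descent
        # (hypothetical illustration in the spirit of the abstract; not the paper's code).
        import numpy as np

        def coord_descent(A, b, lam=0.1, n_nodes=4, per_node=2, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            n = A.shape[1]
            parts = np.array_split(rng.permutation(n), n_nodes)  # coordinates partitioned across "nodes"
            lips = (A ** 2).sum(axis=0) + lam                    # per-coordinate curvature constants
            x = np.zeros(n)
            r = A @ x - b                                        # residual, kept up to date
            for _ in range(iters):
                for part in parts:                               # each "node"; truly parallel in Hydra
                    for j in rng.choice(part, size=min(per_node, len(part)), replace=False):
                        g = A[:, j] @ r + lam * x[j]             # partial derivative at coordinate j
                        delta = -g / lips[j]                     # closed-form one-dimensional step
                        x[j] += delta
                        r += delta * A[:, j]
            return x

        A = np.random.default_rng(1).standard_normal((50, 8))
        b = A @ np.ones(8)
        print(np.round(coord_descent(A, b), 2))                  # approaches the all-ones solution

    In the truly distributed setting the nodes apply their updates simultaneously, so the interactions between overlapping updates must be accounted for in the analysis; the serial loop above sidesteps that issue.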

  16. Improving the Success of Strategic Management Using Big Data.

    Science.gov (United States)

    Desai, Sapan S; Wilkerson, James; Roberts, Todd

    2016-01-01

    Strategic management involves determining organizational goals, implementing a strategic plan, and properly allocating resources. Poor access to pertinent and timely data misidentifies clinical goals, prevents effective resource allocation, and generates waste from inaccurate forecasting. Loss of operational efficiency diminishes the value stream, adversely impacts the quality of patient care, and hampers effective strategic management. We have pioneered an approach using big data to create competitive advantage by identifying trends in clinical practice, accurately anticipating future needs, and strategically allocating resources for maximum impact.

  17. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of Big Data technologies. Existing definitions of the term "big data" are analyzed. The article proposes and describes the elements of a generalized formal model of big data, analyzes the peculiarities of applying the proposed model components, and describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  18. BigWig and BigBed: enabling browsing of large distributed datasets.

    Science.gov (United States)

    Kent, W J; Zweig, A S; Barber, G; Hinrichs, A S; Karolchik, D

    2010-09-01

    BigWig and BigBed files are compressed binary indexed files containing data at several resolutions that allow the high-performance display of next-generation sequencing experiment results in the UCSC Genome Browser. The visualization is implemented using a multi-layered software approach that takes advantage of specific capabilities of web-based protocols and Linux and UNIX operating systems files, R trees and various indexing and compression tricks. As a result, only the data needed to support the current browser view is transmitted rather than the entire file, enabling fast remote access to large distributed data sets. Binaries for the BigWig and BigBed creation and parsing utilities may be downloaded at http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/. Source code for the creation and visualization software is freely available for non-commercial use at http://hgdownload.cse.ucsc.edu/admin/jksrc.zip, implemented in C and supported on Linux. The UCSC Genome Browser is available at http://genome.ucsc.edu.
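
    As a usage illustration (not from the abstract itself), such files can also be read programmatically; a minimal sketch with the third-party pyBigWig library follows, where the file name and chromosome are hypothetical.

        # Reading a BigWig file with the third-party pyBigWig library (pip install pyBigWig);
        # "example.bw" and the chromosome name are hypothetical.
        import pyBigWig

        bw = pyBigWig.open("example.bw")             # accepts a local path or an http(s) URL
        print(bw.chroms())                           # {chromosome: length} from the file index
        print(bw.stats("chr1", 0, 1_000_000))        # summary statistic over a region
        print(bw.values("chr1", 100_000, 100_010))   # base-resolution values for a small window
        bw.close()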

  19. The GEP: Crowd-Sourcing Big Data Analysis with Undergraduates.

    Science.gov (United States)

    Elgin, Sarah C R; Hauser, Charles; Holzen, Teresa M; Jones, Christopher; Kleinschmit, Adam; Leatherman, Judith

    2017-02-01

    The era of 'big data' is also the era of abundant data, creating new opportunities for student-scientist research partnerships. By coordinating undergraduate efforts, the Genomics Education Partnership produces high-quality annotated data sets and analyses that could not be generated otherwise, leading to scientific publications while providing many students with research experience. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Thermography hogging the limelight at Big Sky

    Energy Technology Data Exchange (ETDEWEB)

    Plastow, C. [Fluke Electronics Canada, Mississauga, ON (Canada)

    2010-02-15

    The high levels of humidity and ammonia found at hog farms can lead to premature corrosion of electrical systems and create potential hazards, such as electrical fires. Big Sky Farms in Saskatchewan has performed on-site inspections at its 44 farms and 16 feed mills using handheld thermography technology from Fluke Electronics. Ti thermal imaging units save time and simplify inspections. The units could be used for everything, from checking out the bearings at the feed mills to electrical circuits and relays. The Ti25 is affordable and has the right features for a preventative maintenance program. Operators of Big Sky Farms use the Ti25 to inspect all circuit breakers of 600 volts or lower as well as transformers where corrosion often causes connections to break off. The units are used to look at bearings, do scanning and thermal imaging on motors. To date, the Ti25 has detected and highlighted 5 or 6 problems on transformers alone that could have been major issues. At one site, the Ti25 indicated that all 30 circuit breakers had loose connections and were overheating. Big Sky Farms fixed the problem right away before a disaster happened. In addition to reducing inspection times, the Ti25 can record all measurements and keep a record of all the readings for downloading. 2 figs.

  1. Soil biogeochemistry in the age of big data

    Science.gov (United States)

    Cécillon, Lauric; Barré, Pierre; Coissac, Eric; Plante, Alain; Rasse, Daniel

    2015-04-01

    Data is becoming one of the key resources of the 21st century. Soil biogeochemistry is not spared by this new movement. The conservation of soils and their services recently came onto the political agenda. However, clear knowledge on the links between soil characteristics and the various processes ensuring the provision of soil services is rare at the molecular or the plot scale, and does not exist at the landscape scale. This split between society's expectations on its natural capital, and scientific knowledge on the most complex material on earth, has led to an increasing number of studies on soils, using an increasing number of techniques of increasing complexity, with an increasing spatial and temporal coverage. From data scarcity with a basic data management system, soil biogeochemistry is now facing a proliferation of data, with few quality controls from data collection to publication and few skills to deal with them. Based on this observation, here we (1) address how big data could help in making sense of all these soil biogeochemical data, (2) point out several shortcomings of big data that most biogeochemists will experience in their future career. Massive storage of data is now common and recent opportunities for cloud storage enable data sharing among researchers all over the world. The need for integrative and collaborative computational databases in soil biogeochemistry is emerging through pioneering initiatives in this direction (molTERdb; earthcube), following soil microbiologists (GenBank). We expect that a series of data storage and management systems will rapidly revolutionize the way of accessing raw biogeochemical data, published or not. Data mining techniques combined with cluster or cloud computing hold significant promises for facilitating the use of complex analytical methods, and for revealing new insights previously hidden in complex data on soil mineralogy, organic matter and biodiversity. Indeed, important scientific advances have

  2. Big data-driven business how to use big data to win customers, beat competitors, and boost profits

    CERN Document Server

    Glass, Russell

    2014-01-01

    Get the expert perspective and practical advice on big data The Big Data-Driven Business: How to Use Big Data to Win Customers, Beat Competitors, and Boost Profits makes the case that big data is for real, and more than just big hype. The book uses real-life examples-from Nate Silver to Copernicus, and Apple to Blackberry-to demonstrate how the winners of the future will use big data to seek the truth. Written by a marketing journalist and the CEO of a multi-million-dollar B2B marketing platform that reaches more than 90% of the U.S. business population, this book is a comprehens

  3. Big Game Reporting Stations

    Data.gov (United States)

    Vermont Center for Geographic Information — Point locations of big game reporting stations. Big game reporting stations are places where hunters can legally report harvested deer, bear, or turkey. These are...

  4. Factors Leading to the Loss of Natural Elite Control of HIV-1 Infection.

    Science.gov (United States)

    Pernas, María; Tarancón-Diez, Laura; Rodríguez-Gallego, Esther; Gómez, Josep; Prado, Julia G; Casado, Concepción; Dominguez-Molina, Beatriz; Olivares, Isabel; Coiras, Maite; León, Agathe; Rodriguez, Carmen; Benito, Jose Miguel; Rallón, Norma; Plana, Montserrat; Martinez-Madrid, Onofre; Dapena, Marta; Iribarren, Jose Antonio; Del Romero, Jorge; García, Felipe; Alcamí, José; Muñoz-Fernández, M Ángeles; Vidal, Francisco; Leal, Manuel; Lopez-Galindez, Cecilio; Ruiz-Mateos, Ezequiel

    2017-12-06

    HIV-1 elite controllers (EC) maintain undetectable viral load (VL) in the absence of antiretroviral treatment. However, these subjects have heterogeneous clinical outcomes, including a proportion losing HIV-1 control over time. In this work we compared, in a longitudinal design, transient EC, analyzed before and after the loss of virological control, versus persistent EC. The aim was to identify factors leading to the loss of natural virological control of HIV-1 infection with a longitudinal retrospective study design. Gag-specific T-cell response was assessed by in vitro intracellular poly-cytokine production quantified by flow cytometry. Viral diversity and sequence-dating were performed in proviral DNA by PCR amplification at limiting dilution in env and gag genes. The expression profile of 70 serum cytokines and chemokines was assessed by multiplex immunoassays. We identified transient EC as subjects with low Gag-specific T-cell polyfunctionality, high viral diversity and high proinflammatory cytokine levels before the loss of control. Gag-specific T-cell polyfunctionality was inversely associated with viral diversity in transient controllers before the loss of control (r = -0.8; p = 0.02). RANTES was a potential biomarker of transient control. This study identified virological and immunological factors, including inflammatory biomarkers, associated with two different phenotypes within EC. These results may allow a more accurate definition of EC, which could help in a better clinical management of these individuals and in the development of future curative approaches. IMPORTANCE There is a rare group of HIV-infected patients who have the extraordinary capacity to maintain undetectable viral load levels in the absence of antiretroviral treatment, the so called HIV-1 elite controllers (EC). However, there is a proportion within these subjects that eventually loses this capability. In this work we found differences in virological and immune factors including soluble

  5. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The ever-growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems is simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
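
    To make the contrast with POSIX concrete, a minimal sketch of the flat put/get interface typical of blob/object stores follows; it is an illustrative in-memory stand-in, not the API of any specific system studied in the paper.

        # Illustrative flat blob-store interface: whole-object put/get/delete keyed by an ID,
        # with no directory hierarchy and no permissions (not any specific system's API).
        from typing import Dict, Optional

        class BlobStore:
            def __init__(self) -> None:
                self._objects: Dict[str, bytes] = {}    # in-memory stand-in for a storage backend

            def put(self, key: str, data: bytes) -> None:
                self._objects[key] = data               # overwrite semantics; no append, no rename

            def get(self, key: str) -> Optional[bytes]:
                return self._objects.get(key)

            def delete(self, key: str) -> None:
                self._objects.pop(key, None)

        store = BlobStore()
        store.put("sim/checkpoint-0042", b"\x00" * 16)  # "/" is just part of the key, not a directory
        print(len(store.get("sim/checkpoint-0042") or b""))

    The small API surface is exactly what buys the lower overhead mentioned in the abstract: there is no directory tree to keep consistent and no permission metadata to check on each access.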

  6. Stalin's Big Fleet Program

    National Research Council Canada - National Science Library

    Mauner, Milan

    2002-01-01

    Although Dr. Milan Hauner's study 'Stalin's Big Fleet program' has focused primarily on the formation of Big Fleets during the Tsarist and Soviet periods of Russia's naval history, there are important lessons...

  7. Five Big, Big Five Issues : Rationale, Content, Structure, Status, and Crosscultural Assessment

    NARCIS (Netherlands)

    De Raad, Boele

    1998-01-01

    This article discusses the rationale, content, structure, status, and crosscultural assessment of the Big Five trait factors, focusing on topics of dispute and misunderstanding. Taxonomic restrictions of the original Big Five forerunner, the "Norman Five," are discussed, and criticisms regarding the

  8. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of differences in their core concepts. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  9. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids the development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  10. Big Data as Governmentality

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    This paper conceptualizes how large-scale data and algorithms condition and reshape knowledge production when addressing international development challenges. The concept of governmentality and four dimensions of an analytics of government are proposed as a theoretical framework to examine how big...... data is constituted as an aspiration to improve the data and knowledge underpinning development efforts. Based on this framework, we argue that big data’s impact on how relevant problems are governed is enabled by (1) new techniques of visualizing development issues, (2) linking aspects...... shows that big data problematizes selected aspects of traditional ways to collect and analyze data for development (e.g. via household surveys). We also demonstrate that using big data analyses to address development challenges raises a number of questions that can deteriorate its impact....

  11. Boarding to Big data

    Directory of Open Access Journals (Sweden)

    Oana Claudia BRATOSIN

    2016-05-01

    Full Text Available Today Big Data is an emerging topic, as the quantity of information grows exponentially, laying the foundation for its main challenge: the value of the information. That value is defined not only by extracting value from huge data sets as quickly and optimally as possible, but also by extracting value from uncertain and inaccurate data in an innovative manner using Big Data analytics. At this point, the main challenge for businesses that use Big Data tools is to clearly define the scope and the necessary output of the business so that real value can be gained. This article aims to explain the Big Data concept, its various classification criteria and architecture, as well as its impact on worldwide processes.

  12. Big data - a 21st century science Maginot Line? No-boundary thinking: shifting from the big data paradigm.

    Science.gov (United States)

    Huang, Xiuzhen; Jennings, Steven F; Bruce, Barry; Buchan, Alison; Cai, Liming; Chen, Pengyin; Cramer, Carole L; Guan, Weihua; Hilgert, Uwe Kk; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Donald F; Nanduri, Bindu; Perkins, Andy; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Walker, Karl; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhang, Yu; Zhao, Zhongming; Moore, Jason H

    2015-01-01

    Whether your interests lie in scientific arenas, the corporate world, or in government, you have certainly heard the praises of big data: big data will give you new insights, allow you to become more efficient, and/or solve your problems. While big data has had some outstanding successes, many are now beginning to see that it is not the silver bullet it has been touted to be. Here our main concern is the overall impact of big data; the current manifestation of big data is constructing a Maginot Line in science in the 21st century. Big data is not "lots of data" as a phenomenon anymore; the big data paradigm is putting the spirit of the Maginot Line into lots of data. Overall, big data is disconnecting researchers from science challenges. We propose No-Boundary Thinking (NBT), applying no-boundary thinking in problem definition to address science challenges.

  13. Big Data and Big Science

    OpenAIRE

    Di Meglio, Alberto

    2014-01-01

    Brief introduction to the challenges of big data in scientific research based on the work done by the HEP community at CERN and how the CERN openlab promotes collaboration among research institutes and industrial IT companies. Presented at the FutureGov 2014 conference in Singapore.

  14. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby structuring data incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data, and ultimately a greater flow of structured data, vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.
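    As a rough illustration of the "incrementally structured" idea (an assumption-laden sketch, not an HL7 reference implementation), the snippet below extracts section narrative from a toy CDA document using the standard urn:hl7-org:v3 namespace, so the text becomes available for later structuring and analytics.

```python
# Sketch: pull section narrative out of a (toy) CDA XML document so it can be
# fed to downstream analytics and structured incrementally. The sample
# document is invented; only the namespace is the real CDA namespace.
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}

sample = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <component><structuredBody><component><section>
    <title>Assessment</title>
    <text>Patient reports improved exercise tolerance since last visit.</text>
  </section></component></structuredBody></component>
</ClinicalDocument>"""

root = ET.fromstring(sample)
for section in root.iter("{urn:hl7-org:v3}section"):
    title = section.find("cda:title", CDA_NS)
    text = section.find("cda:text", CDA_NS)
    print(title.text, "->", text.text)  # narrative now available for structuring
```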

  15. Big data is not a monolith

    CERN Document Server

    Ekbia, Hamid R; Mattioli, Michael

    2016-01-01

    Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, identify linguistic correlations in large corpuses of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies. The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data's ramifications. The contributors look at big data's effect on individuals as it exerts social control throu...

  16. Big universe, big data

    DEFF Research Database (Denmark)

    Kremer, Jan; Stensbo-Smidt, Kristoffer; Gieseke, Fabian Cristian

    2017-01-01

    , modern astronomy requires big data know-how, in particular it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: Astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing......, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications....

  17. Will Big Data Close the Missing Heritability Gap?

    Science.gov (United States)

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I; Hsu, Stephen; de Los Campos, Gustavo

    2017-11-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23-0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. Copyright © 2017 by the Genetics Society of America.
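    A toy illustration of the surface-response idea (the curve form, data points, and parameter values here are assumptions, not the authors' Bayesian methodology): fit a saturating curve of prediction R-sq. against training-set size and extrapolate it to larger samples.

```python
# Toy illustration only: fit a saturating curve of prediction R-sq. versus
# training-sample size and extrapolate, in the spirit of the study's forecast
# that larger samples raise R-sq. toward a plateau below trait heritability.
import numpy as np
from scipy.optimize import curve_fit

def r2_of_n(n, r2_max, n_half):
    # Saturating curve: approaches r2_max as n grows; half-saturation at n_half.
    return r2_max * n / (n + n_half)

n_train = np.array([5_000, 10_000, 20_000, 40_000, 80_000])
r2_obs = np.array([0.08, 0.12, 0.17, 0.21, 0.24])  # hypothetical test R-sq. values

(r2_max, n_half), _ = curve_fit(r2_of_n, n_train, r2_obs, p0=(0.4, 30_000))
print(f"estimated plateau R-sq. = {r2_max:.2f} (cf. height heritability of roughly 0.8)")
print(f"forecast at n = 500,000: {r2_of_n(500_000, r2_max, n_half):.2f}")
```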

  18. Big Data and Chemical Education

    Science.gov (United States)

    Pence, Harry E.; Williams, Antony J.

    2016-01-01

    The amount of computerized information that organizations collect and process is growing so large that the term Big Data is commonly being used to describe the situation. Accordingly, Big Data is defined by a combination of the Volume, Variety, Velocity, and Veracity of the data being processed. Big Data tools are already having an impact in…

  19. Big data in Finnish financial services

    OpenAIRE

    Laurila, M. (Mikko)

    2017-01-01

    Abstract This thesis aims to explore the concept of big data, and create understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. ...

  20. Empowering Personalized Medicine with Big Data and Semantic Web Technology: Promises, Challenges, and Use Cases.

    Science.gov (United States)

    Panahiazar, Maryam; Taslimitehrani, Vahid; Jadhav, Ashutosh; Pathak, Jyotishman

    2014-10-01

    In healthcare, big data tools and technologies have the potential to create significant value by improving outcomes while lowering costs for each individual patient. Diagnostic images, genetic test results, and biometric information are increasingly generated and stored in electronic health records, presenting us with data that are by nature high in volume, variety, and velocity, thereby necessitating novel ways to store, manage, and process big data. This presents an urgent need to develop new, scalable and expandable big data infrastructure and analytical methods that can enable healthcare providers to access knowledge for the individual patient, yielding better decisions and outcomes. In this paper, we briefly discuss the nature of big data and the role of semantic web and data analysis for generating "smart data," which offer actionable information that supports better decisions for personalized medicine. In our view, the biggest challenge is to create a system that makes big data robust and smart for healthcare providers and patients, one that can lead to more effective clinical decision-making, improved health outcomes, and, ultimately, managed healthcare costs. We highlight some of the challenges in using big data and propose the need for a semantic data-driven environment to address them. We illustrate our vision with practical use cases, and discuss a path for empowering personalized medicine using big data and semantic web technology.

  1. Big data in fashion industry

    Science.gov (United States)

    Jain, S.; Bruniaux, J.; Zeng, X.; Bruniaux, P.

    2017-10-01

    Significant work has been done in the field of big data in the last decade. The concept of big data includes analysing voluminous data to extract valuable information. In the fashion world, big data is increasingly playing a part in trend forecasting and in analysing consumer behaviour, preferences and emotions. The purpose of this paper is to introduce the term fashion data and explain why it can be considered big data. It also gives a broad classification of the types of fashion data and briefly defines them. Also, the methodology and working of a system that will use this data are briefly described.

  2. Changing the personality of a face: Perceived Big Two and Big Five personality factors modeled in real photographs.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2016-04-01

    General, spontaneous evaluations of strangers based on their faces have been shown to reflect judgments of these persons' intention and ability to harm. These evaluations can be mapped onto a 2D space defined by the dimensions trustworthiness (intention) and dominance (ability). Here we go beyond general evaluations and focus on more specific personality judgments derived from the Big Two and Big Five personality concepts. In particular, we investigate whether Big Two/Big Five personality judgments can be mapped onto the 2D space defined by the dimensions trustworthiness and dominance. Results indicate that judgments of the Big Two personality dimensions almost perfectly map onto the 2D space. In contrast, at least 3 of the Big Five dimensions (i.e., neuroticism, extraversion, and conscientiousness) go beyond the 2D space, indicating that additional dimensions are necessary to describe more specific face-based personality judgments accurately. Building on this evidence, we model the Big Two/Big Five personality dimensions in real facial photographs. Results from 2 validation studies show that the Big Two/Big Five are perceived reliably across different samples of faces and participants. Moreover, results reveal that participants differentiate reliably between the different Big Two/Big Five dimensions. Importantly, this high level of agreement and differentiation in personality judgments from faces likely creates a subjective reality which may have serious consequences for those being perceived; notably, these consequences ensue because the subjective reality is socially shared, irrespective of the judgments' validity. The methodological approach introduced here might prove useful in various psychological disciplines. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. The BigBOSS Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna

    2011-01-01

    BigBOSS will obtain observational constraints that will bear on three of the four 'science frontier' questions identified by the Astro2010 Cosmology and Fundamental Physics Panel of the Decadal Survey: Why is the universe accelerating? What is dark matter, and what are the properties of neutrinos? Indeed, the BigBOSS project was recommended for substantial immediate R&D support by the PASAG report. The second-highest ground-based priority from the Astro2010 Decadal Survey was the creation of a funding line within the NSF to support a 'Mid-Scale Innovations' program, and it used BigBOSS as a 'compelling' example for support. This choice was the result of the Decadal Survey's Program Prioritization panels reviewing 29 mid-scale projects and recommending BigBOSS 'very highly'.

  4. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Science.gov (United States)

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  5. The research and application of the power big data

    Science.gov (United States)

    Zhang, Suxiang; Zhang, Dong; Zhang, Yaping; Cao, Jinping; Xu, Huiming

    2017-01-01

    Facing the growing environmental crisis, improving energy efficiency is an important problem, and power big data is a main supporting tool for realizing demand-side management and response. With the promotion of smart power consumption, distributed clean energy and electric vehicles are gaining wide application; meanwhile, the continuous development of Internet of Things technology connects ever more applications to endpoints in the grid, so that a large number of electric terminal devices and new energy sources join the smart grid and produce massive, heterogeneous, multi-state electricity data. These data constitute the power grid enterprise's precious wealth, as power big data. How to transform it into valuable knowledge and effective operation becomes an important problem, requiring interoperation across the smart grid. In this paper, we researched various applications of power big data, integrating cloud computing and big data technology; these include online monitoring of electricity consumption, short-term power load forecasting, and energy efficiency analysis. Based on Hadoop, HBase, Hive, etc., we realize the ETL and OLAP functions; we also adopt a parallel computing framework for the power load forecasting algorithms and propose a parallel locally weighted linear regression model; and we study an energy efficiency rating model to comprehensively evaluate the energy consumption of electricity users, which allows users to understand their real-time energy consumption and adjust their electricity behavior to reduce it, providing a decision-making basis for the user. Taking an intelligent industrial park as an example, this paper implements complete electricity management. In the future, power big data will therefore provide decision-making support tools for energy conservation and emissions reduction.
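    As a sketch of the forecasting component, the snippet below implements plain (serial) locally weighted linear regression on an invented daily load curve; the paper's contribution is a parallel variant on a distributed framework, which this toy version does not attempt.

```python
# Minimal serial sketch of locally weighted linear regression (LWLR) for
# short-term load forecasting. Data, bandwidth, and names are hypothetical.
import numpy as np

def lwlr_predict(x_query, X, y, tau=2.0):
    """Predict y at x_query, weighting training points by proximity to it."""
    Xb = np.column_stack([np.ones(len(X)), X])            # add intercept column
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))    # Gaussian kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)  # weighted normal equations
    return np.array([1.0, x_query]) @ theta

hours = np.arange(24, dtype=float)
load_mw = 50 + 20 * np.sin((hours - 6) * np.pi / 12) \
          + np.random.default_rng(0).normal(0, 1, 24)     # synthetic daily load curve

print(f"forecast load at hour 18.5: {lwlr_predict(18.5, hours, load_mw):.1f} MW")
```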

  6. Reactive Oxygen Species-Mediated Loss of Synaptic Akt1 Signaling Leads to Deficient Activity-Dependent Protein Translation Early in Alzheimer's Disease.

    Science.gov (United States)

    Ahmad, Faraz; Singh, Kunal; Das, Debajyoti; Gowaikar, Ruturaj; Shaw, Eisha; Ramachandran, Arathy; Rupanagudi, Khader Valli; Kommaddi, Reddy Peera; Bennett, David A; Ravindranath, Vijayalakshmi

    2017-12-01

    Synaptic deficits are known to underlie the cognitive dysfunction seen in Alzheimer's disease (AD). Generation of reactive oxygen species (ROS) by β-amyloid has also been implicated in AD pathogenesis. However, it is unclear whether ROS contributes to the synaptic dysfunction seen in AD pathogenesis, and we therefore examined whether altered redox signaling could contribute to synaptic deficits in AD. Activity-dependent but not basal translation was impaired in synaptoneurosomes from 1-month-old presymptomatic APPSwe/PS1ΔE9 (APP/PS1) mice, and this deficit was sustained until middle age (MA, 9-10 months). ROS generation leads to oxidative modification of Akt1 in the synapse and a consequent reduction in Akt1-mechanistic target of rapamycin (mTOR) signaling, leading to deficient activity-dependent protein translation. Moreover, we found a similar loss of activity-dependent protein translation in synaptoneurosomes from postmortem AD brains. Loss of activity-dependent protein translation occurs presymptomatically, early in the pathogenesis of AD. It is caused by ROS-mediated loss of pAkt1, leading to reduced synaptic Akt1-mTOR signaling, and is rescued by overexpression of Akt1. ROS-mediated damage is restricted to the synaptosomes, indicating selectivity. We demonstrate that ROS-mediated oxidative modification of Akt1 contributes to synaptic dysfunction in AD, seen as a loss of activity-dependent protein translation that is essential for synaptic plasticity and maintenance. Therapeutic strategies promoting Akt1-mTOR signaling at synapses may provide novel target(s) for disease-modifying therapy in AD. Antioxid. Redox Signal. 27, 1269-1280.

  7. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit
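    As a flavor of the query-plus-code workflow the book covers, here is a minimal, hedged sketch using the official google-cloud-bigquery Python client against one of Google's public sample tables; it assumes a configured GCP project and credentials, and is not an excerpt from the book.

```python
# Run a simple aggregation query through the official BigQuery client.
# Requires: pip install google-cloud-bigquery, plus GCP credentials in the
# environment. The public sample table is maintained by Google.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

query = """
    SELECT corpus, SUM(word_count) AS total_words
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus
    ORDER BY total_words DESC
    LIMIT 5
"""

for row in client.query(query).result():  # result() blocks until the job finishes
    print(row.corpus, row.total_words)
```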

  8. Big data for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kaufman, Marcia

    2013-01-01

    Find the right big data solution for your business or organization Big data management is one of the major challenges facing business, industry, and not-for-profit organizations. Data sets such as customer transactions for a mega-retailer, weather patterns monitored by meteorologists, or social network activity can quickly outpace the capacity of traditional data management tools. If you need to develop or manage big data solutions, you'll appreciate how these four experts define, explain, and guide you through this new and often confusing concept. You'll learn what it is, why it m

  9. Exploring complex and big data

    Directory of Open Access Journals (Sweden)

    Stefanowski Jerzy

    2017-12-01

    Full Text Available This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.

  10. Was there a big bang

    International Nuclear Information System (INIS)

    Narlikar, J.

    1981-01-01

    In discussing the viability of the big-bang model of the Universe, relevant evidence is examined, including the discrepancies in the age of the big-bang Universe, the red shifts of quasars, the microwave background radiation, general-relativity aspects such as the change of the gravitational constant with time, and quantum theory considerations. It is felt that the arguments considered show that the big-bang picture is not as soundly established, either theoretically or observationally, as is usually claimed; the cosmological problem is still wide open, and alternatives to the standard big-bang picture should be seriously investigated. (U.K.)

  11. BIG DATA-DRIVEN MARKETING: AN ABSTRACT

    OpenAIRE

    Suoniemi, Samppa; Meyer-Waarden, Lars; Munzel, Andreas

    2017-01-01

    Customer information plays a key role in managing successful relationships with valuable customers. Big data customer analytics use (BD use), i.e., the extent to which customer information derived from big data analytics guides marketing decisions, helps firms better meet customer needs for competitive advantage. This study addresses three research questions: What are the key antecedents of big data customer analytics use? How, and to what extent, does big data customer an...

  12. Big Data Analytics in Medicine and Healthcare.

    Science.gov (United States)

    Ristevski, Blagoj; Chen, Ming

    2018-05-10

    This paper surveys big data, highlighting big data analytics in medicine and healthcare. The big data characteristics value, volume, velocity, variety, veracity, and variability are described. Big data analytics in medicine and healthcare covers the integration and analysis of large amounts of complex heterogeneous data, such as various -omics data (genomics, epigenomics, transcriptomics, proteomics, metabolomics, interactomics, pharmacogenomics, diseasomics), biomedical data, and electronic health records data. We underline the challenging issues of big data privacy and security. Regarding big data characteristics, some directions for using suitable and promising open-source distributed data-processing software platforms are given.

  13. The trashing of Big Green

    International Nuclear Information System (INIS)

    Felten, E.

    1990-01-01

    The Big Green initiative on California's ballot lost by a margin of 2-to-1. Green measures lost in five other states, shocking ecology-minded groups. According to the postmortem by environmentalists, Big Green was a victim of poor timing and big spending by the opposition. Now its supporters plan to break up the bill and try to pass some provisions in the Legislature

  14. Long-Range Big Quantum-Data Transmission

    Science.gov (United States)

    Zwerger, M.; Pirker, A.; Dunjko, V.; Briegel, H. J.; Dür, W.

    2018-01-01

    We introduce an alternative type of quantum repeater for long-range quantum communication with improved scaling with the distance. We show that by employing hashing, a deterministic entanglement distillation protocol with one-way communication, one obtains a scalable scheme that allows one to reach arbitrary distances, with constant overhead in resources per repeater station, and ultrahigh rates. In practical terms, we show that, also with moderate resources of a few hundred qubits at each repeater station, one can reach intercontinental distances. At the same time, a measurement-based implementation allows one to tolerate high loss but also operational and memory errors of the order of several percent per qubit. This opens the way for long-distance communication of big quantum data.

  15. Big sagebrush (Artemisia tridentata) in a shifting climate context: Assessment of seedling responses to climate

    Science.gov (United States)

    Martha A. Brabec

    2014-01-01

    The loss of big sagebrush (Artemisia tridentata) throughout the Great Basin Desert has motivated efforts to restore it because of fire and other disturbance effects on sagebrush-dependent wildlife and ecosystem function. Initial establishment is the first challenge to restoration, and appropriateness of seeds, climate, and weather variability are factors that may...

  16. The Big Bang Singularity

    Science.gov (United States)

    Ling, Eric

    The big bang theory is a model of the universe which makes the striking prediction that the universe began a finite amount of time in the past at the so-called "Big Bang singularity." We explore the physical and mathematical justification of this surprising result. After laying down the framework of the universe as a spacetime manifold, we combine physical observations with global symmetry assumptions to deduce the FRW cosmological models, which predict a big bang singularity. Next we prove a couple of theorems due to Stephen Hawking which show that the big bang singularity exists even if one removes the global symmetry assumptions. Lastly, we investigate the conditions one needs to impose on a spacetime if one wishes to avoid a singularity. The ideas and concepts used here to study spacetimes are similar to those used to study Riemannian manifolds; therefore we compare and contrast the two geometries throughout.
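    The FRW models referenced above are standard textbook background rather than content specific to this record; as a quick reminder, the metric and the Friedmann equation below are the usual forms whose solutions reach a(t) → 0 at a finite past time, the big bang singularity.

```latex
% Standard FRW background (textbook material, not taken from the thesis):
% the homogeneous-isotropic metric and the Friedmann equation it satisfies.
ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \, d\Omega^2 \right],
\qquad
\left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3}\, \rho - \frac{k}{a^2}.
```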

  17. Reframing Open Big Data

    DEFF Research Database (Denmark)

    Marton, Attila; Avital, Michel; Jensen, Tina Blegind

    2013-01-01

    Recent developments in the techniques and technologies of collecting, sharing and analysing data are challenging the field of information systems (IS) research let alone the boundaries of organizations and the established practices of decision-making. Coined ‘open data’ and ‘big data......’, these developments introduce an unprecedented level of societal and organizational engagement with the potential of computational data to generate new insights and information. Based on the commonalities shared by open data and big data, we develop a research framework that we refer to as open big data (OBD......) by employing the dimensions of ‘order’ and ‘relationality’. We argue that these dimensions offer a viable approach for IS research on open and big data because they address one of the core value propositions of IS; i.e. how to support organizing with computational data. We contrast these dimensions with two...

  18. How to use Big Data technologies to optimize operations in Upstream Petroleum Industry

    Directory of Open Access Journals (Sweden)

    Abdelkader Baaziz

    2013-12-01

    Full Text Available “Big Data is the oil of the new economy” has been the most famous citation of the last three years; it was even adopted by the World Economic Forum in 2011. In fact, Big Data is like crude oil: it is valuable, but unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about the Big Data generated by the petroleum industry, and particularly its upstream segment? Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development. Oil & Gas companies conduct advanced geophysics modeling and simulation to support operations, where 2D, 3D & 4D seismic surveys generate significant data during exploration phases. They closely monitor the performance of their operational assets. To do this, they use tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous and real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in various and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron's internal IT traffic alone exceeds 1.5 terabytes a day. Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations. Another invaluable effect would be shared learning. The aim of this paper is to explain how to use Big Data technologies to optimize operations. How can Big Data help experts make decisions that lead to the desired outcomes? Keywords: Big Data; Analytics

  19. Medical big data: promise and challenges.

    Science.gov (United States)

    Lee, Choong Ho; Yoon, Hyung-Jin

    2017-03-01

    The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.
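    As an illustration of one of the techniques named above, here is a minimal, hypothetical sketch of propensity-score analysis on synthetic data: a logistic model estimates each subject's probability of treatment, and treated subjects are then matched to controls with similar scores. Names, data, and the matching rule are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of propensity-score matching to mitigate confounding in
# observational data. Synthetic data; greedy 1:1 matching with replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
covariates = rng.normal(size=(n, 3))  # e.g. standardized age, comorbidity score, lab value
assign_coef = np.array([0.8, 0.5, -0.3])  # drives (confounded) treatment assignment
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-covariates @ assign_coef)))

# Step 1: estimate each subject's propensity score P(treatment | covariates).
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: match each treated subject to the control with the closest score.
controls = np.flatnonzero(treated == 0)
pairs = [(i, controls[np.argmin(np.abs(ps[controls] - ps[i]))])
         for i in np.flatnonzero(treated == 1)]
print(f"matched {len(pairs)} treated subjects to controls on propensity score")
```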

  20. Medical big data: promise and challenges

    Directory of Open Access Journals (Sweden)

    Choong Ho Lee

    2017-03-01

    Full Text Available The concept of big data, commonly characterized by volume, variety, velocity, and veracity, goes far beyond the data type and includes the aspects of data analysis, such as hypothesis-generating, rather than hypothesis-testing. Big data focuses on temporal stability of the association, rather than on causal relationship and underlying probability distribution assumptions are frequently not required. Medical big data as material to be analyzed has various features that are not only distinct from big data of other disciplines, but also distinct from traditional clinical epidemiology. Big data technology has many areas of application in healthcare, such as predictive modeling and clinical decision support, disease or safety surveillance, public health, and research. Big data analytics frequently exploits analytic methods developed in data mining, including classification, clustering, and regression. Medical big data analyses are complicated by many technical issues, such as missing values, curse of dimensionality, and bias control, and share the inherent limitations of observation study, namely the inability to test causality resulting from residual confounding and reverse causation. Recently, propensity score analysis and instrumental variable analysis have been introduced to overcome these limitations, and they have accomplished a great deal. Many challenges, such as the absence of evidence of practical benefits of big data, methodological issues including legal and ethical issues, and clinical integration and utility issues, must be overcome to realize the promise of medical big data as the fuel of a continuous learning healthcare system that will improve patient outcome and reduce waste in areas including nephrology.

  1. What is beyond the big five?

    Science.gov (United States)

    Saucier, G; Goldberg, L R

    1998-08-01

    Previous investigators have proposed that various kinds of person-descriptive content--such as differences in attitudes or values, in sheer evaluation, in attractiveness, or in height and girth--are not adequately captured by the Big Five Model. We report on a rather exhaustive search for reliable sources of Big Five-independent variation in data from person-descriptive adjectives. Fifty-three candidate clusters were developed in a college sample using diverse approaches and sources. In a nonstudent adult sample, clusters were evaluated with respect to a minimax criterion: minimum multiple correlation with factors from Big Five markers and maximum reliability. The most clearly Big Five-independent clusters referred to Height, Girth, Religiousness, Employment Status, Youthfulness and Negative Valence (or low-base-rate attributes). Clusters referring to Fashionableness, Sensuality/Seductiveness, Beauty, Masculinity, Frugality, Humor, Wealth, Prejudice, Folksiness, Cunning, and Luck appeared to be potentially beyond the Big Five, although each of these clusters demonstrated Big Five multiple correlations of .30 to .45, and at least one correlation of .20 and over with a Big Five factor. Of all these content areas, Religiousness, Negative Valence, and the various aspects of Attractiveness were found to be represented by a substantial number of distinct, common adjectives. Results suggest directions for supplementing the Big Five when one wishes to extend variable selection outside the domain of personality traits as conventionally defined.
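    To make the minimax criterion concrete, here is a small illustrative sketch (synthetic data, not the authors' materials): for a candidate cluster score it computes the multiple correlation with Big Five marker factors, to be minimized, and Cronbach's alpha, to be maximized.

```python
# Illustrative computation of the two sides of the minimax criterion for one
# candidate adjective cluster. All data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n = 500
big_five = rng.normal(size=(n, 5))                          # Big Five marker factor scores
items = rng.normal(size=(n, 4)) + rng.normal(size=(n, 1))   # 4 adjectives sharing a common factor
cluster = items.mean(axis=1)                                 # candidate cluster score

# Multiple correlation R: regress the cluster score on the Big Five factors.
X = np.column_stack([np.ones(n), big_five])
beta, *_ = np.linalg.lstsq(X, cluster, rcond=None)
resid = cluster - X @ beta
R = np.sqrt(1 - resid.var() / cluster.var())

# Cronbach's alpha for the cluster's items (internal-consistency reliability).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(f"multiple R with Big Five = {R:.2f}; alpha = {alpha:.2f}")  # want low R, high alpha
```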

  2. Big Data Analytics and Its Applications

    Directory of Open Access Journals (Sweden)

    Mashooque A. Memon

    2017-10-01

    Full Text Available The term Big Data has been coined to refer to the extensive wave of data that cannot be managed by traditional data-handling methods or techniques. The field of Big Data plays an indispensable role in various domains, such as agriculture, banking, data mining, education, chemistry, finance, cloud computing, marketing, healthcare, and stocks. Big data analytics is the process of examining big data to reveal hidden patterns, unknown correlations, and other important information that can be used to make better decisions. There has been a perpetually expanding interest in big data because of its fast growth and because it covers many areas of application. Apache Hadoop, an open-source technology written in Java that runs on the Linux operating system, was used. The primary contribution of this research is to present an effective and free solution for big data applications in a distributed environment, with its advantages, and to show its ease of use. Consequently, an analytical review of new developments in big data technology is needed. Healthcare is one of the world's foremost concerns. Big data in healthcare refers to electronic health data sets related to patient healthcare and well-being. Data in the healthcare domain is growing beyond the managing capacity of healthcare organizations and is expected to increase significantly in the coming years.

  3. Loss of Corneodesmosin Leads to Severe Skin Barrier Defect, Pruritus, and Atopy: Unraveling the Peeling Skin Disease

    OpenAIRE

    Oji, Vinzenz; Eckl, Katja-Martina; Aufenvenne, Karin; Nätebus, Marc; Tarinski, Tatjana; Ackermann, Katharina; Seller, Natalia; Metze, Dieter; Nürnberg, Gudrun; Fölster-Holst, Regina; Schäfer-Korting, Monika; Hausser, Ingrid; Traupe, Heiko; Hennies, Hans Christian

    2010-01-01

    Generalized peeling skin disease is an autosomal-recessive ichthyosiform erythroderma characterized by lifelong patchy peeling of the skin. After genome-wide linkage analysis, we have identified a homozygous nonsense mutation in CDSN in a large consanguineous family with generalized peeling skin, pruritus, and food allergies, which leads to a complete loss of corneodesmosin. In contrast to hypotrichosis simplex, which can be associated with specific dominant CDSN mutations, peeling skin disea...

  4. 77 FR 27245 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N069; FXRS1265030000S3-123-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN AGENCY: Fish and... plan (CCP) and environmental assessment (EA) for Big Stone National Wildlife Refuge (Refuge, NWR) for...

  5. The BigBoss Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, D.; Abdalla, F.; Abraham, T.; Ahn, C.; Allende Prieto, C.; Annis, J.; Aubourg, E.; Azzaro, M.; Bailey, S.; Baltay, C.; Baugh, C.; Bebek, C.; Becerril, S.; Blanton, M.; Bolton, A.; Bromley, B.; Cahn, R.; Carton, P.-H.; Cervantes-Cota, J.L.; Chu, Y.; Cortes, M.; /APC, Paris /Brookhaven /IRFU, Saclay /Marseille, CPPM /Marseille, CPT /Durham U. / /IEU, Seoul /Fermilab /IAA, Granada /IAC, La Laguna / /IAC, Mexico / / /Madrid, IFT /Marseille, Lab. Astrophys. / / /New York U. /Valencia U.

    2012-06-07

    BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = λ/Δλ = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 2.2 < z < 3.5 quasars. BigBOSS galaxy BAO measurements combined with an analysis of the broadband power, including the Ly-alpha forest in BigBOSS quasar spectra, achieves a FOM of 395 with Planck plus Stage III priors. This FOM is based on conservative assumptions for the analysis of broadband power (k_max = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (k_max = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.

  6. Big data and educational research

    OpenAIRE

    Beneito-Montagut, Roser

    2017-01-01

    Big data and data analytics offer the promise to enhance teaching and learning, improve educational research, and advance education governance. This chapter aims to contribute to the conceptual and methodological understanding of big data and analytics within educational research. It describes the opportunities and challenges that big data and analytics bring to education, and critically explores the perils of applying a data-driven approach to education. Despite the claimed value of the...

  7. Thick-Big Descriptions

    DEFF Research Database (Denmark)

    Lai, Signe Sophus

    The paper discusses the rewards and challenges of employing commercial audience measurements data – gathered by media industries for profitmaking purposes – in ethnographic research on the Internet in everyday life. It questions claims to the objectivity of big data (Anderson 2008), the assumption...... communication systems, language and behavior appear as texts, outputs, and discourses (data to be ‘found’) – big data then documents things that in earlier research required interviews and observations (data to be ‘made’) (Jensen 2014). However, web-measurement enterprises build audiences according...... to a commercial logic (boyd & Crawford 2011) and is as such directed by motives that call for specific types of sellable user data and specific segmentation strategies. In combining big data and ‘thick descriptions’ (Geertz 1973) scholars need to question how ethnographic fieldwork might map the ‘data not seen...

  8. Initial conditions and the structure of the singularity in pre-big-bang cosmology

    NARCIS (Netherlands)

    Feinstein, A.; Kunze, K.E.; Vazquez-Mozo, M.A.

    2000-01-01

    We propose a picture, within the pre-big-bang approach, in which the universe emerges from a bath of plane gravitational and dilatonic waves. The waves interact gravitationally, breaking the exact plane symmetry, and generically lead to gravitational collapse, resulting in a singularity with the

  9. Big Data in Designing Clinical Trials: Opportunities and Challenges.

    Science.gov (United States)

    Mayo, Charles S; Matuszak, Martha M; Schipper, Matthew J; Jolly, Shruti; Hayman, James A; Ten Haken, Randall K

    2017-01-01

    Emergence of big data analytics resource systems (BDARSs) as a part of routine practice in Radiation Oncology is on the horizon. Gradually, individual researchers, vendors, and professional societies are leading initiatives to create and demonstrate use of automated systems. What are the implications for the design of clinical trials as these systems emerge? Gold-standard randomized controlled trials (RCTs) have high internal validity for the patients and settings fitting the constraints of the trial, but also have limitations including: reproducibility, generalizability to routine practice, infrequent external validation, selection bias, characterization of confounding factors, ethics, and use for rare events. BDARSs present opportunities to augment and extend RCTs. Preliminary modeling using single- and multi-institutional BDARSs may lead to better design and lower cost. Standardizations in data elements, clinical processes, and nomenclatures used to decrease variability and increase the veracity needed for automation and multi-institutional data pooling in BDARSs also support the ability to add clinical validation phases to clinical trial design and increase participation. However, volume and variety in BDARSs present other technical, policy, and conceptual challenges, including applicable statistical concepts and cloud-based technologies. In this summary, we will examine both the opportunities and the challenges of using big data in the design of clinical trials.

  10. Big Data in Designing Clinical Trials: Opportunities and Challenges

    Directory of Open Access Journals (Sweden)

    Charles S. Mayo

    2017-08-01

    Full Text Available Emergence of big data analytics resource systems (BDARSs) as a part of routine practice in Radiation Oncology is on the horizon. Gradually, individual researchers, vendors, and professional societies are leading initiatives to create and demonstrate use of automated systems. What are the implications for the design of clinical trials as these systems emerge? Gold-standard randomized controlled trials (RCTs) have high internal validity for the patients and settings fitting the constraints of the trial, but also have limitations including: reproducibility, generalizability to routine practice, infrequent external validation, selection bias, characterization of confounding factors, ethics, and use for rare events. BDARSs present opportunities to augment and extend RCTs. Preliminary modeling using single- and multi-institutional BDARSs may lead to better design and lower cost. Standardizations in data elements, clinical processes, and nomenclatures used to decrease variability and increase the veracity needed for automation and multi-institutional data pooling in BDARSs also support the ability to add clinical validation phases to clinical trial design and increase participation. However, volume and variety in BDARSs present other technical, policy, and conceptual challenges, including applicable statistical concepts and cloud-based technologies. In this summary, we will examine both the opportunities and the challenges of using big data in the design of clinical trials.

  11. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  12. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is and will be used ever more in the future as a tool for everything that happens both online and offline. Of course, online is a real hub: Big Data is found in this medium, offering many advantages and being a real help for all consumers. In this paper we discussed Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We also presented the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit discussed in this paper is presented in the cloud section.

  13. Antigravity and the big crunch/big bang transition

    Science.gov (United States)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-08-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  14. Antigravity and the big crunch/big bang transition

    Energy Technology Data Exchange (ETDEWEB)

    Bars, Itzhak [Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089-2535 (United States); Chen, Shih-Hung [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Department of Physics and School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Steinhardt, Paul J., E-mail: steinh@princeton.edu [Department of Physics and Princeton Center for Theoretical Physics, Princeton University, Princeton, NJ 08544 (United States); Turok, Neil [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2012-08-29

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  15. Antigravity and the big crunch/big bang transition

    International Nuclear Information System (INIS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-01-01

    We point out a new phenomenon which seems to be generic in 4d effective theories of scalar fields coupled to Einstein gravity, when applied to cosmology. A lift of such theories to a Weyl-invariant extension allows one to define classical evolution through cosmological singularities unambiguously, and hence construct geodesically complete background spacetimes. An attractor mechanism ensures that, at the level of the effective theory, generic solutions undergo a big crunch/big bang transition by contracting to zero size, passing through a brief antigravity phase, shrinking to zero size again, and re-emerging into an expanding normal gravity phase. The result may be useful for the construction of complete bouncing cosmologies like the cyclic model.

  16. Big data: een zoektocht naar instituties

    NARCIS (Netherlands)

    van der Voort, H.G.; Crompvoets, J

    2016-01-01

    Big data is a well-known phenomenon, even a buzzword nowadays. It refers to an abundance of data and new possibilities to process and use them. Big data is the subject of many publications. Some pay attention to the many possibilities of big data, others warn us of its consequences. This special

  17. Data, Data, Data : Big, Linked & Open

    NARCIS (Netherlands)

    Folmer, E.J.A.; Krukkert, D.; Eckartz, S.M.

    2013-01-01

    The entire business and IT world is currently talking about Big Data, a trend that overtook Cloud Computing in mid-2013 (based on Google Trends). Policymakers are also actively engaged with Big Data. Neelie Kroes, vice-president of the European Commission, speaks of the 'Big Data

  18. Study of LBS for characterization and analysis of big data benchmarks

    International Nuclear Information System (INIS)

    Chandio, A.A.; Zhang, F.; Memon, T.D.

    2014-01-01

    In the past few years, most organizations have gradually been diverting their applications and services to the Cloud, because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users on the Internet anywhere in the world. The rapid growth of urbanization in developed and developing countries leads to a new emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics, and their data, including GPS traces of vehicles, are widely available. However, these applications are data-processing and storage hungry, because their data volumes grow from a few dozen terabytes (TB) to thousands of petabytes (PB) (i.e. Big Data). To advance the development and assessment of applications such as LBS (Location Based Services), a Big Data benchmark is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, this paper also describes the current status of Big Data benchmarks and our future direction. (author)
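
    As an aside on the map-matching step this record highlights, the sketch below shows the core geometric operation in its simplest form: snapping a single GPS fix to the nearest road segment by point-to-segment projection. The coordinates, the two-segment road table, and the planar distance model are hypothetical simplifications, not material from the benchmark itself.

    ```python
    import math

    def project_point_to_segment(p, a, b):
        """Return the point on segment a-b closest to p (all 2D tuples)."""
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        seg_len_sq = dx * dx + dy * dy
        if seg_len_sq == 0.0:  # degenerate segment
            return a
        # Clamp the projection parameter t to [0, 1] so we stay on the segment.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
        return (ax + t * dx, ay + t * dy)

    def match_fix(gps_point, road_segments):
        """Snap one GPS fix to the nearest road segment (naive O(n) scan)."""
        best, best_dist = None, math.inf
        for seg_id, (a, b) in road_segments.items():
            q = project_point_to_segment(gps_point, a, b)
            d = math.dist(gps_point, q)
            if d < best_dist:
                best, best_dist = (seg_id, q), d
        return best

    # Hypothetical two-segment road network.
    roads = {"segment_1": ((0.0, 0.0), (1.0, 0.0)),
             "segment_2": ((1.0, 0.0), (1.0, 1.0))}
    print(match_fix((0.7, 0.2), roads))  # -> ('segment_1', (0.7, 0.0))
    ```

    Production map-matchers refine this with road topology and trajectory history (e.g., Hidden Markov Model matching); the projection above is only the inner distance computation.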

  19. Study on LBS for Characterization and Analysis of Big Data Benchmarks

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Chandio

    2014-10-01

    Full Text Available In the past few years, most organizations have gradually been diverting their applications and services to the Cloud, because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users on the Internet anywhere in the world. The rapid growth of urbanization in developed and developing countries leads to a new emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics, and their data, including GPS traces of vehicles, are widely available. However, these applications are data-processing and storage hungry, because their data volumes grow from a few dozen terabytes (TB) to thousands of petabytes (PB) (i.e. Big Data). To advance the development and assessment of applications such as LBS (Location Based Services), a Big Data benchmark is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, this paper also describes the current status of Big Data benchmarks and our future direction.

  20. Pre-Big-Bang bubbles from the gravitational instability of generic string vacua

    International Nuclear Information System (INIS)

    Buonanno, A.; Damour, T.; Veneziano, G.

    1999-01-01

    We formulate the basic postulate of pre-Big-Bang cosmology as one of 'asymptotic past triviality', by which we mean that the initial state is a generic perturbative solution of the tree-level low-energy effective action. Such a past-trivial 'string vacuum' is made of an arbitrary ensemble of incoming gravitational and dilatonic waves, and is generically prone to gravitational instability, leading to the possible formation of many black holes hiding singular space-like hypersurfaces. Each such singular space-like hypersurface of gravitational collapse becomes, in the string-frame metric, the usual Big-Bang t = 0 hypersurface, i.e. the place of birth of a baby Friedmann universe after a period of dilaton-driven inflation. Specializing to the spherically symmetric case, we review and reinterpret previous work on the subject, and propose a simple, scale-invariant criterion for collapse/inflation in terms of asymptotic data at past null infinity. Those data should determine whether, when, and where collapse/inflation occurs, and, when it does, fix its characteristics, including anisotropies on the Big-Bang hypersurface whose imprint could have survived till now. Using Bayesian probability concepts, we finally attempt to answer some fine-tuning objections recently moved to the pre-Big-Bang scenario

  1. Pre-Big-Bang bubbles from the gravitational instability of generic string vacua

    Energy Technology Data Exchange (ETDEWEB)

    Buonanno, A.; Damour, T.; Veneziano, G

    1999-03-22

    We formulate the basic postulate of pre-Big-Bang cosmology as one of 'asymptotic past triviality', by which we mean that the initial state is a generic perturbative solution of the tree-level low-energy effective action. Such a past-trivial 'string vacuum' is made of an arbitrary ensemble of incoming gravitational and dilatonic waves, and is generically prone to gravitational instability, leading to the possible formation of many black holes hiding singular space-like hypersurfaces. Each such singular space-like hypersurface of gravitational collapse becomes, in the string-frame metric, the usual Big-Bang t = 0 hypersurface, i.e. the place of birth of a baby Friedmann universe after a period of dilaton-driven inflation. Specializing to the spherically symmetric case, we review and reinterpret previous work on the subject, and propose a simple, scale-invariant criterion for collapse/inflation in terms of asymptotic data at past null infinity. Those data should determine whether, when, and where collapse/inflation occurs, and, when it does, fix its characteristics, including anisotropies on the Big-Bang hypersurface whose imprint could have survived till now. Using Bayesian probability concepts, we finally attempt to answer some fine-tuning objections recently moved to the pre-Big-Bang scenario.

  2. Carrot Loss during Primary Production : Field Waste and Pack House Waste.

    OpenAIRE

    Bond, Rebekka

    2016-01-01

    Background: it has been suggested that roughly one-third of all food produced for human consumption is lost or wasted globally. The reduction of loss and waste is seen as an important societal issue with considerable ethical, ecological and economic implications. Fruit and vegetables have the highest wastage rates of any food products (45 %), and a big part of this waste occurs during production, yet empirical data on loss during primary production is limited. Carrots are an important hortic...

  3. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  4. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  5. The Big bang and the Quantum

    Science.gov (United States)

    Ashtekar, Abhay

    2010-06-01

    General relativity predicts that space-time comes to an end and physics comes to a halt at the big-bang. Recent developments in loop quantum cosmology have shown that these predictions cannot be trusted. Quantum geometry effects can resolve singularities, thereby opening new vistas. Examples are: The big bang is replaced by a quantum bounce; the 'horizon problem' disappears; immediately after the big bounce, there is a super-inflationary phase with its own phenomenological ramifications; and, in the presence of a standard inflation potential, initial conditions are naturally set for a long, slow roll inflation independently of what happens in the pre-big bang branch. As in my talk at the conference, I will first discuss the foundational issues and then the implications of the new Planck scale physics near the Big Bang.

  6. Big Bang baryosynthesis

    International Nuclear Information System (INIS)

    Turner, M.S.; Chicago Univ., IL

    1983-01-01

    In these lectures I briefly review Big Bang baryosynthesis. In the first lecture I discuss the evidence which exists for the BAU, the failure of non-GUT symmetrical cosmologies, the qualitative picture of baryosynthesis, and numerical results of detailed baryosynthesis calculations. In the second lecture I discuss the requisite CP violation in some detail, further the statistical mechanics of baryosynthesis, possible complications to the simplest scenario, and one cosmological implication of Big Bang baryosynthesis. (orig./HSI)

  7. Exploiting big data for critical care research.

    Science.gov (United States)

    Docherty, Annemarie B; Lone, Nazir I

    2015-10-01

    Over recent years the digitalization, collection and storage of vast quantities of data, in combination with advances in data science, have opened up a new era of big data. In this review, we define big data, identify examples of critical care research using big data, discuss the limitations and ethical concerns of using these large datasets and finally consider scope for future research. Big data refers to datasets whose size, complexity and dynamic nature are beyond the scope of traditional data collection and analysis methods. The potential benefits to critical care are significant, with faster progress in improving health and better value for money. Although not replacing clinical trials, big data can improve their design and advance the field of precision medicine. However, there are limitations to analysing big data using observational methods. In addition, there are ethical concerns regarding maintaining confidentiality of patients who contribute to these datasets. Big data have the potential to improve medical care and reduce costs, both by individualizing medicine, and bringing together multiple sources of data about individual patients. As big data become increasingly mainstream, it will be important to maintain public confidence by safeguarding data security, governance and confidentiality.

  8. Deep Mixing of 3He: Reconciling Big Bang and Stellar Nucleosynthesis

    International Nuclear Information System (INIS)

    Eggleton, P P; Dearborn, D P; Lattanzio, J

    2006-01-01

    Low-mass stars, ∼ 1-2 solar masses, near the Main Sequence are efficient at producing 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. In this paper we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  9. Deep mixing of 3He: reconciling Big Bang and stellar nucleosynthesis.

    Science.gov (United States)

    Eggleton, Peter P; Dearborn, David S P; Lattanzio, John C

    2006-12-08

    Low-mass stars, approximately 1 to 2 solar masses, near the Main Sequence are efficient at producing the helium isotope 3He, which they mix into the convective envelope on the giant branch and should distribute into the Galaxy by way of envelope loss. This process is so efficient that it is difficult to reconcile the low observed cosmic abundance of 3He with the predictions of both stellar and Big Bang nucleosynthesis. Here we find, by modeling a red giant with a fully three-dimensional hydrodynamic code and a full nucleosynthetic network, that mixing arises in the supposedly stable and radiative zone between the hydrogen-burning shell and the base of the convective envelope. This mixing is due to Rayleigh-Taylor instability within a zone just above the hydrogen-burning shell, where a nuclear reaction lowers the mean molecular weight slightly. Thus, we are able to remove the threat that 3He production in low-mass stars poses to the Big Bang nucleosynthesis of 3He.

  10. Cascading impacts of anthropogenically driven habitat loss: deforestation, flooding, and possible lead poisoning in howler monkeys (Alouatta pigra).

    Science.gov (United States)

    Serio-Silva, Juan Carlos; Olguín, Eugenia J; Garcia-Feria, Luis; Tapia-Fierro, Karla; Chapman, Colin A

    2015-01-01

    To construct informed conservation plans, researchers must go beyond understanding readily apparent threats such as habitat loss and bush-meat hunting. They must predict subtle and cascading effects of anthropogenic environmental modifications. This study considered a potential cascading effect of deforestation on the howler monkeys (Alouatta pigra) of Balancán, Mexico. Deforestation intensifies flooding. Thus, we predicted that increased flooding of the Usumacinta River, which creates large bodies of water that slowly evaporate, would produce increased lead content in the soils and plants, resulting in lead exposure in the howler monkeys. The average lead levels were 18.18 ± 6.76 ppm in the soils and 5.85 ± 4.37 ppm in the plants. However, the average lead content of the hair of 13 captured howler monkeys was 24.12 ± 5.84 ppm. The lead levels in the animals were correlated with 2 of 15 blood traits (lactate dehydrogenase and total bilirubin) previously documented to be associated with exposure to lead. Our research illustrates the urgent need to set reference values indicating when adverse impacts of high environmental lead levels occur, whether anthropogenic or natural, and the need to evaluate possible cascading effects of deforestation on primates.

  11. Empathy and the Big Five

    OpenAIRE

    Paulus, Christoph

    2016-01-01

    More than 10 years ago, Del Barrio et al. (2004) attempted to establish a direct relationship between empathy and the Big Five. On average, the women in their sample had higher scores on empathy and on the Big Five factors, with the exception of Neuroticism. They found associations between empathy and Openness, Agreeableness, Conscientiousness, and Extraversion. In our data, women score significantly higher on both empathy and the Big Five ...

  12. Modified big-bubble technique compared to manual dissection deep anterior lamellar keratoplasty in the treatment of keratoconus.

    Science.gov (United States)

    Knutsson, Karl Anders; Rama, Paolo; Paganoni, Giorgio

    2015-08-01

    To evaluate the clinical findings and results of manual dissection deep anterior lamellar keratoplasty (DALK) compared to a modified big-bubble DALK technique in eyes affected by keratoconus. Sixty eyes of 60 patients with keratoconus were treated with one of two surgical techniques: manual DALK (n = 30) or big-bubble DALK (n = 30). The main outcomes measured were visual acuity, corneal topographic parameters, thickness of residual stroma and endothelial cell density (ECD). Patients were examined postoperatively at 1 month, 6 months, 1 year and 1 month after suture removal. Final best spectacle-corrected visual acuity (BSCVA) measured 1 month after suture removal was 0.11 ± 0.08 LogMAR in the big-bubble group compared to 0.13 ± 0.08 in the manual DALK group (p = 0.227). In patients treated with the big-bubble technique without complications (Descemet's membrane completely bared), the stromal residue was not measurable. Mean residual stromal thickness in the manual DALK group was 30.50 ± 27.60 μm. Data analysis of the manual DALK group demonstrated a significant correlation between BSCVA and residual stromal thickness; lower residual stromal thickness correlated with better BSCVA values (Spearman ρ = 0.509, p = 0.018). Postoperative ECD was similar in both groups at all intervals, with no statistically significant differences. In both groups, ECD loss was only significant during the 1- to 6-month interval (p = 0.001 and p < 0.001 in the big-bubble DALK and manual DALK groups, respectively). Manual DALK provides comparable results to big-bubble DALK. Big-bubble DALK permits faster visual recovery and is a surgical technique which can be easily converted to manual DALK in cases of unsuccessful 'big-bubble' formation. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  13. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Science.gov (United States)

    Raman, Rajeev; Rajanikanth, V; Palaniappan, Raghavan U M; Lin, Yi-Pin; He, Hongxuan; McDonough, Sean P; Sharma, Yogendra; Chang, Yung-Fu

    2010-12-29

    Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to Big domains, which would provide a novel functional role of the proteins containing the Big fold. We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.
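
    As a small numerical aside on the dissociation constants reported above, a one-site 1:1 binding model gives the occupied fraction as f = [Ca²+]/(Kd + [Ca²+]). The sketch below evaluates this for the quoted 2-4 µM Kd range; the 1:1 model and the free-Ca²+ concentrations are illustrative assumptions, not data from the study.

    ```python
    # Fraction of a single-site domain occupied by Ca2+ under a 1:1 binding
    # model, f = [Ca2+] / (Kd + [Ca2+]). Only the 2-4 uM Kd range comes from
    # the record; the free-Ca2+ concentrations below are hypothetical.

    def fraction_bound(ca_um: float, kd_um: float) -> float:
        return ca_um / (kd_um + ca_um)

    for kd in (2.0, 4.0):                   # reported Kd range, in uM
        for ca in (0.1, 1.0, 10.0, 100.0):  # assumed free [Ca2+], in uM
            print(f"Kd={kd} uM  [Ca2+]={ca:6.1f} uM  bound={fraction_bound(ca, kd):.2f}")
    ```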

  14. Is the Short Version of the Big Five Inventory (BFI-S) Applicable for Use in Telephone Surveys?

    Directory of Open Access Journals (Sweden)

    Brust Oliver A.

    2016-09-01

    Full Text Available The inclusion of psychological indicators in survey research has become more common because they offer the possibility of explaining much of the variance in sociological variables. The Big Five personality dimensions in particular are often used to explain opinions, attitudes, and behavior. However, the short versions of the Big Five Inventory (BFI-S) were developed for face-to-face surveys. Studies have shown distortions in the identification of the Big Five factor structure in subsamples of older respondents in landline telephone surveys. We applied the same BFI-S but with a shorter rating scale in a telephone survey with two subsamples (landline and mobile phone). Using exploratory structural equation modeling (ESEM), we identified the Big Five structure in the subsamples and the age groups. This finding leads us to conclude that the BFI-S is a powerful means of including personality characteristics in telephone surveys.

  15. Lead concentration in meat from lead-killed moose and predicted human exposure using Monte Carlo simulation.

    Science.gov (United States)

    Lindboe, M; Henrichsen, E N; Høgåsen, H R; Bernhoft, A

    2012-01-01

    Lead-based hunting ammunition is still common in most countries. On impact such ammunition releases fragments which are widely distributed within the carcass. In Norway, wild game is an important meat source for segments of the population and 95% of hunters use lead-based bullets. In this paper, we have investigated the lead content of ground meat from moose (Alces alces) intended for human consumption in Norway, and have predicted human exposure through this source. Fifty-two samples from different batches of ground meat from moose killed with lead-based bullets were randomly collected. The lead content was measured by atomic absorption spectroscopy. The lead intake from exposure to moose meat over time, depending on the frequency of intake and portion size, was predicted using Monte Carlo simulation. In 81% of the batches, lead levels were above the limit of quantification of 0.03 mg kg⁻¹, ranging up to 110 mg kg⁻¹. The mean lead concentration was 5.6 mg kg⁻¹, i.e. 56 times the European Commission limit for lead in meat. For consumers eating a moderate meat serving (2 g kg⁻¹ bw), a single serving would give a lead intake of 11 µg kg⁻¹ bw on average, with a maximum of 220 µg kg⁻¹ bw. Using Monte Carlo simulation, the median (and 97.5th percentile) predicted weekly intake of lead from moose meat was 12 µg kg⁻¹ bw (27 µg kg⁻¹ bw) for one serving per week and 25 µg kg⁻¹ bw (45 µg kg⁻¹ bw) for two servings per week. The results indicate that the intake of meat from big game shot with lead-based bullets makes a significant contribution to total human lead exposure. The provisional tolerable weekly intake set by the World Health Organization (WHO) of 25 µg kg⁻¹ bw is likely to be exceeded in people eating moose meat on a regular basis. The European Food Safety Authority (EFSA) has recently concluded that adverse effects may be present at even lower exposure doses. Hence, even occasional consumption of big game meat with lead levels as
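
    A minimal sketch of the kind of Monte Carlo prediction this record describes is given below. Only the 5.6 mg kg⁻¹ mean concentration, the 2 g kg⁻¹ bw portion size, and the 25 µg kg⁻¹ bw provisional tolerable weekly intake come from the abstract; the lognormal shape of the concentration distribution and its spread are assumptions made purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 100_000
    mean_pb = 5.6                        # mg lead per kg meat (abstract mean)
    sigma = 1.5                          # assumed log-space spread (illustrative)
    mu = np.log(mean_pb) - sigma**2 / 2  # chosen so the lognormal mean is mean_pb
    pb_conc = rng.lognormal(mu, sigma, n)  # mg/kg meat, i.e. ug/g meat

    portion = 2.0            # g meat per kg body weight, per serving (abstract)
    servings_per_week = 1

    # (ug/g meat) * (g meat / kg bw) = ug lead per kg bw per week
    weekly_intake = pb_conc * portion * servings_per_week

    print("median weekly intake:  %.1f ug/kg bw" % np.median(weekly_intake))
    print("97.5th percentile:     %.1f ug/kg bw" % np.percentile(weekly_intake, 97.5))
    print("P(> 25 ug/kg bw PTWI): %.2f" % (weekly_intake > 25).mean())
    ```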

  16. Semantic Web Technologies and Big Data Infrastructures: SPARQL Federated Querying of Heterogeneous Big Data Stores

    OpenAIRE

    Konstantopoulos, Stasinos; Charalambidis, Angelos; Mouchakis, Giannis; Troumpoukis, Antonis; Jakobitsch, Jürgen; Karkaletsis, Vangelis

    2016-01-01

    The ability to cross-link large scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data adds value to both the Semantic Web and to the Big Data community. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web data. The technical challenges invo...

  17. Quantum fields in a big-crunch-big-bang spacetime

    International Nuclear Information System (INIS)

    Tolley, Andrew J.; Turok, Neil

    2002-01-01

    We consider quantum field theory on a spacetime representing the big-crunch-big-bang transition postulated in ekpyrotic or cyclic cosmologies. We show via several independent methods that an essentially unique matching rule holds connecting the incoming state, in which a single extra dimension shrinks to zero, to the outgoing state in which it reexpands at the same rate. For free fields in our construction there is no particle production from the incoming adiabatic vacuum. When interactions are included the particle production for fixed external momentum is finite at the tree level. We discuss a formal correspondence between our construction and quantum field theory on de Sitter spacetime

  18. Turning big bang into big bounce: II. Quantum dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Malkiewicz, Przemyslaw; Piechocki, Wlodzimierz, E-mail: pmalk@fuw.edu.p, E-mail: piech@fuw.edu.p [Theoretical Physics Department, Institute for Nuclear Studies, Hoza 69, 00-681 Warsaw (Poland)

    2010-11-21

    We analyze the big bounce transition of the quantum Friedmann-Robertson-Walker model in the setting of the nonstandard loop quantum cosmology (LQC). Elementary observables are used to quantize composite observables. The spectrum of the energy density operator is bounded and continuous. The spectrum of the volume operator is bounded from below and discrete. It has equally distant levels defining a quantum of the volume. The discreteness may imply a foamy structure of spacetime at a semiclassical level which may be detected in astro-cosmo observations. The nonstandard LQC method has a free parameter that should be fixed in some way to specify the big bounce transition.

  19. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
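
    To make the sorted-array and bit-array idea behind IEJoin concrete, the toy self-join below finds all pairs (i, j) with a[i] < a[j] and b[i] > b[j] by ordering rows on one attribute and scanning a bit array in the order of the other. It assumes distinct attribute values and a single relation, so it is a simplified sketch of the principle rather than the full IEJoin algorithm of the dissertation.

    ```python
    def inequality_self_join(rows):
        """Return all index pairs (i, j) with a[i] < a[j] and b[i] > b[j]."""
        n = len(rows)
        by_a = sorted(range(n), key=lambda i: rows[i][0])  # row ids ordered by a
        rank_a = {idx: p for p, idx in enumerate(by_a)}    # row id -> rank by a
        bits = [False] * n
        pairs = []
        # Visit rows in increasing b; every row already marked has a smaller b.
        for i in sorted(range(n), key=lambda i: rows[i][1]):
            p = rank_a[i]
            # Marked rows to the right of p in the a-order have a larger a and
            # a smaller b, so (i, j) satisfies a[i] < a[j] and b[i] > b[j].
            pairs.extend((i, by_a[q]) for q in range(p + 1, n) if bits[q])
            bits[p] = True
        return pairs

    rows = [(1, 9), (2, 5), (3, 7), (4, 1)]  # (a, b) tuples
    print(inequality_self_join(rows))
    # -> [(1, 3), (2, 3), (0, 1), (0, 2), (0, 3)]
    ```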

  20. The ethics of big data in big agriculture

    Directory of Open Access Journals (Sweden)

    Isabelle M. Carbonell

    2016-03-01

    Full Text Available This paper examines the ethics of big data in agriculture, focusing on the power asymmetry between farmers and large agribusinesses like Monsanto. Following the recent purchase of Climate Corp., Monsanto is currently the most prominent biotech agribusiness to buy into big data. With wireless sensors on tractors monitoring or dictating every decision a farmer makes, Monsanto can now aggregate large quantities of previously proprietary farming data, enabling a privileged position with unique insights on a field-by-field basis into a third or more of the US farmland. This power asymmetry may be rebalanced through open-sourced data, and publicly-funded data analytic tools which rival Climate Corp. in complexity and innovation for use in the public domain.

  1. Homogeneous and isotropic big rips?

    CERN Document Server

    Giovannini, Massimo

    2005-01-01

    We investigate the way big rips are approached in a fully inhomogeneous description of the space-time geometry. If the pressure and energy densities are connected by a (supernegative) barotropic index, the spatial gradients and the anisotropic expansion decay as the big rip is approached. This behaviour is contrasted with the usual big-bang singularities. A similar analysis is performed in the case of sudden (quiescent) singularities and it is argued that the spatial gradients may well be non-negligible in the vicinity of pressure singularities.

  2. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  3. [Big data in medicine and healthcare].

    Science.gov (United States)

    Rüping, Stefan

    2015-08-01

    Healthcare is one of the business fields with the highest Big Data potential. According to the prevailing definition, Big Data refers to the fact that data today is often too large and heterogeneous and changes too quickly to be stored, processed, and transformed into value by previous technologies. Technological trends drive Big Data: business processes are increasingly executed electronically, consumers themselves produce ever more data - e.g. in social networks - and digitalization keeps advancing. Currently, several new trends towards new data sources and innovative data analysis appear in medicine and healthcare. From the research perspective, omics research is one clear Big Data topic. In practice, electronic health records, free open data and the "quantified self" offer new perspectives for data analytics. Regarding analytics, significant advances have been made in information extraction from text data, which unlocks a lot of data from clinical documentation for analytics purposes. At the same time, medicine and healthcare are lagging behind in the adoption of Big Data approaches. This can be traced to particular problems regarding data complexity and organizational, legal, and ethical challenges. The growing uptake of Big Data in general, and the first best-practice examples in medicine and healthcare in particular, indicate that innovative solutions are coming. This paper gives an overview of the potentials of Big Data in medicine and healthcare.

  4. Perspectives on making big data analytics work for oncology.

    Science.gov (United States)

    El Naqa, Issam

    2016-12-01

    Oncology, with its unique combination of clinical, physical, technological, and biological data, provides an ideal case study for applying big data analytics to improve cancer treatment safety and outcomes. An oncology treatment course such as chemoradiotherapy can generate a large pool of information carrying the 5Vs hallmarks of big data. These data comprise a heterogeneous mixture of patient demographics, radiation/chemo dosimetry, multimodality imaging features, and biological markers generated over a treatment period that can span a few days to several weeks. Efforts using commercial and in-house tools are underway to facilitate data aggregation, ontology creation, sharing, visualization and varying analytics in a secure environment. However, open questions related to proper data structure representation and effective analytics tools to support oncology decision-making need to be addressed. It is recognized that oncology data constitute a mix of structured (tabulated) and unstructured (electronic documents) data that need to be processed to facilitate searching and subsequent knowledge discovery from relational or NoSQL databases. In this context, methods based on advanced analytics and image feature extraction for oncology applications will be discussed. On the other hand, the classical p (variables) ≫ n (samples) inference problem of statistical learning is challenged in the Big data realm, and this is particularly true for oncology applications where p-omics is witnessing exponential growth while the number of cancer incidences has generally plateaued over the past 5 years, leading to a quasi-linear growth in samples per patient. Within the Big data paradigm, this kind of phenomenon may yield undesirable effects such as echo chamber anomalies, Yule-Simpson reversal paradox, or misleading ghost analytics. In this work, we will present these effects as they pertain to oncology and engage small thinking methodologies to counter these effects ranging from

  5. Cartography in the Age of Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    WANG Jiayao

    2017-10-01

    Full Text Available Cartography is an ancient science with almost as long a history as the world's oldest culture. Since ancient times, the movement and change of all things and phenomena, including human activities, have taken place in a certain time and space. The development of science and technology and the progress of social civilization have made social management and governance more and more dependent on time and space. The information sources, themes, content, carriers, forms, production methods and application methods of maps differ across historical periods, and so does their overall value. With the arrival of the big data age, the scientific paradigm has entered the "data-intensive" era, and so has cartography, which now shows obvious characteristics of a big data science. All big data are generated by the movement and change of things and phenomena in the geographic world, so they have spatial and temporal characteristics and cannot be separated from spatial and temporal references. Big data, therefore, are essentially big spatio-temporal data. Since the late 1950s and early 1960s, modern cartography, that is, cartography in the information age, has taken spatio-temporal data as its object and focused on the processing and expression of those data, but it has not faced the large-scale, multi-source, heterogeneous and multi-dimensional dynamic data flows (or flow data) from the sky to the sea. The real-time dynamics, thematic pertinence, content complexity, carrier diversification, personalized forms of expression, modernized production methods and ubiquitous applications of today's maps are incomparable to those of any past period, which leads to great changes in the theory, technology and application system of cartography. All these changes happen to have occurred in the 60 years since the late 1950s and early 1960s, so this article was written to commemorate the 60th anniversary of the "Acta Geodaetica et Cartographica Sinica".

  6. Using 'big data' to validate claims made in the pharmaceutical approval process.

    Science.gov (United States)

    Wasser, Thomas; Haynes, Kevin; Barron, John; Cziraky, Mark

    2015-01-01

    Big Data in the healthcare setting refers to the storage, assimilation, and analysis of large quantities of information regarding patient care. These data can be collected and stored in a wide variety of ways, including electronic medical records collected at the patient bedside, or through medical records that are coded and passed to insurance companies for reimbursement. When these data are processed, it is possible to validate claims as a part of the regulatory review process regarding the anticipated performance of medications and devices. To properly analyze claims by manufacturers and others, claims need to be expressed in terms that are testable in a timeframe that is useful and meaningful to formulary committees. Claims for the comparative benefits and costs, including budget impact, of products and devices need to be expressed in measurable terms, ideally in the context of submission or validation protocols. Claims should be either consistent with accessible Big Data or able to support observational studies where Big Data identifies target populations. Protocols should identify, in disaggregated terms, key variables that would lead to direct or proxy validation. Once these variables are identified, Big Data can be used to query massive quantities of data in the validation process. Research can be passive or active in nature: passive, where the data are collected retrospectively; active, where the researcher prospectively looks for indicators of co-morbid conditions, side-effects or adverse events, testing these indicators to determine whether claims are within the desired ranges set forth by the manufacturer. Additionally, Big Data can be used to assess the effectiveness of therapy through health insurance records. This, for example, could indicate that disease or co-morbid conditions cease to be treated. Understanding the basic strengths and weaknesses of Big Data in the claim validation process provides a glimpse of the value that this research
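
    A schematic of the "passive" (retrospective) flavor of validation described above might look like the following: query a claims table for patients on a drug, measure an adverse-event rate, and compare it against the rate asserted in the submission. The table layout, column names, drug and event codes, and the 5% ceiling are all hypothetical.

    ```python
    import pandas as pd

    # Toy claims table; real claims data would hold millions of coded rows.
    claims = pd.DataFrame({
        "patient_id": [1, 1, 2, 3, 3, 4],
        "drug":       ["drugX", "drugX", "drugX", "drugX", "drugX", "drugY"],
        "event_code": ["RX_FILL", "AE_RASH", "RX_FILL", "RX_FILL", "RX_FILL", "RX_FILL"],
    })

    on_drug = set(claims.loc[claims["drug"] == "drugX", "patient_id"])
    with_ae = set(claims.loc[claims["event_code"] == "AE_RASH", "patient_id"])

    observed_rate = len(on_drug & with_ae) / len(on_drug)
    claimed_ceiling = 0.05  # hypothetical rate asserted during approval

    print(f"observed adverse-event rate: {observed_rate:.1%}")
    print("within claim" if observed_rate <= claimed_ceiling
          else "claim flagged for review")
    ```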

  7. Results from the Big Spring basin water quality monitoring and demonstration projects, Iowa, USA

    Science.gov (United States)

    Rowden, R.D.; Liu, H.; Libra, R.D.

    2001-01-01

    Agricultural practices, hydrology, and water quality of the 267-km² Big Spring groundwater drainage basin in Clayton County, Iowa, have been monitored since 1981. Land use is agricultural; nitrate-nitrogen (nitrate-N) and herbicides are the resulting contaminants in groundwater and surface water. Ordovician Galena Group carbonate rocks comprise the main aquifer in the basin. Recharge to this karstic aquifer is by infiltration, augmented by sinkhole-captured runoff. Groundwater is discharged at Big Spring, where quantity and quality of the discharge are monitored. Monitoring has shown a threefold increase in groundwater nitrate-N concentrations from the 1960s to the early 1980s. The nitrate-N discharged from the basin typically is equivalent to over one-third of the nitrogen fertilizer applied, with larger losses during wetter years. Atrazine is present in groundwater all year; however, contaminant concentrations in the groundwater respond directly to recharge events, and unique chemical signatures of infiltration versus runoff recharge are detectable in the discharge from Big Spring. Education and demonstration efforts have reduced nitrogen fertilizer application rates by one-third since 1981. Relating declines in nitrate and pesticide concentrations to inputs of nitrogen fertilizer and pesticides at Big Spring is problematic. Annual recharge has varied five-fold during monitoring, overshadowing any water-quality improvements resulting from incrementally decreased inputs. © Springer-Verlag 2001.

  8. From Big Data to Big Business

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten

    2017-01-01

    Idea in Brief: Problem: There is an enormous profit potential for manufacturing firms in big data, but one of the key barriers to obtaining data-driven growth is the lack of knowledge about which capabilities are needed to extract value and profit from data. Solution: We (BDBB research group at C...

  9. Making big sense from big data in toxicology by read-across.

    Science.gov (United States)

    Hartung, Thomas

    2016-01-01

    Modern information technologies have made big data available in safety sciences, i.e., extremely large data sets that may be analyzed only computationally to reveal patterns, trends and associations. This happens by (1) compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches and some other high-content technologies leave us with big data--the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but can similarly complement other incomplete datasets. Big data are first of all repositories for finding similar substances and ensure that the available data is fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented among others.
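
    A minimal sketch of the similarity-based gap filling that read-across performs: predict a property of a query substance from its most similar data-rich analogues, here using Tanimoto similarity on toy binary fingerprints. The fingerprints, labels, and k = 2 choice are invented for illustration; the REACH-across tool named in the record combines far richer structural and biological similarity evidence.

    ```python
    def tanimoto(fp1, fp2):
        """Tanimoto similarity between two equal-length binary fingerprints."""
        both = sum(a & b for a, b in zip(fp1, fp2))
        either = sum(a | b for a, b in zip(fp1, fp2))
        return both / either if either else 0.0

    known = {  # substance -> (toy fingerprint, property label: 1 toxic / 0 not)
        "analog_A": ((1, 1, 0, 1, 0), 1),
        "analog_B": ((1, 0, 0, 1, 0), 1),
        "analog_C": ((0, 0, 1, 0, 1), 0),
    }

    def read_across(query_fp, known, k=2):
        """Similarity-weighted label of the k most similar analogues."""
        ranked = sorted(known.items(),
                        key=lambda kv: tanimoto(query_fp, kv[1][0]), reverse=True)
        top = ranked[:k]
        num = sum(tanimoto(query_fp, fp) * y for _, (fp, y) in top)
        den = sum(tanimoto(query_fp, fp) for _, (fp, _) in top)
        return num / den if den else None

    print(read_across((1, 1, 0, 0, 0), known))  # near 1.0 -> predicted toxic
    ```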

  10. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  11. Big-Leaf Mahogany on CITES Appendix II: Big Challenge, Big Opportunity

    Science.gov (United States)

    JAMES GROGAN; PAULO BARRETO

    2005-01-01

    On 15 November 2003, big-leaf mahogany (Swietenia macrophylla King, Meliaceae), the most valuable widely traded Neotropical timber tree, gained strengthened regulatory protection from its listing on Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). CITES is a United Nations-chartered agreement signed by 164...

  12. Big Data as Information Barrier

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-07-01

    Full Text Available The article covers the analysis of 'Big Data', which has been discussed over the last 10 years. The reasons for and factors behind the issue are revealed. It is shown that the factors creating the 'Big Data' issue have existed for quite a long time and, from time to time, have caused informational barriers. Such barriers were successfully overcome through science and technology. The conducted analysis treats the 'Big Data' issue as a form of information barrier. This issue may be solved correctly, and it encourages the development of scientific and computational methods.

  13. Big Data in Space Science

    OpenAIRE

    Barmby, Pauline

    2018-01-01

    It seems like “big data” is everywhere these days. In planetary science and astronomy, we’ve been dealing with large datasets for a long time. So how “big” is our data? How does it compare to the big data that a bank or an airline might have? What new tools do we need to analyze big datasets, and how can we make better use of existing tools? What kinds of science problems can we address with these? I’ll address these questions with examples including ESA’s Gaia mission, ...

  14. Bid to recreate the Big Bang and unlock the secrets of life hits a

    CERN Multimedia

    Morgan, James

    2007-01-01

    "It was not the kind of "big band" they were hopint for - but the explosion at the new £6.81 bn particle accelerator in Switzerland on Saturday, was "not a major setback", says a British scientist who is leading the project." (1 page)

  15. Big Data in Medicine is Driving Big Changes

    Science.gov (United States)

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of "Big Data" in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  16. Main Issues in Big Data Security

    Directory of Open Access Journals (Sweden)

    Julio Moreno

    2016-09-01

    Full Text Available Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.

  17. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust?

    Science.gov (United States)

    Arora, Vineet M

    2018-06-01

    With the advent of electronic medical records (EMRs) fueling the rise of big data, predictive analytics, machine learning, and artificial intelligence are touted as transformational tools to improve clinical care. While major investments are being made in using big data to transform health care delivery, little effort has been directed toward exploiting big data to improve graduate medical education (GME). Because our current system relies on faculty observations of competence, it is not unreasonable to ask whether big data in the form of clinical EMRs and other novel data sources can answer questions of importance in GME, such as when a resident is ready for independent practice. The timing is ripe for such a transformation. A recent National Academy of Medicine report called for reforms to how GME is delivered and financed. While many agree on the need to ensure that GME meets our nation's health needs, there is little consensus on how to measure the performance of GME in meeting this goal. During a recent workshop at the National Academy of Medicine on GME outcomes and metrics in October 2017, a key theme emerged: Big data holds great promise to inform GME performance at individual, institutional, and national levels. In this Invited Commentary, several examples are presented, such as using big data to inform clinical experience and provide clinically meaningful data to trainees, and using novel data sources, including ambient data, to better measure the quality of GME training.

  18. ATM loss leads to synthetic lethality in BRCA1 BRCT mutant mice associated with exacerbated defects in homology-directed repair.

    Science.gov (United States)

    Chen, Chun-Chin; Kass, Elizabeth M; Yen, Wei-Feng; Ludwig, Thomas; Moynahan, Mary Ellen; Chaudhuri, Jayanta; Jasin, Maria

    2017-07-18

    BRCA1 is essential for homology-directed repair (HDR) of DNA double-strand breaks in part through antagonism of the nonhomologous end-joining factor 53BP1. The ATM kinase is involved in various aspects of DNA damage signaling and repair, but how ATM participates in HDR and genetically interacts with BRCA1 in this process is unclear. To investigate this question, we used the Brca1 S1598F mouse model carrying a mutation in the C-terminal BRCT domain of BRCA1. Whereas ATM loss leads to a mild HDR defect in adult somatic cells, we find that ATM inhibition leads to severely reduced HDR in Brca1 S1598F cells. Consistent with a critical role for ATM in HDR in this background, loss of ATM leads to synthetic lethality of Brca1 S1598F mice. Whereas both ATM and BRCA1 promote end resection, which can be regulated by 53BP1, 53bp1 deletion does not rescue the HDR defects of Atm mutant cells, in contrast to Brca1 mutant cells. These results demonstrate that ATM has a role in HDR independent of the BRCA1-53BP1 antagonism and that its HDR function can become critical in certain contexts.

  19. Envisioning the future of 'big data' biomedicine.

    Science.gov (United States)

    Bui, Alex A T; Van Horn, John Darrell

    2017-05-01

    Through the increasing availability of more efficient data collection procedures, biomedical scientists are now confronting ever larger sets of data, often finding themselves struggling to process and interpret what they have gathered. This, while still more data continues to accumulate. This torrent of biomedical information necessitates creative thinking about how the data are being generated, how they might be best managed, analyzed, and eventually how they can be transformed into further scientific understanding for improving patient care. Recognizing this as a major challenge, the National Institutes of Health (NIH) has spearheaded the "Big Data to Knowledge" (BD2K) program - the agency's most ambitious biomedical informatics effort ever undertaken to date. In this commentary, we describe how the NIH has taken on "big data" science head-on, how a consortium of leading research centers are developing the means for handling large-scale data, and how such activities are being marshalled for the training of a new generation of biomedical data scientists. All in all, the NIH BD2K program seeks to position data science at the heart of 21st-century biomedical research. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A SWOT Analysis of Big Data

    Science.gov (United States)

    Ahmadi, Mohammad; Dileepan, Parthasarati; Wheatley, Kathleen K.

    2016-01-01

    This is the decade of data analytics and big data, but not everyone agrees with the definition of big data. Some researchers see it as the future of data analysis, while others consider it as hype and foresee its demise in the near future. No matter how it is defined, big data for the time being is having its glory moment. The most important…

  1. A survey of big data research

    Science.gov (United States)

    Fang, Hua; Zhang, Zhaoyang; Wang, Chanpaul Jin; Daneshmand, Mahmoud; Wang, Chonggang; Wang, Honggang

    2015-01-01

    Big data create values for business and research, but pose significant challenges in terms of networking, storage, management, analytics and ethics. Multidisciplinary collaborations from engineers, computer scientists, statisticians and social scientists are needed to tackle, discover and understand big data. This survey presents an overview of big data initiatives, technologies and research in industries and academia, and discusses challenges and potential solutions. PMID:26504265

  2. Big Data in Action for Government : Big Data Innovation in Public Services, Policy, and Engagement

    OpenAIRE

    World Bank

    2017-01-01

    Governments have an opportunity to harness big data solutions to improve productivity, performance and innovation in service delivery and policymaking processes. In developing countries, governments have an opportunity to adopt big data solutions and leapfrog traditional administrative approaches

  3. 78 FR 3911 - Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive...

    Science.gov (United States)

    2013-01-17

    ... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R3-R-2012-N259; FXRS1265030000-134-FF03R06000] Big Stone National Wildlife Refuge, Big Stone and Lac Qui Parle Counties, MN; Final Comprehensive... significant impact (FONSI) for the environmental assessment (EA) for Big Stone National Wildlife Refuge...

  4. Big domains are novel Ca²+-binding modules: evidences from big domains of Leptospira immunoglobulin-like (Lig) proteins.

    Directory of Open Access Journals (Sweden)

    Rajeev Raman

    Full Text Available BACKGROUND: Many bacterial surface exposed proteins mediate the host-pathogen interaction more effectively in the presence of Ca²+. Leptospiral immunoglobulin-like (Lig) proteins, LigA and LigB, are surface exposed proteins containing Bacterial immunoglobulin-like (Big) domains. The function of proteins which contain the Big fold is not known. Based on the possible similarities of immunoglobulin and βγ-crystallin folds, we here explore the important question whether Ca²+ binds to Big domains, which would provide a novel functional role of the proteins containing the Big fold. PRINCIPAL FINDINGS: We selected six individual Big domains for this study (three from the conserved part of LigA and LigB, denoted as Lig A3, Lig A4, and LigBCon5; two from the variable region of LigA, i.e., the 9th (Lig A9) and 10th (Lig A10) repeats; and one from the variable region of LigB, i.e., LigBCen2). We have also studied the conserved region covering the three and six repeats (LigBCon1-3 and LigCon). All these proteins bind the calcium-mimic dye Stains-all. All the selected four domains bind Ca²+ with dissociation constants of 2-4 µM. Lig A9 and Lig A10 domains fold well with moderate thermal stability, have β-sheet conformation and form homodimers. Fluorescence spectra of Big domains show a specific doublet (at 317 and 330 nm), probably due to Trp interaction with a Phe residue. Equilibrium unfolding of selected Big domains is similar and follows a two-state model, suggesting the similarity in their fold. CONCLUSIONS: We demonstrate that the Lig proteins are Ca²+-binding proteins, with Big domains harbouring the binding motif. We conclude that despite differences in sequence, a Big motif binds Ca²+. This work thus sets up a strong possibility for classifying the proteins containing Big domains as a novel family of Ca²+-binding proteins. Since the Big domain is a part of many proteins in the bacterial kingdom, we suggest a possible function for these proteins via Ca²+ binding.

  5. 2nd INNS Conference on Big Data

    CERN Document Server

    Manolopoulos, Yannis; Iliadis, Lazaros; Roy, Asim; Vellasco, Marley

    2017-01-01

    The book offers a timely snapshot of neural network technologies as a significant component of big data analytics platforms. It promotes new advances and research directions in efficient and innovative algorithmic approaches to analyzing big data (e.g. deep networks, nature-inspired and brain-inspired algorithms); implementations on different computing platforms (e.g. neuromorphic, graphics processing units (GPUs), clouds, clusters); and big data analytics applications to solve real-world problems (e.g. weather prediction, transportation, energy management). The book, which reports on the second edition of the INNS Conference on Big Data, held on October 23–25, 2016, in Thessaloniki, Greece, depicts an interesting collaborative adventure of neural networks with big data and other learning technologies.

  6. Atmosphere Impact Losses

    Science.gov (United States)

    Schlichting, Hilke E.; Mukhopadhyay, Sujoy

    2018-02-01

    Determining the origin of volatiles on terrestrial planets and quantifying atmospheric loss during planet formation is crucial for understanding the history and evolution of planetary atmospheres. Using geochemical observations of noble gases and major volatiles, we determine what the present-day inventory of volatiles tells us about the sources, the accretion process and the early differentiation of the Earth. We further quantify the key volatile loss mechanisms and the atmospheric loss history during Earth's formation. Volatiles were accreted throughout the Earth's formation, but Earth's early accretion history was volatile poor. Although nebular Ne and possible H in the deep mantle might be a fingerprint of this early accretion, most of the mantle does not remember this signature, implying that volatile loss occurred during accretion. Present-day geochemistry of volatiles shows no evidence of hydrodynamic escape, as the isotopic compositions of most volatiles are chondritic. This suggests that atmospheric loss generated by impacts played a major role during Earth's formation. While many of the volatiles have chondritic isotopic ratios, their relative abundances are certainly not chondritic, again suggesting volatile loss tied to impacts. Geochemical evidence of atmospheric loss comes from the ³He/²²Ne ratio, halogen ratios (e.g., F/Cl) and low H/N ratios. In addition, the geochemical ratios indicate that most of the water could have been delivered prior to the Moon-forming impact and that the Moon-forming impact did not drive off the ocean. Given the importance of impacts in determining the volatile budget of the Earth, we examine the contributions to atmospheric loss from both small and large impacts. We find that atmospheric mass loss due to impacts can be characterized into three different regimes: 1) Giant Impacts, which create a strong shock traversing the whole planet and can lead to atmospheric loss globally. 2) Large enough impactors (m_{cap} ≳ √{2

  7. Metals in riparian wildlife of the lead mining district of southeastern Missouri. [Rana catesbeiana, Ondatra zibethicus; Butorides striatus; Nerodia sipedon; Stelgidopteryx serripennis, Riparia riparia

    Energy Technology Data Exchange (ETDEWEB)

    Niethammer, K.R.; Atkinson, R.D.; Baskett, T.S.; Samson, F.B.

    1985-03-01

    Five species of riparian vertebrates (425 individuals) primarily representing upper trophic levels were collected from the Big River and Black River drainages in two lead mining districts of southeastern Missouri, 1981-82. Big River is subject to metal pollution via erosion and seepage from large tailings piles from inactive lead mines. Black River drains part of a currently mined area. Bullfrogs (Rana catesbeiana), muskrats (Ondatra zibethicus), and green-backed herons (Butorides striatus) collected downstream from the source of metal contamination to Big River had significantly higher lead and cadmium levels than specimens collected at either an uncontaminated upstream site or on Black River. Northern water snakes (Nerodia sipedon) had elevated lead levels below the tailings source, but did not seem to accumulate cadmium. Levels of lead, cadmium, or zinc in northern rough-winged swallows (Stelgidopteryx serripennis) were not related to collecting locality. Carcasses of ten bank swallows (Riparia riparia) collected from a colony nesting in a tailings pile along the Big River had lead concentrations of 2.0-39 ppm wet weight. Differences between zinc concentrations in vertebrates collected from contaminated and uncontaminated sites were less apparent than differences in lead and cadmium. There was little relationship between metal concentrations in the animals studied and their trophic levels. Bullfrogs are the most promising species examined for monitoring environmental levels of lead, cadmium, and zinc.

  8. Research on Ontology Modeling of Steel Manufacturing Process Based on Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Bao Qing

    2016-01-01

    Full Text Available As an important way for steel industries to ride the Industrie 4.0 wave, knowledge management is expected to be versatile, effective and intelligent. Difficulties in mechanism modeling, numerous influencing factors and complex industrial chains hinder the integration of knowledge and information. By exploiting the potential of data, big data analysis can be an effective approach to knowledge acquisition, as it compensates for the inaccuracy and imperfection that mechanism modeling may introduce. This paper proposes a big data knowledge management system (BDAKMS) adhering to the principles of data driving, intelligent analysis, service publication and dynamic updating, which can effectively extract knowledge from mass data. Ontology modeling then gives the knowledge unified descriptions as well as inference details, combined with semantic web techniques.

  9. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of big sensor data have attracted considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data are modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs are efficiently aggregated to reduce network resource consumption and that sensor data privacy is effectively protected to meet ever-growing application requirements.
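
    The aggregation flow summarized above can be pictured with a minimal sketch: each node masks its reading with a pseudo-random offset derived from a sink-distributed seed, cluster heads forward only sums, and the sink removes the offsets it can regenerate. The additive-masking scheme and all names in the Python sketch below are illustrative assumptions; the actual Sca-PBDA protocol is not reproduced here.

      # Minimal sketch (assumed details) of seed-based masking plus
      # in-network aggregation; not the paper's actual protocol.
      import random

      def node_report(reading, seed):
          # Mask the private reading with an offset the sink can regenerate.
          return reading + random.Random(seed).uniform(-10, 10)

      def cluster_aggregate(reports):
          # Intra-cluster step: the cluster head forwards only the sum.
          return sum(reports)

      def sink_recover(aggregate, seeds):
          # The sink strips the offsets it derives from the seeds it issued.
          masks = sum(random.Random(s).uniform(-10, 10) for s in seeds)
          return aggregate - masks

      readings = [21.3, 22.1, 20.8, 23.0]   # raw sensor data (kept private)
      seeds = [101, 102, 103, 104]          # distributed by the sink
      reports = [node_report(r, s) for r, s in zip(readings, seeds)]
      print(sink_recover(cluster_aggregate(reports), seeds), sum(readings))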

  10. Compromised data from social media to big data

    CERN Document Server

    Redden, Joanna; Langlois, Ganaele

    2015-01-01

    There has been a data rush in the past decade brought about by online communication and, in particular, social media (Facebook, Twitter, Youtube, among others), which promises a new age of digital enlightenment. But social data is compromised: it is being seized by specific economic interests, it leads to a fundamental shift in the relationship between research and the public good, and it fosters new forms of control and surveillance. Compromised Data: From Social Media to Big Data explores how we perform critical research within a compromised social data framework. The expert, international l

  11. Ethische aspecten van big data

    NARCIS (Netherlands)

    N. (Niek) van Antwerpen; Klaas Jan Mollema

    2017-01-01

    Big data has not only led to challenging technical questions, it is also accompanied by all kinds of new ethical and moral issues. To handle big data responsibly, these issues must also be considered, because poor use of data can have adverse consequences for

  12. Epidemiology in wonderland: Big Data and precision medicine.

    Science.gov (United States)

    Saracci, Rodolfo

    2018-03-01

    Big Data and precision medicine, two major contemporary challenges for epidemiology, are critically examined from two different angles. In Part 1, Big Data collected for research purposes (Big research Data) and Big Data used for research although collected for other primary purposes (Big secondary Data) are discussed in the light of the fundamental common requirement of data validity, which prevails over "bigness". Precision medicine is treated by developing the key point that high relative risks are as a rule required to make a variable or combination of variables suitable for predicting disease occurrence, outcome or response to treatment; the commercial proliferation of allegedly predictive tests of unknown or poor validity is commented on. Part 2 proposes a "wise epidemiology" approach to: (a) choosing, in a context imprinted by Big Data and precision medicine, epidemiological research projects actually relevant to population health; (b) training epidemiologists; (c) investigating the impact on clinical practice and the doctor-patient relationship of the influx of Big Data and computerized medicine; and (d) clarifying whether today "health" may be redefined, as some maintain, in purely technological terms.

  13. Big Data and Analytics in Healthcare.

    Science.gov (United States)

    Tan, S S-L; Gao, G; Koch, S

    2015-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges in processing and analyzing big data in healthcare. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.

  14. Big Data for Business Ecosystem Players

    Directory of Open Access Journals (Sweden)

    Perko Igor

    2016-06-01

    Full Text Available In the provided research, some of the most prospective Big Data usage domains are connected with distinguished player groups found in the business ecosystem. Literature analysis is used to identify the state of the art of Big Data-related research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the business ecosystem's major player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and players are explained through new Big Data opportunities and threats and by players’ responsive strategies. System dynamics was used to visualize relationships in the provided model.

  15. "Big data" in economic history.

    Science.gov (United States)

    Gutmann, Myron P; Merchant, Emily Klancher; Roberts, Evan

    2018-03-01

    Big data is an exciting prospect for the field of economic history, which has long depended on the acquisition, keying, and cleaning of scarce numerical information about the past. This article examines two areas in which economic historians are already using big data - population and environment - discussing ways in which increased frequency of observation, denser samples, and smaller geographic units allow us to analyze the past with greater precision and often to track individuals, places, and phenomena across time. We also explore promising new sources of big data: organically created economic data, high resolution images, and textual corpora.

  16. Big Data Knowledge in Global Health Education.

    Science.gov (United States)

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  17. GEOSS: Addressing Big Data Challenges

    Science.gov (United States)

    Nativi, S.; Craglia, M.; Ochiai, O.

    2014-12-01

    In the sector of Earth Observation, the explosion of data is due to many factors including: new satellite constellations, the increased capabilities of sensor technologies, social media, crowdsourcing, and the need for multidisciplinary and collaborative research to face Global Changes. In this area, there are many expectations and concerns about Big Data. Vendors have attempted to use this term for their commercial purposes. It is necessary to understand whether Big Data is a radical shift or an incremental change for the existing digital infrastructures. This presentation tries to explore and discuss the impact of Big Data challenges and new capabilities on the Global Earth Observation System of Systems (GEOSS) and particularly on its common digital infrastructure called GCI. GEOSS is a global and flexible network of content providers allowing decision makers to access an extraordinary range of data and information at their desk. The impact of the Big Data dimensionalities (commonly known as 'V' axes: volume, variety, velocity, veracity, visualization) on GEOSS is discussed. The main solutions and experimentation developed by GEOSS along these axes are introduced and analyzed. GEOSS is a pioneering framework for global and multidisciplinary data sharing in the Earth Observation realm; its experience on Big Data is valuable for the many lessons learned.

  18. Big data for bipolar disorder.

    Science.gov (United States)

    Monteith, Scott; Glenn, Tasha; Geddes, John; Whybrow, Peter C; Bauer, Michael

    2016-12-01

    The delivery of psychiatric care is changing with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples that relate to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and from sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data that relates to size, heterogeneity, complexity, and unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.

  19. BIG DATA IN TAMIL: OPPORTUNITIES, BENEFITS AND CHALLENGES

    OpenAIRE

    R.S. Vignesh Raj; Babak Khazaei; Ashik Ali

    2015-01-01

    This paper gives an overall introduction to big data and has tried to introduce Big Data in Tamil. It discusses the potential opportunities, benefits and likely challenges from a very Tamil and Tamil Nadu perspective. The paper has also made an original contribution by proposing ‘big data’ terminology in Tamil. The paper further suggests a few areas to explore using big data in Tamil on the lines of the Tamil Nadu Government ‘vision 2023’. Whilst big data has something to offer everyone, it ...

  20. Big data in biomedicine.

    Science.gov (United States)

    Costa, Fabricio F

    2014-04-01

    The increasing availability and growth rate of biomedical information, also known as 'big data', provides an opportunity for future personalized medicine programs that will significantly improve patient care. Recent advances in information technology (IT) applied to biomedicine are changing the landscape of privacy and personal information, with patients getting more control of their health information. Conceivably, big data analytics is already impacting health decisions and patient care; however, specific challenges need to be addressed to integrate current discoveries into medical practice. In this article, I will discuss the major breakthroughs achieved in combining omics and clinical health data in terms of their application to personalized medicine. I will also review the challenges associated with using big data in biomedicine and translational science. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  2. Big inquiry

    Energy Technology Data Exchange (ETDEWEB)

    Wynne, B [Lancaster Univ. (UK)]

    1979-06-28

    The recently published report entitled 'The Big Public Inquiry' from the Council for Science and Society and the Outer Circle Policy Unit is considered, with especial reference to any future enquiry which may take place into the first commercial fast breeder reactor. Proposals embodied in the report include stronger rights for objectors and an attempt is made to tackle the problem that participation in a public inquiry is far too late to be objective. It is felt by the author that the CSS/OCPU report is a constructive contribution to the debate about big technology inquiries but that it fails to understand the deeper currents in the economic and political structure of technology which so influence the consequences of whatever formal procedures are evolved.

  3. Big data analytics with R and Hadoop

    CERN Document Server

    Prajapati, Vignesh

    2013-01-01

    Big Data Analytics with R and Hadoop is a tutorial-style book that focuses on all the powerful big data tasks that can be achieved by integrating R and Hadoop. This book is ideal for R developers who are looking for a way to perform big data analytics with Hadoop. This book is also aimed at those who know Hadoop and want to build some intelligent applications over Big data with R packages. It would be helpful if readers have basic knowledge of R.

  4. Big data in forensic science and medicine.

    Science.gov (United States)

    Lefèvre, Thomas

    2018-07-01

    In less than a decade, big data in medicine has become quite a phenomenon, and many biomedical disciplines have got their own tribune on the topic. Perspectives and debates are flourishing, while a consensual definition of big data is still lacking. The 3Vs paradigm is frequently evoked to define the big data principles and stands for Volume, Variety and Velocity. Even according to this paradigm, genuine big data studies are still scarce in medicine and may not meet all expectations. On the one hand, techniques usually presented as specific to big data, such as machine learning techniques, are supposed to support the ambition of personalized, predictive and preventive medicine. These techniques are mostly far from new; the most ancient are more than 50 years old. On the other hand, several issues closely related to the properties of big data and inherited from other scientific fields, such as artificial intelligence, are often underestimated if not ignored. Besides, a few papers temper the almost unanimous big data enthusiasm and are worth attention since they delineate what is at stake. In this context, forensic science is still awaiting its position papers as well as a comprehensive outline of what kind of contribution big data could bring to the field. The present situation calls for definitions and actions to rationally guide research and practice in big data. It is an opportunity for grounding a true interdisciplinary approach in forensic science and medicine that is mainly based on evidence. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  5. Who Chokes Under Pressure? The Big Five Personality Traits and Decision-Making under Pressure.

    Science.gov (United States)

    Byrne, Kaileigh A; Silasi-Mansat, Crina D; Worthy, Darrell A

    2015-02-01

    The purpose of the present study was to examine whether the Big Five personality factors could predict who thrives or chokes under pressure during decision-making. The effects of the Big Five personality factors on decision-making ability and performance under social (Experiment 1) and combined social and time pressure (Experiment 2) were examined using the Big Five Personality Inventory and a dynamic decision-making task that required participants to learn an optimal strategy. In Experiment 1, a hierarchical multiple regression analysis showed an interaction between neuroticism and pressure condition. Neuroticism negatively predicted performance under social pressure, but did not affect decision-making under low pressure. Additionally, the negative effect of neuroticism under pressure was replicated using a combined social and time pressure manipulation in Experiment 2. These results support distraction theory whereby pressure taxes highly neurotic individuals' cognitive resources, leading to sub-optimal performance. Agreeableness also negatively predicted performance in both experiments.

  6. NASA's Big Data Task Force

    Science.gov (United States)

    Holmes, C. P.; Kinter, J. L.; Beebe, R. F.; Feigelson, E.; Hurlburt, N. E.; Mentzel, C.; Smith, G.; Tino, C.; Walker, R. J.

    2017-12-01

    Two years ago NASA established the Ad Hoc Big Data Task Force (BDTF - https://science.nasa.gov/science-committee/subcommittees/big-data-task-force), an advisory working group with the NASA Advisory Council system. The scope of the Task Force included all NASA Big Data programs, projects, missions, and activities. The Task Force focused on such topics as exploring the existing and planned evolution of NASA's science data cyber-infrastructure that supports broad access to data repositories for NASA Science Mission Directorate missions; best practices within NASA, other Federal agencies, private industry and research institutions; and Federal initiatives related to big data and data access. The BDTF has completed its two-year term and produced several recommendations plus four white papers for NASA's Science Mission Directorate. This presentation will discuss the activities and results of the TF including summaries of key points from its focused study topics. The paper serves as an introduction to the papers following in this ESSI session.

  7. Big Data Technologies

    Science.gov (United States)

    Bellazzi, Riccardo; Dagliati, Arianna; Sacchi, Lucia; Segagni, Daniele

    2015-01-01

    The so-called big data revolution provides substantial opportunities to diabetes management. At least 3 important directions are currently of great interest. First, the integration of different sources of information, from primary and secondary care to administrative information, may allow depicting a novel view of patient’s care processes and of single patient’s behaviors, taking into account the multifaceted nature of chronic care. Second, the availability of novel diabetes technologies, able to gather large amounts of real-time data, requires the implementation of distributed platforms for data analysis and decision support. Finally, the inclusion of geographical and environmental information into such complex IT systems may further increase the capability of interpreting the data gathered and extract new knowledge from them. This article reviews the main concepts and definitions related to big data, it presents some efforts in health care, and discusses the potential role of big data in diabetes care. Finally, as an example, it describes the research efforts carried on in the MOSAIC project, funded by the European Commission. PMID:25910540

  8. The Berlin Inventory of Gambling behavior - Screening (BIG-S): Validation using a clinical sample.

    Science.gov (United States)

    Wejbera, Martin; Müller, Kai W; Becker, Jan; Beutel, Manfred E

    2017-05-18

    Published diagnostic questionnaires for gambling disorder in German are either based on DSM-III criteria or focus on aspects other than life time prevalence. This study was designed to assess the usability of the DSM-IV criteria based Berlin Inventory of Gambling Behavior Screening tool in a clinical sample and adapt it to DSM-5 criteria. In a sample of 432 patients presenting for behavioral addiction assessment at the University Medical Center Mainz, we checked the screening tool's results against clinical diagnosis and compared a subsample of n=300 clinically diagnosed gambling disorder patients with a comparison group of n=132. The BIG-S produced a sensitivity of 99.7% and a specificity of 96.2%. The instrument's unidimensionality and the diagnostic improvements of DSM-5 criteria were verified by exploratory and confirmatory factor analysis as well as receiver operating characteristic analysis. The BIG-S is a reliable and valid screening tool for gambling disorder and demonstrated its concise and comprehensible operationalization of current DSM-5 criteria in a clinical setting.

  9. Nature of dislocation hysteresis losses and nonlinear effect in lead at high vibration amplitudes

    International Nuclear Information System (INIS)

    Lomakin, V.V.; Pal-Val, L.N.; Platkov, V.Y.; Roshchupkin, A.M.

    1982-01-01

    The nature of the dislocation hysteresis was established and changes in this hysteresis were determined by investigating the dependence of the dislocation-induced absorption of ultrasound (coefficient α) on the ultrasound amplitude ε₀ in single crystals of pure lead and of lead containing Tl and Sn impurities. The investigation was carried out over a wide range of ε₀ under superconducting transition conditions. In the superconducting (s) state, both pure Pb and Pb doped with Tl exhibited a maximum in the dependence α(ε₀) at high values of ε₀; on transition to the normal (n) state this maximum changed to a plateau. This provided direct proof of a change from the static nature of the dislocation hysteresis to a dynamic process, caused by an increase in the coefficient of electron drag on dislocations. Estimates were obtained of the range of lengths of dislocation loops: 2.4 × 10⁻⁴ cm to 4 × 10⁻⁴ cm. In the case of lead containing Sn, the dynamic hysteresis occurred in both the normal and superconducting states. In the range of amplitudes above that of the maximum and at the beginning of the plateau, all single crystals exhibited a rise of α with increasing ε₀ in both the superconducting and normal states; this rise was due to nonlinear effects observed in the case of strong bending of L_N loops. An analysis was made of the amplitude dependence of the losses associated with this effect. The results were in good agreement with the experimental data.

  10. Traffic information computing platform for big data

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun [Chang'an University School of Information Engineering, Xi'an, China and Shaanxi Engineering and Technical Research Center for Road and Traffic Detection, Xi'an (China)]

    2014-10-06

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  11. Traffic information computing platform for big data

    International Nuclear Information System (INIS)

    Duan, Zongtao; Li, Ying; Zheng, Xibin; Liu, Yan; Dai, Jiting; Kang, Jun

    2014-01-01

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through in-depth analysis of the connotation and technological characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  12. Fremtidens landbrug bliver big business

    DEFF Research Database (Denmark)

    Hansen, Henning Otte

    2016-01-01

    Agriculture's external environment and competitive conditions are changing, and this will necessitate a development towards “big business”, in which farms become even larger, more industrialised and more concentrated. Big business will become a dominant development in Danish agriculture - but not the only one...

  13. Quantum nature of the big bang.

    Science.gov (United States)

    Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet

    2006-04-14

    Some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field. Specifically, the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows: (i) the scalar field is shown to serve as an internal clock, thereby providing a detailed realization of the "emergent time" idea; (ii) the physical Hilbert space, Dirac observables, and semiclassical states are constructed rigorously; (iii) the Hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce. Thanks to the nonperturbative, background independent methods, unlike in other approaches the quantum evolution is deterministic across the deep Planck regime.

  14. Mentoring in Schools: An Impact Study of Big Brothers Big Sisters School-Based Mentoring

    Science.gov (United States)

    Herrera, Carla; Grossman, Jean Baldwin; Kauh, Tina J.; McMaken, Jennifer

    2011-01-01

    This random assignment impact study of Big Brothers Big Sisters School-Based Mentoring involved 1,139 9- to 16-year-old students in 10 cities nationwide. Youth were randomly assigned to either a treatment group (receiving mentoring) or a control group (receiving no mentoring) and were followed for 1.5 school years. At the end of the first school…

  15. Big data processing in the cloud - Challenges and platforms

    Science.gov (United States)

    Zhelev, Svetoslav; Rozeva, Anna

    2017-12-01

    Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge in both the problem domain and in the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide for dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing as the most important and difficult to manage is outlined. The paper highlights main advantages of cloud and potential problems.
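
    The defining move of the Lambda architecture mentioned above can be made concrete with a minimal sketch: a query merges a batch view, recomputed periodically over the immutable master dataset, with a speed-layer view covering the most recent events. This is a generic Python illustration under assumed names, not anything specific to the paper.

      # Minimal sketch of the Lambda architecture idea: merge a batch view
      # over historical data with a speed-layer view over recent events.
      from collections import Counter

      master_dataset = ["a", "b", "a", "c"]   # immutable historical events
      recent_events = ["b", "a"]              # not yet absorbed by the batch layer

      def batch_view(events):
          # Recomputed from scratch on a schedule (e.g., a nightly batch job).
          return Counter(events)

      def speed_view(events):
          # Incrementally maintained by a stream processor.
          return Counter(events)

      def query(key):
          # Serving layer: merge both views at query time.
          return batch_view(master_dataset)[key] + speed_view(recent_events)[key]

      print(query("a"))  # 3 = 2 (batch) + 1 (speed)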

  16. Ethics and Epistemology in Big Data Research.

    Science.gov (United States)

    Lipworth, Wendy; Mason, Paul H; Kerridge, Ian; Ioannidis, John P A

    2017-12-01

    Biomedical innovation and translation are increasingly emphasizing research using "big data." The hope is that big data methods will both speed up research and make its results more applicable to "real-world" patients and health services. While big data research has been embraced by scientists, politicians, industry, and the public, numerous ethical, organizational, and technical/methodological concerns have also been raised. With respect to technical and methodological concerns, there is a view that these will be resolved through sophisticated information technologies, predictive algorithms, and data analysis techniques. While such advances will likely go some way towards resolving technical and methodological issues, we believe that the epistemological issues raised by big data research have important ethical implications and raise questions about the very possibility of big data research achieving its goals.

  17. Victoria Stodden: Scholarly Communication in the Era of Big Data and Big Computation

    OpenAIRE

    Stodden, Victoria

    2015-01-01

    Victoria Stodden gave the keynote address for Open Access Week 2015. "Scholarly communication in the era of big data and big computation" was sponsored by the University Libraries, Computational Modeling and Data Analytics, the Department of Computer Science, the Department of Statistics, the Laboratory for Interdisciplinary Statistical Analysis (LISA), and the Virginia Bioinformatics Institute. Victoria Stodden is an associate professor in the Graduate School of Library and Information Scien...

  18. Big Data: Concept, Potentialities and Vulnerabilities

    Directory of Open Access Journals (Sweden)

    Fernando Almeida

    2018-03-01

    Full Text Available The evolution of information systems and the growth in the use of the Internet and social networks have caused an explosion in the amount of available data relevant to the activities of companies. Therefore, the treatment of these available data is vital to support operational, tactical and strategic decisions. This paper aims to present the concept of big data and the main technologies that support the analysis of large data volumes. The potential of big data is explored considering nine sectors of activity: financial, retail, healthcare, transport, agriculture, energy, manufacturing, public, and media and entertainment. In addition, the main current opportunities, vulnerabilities and privacy challenges of big data are discussed. It was possible to conclude that, despite the potential for big data to grow in the previously identified areas, there are still some challenges that need to be considered and mitigated, namely the privacy of information, the existence of qualified human resources to work with Big Data and the promotion of a data-driven organizational culture.

  19. Big data analytics a management perspective

    CERN Document Server

    Corea, Francesco

    2016-01-01

    This book is about innovation, big data, and data science seen from a business perspective. Big data is a buzzword nowadays, and there is a growing necessity within practitioners to understand better the phenomenon, starting from a clear stated definition. This book aims to be a starting reading for executives who want (and need) to keep the pace with the technological breakthrough introduced by new analytical techniques and piles of data. Common myths about big data will be explained, and a series of different strategic approaches will be provided. By browsing the book, it will be possible to learn how to implement a big data strategy and how to use a maturity framework to monitor the progress of the data science team, as well as how to move forward from one stage to the next. Crucial challenges related to big data will be discussed, where some of them are more general - such as ethics, privacy, and ownership – while others concern more specific business situations (e.g., initial public offering, growth st...

  20. Human factors in Big Data

    NARCIS (Netherlands)

    Boer, J. de

    2016-01-01

    Since 2014 I have been involved in various (research) projects that try to make the hype around Big Data more concrete and tangible for industry and government. Big Data is about multiple sources of (real-time) data that can be analysed, transformed into information and used to make 'smart' decisions.

  1. Loss of corneodesmosin leads to severe skin barrier defect, pruritus, and atopy: unraveling the peeling skin disease.

    Science.gov (United States)

    Oji, Vinzenz; Eckl, Katja-Martina; Aufenvenne, Karin; Nätebus, Marc; Tarinski, Tatjana; Ackermann, Katharina; Seller, Natalia; Metze, Dieter; Nürnberg, Gudrun; Fölster-Holst, Regina; Schäfer-Korting, Monika; Hausser, Ingrid; Traupe, Heiko; Hennies, Hans Christian

    2010-08-13

    Generalized peeling skin disease is an autosomal-recessive ichthyosiform erythroderma characterized by lifelong patchy peeling of the skin. After genome-wide linkage analysis, we have identified a homozygous nonsense mutation in CDSN in a large consanguineous family with generalized peeling skin, pruritus, and food allergies, which leads to a complete loss of corneodesmosin. In contrast to hypotrichosis simplex, which can be associated with specific dominant CDSN mutations, peeling skin disease is characterized by a complete loss of CDSN expression. The skin phenotype is consistent with a recent murine Cdsn knockout model. Using three-dimensional human skin models, we demonstrate that lack of corneodesmosin causes an epidermal barrier defect supposed to account for the predisposition to atopic diseases, and we confirm the role of corneodesmosin as a decisive epidermal adhesion molecule. Therefore, peeling skin disease will represent a new model disorder for atopic diseases, similarly to Netherton syndrome and ichthyosis vulgaris in the recent past.

  2. A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.

    Science.gov (United States)

    Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V

    2016-07-01

    In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuous challenge is to manage the biological data. Existing data mining techniques sometimes fail to meet the new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that enfolds the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using a k-nearest neighbor (K-NN) classification approach has been introduced. The pivotal objective of this work is to represent real training data sets with a reduced number of elements or instances. These reduced data sets will ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset consisting of 90 million pairs has been used. The proposed model reduces imbalanced large-scale data sets without loss of accuracy. The obtained results show that the MapReduce-based K-NN classifier provides accurate results for big DNA data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
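
    The general map/reduce decomposition of K-NN classification can be sketched as follows: each mapper returns the k nearest candidates within its own data split, and a single reducer keeps the k globally nearest and takes a majority vote. The Python sketch below illustrates that generic pattern only; the names and the one-dimensional distance are assumptions, not the paper's exact pipeline.

      # Illustrative map/reduce split of K-NN: local candidates per split,
      # then a global top-k merge and majority vote in the reducer.
      from collections import Counter
      import heapq

      def mapper(split, query, k):
          # Emit the k nearest neighbours within this split only.
          dist = lambda item: abs(item[0] - query)
          return heapq.nsmallest(k, split, key=dist)

      def reducer(candidates, query, k):
          # Keep the global k nearest candidates and majority-vote the label.
          dist = lambda item: abs(item[0] - query)
          top = heapq.nsmallest(k, candidates, key=dist)
          return Counter(label for _, label in top).most_common(1)[0][0]

      splits = [[(1.0, "A"), (2.0, "A")], [(2.5, "B"), (9.0, "B")]]  # (feature, label)
      cands = [nb for s in splits for nb in mapper(s, query=2.2, k=3)]
      print(reducer(cands, query=2.2, k=3))  # 'A'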

  3. Slaves to Big Data. Or Are We?

    Directory of Open Access Journals (Sweden)

    Mireille Hildebrandt

    2013-10-01

    Full Text Available In this contribution, the notion of Big Data is discussed in relation to the monetisation of personal data. The claim of some proponents, as well as adversaries, that Big Data implies that ‘n = all’, meaning that we no longer need to rely on samples because we have all the data, is scrutinised and found to be both overly optimistic and unnecessarily pessimistic. A set of epistemological and ethical issues is presented, focusing on the implications of Big Data for our perception, cognition, fairness, privacy and due process. The article then looks into the idea of user-centric personal data management to investigate to what extent it provides solutions for some of the problems triggered by the Big Data conundrum. Special attention is paid to the core principle of data protection legislation, namely purpose binding. Finally, this contribution seeks to inquire into the influence of Big Data politics on self, mind and society, and asks how we can prevent ourselves from becoming slaves to Big Data.

  4. Modelling the low-tar BIG gasification concept[Biomass Integrated gasification

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, Lars; Elmegaard, B.; Qvale, B.; Henriksen, Ulrrik [Technical univ. of Denmark (Denmark)]; Bentzen, J.D.; Hummelshoej, R. [COWI A/S (Denmark)]

    2007-07-01

    A low-tar, highly efficient biomass gasification concept for medium- to large-scale power plants has been designed. The concept is named 'Low-Tar BIG' (BIG = Biomass Integrated Gasification). The concept is based on separate pyrolysis and gasification units. The volatile gases from the pyrolysis (containing tar) are partially oxidised in a separate chamber, whereby the tar content is dramatically reduced. Thus, the investment and running cost of a gas cleaning system can be reduced, and the reliability can be increased. Both the pyrolysis and gasification chambers are bubbling fluid beds, fluidised with steam. For moist fuels, the gasifier can be integrated with a steam drying process, where the produced steam is used in the pyrolysis/gasification chamber. In this paper, mathematical models and results from initial tests of a laboratory Low-Tar BIG gasifier are presented. Two types of models are presented: 1. The gasifier-dryer applied in different power plant systems: gas engine, simple-cycle gas turbine, recuperated gas turbine, and Integrated Gasification Combined Cycle (IGCC). The paper determines the differences in efficiency of these systems and shows that the gasifier will be applicable to very different fuels with different moisture contents, depending on the system. 2. A thermodynamic Low-Tar BIG model. This model is based on mass and heat balances between four reactors: pyrolysis, partial oxidation, gasification, and gas-solid mixing. The paper describes the results from this study and compares them to actual laboratory tests. The study shows that the Low-Tar BIG process can use very wet fuels (up to 65-70% moisture) and still produce heat and power with a remarkably high electric efficiency. The process thereby offers the unique combination of large-scale gasification with low-cost gas cleaning and the use of low-cost fuels, which is very likely the combination needed for a breakthrough of gasification technology. (au)
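
    The sensitivity to fuel moisture can be illustrated with a toy energy balance: evaporating the fuel's water costs roughly 2.44 MJ per kg, which lowers the usable heating value, and integrating the dryer with the steam-blown beds recovers part of that penalty. All numbers and the recovery model in the Python sketch below are assumptions for illustration, not values from the paper.

      # Toy energy balance (all figures assumed): moisture eats into the
      # usable heating value unless the drying steam is partly recovered.
      LHV_DRY = 18.5   # MJ per kg dry biomass (typical wood; assumed)
      H_EVAP = 2.44    # MJ per kg of water evaporated

      def electric_output(moisture, eta_el=0.40, steam_recovery=0.0):
          # Usable chemical energy per kg of wet fuel: dry-matter heating
          # value minus the (partially recoverable) evaporation penalty.
          usable = LHV_DRY * (1 - moisture) - H_EVAP * moisture * (1 - steam_recovery)
          return eta_el * usable   # MJ of electricity per kg of wet fuel

      for m in (0.10, 0.65):       # dry-ish fuel vs. very wet fuel
          print(m, round(electric_output(m), 2),
                round(electric_output(m, steam_recovery=0.8), 2))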

  5. Will Organization Design Be Affected By Big Data?

    Directory of Open Access Journals (Sweden)

    Giles Slinger

    2014-12-01

    Full Text Available Computing power and analytical methods allow us to create, collate, and analyze more data than ever before. When datasets are unusually large in volume, velocity, and variety, they are referred to as “big data.” Some observers have suggested that in order to cope with big data (a) organizational structures will need to change and (b) the processes used to design organizations will be different. In this article, we differentiate big data from relatively slow-moving, linked people data. We argue that big data will change organizational structures as organizations pursue the opportunities presented by big data. The processes by which organizations are designed, however, will be relatively unaffected by big data. Instead, organization design processes will be more affected by the complex links found in people data.

  6. Big geo data surface approximation using radial basis functions: A comparative study

    Science.gov (United States)

    Majdisova, Zuzana; Skala, Vaclav

    2017-12-01

    Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
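
    A minimal sketch of the approximation step may help: with fewer RBF centres than data points, the collocation matrix is rectangular and the overdetermined system is solved in the least-squares sense. The Python sketch below uses a Gaussian RBF as a stand-in for the CS-RBFs compared in the paper; the dataset and parameters are assumptions.

      # RBF least-squares approximation in 1-D: fewer centres than points
      # yields the overdetermined system A w ~ y described in the abstract.
      import numpy as np

      def rbf_fit(x, y, centres, shape=1.0):
          # A[i, j] = phi(|x_i - c_j|) with a Gaussian basis function.
          r = np.abs(x[:, None] - centres[None, :])
          A = np.exp(-(shape * r) ** 2)
          w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solve
          return w

      def rbf_eval(x, centres, w, shape=1.0):
          r = np.abs(x[:, None] - centres[None, :])
          return np.exp(-(shape * r) ** 2) @ w

      x = np.linspace(0, 10, 200)                    # scattered 1-D dataset
      y = np.sin(x) + 0.05 * np.random.randn(200)    # noisy samples
      centres = np.linspace(0, 10, 15)               # far fewer centres than points
      w = rbf_fit(x, y, centres)
      print(np.max(np.abs(rbf_eval(x, centres, w) - np.sin(x))))  # small error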

  7. Hellsgate Big Game Winter Range Wildlife Mitigation Project : Annual Report 2008.

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Richard P.; Berger, Matthew T.; Rushing, Samuel; Peone, Cory

    2009-01-01

    The Hellsgate Big Game Winter Range Wildlife Mitigation Project (Hellsgate Project) was proposed by the Confederated Tribes of the Colville Reservation (CTCR) as partial mitigation for hydropower's share of the wildlife losses resulting from Chief Joseph and Grand Coulee Dams. At present, the Hellsgate Project protects and manages 57,418 acres (approximately 90 square miles) for the biological requirements of managed wildlife species; most are located on or near the Columbia River (Lake Rufus Woods and Lake Roosevelt) and surrounded by Tribal land. To date we have acquired about 34,597 habitat units (HUs) towards a total of 35,819 HUs lost to the original inundation caused by hydropower development. In addition to the remaining 1,237 HUs left unmitigated, 600 HUs that the Washington Department of Fish and Wildlife traded to the Colville Tribes and 10 secure nesting islands are also yet to be mitigated. This annual report for 2008 describes the management activities of the Hellsgate Project during the past year.

  8. Big Data

    OpenAIRE

    Bútora, Matúš

    2017-01-01

    The aim of this bachelor thesis is to describe the Big Data domain and the OLAP aggregation operations for decision support that are applied to it using Apache Hadoop technology. The majority of the thesis is devoted to describing this technology. The last chapter deals with the way the aggregation operations are applied and with the issues of their implementation. An overall evaluation of the work and the possibilities for future use of the resulting system follow.

  9. BigDansing

    KAUST Repository

    Khayyat, Zuhair; Ilyas, Ihab F.; Jindal, Alekh; Madden, Samuel; Ouzzani, Mourad; Papotti, Paolo; Quiané -Ruiz, Jorge-Arnulfo; Tang, Nan; Yin, Si

    2015-01-01

    of the underlying distributed platform. BigDansing takes these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized joins operators. Experimental results on both synthetic

  10. Leveraging Mobile Network Big Data for Developmental Policy ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Some argue that big data and big data users offer advantages to generate evidence. ... Supported by IDRC, this research focused on transportation planning in urban ... Using mobile network big data for land use classification CPRsouth 2015.

  11. Optimizing Hadoop Performance for Big Data Analytics in Smart Grid

    Directory of Open Access Journals (Sweden)

    Mukhtaj Khan

    2017-01-01

    Full Text Available The rapid deployment of Phasor Measurement Units (PMUs) in power systems globally is leading to Big Data challenges. New high-performance computing techniques are now required to process an ever-increasing volume of data from PMUs. To that extent, the Hadoop framework, an open source implementation of the MapReduce computing model, is gaining momentum for Big Data analytics in smart grid applications. However, Hadoop has over 190 configuration parameters, which can have a significant impact on the performance of the Hadoop framework. This paper presents an Enhanced Parallel Detrended Fluctuation Analysis (EPDFA) algorithm for scalable analytics on massive volumes of PMU data. The novel EPDFA algorithm builds on an enhanced Hadoop platform whose configuration parameters are optimized by Gene Expression Programming. Experimental results show that the EPDFA is 29 times faster than the sequential DFA in processing PMU data and 1.87 times faster than a parallel DFA, which utilizes the default Hadoop configuration settings.
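
    The computation being parallelized is detrended fluctuation analysis itself: integrate the mean-centred series, split the profile into windows, remove a local trend from each window, and read the scaling exponent off the log-log slope of the fluctuation function. Below is a compact serial Python version of DFA-1 for orientation; the EPDFA's Hadoop partitioning and Gene Expression Programming tuning are not reproduced, and the test signal is an assumption.

      # Serial DFA-1: the per-window detrending that EPDFA distributes
      # across Hadoop tasks is shown here in its plain sequential form.
      import numpy as np

      def dfa_fluctuation(x, n):
          y = np.cumsum(x - np.mean(x))        # integrated, mean-centred profile
          f2 = []
          for i in range(len(y) // n):
              seg = y[i * n:(i + 1) * n]
              t = np.arange(n)
              trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrending
              f2.append(np.mean((seg - trend) ** 2))
          return np.sqrt(np.mean(f2))

      x = np.random.randn(10_000)              # stand-in for a PMU signal
      sizes = np.array([16, 32, 64, 128, 256])
      F = np.array([dfa_fluctuation(x, n) for n in sizes])
      alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
      print(alpha)                             # ~0.5 for white noise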

  12. Practice Variation in Big-4 Transparency Reports

    DEFF Research Database (Denmark)

    Girdhar, Sakshi; Klarskov Jeppesen, Kim

    2018-01-01

    Purpose: The purpose of this paper is to examine the transparency reports published by the Big-4 public accounting firms in the UK, Germany and Denmark to understand the determinants of their content within the networks of big accounting firms. Design/methodology/approach: The study draws on a qualitative research approach, in which the content of transparency reports is analyzed and semi-structured interviews are conducted with key people from the Big-4 firms who are responsible for developing the transparency reports. Findings: The findings show that the content of transparency reports is inconsistent and the transparency reporting practice is not uniform within the Big-4 networks. Differences were found in the way in which the transparency reporting practices are coordinated globally by the respective central governing bodies of the Big-4. The content of the transparency reports...

  13. An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques

    Directory of Open Access Journals (Sweden)

    Can Eyupoglu

    2018-05-01

    Full Text Available The topic of big data has attracted increasing interest in recent years. The emergence of big data leads to new difficulties in terms of the protection models used for data privacy, which is a necessity for sharing and processing data. Protecting individuals’ sensitive information while maintaining the usability of the published data set is the most important challenge in privacy preserving. In this regard, data anonymization methods are utilized in order to protect data against identity disclosure and linking attacks. In this study, a novel data anonymization algorithm based on chaos and perturbation has been proposed for privacy and utility preserving in big data. The performance of the proposed algorithm is evaluated in terms of Kullback–Leibler divergence, probabilistic anonymity, classification accuracy, F-measure and execution time. The experimental results have shown that the proposed algorithm is efficient and performs better in terms of Kullback–Leibler divergence, classification accuracy and F-measure compared to most of the existing algorithms using the same data set. Since it applies chaos to perturb the data, this algorithm is promising for use in privacy-preserving data mining and data publishing.
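
    The combination of chaos and perturbation can be conveyed with a deliberately simple stand-in: a logistic map in its chaotic regime generates the noise added to each sensitive value before publication. The Python sketch below only illustrates this general idea; the paper's actual algorithm, parameters and evaluation are not reproduced.

      # Illustrative (not the paper's) chaos-based perturbation: a logistic
      # map stream supplies the noise masking each sensitive value.
      def logistic_stream(seed, n, r=3.99):
          x = seed
          for _ in range(n):
              x = r * x * (1 - x)     # chaotic regime of the logistic map
              yield x

      def perturb(values, seed=0.3141, scale=2.0):
          noise = logistic_stream(seed, len(values))
          # Centre the chaotic draw so the added noise is roughly zero-mean.
          return [v + scale * (z - 0.5) for v, z in zip(values, noise)]

      ages = [34, 51, 29, 67, 45]
      print(perturb(ages))   # published values; the originals stay private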

  14. Health impact assessment and monetary valuation of IQ loss in pre-school children due to lead exposure through locally produced food.

    Science.gov (United States)

    Bierkens, J; Buekers, J; Van Holderbeke, M; Torfs, R

    2012-01-01

    A case study has been performed which involved the full-chain assessment, from policy drivers to health effect quantification, of lead exposure through locally produced food and the resulting loss of IQ in pre-school children at the population level across the EU-27, including monetary valuation of the estimated health impact. The main policy scenarios cover the period from 2000 to 2020 and include the most important Community policy developments expected to affect the environmental release of lead (Pb) and the corresponding human exposure patterns. Three distinct scenarios were explored: the emission situation based on 2000 data, a business-as-usual scenario (BAU) up to 2010 and 2020, and a scenario incorporating the most likely technological change expected (Most Feasible Technical Reductions, MFTR) in response to current and future legislation. Consecutive model calculations (MSCE-HM, WATSON, XtraFOOD, IEUBK) were performed by different partners on the project as part of the full-chain approach to derive estimates of blood lead (B-Pb) levels in children as a consequence of the consumption of local produce. The estimated B-Pb levels were translated into an average loss of IQ points per child using an empirical relationship based on a meta-analysis performed by Schwartz (1994). The calculated losses in IQ points were subsequently translated into the average cost per child using a cost estimate of €10,000 per IQ point lost, based on data from a literature review. The estimated average reductions of cost per child for all countries considered in 2010 under BAU and MFTR are 12.16% and 18.08% compared to baseline conditions, respectively. In 2020 the percentages amount to 20.19% and 23.39%. The case study provides an example of the full-chain impact pathway approach taking into account all foreseeable pathways, both for assessing the environmental fate and the associated human exposure and the mode of toxic action, to arrive at quantitative estimates of health impacts at the individual and
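
    The valuation arithmetic itself is straightforward: the monetary impact equals the average IQ points lost per child times the €10,000 estimate per point, and the scenario benefit is the relative cost reduction against the 2000 baseline. The Python sketch below reproduces this logic; the baseline IQ loss per child is an assumed placeholder, and only the €10,000 unit cost and the reduction percentages come from the abstract.

      # Valuation arithmetic from the abstract; the 0.5-point baseline IQ
      # loss is an assumed placeholder, not a figure from the study.
      COST_PER_IQ_POINT = 10_000  # EUR per IQ point, from the cited review

      def cost_per_child(iq_points_lost):
          return iq_points_lost * COST_PER_IQ_POINT

      def reduction_vs_baseline(cost_baseline, cost_scenario):
          return 100 * (cost_baseline - cost_scenario) / cost_baseline

      baseline = cost_per_child(0.5)           # 0.5 IQ points -> EUR 5,000
      bau_2010 = baseline * (1 - 0.1216)       # 12.16 % reduction under BAU
      mftr_2010 = baseline * (1 - 0.1808)      # 18.08 % reduction under MFTR
      print(bau_2010, mftr_2010, reduction_vs_baseline(baseline, bau_2010))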

  15. Big data and biomedical informatics: a challenging opportunity.

    Science.gov (United States)

    Bellazzi, R

    2014-05-22

    Big data are receiving increasing attention in biomedicine and healthcare. It is therefore important to understand why big data are assuming a crucial role for the biomedical informatics community. The capability of handling big data is becoming an enabler to carry out unprecedented research studies and to implement new models of healthcare delivery. Therefore, it is first necessary to deeply understand the four elements that constitute big data, namely Volume, Variety, Velocity, and Veracity, and their meaning in practice. Then, it is mandatory to understand where big data are present, and where they can be beneficially collected. There are research fields, such as translational bioinformatics, which need to rely on big data technologies to withstand the shock wave of data that is generated every day. Other areas, ranging from epidemiology to clinical care, can benefit from the exploitation of the large amounts of data that are nowadays available, from personal monitoring to primary care. However, building big data-enabled systems carries relevant implications in terms of reproducibility of research studies and management of privacy and data access; proper actions should be taken to deal with these issues. An interesting consequence of the big data scenario is the availability of new software, methods, and tools, such as map-reduce, cloud computing, and concept-drift machine learning algorithms, which will not only contribute to big data research, but may be beneficial in many biomedical informatics applications. The way forward with the big data opportunity will require properly applied engineering principles to design studies and applications, to avoid preconceptions and over-enthusiasm, to fully exploit the available technologies, and to improve data processing and data management regulations.

  16. Was the big bang hot

    International Nuclear Information System (INIS)

    Wright, E.L.

    1983-01-01

    The author considers experiments to confirm the substantial deviations from a Planck curve in the Woody and Richards spectrum of the microwave background, and search for conducting needles in our galaxy. Spectral deviations and needle-shaped grains are expected for a cold Big Bang, but are not required by a hot Big Bang. (Auth.)

  17. Passport to the Big Bang

    CERN Multimedia

    De Melis, Cinzia

    2013-01-01

    On 2 June 2013 CERN launched the Passport to the Big Bang (Passeport Big Bang), a scientific tourist trail through the Pays de Gex and the Canton of Geneva, at a big public event. Poster and programme.

  18. Loss of Cdc42 leads to defects in synaptic plasticity and remote memory recall.

    Science.gov (United States)

    Kim, Il Hwan; Wang, Hong; Soderling, Scott H; Yasuda, Ryohei

    2014-07-08

    Cdc42 is a signaling protein important for reorganization of the actin cytoskeleton and morphogenesis of cells. However, the functional role of Cdc42 in synaptic plasticity and in behaviors such as learning and memory is not well understood. Here we report, using a conditional knockout, that postnatal forebrain deletion of Cdc42 leads to deficits in synaptic plasticity and in remote memory recall. We found that deletion of Cdc42 impaired LTP in the Schaffer collateral synapses and postsynaptic structural plasticity of dendritic spines in CA1 pyramidal neurons in the hippocampus. Additionally, loss of Cdc42 did not affect memory acquisition, but instead significantly impaired remote memory recall. Together these results indicate that the postnatal functions of Cdc42 may be crucial for the synaptic plasticity in hippocampal neurons, which contributes to the capacity for remote memory recall.

  19. Keynote: Big Data, Big Opportunities

    OpenAIRE

    Borgman, Christine L.

    2014-01-01

    The enthusiasm for big data is obscuring the complexity and diversity of data in scholarship and the challenges for stewardship. Inside the black box of data are a plethora of research, technology, and policy issues. Data are not shiny objects that are easily exchanged. Rather, data are representations of observations, objects, or other entities used as evidence of phenomena for the purposes of research or scholarship. Data practices are local, varying from field to field, individual to indiv...

  20. Integrating R and Hadoop for Big Data Analysis

    OpenAIRE

    Bogdan Oancea; Raluca Mariana Dragoescu

    2014-01-01

    Analyzing and working with big data could be very difficult using classical means like relational database management systems or desktop software packages for statistics and visualization. Instead, big data requires large clusters with hundreds or even thousands of computing nodes. Official statistics is increasingly considering big data for deriving new statistics because big data sources could produce more relevant and timely statistics than traditional sources. One of the software tools ...
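
    The abstract breaks off before naming the tooling. As a sketch of the MapReduce model that R-to-Hadoop bridges (for example the rmr2 or RHIPE packages) build on, here is a word count written the way Hadoop Streaming expects, reading stdin and emitting tab-separated key/value lines; the function names and structure are ours.

        # Minimal word count in the MapReduce style used by Hadoop Streaming.
        import sys
        from itertools import groupby

        def mapper(lines):
            # Map phase: emit one (word, 1) pair per token.
            for line in lines:
                for word in line.split():
                    yield word.lower(), 1

        def reducer(pairs):
            # Reduce phase: sum the counts for each key (pairs sorted by key).
            for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
                yield word, sum(count for _, count in group)

        if __name__ == "__main__":
            for word, total in reducer(mapper(sys.stdin)):
                print(f"{word}\t{total}")

    Run locally as "cat input.txt | python wordcount.py"; on a cluster the map and reduce halves would be split into separate scripts and passed to the hadoop-streaming jar, while R wrappers invoke the same pattern from R code.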

  1. The retinoblastoma gene is frequently altered leading to loss of expression in primary breast tumours.

    Science.gov (United States)

    Varley, J M; Armour, J; Swallow, J E; Jeffreys, A J; Ponder, B A; T'Ang, A; Fung, Y K; Brammar, W J; Walker, R A

    1989-06-01

    We have analysed the organisation of the retinoblastoma (RB1) gene in 77 primary breast carcinomas, in metastatic tissue derived from 16 of those primary tumours, and in a variety of benign breast lesions. Expression of RB1 was also assessed in most samples by immunohistochemical detection of the RB1 protein in tissue sections. Structural abnormalities to RB1 were detected in DNA from 15/77 (19%) of primary breast carcinomas examined. Where DNA was available from metastatic tissue derived from such primary tumours, the same aberration could be detected. No alterations were seen in benign breast lesions. 16/56 (29%) of tumours examined for expression by immunohistochemical methods showed a proportion of tumour cells to be completely negative for the RB1 protein. All tumours in which a structural alteration to RB1 was detected had a proportion of negative cells, except for one case where all cells were positive. Several primary tumour samples were identified where there was no detectable structural change to the gene, but there was loss of expression in some tumour cells. The data presented here demonstrate that changes to the RB1 gene leading to loss of expression of both alleles are frequent in primary human breast tumours.

  2. The challenges of big data.

    Science.gov (United States)

    Mardis, Elaine R

    2016-05-01

    The largely untapped potential of big data analytics has spawned a feeding frenzy, fueled by the production of many next-generation-sequencing-based data sets that seek to answer long-held questions about the biology of human diseases. Although these approaches are likely to be a powerful means of revealing new biological insights, a number of substantial challenges currently hamper efforts to harness the power of big data. This Editorial outlines several such challenges as a means of illustrating that the path to big data revelations is paved with perils that the scientific community must overcome to pursue this important quest. © 2016. Published by The Company of Biologists Ltd.

  3. Big³. Editorial.

    Science.gov (United States)

    Lehmann, C U; Séroussi, B; Jaulent, M-C

    2014-05-22

    To provide an editorial introduction to the 2014 IMIA Yearbook of Medical Informatics with an overview of the content, the new publishing scheme, and the upcoming 25th anniversary. A brief overview of the 2014 special topic, Big Data - Smart Health Strategies, and an outline of the novel publishing model are provided, in conjunction with a call for proposals to celebrate the 25th anniversary of the Yearbook. 'Big Data' has become the latest buzzword in informatics and promises new approaches and interventions that can improve health, well-being, and quality of life. This edition of the Yearbook acknowledges that we have only just started to explore the opportunities that 'Big Data' will bring. However, it will become apparent to the reader that its pervasive nature has invaded all aspects of biomedical informatics - some to a higher degree than others. It was our goal to provide a comprehensive view of the state of 'Big Data' today, explore its strengths and weaknesses as well as its risks, discuss emerging trends, tools, and applications, and stimulate the development of the field through the aggregation of excellent survey papers and working group contributions on the topic. For the first time in its history, the IMIA Yearbook will be published in an open-access online format, allowing a broader readership, especially in resource-poor countries. For the first time, thanks to the online format, the IMIA Yearbook will be published twice in the year, with two different tracks of papers. We anticipate that the important role of the IMIA Yearbook will further increase with these changes, just in time for its 25th anniversary in 2016.

  4. The investigation of multi-channel splitters and big-bend waveguides based on 2D sunflower-typed photonic crystals

    Science.gov (United States)

    Liu, Wei; Sun, XiaoHong; Fan, QingBin; Wang, Shuai; Qi, YongLe

    2016-12-01

    Different kinds of multi-channel splitters and big-bend waveguides have been designed and investigated using sunflower-type photonic crystals. By comparing the transmission spectra of two kinds of 4-channel beam splitters, we find that the "C"-type splitter has a relatively uniform splitting ratio across its channels in a certain wavelength range. Furthermore, three types of waveguides with different bending degrees have been investigated. Except for a small loss at short wavelengths as the bending degree increases, they have almost the same transmission spectral structure. The result can be extended to big-bend waveguides with arbitrary bending degrees. This research is valuable for developing new types of integrated optical communication devices.

  5. Cosmological BCS mechanism and the big bang singularity

    Science.gov (United States)

    Alexander, Stephon; Biswas, Tirthabir

    2009-07-01

    We provide a novel mechanism that resolves the big bang singularity present in Friedmann-Lemaître-Robertson-Walker space-times without the need for ghost fields. Building on the fact that a four-fermion interaction arises in general relativity when fermions are covariantly coupled, we show that at early times the decrease in the scale factor enhances the correlation between pairs of fermions. This enhancement leads to a BCS-like condensation of the fermions, opens a gap, and dynamically drives the Hubble parameter H to zero, resulting in a nonsingular bounce, at least in some special cases.
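
    The abstract gives no formulas. Schematically, and with conventions that vary across the literature, the torsion-induced four-fermion interaction in Einstein-Cartan-type gravity and the BCS-style gap condition it enables take the following form; the coefficient and the precise condensate channel are assumptions of this sketch, not details taken from the paper.

        % Schematic torsion-induced four-fermion interaction; the axial current is
        % J_5^a = \bar{\psi}\gamma^5\gamma^a\psi, and the proportionality constant
        % depends on conventions and on non-minimal coupling choices.
        \mathcal{L}_{\mathrm{int}} \propto G \, J_5^{\,a} J_{5\,a},
        \qquad
        \Delta \sim \langle \psi\psi \rangle \neq 0
        \;\Rightarrow\; H \to 0 \ \text{(nonsingular bounce)}.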

  6. Cloud Based Big Data Infrastructure: Architectural Components and Automated Provisioning

    OpenAIRE

    Demchenko, Yuri; Turkmen, Fatih; Blanchet, Christophe; Loomis, Charles; Laat, Caees de

    2016-01-01

    This paper describes the general architecture and functional components of the cloud based Big Data Infrastructure (BDI). The proposed BDI architecture is based on the analysis of the emerging Big Data and data intensive technologies and supported by the definition of the Big Data Architecture Framework (BDAF) that defines the following components of the Big Data technologies: Big Data definition, Data Management including data lifecycle and data structures, Big Data Infrastructure (generical...

  7. Physics with Big Karl Brainstorming. Abstracts

    International Nuclear Information System (INIS)

    Machner, H.; Lieb, J.

    2000-08-01

    Before summarizing details of the meeting, a short description of the spectrometer facility Big Karl is given. The facility is essentially a new instrument using refurbished dipole magnets from its predecessor. The large-acceptance quadrupole magnets and the beam optics are new. Big Karl has a design very similar to the focussing spectrometers at MAMI (Mainz), AGOR (Groningen) and the high resolution spectrometer (HRS) in Hall A at Jefferson Laboratory, with ΔE/E = 10⁻⁴ but at a somewhat lower maximum momentum. The focal plane detectors, consisting of multiwire drift chambers and scintillating hodoscopes, are similar. Unlike HRS, Big Karl still needs Cerenkov counters and polarimeters in its focal plane; detectors which are necessary to perform some of the experiments proposed during the brainstorming. In addition, Big Karl allows emission-angle reconstruction via track measurements in its focal plane with high resolution. In the following, the physics highlights and the proposed and potential experiments are summarized. During the meeting it became obvious that the physics to be explored at Big Karl can be grouped into five distinct categories, and this summary is organized accordingly. (orig.)

  8. Seed bank and big sagebrush plant community composition in a range margin for big sagebrush

    Science.gov (United States)

    Martyn, Trace E.; Bradford, John B.; Schlaepfer, Daniel R.; Burke, Ingrid C.; Laurenroth, William K.

    2016-01-01

    The potential influence of seed bank composition on range shifts of species due to climate change is unclear. Seed banks can provide a means of both species persistence in an area and local range expansion in the case of increasing habitat suitability, as may occur under future climate change. However, a mismatch between the seed bank and the established plant community may represent an obstacle to persistence and expansion. In big sagebrush (Artemisia tridentata) plant communities in Montana, USA, we compared the seed bank to the established plant community. There was less than a 20% similarity in the relative abundance of species between the established plant community and the seed bank. This difference was primarily driven by an overrepresentation of native annual forbs and an underrepresentation of big sagebrush in the seed bank compared to the established plant community. Even though we expect an increase in habitat suitability for big sagebrush under future climate conditions at our sites, the current mismatch between the plant community and the seed bank could impede big sagebrush range expansion into increasingly suitable habitat in the future.

  9. Application and Prospect of Big Data in Water Resources

    Science.gov (United States)

    Xi, Danchi; Xu, Xinyi

    2017-04-01

    Because of developed information technology and affordable data storage, we have entered the era of data explosion. The term "Big Data" and the technology related to it have been created and are commonly applied in many fields. However, academic studies have only recently turned their attention to Big Data applications in water resources. As a result, water-resource Big Data technology has not been fully developed. This paper introduces the concept of Big Data and its key technologies, including the Hadoop system and MapReduce. In addition, this paper focuses on the significance of applying big data in water resources and summarizes prior research by others. Most studies in this field only set up a theoretical frame, but we define "Water Big Data" and explain its three-dimensional properties: the time dimension, the spatial dimension and the intelligent dimension. Based on HBase, a classification system for Water Big Data is introduced: hydrology data, ecology data and socio-economic data. Then, after analyzing the challenges in water resources management, a series of solutions using Big Data technologies, such as data mining and web crawlers, are proposed. Finally, the prospects of applying big data in water resources are discussed; it can be predicted that as Big Data technology keeps developing, "3D" (Data Driven Decision) will be used more in water resources management in the future.
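
    The abstract does not detail the HBase schema. As a sketch of how the time and spatial dimensions might be encoded on top of the three data classes, the row-key scheme below (station identifier plus reversed timestamp, a common HBase pattern) is an illustrative assumption, not the authors' design.

        import time

        # Column families following the paper's three-way classification.
        COLUMN_FAMILIES = ("hydrology", "ecology", "socio_economic")

        def row_key(station_id: str, epoch_seconds: int) -> bytes:
            """Compose an HBase-style row key: spatial id + reversed timestamp.

            Reversing the timestamp (a common HBase idiom) makes the newest
            reading for a station sort first within its key range.
            """
            reversed_ts = 2**31 - epoch_seconds
            return f"{station_id}#{reversed_ts:010d}".encode()

        print(row_key("yellow_river_042", int(time.time())))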

  10. Big Data in food and agriculture

    Directory of Open Access Journals (Sweden)

    Kelly Bronson

    2016-06-01

    Full Text Available Farming is undergoing a digital revolution. Our existing review of current Big Data applications in the agri-food sector has revealed several collection and analytics tools that may have implications for relationships of power between players in the food system (e.g. between farmers and large corporations). For example, who retains ownership of the data generated by applications like Monsanto Corporation's Weed I.D. "app"? Are there privacy implications with the data gathered by John Deere's precision agricultural equipment? Systematically tracing the digital revolution in agriculture, and charting the affordances as well as the limitations of Big Data applied to food and agriculture, should be a broad research goal for Big Data scholarship. Such a goal brings data scholarship into conversation with food studies and it allows for a focus on the material consequences of big data in society.

  11. Big data optimization recent developments and challenges

    CERN Document Server

    2016-01-01

    The main objective of this book is to provide the necessary background for working with big data, introducing novel optimization algorithms and codes capable of working in the big data setting as well as applications of big data optimization, for interested academics and practitioners, and to benefit society, industry, academia, and government. Presenting applications in a variety of industries, this book will be useful for researchers aiming to analyse large-scale data. Several optimization algorithms for big data, including convergent parallel algorithms, the limited memory bundle algorithm, the diagonal bundle method, network analytics, and many more, are explored in this book.

  12. Una aproximación a Big Data = An approach to Big Data

    OpenAIRE

    Puyol Moreno, Javier

    2014-01-01

    Big Data can be considered a trend in the advance of technology that has opened the door to a new approach to understanding and decision-making, used to describe the enormous quantities of data (structured, unstructured and semi-structured) that would take too long and cost too much to load into a relational database for analysis. Thus, the concept of Big Data applies to all the information that cannot be processed or analyzed using ...

  13. Toward a Literature-Driven Definition of Big Data in Healthcare.

    Science.gov (United States)

    Baro, Emilie; Degoul, Samuel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    The aim of this study was to provide a definition of big data in healthcare. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.
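
    Taking the paper's volume criterion at face value, and assuming the logarithm is base 10 (which the threshold of 7, i.e. ten million cells, suggests), the check is one line:

        import math

        def is_big_data(n: int, p: int) -> bool:
            """Baro et al.'s volume criterion: log10(n * p) >= 7."""
            return math.log10(n * p) >= 7

        # 100,000 patients x 50 variables -> log10(5e6) ~ 6.7 -> not big data
        print(is_big_data(100_000, 50))    # False
        # 200,000 patients x 100 variables -> log10(2e7) ~ 7.3 -> big data
        print(is_big_data(200_000, 100))   # True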

  14. Big Data Analytic, Big Step for Patient Management and Care in Puerto Rico.

    Science.gov (United States)

    Borrero, Ernesto E

    2018-01-01

    This letter provides an overview of the application of big data in the health care system to improve quality of care, including predictive modelling for risk and resource use, precision medicine and clinical decision support, quality-of-care and performance measurement, and public health and research applications, among others. The author delineates the tremendous potential of big data analytics and discusses how it can be successfully implemented in clinical practice, as an important component of a learning health-care system.

  16. Losses in Ferroelectric Materials

    Science.gov (United States)

    Liu, Gang; Zhang, Shujun; Jiang, Wenhua; Cao, Wenwu

    2015-01-01

    Ferroelectric materials are the best dielectric and piezoelectric materials known today. Since the discovery of barium titanate in the 1940s, lead zirconate titanate ceramics in the 1950s and relaxor-PT single crystals (such as lead magnesium niobate-lead titanate and lead zinc niobate-lead titanate) in the 1980s and 1990s, perovskite ferroelectric materials have been the dominating piezoelectric materials for electromechanical devices, and are widely used in sensors, actuators and ultrasonic transducers. Energy losses (or energy dissipation) in ferroelectrics are one of the most critical issues for high-power devices, such as therapeutic ultrasonic transducers, large-displacement actuators, SONAR projectors, and high-frequency medical imaging transducers. The losses in ferroelectric materials are of three distinct types: elastic, piezoelectric and dielectric. Researchers have been investigating the mechanisms of these losses and trying to control and minimize them so as to reduce performance degradation in electromechanical devices. Impressive progress has been made on this topic in the past several decades, but some confusion still exists; a systematic review to define the related concepts and clear up that confusion is therefore urgently needed. With this objective in mind, we provide here a comprehensive review of the energy losses in ferroelectrics, including the related mechanisms, characterization techniques and collections of published data on many ferroelectric materials, to give interested scientists and engineers a useful resource for designing electromechanical devices and a global perspective on the complex physical phenomena involved. More importantly, based on the analysis of the available information, we propose a general theoretical model to describe the inherent relationships among elastic, dielectric, piezoelectric and mechanical losses. For multi-domain ferroelectric single crystals and ceramics, intrinsic and extrinsic energy
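
    The abstract stops before the model details. One standard way to encode the three loss types, which may or may not match the authors' final formulation, is through complex material constants with one loss tangent each:

        % Complex dielectric, elastic and piezoelectric constants; tan(delta),
        % tan(phi) and tan(theta) are the dielectric, elastic and piezoelectric
        % loss tangents, respectively (superscripts X and E denote constants
        % measured at constant stress and constant field).
        \varepsilon^{X*} = \varepsilon^{X}\left(1 - j\tan\delta\right), \qquad
        s^{E*} = s^{E}\left(1 - j\tan\phi\right), \qquad
        d^{*} = d\left(1 - j\tan\theta\right).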

  18. Big Data and historical social science

    Directory of Open Access Journals (Sweden)

    Peter Bearman

    2015-11-01

    Full Text Available “Big Data” can revolutionize historical social science if it arises from substantively important contexts and is oriented towards answering substantively important questions. Such data may be especially important for answering previously largely intractable questions about the timing and sequencing of events, and of event boundaries. That said, “Big Data” makes no difference for social scientists and historians whose accounts rest on narrative sentences. Since such accounts are the norm, the effects of Big Data on the practice of historical social science may be more limited than one might wish.

  19. The Inverted Big-Bang

    OpenAIRE

    Vaas, Ruediger

    2004-01-01

    Our universe appears to have been created not out of nothing but from a strange space-time dust. Quantum geometry (loop quantum gravity) makes it possible to avoid the ominous beginning of our universe with its physically unrealistic (i.e. infinite) curvature, extreme temperature, and energy density. This could be the long sought after explanation of the big-bang and perhaps even opens a window into a time before the big-bang: Space itself may have come from an earlier collapsing universe tha...

  20. Minsky on "Big Government"

    Directory of Open Access Journals (Sweden)

    Daniel de Santana Vasconcelos

    2014-03-01

    Full Text Available The objective of this paper is to assess, in light of Minsky's main works, his view and analysis of what he called "Big Government": the huge institution which, in parallel with the "Big Bank", was capable of ensuring stability in the capitalist system and of regulating its inherently unstable financial system in the mid-20th century. In this work, we analyze how Minsky proposes an active role for the government in a complex economic system flawed by financial instability.

  1. Classical propagation of strings across a big crunch/big bang singularity

    International Nuclear Information System (INIS)

    Niz, Gustavo; Turok, Neil

    2007-01-01

    One of the simplest time-dependent solutions of M theory consists of nine-dimensional Euclidean space times 1+1-dimensional compactified Milne space-time. With a further modding out by Z₂, the space-time represents two orbifold planes which collide and re-emerge, a process proposed as an explanation of the hot big bang [J. Khoury, B. A. Ovrut, P. J. Steinhardt, and N. Turok, Phys. Rev. D 64, 123522 (2001).][P. J. Steinhardt and N. Turok, Science 296, 1436 (2002).][N. Turok, M. Perry, and P. J. Steinhardt, Phys. Rev. D 70, 106004 (2004).]. When the two planes are near, the light states of the theory consist of winding M2-branes, describing fundamental strings in a particular ten-dimensional background. They suffer no blue-shift as the M theory dimension collapses, and their equations of motion are regular across the transition from big crunch to big bang. In this paper, we study the classical evolution of fundamental strings across the singularity in some detail. We also develop a simple semiclassical approximation to the quantum evolution which allows one to compute the quantum production of excitations on the string and implement it in a simplified example

  2. Five-dimensional null-cone structure of big bang singularity

    Energy Technology Data Exchange (ETDEWEB)

    Lauro, S.; Schucking, E.L.

    1985-04-01

    The Friedmann model Φ of positive space curvature, vanishing pressure and cosmological constant, when isometrically imbedded as a hypersurface in five-dimensional Minkowski space M⁵, is globally rigid: if F(Φ) and F′(Φ) are isometric embeddings in M⁵, there is a motion π of M⁵ such that F′ = π∘F. The big bang singularity is the vertex of a null half-cone in M⁵. Global rigidity leads to an invariant characterization of the singularity. The structure of matter at the singularity is governed by the de Sitter group.

  3. Five-dimensional null-cone structure of big bang singularity

    International Nuclear Information System (INIS)

    Lauro, S.; Schucking, E.L.

    1985-01-01

    The Friedmann model Φ of positive space curvature, vanishing pressure and cosmological constant, when isometrically imbedded as a hypersurface in five-dimensional Minkowski space M⁵, is globally rigid: if F(Φ) and F′(Φ) are isometric embeddings in M⁵, there is a motion π of M⁵ such that F′ = π∘F. The big bang singularity is the vertex of a null half-cone in M⁵. Global rigidity leads to an invariant characterization of the singularity. The structure of matter at the singularity is governed by the de Sitter group. (author)

  4. The Information Panopticon in the Big Data Era

    Directory of Open Access Journals (Sweden)

    Martin Berner

    2014-04-01

    Full Text Available Taking advantage of big data opportunities is challenging for traditional organizations. In this article, we take a panoptic view of big data – obtaining information from more sources and making it visible to all organizational levels. We suggest that big data requires the transformation from command and control hierarchies to post-bureaucratic organizational structures wherein employees at all levels can be empowered while simultaneously being controlled. We derive propositions that show how to best exploit big data technologies in organizations.

  5. Du-Zhong (Eucommia ulmoides Oliv.) Cortex Extract Alleviates Lead Acetate-Induced Bone Loss in Rats.

    Science.gov (United States)

    Qi, Shanshan; Zheng, Hongxing; Chen, Chen; Jiang, Hai

    2018-05-09

    The purpose of this study was to evaluate the protective effect of Du-Zhong cortex extract (DZCE) on lead acetate-induced bone loss in rats. Forty female Sprague-Dawley rats were randomly divided into four groups: group I (control) was provided with distilled water. Group II (PbAc) received 500 ppm lead acetate in drinking water for 60 days. Group III (PbAc+DZCE) received 500 ppm lead acetate in drinking water and was given intragastric DZCE (100 mg/kg body weight) for 60 days. Group IV (DZCE) was given intragastric DZCE (100 mg/kg body weight) for 60 days. Bone mineral density, serum biochemical markers, bone histomorphology, and bone marrow adipocyte parameters were analyzed using dual-energy X-ray absorptiometry, biochemistry, histomorphometry, and histopathology, respectively. The results showed that lumbar spine and femur bone mineral density was significantly decreased in the PbAc group compared with the control (P < 0.05) and was restored in the PbAc+DZCE group (P > 0.05 vs. control and DZCE groups). Serum calcium and serum phosphorus in the PbAc+DZCE group were greater than those in the PbAc group (P < 0.05). […] control group (P < 0.05). […] control, and DZCE groups (P > 0.05). Serum OPG and the OPG/RANKL ratio were significantly higher in the PbAc+DZCE group than in the PbAc group (P < 0.05). […] control group, but those were restored in the PbAc+DZCE groups. The bone marrow adipocyte number, percent adipocyte volume per tissue volume (AV/TV), and mean adipocyte diameter were significantly increased in the PbAc group compared to the control (P < 0.05). […] control group were not significant. The results above indicate that Du-Zhong cortex extract both stimulates bone formation and suppresses bone resorption in lead-exposed rats; therefore, Du-Zhong cortex extract has the potential to prevent or treat osteoporosis resulting from lead exposure.

  6. WE-H-BRB-00: Big Data in Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Big Data in Radiation Oncology: (1) Overview of the NIH 2015 Big Data Workshop, (2) Where do we stand in the applications of big data in radiation oncology?, and (3) Learning Health Systems for Radiation Oncology: Needs and Challenges for Future Success The overriding goal of this trio panel of presentations is to improve awareness of the wide ranging opportunities for big data impact on patient quality care and enhancing potential for research and collaboration opportunities with NIH and a host of new big data initiatives. This presentation will also summarize the Big Data workshop that was held at the NIH Campus on August 13–14, 2015 and sponsored by AAPM, ASTRO, and NIH. The workshop included discussion of current Big Data cancer registry initiatives, safety and incident reporting systems, and other strategies that will have the greatest impact on radiation oncology research, quality assurance, safety, and outcomes analysis. Learning Objectives: To discuss current and future sources of big data for use in radiation oncology research To optimize our current data collection by adopting new strategies from outside radiation oncology To determine what new knowledge big data can provide for clinical decision support for personalized medicine L. Xing, NIH/NCI Google Inc.

  8. De impact van Big Data op Internationale Betrekkingen [The impact of Big Data on international relations]

    NARCIS (Netherlands)

    Zwitter, Andrej

    Big Data changes our daily lives, but does it also change international politics? In this contribution, Andrej Zwitter (NGIZ chair at Groningen University) argues that Big Data impacts international relations in ways that we are only now starting to understand. To comprehend how Big Data influences

  9. Epidemiology in the Era of Big Data

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-01-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called ‘3 Vs’: variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that, while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field’s future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future. PMID:25756221

  10. Big data and analytics strategic and organizational impacts

    CERN Document Server

    Morabito, Vincenzo

    2015-01-01

    This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners...

  11. From the Big Bang to the Nobel Prize and on to the James Webb Space Telescope

    Science.gov (United States)

    Mather, John C.

    2008-01-01

    The history of the universe in a nutshell, from the Big Bang to now, and on to the future - John Mather will tell the story of how we got here, how the Universe began with a Big Bang, how it could have produced an Earth where sentient beings can live, and how those beings are discovering their history. Mather was Project Scientist for NASA's Cosmic Background Explorer (COBE) satellite, which measured the spectrum (the color) of the heat radiation from the Big Bang, discovered hot and cold spots in that radiation, and hunted for the first objects that formed after the great explosion. He will explain Einstein's biggest mistake, show how Edwin Hubble discovered the expansion of the universe, how the COBE mission was built, and how the COBE data support the Big Bang theory. He will also show NASA's plans for the next great telescope in space, the James Webb Space Telescope. It will look even farther back in time than the Hubble Space Telescope, and will look inside the dusty cocoons where stars and planets are being born today. Planned for launch in 2013, it may lead to another Nobel Prize for some lucky observer.

  12. Big Science and Long-tail Science

    CERN Document Server

    2008-01-01

    Jim Downing and I were privileged to be the guests of Salvatore Mele at CERN yesterday and to see the ATLAS detector of the Large Hadron Collider. This is a wow experience - although I knew it was big, I hadn't realised how big.

  13. Sexual dimorphism in relation to big-game hunting and economy in modern human populations.

    Science.gov (United States)

    Collier, S

    1993-08-01

    Postcranial skeletal data from two recent Eskimo populations are used to test David Frayer's model of sexual dimorphism reduction in Europe between the Upper Paleolithic and Mesolithic. Frayer argued that a change from big-game hunting and adoption of new technology in the Mesolithic reduced selection for large body size in males and led to a reduction in skeletal sexual dimorphism. Though aspects of Frayer's work have been criticized in the literature, the association of big-game hunting and high sexual dimorphism is untested. This study employs univariate and multivariate analysis to test that association by examining sexual dimorphism of cranial and postcranial bones of two recent Alaskan Eskimo populations, one being big-game (whale and other large marine mammal) hunting people, and the second being salmon fishing, riverine people. While big-game hunting influences skeletal robusticity, it cannot be said to lead to greater sexual dimorphism generally. The two populations had different relative sexual dimorphism levels for different parts of the body. Notably, the big-game hunting (whaling) Eskimos had the lower multivariate dimorphism in the humerus, which could be expected to be the structure under greatest exertion by such hunting in males. While the exertions of the whale hunting economic activities led to high skeletal robusticity, as predicted by Frayer's model, this was true of the females as well as the males, resulting in low sexual dimorphism in some features. Females are half the sexual dimorphism equation, and they cannot be seen as constants in any model of economic behavior.

  14. Toward a Literature-Driven Definition of Big Data in Healthcare

    Directory of Open Access Journals (Sweden)

    Emilie Baro

    2015-01-01

    Full Text Available Objective. The aim of this study was to provide a definition of big data in healthcare. Methods. A systematic search of PubMed literature published until May 9, 2014, was conducted. We noted the number of statistical individuals (n) and the number of variables (p) for all papers describing a dataset. These papers were classified into fields of study. Characteristics attributed to big data by authors were also considered. Based on this analysis, a definition of big data was proposed. Results. A total of 196 papers were included. Big data can be defined as datasets with Log(n∗p) ≥ 7. Properties of big data are its great variety and high velocity. Big data raises challenges on veracity, on all aspects of the workflow, on extracting meaningful information, and on sharing information. Big data requires new computational methods that optimize data management. Related concepts are data reuse, false knowledge discovery, and privacy issues. Conclusion. Big data is defined by volume. Big data should not be confused with data reuse: data can be big without being reused for another purpose, for example, in omics. Inversely, data can be reused without being necessarily big, for example, secondary use of Electronic Medical Records (EMR) data.

  16. Big-Eyed Bugs Have Big Appetite for Pests

    Science.gov (United States)

    Many kinds of arthropod natural enemies (predators and parasitoids) inhabit crop fields in Arizona and can have a large negative impact on several pest insect species that also infest these crops. Geocoris spp., commonly known as big-eyed bugs, are among the most abundant insect predators in field c...

  17. Big Data - What is it and why it matters.

    Science.gov (United States)

    Tattersall, Andy; Grant, Maria J

    2016-06-01

    Big data, like MOOCs, altmetrics and open access, is a term that has been commonplace in the library community for some time. Yet despite its prevalence, many in the library and information sector remain unsure of the relationship between big data and their roles. This editorial explores what big data could mean for the day-to-day practice of health library and information workers, presenting examples of big data in action, considering the ethics of accessing big data sets, and discussing the potential for new roles for library and information workers. © 2016 Health Libraries Group.

  18. Research on information security in big data era

    Science.gov (United States)

    Zhou, Linqi; Gu, Weihong; Huang, Cheng; Huang, Aijun; Bai, Yongbin

    2018-05-01

    Big data is becoming another hotspot in the field of information technology, after cloud computing and the Internet of Things. However, existing information security methods can no longer meet the information security requirements of the big data era. This paper analyzes the challenges that big data brings to data security and their causes, discusses the development trend of network attacks against the background of big data, and puts forward our own opinions on the development of security defenses in technology, strategy and products.

  19. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other approaches, Genetic Algorithm (GA-based), Biogeography-based Optimization (BBO-based) and recursive approaches, are also implemented. The experimental results show that the proposed algorithm performs more effectively than the GA-based, BBO-based and recursion-based approaches.
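
    The paper's exact membership functions and entropy variant are not given in the abstract. The sketch below pairs one common choice (Zadeh-style probabilities of the two fuzzy classes, with linear memberships) with a toy Big Bang-Big Crunch loop that scatters candidates around a centre of mass and then contracts to the fitness-weighted centre; all names and parameter values are illustrative.

        import numpy as np

        def fuzzy_2partition_entropy(hist: np.ndarray, a: float, c: float) -> float:
            """Shannon entropy of a fuzzy 2-partition of a 256-bin histogram.

            Membership in the 'dark' class decays linearly from 1 at level a to 0
            at level c (a stand-in for the paper's parameterized memberships);
            the 'bright' class is its complement.
            """
            levels = np.arange(256)
            mu_dark = np.clip((c - levels) / max(c - a, 1e-9), 0.0, 1.0)
            p = hist / hist.sum()
            p_dark = float((mu_dark * p).sum())
            p_bright = 1.0 - p_dark
            eps = 1e-12
            return -(p_dark * np.log(p_dark + eps) + p_bright * np.log(p_bright + eps))

        def bbbc_threshold(hist: np.ndarray, iters: int = 40, pop: int = 30,
                           seed: int = 0) -> float:
            """Toy Big Bang-Big Crunch search for the (a, c) maximizing the entropy."""
            rng = np.random.default_rng(seed)
            center = np.array([85.0, 170.0])
            best, best_f = center.copy(), -np.inf
            for k in range(1, iters + 1):
                # Big Bang: scatter candidates around the centre of mass, with a
                # search radius that shrinks as the iteration count grows.
                cand = center + rng.normal(0.0, 255.0 / k, size=(pop, 2))
                cand = np.sort(np.clip(cand, 0.0, 255.0), axis=1)  # keep a <= c
                f = np.array([fuzzy_2partition_entropy(hist, a, c) for a, c in cand])
                if f.max() > best_f:
                    best_f, best = f.max(), cand[f.argmax()].copy()
                # Big Crunch: contract the population to its fitness-weighted centre.
                w = f - f.min() + 1e-9
                center = (cand * w[:, None]).sum(axis=0) / w.sum()
            return (best[0] + best[1]) / 2.0  # crossover point used as threshold

        # Bimodal toy histogram: dark peak near 60, bright peak near 180.
        samples = np.concatenate([np.random.default_rng(1).normal(60, 15, 5000),
                                  np.random.default_rng(2).normal(180, 20, 5000)])
        hist = np.histogram(samples, bins=256, range=(0, 255))[0].astype(float)
        print(bbbc_threshold(hist))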

  20. A little big history of Tiananmen

    NARCIS (Netherlands)

    Quaedackers, E.; Grinin, L.E.; Korotayev, A.V.; Rodrigue, B.H.

    2011-01-01

    This contribution aims at demonstrating the usefulness of studying small-scale subjects such as Tiananmen, or the Gate of Heavenly Peace, in Beijing - from a Big History perspective. By studying such a ‘little big history’ of Tiananmen, previously overlooked yet fundamental explanations for why

  1. Addressing big data issues in Scientific Data Infrastructure

    NARCIS (Netherlands)

    Demchenko, Y.; Membrey, P.; Grosso, P.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2013-01-01

    Big Data are becoming a new technology focus both in science and in industry. This paper discusses the challenges that are imposed by Big Data on the modern and future Scientific Data Infrastructure (SDI). The paper discusses a nature and definition of Big Data that include such features as Volume,

  2. Big Data - Smart Health Strategies

    Science.gov (United States)

    2014-01-01

    Summary Objectives To select best papers published in 2013 in the field of big data and smart health strategies, and summarize outstanding research efforts. Methods A systematic search was performed using two major bibliographic databases for relevant journal papers. The references obtained were reviewed in a two-stage process, starting with a blinded review performed by the two section editors, and followed by a peer review process operated by external reviewers recognized as experts in the field. Results The complete review process selected four best papers, illustrating various aspects of the special theme, among them: (a) using large volumes of unstructured data and, specifically, clinical notes from Electronic Health Records (EHRs) for pharmacovigilance; (b) knowledge discovery via querying large volumes of complex (both structured and unstructured) biological data using big data technologies and relevant tools; (c) methodologies for applying cloud computing and big data technologies in the field of genomics, and (d) system architectures enabling high-performance access to and processing of large datasets extracted from EHRs. Conclusions The potential of big data in biomedicine has been pinpointed in various viewpoint papers and editorials. The review of current scientific literature illustrated a variety of interesting methods and applications in the field, but still the promises exceed the current outcomes. As we are getting closer towards a solid foundation with respect to common understanding of relevant concepts and technical aspects, and the use of standardized technologies and tools, we can anticipate to reach the potential that big data offer for personalized medicine and smart health strategies in the near future. PMID:25123721

  3. About Big Data and its Challenges and Benefits in Manufacturing

    OpenAIRE

    Bogdan NEDELCU

    2013-01-01

    The aim of this article is to show the importance of Big Data and its growing influence on companies. It also shows what kinds of big data are currently generated and how much big data is estimated to be generated in the future. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. Also shown are some major influences that big data has on one major segment of industry (manufacturing) and the challenges that appear.

  5. Big Data Management in US Hospitals: Benefits and Barriers.

    Science.gov (United States)

    Schaeffer, Chad; Booton, Lawrence; Halleck, Jamey; Studeny, Jana; Coustasse, Alberto

    Big data has been considered an effective tool for reducing health care costs by eliminating adverse events and reducing readmissions to hospitals. The purposes of this study were to examine the emergence of big data in the US health care industry, to evaluate hospitals' ability to effectively use complex information, and to predict the potential benefits that hospitals might realize if they are successful in using big data. The findings of the research suggest that hospitals expected a number of benefits from using big data analytics, including cost savings and business intelligence. In using big data, many hospitals have also recognized challenges, including lack of experience and the cost of developing the analytics. Many hospitals will need to invest in acquiring personnel with experience in big data analytics and data integration. The findings of this study suggest that the adoption, implementation, and utilization of big data technology will have a profound positive effect among health care providers.

  6. Big Data Strategy for Telco: Network Transformation

    OpenAIRE

    F. Amin; S. Feizi

    2014-01-01

    Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big Data presents new methods to reverse this trend and improve profitability. The benefits of Big Data and ...

  7. Big Data in Shipping - Challenges and Opportunities

    OpenAIRE

    Rødseth, Ørnulf Jan; Perera, Lokukaluge Prasad; Mo, Brage

    2016-01-01

    Big Data is getting popular in shipping where large amounts of information is collected to better understand and improve logistics, emissions, energy consumption and maintenance. Constraints to the use of big data include cost and quality of on-board sensors and data acquisition systems, satellite communication, data ownership and technical obstacles to effective collection and use of big data. New protocol standards may simplify the process of collecting and organizing the data, including in...

  8. [Relevance of big data for molecular diagnostics].

    Science.gov (United States)

    Bonin-Andresen, M; Smiljanovic, B; Stuhlmüller, B; Sörensen, T; Grützkau, A; Häupl, T

    2018-04-01

    Big data analysis raises the expectation that computerized algorithms may extract new knowledge from otherwise unmanageable vast data sets. What are the algorithms behind the big data discussion? In principle, high-throughput technologies in molecular research already introduced big data, and the development and application of analysis tools, into the field of rheumatology some 15 years ago. This includes especially omics technologies, such as genomics, transcriptomics and cytomics. Some basic methods of data analysis are provided along with the technology; however, functional analysis and interpretation require adaptation of existing software tools or development of new ones. For these steps, structuring and evaluating according to the biological context is extremely important and not only a mathematical problem. This aspect has to be considered much more for molecular big data than for data analyzed in health economy or epidemiology. Molecular data are structured in a first order determined by the applied technology and present quantitative characteristics that follow the principles of their biological nature. These biological dependencies have to be integrated into software solutions, which may require networks of molecular big data of the same or even different technologies in order to achieve cross-technology confirmation. Increasingly extensive recording of molecular processes, also in individual patients, is generating personal big data and requires new strategies for management in order to develop data-driven individualized interpretation concepts. With this perspective in mind, translation of information derived from molecular big data will also require new specifications for education and professional competence.

  9. Big data in psychology: A framework for research advancement.

    Science.gov (United States)

    Adjerid, Idris; Kelley, Ken

    2018-02-22

    The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. np→dγ for big-bang nucleosynthesis

    International Nuclear Information System (INIS)

    Chen, Jiunn-Wei; Savage, Martin J.

    1999-01-01

    The cross section for np→dγ is calculated at energies relevant to big-bang nucleosynthesis using the recently developed effective field theory that describes the two-nucleon sector. The E1 amplitude is computed up to N³LO and depends only upon nucleon-nucleon phase shift data. In contrast, the M1 contribution is computed up to next-to-leading order, and the four-nucleon-one-magnetic-photon counterterm that enters is determined by the cross section for cold neutron capture. The uncertainty in the calculation for nucleon energies up to E∼1 MeV is estimated to be ≲4%. (c) 1999 The American Physical Society

  11. 'Big data' in pharmaceutical science: challenges and opportunities.

    Science.gov (United States)

    Dossetter, Al G; Ecker, Gerhard; Laverty, Hugh; Overington, John

    2014-05-01

    Future Medicinal Chemistry invited a selection of experts to express their views on the current impact of big data in drug discovery and design, as well as speculate on future developments in the field. The topics discussed include the challenges of implementing big data technologies, maintaining the quality and privacy of data sets, and how the industry will need to adapt to welcome the big data era. Their enlightening responses provide a snapshot of the many and varied contributions being made by big data to the advancement of pharmaceutical science.

  12. Big Data: An Opportunity for Collaboration with Computer Scientists on Data-Driven Science

    Science.gov (United States)

    Baru, C.

    2014-12-01

    Big data technologies are evolving rapidly, driven by the need to manage ever increasing amounts of historical data; process relentless streams of human and machine-generated data; and integrate data of heterogeneous structure from extremely heterogeneous sources of information. Big data is inherently an application-driven problem. Developing the right technologies requires an understanding of the application domain. An intriguing aspect of this phenomenon, though, is that the availability of the data itself enables new applications not previously conceived of! In this talk, we will discuss how the big data phenomenon creates an imperative for collaboration among domain scientists (in this case, geoscientists) and computer scientists. Domain scientists provide the application requirements as well as insights about the data involved, while computer scientists help assess whether problems can be solved with currently available technologies or require adaptation of existing technologies and/or development of new technologies. The synergy can create vibrant collaborations potentially leading to new science insights as well as development of new data technologies and systems. The area of interface between geosciences and computer science, also referred to as geoinformatics is, we believe, a fertile area for interdisciplinary research.

  13. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, meaning the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges from data mining to data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in the effective management of dynamic and large-scale data, and the efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, variety of data and frequent chan...

  14. [Some reflections on evidenced-based medicine, precision medicine, and big data-based research].

    Science.gov (United States)

    Tang, J L; Li, L M

    2018-01-10

    Evidence-based medicine remains the best paradigm for medical practice. However, evidence alone is not decisions; decisions must also consider the resources available and the values of people. Evidence shows that most of those treated with blood pressure-lowering, cholesterol-lowering, glucose-lowering and anti-cancer drugs do not benefit, in terms of prevented severe complications such as cardiovascular events and death. This implies that diagnosis and treatment in modern medicine are, in many circumstances, imprecise. It has become a dream to identify and treat only those few who can respond to the treatment. Precision medicine has thus come into being. Precision medicine is however not a new idea and cannot rely solely on gene sequencing as it was initially proposed. Neither is the large cohort and multi-factorial approach a new idea; in fact it has been used widely since the 1950s. Since its very beginning, medicine has never stopped searching for more precise diagnostic and therapeutic methods, and has already made achievements at various levels of our understanding and knowledge, such as vaccines, blood transfusion, imaging, and cataract surgery. Genetic biotechnology is not the only path to precision but merely a new method. Most genes are found only weakly associated with disease and are thus unlikely to lead to great improvement in diagnostic and therapeutic precision. The traditional multi-factorial approach, embracing big data and incorporating genetic factors, is probably the most realistic way ahead for precision medicine. Big data boasts of possession of the total population and large sample sizes, and claims correlation can displace causation. These are seriously misleading concepts. Science has never had to observe the totality in order to draw a valid conclusion; a large sample size is required only when the anticipated effect is small and clinically less meaningful; emphasis on correlation over causation is equivalent to rejection of the scientific principles and methods.

  15. Childhood Deafness: How Big a Problem In Malawi?

    African Journals Online (AJOL)

    INTRODUCTION. Few studies have been made of the prevalence of hearing loss in African populations, certainly in comparison to the considerable amount known about the incidence, prevalence and antecedents of visual impairment and blindness (1). Yet hearing loss at any age leads to considerable social ...

  16. Solution of a braneworld big crunch/big bang cosmology

    International Nuclear Information System (INIS)

    McFadden, Paul L.; Turok, Neil; Steinhardt, Paul J.

    2007-01-01

    We solve for the cosmological perturbations in a five-dimensional background consisting of two separating or colliding boundary branes, as an expansion in the collision speed V divided by the speed of light c. Our solution permits a detailed check of the validity of four-dimensional effective theory in the vicinity of the event corresponding to the big crunch/big bang singularity. We show that the four-dimensional description fails at the first nontrivial order in (V/c)². At this order, there is nontrivial mixing of the two relevant four-dimensional perturbation modes (the growing and decaying modes) as the boundary branes move from the narrowly separated limit described by Kaluza-Klein theory to the well-separated limit where gravity is confined to the positive-tension brane. We comment on the cosmological significance of the result and compute other quantities of interest in five-dimensional cosmological scenarios.

  17. [Big data and their perspectives in radiation therapy].

    Science.gov (United States)

    Guihard, Sébastien; Thariat, Juliette; Clavier, Jean-Baptiste

    2017-02-01

    The concept of big data indicates a change of scale in the use of data and data aggregation into large databases through improved computer technology. One of the current challenges in the creation of big data in the context of radiation therapy is the transformation of routine care items into dark data, i.e. data not yet collected, and the fusion of databases collecting different types of information (dose-volume histograms and toxicity data for example). Processes and infrastructures devoted to big data collection should not impact negatively on the doctor-patient relationship, the general process of care or the quality of the data collected. The use of big data requires a collective effort of physicians, physicists, software manufacturers and health authorities to create, organize and exploit big data in radiotherapy and, beyond, oncology. Big data involve a new culture to build an appropriate infrastructure legally and ethically. Processes and issues are discussed in this article. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  18. Current applications of big data in obstetric anesthesiology.

    Science.gov (United States)

    Klumpner, Thomas T; Bauer, Melissa E; Kheterpal, Sachin

    2017-06-01

    The narrative review aims to highlight several recently published 'big data' studies pertinent to the field of obstetric anesthesiology. Big data has been used to study rare outcomes, to identify trends within the healthcare system, to identify variations in practice patterns, and to highlight potential inequalities in obstetric anesthesia care. Big data studies have helped define the risk of rare complications of obstetric anesthesia, such as the risk of neuraxial hematoma in thrombocytopenic parturients. Also, large national databases have been used to better understand trends in anesthesia-related adverse events during cesarean delivery as well as outline potential racial/ethnic disparities in obstetric anesthesia care. Finally, real-time analysis of patient data across a number of disparate health information systems through the use of sophisticated clinical decision support and surveillance systems is one promising application of big data technology on the labor and delivery unit. 'Big data' research has important implications for obstetric anesthesia care and warrants continued study. Real-time electronic surveillance is a potentially useful application of big data technology on the labor and delivery unit.

  19. Volume and Value of Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions.

  20. Using Big Book to Teach Things in My House

    OpenAIRE

    Effrien, Intan; Lailatus, Sa’diyah; Nuruliftitah Maja, Neneng

    2017-01-01

    The purpose of this study is to determine students' interest in learning using big book media. A big book is an enlarged version of an ordinary book; it contains simple words and images that match the content of the sentences and their spelling. From this, researchers can gauge students' interest and the development of their knowledge. It also trains researchers to remain creative in developing learning media for students.

  1. Big Data Analytics Methodology in the Financial Industry

    Science.gov (United States)

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  2. Big data: survey, technologies, opportunities, and challenges.

    Science.gov (United States)

    Khan, Nawsher; Yaqoob, Ibrar; Hashem, Ibrahim Abaker Targio; Inayat, Zakira; Ali, Waleed Kamaleldin Mahmoud; Alam, Muhammad; Shiraz, Muhammad; Gani, Abdullah

    2014-01-01

    Big Data has gained much attention from the academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in Big Data domination. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  4. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    Science.gov (United States)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data Analytics is a buzzword nowadays. As demands for more scalable data processing grow, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. Nowadays the trend towards "big data-as-a-service" is talked about everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost. On the other hand, researchers are working to solve security and other real-time problems of big data migration to cloud-based platforms. This article is specifically focused on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of doing big data analytics on a cloud platform, is in demand for a new era of growth. This article also gives information about available technologies and techniques for the migration of big data to the cloud.

  5. Hot big bang or slow freeze?

    Science.gov (United States)

    Wetterich, C.

    2014-09-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze - a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple "crossover model" without a big bang singularity. In the infinite past space-time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  6. Big Data

    DEFF Research Database (Denmark)

    Aaen, Jon; Nielsen, Jeppe Agger

    2016-01-01

    Big Data presents itself as one of the most hyped technological innovations of our time, proclaimed to contain the seed of new, valuable operational insights for private companies and public organizations. While the optimistic pronouncements are many, research on Big Data in the public sector has so far been limited. This article examines how the public health sector can reuse and exploit an ever-growing amount of data while taking public values into account. The article builds on a case study of the use of large amounts of health data in the Danish General Practice Database (Dansk AlmenMedicinsk Database, DAMD). The analysis shows that (re)use of data in new contexts is a multifaceted trade-off involving not only economic rationales and quality considerations, but also control over sensitive personal data and ethical implications for the citizen. In the DAMD case, data are on the one hand used "in the service of a good cause" to...

  7. A Perplexed Economist Confronts 'too Big to Fail'

    Directory of Open Access Journals (Sweden)

    Scherer, F. M.

    2010-12-01

    Full Text Available This paper examines premises and data underlying the assertion that some financial institutions in the U.S. economy were "too big to fail" and hence warranted government bailout. It traces the merger histories enhancing the dominance of six leading firms in the U.S. banking industry and the sharp increases in the concentration of financial institution assets accompanying that merger wave. Financial institution profits are found to have soared in tandem with rising concentration. The paper advances hypotheses why these phenomena might be related and surveys relevant empirical literature on the relationships between market concentration, interest rates received and charged by banks, and economies of scale in banking.

  8. Curating Big Data Made Simple: Perspectives from Scientific Communities.

    Science.gov (United States)

    Sowe, Sulayman K; Zettsu, Koji

    2014-03-01

    The digital universe is exponentially producing an unprecedented volume of data that has brought benefits as well as fundamental challenges for enterprises and scientific communities alike. This trend is inherently exciting for the development and deployment of cloud platforms to support scientific communities curating big data. The excitement stems from the fact that scientists can now access and extract value from the big data corpus, establish relationships between bits and pieces of information from many types of data, and collaborate with a diverse community of researchers from various domains. However, despite these perceived benefits, to date, little attention is focused on the people or communities who are both beneficiaries and, at the same time, producers of big data. The technical challenges posed by big data are as big as understanding the dynamics of communities working with big data, whether scientific or otherwise. Furthermore, the big data era also means that big data platforms for data-intensive research must be designed in such a way that research scientists can easily search and find data for their research, upload and download datasets for onsite/offsite use, perform computations and analysis, share their findings and research experience, and seamlessly collaborate with their colleagues. In this article, we present the architecture and design of a cloud platform that meets some of these requirements, and a big data curation model that describes how a community of earth and environmental scientists is using the platform to curate data. Motivation for developing the platform, lessons learnt in overcoming some challenges associated with supporting scientists to curate big data, and future research directions are also presented.

  9. Big data analytics in healthcare: promise and potential.

    Science.gov (United States)

    Raghupathi, Wullianallur; Raghupathi, Viju

    2014-01-01

    To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however there remain challenges to overcome.

  10. Data warehousing in the age of big data

    CERN Document Server

    Krishnan, Krish

    2013-01-01

    Data Warehousing in the Age of Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Expert author Krish Krishnan helps you make sense of how Big Data fits into the world of data warehousing in clear and concise detail. The book is presented in three distinct parts. Part 1 discusses Big Data, its technologies and use cases from early adopters. Part 2 addresses data warehousing, its shortcomings, and new architecture

  11. The Death of the Big Men

    DEFF Research Database (Denmark)

    Martin, Keir

    2010-01-01

    Recently Tolai people of Papua New Guinea have adopted the term 'Big Shot' to describe an emerging post-colonial political elite. The emergence of the term is a negative moral evaluation of new social possibilities that have arisen as a consequence of the Big Shots' privileged position within a glo...

  12. Big data and software defined networks

    CERN Document Server

    Taheri, Javid

    2018-01-01

    Big Data Analytics and Software Defined Networking (SDN) are helping to manage the extraordinary increase in data usage made possible by the computer processing power of Cloud Data Centres (CDCs). This new book investigates areas where Big Data and SDN can help each other in delivering more efficient services.

  13. Big Data-Survey

    Directory of Open Access Journals (Sweden)

    P.S.G. Aruna Sri

    2016-03-01

    Full Text Available Big data is the term for any collection of data sets so large and complex that it becomes difficult to process using conventional data-handling applications. The challenges include analysis, capture, curation, search, sharing, storage, transfer, visualization, and privacy violations. To spot business trends, anticipate diseases, conflicts and so on, we require larger data sets than the smaller ones used in the past. Big data is hard to work with using most relational database management systems and desktop statistics and visualization packages, requiring instead massively parallel software running on tens, hundreds, or even thousands of servers. In this paper we give an overview of the Hadoop architecture, the different tools used for big data, and its security issues.

  14. Addressing Data Veracity in Big Data Applications

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Computer Science; Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Electrical Engineering; Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Electrical Engineering

    2014-10-27

    Big data applications such as in smart electric grids, transportation, and remote environment monitoring involve geographically dispersed sensors that periodically send back information to central nodes. In many cases, data from sensors is not available at central nodes at a frequency that is required for real-time modeling and decision-making. This may be due to physical limitations of the transmission networks, or due to consumers limiting frequent transmission of data from sensors located at their premises for security and privacy concerns. Such scenarios lead to partial data problem and raise the issue of data veracity in big data applications. We describe a novel solution to the problem of making short term predictions (up to a few hours ahead) in absence of real-time data from sensors in Smart Grid. A key implication of our work is that by using real-time data from only a small subset of influential sensors, we are able to make predictions for all sensors. We thus reduce the communication complexity involved in transmitting sensory data in Smart Grids. We use real-world electricity consumption data from smart meters to empirically demonstrate the usefulness of our method. Our dataset consists of data collected at 15-min intervals from 170 smart meters in the USC Microgrid for 7 years, totaling 41,697,600 data points.
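
    The core idea above, predicting readings for all sensors from a small subset of influential sensors, can be illustrated with a short sketch. This is not the authors' implementation: it assumes Python with numpy and scikit-learn, uses synthetic data in place of the USC Microgrid readings, and picks "influential" meters with a simple average-correlation heuristic.

      # Sketch: predict all meters from a small subset of influential meters.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)

      # Synthetic history: 1000 time steps x 170 meters driven by shared factors.
      T, N, K = 1000, 170, 8
      factors = rng.normal(size=(T, K))
      history = factors @ rng.normal(size=(K, N)) + 0.1 * rng.normal(size=(T, N))

      # Heuristic choice of influential meters: highest mean absolute correlation.
      corr = np.abs(np.corrcoef(history, rowvar=False))
      subset = np.argsort(corr.mean(axis=0))[-10:]

      # Learn a mapping from the subset's readings to every meter's reading.
      model = Ridge(alpha=1.0).fit(history[:, subset], history)

      # At prediction time, only the subset reports in real time.
      latest = history[-1, subset].reshape(1, -1)
      predicted_all = model.predict(latest)   # estimates for all 170 meters
      print(predicted_all.shape)              # (1, 170)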

  15. Big Data Analytics, Infectious Diseases and Associated Ethical Impacts

    OpenAIRE

    Garattini, C.; Raffle, J.; Aisyah, D. N.; Sartain, F.; Kozlakidis, Z.

    2017-01-01

    The exponential accumulation, processing and accrual of big data in healthcare are only possible through an equally rapidly evolving field of big data analytics. The latter offers the capacity to rationalize, understand and use big data to serve many different purposes, from improved services modelling to prediction of treatment outcomes, to greater patient and disease stratification. In the area of infectious diseases, the application of big data analytics has introduced a number of changes ...

  16. Evaluation of Data Management Systems for Geospatial Big Data

    OpenAIRE

    Amirian, Pouria; Basiri, Anahid; Winstanley, Adam C.

    2014-01-01

    Big Data encompasses the collection, management, processing and analysis of huge amounts of data that vary in type and change with high frequency. Often the data component of Big Data has a positional component as an important part of it, in various forms such as postal address, Internet Protocol (IP) address and geographical location. If the positional components in Big Data are extensively used in storage, retrieval, analysis, processing, visualization and knowledge discovery (geospatial Big Dat...

  17. Lead intoxication in dogs: risk assessment of feeding dogs trimmings of lead-shot game.

    Science.gov (United States)

    Høgåsen, Helga R; Ørnsrud, Robin; Knutsen, Helle K; Bernhoft, Aksel

    2016-07-25

    Expanding lead-based bullets, commonly used for hunting of big game, produce a scattering of lead particles in the carcass around the wound channel. Trimmings around this channel, which are sometimes fed to dogs, may contain lead particles. The aim of this study was to assess potential health effects of feeding dogs such trimmings. Lead ingestion most commonly causes gastrointestinal and neurological clinical signs, although renal, skeletal, haematological, cardiovascular and biochemical effects have also been reported. Experimental data indicate that a daily dose of around 1 mg lead as lead acetate/kg body weight for ten days may be considered as a Lowest Observed Effect Level in dogs. Acute toxicity documentation from the Centers for Disease Control and Prevention indicates 300 mg/kg body weight as the lowest dose of lead acetate causing death in dogs after oral ingestion. Our assessment suggests that dogs fed trimmings of lead-shot game may be affected by the amounts of lead present, and that even deadly exposure could occasionally occur. The intestinal absorption of lead from bullets was assumed to be 10-80 % of that of lead acetate, reflecting both the variability in particle size and uncertainty about the bioavailability of metallic lead in dogs. Despite data gaps, this study indicates that feeding dogs trimmings of lead-shot game may represent a risk of lead intoxication. More research is needed to assess the exact consequences, if lead-based bullets are still to be used. Meanwhile, we recommend that trimmings close to the wound channel should be made inaccessible to dogs, as well as to other domestic or wild animals.
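
    A back-of-envelope check makes the abstract's figures concrete. In the sketch below, only the 1 mg/kg daily LOEL and the 10-80 % relative bioavailability range come from the abstract; the dog's weight and the lead content of the trimmings are hypothetical illustration values.

      # Hypothetical exposure check against the LOEL quoted above (Python).
      BODY_WEIGHT_KG = 20.0      # assumed dog weight
      TRIMMING_LEAD_MG = 50.0    # assumed lead in one meal of trimmings
      LOEL_MG_PER_KG = 1.0       # daily LOEL, as lead acetate (from the abstract)

      for rel in (0.10, 0.80):   # bullet lead relative to lead acetate (abstract)
          dose = TRIMMING_LEAD_MG * rel / BODY_WEIGHT_KG
          print(f"bioavailability {rel:.0%}: {dose:.2f} mg/kg "
                f"= {dose / LOEL_MG_PER_KG:.1f} x LOEL")

    Under these assumed values the acetate-equivalent dose spans 0.25 to 2.0 mg/kg, from well below the LOEL to twice it, which is why the bioavailability uncertainty matters so much for the risk assessment.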

  18. The roles of the Q (q) wave in lead I and QRS frontal axis for diagnosing loss of left ventricular capture during cardiac resynchronization therapy.

    Science.gov (United States)

    Cao, Yuan-Yuan; Su, Yan-Gang; Bai, Jin; Wang, Wei; Wang, Jing-Feng; Qin, Sheng-Mei; Ge, Jun-Bo

    2015-01-01

    Loss of left ventricular (LV) capture may lead to deterioration of heart failure in patients with cardiac resynchronization therapy (CRT), so timely recognition of loss of LV capture is important in clinical practice. A total of 422 electrocardiograms were acquired and analyzed from 53 CRT patients at 8 different pacing settings (LV only, right ventricle [RV] only, and biventricular [BV] pacing with LV preactivation of 60, 40, 20, and 0 milliseconds and RV preactivation of 20 and 40 milliseconds). A modified Ammann algorithm, formed by adding a third step (presence of a Q, q, or QS wave in lead I) to the original 2-step Ammann algorithm, and a QRS axis shift method were devised to identify loss of LV capture. The accuracy of the modified Ammann algorithm was significantly higher than that of the original Ammann algorithm (78.9% vs. 69.1%) in detecting loss of LV capture. LV preactivation, or simultaneous BV activation with the LV lead positioned in a nonposterior or noninferior wall, can increase the diagnostic power of the modified Ammann algorithm and the QRS axis shift method. © 2014 Wiley Periodicals, Inc.
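
    The record specifies the modification only as "add a third step to the 2-step Ammann algorithm", so code can illustrate the structure but not the real criteria. In the sketch below the first two predicates and the way the steps combine are hypothetical stand-ins; only the added third step, a Q (q, or QS) wave in lead I, comes from the record.

      # Structural sketch of the three-step rule; criteria are placeholders.
      from dataclasses import dataclass

      @dataclass
      class EcgFeatures:
          ammann_step1_positive: bool   # placeholder for original Ammann step 1
          ammann_step2_positive: bool   # placeholder for original Ammann step 2
          q_wave_in_lead_i: bool        # added step: Q (q, or QS) wave in lead I

      def suspected_loss_of_lv_capture(ecg: EcgFeatures) -> bool:
          # Assumed combination logic: flag if any step is positive.
          return (ecg.ammann_step1_positive
                  or ecg.ammann_step2_positive
                  or ecg.q_wave_in_lead_i)

      print(suspected_loss_of_lv_capture(EcgFeatures(False, False, True)))  # True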

  19. A New Look at Big History

    Science.gov (United States)

    Hawkey, Kate

    2014-01-01

    The article sets out a "big history" which resonates with the priorities of our own time. A globalizing world calls for new spatial scales to underpin what the history curriculum addresses, "big history" calls for new temporal scales, while concern over climate change calls for a new look at subject boundaries. The article…

  20. West Virginia's big trees: setting the record straight

    Science.gov (United States)

    Melissa Thomas-Van Gundy; Robert. Whetsell

    2016-01-01

    People love big trees, people love to find big trees, and people love to find big trees in the place they call home. Having been suspicious for years, my coauthor and historian Rob Whetsell approached me with a species identification challenge. There are several photographs of giant trees used by many people to illustrate the past forests of West Virginia,...

  1. [Social customer relationship management and big data]

    OpenAIRE

    Toivonen, Topi-Antti

    2015-01-01

    This thesis examines social customer relationship management (social CRM) and the benefits that big data can bring to it. As a term, social customer relationship management is new and unknown to many. The research is motivated by the scarcity of prior studies on the topic, the complete absence of Finnish-language research, and the potentially essential role of social CRM in business operations in the future. Studies on big data often concentrate on its technical side, rather than its applicati...

  2. Lead intoxication under environmental hypoxia impairs oral health.

    Science.gov (United States)

    Terrizzi, Antonela R; Fernandez-Solari, Javier; Lee, Ching M; Martínez, María Pilar; Conti, María Ines

    2014-01-01

    We have reported that chronic lead intoxication in a hypoxic environment induces alveolar bone loss that can lead to periodontal damage with subsequent loss of teeth. The aim of the present study was to assess the modification of oral inflammatory parameters involved in the pathogenesis of periodontitis in the same experimental model. In gingival tissue, hypoxia increased inducible nitric oxide synthase (iNOS) activity and lead decreased prostaglandin E2 (PGE2) content; PGE2 content was also increased by both lead and hypoxia, and alveolar bone loss was induced by lead under hypoxic conditions. Results suggest a wide participation of inflammatory markers that mediate alveolar bone loss induced by these environmental conditions. The lack of information regarding oral health in lead-contaminated populations that coexist with hypoxia induced us to evaluate the alteration of inflammatory parameters in rat oral tissues to elucidate the link between periodontal damage and these environmental conditions.

  3. Detection of Equipment Faults Before Beam Loss

    CERN Document Server

    Galambos, J.

    2016-01-01

    High-power hadron accelerators have strict limits on fractional beam loss. In principle, once a high-quality beam is set up in an acceptable state, beam loss should remain steady. However, in practice, there are many trips in operational machines, owing to excessive beam loss. This paper deals with monitoring equipment health to identify precursor signals that indicate an issue with equipment that will lead to unacceptable beam loss. To this end, a variety of equipment and beam signal measurements are described. In particular, several operational examples from the Spallation Neutron Source (SNS) of deteriorating equipment functionality leading to beam loss are reported.

  4. My loss is your loss ... Sometimes: loss aversion and the effect of motivational biases.

    Science.gov (United States)

    Wilson, Robyn S; Arvai, Joseph L; Arkes, Hal R

    2008-08-01

    Findings from previous studies of individual decision-making behavior predict that losses will loom larger than gains. It is less clear, however, if this loss aversion applies to the way in which individuals attribute value to the gains and losses of others, or if it is robust across a broad spectrum of policy and management decision contexts. Consistent with previous work, the results from a series of experiments reported here revealed that subjects exhibited loss aversion when evaluating their own financial gains and losses. The presence of loss aversion was also confirmed for the way in which individuals attribute value to the financial gains and losses of others. However, similar evaluations within social and environmental contexts did not exhibit loss aversion. In addition, research subjects expected that individuals who were unknown to them would significantly undervalue the subjects' own losses across all contexts. The implications of these findings for risk-based policy and management are many. Specifically, they warrant caution when relying upon loss aversion to explain or predict the reaction of affected individuals to risk-based decisions that involve moral or protected values. The findings also suggest that motivational biases may lead decisionmakers to assume that their attitudes and beliefs are common among those affected by a decision, while those affected may expect unfamiliar others to be unable to identify and act in accordance with shared values.

  5. D-branes in a big bang/big crunch universe: Misner space

    International Nuclear Information System (INIS)

    Hikida, Yasuaki; Nayak, Rashmi R.; Panigrahi, Kamal L.

    2005-01-01

    We study D-branes in a two-dimensional Lorentzian orbifold R^{1,1}/Γ with a discrete boost Γ. This space is known as Misner or Milne space, and includes a big crunch/big bang singularity. In this space, there are D0-branes in spiral orbits and D1-branes with or without flux on them. In particular, we observe imaginary parts of partition functions, and interpret them as the rates of open string pair creation for D0-branes and emission of winding closed strings for D1-branes. These phenomena occur due to the time-dependence of the background. The open string 2→2 scattering amplitude on a D1-brane is also computed and found to be less singular than the closed string case.

  7. BIG1 is required for the survival of deep layer neurons, neuronal polarity, and the formation of axonal tracts between the thalamus and neocortex in developing brain.

    Directory of Open Access Journals (Sweden)

    Jia-Jie Teoh

    Full Text Available BIG1, an activator protein of the small GTPase Arf, encoded by the Arfgef1 gene, is one of the candidate genes for epileptic encephalopathy. To investigate the involvement of BIG1 in epileptic encephalopathy, we analyzed BIG1-deficient mice and found that BIG1 regulates neurite outgrowth and brain development in vitro and in vivo. The loss of BIG1 decreased the size of the neocortex and hippocampus. In BIG1-deficient mice, the neuronal progenitor cells (NPCs) and the interneurons were unaffected. However, Tbr1+ and Ctip2+ deep layer (DL) neurons showed spatial-temporal dependent apoptosis. This apoptosis gradually progressed from the piriform cortex (PIR), peaked in the neocortex, and then progressed into the hippocampus from embryonic day 13.5 (E13.5) to E17.5. The upper layer (UL) and DL order in the neocortex was maintained in BIG1-deficient mice, but the excitatory neurons tended to accumulate before their destination layers. A further pulse-chase migration assay showed that the migration defect was non-cell autonomous and secondary to the progression of apoptosis into the BIG1-deficient neocortex after E15.5. In BIG1-deficient mice, we observed an ectopic projection of corticothalamic axons from the primary somatosensory cortex (S1) into the dorsal lateral geniculate nucleus (dLGN). The thalamocortical axons were unable to cross the diencephalon-telencephalon boundary (DTB). In vitro, BIG1-deficient neurons showed a delay in neuronal polarization. BIG1-deficient neurons were also hypersensitive to low dose glutamate (5 μM), and died via apoptosis. This study showed the role of BIG1 in the survival of DL neurons in the developing embryonic brain and in the generation of neuronal polarity.

  8. Astroinformatics: the big data of the universe

    OpenAIRE

    Barmby, Pauline

    2016-01-01

    In astrophysics we like to think that our field was the originator of big data, back when it had to be carried around in big sky charts and books full of tables. These days, it's easier to move astrophysics data around, but we still have a lot of it, and upcoming telescope  facilities will generate even more. I discuss how astrophysicists approach big data in general, and give examples from some Western Physics & Astronomy research projects.  I also give an overview of ho...

  9. Recent big flare

    International Nuclear Information System (INIS)

    Moriyama, Fumio; Miyazawa, Masahide; Yamaguchi, Yoshisuke

    1978-01-01

    The features of three big solar flares observed at Tokyo Observatory are described in this paper. The active region, McMath 14943, caused a big flare on September 16, 1977. The flare appeared on both sides of a long dark line which runs along the boundary of the magnetic field. A two-ribbon structure was seen. The electron density of the flare observed at Norikura Corona Observatory was 3 × 10¹²/cc. Several arc lines which connect the two bright regions of different magnetic polarity were seen in the H-α monochrome image. The active region, McMath 15056, caused a big flare on December 10, 1977. At the beginning, several bright spots were observed in the region between two main solar spots. Then, the area and the brightness increased, and the bright spots became two ribbon-shaped bands. A solar flare was observed on April 8, 1978. At first, several bright spots were seen around the solar spot in the active region, McMath 15221. Then, these bright spots developed into a large bright region. On both sides of a dark line along the magnetic neutral line, bright regions were generated. These developed into a two-ribbon flare. The time required for growth was more than one hour. A bright arc which connects the two ribbons was seen, and this arc may be a loop prominence system. (Kato, T.)

  10. Big Bang Day : The Great Big Particle Adventure - 3. Origins

    CERN Multimedia

    2008-01-01

    In this series, comedian and physicist Ben Miller asks the CERN scientists what they hope to find. If the LHC is successful, it will explain the nature of the Universe around us in terms of a few simple ingredients and a few simple rules. But the Universe now was forged in a Big Bang where conditions were very different, and the rules were very different, and those early moments were crucial to determining how things turned out later. At the LHC they can recreate conditions as they were billionths of a second after the Big Bang, before atoms and nuclei existed. They can find out why matter and antimatter didn't mutually annihilate each other to leave behind a Universe of pure, brilliant light. And they can look into the very structure of space and time - the fabric of the Universe

  11. Evidence for Evolution as Support for Big Bang

    Science.gov (United States)

    Gopal-Krishna

    1997-12-01

    With the exception of ZERO, the concept of BIG BANG is by far the most bizarre creation of the human mind. Three classical pillars of the Big Bang model of the origin of the universe are generally thought to be: (i) the abundances of the light elements; (ii) the microwave background radiation; and (iii) the change with cosmic epoch in the average properties of galaxies (both active and non-active types). Evidence is also mounting for redshift dependence of the intergalactic medium, as discussed elsewhere in this volume in detail. In this contribution, I endeavour to highlight a selection of recent advances pertaining to the third category. The widely different levels of confidence in the claimed observational constraints in the field of cosmology can be gauged from the following excerpts from two leading astrophysicists: "I would bet odds of 10 to 1 on the validity of the general 'hot Big Bang' concept as a description of how our universe has evolved since it was around 1 sec. old" -M. Rees (1995), in 'Perspectives in Astrophysical Cosmology' CUP. "With the much more sensitive observations available today, no astrophysical property shows evidence of evolution, such as was claimed in the 1950s to disprove the Steady State theory" -F. Hoyle (1987), in 'Fifty years in cosmology', B. M. Birla Memorial Lecture, Hyderabad, India. The burgeoning multi-wavelength culture in astronomy has provided a tremendous boost to observational cosmology in recent years. We now proceed to illustrate this with a sequence of examples which reinforce the picture of an evolving universe. Also provided are some relevant details of the data used in these studies so that their scope can be independently judged by the readers.

  12. Inflated granularity: Spatial “Big Data” and geodemographics

    Directory of Open Access Journals (Sweden)

    Craig M Dalton

    2015-08-01

    Full Text Available Data analytics, particularly the current rhetoric around "Big Data", tend to be presented as new and innovative, emerging ahistorically to revolutionize modern life. In this article, we situate one branch of Big Data analytics, spatial Big Data, through a historical predecessor, geodemographic analysis, to help develop a critical approach to current data analytics. Spatial Big Data promises an epistemic break in marketing, a leap from targeting geodemographic areas to targeting individuals. Yet it inherits characteristics and problems from geodemographics, including a justification through the market, and a process of commodification through the black-boxing of technology. As researchers develop sustained critiques of data analytics and its effects on everyday life, we must do so with a grounding in the cultural and historical contexts from which data technologies emerged. This article and others (Barnes and Wilson, 2014) develop a historically situated, critical approach to spatial Big Data. This history illustrates connections to the critical issues of surveillance, redlining, and the production of consumer subjects and geographies. The shared histories and structural logics of spatial Big Data and geodemographics create the space for a continued critique of data analyses' role in society.

  13. Big data analysis for smart farming

    NARCIS (Netherlands)

    Kempenaar, C.; Lokhorst, C.; Bleumer, E.J.B.; Veerkamp, R.F.; Been, Th.; Evert, van F.K.; Boogaardt, M.J.; Ge, L.; Wolfert, J.; Verdouw, C.N.; Bekkum, van Michael; Feldbrugge, L.; Verhoosel, Jack P.C.; Waaij, B.D.; Persie, van M.; Noorbergen, H.

    2016-01-01

    In this report we describe results of a one-year TO2 institutes project on the development of big data technologies within the milk production chain. The goal of this project is to 'create' an integration platform for big data analysis for smart farming and to develop a showcase. This includes both

  14. Big History or the 13800 million years from the Big Bang to the Human Brain

    Science.gov (United States)

    Gústafsson, Ludvik E.

    2017-04-01

    Big History is the integrated history of the Cosmos, Earth, Life, and Humanity. It is an attempt to understand our existence as a continuous unfolding of processes leading to ever more complex structures. Three major steps in the development of the Universe can be distinguished, the first being the creation of matter/energy and forces in the context of an expanding universe, while the second and third steps were reached when completely new qualities of matter came into existence. 1. Matter comes out of nothing: Quantum fluctuations and the inflation event are thought to be responsible for the creation of stable matter particles in what is called the Big Bang. Along with simple particles the universe is formed. Later, larger particles like atoms and the simplest chemical elements, hydrogen and helium, evolved. Gravitational contraction of hydrogen and helium formed the first stars and later on the first galaxies. Massive stars ended their lives in violent explosions, releasing heavier elements like carbon, oxygen, nitrogen, sulfur and iron into the universe. Subsequent star formation led to star systems with bodies containing these heavier elements. 2. Matter starts to live: About 9200 million years after the Big Bang a rather inconspicuous star of middle size formed in one of a billion galaxies. The leftovers of the star formation clumped into bodies rotating around the central star. In some of them elements like silicon, oxygen, iron and many others became the dominant matter. On the third of these bodies from the central star much of the surface was covered with an already very common chemical compound in the universe, water. Liquid water and plenty of various elements, especially carbon, were the ingredients of very complex chemical compounds that made up even more complex structures. These were able to replicate themselves. Life had appeared, on the only occasion that we human beings know of. Life evolved subsequently, leading eventually to the formation of multicellular

  15. A survey on Big Data Stream Mining

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Big Data can be static on one machine or distributed ... decision making, and process automation. Big data .... Concept Drifting: concept drifting mean the classifier .... transactions generated by a prefix tree structure. EstDec ...

  16. Emerging technology and architecture for big-data analytics

    CERN Document Server

    Chang, Chip; Yu, Hao

    2017-01-01

    This book describes the current state of the art in big-data analytics, from a technology and hardware architecture perspective. The presentation is designed to be accessible to a broad audience, with general knowledge of hardware design and some interest in big-data analytics. Coverage includes emerging technology and devices for data-analytics, circuit design for data-analytics, and architecture and algorithms to support data-analytics. Readers will benefit from the realistic context used by the authors, which demonstrates what works, what doesn’t work, and what are the fundamental problems, solutions, upcoming challenges and opportunities. Provides a single-source reference to hardware architectures for big-data analytics; Covers various levels of big-data analytics hardware design abstraction and flow, from device, to circuits and systems; Demonstrates how non-volatile memory (NVM) based hardware platforms can be a viable solution to existing challenges in hardware architecture for big-data analytics.

  17. Toward a manifesto for the 'public understanding of big data'.

    Science.gov (United States)

    Michael, Mike; Lupton, Deborah

    2016-01-01

    In this article, we sketch a 'manifesto' for the 'public understanding of big data'. On the one hand, this entails such public understanding of science and public engagement with science and technology-tinged questions as follows: How, when and where are people exposed to, or do they engage with, big data? Who are regarded as big data's trustworthy sources, or credible commentators and critics? What are the mechanisms by which big data systems are opened to public scrutiny? On the other hand, big data generate many challenges for public understanding of science and public engagement with science and technology: How do we address publics that are simultaneously the informant, the informed and the information of big data? What counts as understanding of, or engagement with, big data, when big data themselves are multiplying, fluid and recursive? As part of our manifesto, we propose a range of empirical, conceptual and methodological exhortations. We also provide Appendix 1 that outlines three novel methods for addressing some of the issues raised in the article. © The Author(s) 2015.

  18. What do Big Data do in Global Governance?

    DEFF Research Database (Denmark)

    Krause Hansen, Hans; Porter, Tony

    2017-01-01

    Two paradoxes associated with big data are relevant to global governance. First, while promising to increase the capacities of humans in governance, big data also involve an increasingly independent role for algorithms, technical artifacts, the Internet of things, and other objects, which can reduce the control of human actors. Second, big data involve new boundary transgressions as data are brought together from multiple sources while also creating new boundary conflicts as powerful actors seek to gain advantage by controlling big data and excluding competitors. These changes are not just about new data sources for global decision-makers, but instead signal more profound changes in the character of global governance.

  19. Redshift structure of the big bang in inhomogeneous cosmological models. I. Spherical dust solutions

    International Nuclear Information System (INIS)

    Hellaby, C.; Lake, K.

    1984-01-01

    The redshift from the big bang in the standard model is always infinite, but in inhomogeneous cosmological models infinite blueshifts are also possible. To avoid such divergent energy fluxes, we require that all realistic cosmological models must not display infinite blueshifts. We apply this requirement to the Tolman model (spherically symmetric dust), using the geometrical optics approximation, and assuming that the geodesic tangent vectors may be expanded in power series. We conclude that the bang time must be simultaneous. The stronger requirement, that only infinite redshifts from the big bang may occur, does not lead to a stronger condition on the metric. Further consequences of simultaneity are that no decaying mode fluctuations are possible, and that the only acceptable model which is homogeneous at late times is the Robertson-Walker model

  20. Redshift structure of the big bang in inhomogeneous cosmological models. I. Spherical dust solutions

    Energy Technology Data Exchange (ETDEWEB)

    Hellaby, C.; Lake, K.

    1984-07-01

    The redshift from the big bang in the standard model is always infinite, but in inhomogeneous cosmological models infinite blueshifts are also possible. To avoid such divergent energy fluxes, we require that all realistic cosmological models must not display infinite blueshifts. We apply this requirement to the Tolman model (spherically symmetric dust), using the geometrical optics approximation, and assuming that the geodesic tangent vectors may be expanded in power series. We conclude that the bang time must be simultaneous. The stronger requirement, that only infinite redshifts from the big bang may occur, does not lead to a stronger condition on the metric. Further consequences of simultaneity are that no decaying mode fluctuations are possible, and that the only acceptable model which is homogeneous at late times is the Robertson-Walker model.
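
    For orientation, the Tolman model named in the two records above is the spherically symmetric dust solution, today usually called Lemaître-Tolman-Bondi (LTB). A brief reminder of the textbook quantities involved (units G = c = 1; this is editorial context, not a reconstruction of the paper's derivation):

        ds^2 = -dt^2 + \frac{R'(t,r)^2}{1 + 2E(r)}\, dr^2 + R(t,r)^2\, d\Omega^2,
        \qquad
        \dot{R}^2 = \frac{2M(r)}{R} + 2E(r)

    Integrating the evolution equation introduces an arbitrary function t_B(r), the "bang time": the big bang occurs at t = t_B(r), generically at different times at different radii. The simultaneity condition derived in the abstract is t_B(r) = const, and a non-constant bang time is precisely what sources the decaying-mode fluctuations the authors rule out.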

  1. 76 FR 7810 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2011-02-11

    ..., Wyoming 82801. Comments may also be sent via e-mail to [email protected] , with the words Big... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  2. Hot big bang or slow freeze?

    Energy Technology Data Exchange (ETDEWEB)

    Wetterich, C.

    2014-09-07

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.

  3. Hot big bang or slow freeze?

    International Nuclear Information System (INIS)

    Wetterich, C.

    2014-01-01

    We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe

  4. Hot big bang or slow freeze?

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2014-09-01

    Full Text Available We confront the big bang for the beginning of the universe with an equivalent picture of a slow freeze — a very cold and slowly evolving universe. In the freeze picture the masses of elementary particles increase and the gravitational constant decreases with cosmic time, while the Newtonian attraction remains unchanged. The freeze and big bang pictures both describe the same observations or physical reality. We present a simple “crossover model” without a big bang singularity. In the infinite past space–time is flat. Our model is compatible with present observations, describing the generation of primordial density fluctuations during inflation as well as the present transition to a dark energy-dominated universe.
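
    The claim repeated in the three records above, that particle masses grow and the gravitational constant shrinks while Newtonian attraction stays fixed, can be checked with one schematic scaling. Assuming, purely as an illustration and not as a statement of the model's full field content, that a single crossover field \chi(t) sets all scales:

        m_i(t) \propto \chi(t), \qquad G(t) \propto \chi(t)^{-2}
        \quad\Longrightarrow\quad
        G(t)\, m_1(t)\, m_2(t) \propto \chi^{-2}\,\chi\,\chi = \mathrm{const}

    so the Newtonian force F = G m_1 m_2 / r^2 between two bodies at fixed separation is unchanged even though G and the masses drift individually. Only such dimensionless combinations are observable, which is why the freeze and big bang pictures can describe the same physical reality.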

  5. Big Data: Survey, Technologies, Opportunities, and Challenges

    Directory of Open Access Journals (Sweden)

    Nawsher Khan

    2014-01-01

    Full Text Available Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the capacity to process it. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.

  6. Pre-big bang cosmology and quantum fluctuations

    International Nuclear Information System (INIS)

    Ghosh, A.; Pollifrone, G.; Veneziano, G.

    2000-01-01

    The quantum fluctuations of a homogeneous, isotropic, open pre-big bang model are discussed. By solving exactly the equations for tensor and scalar perturbations we find that particle production is negligible during the perturbative Pre-Big Bang phase

  7. Nursing Management Minimum Data Set: Cost-Effective Tool To Demonstrate the Value of Nurse Staffing in the Big Data Science Era.

    Science.gov (United States)

    Pruinelli, Lisiane; Delaney, Connie W; Garciannie, Amy; Caspers, Barbara; Westra, Bonnie L

    2016-01-01

    There is a growing body of evidence of the relationship of nurse staffing to patient, nurse, and financial outcomes. With the advent of big data science and developing big data analytics in nursing, data science with the reuse of big data is emerging as a timely and cost-effective approach to demonstrate nursing value. The Nursing Management Minimum Data Set (NMMDS) provides standard administrative data elements, definitions, and codes to measure the context where care is delivered and, consequently, the value of nursing. The integration of the NMMDS elements in the current health system provides evidence for nursing leaders to measure and manage decisions, leading to better patient, staffing, and financial outcomes. It also enables the reuse of data for clinical scholarship and research.
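
    To make "standard administrative data elements, definitions, and codes" concrete, here is a minimal sketch of what a coded element might look like in software. The element names and codes below are invented placeholders for illustration, not actual NMMDS items:

        from dataclasses import dataclass

        @dataclass
        class AdminDataElement:
            """One standardized element: a name, a definition, and a coded value set."""
            name: str
            definition: str
            codes: dict  # code -> meaning

        # Hypothetical, illustrative elements (NOT the real NMMDS item list)
        ELEMENTS = [
            AdminDataElement("unit_type",
                             "Type of nursing unit where care is delivered",
                             {"01": "ICU", "02": "Med-Surg", "03": "Emergency"}),
            AdminDataElement("staffing_model",
                             "Predominant staffing model on the unit",
                             {"A": "Primary nursing", "B": "Team nursing"}),
        ]

        def validate(record: dict) -> bool:
            """Check that every coded field in a record uses an allowed code."""
            by_name = {e.name: e for e in ELEMENTS}
            return all(record[k] in by_name[k].codes for k in record if k in by_name)

        print(validate({"unit_type": "01", "staffing_model": "B"}))  # True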

  8. Analysis of Big Data Maturity Stage in Hospitality Industry

    OpenAIRE

    Shabani, Neda; Munir, Arslan; Bose, Avishek

    2017-01-01

    Big data analytics has an extremely significant impact on many areas in all businesses and industries, including hospitality. This study aims to guide information technology (IT) professionals in hospitality on their big data expedition. In particular, the purpose of this study is to identify the maturity stage of big data in the hospitality industry in an objective way, so that hotels are able to understand their progress and realize what it will take to get to the next stage of big data maturity...

  9. A Multidisciplinary Perspective of Big Data in Management Research

    OpenAIRE

    Sheng, Jie; Amankwah-Amoah, J.; Wang, X.

    2017-01-01

    In recent years, big data has emerged as one of the prominent buzzwords in business and management. In spite of the mounting body of research on big data across the social science disciplines, scholars have offered little synthesis on the current state of knowledge. To take stock of academic research that contributes to the big data revolution, this paper tracks scholarly perspectives on big data in the management domain over the past decade. We identify key themes emerging in management...

  10. An embedding for the big bang

    Science.gov (United States)

    Wesson, Paul S.

    1994-01-01

    A cosmological model is given that has good physical properties for the early and late universe but is a hypersurface in a flat five-dimensional manifold. The big bang can therefore be regarded as an effect of a choice of coordinates in a truncated higher-dimensional geometry. Thus the big bang is in some sense a geometrical illusion.

  11. Thermoeconomic analysis of Biomass Integrated Gasification Gas Turbine Combined Cycle (BIG GT CC) cogeneration plant

    Energy Technology Data Exchange (ETDEWEB)

    Arrieta, Felipe Raul Ponce; Lora, Electo Silva [Escola Federal de Engenharia de Itajuba, MG (Brazil). Nucleo de Estudos de Sistemas Termicos]. E-mails: aponce@iem.efei.br; electo@iem.efei.br; Perez, Silvia Azucena Nebra de [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Mecanica. Dept. de Energia]. E-mail: sanebra@fem.unicamp.br

    2000-07-01

    Using thermoeconomics as a tool to identify the location and magnitude of the real thermodynamic losses (energy waste, or exergy destruction and exergy losses), it is possible to assess the production cost of each product (electric power and heat) and the exergetic and exergoeconomic cost of each flow in a cogeneration plant, to assist in decision-making procedures concerning plant design, investment, operation and the allocation of research funds. Thermoeconomic analysis of a Biomass Integrated Gasification Gas Turbine Combined Cycle (BIG GT CC) cogeneration plant for application in sugar cane mills gives the following results: the global exergetic efficiency is low; the highest irreversibilities occur, in order, in the scrubber (38%), gas turbine (16%), dryer (12%), and gasifier and HRSG (6%); due to the adopted cost distribution methodology, the unit exergetic cost of heat (4.11) is lower than that of electricity (4.71); the low market price of biomass is one of the most sensitive parameters for the possible implementation of BIG-GT technology in the sugar cane industry; and the production costs are 31 US$/MWh for electricity and 32 US$/MWh for heat. The electricity cost is, after all, competitive with the current market price. The electricity and heat costs are lower than, or almost equal to, values reported for existing Rankine cycle cogeneration plants. (author)
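
    The figures quoted in this record can be tied together with a little arithmetic. The script below is an editorial sketch, not the authors' methodology: it simply redistributes the reported irreversibility shares over an assumed total exergy destruction, and reads the unit exergetic cost as fuel exergy consumed per unit of product exergy (a common interpretation, assumed here):

        # Irreversibility shares reported in the record above
        irreversibility_share = {
            "scrubber": 0.38,
            "gas turbine": 0.16,
            "dryer": 0.12,
            "gasifier + HRSG": 0.06,  # reported jointly in the abstract
        }

        # Unit exergetic costs and monetary production costs from the record
        unit_exergetic_cost = {"electricity": 4.71, "heat": 4.11}
        production_cost_usd_per_mwh = {"electricity": 31.0, "heat": 32.0}

        # Assume 100 MW of total exergy destruction purely for illustration
        # (this figure is NOT from the paper)
        total_destruction_mw = 100.0
        for component, share in irreversibility_share.items():
            print(f"{component:16s} destroys ~{share * total_destruction_mw:5.1f} MW of exergy")

        # On the assumed reading, 4.71 units of fuel exergy per unit of electricity
        # implies an exergetic efficiency of roughly 1/4.71
        print(f"implied exergetic efficiency (electricity): "
              f"{1 / unit_exergetic_cost['electricity']:.1%}")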

  12. Big Data as Governmentality in International Development

    DEFF Research Database (Denmark)

    Flyverbom, Mikkel; Madsen, Anders Koed; Rasche, Andreas

    2017-01-01

    Statistics have long shaped the field of visibility for the governance of development projects. The introduction of big data has altered the field of visibility. Employing Dean's “analytics of government” framework, we analyze two cases—malaria tracking in Kenya and monitoring of food prices...... in Indonesia. Our analysis shows that big data introduces a bias toward particular types of visualizations. What problems are being made visible through big data depends to some degree on how the underlying data is visualized and who is captured in the visualizations. It is also influenced by technical factors...

  13. 75 FR 71069 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2010-11-22

    ....us, with the words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  14. 76 FR 26240 - Big Horn County Resource Advisory Committee

    Science.gov (United States)

    2011-05-06

    ... words Big Horn County RAC in the subject line. Facsimiles may be sent to 307-674-2668. All comments... DEPARTMENT OF AGRICULTURE Forest Service Big Horn County Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Big Horn County Resource Advisory Committee...

  15. Big Science

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    Astronomy, like particle physics, has become Big Science where the demands of front line research can outstrip the science budgets of whole nations. Thus came into being the European Southern Observatory (ESO), founded in 1962 to provide European scientists with a major modern observatory to study the southern sky under optimal conditions.

  16. Big Data in Market Research: Why More Data Does Not Automatically Mean Better Information

    Directory of Open Access Journals (Sweden)

    Bosch Volker

    2016-11-01

    Full Text Available Big data will change market research at its core in the long term because consumption of products and media can be logged electronically more and more, making it measurable on a large scale. Unfortunately, big data datasets are rarely representative, even if they are huge. Smart algorithms are needed to achieve high precision and prediction quality for digital and non-representative approaches. Also, big data can only be processed with complex and therefore error-prone software, which leads to measurement errors that need to be corrected. Another challenge is posed by missing but critical variables. The amount of data can indeed be overwhelming, but it often lacks important information. The missing observations can only be filled in by using statistical data imputation. This requires an additional data source with the additional variables, for example a panel. Linear imputation is a statistical procedure that is anything but trivial. It is an instrument to “transport information,” and the higher the observed data correlates with the data to be imputed, the better it works. It makes structures visible even if the depth of the data is limited.
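
    The "transport of information" behind linear imputation is easy to show concretely. In the sketch below (synthetic data and numpy assumed; an editorial illustration rather than any production procedure), a donor panel where both variables are observed supplies a linear model that fills in the critical variable missing from the big data set; the higher the correlation, the better the fill-in:

        import numpy as np

        rng = np.random.default_rng(0)

        # Donor panel: both x (logged everywhere) and y (the critical variable) observed
        n_panel = 500
        x_panel = rng.normal(size=n_panel)
        y_panel = 2.0 * x_panel + rng.normal(scale=0.5, size=n_panel)  # correlated with x

        # Big data set: x is logged, y is missing
        x_big = rng.normal(size=10_000)

        # Linear imputation: fit y ~ x on the panel, predict y for the big data set
        slope, intercept = np.polyfit(x_panel, y_panel, deg=1)
        y_imputed = slope * x_big + intercept

        r = np.corrcoef(x_panel, y_panel)[0, 1]
        print(f"panel correlation r = {r:.2f}; "
              f"the linear fill-in explains ~{r**2:.0%} of y's variance")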

  17. Can we avoid the Sixth Mass Extinction? Setting today's extinction crisis in the context of the Big Five

    Science.gov (United States)

    Barnosky, A. D.

    2012-12-01

    While the ultimate extinction driver now—Homo sapiens—is unique with respect to the drivers of past extinctions, comparison of parallel neontological and paleontological information helps calibrate how far the so-called Sixth Mass Extinction has progressed and whether it is inevitable. Such comparisons document that rates of extinction today are approaching or exceeding those that characterized the Big Five Mass Extinctions. Continuation of present extinction rates for vertebrates, for example, would result in 75% species loss—the minimum benchmark exhibited in the Big Five extinctions—within 3 to 22 centuries, assuming constant rates of loss and no threshold effects. Preceding and during each of the Big Five, the global ecosystem experienced major changes in climate, atmospheric chemistry, and ocean chemistry—not unlike what is being observed presently. Nevertheless, only 1-2% of well-assessed modern species have been lost over the past five centuries, still far below what characterized past mass extinctions in the strict paleontological sense. For mammals, adding in the end-Pleistocene species that died out would increase the species-loss percentage by some 5%. If threatened vertebrate species were to actually go extinct, losses would rise to between 14 and 40%, depending on the group. Such observations highlight that, although many species have already had their populations drastically reduced to near-critical levels, the Sixth Mass Extinction has not yet progressed to the point where it is unavoidable. Put another way, the vast majority of species that have occupied the world in concert with Homo sapiens are still alive and possible to save. That task, however, will require slowing the abnormally high extinction rates that are now in progress, which in turn requires unified efforts to cap human population growth, decrease the average human footprint, reduce fossil fuel use while simultaneously increasing clean energy technologies, integrate ...
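
    The "75% species loss within 3 to 22 centuries" projection is a constant-rate extrapolation whose arithmetic is easy to reproduce: if a constant fraction r of the remaining species is lost per century, survivors after t centuries number (1 - r)^t of the original, so 75% loss takes t = ln(0.25) / ln(1 - r). The rates below are illustrative values chosen to bracket the quoted range, not the author's exact inputs:

        import math

        def centuries_to_75_percent_loss(rate_per_century: float) -> float:
            """Centuries until 75% of species are gone at a constant per-century loss rate."""
            return math.log(0.25) / math.log(1.0 - rate_per_century)

        # Illustrative rates only, chosen to bracket the 3-22 century range above
        for r in (0.37, 0.06):
            print(f"loss rate {r:.0%}/century -> 75% loss in "
                  f"~{centuries_to_75_percent_loss(r):.0f} centuries")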

  18. Commentary: Epidemiology in the era of big data.

    Science.gov (United States)

    Mooney, Stephen J; Westreich, Daniel J; El-Sayed, Abdulrahman M

    2015-05-01

    Big Data has increasingly been promoted as a revolutionary development in the future of science, including epidemiology. However, the definition and implications of Big Data for epidemiology remain unclear. We here provide a working definition of Big Data predicated on the so-called "three V's": variety, volume, and velocity. From this definition, we argue that Big Data has evolutionary and revolutionary implications for identifying and intervening on the determinants of population health. We suggest that as more sources of diverse data become publicly available, the ability to combine and refine these data to yield valid answers to epidemiologic questions will be invaluable. We conclude that while epidemiology as practiced today will continue to be practiced in the Big Data future, a component of our field's future value lies in integrating subject matter knowledge with increased technical savvy. Our training programs and our visions for future public health interventions should reflect this future.

  19. Digital humanitarians how big data is changing the face of humanitarian response

    CERN Document Server

    Meier, Patrick

    2015-01-01

    Contents (as extracted): The Rise of Digital Humanitarians; Mapping Haiti Live; Supporting Search and Rescue Efforts; Preparing for the Long Haul; Launching an SMS Life Line; Sending in the Choppers; OpenStreetMap to the Rescue; Post-Disaster Phase; The Human Story; Doing Battle with Big Data; Rise of Digital Humanitarians; This Book and You; The Rise of Big (Crisis) Data; Big (Size) Data; Finding Needles in Big (Size) Data; Policy, Not Simply Technology; Big (False) Data; Unpacking Big (False) Data; Calling 991 and 999; Big (...

  20. Big Data Provenance: Challenges, State of the Art and Opportunities.

    Science.gov (United States)

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.
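
    As a flavor of "workflows as a programming model" for provenance, the sketch below wraps each workflow step so that its inputs, output and timing land in an append-only log. This is a generic editorial illustration, not the workflow-system API the authors use:

        import hashlib
        import json
        import time

        PROVENANCE_LOG = []  # append-only record of every step execution

        def tracked_step(name):
            """Decorator that logs inputs, output and timing of one workflow step."""
            def wrap(fn):
                def inner(*args):
                    start = time.time()
                    out = fn(*args)
                    PROVENANCE_LOG.append({
                        "step": name,
                        "inputs": [hashlib.sha256(repr(a).encode()).hexdigest()[:12]
                                   for a in args],
                        "output": hashlib.sha256(repr(out).encode()).hexdigest()[:12],
                        "seconds": round(time.time() - start, 4),
                    })
                    return out
                return inner
            return wrap

        @tracked_step("clean")
        def clean(records):
            return [r.strip().lower() for r in records]

        @tracked_step("count")
        def count(records):
            return len(records)

        count(clean(["  A", "b ", "C"]))
        print(json.dumps(PROVENANCE_LOG, indent=2))  # queryable lineage per step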

  1. Predicting big bang deuterium

    Energy Technology Data Exchange (ETDEWEB)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P. [Department of Physics, Ohio State University, Columbus, Ohio 43210 (United States)

    1996-02-01

    We present new upper and lower bounds to the primordial abundances of deuterium and ³He based on observational data from the solar system and the interstellar medium. Independent of any model for the primordial production of the elements we find (at the 95% C.L.): 1.5 × 10⁻⁵ ≤ (D/H)_P ≤ 10.0 × 10⁻⁵ and (³He/H)_P ≤ 2.6 × 10⁻⁵. When combined with the predictions of standard big bang nucleosynthesis, these constraints lead to a 95% C.L. bound on the primordial deuterium abundance: (D/H)_best = (3.5 +2.7/−1.8) × 10⁻⁵. Measurements of deuterium absorption in the spectra of high-redshift QSOs will directly test this prediction. The implications of this prediction for the primordial abundances of ⁴He and ⁷Li are discussed, as well as those for the universal density of baryons. © 1996 The American Astronomical Society.
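
    For readability, the same bounds restated in display form:

        \begin{aligned}
        1.5 \times 10^{-5} \;\le\; (\mathrm{D/H})_P \;\le\; 10.0 \times 10^{-5},
        \qquad
        ({}^{3}\mathrm{He}/\mathrm{H})_P \;\le\; 2.6 \times 10^{-5}
        \quad (95\%\ \mathrm{C.L.}),
        \\
        (\mathrm{D/H})_{\mathrm{best}} \;=\; \bigl(3.5^{+2.7}_{-1.8}\bigr) \times 10^{-5}.
        \end{aligned}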

  2. [Embracing medical innovation in the era of big data].

    Science.gov (United States)

    You, Suning

    2015-01-01

    Along with the worldwide advent of the big data era, the medical field inevitably has to position itself within it. This article introduces the basics of big data and points out that its advantages and disadvantages coexist. Although innovation in the medical field is a struggle, the current pattern of medical practice will be changed fundamentally by big data. The article also shows how quickly the relevant analytics are changing in the big data era, sketches the promise of digital medicine, and offers practical advice to surgeons.

  3. Big Data and Health Economics: Opportunities, Challenges and Risks

    Directory of Open Access Journals (Sweden)

    Diego Bodas-Sagi

    2018-03-01

    Full Text Available Big Data offers opportunities in many fields. Healthcare is no exception. In this paper we summarize the possibilities of Big Data and Big Data technologies to offer useful information to policy makers. In a world of tight public budgets and ageing populations, we feel it is necessary to save costs in any production process. In the future, the use of outcomes from Big Data could be a way to improve decisions at a lower cost than today. In addition to listing the advantages of properly using data and technologies from Big Data, we also show some challenges and risks that analysts could face. We also present a hypothetical example of the use of administrative records containing health information on both diagnoses and patients.

  4. Speaking sociologically with big data: symphonic social science and the future for big data research

    OpenAIRE

    Halford, Susan; Savage, Mike

    2017-01-01

    Recent years have seen persistent tension between proponents of big data analytics, using new forms of digital data to make computational and statistical claims about ‘the social’, and many sociologists sceptical about the value of big data, its associated methods and claims to knowledge. We seek to move beyond this, taking inspiration from a mode of argumentation pursued by Putnam (2000), Wilkinson and Pickett (2009) and Piketty (2014) that we label ‘symphonic social science’. This bears both ...

  5. Application and Exploration of Big Data Mining in Clinical Medicine.

    Science.gov (United States)

    Zhang, Yue; Guo, Shu-Li; Han, Li-Na; Li, Tie-Ling

    2016-03-20

    To review theories and technologies of big data mining and their application in clinical medicine. Literature published in English or Chinese regarding theories and technologies of big data mining and the concrete applications of data mining technology in clinical medicine was obtained from PubMed and the Chinese Hospital Knowledge Database from 1975 to 2015. Original articles regarding big data mining theory/technology and big data mining's application in the medical field were selected. This review characterized the basic theories and technologies of big data mining, including fuzzy theory, rough set theory, cloud theory, Dempster-Shafer theory, artificial neural networks, genetic algorithms, inductive learning theory, Bayesian networks, decision trees, pattern recognition, high-performance computing, and statistical analysis. The application of big data mining in clinical medicine was analyzed in the fields of disease risk assessment, clinical decision support, prediction of disease development, guidance on the rational use of drugs, medical management, and evidence-based medicine. Big data mining has the potential to play an important role in clinical medicine.
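
    Of the techniques this review enumerates, the decision tree is the simplest to make concrete. A minimal illustration on synthetic data (the features, labels and risk rule below are invented for demonstration; scikit-learn is assumed to be available):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(42)

        # Synthetic "clinical" records: [age, systolic_bp]; label 1 = high risk.
        # The risk rule is invented purely so the tree has something to learn.
        X = np.column_stack([rng.integers(20, 90, 200), rng.integers(90, 200, 200)])
        y = ((X[:, 0] > 60) & (X[:, 1] > 140)).astype(int)

        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=["age", "systolic_bp"]))
        print("risk for (70, 160):", tree.predict([[70, 160]])[0])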

  6. Big Data in Public Health: Terminology, Machine Learning, and Privacy.

    Science.gov (United States)

    Mooney, Stephen J; Pejaver, Vikas

    2018-04-01

    The digital world is generating data at a staggering and still increasing rate. While these "big data" have unlocked novel opportunities to understand public health, they hold still greater potential for research and practice. This review explores several key issues that have arisen around big data. First, we propose a taxonomy of sources of big data to clarify terminology and identify threads common across some subtypes of big data. Next, we consider common public health research and practice uses for big data, including surveillance, hypothesis-generating research, and causal inference, while exploring the role that machine learning may play in each use. We then consider the ethical implications of the big data revolution with particular emphasis on maintaining appropriate care for privacy in a world in which technology is rapidly changing social norms regarding the need for (and even the meaning of) privacy. Finally, we make suggestions regarding structuring teams and training to succeed in working with big data in research and practice.

  7. Big Sites, Big Questions, Big Data, Big Problems: Scales of Investigation and Changing Perceptions of Archaeological Practice in the Southeastern United States

    Directory of Open Access Journals (Sweden)

    Cameron B Wesson

    2014-08-01

    Full Text Available Since at least the 1930s, archaeological investigations in the southeastern United States have placed a priority on expansive, near-complete excavations of major sites throughout the region. Although there are considerable advantages to such large-scale excavations, projects conducted at this scale are also accompanied by a series of challenges regarding the comparability, integrity, and consistency of data recovery, analysis, and publication. We examine the history of large-scale excavations in the southeast in light of traditional views within the discipline that the region has contributed little to the ‘big questions’ of American archaeology. Recently published analyses of decades-old data derived from Southeastern sites reveal both the positive and negative aspects of field research conducted at scales much larger than normally undertaken in archaeology. Furthermore, given the present trend toward the use of big data in the social sciences, we predict an increased use of large pre-existing datasets developed during the New Deal and other earlier periods of archaeological practice throughout the region.

  8. A proposed framework of big data readiness in public sectors

    Science.gov (United States)

    Ali, Raja Haslinda Raja Mohd; Mohamad, Rosli; Sudin, Suhizaz

    2016-08-01

    Growing interest in big data is mainly linked to its great potential to unveil unforeseen patterns or profiles that support an organisation's key business decisions. Following private-sector moves to embrace big data, the government sector is now getting on the bandwagon. Big data has been considered one of the potential tools to enhance public-sector service delivery within financial resource constraints. The Malaysian government, in particular, has made big data one of its main national agenda items. Regardless of government commitment to promoting big data amongst government agencies, the degree of readiness of the agencies and their employees is crucial to successful deployment. This paper therefore proposes a conceptual framework to investigate perceived readiness for big data amongst Malaysian government agencies. Perceived readiness of 28 ministries and their respective employees will be assessed using both qualitative (interview) and quantitative (survey) approaches. The outcome of the study is expected to offer meaningful insight into the factors affecting change readiness for big data among public agencies, and into the outcomes expected from greater or lower change readiness across the public sector.

  9. Big data analytics to improve cardiovascular care: promise and challenges.

    Science.gov (United States)

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  10. Academic Training Lecture | Big Data Challenges in the Era of Data Deluge | 9 - 10 March

    CERN Multimedia

    2015-01-01

    Big Data Challenges in the Era of Data Deluge, by Ilya Volvovski (Senior Software Architect, Cleversafe, USA).   Monday, 9 March 2015 from 11:00 to 12:00 and Tuesday, 10 March 2015 from 11:00 to 12:00 at CERN (4-3-006 - TH Conference Room) Description: For better or for worse, the amount of data generated in the world grows exponentially. The year 2012 was dubbed the year of 'Big Data' and 'Data Deluge'; in 2013, the petabyte scale was referenced matter-of-factly; and exabyte size is now in the vocabulary of storage providers and large organisations. Traditional copy-based technology doesn't scale into this size territory: relational DBs give up after many billions of rows in tables; typical file systems are not designed to store trillions of objects; disks fail; networks are not always available. Yet individuals, businesses and academic institutions demand 100% availability with no data loss. Is this the final dead end? ...
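
    The claim that "traditional copy-based technology doesn't scale" comes down to storage arithmetic: full replication multiplies raw capacity, while erasure coding buys comparable fault tolerance for a fraction of the overhead. A sketch of that comparison (the (k, m) parameters are illustrative, not any vendor's actual configuration):

        def replication_overhead(copies: int) -> float:
            """Raw bytes stored per useful byte with n full copies."""
            return float(copies)

        def erasure_overhead(k: int, m: int) -> float:
            """Raw bytes per useful byte with a (k, m) code: k data + m parity
            slices; any k of the k + m slices suffice to rebuild the object."""
            return (k + m) / k

        petabytes = 100  # illustrative dataset size
        print(f"3x replication:       {petabytes * replication_overhead(3):.0f} PB raw, "
              f"tolerates 2 lost copies")
        print(f"(10, 6) erasure code: {petabytes * erasure_overhead(10, 6):.0f} PB raw, "
              f"tolerates 6 lost slices")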

  11. The role of big laboratories

    CERN Document Server

    Heuer, Rolf-Dieter

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward.

  12. The role of big laboratories

    International Nuclear Information System (INIS)

    Heuer, R-D

    2013-01-01

    This paper presents the role of big laboratories in their function as research infrastructures. Starting from the general definition and features of big laboratories, the paper goes on to present the key ingredients and issues, based on scientific excellence, for the successful realization of large-scale science projects at such facilities. The paper concludes by taking the example of scientific research in the field of particle physics and describing the structures and methods required to be implemented for the way forward. (paper)

  13. Pre-big bang bubbles from the gravitational instability of generic string vacua

    CERN Document Server

    Buonanno, A; Veneziano, Gabriele

    1999-01-01

    We formulate the basic postulate of pre-big bang cosmology as one of ``asymptotic past triviality'', by which we mean that the initial state is a generic perturbative solution of the tree-level low-energy effective action. Such a past-trivial ``string vacuum'' is made of an arbitrary ensemble of incoming gravitational and dilatonic waves, and is generically prone to gravitational instability, leading to the possible formation of many black holes hiding singular space-like hypersurfaces. Each such singular space-like hypersurface of gravitational collapse becomes, in the string-frame metric, the usual big-bang t=0 hypersurface, i.e. the place of birth of a baby Friedmann universe after a period of dilaton-driven inflation. Specializing to the spherically-symmetric case, we review and reinterpret previous work on the subject, and propose a simple, scale-invariant criterion for collapse/inflation in terms of asymptotic data at past null infinity. Those data should determine whether, when, and where collapse/inflation ...
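
    For context, the tree-level low-energy effective action referred to above is, in its simplest gravi-dilaton form (schematic; normalizations vary across the pre-big bang literature):

        S = \frac{1}{2\lambda_s^{d-1}} \int d^{d+1}x\, \sqrt{-g}\; e^{-\phi}
        \left[ R + (\nabla\phi)^2 \right]

    with g the string-frame metric, \phi the dilaton and \lambda_s the string length; the "incoming gravitational and dilatonic waves" of the abstract are generic perturbative solutions of the field equations derived from this action.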

  14. BIG´s italesættelse af BIG

    DEFF Research Database (Denmark)

    Brodersen, Anne Mygind; Sørensen, Britta Vilhelmine; Seiding, Mette

    2008-01-01

    Since Bjarke Ingels established the BIG (Bjarke Ingels Group) architectural firm in 2006, the company has succeeded in making itself heard and in attracting the attention of politicians and the media. BIG did so first and foremost by means of an overall approach to urban development that is both...... close to the political powers that be, and gain their support, but also to attract attention in the public debate. We present the issues this way: How does BIG speak out for itself? How can we explain the way the company makes itself heard, based on an analysis of the big.dk web site, the Clover Block...... by sidestepping the usual democratic process required for local plans. Politicians declared a positive interest in both the building project and a rapid decision process. However, local interest groups felt they were excluded from any influence regarding the proposal and launched a massive resistance campaign...

  15. Probing the pre-big bang universe

    International Nuclear Information System (INIS)

    Veneziano, G.

    2000-01-01

    Superstring theory suggests a new cosmology whereby a long inflationary phase preceded a non-singular big bang-like event. After discussing how pre-big bang inflation naturally arises from an almost trivial initial state of the Universe, I will describe how present or near-future experiments can provide sensitive probes of how the Universe behaved in the pre-bang era

  16. CERN: A big year for LEP

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In April, this year's data-taking period for CERN's big LEP electron-positron collider got underway, and is scheduled to continue until November. The immediate objective of the four big experiments - Aleph, Delphi, L3 and Opal - will be to increase considerably their stock of carefully recorded Z decays, currently totalling about three-quarters of a million

  17. Research on the Impact of Big Data on Logistics

    Directory of Open Access Journals (Sweden)

    Wang Yaxing

    2017-01-01

    Full Text Available In the context of big data development, logistics enterprises accumulate large amounts of data, especially in operations such as transportation, warehousing, and distribution. Based on an analysis of the characteristics of big data, this paper studies the impact of big data on logistics and its mechanism of action, and gives reasonable suggestions. By building a logistics data center with big data technology, the hidden value behind the data can be dug out, from which logistics enterprises can benefit.

  18. Concurrence of big data analytics and healthcare: A systematic review.

    Science.gov (United States)

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges to its adoption in healthcare. It also intends to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English-language literature from January 2013 to January 2018 were considered. Descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds its application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of ...

  19. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration; Klimentov, Alexei; Korchuganova, Tatiana

    2017-01-01

    BigPanDA monitoring is a web-based application which provides various views of the states of Production and Distributed Analysis (PanDA) system objects. Analyzing hundreds of millions of computation entities, such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The information provided allows users to drill down into the reason for a concrete event failure, or to observe the bigger picture of the system, such as tracking the performance of computation nucleus and satellite sites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core monitoring component, commissioned in the middle of 2014, and is now the primary source of information for ATLAS users about the state of their computations, and the source of decision-support information for shifters, operators and managers. In this work...
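
    The drill-down behavior this record describes is, at bottom, aggregation of job records at several levels of granularity. A generic illustration (the field names and records below are invented; this is not the PanDA schema or API):

        from collections import Counter
        from itertools import groupby

        # Invented job records standing in for PanDA job states (not the real schema)
        jobs = [
            {"site": "CERN", "task": "T1", "status": "finished"},
            {"site": "CERN", "task": "T1", "status": "failed"},
            {"site": "BNL",  "task": "T1", "status": "finished"},
            {"site": "BNL",  "task": "T2", "status": "running"},
        ]

        def summarize(records, key):
            """Status counts grouped by one attribute: one drill-down level."""
            records = sorted(records, key=lambda r: r[key])
            return {k: Counter(r["status"] for r in grp)
                    for k, grp in groupby(records, key=lambda r: r[key])}

        print(summarize(jobs, "site"))  # bigger picture: per-site health
        print(summarize(jobs, "task"))  # drill down: per-task progress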

  20. Big Data Analytics in Healthcare.

    Science.gov (United States)

    Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S M Reza; Navidi, Fatemeh; Beard, Daniel A; Najarian, Kayvan

    2015-01-01

    The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.